I wrote a graphics library for IoT, htcw_gfx,
and I'm thinking of writing some sort of WYSIWYG UI widget editor: you visually craft your user interface and it generates the code for you, much like WinForms in .NET.
Here's the deal though: this is IoT, and I am loath to whip the heap, because we're sometimes dealing with only tens of KB of RAM, so fragmentation management is a challenge.
I have two approaches for doing this.
Approach A: This would be much like .NET WinForms in that there is an object model, controls derive from "control", and the generated code simply instantiates those objects.
Approach B: This would generate code to do direct draws of things like buttons, and would not abstract them into an object model.
I'm inclined toward approach (B) due to the memory issues mentioned above, and it would generally perform better too. On the other hand, the generation itself is more complex, and the generated code takes a lot more cognitive load to navigate and more effort to maintain. Still, despite those drawbacks, it may be the only realistic way to run the UI on platforms with very little SRAM.
Approach (A) would generate much more maintainable code, and the generation process is easier, but it requires a ton of upfront effort to produce the object model, and it has its own maintenance problem: every time I add a new widget to the generator, I have to maintain code in three places instead of two — the designer, the generator code, AND the object model. It would also whip the heap hard enough that it wouldn't be realistic to run on certain platforms.
Anyone have any opinions or suggestions in terms of what I should do?
What I have tried:
Not applicable here, since this isn't a how-to question.