|
Hi,
A few more questions:
1.) Does your algorithm support polygons with holes[^]?
2.) Did you invent something new or implement an existing algorithm?
3.) Where can I find your work? I'd like to look at it.
Best Wishes,
-David Delaune
|
|
|
|
|
1. Heck no!*
2. I haven't been able to implement the algorithm. It's described somewhere on tutorialspoint, but with no implementation given.
3. I'll publish it here when I solve it.
Sorry if my OP implied I solved it. I did not mean to. If I had solved it I'd have found something else to get stuck on by now.
* Adding to that: there's no way to even describe such a polygon with my API, though you should be able to approximate any polygon with holes using one without holes that loops back on itself - as long as it's filled, you won't be able to tell the difference.
Real programmers use butterflies
|
|
|
|
|
I just figured out a much easier way in theory to do it. Just scale the thing smaller and smaller by 1 pixel in any direction and keep redrawing it until you get down to a single point.
Unfortunately, in practice this won't work - because of the way the Bresenham line-drawing algorithm works, it will leave little holes in the result. *sadface*
Real programmers use butterflies
|
|
|
|
|
Well,
That doesn't really make sense to me; I don't understand the technical issues you are encountering. Drawing, filling, and scaling 2D polygons is exceedingly simple.
Based on what you have said I see the following requirements:
1.) You are only supporting simple polygons[^].
2.) You need to write to the video device in a scanline pattern[^].
3.) You cannot read pixels.
4.) You have not stated why you can't draw in a frame buffer. But I will assume that you either can't or are unwilling.
So I don't think that you can use the Nonzero-rule[^].
But you could use the even–odd rule[^].
Some other things for you to read for scaling your polygons:
Dot product[^]
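If it helps, the even-odd test itself is only a few lines. Here's a minimal sketch - the function name and the flat x,y vertex array are my own invention for illustration, not any particular library's API:

```cpp
#include <cstddef>

// Even-odd (ray casting) point-in-polygon test - a minimal sketch.
// pts is a flat array of x,y pairs; n is the number of vertices.
// Cast a horizontal ray from (px,py) toward +infinity and count edge
// crossings: an odd count means the point is inside.
bool even_odd_point_in_poly(const float* pts, std::size_t n,
                            float px, float py) {
    bool inside = false;
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        float xi = pts[2 * i], yi = pts[2 * i + 1];
        float xj = pts[2 * j], yj = pts[2 * j + 1];
        // Does edge (j,i) straddle the horizontal line at py, and is
        // the crossing to the right of px?
        if (((yi > py) != (yj > py)) &&
            (px < (xj - xi) * (py - yi) / (yj - yi) + xi)) {
            inside = !inside; // each crossing toggles inside/outside
        }
    }
    return inside;
}
```

The straddle check also guarantees the divisor is never zero, so horizontal edges are skipped safely.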
Best Wishes,
-David Delaune
|
|
|
|
|
You know I've even read about that technique years ago, and for some reason I completely overlooked it - it went down the memory hole.
I'm not entirely sure even-odd will work until I try it, but I'll certainly try it.
Thanks in advance, even if it leaves me feeling like a bit of an idiot. I'll take it if it means a solution.
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: it leaves me feeling like a bit of an idiot
You are obviously not an idiot, but rather experimenting with geometry, and you probably never looked here before. Everyone is clueless when poking around in an unfamiliar topic. Everybody stands on the shoulders of giants, and very rarely do you find a human who generates something new. That's why I wanted to see what you created - I wanted to see if it was something new.
Anyway, I forgot to mention why you probably can't use the nonzero rule. It works by summing the angles, which means it would need a minimum of 3 rows. The even-odd rule lets you cast a single photon (ray of light) across one scanline and simply check for even/odd values.
I would recommend testing this stuff on your desktop and get out of that restrained environment during the test development.
Good luck,
-David Delaune
|
|
|
|
|
Randor wrote: I would recommend testing this stuff on your desktop and get out of that restrained environment during the test development.
That's actually part of why I wrote GFX to be runnable anywhere. I didn't want to develop it on a constrained environment. But it has to run on them.
My initial GFX library dumped its output solely using printf() and drawing ascii art after converting bitmaps to 16 color grayscale.
I used that to test my line drawing algorithms and such.
But now that all of that is done and fairly predictable, I find myself going back to the PC to code this thing less and less. Either I'm dealing with things like asynchronous draws, which have no support on the PC, or most of what I'm doing works within the first try or two after it compiles, because I've built up the foundation enough by now that the coding is fairly high level.
Real programmers use butterflies
|
|
|
|
|
Okay, so I implemented it, and what jumps out at me (and I feel like I'm missing something here) is that using even-odd requires a nasty brute force when trying to fill.
Here's rough C++ pseudocode I just typed to illustrate:
for(int y = 0; y < height; ++y) {
    for(int x = 0; x < width; ++x) {
        if(even_odd_is_point_in_poly(x, y, path, path_size)) {
            draw_pixel(x, y, color);
        }
    }
}
Does that look right to you? It seems heavy-handed to me. The wiki entry only shows how to determine whether a point is in the polygon, though, not how to quickly determine the extents by, say, scanline. I feel like there has to be a faster way.
I just figured it out I think.
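For what it's worth, here's a rough sketch of the scanline variant I'm converging on: intersect each scanline with the polygon's edges once, sort the crossings, and fill between alternating pairs. I'm using std::vector and a callback here purely for illustration - on-device it'd be a fixed buffer and whatever run-fill primitive the library exposes, and I'm ignoring exact fill-rule rounding:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Scanline even-odd fill - a sketch, not a real API. pts is a flat
// x,y vertex array, n the vertex count. For each scanline, gather the
// x positions where edges cross the row, sort them, and fill between
// alternating pairs (even-odd). draw_hline stands in for a run-fill
// primitive; user is passed through to it untouched.
void scanline_fill(const float* pts, std::size_t n, int min_y, int max_y,
                   void (*draw_hline)(int x0, int x1, int y, void* user),
                   void* user) {
    std::vector<float> xs; // edge crossings for the current scanline
    for (int y = min_y; y <= max_y; ++y) {
        xs.clear();
        float fy = y + 0.5f; // sample at the pixel center
        for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
            float yi = pts[2 * i + 1], yj = pts[2 * j + 1];
            if ((yi > fy) != (yj > fy)) { // edge straddles this row
                float xi = pts[2 * i], xj = pts[2 * j];
                xs.push_back(xi + (xj - xi) * (fy - yi) / (yj - yi));
            }
        }
        std::sort(xs.begin(), xs.end());
        // the interior lies between successive pairs of crossings
        for (std::size_t k = 0; k + 1 < xs.size(); k += 2) {
            draw_hline((int)xs[k], (int)xs[k + 1], y, user);
        }
    }
}
```

This does one pass over the edges per scanline instead of one point-in-polygon test per pixel, and it naturally produces the horizontal runs I'm already set up to draw.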
Real programmers use butterflies
modified 31-May-21 1:51am.
|
|
|
|
|
Hmmmm,
Of course there are faster ways. I would recommend getting your polygon algorithms working first and then moving toward optimization. Obviously, writing single pixels one at a time will perform poorly. Writing sizeof(int) bytes at a time would be faster, and SIMD instructions faster still. The same goes for whatever array you are using to store your polygon points.
A 'sparse' std::bitset would be the lowest memory usage but slow as molasses.
A plain std::bitset would be low memory and a little bit faster.
A huge array of quadword zeros and ones read with SIMD would be fast and fat.
Pick your poison.
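To illustrate the low-memory end of that spectrum, a coverage row stored as one bit per pixel might look like this - WIDTH and the function name are made up for illustration:

```cpp
#include <bitset>

// Sketch: one scanline's coverage stored as one bit per pixel instead
// of one int per pixel. WIDTH is an invented screen width; a real
// target would size this to the display.
constexpr int WIDTH = 320;

// Mark the horizontal run [x0, x1] as covered in a one-row mask.
void mark_run(std::bitset<WIDTH>& row, int x0, int x1) {
    for (int x = x0; x <= x1 && x < WIDTH; ++x)
        row.set(x);
}
```

Reading that back out a machine word at a time is where the speed/size tradeoff shows up.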
|
|
|
|
|
I can't use the STL because it's pretty much non-existent on the Arduino for anything non-trivial. Part of the reason is that I support 8-bit processors and all the constraints that usually come with them. The STL doesn't play well with 8kB of RAM or 256kB of NVS program space. It's not that it won't work; it's just not a great fit.
Because of that I've had to hand roll things like std::is_same<>
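For reference, the hand-rolled version is only a couple of lines - this is the general shape of it, not necessarily my exact code:

```cpp
// A hand-rolled stand-in for std::is_same - the sort of thing you end
// up writing when <type_traits> isn't usable on the target.
// Primary template: any two types are assumed different.
template<typename T, typename U>
struct is_same { static const bool value = false; };

// Partial specialization: matches only when both parameters are the
// same type.
template<typename T>
struct is_same<T, T> { static const bool value = true; };
```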
I can't do anything that specifically targets SIMD because, although some of the processors I target do support those instructions, there is no unified way to use them other than cajoling the C++ compiler into generating the right machine code. Frankly, I don't even know what SIMD looks like on, say, a 32-bit Tensilica chip, but I know it supports it in some form. Same with ARM Cortex CPUs.
I'm currently using run lengths so that I draw a horizontal scanline at a time. That cuts down device traffic (often SPI bus traffic), since I can almost always fill a rectangle with a color in fewer instructions than writing each individual pixel - horizontal and vertical lines are technically filled rectangles. =)
Other than that, it's still pixel by pixel. What really gets me is having to examine each point in the draw destination.
I've limited the search by computing a bounding rectangle for the polygon, but all that does is sort the points, so it won't deal with "inside out" polygons. I don't rightly care, because that's almost never what you want anyway, and if you did, you could just fill the screen before drawing it or something. I could add support for it fairly easily, but it seems a waste of time.
I'm not worried about scanning the path segments in terms of time or space, as I expect paths to be very small in practice - fewer than 30 or so points. You can do more, of course, but that's on you, because I make you pass in a buffer to use anyway.
What I'm concerned about is the brute-force check of every pixel in the draw destination to see whether it falls within the polygon. That seems... inelegant, to say the least.
I got it working less than 10 minutes after you pointed me to it. =)
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: I got it working less than 10 minutes after you pointed me to it. =)
Congratulations. Now you see why I said the geometry was exceedingly simple.
Geometry has become my new hobby over the last few years. I want you to know something personal: I've been a member here on CodeProject for over 18 years, and I never make wild physics/math claims (except one, a few months ago). Over a year ago I predicted that the cores of gas giants are diffuse and contain multiple close-packed spheres[^]. I posted a brief mention of it over on Ycombinator[^].
Last month they found that the core of Saturn does indeed span 60% of its diameter[^]. I am just playing around with n-spheres in 14- to 248-dimensional geometries. That news gives me some confidence that at least some of what I am modeling might be correct.
I wish that I could get more people interested in geometry, I am seeing some interesting things.
Best Wishes,
-David Delaune
|
|
|
|
|
Randor wrote: I wish that I could get more people interested in geometry, I am seeing some interesting things
When I went over the high wall back in early 2017 I saw some things - the kinds of things you only see if you're crazy, because apparently I am.
Well, the most profound thing I ever saw - in my life - heck, if I live 6 lifetimes I will never see anything so beautiful - is the organic yet fractalish nature of reality itself, in motion.
It was infinite - folding back in on itself impossibly - the entire thing like a giant clockwork rose blooming, but exceptionally more beautiful.
So yeah, I can appreciate some geometry.
Real programmers use butterflies
|
|
|
|
|
Well,
Anyway, now I am looking forward to your next Lounge post explaining how you were mistaken and that your most challenging algorithm was actually easy as pi.
Best Wishes,
-David Delaune
|
|
|
|
|
I'll edit my original, crediting you with my epiphany. Thank you again.
Real programmers use butterflies
|
|
|
|
|
I was slow to reply because, I guess, "algorithm" implies a fairly contained piece of code. So I'd say it was an event dispatcher for telecom state machines.
It wasn't so much the algorithm, but the design around it. When you add lots of supplementary services to a basic call, building One Big State Machine creates a Big Ball of Mud. To keep the state machines separate, they run in an event-routing framework that allows state transitions to be announced, overridden, and/or supplemented. Chain of Responsibility plays a role in instantiating the state machines.
The algorithm for this was implemented in the state machine base class. I've thought about writing an article about it, but I doubt it would have much value because I haven't heard of another domain that requires this kind of solution.
|
|
|
|
|
I could see it in a message passer system like that used in microkernel operating systems.
Real programmers use butterflies
|
|
|
|
|
Although today it would be quite trivial to do, I reckon that a graphical game of Reversi on a RadioShack TRS-80 with only 4K of RAM was my most challenging ever. The computer was pretty much unbeatable on the 'hard' setting.
So old that I did my first coding in octal via switches on a DEC PDP 8
|
|
|
|
|
+1 for bringing a game into this instead of code.
Real programmers use butterflies
|
|
|
|
|
Ah, goes back a bit. In 1980 I had to use an HP-41CV to invert matrices so I could determine the 3D coordinates of four-sided plane shapes, all linked to one another (it was a building roof shaped like a tent).
I still remember the buzz from cracking it.
|
|
|
|
|
Not exactly an algorithm, but certainly the most challenging thing I had to do was read data from a LIDAR at almost 100MB/s (that's megabytes per second) while doing 3D object detection on a third-generation embedded Core i5 (I can't remember if it was a 13W or 17W CPU) with only 1GB of RAM, without dropping any packets/frames/information.
The LIDAR required a dedicated gigabit Ethernet connection to the motherboard. Even a switch in the connection meant packets were dropped. And that CPU struggled to keep up with the data rate, let alone do 3D object detection.
I'm so glad that implementing path finding and object collision on top of that was not my job.
Best regards
|
|
|
|
|
Can't narrow it down to one. But when I encounter them, they have the following two characteristics:
1) I can't remember writing it (but there's unfortunately evidence that I did).
2) It can't be discerned how it works, or how it ever worked.
|
|
|
|
|
There's no *laughing so I don't cry* emoji for this relatable content so I improvised as best I could.
Real programmers use butterflies
|
|
|
|
|
Deciphering the HL7 (Healthcare) documentation.
|
|
|
|
|
Finding a way to compress 1-inch letters and symbols to fit on a small 2-inch-tall screen, with only 16K of flash memory to work with.
|
|
|
|
|
We are seeing it a lot in QA at the moment:
"I've written this in C++, but I remembered I need it in C and I'm running out of time - convert it for me?"
"I wrote this in Python, but I need it in C++ and I don't know Python - convert it for me?"
Normally with more spelling mistakes and much worse grammar.
What planet do you have to live on to run that through your head and think, "Yeah, everyone'll believe that"?
The assumption seems to be that anyone who answers questions must be dumber than they are - because nobody with a room-temperature IQ or higher would fall for it ...
The fun bit is that it probably doesn't do exactly what their homework wants anyway, and there is zero chance they will understand it enough to fix that, even if they do test it beyond getting a clean compile.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|