|
Very nice, thanks. I'll check it out and let you know if it works out. While the sizes of the circles change a bit, it's not too bad to just go through a few diameters.
The circles are filled, though I could run an edge detection to strip the fill if needed.
Be The Noise
|
|
|
|
|
The Wikipedia article makes it look harder than it is. Erosion (binary) can be easily implemented as only shifts and ANDs.
To recognize a circle:
1. Take an arc that's half the circle's circumference, and divide it into N segments. Each segment is a short vector.
2. For each vector, shift the image by that vector and AND it with the original image.
3. When you're done, pixels will remain only at the regions that were at the center of (at least) a circle of the original size.
4. Starting at the higher diameters will enable you to remove them first, so you can recognize the smaller diameters later.
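The steps above can be sketched with NumPy, treating the binary image as a boolean array. This is my own illustrative helper, not the poster's code: for simplicity it samples the full circle rather than a half arc, and it uses wrap-around shifts (`np.roll`) where real code would pad with zeros at the borders.

```python
import numpy as np

def erode_by_circle(image, radius, n_segments=16):
    """Erode a binary image by a circle outline using only shifts and ANDs.

    Pixels that survive lie at the centre of (at least) a full circle
    of the given radius in the original image.
    """
    result = image.copy()
    # Sample points around the circle; shift the original image so each
    # sample point lands on the centre, and AND the shifts together.
    for k in range(n_segments):
        angle = 2 * np.pi * k / n_segments
        dy = int(round(radius * np.sin(angle)))
        dx = int(round(radius * np.cos(angle)))
        shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        result &= shifted
    return result
```

As in step 4, running this with the largest expected radius first, removing the hits, and repeating with smaller radii separates the sizes.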
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
|
|
|
|
haha, you must've been reading my mind
This makes it much easier to implement. Thanks!
Be The Noise
|
|
|
|
|
Looking at this again, I realized Step 2 could be misinterpreted:
"2. For each vector, shift the image by that vector and AND it with the original image."
By "original image", I mean the image before the shift.
So,
foreach (vector in Vectors)
{
    previousImage = image.clone(); // take a copy, not a reference to the same object
    image.shiftBy(vector);
    image.andWith(previousImage);
}
And all remaining pixels in 'image' are contained within (at least) a circle of the given radius.
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
|
|
|
|
Hello,
When it comes to image processing tasks, I would say it is much easier to discuss when a few sample pictures are available (if there are no confidentiality restrictions, of course). Talking about circles ... in some cases you can simplify things a lot by finding stand-alone blobs/objects in a picture and then doing further shape analysis on those ...
|
|
|
|
|
Hi Andrew,
There is no confidentiality, and I have many samples of the images, but it would probably be easier to get some samples yourself. I'm working on a mobile app to identify traffic lights and tell me what color they are as I drive. I've found a lot of research on the topic, but most of the research methods use extra computers in the trunk of the car, so it doesn't work too well on a consumer smart phone.
I've actually been using some of the algorithms in the Aforge library to identify the circles (great work by the way). Reducing the camera resolution before processing, and some blurring, have helped a lot. I also use some color filtering to make sure I'm only looking for the colored lights within a certain threshold (Red, Amber, Green). I've also been toying with the accelerometers to do some coarse localization so I don't have to scan the entire image. All together, I'm getting some decent results, but I still need to put in a lot more time on the project. This is just something I'm doing for fun, not anything work related.
Right now I'm really dealing with false positives due to street lamps and other cars' brake lights, which is another reason I've been trying to localize the scanning. I'm also working through some instances where the traffic light is backlit by a street lamp at night, or the sun during the day, which makes it very hard to spot; but I'm thinking some white balance can help with that.
Thanks for chiming in! If you have any ideas that you think may help with this, please feel free to pass it along!
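For the colour filtering part, a rough per-pixel hue-threshold sketch looks like this. The function name and the threshold values are my own illustrative guesses, not the app's actual filter; real thresholds would need tuning against sample frames.

```python
import colorsys

def classify_light(r, g, b):
    """Roughly classify an RGB pixel as a red/amber/green lamp by hue.

    Returns None for pixels that are too dark or too unsaturated to be
    a lit lamp.  Thresholds are illustrative only.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.5 or v < 0.5:
        return None  # not saturated/bright enough
    deg = h * 360
    if deg < 20 or deg > 340:
        return "red"
    if 20 <= deg <= 60:
        return "amber"
    if 90 <= deg <= 180:
        return "green"
    return None
```

Working in HSV rather than RGB makes the thresholds far less sensitive to brightness changes, which should help with the backlighting problem too.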
Be The Noise
|
|
|
|
|
You know that the green in traffic lights actually has a lot of blue in it as well. For the colour-blind.
|
|
|
|
|
Does anybody know an algorithm to recognize trends in 2-D line charts? Something that, for example, in this chart returns an array with coordinate pairs A/B and B/C?
|
|
|
|
|
If you know what form of equation the data should follow, least squares (Google has some good references) will fit an equation to a set of data. It can need some matrix juggling, but that's what computers are for...
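As a minimal illustration with NumPy (made-up sample data), fitting a straight line by least squares is a one-liner:

```python
import numpy as np

# Noisy samples of roughly y = 2x + 1 (illustrative data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# polyfit solves the least-squares normal equations internally,
# so no explicit matrix juggling is needed.
slope, intercept = np.polyfit(x, y, 1)
```

Higher-degree polynomials just change the last argument; for arbitrary model forms, `np.linalg.lstsq` on a design matrix does the same job.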
|
|
|
|
|
I'm going to guess this is to do with financial markets. I've been there myself, and I'll warn you, those trends are not nearly so real as the eye makes them look!
The simplest approach is to smooth out the 'noise' (all the little bumps between B and C) by applying a moving average, gaussian smooth or similar to the data, and then look for peaks and troughs in the smoothed signal. Alternatively you can differentiate the smoothed version which will give you a trend measurement and then look for where that is positive or negative (essentially the same thing from a different angle). But that means you are applying a preconception as to what is 'noise' and what is 'real data' which obviously affects the answer you get.
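A minimal sketch of that smooth-then-find-peaks idea, with a moving average and a naive local-maximum test (the window size is an illustrative choice, and as noted above it encodes your preconception of what counts as noise):

```python
import numpy as np

def smooth_and_find_peaks(prices, window=5):
    """Moving-average smooth, then return indices of local maxima."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(prices, kernel, mode='same')
    # A peak is a point higher than both neighbours in the smoothed series.
    peaks = [i for i in range(1, len(smoothed) - 1)
             if smoothed[i - 1] < smoothed[i] > smoothed[i + 1]]
    return smoothed, peaks
```

Troughs are found the same way with the comparisons flipped; differentiating `smoothed` (e.g. `np.diff`) and looking for sign changes is the equivalent trend-measurement view.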
|
|
|
|
|
BobJanova wrote: I'm going to guess this is to do with financial markets.
you guessed right
BobJanova wrote: I've been there myself, and I'll warn you, those trends are not nearly so real as the eye makes them look!
Can you tell me more about it? Why do you think that? Do you know any good sources on that topic?
BobJanova wrote: The simplest approach is to smooth out the 'noise' (all the little bumps between B and C) by applying a moving average, gaussian smooth or similar to the data, and then look for peaks and troughs in the smoothed signal. Alternatively you can differentiate the smoothed version which will give you a trend measurement and then look for where that is positive or negative (essentially the same thing from a different angle). But that means you are applying a preconception as to what is 'noise' and what is 'real data' which obviously affects the answer you get.
I'll think about that. What would a more difficult approach be?
Thanks for the answer, really helpful!
|
|
|
|
|
It might be worth pointing out that nobody in the >200-year history of markets has ever been able to perform technical/trend analysis and reliably beat the market.
There is a strong argument for why this is the case, which you should understand in detail first.
http://en.wikipedia.org/wiki/Efficient-market_hypothesis[^]
|
|
|
|
|
Can you tell me more about it? Why do you think that?
Experience. I (and some partners) started down this road for a while, and we found it extremely difficult to produce an objective measurement that matched what the eye can see. Trends in markets appear to be fractal and (obviously, otherwise there'd be easy free money) non-deterministic. Furthermore, as the other post explains, there are good reasons to believe that markets are self-correcting so that any simple* analysis is by definition useless for predictive purposes.
A more complex approach is to look at features of secondary indicators, for example volatility or trade volume, in addition to the price. Some economists will tell you that while price is not predictable, volatility sometimes is – though whether that helps you predict price (essentially what you're trying to do in order to make money) is less clear!
*: The feedback mechanism that causes markets to be 'efficient' is that, if a clear inefficiency is seen, traders will take advantage of it until it is no longer profitable. Thus a simple analysis in this context is one that a sufficiently large proportion of market participants have access to. Realistically, if an investment bank knows how to do what you are trying, it won't make money.
|
|
|
|
|
I'm not trying to predict future prices based only on past chart patterns (I never thought technical analysis would work). What I'm trying to do is find the causes of those patterns. If a message is released exactly at the turning point of a chart, and similar chart/message patterns often occur (I know, this is the second big difficulty: how do you define text similarity?), I'd guess the probability of a similar message causing a similar chart pattern is high.
|
|
|
|
|
Newbie18 wrote: If a message is released exactly at the turning point of a chart and if similar chart/message patterns (I know, this is the second big difficulty: how do you define text similarity?) often occur, I'd guess the probability of a similar message causing a similar chart pattern is high.
Computing text-similarity is easy. Teaching the computer to do what you want is impossible.
Aight, say you focus on determining the "cause" of a certain change in the graph; let's say there's a turning-point that can be easily identified. You'd then have to search all the news and find a message that actually relates to the change (most news will not, obviously).
Say you found all the messages mentioning Microsoft in the hour before the big change in the stock price; how would you have the program differentiate between a rant about Windows and a statement from its CEO? Given the way financials talk, how do you think your app would interpret a sentence like
"We have managed to stop the decline in growth."
You and I would have trouble interpreting that line; a bot would have even more trouble. And we haven't even looked at the fact that some news takes more than an hour to reach investors, or that it may take longer to realize what's going on.
To make things worse, most of those turning-points will not be attributable to a single headline.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: Computing text-similarity is easy.
Now I'm curious. How would you do that?
The program doesn't have to interpret any information in the text; that's of course too difficult. If it can find a "class" of messages that are similar to each other and were all released several hours before a rise in the price of the company mentioned in them (it doesn't have to be a rise over a few hours; it could also be over a week or even a month), the relation should be obvious.
Eddy Vluggen wrote: To make things worse, most of those turning-points will not be attributable to a single headline.
If enough messages are processed, maybe I can find some that have enough relevance on their own.
I know it's not likely that my ideas are realizable, and if they are, most probably some bank or hedge fund has already done it and I can't earn anything at all with my application. But I just have to try.
|
|
|
|
|
Newbie18 wrote: Now I'm curious. How would you do that?
There are multiple algorithms that can be used to determine whether two words 'sound alike' (like Metaphone and Soundex) or what their "edit distance" is (Levenshtein[^]) - you could even download and abuse Wiktionary to find synonyms and validate against their Soundex codes.
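For illustration, Levenshtein edit distance fits in a few lines (a standard two-row dynamic-programming version, my own sketch):

```python
def levenshtein(a, b):
    """Edit distance: the minimum number of insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]
```

A small distance relative to the string lengths is one crude measure of "similarity"; combining it with phonetic codes like Soundex catches misspellings that sound alike.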
Newbie18 wrote: If it can find a "class" of messages that are similar to each other
Without interpreting the messages, it's nigh impossible to determine whether you're dealing with "good" or "bad" news. Since you can't make even that simple classification, I'd have my doubts about the validity of more complex classes.
Newbie18 wrote: I know it's not likely that my ideas are realizable, and if they are, most probably some bank or hedge fund has already done it
Not in this way.
They're most likely scanning for keywords and having a human validate the most promising headlines. You could perhaps "train" an artificial neural network to recognize "positive" news, but again you'd hit new trouble - your bot might start to buy/sell based on false rumours.
Say you do find a correlation between trades (somewhat simpler than a correlation between price and news); say you notice a move in the price of copper before the price of silver moves - that might imply a connection, but it might also be a coincidence. In practice, silver often follows, but on some days it simply moves in a contrary direction due to "other factors" (like the discovery of a huge deposit of silver).
If trading were that easy, we would have replaced Wall Street completely with computers and gotten rid of the human influence a long time ago.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
|
Edit: the location of this database of full moons, "Full Moon Dates Between (1900 - 2100)"[^], has eliminated my need to find an algorithm. But I'll leave this up, in case the link to the database is useful to someone else.
No, I am not "into" astrology, but a friend asked me if I could write a calculator that given year, month, day, would indicate the number of full moons between that date, and the current date.
So, this is a question more of "curiosity," rather than one aimed at any practical result.
Here's a text-file dataset (times shown: GMT +1) of full moons from 1943-1953.[^]. Also see:[^].
What interests me is whether, given the variance between the solar year (in the "western" calendar, with "leap years," etc.) and lunar cycles, one can algorithmically compute the number of years between a full moon falling on a certain day of the week in a specific week and month, and the next full moon falling on the same day of the week, week, and month in the future.
Other potential complexities, of lunar cycle duration, and systems of lunation numbering, are well-described here:[^], and here:[^].
Now, if one had an algorithm that would compute which years have 13, rather than 12, full moons, based on a starting year, month, and day, I suppose that would make it easier.
Appreciate any thoughts, thanks, Bill
The glyphs you are reading now: are place-holders signifying the total absence of a signature.
modified 1-Jul-12 10:31am.
|
|
|
|
|
I have some code that calculates the date of Easter for any year, based on the first full moon after March 21st. I do not fully understand it but I know it works; I guess it could be adapted to what you are looking for.
|
|
|
|
|
Richard MacCutchan wrote: date of Easter for any year, based on the first full moon after March 21st.
Note that the date of Easter (western, not Greek Orthodox or Russian Orthodox) relies on a specific definition of the full moon - it is 'as seen from Rome' - and Easter is the first Sunday after that full moon, which makes an unsubtle variation in the actual date (up to +/- 7 days) as seen from the observer's position, because it may become Sunday earlier or later locally than it does in Rome. Gauss's algorithm does not work 100% of the time (IIRC there was at least one date in the 1800s that his algorithm got wrong).
This is the formula that I have been using for the last 35 years (changed language several times but same method):
Date.prototype.Easter =
    function (optYear)
    {
        var year = optYear
                   ? ( optYear.constructor == Date
                       ? optYear.getFullYear()
                       : ( optYear < 1900 ? optYear + 1900 : optYear ) )
                   : this.getFullYear();
        var a = year % 19;
        var b = Math.floor(year / 100);
        var c = year % 100;
        var d = Math.floor(b / 4);
        var h = (19 * a + b - d - Math.floor((8 * b + 13) / 25) + 15) % 30;
        var mu = Math.floor((a + 11 * h) / 319) - h;
        var lambda = (2 * (b - d * 4) + Math.floor(c / 4) * 6 - c + mu + 32) % 7 - mu;
        var month = Math.floor((lambda + 90) / 25);
        return new Date(year, month - 1, (lambda + month + 19) % 32);
    };
The variable names are based on the names in the original article cited in the comments but I have optimised the calculations (used to be 10 steps, now only 8 steps).
|
|
|
|
|
I did not bother to check every year in the last 2000+; it has worked for the five years either side of when I started using it.
One of these days I'm going to think of a really clever signature.
|
|
|
|
|
For this sort of question, and many more such, my reference is Dershowitz & Reingold's Calendrical Calculations. I have a first-edition dead-tree copy; I think it's now up to the 3rd edition. All the code from the book (Lisp!) is available for free download. I think there's also Java available.
Google knows more than me... (but do either of us understand?)
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
A lunar (synodic) month is about 29 days, 12 hours, 44 minutes; so I'd divide the timespan by that, yielding a pretty good estimate. It might be off by one if either date falls just before/after a full moon.
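As a sketch (the constant and function name are my own):

```python
from datetime import datetime

SYNODIC_MONTH_DAYS = 29 + 12 / 24 + 44 / (24 * 60)  # 29 d 12 h 44 m

def estimate_full_moons(start, end):
    """Rough count of full moons between two dates: elapsed days divided
    by the synodic month.  Can be off by one if either date falls just
    before/after a full moon."""
    days = (end - start).total_seconds() / 86400
    return int(days / SYNODIC_MONTH_DAYS)
```

Anchoring the calculation to one known full-moon date (e.g. from the linked database) and counting whole cycles from there removes most of the off-by-one risk.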
|
|
|
|
|
I am trying to develop a dynamic pathing algorithm for a game (all text) and running into an issue finding the shortest path when all the edges (connectors) have the same cost or weight.
The algorithm below will capture all the rooms from start to finish, but the issue is sorting it out to find the shortest distance to the finish; perhaps a new algorithm is needed? Thanks in advance for any assistance.
public void findRoute(ROOM_INFO startRoom, ROOM_INFO destinationRoom)
{
    Dictionary<ROOM_INFO, bool> visitedStartRooms = new Dictionary<ROOM_INFO, bool>();
    Dictionary<ROOM_INFO, bool> visitedStopRooms = new Dictionary<ROOM_INFO, bool>();
    List<string> directions = new List<string>();

    startQueue.Enqueue(startRoom);
    destinationQueue.Enqueue(destinationRoom);
    visitedStartRooms.Add(startRoom, true);
    visitedStopRooms.Add(destinationRoom, true);

    string direction = "";
    bool foundRoom = false;

    while (startQueue.Count != 0 || destinationQueue.Count != 0)
    {
        ROOM_INFO currentStartRoom = startQueue.Dequeue();
        ROOM_INFO currentDestinationRoom = destinationQueue.Dequeue();
        ROOM_INFO startNextRoom = new ROOM_INFO();
        ROOM_INFO stopNextRoom = new ROOM_INFO();

        if (currentStartRoom.Equals(destinationRoom))
        {
            foundRoom = true;
            break;
        }
        else
        {
            foreach (string exit in currentDestinationRoom.exitData)
            {
                stopNextRoom = extractMapRoom(exit);
                if (stopNextRoom.Equals(startRoom))
                {
                    visitedStopRooms.Add(stopNextRoom, true);
                    foundRoom = true;
                    break;
                }
                if (stopNextRoom.mapNumber != 0 && stopNextRoom.roomNumber != 0)
                {
                    if (!visitedStopRooms.ContainsKey(stopNextRoom))
                    {
                        if (visitedStartRooms.ContainsKey(stopNextRoom))
                        {
                            foundRoom = true;
                        }
                        else
                        {
                            destinationQueue.Enqueue(stopNextRoom);
                            visitedStopRooms.Add(stopNextRoom, true);
                        }
                    }
                }
            }
            if (foundRoom)
            {
                break;
            }
        }

        foreach (string exit in currentStartRoom.exitData)
        {
            startNextRoom = extractMapRoom(exit);
            if (startNextRoom.Equals(destinationRoom))
            {
                visitedStartRooms.Add(startNextRoom, true);
                foundRoom = true;
                break;
            }
            if (startNextRoom.mapNumber != 0 && startNextRoom.roomNumber != 0)
            {
                if (!visitedStartRooms.ContainsKey(startNextRoom))
                {
                    if (visitedStopRooms.ContainsKey(startNextRoom))
                    {
                        foundRoom = true;
                        break;
                    }
                    else
                    {
                        startQueue.Enqueue(startNextRoom);
                        visitedStartRooms.Add(startNextRoom, true);
                    }
                }
            }
        }
        if (foundRoom)
        {
            break;
        }
    }
}
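For reference, with uniform edge costs a plain breadth-first search that records each node's predecessor already yields a shortest path, and the predecessor map replaces the "sorting it out" step. A generic sketch (hypothetical adjacency-dict representation, not the ROOM_INFO types above):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """BFS over an adjacency dict {node: [neighbours]}.  With equal edge
    weights, the first time the goal is dequeued it was reached via a
    shortest path; 'parent' records how we got to each node."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbour in graph.get(node, []):
            if neighbour not in parent:  # 'parent' doubles as 'visited'
                parent[neighbour] = node
                queue.append(neighbour)
    return None  # goal unreachable
```

The bidirectional search above can keep its two queues; it just needs a parent map on each side, with the two half-paths joined at the meeting room.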
|
|
|
|
|