

Zipping wouldn't work due to the very nature of fractals: they're all about chaos and unpredictability, whereas compression algorithms are all about detecting patterns and predictability of data sequences. The two don't mesh. Ever.
P.S.: for the same reason, restricting yourself to lower precision won't work either. In fact, with a precision of only 53 bits (the length of the mantissa in a double, counting the implicit leading bit), doing more than about 500 iterations will yield no meaningful data. It may still look nice when visualized (which is usually the point of fractal programs), but you are effectively looking at rounding errors. I would go so far as to say you need an arbitrary-precision math library for more than 100 iterations.





I'm a bit in a hurry right now, but I'd like to point out that for 10000 iterations you need arbitrary-precision math! The rounding errors from each iteration grow exponentially, and so does the number of bits you need to store each intermediate value with sufficient precision.
Unless, of course, you're fine with effectively looking at rounding errors after a few hundred iterations.
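A quick way to see this empirically is to run the same orbit in double precision and in high-precision decimal arithmetic and watch the gap grow. This is only a sketch: c = -1.9 is my own choice (a chaotic point on the real axis), and the stdlib `decimal` module with 60 digits stands in for a proper arbitrary-precision library:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60       # 60-digit reference orbit ("ground truth" here)

c_f = -1.9                   # chaotic point on the real axis (my choice)
c_d = Decimal(c_f)           # exact decimal expansion of the same double

x_f, x_d = 0.0, Decimal(0)
errs = []
for _ in range(40):
    x_f = x_f * x_f + c_f    # double-precision iteration
    x_d = x_d * x_d + c_d    # high-precision reference
    errs.append(abs(float(x_d) - x_f))

print(errs[0], errs[-1])     # starts at 0, then grows by orders of magnitude
```

The first step is exact in both representations, so the error starts at zero; after a few dozen iterations the amplification along the chaotic orbit has blown the per-step rounding error up far beyond machine epsilon.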





Stefan_Lang wrote: for 10000 iterations you need arbitrary precision math
Good point, well made. << Heads off to investigate >>
To be fair, at this point it's more a proof of concept: can I design a set of classes that let you plug any fractal set into any colourising method (with any mapping technique to boot)? It's not intended to be a finished product; why would the world need another fractal generator?
Thanks,
Mike





I had some time to get an estimate for the accumulation of the precision-based error, and found that its upper limit is about machine_precision * 4^iterations, and the average error would be around machine_precision * 2^iterations. That means the maximum error approaches 1 after about 28 iterations for double values (55-56 bit mantissa), and the average error approaches 1 after about 56 iterations.
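Those numbers can be sanity-checked with the error recurrence itself: each iteration multiplies the existing error by at most |2z| <= 4 (while |z| <= 2) and adds one new rounding error. A sketch, using a 53-bit mantissa (so the thresholds come out at about 28 and 54 rather than 56):

```python
EPS = 2.0 ** -53            # unit roundoff for a 53-bit double mantissa

def iterations_until_error_reaches_one(factor):
    """Solve e_{n+1} = factor * e_n + EPS with e_0 = 0; return first n with e_n >= 1."""
    e, n = 0.0, 0
    while e < 1.0:
        e = factor * e + EPS
        n += 1
    return n

worst = iterations_until_error_reaches_one(4)     # worst case: |2z| = 4 every step
average = iterations_until_error_reaches_one(2)   # milder, averaged amplification
print(worst, average)                             # 28 54
```

The closed form is e_n = EPS * (factor^n - 1) / (factor - 1), so the crossing points move only slightly with the assumed mantissa length.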
If you intend to store intermediate results, you might consider approaching this problem from the other side:
1. partition your intervals according to your machine precision, e.g. 2^16 x 2^16 points
2. for each point, store 1 if after one iteration the divergence criterion is met, or 0 if not. This is the iteration counter.
3. repeat for each point with a value of 0:
3.1 convert the result after one iteration to the resolution of your point array (i.e. check which pixel is closest)
3.2 check the iteration counter for the point corresponding to that value: if it is not 0, store that value + 1. Else your counter is still 0.
3.3 when you're done with all points that still have a counter of 0, start all over again, for at most as many times as appropriate to your initial resolution (e.g. no more than 16 times if you started at 2^16 x 2^16)
This is a reversal of the actual algorithm, deliberately adding in the error (in step 3.1) that you would otherwise get when calculating at a given resolution (in this case 16 bit).
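A toy sketch of the scheme on a small grid. One caveat, which is my simplification: for the Mandelbrot set every point has its own c, so the grid lookup would mix parameters; the recurrence is only self-consistent for a Julia-type iteration with a fixed c, which is what this version does:

```python
import math

def grid_escape_counts(c, n=64, span=2.0, max_passes=16):
    """counter[i][j]: quantized escape time of grid point z_ij (0 = not escaped)."""
    step = 2 * span / n

    def coord(i):                           # grid index -> cell-centre coordinate
        return -span + (i + 0.5) * step

    def index(x):                           # coordinate -> grid index, or None
        i = math.floor((x + span) / step)
        return i if 0 <= i < n else None

    counter = [[0] * n for _ in range(n)]
    # step 2: mark points meeting the divergence criterion after one iteration
    for i in range(n):
        for j in range(n):
            z = complex(coord(i), coord(j))
            if abs(z * z + c) > 2:
                counter[i][j] = 1
    # step 3: for the rest, follow one iteration, snap to the grid, reuse its counter
    for _ in range(max_passes):
        changed = False
        for i in range(n):
            for j in range(n):
                if counter[i][j]:
                    continue
                w = complex(coord(i), coord(j)) ** 2 + c
                wi, wj = index(w.real), index(w.imag)
                if wi is None or wj is None:
                    counter[i][j] = 1       # left the grid: treat as escaped
                    changed = True
                elif counter[wi][wj]:
                    counter[i][j] = counter[wi][wj] + 1
                    changed = True
        if not changed:
            break
    return counter
```

The snap-to-grid in `index` is exactly the deliberately added quantization error of step 3.1.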





Thanks for spending time on this - I'm going to have to sit down with a large coffee (or 8) before I get my head around it. Separately, I found GNU MP and some .NET wrappers; when I get there I'll have a decent look at both options.
Thanks again,
Mike





From here: http://mrob.com/pub/muency/accuracy.html[^]
The commonly-seen views of the Mandelbrot Set have been challenged by arguments based on techniques like error-term analysis (see Propagation of uncertainty). They show that if you want to get an accurate view of a lemniscate Z_N you need to use at least N digits in all your calculations. The result is that most views of the Mandelbrot Set that people see on computers are (in theory) completely inaccurate or even "fictitious".
However, except for certain specific purposes (notably using iteration to trace an external ray), the problem is much smaller. The overwhelming amount of experimental evidence shows that ordinary Mandelbrot images (plotting e.g. the dwell for each point on a pixel grid) are indistinguishable from equivalent results produced by exact calculation. The images look the same to the human eye regardless of how many digits you use, as long as the number of digits is sufficient to distinguish the coordinates of the parameter value C.
This is because the roundoff errors added by each step in the iteration tend to cancel out as they would if randomly distributed, rather than systematically biased in one certain direction. See Systematic error, "Systematic versus random error".
Not that this means the discussion about errors is without value, especially since the explanation above of why you might choose to ignore them relies on an assumed internal cancellation of errors through randomness. Indeed, the above validates the discussion, but perhaps helps put into context how much effort one might want to put into higher accuracy vs., say, better functionality.
Mike





Interesting. I have to admit I was a bit doubtful of my own line of argumentation, since I've seen incredibly detailed pictures from the Mandelbrot Set as early as 25 years ago, and I doubt many of them (if any) were calculated using arbitrary precision. Nor did their smoothness indicate anything even close to the error levels that error propagation theory would suggest.
Then again, I've seen some fixed-point 16-bit implementations that were clearly useless for anything but calculating the full picture (at a low resolution, preferably) - zooming in pretty quickly revealed the huge errors this limited accuracy produced.
In any case, you should make sure that when you zoom in to some interesting area, your accuracy is still some orders of magnitude above the distance between pixels, or else you'll get the same kind of artifacts I mentioned in the previous paragraph.
P.S.: I considered how to model the cancelling out: the systematic error based on machine precision has a uniform distribution. Iterating this calculation is like adding independent variables (up to 4 times in one iteration step), resulting in a distribution that looks more like a normal distribution. The expected error will be 0, on average, but of greater interest is the likelihood that the error exceeds some value that makes the result unusable (an error about the size of 1, or greater). Unfortunately my knowledge of probability theory is somewhat rusty, but I suppose if you can determine that likelihood and it is on the order of no more than 1/(number of pixels), then you should still get good results for visualization purposes.
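The cancellation argument can be illustrated with a toy Monte Carlo run: model each iteration's rounding error as an independent uniform draw in [-eps, eps] (ignoring amplification entirely, which is of course the contentious assumption) and look at the distribution of the summed error:

```python
import random

random.seed(1)
EPS = 2.0 ** -53                 # unit roundoff of a double
TRIALS, STEPS = 10_000, 100

sums = [sum(random.uniform(-EPS, EPS) for _ in range(STEPS))
        for _ in range(TRIALS)]

mean = sum(sums) / TRIALS
std = (sum((s - mean) ** 2 for s in sums) / TRIALS) ** 0.5
# theory for i.i.d. uniforms: std of the sum = EPS * sqrt(STEPS / 3), about 5.8 * EPS
print(mean, std)
```

Under this model the summed error stays within a few dozen machine epsilons of zero; an error anywhere near 1 would be an astronomically far tail event of a nearly normal distribution.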





Try doing the computation 1 pixel at a time, and discarding the results from all iterations except 0, n-1 and n, where n is the total number of iterations in the computation so far. This would reduce the memory used for complex numbers to just 48 bytes!
EDIT: Probably not 48 bytes, but 3 complex numbers need to be stored. This could be any amount, depending on the precision used.





Hello everyone
I need to generate a PDF417 barcode and do the reverse operation.
The problem is that the text I need to encode is in Arabic, like "أ ب ت ث ...".
I found a lot of SDKs and online programs that help with generating and reading PDF417 barcodes, but none of them support the Arabic language.
Could anyone help me with that? And do I need to build a program for the whole PDF417 encoding, which needs time?





I did not downvote this, but someone probably did so because you already posted this question in the C# forum.
Having been a member on this site for almost 4 years, you should know not to do that...
Soren Madsen





I guess it's an algorithm sort of question, or maybe not, but this is the closest approximation I can find...
I spent many years of my life creating mathematical models of systems, but all of them were more or less continuous functions - missile guidance, targeting, filtering - that sort of thing. But I'm trying to model a discontinuous system at the moment, and I haven't a clue how to start. The current problem: I have a series of lift stations, each containing a wet well - a hole in the ground that receives liquid from upstream sources at unpredictable times - and a pair of pumps that switch on at preset levels to empty each well. The linear parts I can figure out, knowing such things as the pump flow rates, head pressures and frictional pipeline losses. But how do I model the discrete on/off times for each pump in order to maintain optimal transport rates without overflowing any well in the line?
Can anyone suggest a link or two that demonstrates how this is typically done? I'm thinking some kind of state machine model with discrete time intervals, but I could be completely off the mark.
Will Rogers never met me.





Hi Roger,
What you are looking for is "discrete event simulation". The basic idea is that the system runs off a queue of future events, sorted in time order, and picks them off and handles them one at a time. The "clock" jumps from one event to the next, which is where the "discrete time" bit comes from. Consider me filling a tub with a bucket.
Event 1, t=0: Bucket under tap, turn tap on. It takes 5 secs to do this, so schedule event 2, time 5.
Event 2, t=5: Tap running. Tap runs at 10 gpm, bucket holds 10 gal, so it takes 60 s to fill. Schedule event 3, time 65.
Event 3, t=65: Bucket full. turn tap off. 2 sec. => event 4 @ t=67.
Event 4, t=67: Carry bucket to tub and tip in. 10 secs
Event 5, t=77: Is the tub full yet? ...
These events could easily be interleaved with you filling a different tub from a different tap. Things get interesting when we interact, by sharing, say, the tap. Queuing gets involved then. (btw, discrete event simulation is *the* way to do Monte Carlo queueing problems.)
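The bucket example above can be transcribed almost literally into a minimal event-queue skeleton (a sketch with the same illustrative timings, not a full package):

```python
import heapq
import itertools

class Simulator:
    """Minimal discrete-event core: a time-ordered heap of (when, seq, action)."""
    def __init__(self):
        self.now = 0.0
        self._seq = itertools.count()     # tie-breaker for simultaneous events
        self._queue = []

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, next(self._seq), action))

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

sim, log = Simulator(), []

def turn_tap_on():                        # event 1, t=0
    log.append((sim.now, "tap on"))
    sim.schedule(5, tap_running)          # opening the tap takes 5 s

def tap_running():                        # event 2, t=5
    log.append((sim.now, "tap running"))
    sim.schedule(60, bucket_full)         # bucket fills in 60 s

def bucket_full():                        # event 3, t=65
    log.append((sim.now, "bucket full, turn tap off"))
    sim.schedule(2, carry_to_tub)         # closing the tap takes 2 s

def carry_to_tub():                       # event 4, t=67
    log.append((sim.now, "carry bucket to tub and tip in"))
    sim.schedule(10, tub_check)           # carrying takes 10 s

def tub_check():                          # event 5, t=77
    log.append((sim.now, "is the tub full yet?"))

sim.schedule(0, turn_tap_on)
sim.run()
```

Note the clock never ticks continuously; `run` just jumps `now` from one queued event to the next, which is the whole trick.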
Having said all that, I'm sure:
(a) your google-fu is at least as good as mine.
(b) there are free packages out there.
The hard work is in describing the system to the point where your model is complete enough to hang together. Conservation of matter is always a good starting point.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Peter_in_2780 wrote: The hard work is in describing the system to the point where your model is complete enough to hang together.
That's what I'm trying to grasp, I think. Six stations, 12 pumps; two pumps running together don't output twice what one pump does; one pump starts at level 1, the other at level 2, but both shut down at level 0... It's not as simple as writing a differential equation for an electrical circuit.
Will Rogers never met me.





The good thing about simulation is that if it blows up, no physical damage is done! (Just as well, given some of the simulations I've run in the past!)
At each event, you need to update the state of the world (pump 1 is running, so the level in tank 23 is going down at 1000 gpm, the pressure in the pipe at point X is ...) then predict what "nonlinear" events are going to happen and when (tank 23 will reach lower limit switch level in 18 minutes, tank 28 will start filling at 1000 gpm in 7 minutes ...) then plug them in as future events. All the continuous stuff (like solving DEs) is hidden in the 'prediction' phase of event handling.
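For the linear parts, that 'prediction' phase is just solving level(t) = limit for t. A toy version (the tank numbers are hypothetical, chosen to match the 18-minute figure above):

```python
def minutes_until(level_gal, target_gal, net_flow_gpm):
    """Time until a tank reaches a target level at constant net flow; None if never."""
    if net_flow_gpm == 0:
        return None
    t = (target_gal - level_gal) / net_flow_gpm
    return t if t >= 0 else None

# tank 23: 20000 gal now, pump drawing a net 1000 gpm, low-limit switch at 2000 gal
print(minutes_until(20000, 2000, -1000))   # 18.0 -> schedule a "tank 23 low" event
```

Each such prediction becomes a future event in the queue; if an earlier event changes the flow, you simply recompute and reschedule.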
I must admit, the first few serious simulations I wrote, the system behaviour stuff was hardcoded. The event handling skeleton and utility functions were reused, and slowly morphed into a more general purpose beast that could actually be described as a 'package'. Sadly, it's all faded into history. Last seen in the bucket "things I might port from Fortran77 to C".
The size of your system is NOT an issue for getting a simulation running. If you can model one station, then adding five more (even with different parameters) is trivial.
If you want to continue this conversation offline, feel free.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Thanks, Peter. I've spent much of the evening doing some paper model building, and I might just take you up on the offer. I've done a bunch of state transition stuff in the past, but it's been a long time. The prime rule was: that which can be observed can be controlled - that which can't, can't. So now I'm looking at what can be observed directly, what can be inferred from those observations, and what state variables to control using that information. It's unfortunate that we have no means by which to observe directly whether a pump is running, just crude floats that are either on or off. If I could access actual pump states, I could do so much more for prediction and control, but the best I can hope for is to learn that, after a level was reached and failed to subside within a set period of time, the pump did not respond. That will have to do for now, but it gives me a great tool for arguing that we should add more monitoring circuitry!
What I think I can do, though, is to model the proper operation condition, and use that to simulate different flows into and out of various stations in order to optimize the levels of the floats and perhaps, identify pumps that may be under or over sized. Later, if they let me add more monitoring, I can extend the model to failure prediction, and that's my end goal. I am a hardware weenie, after all. I really think it would be better to call an emergency crew out before the thing overflows, rather than waiting until a high level alarm is sounded just minutes before raw sewage starts running over the top.
I'll email you if I get stuck - and thanks again for the offer!
Will Rogers never met me.





Right now I know how to get the mouse location and show the mouse, and handle left click, right click and such. I am wondering if anyone has a formula for an AI for a path, e.g.
o = your sprite
- = wall, cannot pass through
' = floor, can pass through
* = pathway for the sprite
C = the clicked point for the path to go to
Here's my example:
o'''''
'*'''
'****C
I hope you understand: basically I want it so that when I tell it to go to a certain location it will go there, and if there is an object in its way it will use a formula to get around it.
All this is in 2D, by the way.
And if you have the formula, can you please make it as simple as possible?






Ty, A* seems the simplest, but it's still a lot and will take a while to understand.
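For reference, a compact A* sketch on a tile grid ('#' marks impassable cells here; the tile characters are placeholders, not the ones from the post above):

```python
import heapq

def astar(grid, start, goal):
    """grid: list of equal-length strings, '#' = wall. Returns a path of (row, col)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    heap = [(h(start), 0, start, None)]         # entries are (f, g, node, parent)
    parents, best_g = {}, {start: 0}
    while heap:
        _, g, cur, parent = heapq.heappop(heap)
        if cur in parents:                      # already expanded via a shorter route
            continue
        parents[cur] = parent
        if cur == goal:                         # walk the parent chain back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(heap, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None                                 # goal unreachable
```

For example, `astar(["......", ".####.", "......"], (0, 0), (2, 5))` walks around the wall and returns the 8-cell shortest path.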







So I'm working on a side project at the moment dealing with computer vision, and I find myself needing to identify circles of an unknown size in an image. I've found a lot of information online about using the Hough Transform for circles, and MANY variations of that transform. Is there anything else out there that can be used for this purpose? I'm looking for something else that is quicker than the Hough Transform, and I am willing to sacrifice some accuracy to achieve this.
Please note that I am not looking for a library or tool to do this for me (like OpenCV), I've found plenty of them, and they all use the Hough Transform. I'm looking for an actual algorithm or related research.
Be The Noise





AFAIK Hough is the best available. When the circles are prominent, i.e. have quite some thickness, you could reduce the resolution of your image so the thickness of the circle(s) becomes, say, 2 pixels; that should provide quite a performance improvement.
And of course image processing is a field where you can efficiently apply multithreading, as well as gain performance by putting locality of reference first (i.e. deal with bands or small areas, not entire images at once).





Thanks for the suggestions! I'll definitely try reducing the resolution and try to utilize more multithreading (this is for a mobile app, and the benefits of multithreading aren't THAT great). I've been playing with blur and color changes as well to speed things up.
Be The Noise





You should also think of using the GPU for such tasks which can improve the performance a lot.






The erosion operator (http://en.wikipedia.org/wiki/Erosion_%28morphology%29[^] ) can detect circles faster than the Hough Transform.
You have to know the size in advance, though, although you can do N searches for N different diameters. (You'd have to do N searches for different sized circles using the Hough Transform also.)
Are the circles drawn as just the circumferences, or are they filled in?
"Microsoft - Adding unnecessary complexity to your work since 1987!"





Very nice, thanks! I'll check it out and let you know if it works out. While the sizes of circles change a bit, it's not too bad to just go through a few diameters.
The circles are filled, though I could do an edge detection to get rid of it if needed.
Be The Noise





The Wikipedia article makes it look harder than it is. Erosion (binary) can be easily implemented as only shifts and ANDs.
To recognize a circle:
1. Take an arc that's half the circle's circumference, and divide it into N segments. Each segment is a short vector.
2. For each vector, shift the image by that vector and AND it with the original image.
3. When you're done, pixels will remain only at the regions that were at the center of (at least) a circle of the original size.
4. Starting at the higher diameters will enable you to remove them first, so you can recognize the smaller diameters later.
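A toy pure-Python version of the shift-and-AND trick. Equivalently: a pixel survives only if the whole circle of the given radius around it is set; here n sample vectors stand in for the arc segments:

```python
import math

def shift(img, dr, dc):
    """Binary image as nested lists; pixels shifted in from the edge become 0."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            sr, sc = r + dr, c + dc
            if 0 <= sr < rows and 0 <= sc < cols:
                out[r][c] = img[sr][sc]
    return out

def erode_by_circle(img, radius, n=16):
    """AND the image with itself shifted to n points on the circle of `radius`."""
    out = [row[:] for row in img]
    for k in range(n):
        a = 2 * math.pi * k / n
        dr, dc = round(radius * math.sin(a)), round(radius * math.cos(a))
        s = shift(img, dr, dc)
        out = [[o & v for o, v in zip(ro, rs)] for ro, rs in zip(out, s)]
    return out
```

On a filled disk of radius 6, eroding by a circle of radius 5 leaves only pixels near the disk's centre, which is exactly the "pixels remain at the centres" behaviour described above.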
"Microsoft - Adding unnecessary complexity to your work since 1987!"





haha, you must've been reading my mind
This makes it much easier to implement. Thanks!
Be The Noise





Looking at this again, I realized Step 2 could be misinterpreted:
"2. For each vector, shift the image by that vector and AND it with the original image."
By "original image", I mean the image before the shift.
So,
foreach (vector in Vectors)
{
    previousImage = image.copy();  // a copy, not a reference, or the shift clobbers it
    image.shiftBy(vector);
    image.andWith(previousImage);
}
And all remaining pixels in 'image' are contained within (at least) a circle of the given radius.
"Microsoft - Adding unnecessary complexity to your work since 1987!"





Hello,
When it comes to image processing tasks, I would say that it is much easier to discuss when there are a few sample pictures available (if there are no confidentiality restrictions, of course). Talking about circles ... in some cases you can simplify things a lot by finding stand-alone blobs/objects in a picture and then doing further shape analysis on those ...





Hi Andrew,
There is no confidentiality, and I have many samples of the images, but it would probably be easier to get some samples yourself. I'm working on a mobile app to identify traffic lights and tell me what color they are as I drive. I've found a lot of research on the topic, but most of the research methods use extra computers in the trunk of the car, so they don't work too well on a consumer smart phone.
I've actually been using some of the algorithms in the AForge library to identify the circles (great work, by the way). Reducing the resolution before I use the camera, and some blurring, have helped a lot. I also use some color filtering to make sure I'm only looking for the colored lights within a certain threshold (red, amber, green). I've also been toying with the accelerometers to do some coarse localization so I don't have to scan the entire image. All together, I'm getting some decent results, but I still need to put in a lot more time on the project. This is just something I'm doing for fun, not anything work related.
Right now I'm really dealing with false positives due to street lamps and other cars' brake lights, which is another reason I've been trying to localize the scanning. I'm also working through some instances where the traffic light is backlit by a street lamp at night, or the sun during the day, which makes it very hard to spot; but I'm thinking some white balance can help with that.
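For the colour-filter part, the crudest possible sketch is a per-channel threshold against reference colours. The RGB values and tolerance below are made up for illustration, not tuned for real traffic lights:

```python
# hypothetical reference colours; a real app would need calibrated values
REFS = {"red": (200, 40, 40), "amber": (230, 160, 30), "green": (40, 200, 90)}

def classify_pixel(rgb, tol=60):
    """Return the first light colour within `tol` per channel, else None."""
    for name, ref in REFS.items():
        if all(abs(p - q) <= tol for p, q in zip(rgb, ref)):
            return name
    return None
```

In practice an HSV-style comparison would be more robust against the backlighting problem than raw RGB distances, but the structure is the same.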
Thanks for chiming in! If you have any ideas that you think may help with this, please feel free to pass it along!
Be The Noise





You know that the green in traffic lights actually has got a lot of blue in it as well. For the colour blind.





Does anybody know an algorithm to recognize trends in 2D line charts? Something that, for example in this chart, returns an array with coordinate pairs A/B and B/C?





If you know what form of equation the data should follow, least squares (Google has some good references) will fit an equation to a set of data. It can need some matrix juggling, but that's what computers are for...





I'm going to guess this is to do with financial markets. I've been there myself, and I'll warn you, those trends are not nearly so real as the eye makes them look!
The simplest approach is to smooth out the 'noise' (all the little bumps between B and C) by applying a moving average, gaussian smooth or similar to the data, and then look for peaks and troughs in the smoothed signal. Alternatively you can differentiate the smoothed version which will give you a trend measurement and then look for where that is positive or negative (essentially the same thing from a different angle). But that means you are applying a preconception as to what is 'noise' and what is 'real data' which obviously affects the answer you get.
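A minimal version of that pipeline: smooth with a moving average, then report the indices where the smoothed series turns:

```python
def moving_average(xs, w):
    """Simple trailing window; output is len(xs) - w + 1 points long."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

def turning_points(xs):
    """Indices that are a strict local peak or trough."""
    return [i for i in range(1, len(xs) - 1)
            if (xs[i - 1] < xs[i] > xs[i + 1]) or (xs[i - 1] > xs[i] < xs[i + 1])]
```

Something like `turning_points(moving_average(prices, 5))` then yields candidate B/C-style points; the window width is precisely the noise preconception mentioned above, so different widths give different "trends".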





BobJanova wrote: I'm going to guess this is to do with financial markets.
you guessed right
BobJanova wrote: I've been there myself, and I'll warn you, those trends are not nearly so real as the eye makes them look!
Can you tell me more about it? why you think that? Do you know any good knowledge source about that topic?
BobJanova wrote: The simplest approach is to smooth out the 'noise' (all the little bumps between B and C) by applying a moving average, gaussian smooth or similar to the data, and then look for peaks and troughs in the smoothed signal. Alternatively you can differentiate the smoothed version which will give you a trend measurement and then look for where that is positive or negative (essentially the same thing from a different angle). But that means you are applying a preconception as to what is 'noise' and what is 'real data' which obviously affects the answer you get.
Will think about that. What would a more sophisticated approach be?
Thx for the answer, really helpful!





It might be worth pointing out that nobody in the >200 year history of all markets has ever been able to perform technical/trend analysis and reliably beat the market.
There is a strong argument for why this is the case that you should understand in detail first.
http://en.wikipedia.org/wiki/Efficient-market_hypothesis[^]





Can you tell me more about it? why you think that?
Experience. I (and some partners) started down this road for a while, and we found it extremely difficult to produce an objective measurement that matched what the eye can see. Trends in markets appear to be fractal and (obviously, otherwise there'd be easy free money) non-deterministic. Furthermore, as the other post explains, there are good reasons to believe that markets are self-correcting, so that any simple* analysis is by definition useless for predictive purposes.
A more complex approach is to look at features of secondary indicators, for example volatility or trade volume, in addition to the price. Some economists will tell you that while price is not predictable, volatility sometimes is – though whether that helps you predict price (essentially what you're trying to do in order to make money) is less clear!
*: The feedback mechanism that causes markets to be 'efficient' is that, if a clear inefficiency is seen, traders will take advantage of it until it is no longer profitable. Thus a simple analysis in this context is one that a sufficiently large proportion of market participants have access to. Realistically, if an investment bank knows how to do what you are trying, it won't make money.





I'm not trying to predict future prices based only on past chart patterns (I never thought technical analysis would work). What I'm trying to do is find the causes of those patterns. If a message is released exactly at the turning point of a chart, and if similar chart/message patterns (I know, this is the second big difficulty: how do you define text similarity?) often occur, I'd guess the probability of a similar message causing a similar chart pattern is high.





Newbie18 wrote: If a message is released exactly at the turning point of a chart and if similar chart/message patterns (I know, this is the second big difficulty: how do you define text similarity?) often occur, I'd guess the probability of a similar message causing a similar chart pattern is high.
Computing text similarity is easy. Teaching the computer to do what you want is impossible.
Aight, say you focus on determining the "cause" of a certain change in the graph; let's say there's a turning point that can be easily identified. You'd then have to search all the news and find a message that actually relates to the change (most news will not, obviously).
Say you found all the messages relating to Microsoft, just an hour before the big change in stock price; how would you have the program differentiate between a rant on Windows and a statement from its CEO? Given the way financials talk, how do you think your app would interpret a sentence like
"We have managed to stop the decline in growth."
You and I would have trouble interpreting that line; a bot would have even more trouble. And we haven't even looked at the fact that some news takes more than an hour to reach investors, or that it may take longer to realize what's going on.
To make things worse, most of those turning points will not be attributable to a single headline.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]





Eddy Vluggen wrote: Computing textsimilarity is easy.
Now i'm curious. How would you do that?
The program doesn't have to interpret any information in the text; that's of course too difficult. If it can find a "class" of messages that are similar to each other and all released several hours before a rise in the price (it doesn't have to be a rise over a few hours; it could also be over a week or even a month) of the company mentioned in them, the relation should be obvious.
Eddy Vluggen wrote: To make things worse, most of those turning points will not be attributable to a single headline.
If enough messages are processed, maybe I can find some which have enough relevance of their own.
I know it's not likely that my ideas are realizable, and if they are, most probably some bank or hedge fund has already done it and I can't earn anything at all with my application. But I just have to try.





Newbie18 wrote: Now i'm curious. How would you do that?
There are multiple algorithms that can be used to determine whether two words 'sound alike' (like Metaphone and Soundex), or what their "edit distance" is (Levenshtein[^]) - you could even download and abuse Wiktionary to find synonyms and validate against their Soundex.
Newbie18 wrote: If it can find a "class" of messages, that are similiar to each other
Without interpreting the message, it's nigh impossible to determine whether you're dealing with "good" or "bad" news. Since you can't make that simple classification, I'd have my doubts about the validity of more complex classes.
Newbie18 wrote: I know, it's not likely that my ideas are realizable, and if they are, most probably some Bank or Hedge Fund has already done it
Not in this way.
They're most likely scanning for keywords, and have a human validating the most promising headlines. You could perhaps "train" some artificial neural network to recognize "positive" news, but again you'd hit new trouble - your bot might start to buy/sell based on false rumours.
Say you do find a correlation between trades (somewhat simpler than a correlation between price and news); say you notice a move in the price of copper before the price of silver moves - that might imply a connection, but it might also be a coincidence. In practice, silver often follows, but on some days it simply moves in a contrary direction due to "other factors" (like the discovery of a huge deposit of silver).
If trading were that easy, we would have replaced wall street completely with computers and gotten rid of the human influence a long time ago.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]






Edit: the location of this database of full moons: "Full Moon Dates Between (1900 - 2100)"[^] ... has eliminated my need to find an algorithm. But, I'll leave this "up," in case the link to the database is useful to someone else.
No, I am not "into" astrology, but a friend asked me if I could write a calculator that given year, month, day, would indicate the number of full moons between that date, and the current date.
So, this is a question more of "curiosity," rather than one aimed at any practical result.
Here's a text-file dataset (times shown: GMT+1) of full moons from 1943-1953.[^]. Also see:[^].
What interests me is whether, given the variance between the solar year (in the "western" calendar, given "leap years," etc.) and lunar cycles, one can algorithmically compute the number of years between a full moon falling on a certain day of the week in a specific week and month, and the next full moon falling on the same day of the month and week in the future.
Other potential complexities, of lunar cycle duration and systems of lunation numbering, are well-described here:[^], and here:[^].
Now, if one had an algorithm that would compute the number of years with 13, rather than 12, moons per year, based on a starting year, month, date, I suppose that would make it easier.
Appreciate any thoughts, thanks, Bill
The glyphs you are reading now: are placeholders signifying the total absence of a signature.
modified 1-Jul-12 10:31am.





I have some code that calculates the date of Easter for any year, based on the first full moon after March 21st. I do not fully understand it but I know it works; I guess it could be adapted to what you are looking for.





Richard MacCutchan wrote: date of Easter for any year, based on the first full moon after March 21st.
Note that the date of Easter (western, not Greek Orthodox or Russian Orthodox) relies on a specific definition of what is a full moon - it is 'as seen from Rome', and Easter is the first Sunday after that full moon (itself the first after the 21st of March), which makes an unsubtle variation in the actual date (up to +/- 7 days) as seen from the observer's position, because it may be Sunday earlier or later locally than it is in Rome. Gauss's algorithm does not work 100% of the time (IIRC there was at least one date in the 1800s that his algorithm got wrong).
This is the formula that I have been using for the last 35 years (changed language several times but same method):
Date.prototype.Easter = function(optYear) {
    var year = optYear
        ? ( optYear.constructor == Date
            ? optYear.getFullYear()
            : ( optYear < 1900 ? optYear + 1900 : optYear ) )
        : this.getFullYear();
    var a = year % 19;
    var b = Math.floor(year / 100);
    var c = year % 100;
    var d = Math.floor(b / 4);
    var h = (19 * a + b - d - Math.floor((8 * b + 13) / 25) + 15) % 30;
    var mu = Math.floor((a + 11 * h) / 319) - h;
    var lambda = (2 * (b - d * 4) + Math.floor(c / 4) * 6 - c + mu + 32) % 7 - mu;
    var month = Math.floor((lambda + 90) / 25);
    return new Date(year, month - 1, (lambda + month + 19) % 32);
};
The variable names are based on the names in the original article cited in the comments but I have optimised the calculations (used to be 10 steps, now only 8 steps).





I did not bother to check every year in the last 2000+, since it seemed to work for plus or minus 5 years since I started using it.
One of these days I'm going to think of a really clever signature.





For this sort of question, and many more such, my reference is Dershowitz & Reingold's Calendrical Calculations. I have a first ed dead tree; I think it's now up to 3rd ed. All the code from the book (Lisp!) is available for free download. I think there's also Java available.
Google knows more than me... (but do either of us understand?)
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





A lunar month equals 29 days, 12 hours, 44 minutes; so I'd divide the timespan by that, yielding a pretty good estimate. It might be off by one if either one of the dates is just before/after a full moon.
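That estimate needs one known full moon as a phase reference. A sketch using the mean synodic month and, as the anchor, the full moon of 21 Jan 2000 (the night of a total lunar eclipse; treat the exact time as approximate):

```python
from datetime import datetime, timedelta

SYNODIC = timedelta(days=29, hours=12, minutes=44)   # mean lunar (synodic) month
ANCHOR = datetime(2000, 1, 21, 4, 44)                # a full moon, approx. UTC

def full_moons_between(a, b):
    """Approximate count of full moons in [a, b); may be off by one near a full moon."""
    return (b - ANCHOR) // SYNODIC - (a - ANCHOR) // SYNODIC

print(full_moons_between(datetime(2000, 1, 1), datetime(2001, 1, 1)))   # 12
```

Floor-dividing the offset from the anchor by the synodic month counts whole lunations on each side, so the difference is the number of full moons in the interval; the mean-month approximation slowly drifts against the true (varying) lunation length, hence the off-by-one caveat.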




