Stats
112.1K views 8.4K downloads 179 bookmarked
Posted
3 Jul 2014

Comments and Discussions



I was going through your code and I was wondering why you have taken the population as 114.
Does taking this number enable us to get the optimized path, and if yes, how did you find this count?
I would be glad to get an answer to my query.
Thanks in advance.
With Regards,
Priyank Keshri





I expected people to ask me this when I first published the article... but no one asked.
Actually, when I did my tests I was going to use 100 as the population... but then I decided to take 114, as this is 57 x 2.
Then, I know: why 57 times 2?
This has nothing to do with programming. It started as a joke and I kept using it for some time. If you look at my articles you will see that I constantly use numbers like 657, 5757, 5758... and many other numbers that have 57 in them.
This joke started because 57 seems to be the most commonly used number in many places where people chose a "random number" without using an algorithm to do so. Effectively, people seem to avoid round numbers (10, 20, 100, 1000, etc.). They also seem to avoid numbers that end in 5 or that are even, as those seem "too predictable". Maybe, subconsciously, people try to find odd or, more specifically, prime numbers. 1, 3, 5, 7 would fit. 9, as it is not a prime, seems not to fit, and also seems to be avoided, as many "promotions" keep using values like 1.99, 5.99, etc., instead of the more direct 2 or 6.
So, from those numbers, people don't want the ones that are too small or too big. So 1 is out. Now we have 3, 5 and 7 as the most common digits... and if I need to choose only two of them, 35 will not fit because it ends in 5. So 57, if I keep the digits in order, would be OK. 53 also works. 73 could also work, but people seem to choose 57 more frequently.
If you want, start watching movies and try to identify house numbers, car plate numbers, etc... you will see a lot of 57s there... usually not alone: numbers like 357, 1257, 1571 and the like.
I simply went one step further and started using 57 on purpose... sometimes with some math on top of it.





Oh! That seems cool. Thanks for the explanation. I was thinking this count might be some kind of limitation on the combination of paths.
I have another doubt: how different is your logic from Dijkstra's Algorithm?
I was going through it, but was unable to get anything out of it.
With Regards,
Priyank Keshri





I actually don't know Dijkstra's Algorithm. But from what I was reading, it seems to cover all possibilities... if that's the case, it is a kind of "brute force" algorithm, but I can't really tell right now. I need to understand Dijkstra's Algorithm better to make a real comparison.
Edit: Actually, I just found a big difference. Dijkstra's Algorithm will ignore nodes that only add to the path length. The TSP algorithm actually needs to visit all the nodes, so it can't simply find the "shortest path". In fact, the shortest path back to the start point is not to leave the start point at all...





Thanks for the help.
I need one last favor: how do I put a time window into this algorithm, in order to get an optimized route with preferred times for some locations?





When running the algorithm you can simply use a counter... or, if you really want to make it time-based, use a Stopwatch and check whether a given amount of time has already elapsed.
I know of one implementation that simply runs the code 100 times, and that's all.
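That stopping rule can be sketched like this (in Python rather than the article's C#; `evolve_one_generation` is a hypothetical callback standing in for one GA iteration, and `time.monotonic` plays the role of the Stopwatch):

```python
import time

def run_with_time_limit(evolve_one_generation, seconds=5.0):
    """Run the (hypothetical) per-generation step until a wall-clock budget expires."""
    deadline = time.monotonic() + seconds
    generations = 0
    while time.monotonic() < deadline:
        evolve_one_generation()
        generations += 1
    return generations
```

The counter-based variant mentioned above is simply `for _ in range(100): evolve_one_generation()`.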





Hello,
What about the scalability of the solution? Is this approach feasible for, say, 5000 nodes on simple hardware (4 cores / 16 GB RAM)?





16 GB of memory is definitely a lot for this algorithm. Yet 4 cores aren't going to help with the actual implementation, as it uses a single thread.
But the biggest thing is: what do you mean by feasible? The algorithm doesn't give an immediate answer. With 5000 nodes it will surely take a LOT of time to find the best solution. So, are you willing to wait?





I once faced a practical question: is it possible to solve the salesman problem with 5000 actual clients? My consideration showed significant time consumption for the result, but I could not imagine how long it would be. A day, a week, a month, more?
I do not mean to use your algorithm as is. It could be modified to accept 5000 nodes and take advantage of parallel computing. So, the question indeed is: how long? What is your assessment?





Well, I would only be guessing... but I would say that after 1 day you will probably have something acceptable, while finding anything near the real best would probably take a week or more.
Yet it is hard to tell. A 4-core computer can do it 4 times faster if using 4 threads, yet I don't know how fast the CPU is (and even if I did, I would not really know... it is simply a fact that CPU speed affects things a lot). You can probably make things faster by not showing anything at each iteration (maybe showing only after a thousand or more iterations), so there are many things that will actually affect the result.





Thank you. Indeed, in my experience, the speed at which genetic algorithms approach the solution changes dramatically during the calculations. At the beginning the population improves very quickly, but close to the solution the speed becomes slower and slower. We could use the distance between generations as an index of the maturity of the solution, and accept one that is not the best but is calculated in reasonable time. Am I right?






I specifically like how you adapted the operators to your specific problems, and illustrated your reasoning. I've found that people failing to apply a GA to some problem often didn't understand the importance of adapting both the encoding and the operators to the specific problem. Without that, a GA is simply too general to solve most problems very well!





Thank you for your vote, for your comment, and for participating in the discussion about evolutionary algorithms/GAs.
I am also glad that you liked the adaptations I made to make it work for the problem. I was trying to make it simple yet useful, and I thought that a non-adapted GA would generate too many invalid results.






Thank you very much!
Happy new year to you too!







I would give you one more 5 for GP or expression programming in another article if needed, but it seems outdated. So, a 5 for the clean explanation and interesting problem.
OK, my question is: have you tried to use not only the embedded random generator, but others like Sobol or Faure sequences? Do they converge faster than Mr. Knuth's algorithm (the embedded one) in general tasks? I work mostly with limited combinations.
I started to study evolutionary methods more when I saw that a genetic algorithm in a technical-analysis tool for the financial market outperformed everything like Monte Carlo in searching convenient ranges for decisions.
But now I am far away from such tools, surpassing them by 10000x in performance and simplicity.
Still, I am sure that evolutionary programming is a super practical tool for hypothesis tests, tuning parameters, creating cascades of filters, and even knowledge discovery.
Don't you think evolutionary programming will be a little bit better? I've switched to evolutionary algorithms combining numeric optimizations and expression programming, for generic implementation simplicity (certainly from my point of view). Being more or less satisfied with batch learning, I still can't finish two simple tasks:
1. combine stochastic gradient descent with an evolutionary core;
2. combine online kernel learning with online evolution and an SGD algorithm.





About using other Random classes: I didn't try the ones you mentioned, yet I tried different classes and didn't see any major difference. Some random algorithms give a better result but are slower by themselves, so, in the end, everything looks the same.
Also, for simplicity, I decided to keep using the basic Random class. Yet that's quite easy to change if needed.
About evolutionary algorithms being better than other algorithms... well, my opinion is that other algorithms can always be better if they really target the problem. Evolutionary algorithms are a "good" solution but rarely the best. Genetic algorithms are even more constrained (evolutionary algorithms don't require a static number of genes, for example). Yet, when there seems to be no good directed algorithm, evolutionary algorithms really do a very good job.
About your final items: this is the first time I have seen the name "stochastic gradient descent", so I really can't say anything about it.





The Sobol sequence outperforms a general random generator in Monte Carlo simulation by 1000x and more, thanks to its better uniform distribution, for financial derivatives pricing. So we need fewer simulation paths.
So, using some logic and tricks, we don't need CUDA or a lot of other hardware.
It is a question of online learning algorithms.
Certainly, if we combine the best techniques with the best hardware, we may achieve interesting results offline.
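As a rough illustration of why low-discrepancy sequences need fewer paths (a Python sketch; it uses a van der Corput/Halton construction as a simpler stand-in for Sobol, and the point count and the π-estimation task are arbitrary choices):

```python
import random

def van_der_corput(n, base):
    """n-th element of the van der Corput low-discrepancy sequence in the given base."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def estimate_pi(points):
    """Estimate pi from the fraction of unit-square points inside the quarter circle."""
    inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
    return 4.0 * inside / len(points)

N = 20_000
# 2-D Halton points: van der Corput in base 2 for x, base 3 for y.
halton = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, N + 1)]
rng = random.Random(57)
pseudo = [(rng.random(), rng.random()) for _ in range(N)]
# The low-discrepancy estimate is typically much closer to pi at the same N.
print(estimate_pi(halton), estimate_pi(pseudo))
```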
Gradient descent (or ascent, it doesn't matter; it depends on whether the path goes up or down the error curve) and stochastic GD are from the same field.
Certainly you must know about it; it comes from general optimization tasks.
GD is a very simple and very effective optimization technique.
GD is used in least-squares fitting, for example. It is the best-known and most-used optimization tool. Combining GD with sparsification, I got a 2000x and greater speedup in kernel learning versus SMO for kernel support vector learning algorithms. It gave me the possibility to use the best regression and classification algorithms online.
In general, the best combination is to use a random initial population and converge to the minimum or maximum by steps according to the angles (deltas, gradients). The gradient in general is the vector of derivatives, i.e., the vector of tangents of the angles at the current point.
The real implementation is trivial (the maths are annoying, as usual).
So if we can measure error1 and error2 after computing two fitnesses, we can certainly adjust our parameters (say we have only two: x and y) by deltas: x += (x2 - x1) * (error1 - error2), y += (y2 - y1) * (error1 - error2). "Stochastic" means we multiply the delta by (error1 - error2), or sign(error1 - error2), adjusted by the learning rate * random.NextDouble().
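A minimal sketch of that idea (in Python; the convex `error` surface, the learning rate, and the random factor are illustrative assumptions, and a finite-difference gradient stands in for the exact two-point delta described above):

```python
import random

def error(x, y):
    # Illustrative convex error surface with its minimum at (3, -2).
    return (x - 3.0) ** 2 + (y + 2.0) ** 2

def numeric_grad(f, x, y, h=1e-4):
    """Central-difference approximation of the gradient."""
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

rng = random.Random(57)
x, y = rng.uniform(-10, 10), rng.uniform(-10, 10)  # random initial point, "evo-style"
rate = 0.1

for _ in range(300):
    gx, gy = numeric_grad(error, x, y)
    noise = 0.5 + rng.random()  # stochastic learning-rate factor, as described above
    x -= rate * noise * gx
    y -= rate * noise * gy

print(round(x, 3), round(y, 3))  # ends at (3.0, -2.0)
```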
What is it needed for? Speedup. GD is the fastest technique. Its main problem: if the function is complicated, with many peaks and bottoms, it may converge to one of those local extremums. So evolution gives a good initial (and, if needed, adjusted, sometimes called "injected") distribution of search points.
But it is easy to see that the techniques are close to each other. An evolutionary program assumes a random distribution; GD or SGD adds a heuristic as a direction. In math theory everything is complicated; after enlightenment, the applied techniques are very simple, as usual.
Ok, thanks for the answer
modified 15Dec15 19:13pm.





There is no single algorithm that is 'best' for all types of optimization problems.
Gradient descent (like the closely related Newton method) works well to find a local minimum of a continuous function. It can also be applied to functions of multiple variables, but in practical applications there are better alternatives, starting from the modified Newton-Raphson method, to methods based on higher-order Taylor expansion, and more. All these methods require continuous functions, and the optimization problem is always to find the parameters that deliver the minimal result for that function. A problem with all of these methods is that all they can do is find a local optimum, and it is usually not trivial to find out whether or not there is more than one, or whether the local optimum you found is the best of all local optima.
GAs are an entirely different class of solvers: they always perform a global search, and, unless you restrict the encoding and operators correspondingly, they will usually deliver the globally best solution. The big drawback is that you can't tell how long you need to run the GA until it finds that optimum.
Also, GAs do not require a continuous function. In fact, GAs are most often used on combinatorial problems, such as the TSP. There is indeed always a function (the cost function) that is evaluated and compared to see whether the current candidate represents a better solution than others. But the inputs to that function are not necessarily continuous (real-valued) parameters. Therefore you cannot calculate the gradient of that function, or apply some Newton-related method.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)





Sorry, dude, do you follow?
It was a subtle question concerning evolutionary programming and genetic algorithms. The field is called genetic algorithms; the difference is that in GA you work with bits, while in evolutionary algorithms you work with a population of decisions, mutating both, mostly through random generations (it doesn't matter which bit or figure you are changing, or whether you perform crossover or something else).
I am satisfied with evolutionary methods. But I am asking an experienced programmer with a suitable background about his experience, because I see another article with expression programming based on GA.
There are not too many resources on it. For example, from some publications by the Microsoft research team (James D. McCaffrey) I conclude there are evolutionary benefits. There are many other genetic algorithms: bee, ant colony, PSO, etc. The latest of them is called the bat algorithm; in my opinion it is close to multi-particle swarm optimization paths.
My choice is evolutionary programming.
Second part: how to converge. A uniform distribution is better from my point of view. If you plot a basic random generator you'll see many points clustered around centroids, and only after many epochs of generations will you see the full space more or less painted.
And third part: combine gradient and stochastic descent to converge faster. I was not talking about it; I just said I have it in mind. In online machine learning, for example, there is nothing better than GD, whatever you do. Recursive least mean squares is just the same fitness function with weights adjusted by GD.
modified 5Jan16 7:38am.





First part: I am not sure I understand what you're trying to say. Your description of 'evo' states nothing that is not also part of GA, so I'm not at all sure you understand the difference yourself, or whether we're actually talking about the same thing. It doesn't seem like it. Also, when you say 'evo', do you mean 'Evolutionary Algorithm' or 'Evolutionary Programming'? The latter is a subgroup of the former, just as GAs are.
Second, this article is about GAs, and it is in that context that you have to read my answer: with respect to GAs, the question of whether or how well the GA converges does not depend on the random number generator at all. There are other factors that influence convergence in much more direct and obvious ways.
Third, again I am not sure what you meant to say. Your mentioning of machine learning and weights reminds me of neural networks, not GAs or EAs. These are entirely different fields of AI, and I don't understand why you bring them up in this context, except as an example where GD is best.





Trust me, I've done a lot of applied experiments, and paid idiots too. And my choice is evolutionary programming instead of the well-known genetic algorithms (or GA, which manipulates bits). Evolutionary programming acts on figures. Yes, they are together in the same club, called genetic algorithms in general; at least this terminology is used in the main sources.
It was a very subtle question to a practitioner. I've read some of his articles before.
Wish you a happy new year.





The "club" is not Genetic Algorithms. Genetic algorithms are more specific and part of the "club" named Evolutionary algorithms.





Certainly you are right. Yes it is.





I just read up on various types of Evolutionary algorithms[^]. After your mentioning of machine learning and adjusting weights, it seems to me that you are talking about Neuroevolution[^]. Evolutionary programming[^] would also fit.
I also read up on EA and GA again, and found that, technically, it's really just a matter of definition: the only real difference is that EA doesn't make any specific statement about how candidate solutions are represented, and it therefore doesn't suggest any specific type of genetic operators. I really don't understand why anyone bothered to introduce so many different categories for algorithms that are so similar in nature.
Also, back in the days when I programmed GAs, the term EA didn't even exist! When I think about it, I've seen colleagues work on Neuroevolution (although in their case, they used the GA only to optimize the topology of the network structure, and then gradient descent to optimize the weights for each candidate). The things I did were clearly GA. However, we might have expressed the candidate solutions without the chromosome strings. Considering the many constraints we had to implement in the genetic operators, the nature of the problem was more in the sense of the current definition of EA rather than a GA: simply applying some random mutation or crossover often wouldn't result in a legal candidate, so we had to severely alter these operators.
But then, when I think about it, no one ever used a straight mutation or crossover operator out of the box. So if that is the condition for calling something a GA, then GAs don't exist! On the other hand, if you're allowed to call something a GA even after modifying these operators to fit a specific problem, then any EA can be expressed as a GA, and there is no point in making a distinction at all!
Summary: no wonder we didn't understand each other; those definitions really don't make a lot of sense...





I never used, and do not have any plan to use, neural networks. Kernel methods outperform neural networks with similar accuracy, better speed, and the ability not to overfit, no matter whether regression or classification.
Modern machine learning theory is known from the works of Vapnik and Chervonenkis, who suggested it around 1969. In 1998-1999, John Platt from the tiny (micro) soft research laboratory suggested the first workable algorithm, called sequential minimal optimization. It gave the possibility to resolve the quadratic problem of the support vector machine error function in reasonable time. Later, many researchers suggested significant improvements, mostly based on principal components analysis and error function minimization.
For example, I use a regression that outperforms the cached SMO version by 2000x. What it gives in reality: due to PCA we have super sparsification, fitness function computing economy, and memory consumption economy. It's not a joke: for example, a decomposition of 1,500,000 trade prices is done in 4.5 seconds without any GPU, Amazon, or Azure, with a non-linear regression function which can't be overfitted by default.
GD is used inside to adjust weights in recursive least-squares fittings. If needed, cascade or multiple kernel learning may be used.
I was really satisfied that Herbert Simon and Peter Norvig (the first is a Nobel laureate, the second the author of Artificial Intelligence: A Modern Approach and a technical director at Google) share the clear idea that neural networks have always been dead. By the way, the last 10 machine learning world cups used only one family of learning algorithms: kernel methods in general and specifically support vector machines. Why they mostly try to use many cores and GPUs without simplification is another question. But not one word on neural networks.
I use evolutionary algorithms for finding some parameters, or in expression programming (like the second article on GA). The error function, as a response to the bandwidth of the kernel functions, is mostly monotonous and convex. As for expression combinations and feature extraction, evolutionary algorithms or GA are super good. But I'm really satisfied and just can't finish some additional optimizations.
So this small tail on modern machine learning is finished.
Continuation in the machine learning world cup.
Thank you for your attention, and happy new year.
modified 5Jan16 15:28pm.





We seem to come from very different backgrounds of work and knowledge, not to mention the time frame: I actively worked on ANNs, and a little on GAs, around 1990. I use different terminology, if only because the terms and categorizations commonly used today didn't exist back then.
I know very little of the methods and works you've mentioned. Likewise, a lot of the algorithms and methods I learned about are mostly unknown to people working in the field today.
My understanding of your work is that you use EA or similar only to find control parameters, or maybe starting points, for the actual problem you have, but then use GD to find the actual optimum. In that context, your statements totally make sense. But it doesn't change my response with regard to the effect of the random function on GAs. It may be different for other types of EAs, though.
Thank you for the discussion, and a happy new year to you, too.





To answer your question: the method of random number generation is totally inconsequential with regard to the quality of genetic algorithm convergence. The way random numbers are used here is for selecting N out of M genes that need to be modified, or for picking 1-2 out of N chromosomes that become the parent(s) of a new chromosome. For either use, the only thing that is important is that each gene or chromosome gets a roughly equal chance of being picked. The quality of random number generation only really makes a difference when you have a multidimensional random variable and you want to make sure that the components don't show a correlation. E.g., in a Monte Carlo simulation, you need the x and y values to be uncorrelated, and you can't guarantee that with standard random number generators.
The quality of a GA mostly depends on how well you manage to represent the solution as a chromosome, and how suitable the genetic operators are for the type of problem you try to solve.
There are also a number of parameters that influence how fast a GA converges to a local optimum, or whether it converges at all: e.g., the initial population size and the percentage of chromosomes from the previous generation dying off each generation. (GAs are often described in such a way that the generated offspring replaces the previous generation, i.e., 100% of the previous generation dies with every iteration, but that is not mandatory: you can choose to replace just a part of the population if that fits your problem better.)
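A minimal sketch of such a partially replaced generation (in Python; the list-of-bits chromosome shape, the one-point crossover, and the 50% replacement fraction are illustrative assumptions):

```python
import random

def next_generation(population, fitness, rng, replace_fraction=0.5):
    """Keep the best part of the population; refill the rest with crossover children."""
    ranked = sorted(population, key=fitness)              # lower fitness = better here
    n_keep = len(population) - int(len(population) * replace_fraction)
    survivors = ranked[:n_keep]
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = rng.sample(survivors, 2)                   # each survivor gets an equal chance
        cut = rng.randrange(1, len(a))
        children.append(a[:cut] + b[cut:])                # one-point crossover
    return survivors + children

# Toy usage: bit-list chromosomes, fitness = number of zero bits (0 is perfect).
rng = random.Random(114)
pop = [[rng.randrange(2) for _ in range(8)] for _ in range(20)]
fit = lambda c: c.count(0)
for _ in range(40):
    pop = next_generation(pop, fit, rng)
print(min(fit(c) for c in pop))
```

Because the best survivor is always carried over, the best fitness in the population can never get worse from one generation to the next.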





I was asking about experience, not theory, from an experienced programmer.
It seems to me you do not know the difference between GA and evolutionary methods, just general optimization methods.
If I can have a 1000x speedup without losing accuracy, I will use Sobol without any question.
By the way, in case you don't know: in QuantLib (the financial industry's minimum-standard library, maybe together with Boost), Sobol is included by default for path modeling. There is no question of whether to use it or not.
The question was about experience of using it in general problems. Certainly we may just test it and that's all. But as I said, any piece of experience is always very interesting.
So please, no spherical horses in a vacuum.
By the way, I do not like Monte Carlo, because of its speed and its ability to miss extremums. There are many applied methods for getting rid of this theoretical trick. But the guys from maths are falling in love with CUDA and always show super unneeded computing fields.
In my opinion, genetic algorithms work much better, finding a convenient min/max with good probability.
As I said, I do not have any problem and am satisfied with the results I have. I was talking about gradient descent as a not-yet-realized step to converge faster.
I've chosen evolutionary programming; it is closer to combining with gradient descent to reduce the evolutionary steps.
modified 5Jan16 7:44am.





You were asking about the effect of the choice of random number generator on this GA, and I answered that, both from theoretical knowledge and from practical experience. It may very well be different for some other evolutionary algorithms, but for GAs it doesn't matter.
I only brought up Monte Carlo as a well-known example of a multidimensional probabilistic variable, not as a suggestion or an implication that it would be a good algorithm.
As for using gradient descent to help a GA-based algorithm converge faster: that is obviously only possible if you have a function you can determine a gradient for. In TSP, there is no gradient. That's all I'm saying.
Of course, you could use a GA to determine a good starting point for the search for an optimum on a continuous function that has many local minima, and then use gradient descent to find the actual local (and possibly global) minimum. It just doesn't apply to TSP.
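That hybrid can be sketched on a one-dimensional multimodal function (Python; the Rastrigin-style test function, the random seeds standing in for a GA population, and the descent parameters are all illustrative assumptions):

```python
import math
import random

def f(x):
    # Rastrigin-style 1-D test function: many local minima, global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def descend(x, steps=300, rate=0.002, h=1e-5):
    """Plain gradient descent from a given start, using a numerical derivative."""
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= rate * grad
    return x

rng = random.Random(57)
seeds = [rng.uniform(-5.0, 5.0) for _ in range(50)]   # stand-in for a GA's final population
refined = [descend(s) for s in seeds]                 # local refinement of each candidate
best = min(refined, key=f)
print(round(best, 3), round(f(best), 6))
```

With enough seeds, one of them usually lands in the basin of the global minimum, and the descent then pins it down precisely; on the pure TSP, as noted above, there is no gradient to descend.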





OK, happy new year!
But if we are working mostly with the error, we may see a relationship between the parameters and the error.
I agree it is not obvious for a better TSP decision: we can't have a guarantee that the function will be monotone (sorry, I may not be using the best translation), but as we said, we have to find a better cost, so why not try? At the same time, we have problems with random gene generation: genes may appear really randomly, far from or close to the extremums of plain functions. What this GD may give is just one additional path, descending or ascending to/from a known solution, trying to improve the result. We may take it as a promoted heuristic in some area.
Similar logic is used in particle swarm optimizations and the fashionable bat algorithm.
In evolutionary programming it is very close to the main algorithm idea.
And please do not forget: gradient descent, even without an analytic derivative, is super fast in a convex area.
Thank you very much for the answer, mate! Typing characters involved me in remembering some logic, and it seems I've got an idea of how to improve recursive least-squares fitting: hyper-fast kernel learning may be reduced by a few more ticks.
modified 5Jan16 9:58am.





Very nice article indeed. I am wondering if you would consider comparing with other algorithms, like Minimum Spanning Tree; comparison measures could be execution time and the optimal path length. Another question is about the size of the GA population to which you apply mutation and crossover: how would that affect the results? Would you consider scaling issues? What about the time effect of different settings, such as using crossover or not? The same goes for your two types of mutation.
Thanks a lot for this article. I found it interesting. I guess you could take this further, exploring these questions, and perhaps find a way into a real application.





I am not actually planning to compare this algorithm to any other algorithm. The idea was only to give the basics of how it works. Any variations (like a bigger or smaller population, using crossovers or not, etc.) are left as an exercise for the reader.
In fact, using crossover or not is something you can already play with, even if you don't change the code.
Part of this code is actually used, with a different purpose, in this project: https://github.com/msiot/plotmyface[^]





The (usual) point of GAs is to solve a type or size of problem that cannot be practically solved with deterministic algorithms. Minimum Spanning Trees can be found in near-linear time with deterministic algorithms, so there's no point comparing that to GA performance, although, of course, nobody stops you from implementing a GA solution as a proof of concept.





I've done my own research into genetic algorithms (http://fathernatures.blogspot.com/2013/04/implementinggeneticalgorithminbash.html). The problems I've found are tuning (which seems to be more of an art than an exact science), and that the problem set almost invariably would be better solved by brute force. The number of comparisons necessary to get to an adequate answer far exceeds the number of comparisons it would take to brute-force check every possible combination for this problem set, which would also give you the best answer.
Maybe I'm missing something, but I don't see the benefit (at least not in the travelling salesman problem).
I also noticed that your sample sizes (in your previous article) are very large. Is there a cost-benefit to using such a drastically larger sample size?
David
Update: for those interested, here are the results of my tuning experiments: http://fathernatures.blogspot.com/2013/04/optimizinggeneticalgorithm.html[^]
modified 3Nov14 13:47pm.





First, I don't consider genetic algorithms the best for this problem. But I wrote this article because many people said, on my other article, that they wanted to see it solving the TSP.
Second, I doubt that brute force alone will do a better job. You can test this sample with 57 destinations (+ the begin/end one). Brute force means all possible combinations... or Factorial(57) (that is, 40526919504877216755680601905432322134980384796226602145184481280000000000000 possibilities... if your computer is able to process 1 billion possibilities per second, it will take 1281588984545044549296720106805059772028068229996034524425232 years to finish the job with brute force [I calculated all years as having 366 days]).
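The arithmetic above is easy to reproduce (Python; the one-billion-candidates-per-second rate is the same assumption the reply makes):

```python
import math

ways = math.factorial(57)                        # orderings of the 57 destinations
per_second = 1_000_000_000                       # assumed processing rate
years = ways // per_second // (366 * 24 * 3600)  # using the reply's 366-day year
print(ways)
print(years)
```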
About the large sample sizes... those were the ones that worked best for me.
modified 3Nov14 17:26pm.





Can you give an example of a problem set that you think it would be good for? I'd be interested in finding a better test case for testing/tuning. It might even inspire me to dust off my old code and give GAs another shot.
David





If you read the other article you will see that, in my opinion, GAs (and evolutionary algorithms in general) are useful when we don't know a better algorithm to solve the problem.
So, we may come to the conclusion that, if we are smart enough, we can always find a better, specific algorithm.
Yet, if we already have the base GA set up, it may be faster to simply apply it to a new problem than to develop a specific algorithm for that new problem.
And in any case, the GA is doing a much better job than brute force. Maybe brute force with specific optimizations may be better, but brute force directly is surely not the answer. The GA might not be the best answer, but it is an answer.





I agree with Paulo: GAs can be used very effectively for problems that you know (or suspect) to be NP-complete. And that becomes more obvious when you look at larger problem sizes: I remember a report from a research group some 25 years ago(!) who found a probable best or near-best (you can never be sure) solution to a TSP instance with more than 1000 locations! Given the computing power back then, that was quite an achievement.
It's also important to understand that a GA is not the right solution if you absolutely must find the best possible solution: it cannot guarantee that, nor will you usually be able to verify whether the GA's solution is in fact the best, or just a solution that's near the optimum.





Great article, but...
It only really solves half of the TSP, as in the real world you can't travel in straight lines; you need to follow roads. Somehow this would need to be applied to roads to work out the route. Any suggestions on how to go about this?
Once again, many thanks for a great article and an interesting read.





The answer to this is pretty easy... the implementation is not.
It is simply a matter of replacing the GetDistance method with one capable of taking the real streets into account.
So, we can say that the genetic algorithm stays the same. It simply keeps generating "random" travel plans and calculating the distance between the locations.
But the code that calculates the distance must be much more complete, effectively having to find the best route from point A to point B before returning the distance. Such a search can itself be composed of another genetic algorithm... or a completely different algorithm. I am pretty sure that such an implementation alone is going to be bigger than the entire sample application I gave, and some kind of database will be needed to hold the street information.
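The separation can be sketched like this (Python rather than the article's C#; `tour_length` and the point format are illustrative, and a real road-based distance function would look up routing data that is not shown here):

```python
import math

def straight_line_distance(a, b):
    """The article's approach: Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, get_distance):
    """Total length of a closed tour; the distance function is pluggable."""
    return sum(get_distance(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

# The GA itself never changes: it only ever calls tour_length with whichever
# distance function it was given. A road-network version would search for the
# best route between the two points instead of drawing a straight line.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(tour_length(square, straight_line_distance))  # prints 4.0
```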





You're right, and getting the database of street information will probably be quite expensive.
Thanks again for the great article.





You can also measure all the streets yourself, for free








It's really cool. I think this algorithm can be improved somehow.








 Best "Everything Else" Article of July 2014 (Second Prize)
