Introduction
“Evolution continually innovates, but at each level it conserves the elements
that are recombined to yield the innovations.”―John H. Holland
In simulation modelling (imitating real-world processes such as process timelines and resource scheduling), as well as in search optimization and classification problems in statistical programming, filtering in digital signal encoding/decoding (telecom), graphics and image recognition applications, or even game development, we often require the paradigm of "flexible" computation provided by evolutionary computing strategies. These nonlinear heuristic algorithms find a variety of rival optimized solutions to a given problem, rather than the classical linear approach, which finds a single trivial solution that, in most cases, might not satisfy the optimization criteria. In this article we will discuss the most common scenario of implementing a metaheuristic nonlinear evolutionary search algorithm (EA) intended to solve the calculus topological approximation problem of finding the "nearest" local neighbor values for a given value of x. In calculus, this problem is typically solved using the local neighborhood parameter ε, which satisfies ε > 0 and represents the actual "distance" between the given value of x and each of the feasible nearest entourage values, which can obviously be calculated as x - ε and x + ε (1, 2), respectively. In certain variations of the problem, the local neighborhood value ε can also be treated as either the precision of real numbers or the observational error of a measurement process. Normally, both values obtained are the endpoints of a certain interval in which the given value of x resides. This interval is, in fact, a subset of the entire domain of integer values, which represents the topological space in which the calculus problem is defined.
The topology of this calculus problem is a one-dimensional space corresponding to the entire domain of integer values. In functional analysis, the interval endpoint values for the given value of x are typically the local asymptotic values of the trivial linear function defined on the interval. Fig. 1a illustrates the graphical representation of the calculus topological approximation problem of searching for the nearest neighbor values of the given value of x when the local neighborhood ε is specified:
In most cases the calculus topological approximation problem of finding the nearest neighbor values is solved simply by performing the appropriate calculations using formulas (1, 2). Unfortunately, there are several cases in which the local neighborhood value ε actually varies for each value of x, or is not the same for the lower and upper interval endpoints, which in turn leads to so-called topological asymmetry. In this case, the problem has a different formulation and turns into a computational problem of finding the nearest neighbor values of the given value of x among all of the possible candidate solutions, represented as an unordered vector. This vector contains values below and above x, which represent the feasible neighbor values for x. Each of these values could, for instance, be obtained by applying formulas (1, 2) for different values of ε, or can be specified manually as well. Fig. 2b shows the main differences between the classical and the computational varieties of the topological approximation search problem of finding the nearest neighbor values of the given value of x:
In programming, we normally represent this vector as an integer array of N elements and use a linear traversal algorithm to calculate the actual distance between the given value of x and the value of each element in the array. Then we perform a trivial linear search to find the minimal and maximal distance values, which match the appropriate lower and upper neighbor elements in the array of feasible candidate solutions. Unfortunately, using linear non-heuristic algorithms to solve this problem has a number of disadvantages:

Classical non-heuristic algorithms typically find only one trivial calculation-based solution of the approximation problem by performing a linear traversal of the entire array of possible solutions. Using classical linear algorithms, we normally consider the first solution found that meets the search criteria to be the most feasible solution of the optimization problem described above;

Classical non-heuristic traversal algorithms are, in most cases, "fixed". That means we cannot use them to find both the nearest minimum and maximum values simultaneously without modifying the algorithm. We typically need to iterate through the array of solutions several times to find both endpoints, which degrades the overall performance and cost-effectiveness of a linear algorithm, especially when the array of feasible solution candidates is excessively large, or when the array size grows during data processing. The overall cost of the classical linear algorithm is always proportional to the potential size of the array of feasible solution candidates, i.e. O(N).
As an alternative to the existing linear traversal algorithms, the implementation described in the next sections of this article is mainly based on evolutionary programming (EP) concepts and achieves better results by providing a variety of optimized problem solutions obtained as the result of flexible computations. Furthermore, it improves the general performance and productivity, but sometimes requires additional resources to perform the complex data processing.
Background
The main goal of the evolutionary algorithm is to find the optimized endpoint values (e.g. the nearest neighbor values) for the given value of x in the array of feasible solution candidates. Normally, this array consists of randomly selected integers, providing the feasible endpoint values of intervals in which the value of x may reside. Now, let's take a short glance at the basic representation of the main parameters of the algorithm being discussed. During the initialization phase, we typically represent the numerical array of feasible solution candidates as the initial population of chromosomes. Each individual chromosome in the population is held, in binary notation, as a sequence of bits. Each bit, set to either "0" or "1", maps to an atomic "gene" within a particular chromosome and is used to encode a certain condition of the approximation problem. The condition parameter encoded within each chromosome normally represents a feasible interval in which the given value of x may reside. When solving this approximation problem, our target is to find the interval with the minimal distance in which the value of x may reside. The varieties of the problem conditions mostly depend on the total quantity of genes at certain positions. The position of a particular gene within a chromosome is typically called a "locus". The locus represents the position of the gene that encodes a certain condition parameter of the approximation problem for a particular "epoch" of the mimicked evolutionary process. The locus is also the point at which we perform the binary crossover of the chromosomes within a single population belonging to a certain epoch. The value of the locus position is the same for each chromosome of each population within a single evolutionary epoch.
In evolutionary programming (EP), we use a unique set of genes encoded as either "0" or "1" at specific locus positions, typically called a "genotype", to provide the multiple criteria for the nonlinear metaheuristic search for optimized approximation problem solutions. Such a criterion is typically called a condition parameter of the optimization problem. In this particular case, solving the problem of finding the nearest neighbor values, we provide a single optimization parameter for each chromosome in a population. The gene that encodes this parameter is located at a position that differs for each particular epoch. Fig. 2b illustrates the binary representation of the "least" and "most" fit genes within a single chromosome. We use bits set to "1" to encode the condition parameter of the problem and, respectively, bits set to "0" to indicate that a chromosome transmits its fittest genes to the chromosomes of the population belonging to the next epoch of the evolutionary process. Each feasible solution candidate, represented as a chromosome, encodes the criteria by which we determine the relative fitness of a particular solution. In this case, when the nonlinear metaheuristic search is used to find the nearest neighbor values of the given value of x, the genes of each chromosome encode the fitness value of that chromosome within each "epoch" of the mimicked evolutionary process. Obviously, the "least" and "most" fit chromosomes in the same population have their corresponding bit set to "0" or "1", respectively, at the locus position of a particular gene. For example, suppose a population consists of two chromosomes that, in a certain epoch (epoch = 3), have the same locus position (locus = 5). As shown in Fig. 2c, the first chromosome is deemed the "least" fit because its bit is set to "0" at the locus position of the gene (locus = 5), whereas the second chromosome is the "most" fit because the appropriate bit is set to "1" at the same locus position. The locus position, in this case, is constant (locus = 5) for the entire chromosome population. This type of chromosome encoding is used during the binary distribution selection and the offspring population breeding, which is thoroughly discussed in the next section of this article. After we have represented the array of problem solution candidates as the initial population of chromosomes, we need to perform the similar task with the value of x, representing its integer value as a sequence of bits in binary notation, as shown in Fig. 2d. Similarly to the chromosomes of the initial population, the sequence of bits representing the value of x encodes the main parameters that impose the conditions on the process of finding the optimized solution candidates, represented as the feasible intervals in which the value of x resides. The main optimization parameter, representing the lower endpoint of the interval, is encoded as a "gene" (e.g. a bit set to "1") at the position of the most significant digit (MSD) in the sequence of bits representing the given value of x. This condition parameter is used to obtain the population of "elite" chromosomes at the next phase of the evolutionary algorithm. By using this parameter, we define the maximum allowable interval as shown in the example below:
x = 0000 1110 = 14, [0000 1000; 0000 1111] = [8; 15]
Normally, a chromosome in the population can be deemed a possible solution candidate if it represents an interval which is a subset of the interval encoded by the value of x shown above. The portion of bits between the LSD and MSD positions is used by the evolutionary algorithm as the pattern for the metaheuristic nonlinear search by partial match, to find the fittest interval whose endpoints are the nearest neighbor values for the given value of x. This pattern encodes values that belong to the interval from the example above. In genetic programming, this pattern is called a "schema" and is used as the relevance template for a certain proportion of chromosomes in the population. In this case, all chromosomes in a population that partially match the schema listed above are deemed the fittest problem solutions. Finally, at this point, we have to calculate the value of the locus position for the initial chromosome population. To calculate this value, we perform a linear search to find the single chromosome in the population that has the highest MSD position. The highest MSD position actually represents the initial locus position value for the entire chromosome population.
The most essential phase of the algorithm is the process of selecting the fittest chromosomes, capable of breeding new populations at each successive epoch of the imitated evolutionary process. Let's recall that classical genetic algorithms (GA), in order to select the fittest chromosomes, typically use a variety of methods such as stochastic tournament or fitness proportionate selection, in which the decision about which particular chromosomes are deemed the fittest is made based on the calculation of a fitness function for each chromosome in the entire population. For that purpose, either stochastic probability or scale factor fitness functions are normally used. The existing evolutionary algorithms also perform crossover and mutation with two or more of the selected fittest chromosomes to breed the new "offspring" population for the next epoch of the evolutionary process, providing a new diversity of solutions for the approximation problem. In genetic programming (GP), the implementation of stochastic probability methods of selection, along with crossover and mutation operators, is most efficient when we need to optimize a certain linear function that has a vector of arguments and multiple defined optimization criteria. Another instance when the classical genetic programming approach can be used is solving a certain system of linear equations under imposed optimization conditions. In this case, each chromosome representing a solution candidate encodes enough optimization parameters to allow efficient use of the crossover and mutation operators to obtain new candidate solutions during the mimicked evolutionary process.
However, for the approximation problem of finding the "nearest" neighbors of a given value of x described in this article, the basic genetic programming approach listed above is typically less efficient because, according to the problem statement, we represent each feasible solution candidate as a chromosome that encodes only one optimization parameter as the value of a particular "gene" (e.g. a bit set to "1") at a certain locus position. That is actually why using crossover and mutation on particular chromosomes in a population over a number of epochs will not yield the expected results. Besides, those classical methods typically search for the fittest solutions over a potentially unbounded number of epochs, which can produce unpredictable search results and significantly degrade the cost-effectiveness and productivity of the evolutionary selective process.
As an alternative to the existing genetic programming approaches, in this article we discuss an evolutionary binary distribution selection algorithm that provides a more effective solution to the problem of finding the nearest neighbor values for the given value of x. Instead of using randomized stochastic methods of selection and performing the classical crossover and mutation, the algorithm implements a binary distribution selection process and binary recombination (crossover) to breed the new populations of offspring solution candidates. Using the binary distribution selection algorithm ensures finding the nearest neighbor values of x in a finite number of epochs, which, in contrast to the existing selection and recombination methods, benefits the overall cost-effectiveness and performance of the imitated evolutionary process. Unlike classical genetic algorithms, in which the selection process is performed prior to crossing over and mutation, in this algorithm we combine these two processes into a single process of finding the optimized solutions of the approximation problem. According to the general idea the proposed algorithm is based on, the result of the binary selection process is the recombination of each "parent" chromosome population into two newly bred offspring populations, distributing the most and least fit chromosomes between these two populations. In contrast to the classical genetic methods, in which crossover and mutation are typically performed on a pair of two or more randomly chosen chromosomes, binary distribution selection is performed on the entire chromosome population belonging to a certain evolutionary epoch and is mainly based on the classification of particular chromosomes using the fitness criteria imposed on certain "genes" of each chromosome in the population.
The populations of the fittest chromosomes represent the multiple conditions of the optimization problem, whereas the populations of the least fit chromosomes transfer their genes to populations that may, in turn, be determined as the fittest during the next epochs of the mimicked evolutionary process.
Now, let's take a closer look at the binary selection process implemented by the algorithm. During the selection process, we iterate through the array of populations, retrieving each population. For each population in the array, we initially check whether the current population is a final breed offspring population, consisting of a single fittest "elite" chromosome whose condition-encoding gene bit is set to "1" at the position that exactly matches the position of the most significant digit (MSD) of the value of x. If so, we append this chromosome to the final breed target population, on which we will further perform the linear search for the nearest neighbor values of x. Otherwise, we proceed with the binary distribution selection performed for each chromosome population. During the binary distribution selection, we perform a linear traversal of the current population, and for each chromosome, represented as a sequence of bits, we verify whether its condition-encoding bit is set to "1" at the current locus position for the given epoch. This is typically done by using the following bitwise operation:
#define digit(r,n) (((r) & (1 << (n))) ? 1 : 0)
If it is, the chromosome is deemed the fittest, and we append it to the population of the fittest chromosomes. Otherwise, if the gene-encoding bit is set to "0", we append the chromosome to another population, of the least fit chromosomes. The selection process is, in fact, a process of binary distributive recombination of the particular chromosomes to breed the new offspring populations for the next epoch of the evolutionary process. Fig. 3a,b illustrate the selection process implemented by the algorithm:
During the binary recombination, each selected chromosome transfers its genes to the chromosomes of the two newly bred offspring populations belonging to the next evolutionary epoch. That is why the least fit chromosomes may become the fittest in the next epoch of the evolutionary process, and vice versa. The recombination process is actually a binary distribution between the most and least fit offspring populations, performed by determining which particular chromosomes encode the problem conditions imposed for the current evolutionary epoch and are thus the fittest. After we have generated the two new offspring populations of the most and least fit chromosomes as the result of the binary recombination process, we calculate the new locus position for the next epoch, associating it with the two newly bred offspring populations. Finally, we append these two new populations to the array of populations. After that, we fetch the next population from the array of populations and perform the same tasks listed above. We keep performing selection until we have reached the last population in the array. As the result of performing the binary distribution selection and recombination, we obtain a single population of "elite" chromosomes, which are the feasible problem solution candidates. The block diagram of the binary distribution selection process is shown in Fig. 3c.
The final phase of the evolutionary algorithm is the process of finding the nearest neighbor values of the given value of x. For that purpose, we perform the metaheuristic nonlinear search by partial match in the target arrays of the most and least fit chromosomes obtained during the previous phase. We also use the "schema" of the value of x, calculated during the initialization phase. This is typically done by performing a linear search for each successive condition-encoding gene locus position relative to a certain epoch of the imitated evolutionary process. To find the nearest minimum value for the given value of x, we iterate through the target array of chromosomes, starting at the first chromosome of the array, comparing the schema value of each chromosome with the schema calculated for the value of x. At this point, we are searching for the first occurrence of a chromosome matching the schema MSD position. For each chromosome, we test whether the value of its schema is less than the value of the schema for x in binary representation. A similar task is performed when we need to find the nearest maximum value for the given value of x: we iterate through the target array starting from the last chromosome in the population and, for each chromosome, check whether the value of its schema is greater than or equal to the value of the schema of x. If so, we consider this chromosome to be the nearest maximum value for the given value of x.
Using the code
The following portion of code performs the evolutionary binary distribution selection by generating the new "child" populations of chromosomes that belong to the next epochs and appending them to the array of populations. This operation is performed for each population until the final breed offspring population has been generated:
template <typename Ty = int>
void gene_selection(_GENE_POPULATION*& _population, Ty val)
{
    int _lsd = 0, _msd = _lsd;
    get_gene_locuses(val, _lsd, _msd);
    int _gene_phenotype =
        get_gene_phenotype(val, _lsd, _msd);
    unsigned long _pop_index = 0;
    unsigned long n_pop_index = _pop_index + 1;
    while (_population[_pop_index].m_pop_size > 0)
    {
        int _gene_index = 0;
        int _w_pop_index = 0;
        int _s_pop_index = 0;
        int _gene_pos = _population[_pop_index].m_gene_pos;
        int _gene_s[_GENE_POOL_SIZE] = { 0 },
            _gene_w[_GENE_POOL_SIZE] = { 0 };
        // Distribute each chromosome by the value of its bit at the
        // current locus position for this epoch.
        while (_gene_index < _population[_pop_index].m_pop_size && _gene_pos >= 0)
        {
            if (digit(_population[_pop_index].m_gene_ptr[_gene_index].m_val, _gene_pos))
                _gene_s[_s_pop_index++] =
                    _population[_pop_index].m_gene_ptr[_gene_index].m_val;
            else _gene_w[_w_pop_index++] =
                _population[_pop_index].m_gene_ptr[_gene_index].m_val;
            _gene_index++;
        }
        // Breed the "most fittest" offspring population for the next epoch.
        if (_s_pop_index > 0)
        {
            _population[n_pop_index].m_gene_fit = 1;
            _population[n_pop_index].m_pop_size = _s_pop_index;
            _population[n_pop_index].m_gene_pos = _population[_pop_index].m_gene_pos - 1;
            gene_copy(_population[n_pop_index].m_gene_ptr, _gene_s, _s_pop_index);
            n_pop_index++;
        }
        // Breed the "least fittest" offspring population for the next epoch.
        if (_w_pop_index > 0)
        {
            _population[n_pop_index].m_gene_fit = 0;
            _population[n_pop_index].m_pop_size = _w_pop_index;
            _population[n_pop_index].m_gene_pos = _population[_pop_index].m_gene_pos - 1;
            gene_copy(_population[n_pop_index].m_gene_ptr, _gene_w, _w_pop_index);
            n_pop_index++;
        }
        _pop_index++;
    }
}
This function performs a linear search to find the particular populations that consist of only one chromosome. It appends the chromosomes found to the target array of the final offspring breed of chromosomes:
int gene_quantum_population(const _GENE_POPULATION* _population,
    _GENE_POPULATION*& _population_target, int msd)
{
    if (_population_target == NULL)
    {
        _population_target =
            new _GENE_POPULATION[_GENE_POPULATION_SIZE];
        memset((void*)_population_target, 0x00,
            sizeof(_GENE_POPULATION) * _GENE_POPULATION_SIZE);
    }
    int nindex = 0;
    // Collect the final breed populations (locus position -1) whose single
    // chromosome has the requested MSD position.
    for (int index = 0; _population[index].m_pop_size > 0; index++)
        if (_population[index].m_gene_pos == -1 &&
            _population[index].m_gene_ptr[0].m_msd == msd)
        {
            _population_target[nindex].m_gene_fit = _population[index].m_gene_fit;
            _population_target[nindex].m_gene_pos = _population[index].m_gene_pos;
            _population_target[nindex].m_pop_size = _population[index].m_pop_size;
            _population_target[nindex++].m_gene_ptr = &_population[index].m_gene_ptr[0];
        }
    return nindex;
}
These two C++ routines perform the linear search on the resulting target array of the final breed chromosomes, using conditions based on the phenotype (schema) match criteria for each chromosome and the value of x:
template <typename Ty = int>
Ty gene_find_minimum(Ty val, const _GENE_POPULATION* _population)
{
    int _lsd = get_gene_lsd(val);
    int _msd = get_gene_msd(val);
    int _gene_phenotype =
        get_gene_phenotype(val, _lsd, _msd);
    // Scan the final breed populations downwards from the MSD position of val.
    for (int _gene_msd = _msd; _gene_msd >= 0; _gene_msd--)
    {
        _GENE_POPULATION* _population_target = NULL;
        int size = gene_quantum_population(_population, _population_target, _gene_msd);
        for (int index = 0; index < size; index++)
        {
            int _schema = get_gene_phenotype(_population_target[index].m_gene_ptr[0].m_val, _lsd, _gene_msd);
            if (_schema < _gene_phenotype && val != _population_target[index].m_gene_ptr[0].m_val)
                return _population_target[index].m_gene_ptr[0].m_val;
        }
    }
    return -1; // no nearest minimum found
}
template <typename Ty = int>
Ty gene_find_maximum(Ty val, const _GENE_POPULATION* _population)
{
    int _lsd = get_gene_lsd(val);
    int _msd = get_gene_msd(val);
    int _gene_phenotype =
        get_gene_phenotype(val, _lsd, _msd);
    int _gene_msd = _msd;
    // Scan the final breed populations upwards from the MSD position of val.
    while (_gene_msd <= _gene_population[0].m_gene_pos)
    {
        _GENE_POPULATION* _population_target = NULL;
        int size = gene_quantum_population(_population, _population_target, _gene_msd);
        for (int index = size - 1; index >= 0; index--)
        {
            int _schema = get_gene_phenotype(_population_target[index].m_gene_ptr[0].m_val, _lsd, _gene_msd);
            if (_schema >= _gene_phenotype && val != _population_target[index].m_gene_ptr[0].m_val)
                return _population_target[index].m_gene_ptr[0].m_val;
        }
        _gene_msd++;
    }
    return -1; // no nearest maximum found
}
Output
Here's the debugging output of the C++ code that accompanies this article.
{ 6 16 5 44 8 12 3 15 2 76 9 18 } val = 11
{ 6 16 5 44 8 12 3 15 2 76 9 18 } pop = 0 fit = 1 pos = 6 size = 12
6 = ( 0000 0110 ) lsd = 1 msd = 2
16 = ( 0001 0000 ) lsd = 4 msd = 4
5 = ( 0000 0101 ) lsd = 0 msd = 2
44 = ( 0010 1100 ) lsd = 2 msd = 5
8 = ( 0000 1000 ) lsd = 3 msd = 3
12 = ( 0000 1100 ) lsd = 2 msd = 3
3 = ( 0000 0011 ) lsd = 0 msd = 1
15 = ( 0000 1111 ) lsd = 0 msd = 3
2 = ( 0000 0010 ) lsd = 1 msd = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
9 = ( 0000 1001 ) lsd = 0 msd = 3
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 76 } pop = 1 fit = 1 pos = 5 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 6 16 5 44 8 12 3 15 2 9 18 } pop = 2 fit = 0 pos = 5 size = 11
6 = ( 0000 0110 ) lsd = 1 msd = 2
16 = ( 0001 0000 ) lsd = 4 msd = 4
5 = ( 0000 0101 ) lsd = 0 msd = 2
44 = ( 0010 1100 ) lsd = 2 msd = 5
8 = ( 0000 1000 ) lsd = 3 msd = 3
12 = ( 0000 1100 ) lsd = 2 msd = 3
3 = ( 0000 0011 ) lsd = 0 msd = 1
15 = ( 0000 1111 ) lsd = 0 msd = 3
2 = ( 0000 0010 ) lsd = 1 msd = 1
9 = ( 0000 1001 ) lsd = 0 msd = 3
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 76 } pop = 3 fit = 0 pos = 4 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 44 } pop = 4 fit = 1 pos = 4 size = 1
44 = ( 0010 1100 ) lsd = 2 msd = 5
{ 6 16 5 8 12 3 15 2 9 18 } pop = 5 fit = 0 pos = 4 size = 10
6 = ( 0000 0110 ) lsd = 1 msd = 2
16 = ( 0001 0000 ) lsd = 4 msd = 4
5 = ( 0000 0101 ) lsd = 0 msd = 2
8 = ( 0000 1000 ) lsd = 3 msd = 3
12 = ( 0000 1100 ) lsd = 2 msd = 3
3 = ( 0000 0011 ) lsd = 0 msd = 1
15 = ( 0000 1111 ) lsd = 0 msd = 3
2 = ( 0000 0010 ) lsd = 1 msd = 1
9 = ( 0000 1001 ) lsd = 0 msd = 3
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 76 } pop = 6 fit = 0 pos = 3 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 44 } pop = 7 fit = 0 pos = 3 size = 1
44 = ( 0010 1100 ) lsd = 2 msd = 5
{ 16 18 } pop = 8 fit = 1 pos = 3 size = 2
16 = ( 0001 0000 ) lsd = 4 msd = 4
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 6 5 8 12 3 15 2 9 } pop = 9 fit = 0 pos = 3 size = 8
6 = ( 0000 0110 ) lsd = 1 msd = 2
5 = ( 0000 0101 ) lsd = 0 msd = 2
8 = ( 0000 1000 ) lsd = 3 msd = 3
12 = ( 0000 1100 ) lsd = 2 msd = 3
3 = ( 0000 0011 ) lsd = 0 msd = 1
15 = ( 0000 1111 ) lsd = 0 msd = 3
2 = ( 0000 0010 ) lsd = 1 msd = 1
9 = ( 0000 1001 ) lsd = 0 msd = 3
{ 76 } pop = 10 fit = 1 pos = 2 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 44 } pop = 11 fit = 1 pos = 2 size = 1
44 = ( 0010 1100 ) lsd = 2 msd = 5
{ 16 18 } pop = 12 fit = 0 pos = 2 size = 2
16 = ( 0001 0000 ) lsd = 4 msd = 4
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 8 12 15 9 } pop = 13 fit = 1 pos = 2 size = 4
8 = ( 0000 1000 ) lsd = 3 msd = 3
12 = ( 0000 1100 ) lsd = 2 msd = 3
15 = ( 0000 1111 ) lsd = 0 msd = 3
9 = ( 0000 1001 ) lsd = 0 msd = 3
{ 6 5 3 2 } pop = 14 fit = 0 pos = 2 size = 4
6 = ( 0000 0110 ) lsd = 1 msd = 2
5 = ( 0000 0101 ) lsd = 0 msd = 2
3 = ( 0000 0011 ) lsd = 0 msd = 1
2 = ( 0000 0010 ) lsd = 1 msd = 1
{ 76 } pop = 15 fit = 1 pos = 1 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 44 } pop = 16 fit = 1 pos = 1 size = 1
44 = ( 0010 1100 ) lsd = 2 msd = 5
{ 16 18 } pop = 17 fit = 0 pos = 1 size = 2
16 = ( 0001 0000 ) lsd = 4 msd = 4
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 12 15 } pop = 18 fit = 1 pos = 1 size = 2
12 = ( 0000 1100 ) lsd = 2 msd = 3
15 = ( 0000 1111 ) lsd = 0 msd = 3
{ 8 9 } pop = 19 fit = 0 pos = 1 size = 2
8 = ( 0000 1000 ) lsd = 3 msd = 3
9 = ( 0000 1001 ) lsd = 0 msd = 3
{ 6 5 } pop = 20 fit = 1 pos = 1 size = 2
6 = ( 0000 0110 ) lsd = 1 msd = 2
5 = ( 0000 0101 ) lsd = 0 msd = 2
{ 3 2 } pop = 21 fit = 0 pos = 1 size = 2
3 = ( 0000 0011 ) lsd = 0 msd = 1
2 = ( 0000 0010 ) lsd = 1 msd = 1
{ 76 } pop = 22 fit = 0 pos = 0 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 44 } pop = 23 fit = 0 pos = 0 size = 1
44 = ( 0010 1100 ) lsd = 2 msd = 5
{ 18 } pop = 24 fit = 1 pos = 0 size = 1
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 16 } pop = 25 fit = 0 pos = 0 size = 1
16 = ( 0001 0000 ) lsd = 4 msd = 4
{ 15 } pop = 26 fit = 1 pos = 0 size = 1
15 = ( 0000 1111 ) lsd = 0 msd = 3
{ 12 } pop = 27 fit = 0 pos = 0 size = 1
12 = ( 0000 1100 ) lsd = 2 msd = 3
{ 8 9 } pop = 28 fit = 0 pos = 0 size = 2
8 = ( 0000 1000 ) lsd = 3 msd = 3
9 = ( 0000 1001 ) lsd = 0 msd = 3
{ 6 } pop = 29 fit = 1 pos = 0 size = 1
6 = ( 0000 0110 ) lsd = 1 msd = 2
{ 5 } pop = 30 fit = 0 pos = 0 size = 1
5 = ( 0000 0101 ) lsd = 0 msd = 2
{ 3 2 } pop = 31 fit = 1 pos = 0 size = 2
3 = ( 0000 0011 ) lsd = 0 msd = 1
2 = ( 0000 0010 ) lsd = 1 msd = 1
{ 76 } pop = 32 fit = 0 pos = 1 size = 1
76 = ( 0100 1100 ) lsd = 2 msd = 6
{ 44 } pop = 33 fit = 0 pos = 1 size = 1
44 = ( 0010 1100 ) lsd = 2 msd = 5
{ 18 } pop = 34 fit = 0 pos = 1 size = 1
18 = ( 0001 0010 ) lsd = 1 msd = 4
{ 16 } pop = 35 fit = 0 pos = 1 size = 1
16 = ( 0001 0000 ) lsd = 4 msd = 4
{ 15 } pop = 36 fit = 1 pos = 1 size = 1
15 = ( 0000 1111 ) lsd = 0 msd = 3
{ 12 } pop = 37 fit = 0 pos = 1 size = 1
12 = ( 0000 1100 ) lsd = 2 msd = 3
{ 9 } pop = 38 fit = 1 pos = 1 size = 1
9 = ( 0000 1001 ) lsd = 0 msd = 3
{ 8 } pop = 39 fit = 0 pos = 1 size = 1
8 = ( 0000 1000 ) lsd = 3 msd = 3
{ 6 } pop = 40 fit = 0 pos = 1 size = 1
6 = ( 0000 0110 ) lsd = 1 msd = 2
{ 5 } pop = 41 fit = 1 pos = 1 size = 1
5 = ( 0000 0101 ) lsd = 0 msd = 2
{ 3 } pop = 42 fit = 1 pos = 1 size = 1
3 = ( 0000 0011 ) lsd = 0 msd = 1
{ 2 } pop = 43 fit = 0 pos = 1 size = 1
2 = ( 0000 0010 ) lsd = 1 msd = 1
min val = 9
max val = 12
Points of Interest
The evolutionary algorithm described in this article can serve several purposes. It can act as a normalization function in back-propagation neural networks, helping to avoid network paralysis caused by premature saturation of the synaptic weights with excessively small or large values. It can also be used in calculus to find the most precise value of a certain function f(x), where 0 < x < t. Finally, it provides a cost-effective optimization for the linear search algorithm: it makes it possible to "narrow" the effective size of the array by selecting a subset of the values that best meet the search condition requirements.
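The "narrowing" idea mentioned above can be sketched as a pre-filtering pass before the linear search. The following is a minimal illustration, not the article's own code; the function name, the target value and the neighborhood radius eps are assumptions chosen for the example:

```c
/* Hypothetical sketch: "narrow" a linear search by first selecting the
   subset of values inside the local neighborhood [target - eps, target + eps],
   so that the subsequent scan touches far fewer elements. */
int narrow(const int *src, int n, int *dst, int target, int eps)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (src[i] >= target - eps && src[i] <= target + eps)
            dst[count++] = src[i];  /* keep only candidate values */
    return count;                   /* size of the narrowed subset */
}
```

For the sample pool from the trace above, narrowing around target = 10 with eps = 3 keeps only the values 8, 12 and 9, reducing a 12-element scan to a 3-element one.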
Appendix
Evolutionary algorithm – a search-based heuristic algorithm that finds a set of optimized solutions to a certain problem by imitating natural evolution processes such as natural selection, crossover and mutation. In programming, evolutionary algorithms are commonly used as an alternative to linear search algorithms.
Chromosome – an ordered sequence of "genes". In evolutionary programming, each "chromosome" is the bit string of an integer value, which normally represents the encoded optimization conditions for a certain problem.
Gene – an individual particle of a "chromosome", encoded as either a "0" or a "1" bit, representing the least and the most likely fit gene respectively. Each chromosome contains a fixed number of bits.
Population – a "pool" of "chromosomes" belonging to the same generation and capable of breeding a new child population. The "population" typically represents an array of possible integer solutions to the problem.
Pool – an array of integer values that represents the "chromosomes" of a particular "population".
Locus – the position of a certain "gene" within a "chromosome". The locus is the point at which the "crossover" and "mutation" breeding operations are performed. Each chromosome may contain one or more loci. A locus is typically represented as a bit position in an integer binary value.
Selection – the process of finding "chromosomes" with similar "genes" at each locus position. In evolutionary algorithms, selection is performed as a linear search over the specific array of integers.
Crossover and mutation – the basic evolutionary operations performed on two or more parent "chromosomes" to breed a new child population by exchanging or transforming genes in several portions of the chromosomes.
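On integer-encoded chromosomes, both operations reduce to bitwise manipulation. The following is a minimal sketch under the assumptions of 8-bit chromosomes and a single crossover point; it is not the article's exact implementation:

```c
/* One-point crossover: bits below the locus 'pos' come from parent p2,
   the remaining high bits come from parent p1. */
unsigned char crossover(unsigned char p1, unsigned char p2, int pos)
{
    unsigned char mask = (unsigned char)((1u << pos) - 1); /* low 'pos' bits set */
    return (unsigned char)((p1 & ~mask) | (p2 & mask));
}

/* Mutation: flip the single gene (bit) located at the locus 'pos'. */
unsigned char mutate(unsigned char c, int pos)
{
    return (unsigned char)(c ^ (1u << pos));
}
```

For example, crossing 76 (0100 1100) with 15 (0000 1111) at locus 2 yields 79 (0100 1111), and mutating 8 (0000 1000) at locus 0 yields 9 (0000 1001).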
Fitness function – the function that determines the fitness of particular "chromosomes" in the population, i.e. their suitability for the "crossover and mutation" operations.
Bitwise operations – operations performed on each bit of an integer value. Bitwise operations include the AND (&), OR (|), XOR (^) and INV (~) operations; NOT (!) is the related logical operation on the value as a whole.
Most significant digit (MSD) – the highest set bit (i.e. the last occurrence of a "1" bit) in a certain "chromosome" bit sequence, which normally represents the most likely fit solution.
Least significant digit (LSD) – the lowest set bit (i.e. the first occurrence of a "1" bit) in a certain "chromosome" bit sequence, which normally represents the least likely fit solution.
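The lsd and msd values printed in the trace above can be computed as sketched below; both positions are zero-based, and -1 is returned for a zero value (a convention assumed here for the sake of the example):

```c
/* Position of the lowest set bit: shift right until bit 0 is "1". */
int lsd(unsigned int v)
{
    if (v == 0) return -1;
    int pos = 0;
    while (!(v & 1u)) { v >>= 1; pos++; }
    return pos;
}

/* Position of the highest set bit: count the shifts needed to
   reduce the value to zero. */
int msd(unsigned int v)
{
    int pos = -1;
    while (v) { v >>= 1; pos++; }
    return pos;
}
```

For instance, 76 = 0100 1100 gives lsd = 2 and msd = 6, while 16 = 0001 0000 gives lsd = msd = 4, matching the trace.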
History
June 14, 2015 – First version of the article published
June 24, 2015 – The 1st revised version of the article published
July 08, 2015 – The 2nd revised version of the article published
July 12, 2015 – The 3rd revised version of the article published
July 14, 2015 – The final revision of this article published