Summary
The Aristotle Problem Solver is an application that solves linear programming problems. An example of a linear programming problem is optimizing a factory's production under limited raw-material supplies so as to maximize the profit from selling the produced goods at given prices.
1. Program Description
The program uses the Simplex Method to solve linear programming problems (the method is described in the next section).
The main loop that actually solves the problems is shown below:
this->Algorithm_pass = 1;
this->Algorithm_subpass = 1;
do
{
    // Optimality condition: find the most negative coefficient in the
    // objective (last) row; its column becomes the pivot column "os".
    this->Solution_found = true;
    this->kost = 0;
    for(j = 0; j < columns; j++)
    {
        this->number_readed = dedomena[(lines - 1), j];
        if( this->number_readed < this->kost )
        {
            this->Solution_found = false;
            this->os = j;
            this->kost = this->number_readed;
        }
    }
    if( this->Solution_found == true )
    {
        this->richTextBox_Solution->Text = "SOLUTION\n";
        MessageBox::Show( "Solution found!", "Simplex algorithm finished",
            MessageBoxButtons::OK, MessageBoxIcon::Information );
        // Dump the final tableau to a text file.
        StreamWriter^ sw5 = gcnew StreamWriter(
            "c:\\Huo_Problem_Solver_Solution_array.txt");
        for(i = 0; i < lines; i++)
        {
            for(j = 0; j < columns; j++)
            {
                sw5->WriteLine(String::Concat("dedomena[", i.ToString(), ",",
                    j.ToString(), "] = ", dedomena[i,j].ToString()));
            }
        }
        sw5->Close();
        // Read the solution off the tableau: a column with exactly one
        // non-zero element belongs to a basic variable.
        int Non_zero_elements;
        array<double>^ solution_array = gcnew array<double>(vn);
        for(j = 0; j < vn; j++)
        {
            Non_zero_elements = 0;
            for(i = 0; i < lines; i++)
            {
                if(dedomena[i,j] != 0)
                    Non_zero_elements++;
            }
            if(Non_zero_elements == 1)
            {
                for(i = 0; i < lines; i++)
                {
                    if(dedomena[i,j] != 0)
                        solution_array[j] = (dedomena[i, (columns - 1)] / dedomena[i,j]);
                }
            }
            else
                solution_array[j] = 0;
        }
        for(j = 0; j < vn; j++)
        {
            switch(this->Problem_type)
            {
                case 0:
                    this->richTextBox_Solution->Text = String::Concat(
                        this->richTextBox_Solution->Text, "\n", "x",
                        (j+1).ToString(), " = ", solution_array[j].ToString() );
                    break;
                case 1:
                    this->richTextBox_Solution->Text = String::Concat(
                        this->richTextBox_Solution->Text, "\nProduce ",
                        solution_array[j].ToString(), " units of product ",
                        (j+1).ToString() );
                    break;
            };
        }
        switch(this->Problem_type)
        {
            case 0:
                // Nothing more to append for a plain LP problem.
                break;
            case 1:
                this->richTextBox_Solution->Text = String::Concat(
                    this->richTextBox_Solution->Text, "\n\nto maximize your profits." );
                break;
        };
    }
    if( this->Solution_found == false )
    {
        // Feasibility condition: pick the pivot row "og" with the smallest
        // non-negative ratio of right-hand side to pivot-column entry
        // (rows with a non-positive pivot-column entry are skipped, since
        // their ratios are negative or undefined).
        this->og = -1;
        for(i = 0; i < (lines - 1); i++)
        {
            if( dedomena[i, os] <= 0 )
                continue;
            this->keks = dedomena[i, (columns - 1)] / dedomena[i, os];
            if( (this->og == -1) || (this->keks < this->keks_min) )
            {
                this->og = i;
                this->keks_min = this->keks;
            }
        }
        this->kentro = dedomena[og, os];   // the pivot element
        // Log the tableau before pivoting.
        StreamWriter^ sw1 = gcnew StreamWriter(String::Concat(
            "c:\\Huo_Problem_Solver_array_Pass_", this->Algorithm_pass.ToString(),
            "_Subpass_", this->Algorithm_subpass.ToString(), ".txt"));
        sw1->WriteLine( String::Concat("Columnguide:", this->os.ToString()) );
        sw1->WriteLine( String::Concat("Rowguide:", this->og.ToString()) );
        sw1->WriteLine( String::Concat("Center value:", this->kentro.ToString()) );
        sw1->WriteLine( "" );
        for(i = 0; i < lines; i++)
        {
            for(j = 0; j < columns; j++)
            {
                sw1->WriteLine(String::Concat("dedomena[", i.ToString(), ",",
                    j.ToString(), "] = ", dedomena[i,j].ToString()));
            }
        }
        sw1->Close();
        this->Algorithm_pass++;
        this->Algorithm_subpass++;
        // Gauss-Jordan step 1: update every cell outside the pivot row
        // and the pivot column.
        for(i = 0; i < lines; i++)
        {
            for(j = 0; j < columns; j++)
            {
                if( (i != og) && (j != os) )
                    dedomena[i,j] = dedomena[i,j] -
                        ( (dedomena[og,j] * dedomena[i,os]) / kentro );
            }
        }
        StreamWriter^ sw2 = gcnew StreamWriter(String::Concat(
            "c:\\Huo_Problem_Solver_array_Pass_", this->Algorithm_pass.ToString(),
            "_Subpass_", this->Algorithm_subpass.ToString(), ".txt"));
        for(i = 0; i < lines; i++)
        {
            for(j = 0; j < columns; j++)
            {
                sw2->WriteLine(String::Concat("dedomena[", i.ToString(), ",",
                    j.ToString(), "] = ", dedomena[i,j].ToString()));
            }
        }
        sw2->Close();
        this->Algorithm_pass++;
        this->Algorithm_subpass++;
        // Gauss-Jordan step 2: divide the pivot row by the pivot element.
        for(j = 0; j < columns; j++)
        {
            dedomena[og,j] = dedomena[og,j] / kentro;
        }
        StreamWriter^ sw3 = gcnew StreamWriter(String::Concat(
            "c:\\Huo_Problem_Solver_array_Pass_", this->Algorithm_pass.ToString(),
            "_Subpass_", this->Algorithm_subpass.ToString(), ".txt"));
        for(i = 0; i < lines; i++)
        {
            for(j = 0; j < columns; j++)
            {
                sw3->WriteLine(String::Concat("dedomena[", i.ToString(),
                    ",", j.ToString(), "] = ", dedomena[i,j].ToString()));
            }
        }
        sw3->Close();
        this->Algorithm_pass++;
        this->Algorithm_subpass++;
        // Gauss-Jordan step 3: zero the rest of the pivot column.
        for(i = 0; i < lines; i++)
        {
            if( i != og )
                dedomena[i, os] = 0;
        }
        StreamWriter^ sw4 = gcnew StreamWriter(String::Concat(
            "c:\\Huo_Problem_Solver_array_Pass_", this->Algorithm_pass.ToString(),
            "_Subpass_", this->Algorithm_subpass.ToString(), ".txt"));
        for(i = 0; i < lines; i++)
        {
            for(j = 0; j < columns; j++)
            {
                sw4->WriteLine(String::Concat("dedomena[", i.ToString(), ",",
                    j.ToString(), "] = ", dedomena[i,j].ToString()));
            }
        }
        sw4->Close();
        this->Algorithm_pass++;
        this->Algorithm_subpass++;
    }
} while( this->Solution_found == false );
The full source code is heavily commented, both in English and in Greek. It implements all the steps of the Simplex Method as analyzed in the next section.
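For readers who want to study the loop separately from the GUI and file-logging code, the same tableau operations can be sketched in Python. This is a hedged re-expression of the algorithm above, not the program's actual code; the names `os_`, `og` and `kentro` merely mirror the program's fields, exact rationals replace `double`, and the sketch assumes the problem is bounded:

```python
from fractions import Fraction  # exact arithmetic for the example below

def simplex_pass(tableau):
    """One pass of the loop above: find the pivot column (most negative
    entry in the objective row, stored last), find the pivot row by the
    minimum-ratio test, and apply the Gauss-Jordan row operations.
    Returns False when no negative entry remains, i.e. the current
    solution is optimal."""
    last = tableau[-1]                   # objective row
    cost = min(last[:-1])
    if cost >= 0:
        return False                     # Solution_found
    os_ = last[:-1].index(cost)          # pivot column ("os")
    # Feasibility condition: smallest ratio over rows with positive entry.
    ratios = [(row[-1] / row[os_], i)
              for i, row in enumerate(tableau[:-1]) if row[os_] > 0]
    _, og = min(ratios)                  # pivot row ("og")
    kentro = tableau[og][os_]            # pivot element ("kentro")
    tableau[og] = [v / kentro for v in tableau[og]]
    for i, row in enumerate(tableau):
        if i != og and row[os_] != 0:
            f = row[os_]
            tableau[i] = [v - f * p for v, p in zip(row, tableau[og])]
    return True
```

Calling `simplex_pass` repeatedly until it returns False leaves the optimal objective value in the bottom-right cell of the tableau.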
2. Algorithm Analysis
Linear programming problems and the Simplex Method algorithm are presented in this section.
The text below is drawn from the following Wikipedia pages:
- Linear programming
- Operations Research/The Simplex Method
That text is distributed under the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
2.1 Linear Programming Problems
In mathematics, linear programming (LP) is a technique for optimization of a linear objective function, subject to linear equality and linear inequality constraints. Informally, linear programming determines the way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model and given a list of requirements represented as linear equations.
More formally, given a polytope (for example, a polygon or a polyhedron) and a real-valued affine function

f(x_{1}, x_{2}, ..., x_{n}) = c_{1}x_{1} + c_{2}x_{2} + ... + c_{n}x_{n} + d

defined on this polytope, a linear programming method will find a point in the polytope where this function has the smallest (or largest) value. Such points may not exist, but if they do, searching through the polytope vertices is guaranteed to find at least one of them.
Linear programs are problems that can be expressed in canonical form:

Maximize c^{T}x
Subject to Ax ≤ b, x ≥ 0

Here x represents the vector of variables (to be determined), while c and b are vectors of (known) coefficients and A is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function (c^{T}x in this case). The inequalities Ax ≤ b are the constraints which specify a convex polyhedron over which the objective function is to be optimized.
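A quick way to make the canonical form concrete is to check a candidate point against A, b and c directly. The sketch below uses the data of the example from section 2.2; the helper names `objective` and `feasible` are illustrative, not part of the program:

```python
# Canonical-form data for "maximize 5x1 + 4x2" under the four constraints
# of the section 2.2 example (rows of A are constraint left-hand sides).
c = [5, 4]
A = [[6, 4], [1, 2], [-1, 1], [0, 1]]
b = [24, 6, 1, 2]

def objective(x):
    """c^T x: the value of the objective function at point x."""
    return sum(ci * xi for ci, xi in zip(c, x))

def feasible(x):
    """Check A x <= b and x >= 0 componentwise."""
    return all(xi >= 0 for xi in x) and all(
        sum(aij * xj for aij, xj in zip(row, x)) <= bi
        for row, bi in zip(A, b))
```

For instance, the point (3, 1.5) satisfies every constraint, while (5, 0) violates the first one.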
Linear programming can be applied to various fields of study. Most extensively it is used in business and economic situations, but can also be utilized for some engineering problems. Some industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.
2.2 Standard form LP models
If an LP model satisfies the following two conditions, it is said to be in standard form:
- All the constraints (with the exception of the non-negativity restrictions) are equations with a non-negative right-hand side.
- All the variables are non-negative.
For example,
Maximize z = 5x_{1} + 4x_{2}
subject to,
6x_{1} + 4x_{2} + s_{1} = 24,
x_{1} + 2x_{2} + s_{2} = 6,
x_{2} − x_{1} + s_{3} = 1,
x_{2} + s_{4} = 2,
x_{1}, x_{2}, s_{1}, s_{2}, s_{3}, s_{4} ≥ 0,
is an LP model in standard form.
How do we convert a given model into standard form while retaining its sense? This is done as follows:
- If there is a constraint of the type ≤, then the right-hand side usually represents some limit on a resource (for example, the amount of raw material available) and the left-hand side the usage of that resource (i.e. how much raw material is actually used), so their difference represents the 'slack' or unused amount of the resource. To convert the inequality into an equality we add a variable representing the slack amount to the left-hand side. This variable is known as a 'slack variable' and is obviously also non-negative. If we represent it by s_{1}, the constraint 6x_{1} + 4x_{2} ≤ 24 is converted into the equation 6x_{1} + 4x_{2} + s_{1} = 24. A similar thing is done for a constraint of the type ≥. Here the left-hand side has a 'surplus' or extra amount over the right-hand side, so a non-negative 'surplus variable', say s_{2}, must be subtracted to get, for example, the equation 6x_{1} + 4x_{2} − s_{2} = 24.
- If a given variable x_{i} is restricted to be non-positive, then its negative y_{i} = −x_{i} is obviously non-negative and can be substituted into the problem. The real problem comes when the variable is allowed to take any sign; it is then called an 'unrestricted variable.' This is overcome by using the substitution x_{i} = x_{i}^{+} − x_{i}^{−}, where x_{i}^{+} and x_{i}^{−} are both non-negative. Intuitively, if the variable x_{i} is positive then x_{i}^{+} is positive and x_{i}^{−} is zero, while if x_{i} is negative then x_{i}^{+} is zero and x_{i}^{−} is positive. If x_{i} is zero then obviously x_{i}^{+} and x_{i}^{−} are both zero.
A point to note is that the objective function in the original LP model and the standard model is the same.
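The two conversion rules above can be applied mechanically. The snippet below is a minimal sketch; the dictionary encoding of a constraint and the helper name `to_standard` are assumptions made for illustration:

```python
def to_standard(sense, lhs, rhs, idx):
    """Convert one inequality constraint into an equation by adding a
    slack variable (for <=) or subtracting a surplus variable (for >=),
    as described above.  `lhs` maps variable names to coefficients and
    `idx` numbers the new variable."""
    eq = dict(lhs)
    if sense == "<=":
        eq[f"s{idx}"] = 1       # slack:   lhs + s = rhs
    elif sense == ">=":
        eq[f"s{idx}"] = -1      # surplus: lhs - s = rhs
    return eq, rhs
```

Applied to 6x1 + 4x2 ≤ 24 it yields the equation 6x1 + 4x2 + s1 = 24 from the example above.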
2.3 Algebraic Solution of a LP
Let us now try to analyze an LP model algebraically. A solution of the standard LP model yields a solution of the original model (removing the slack and surplus variables restores the original inequalities), and by similar reasoning a solution of the original model corresponds to a solution of the standard model. Since the objective function is the same for both models, an optimal solution for the standard model is optimal for the original model as well. Therefore we need only work with the standard model.
Now what are the candidates for the optimal solution? They are the solutions of the equality constraints. All we need to do is find those solutions and check which of them gives the optimal value when substituted into the objective function. Usually in an LP model the number of constraints, say m, is outnumbered by the number of variables, say n, so there are infinitely many solutions to the system of constraints. We cannot possibly examine an infinite set of solutions. However, a mathematical result shortens our work: if out of the n variables, n − m variables are set to zero, and the resulting constraint system can be solved, then the solution corresponds to a corner point in n-space. Such a solution is called a 'basic solution.' If in addition to being basic it happens to be feasible for the original problem, it is called a 'basic feasible solution' (often abbreviated as BFS). Since the optimal solution is attained at a corner point (as we observed graphically), all we need to do is examine all the basic feasible solutions (at most C(n, n − m) = n!/(m!(n − m)!) in number, reflecting the number of ways n − m variables can be chosen among the total n to be set to zero) and then decide which one gives the maximum value of the objective function.
Let us consider an example:
Consider the following LP model:
Maximize z = 2x_{1} + 3x_{2}
subject to,
2x_{1} + x_{2} ≤ 4,
x_{1} + 2x_{2} ≤ 5,
x_{1}, x_{2} ≥ 0.
We first convert it into standard form:
Maximize z = 2x_{1} + 3x_{2}
subject to,
2x_{1} + x_{2} + s_{1} = 4,
x_{1} + 2x_{2} + s_{2} = 5,
x_{1}, x_{2}, s_{1}, s_{2} ≥ 0.
The constraint system has m = 2 constraints and n = 4 variables. Thus we need to set n − m = 4 − 2 = 2 variables equal to zero to get a basic solution. For example, putting x_{1} = 0 and x_{2} = 0 we get the solution s_{1} = 4 and s_{2} = 5. This is clearly feasible, and so it is a basic feasible solution. The variables put to zero, here x_{1} and x_{2}, are called 'nonbasic variables', and s_{1} and s_{2} are called 'basic variables.' The 'objective value' (the value obtained by substituting the solution into the objective function) of this solution is zero.
The following table summarizes all the basic solutions to the problem:

Nonbasic variables | Basic variables | Basic solution | Feasible | Objective value
(x_{1},x_{2})      | (s_{1},s_{2})   | (4,5)          | Yes      | 0
(x_{1},s_{1})      | (x_{2},s_{2})   | (4,−3)         | No       | −
(x_{1},s_{2})      | (x_{2},s_{1})   | (2.5,1.5)      | Yes      | 7.5
(x_{2},s_{1})      | (x_{1},s_{2})   | (2,3)          | Yes      | 4
(x_{2},s_{2})      | (x_{1},s_{1})   | (5,−6)         | No       | −
(s_{1},s_{2})      | (x_{1},x_{2})   | (1,2)          | Yes      | 8
Note that we have not bothered to calculate the objective value for infeasible solutions. The optimal solution is the one which yields the highest objective value, i.e. x_{1} = 1, x_{2} = 2 with z = 8. Hence we have solved the LP model algebraically.
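The enumeration carried out in the table can be reproduced mechanically: choose which two variables remain basic (equivalently, which n − m = 2 are set to zero), solve the remaining 2×2 system, and record the result. A sketch in Python using Cramer's rule and exact rationals; the helper name `basic_solutions` is illustrative:

```python
from fractions import Fraction
from itertools import combinations

# Standard-form system of section 2.3, variables ordered x1, x2, s1, s2:
#   2x1 +  x2 + s1      = 4
#    x1 + 2x2      + s2 = 5
names = ["x1", "x2", "s1", "s2"]
rows = [[2, 1, 1, 0, 4], [1, 2, 0, 1, 5]]

def basic_solutions():
    """For every choice of 2 basic variables (the other 2 set to zero),
    solve the resulting 2x2 system by Cramer's rule and yield the pair
    (basic variable names, their values)."""
    out = []
    for basic in combinations(range(4), 2):
        a = [[Fraction(rows[i][j]) for j in basic] for i in range(2)]
        b = [Fraction(rows[i][4]) for i in range(2)]
        det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
        if det == 0:
            continue                 # no basic solution for this choice
        v0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
        v1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
        out.append(([names[j] for j in basic], [v0, v1]))
    return out
```

Running it recovers all six rows of the table, including the infeasible solutions (4,−3) and (5,−6).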
This procedure of solving LP models works for any number of variables but is very difficult to employ when there are a large number of constraints and variables. For example, for m = 10 and n = 20 it is necessary to solve C(20, 10) = 184,756 sets of equations, which is clearly a staggering task. With the simplex method you need only solve a few of these sets, concentrating on those which give improving objective values.
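The count of candidate systems is just a binomial coefficient, which is easy to check:

```python
from math import comb

# Number of ways to choose the n - m variables to set to zero, i.e. the
# number of candidate basic solutions, for m = 10 constraints and
# n = 20 variables.
m, n = 10, 20
candidates = comb(n, n - m)
print(candidates)  # 184756
```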
2.4 The Simplex Method
In a nutshell, the method is this. You start with a basic feasible solution of an LP in standard form (usually the one where all the slack variables are equal to the corresponding right-hand sides and all other variables are zero) and replace one basic variable with one which is currently nonbasic to get a new basic solution (so that n − m variables remain zero). This is done in a way which ensures that the new basic solution is feasible and its objective value is at least as large as that of the previous BFS. The step is repeated until the current BFS can no longer be improved, at which point the optimal solution has been reached.
It is clear that one decision is crucial to the method: which variable should replace which. The variable which is replaced is called the 'leaving variable' and the variable which replaces it is known as the 'entering variable.' The simplex method is designed so that the choice of these two variables achieves two things: first, the new objective value is an improvement on (or at least equal to) the current one, and second, the new solution is feasible.
Let us now explain the method through an example. Consider our old chemical company problem in standard form:
Maximize z = 5x_{1} + 4x_{2}
subject to,
6x_{1} + 4x_{2} + s_{1} = 24,
x_{1} + 2x_{2} + s_{2} = 6,
x_{2} − x_{1} + s_{3} = 1,
x_{2} + s_{4} = 2,
x_{1}, x_{2}, s_{1}, s_{2}, s_{3}, s_{4} ≥ 0.
Now an immediate BFS is obtained by putting all the x_{i} equal to zero, giving s_{1} = 24, s_{2} = 6, s_{3} = 1 and s_{4} = 2. (Clearly this solution is feasible for the original problem, as the right-hand sides are all non-negative.) If we consider our objective function z = 5x_{1} + 4x_{2}, it is evident that an increase in x_{1} or x_{2} will increase the objective value. (Note that currently both are zero, being nonbasic.) A unit increase in x_{1} gives a 5-unit increase in the objective value, while a unit increase in x_{2} gives only a 4-unit increase, so it is logical to make x_{1} the entering variable in the next iteration.
In the tabular form of the simplex method the objective function is usually represented as z − 5x_{1} − 4x_{2} = 0. Also the table contains the system of constraints along with the BFS that is obtained. Only the coefficients are written as is usual when handling linear systems.
Basic | z | x_{1} | x_{2} | s_{1} | s_{2} | s_{3} | s_{4} | BFS
z     | 1 | −5    | −4    | 0     | 0     | 0     | 0     | 0
s_{1} | 0 | 6     | 4     | 1     | 0     | 0     | 0     | 24
s_{2} | 0 | 1     | 2     | 0     | 1     | 0     | 0     | 6
s_{3} | 0 | −1    | 1     | 0     | 0     | 1     | 0     | 1
s_{4} | 0 | 0     | 1     | 0     | 0     | 0     | 1     | 2
Now for the next iteration we have to decide the entering and the leaving variables. The entering variable is x_{1} as we discussed. In fact, due to our realignment of the objective function, the most negative value in the zrow of the simplex table will always be the entering variable for the next iteration. This is known as the 'optimality condition.' What about the leaving variable? We have to account for the fact that our next basic solution is feasible. So our leaving variable must be chosen with this thought in mind.
To decide the leaving variable we apply what is sometimes called the 'feasibility condition.' It works as follows: we compute the quotients of the solution coordinates (24, 6, 1 and 2) by the constraint coefficients of the entering variable x_{1} (6, 1, −1 and 0). The ratios obtained are 24/6 = 4, 6/1 = 6, 1/(−1) = −1 and 2/0 = undefined. Ignoring the negative and the undefined ratio, we select the minimum of the other two, which is 4, obtained by dividing 24 (the value of s_{1}) by 6. Since the minimum comes from s_{1}'s row, we take s_{1} as the leaving variable.
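The feasibility condition is straightforward to code. A minimal sketch in Python; the helper name `leaving_row` is an assumption for illustration:

```python
from fractions import Fraction

def leaving_row(col, rhs):
    """Feasibility condition: form rhs[i]/col[i] for each constraint
    row, skip rows whose pivot-column entry is not positive (negative
    or undefined ratios), and return the index of the smallest ratio."""
    best, best_i = None, None
    for i, (a, b) in enumerate(zip(col, rhs)):
        if a <= 0:
            continue                    # ratio negative or undefined
        r = Fraction(b, a)
        if best is None or r < best:
            best, best_i = r, i
    return best_i
```

With the data above, `leaving_row([6, 1, -1, 0], [24, 6, 1, 2])` returns 0, the row of s_{1}.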
What is the justification behind this procedure? It is this. The minimum ratios actually represent the intercepts made by the constraints on the x_{1} axis. To see this look at the following graph:
Since currently all the x_{i} are 0, we are considering the BFS corresponding to the origin. In the next iteration the simplex method gives a new BFS, i.e. we move to a new corner point on the graph. We can increase the value of only one variable at a time by making it the entering variable, and since x_{1} is our entering variable, our plan is to increase its value. From the graph we can see that x_{1} can be increased to 4, at the point (4,0), which is the smallest non-negative intercept with the x_{1} axis. An increase beyond that point is infeasible. Also, at (4,0) the slack variable s_{1} assumes the value zero, as the first constraint is satisfied as an equality there, and so s_{1} becomes the leaving variable.
Now the problem is to determine the new solution. Although any procedure for solving a linear system can be applied at this stage, usually Gauss-Jordan elimination is used. It is based on a result in linear algebra that elementary row transformations taking a system [A|b] to [H|c] do not alter the solutions of the system. The columns corresponding to the basic variables in the table are given the shape of an identity matrix. (Readers familiar with linear algebra will recognize that this means the matrix formed from the basic-variable columns is transformed into reduced row echelon form.) The solution can then simply be read off from the rightmost (solution) column, as n − m of the variables are set to zero and the rest, including z, have coefficient 1 in one constraint each. Since z is also a variable, its row is treated as one of the constraints comprising the linear system.
The entering variable's column is called the 'pivot column' and the leaving variable's row is called the 'pivot row.' The intersection of the pivot column and the pivot row is called the 'pivot element.' In our example the second row (that of s_{1}) is the pivot row and the second column (that of x_{1}) is the pivot column.
The computations needed to produce the new basic solution are:
- Replace the leaving variable in the 'Basic' column with the entering variable.
- New pivot row = Current pivot row ÷ Pivot element
For the other rows, including the z row:
- New row = Current row − (its pivot column coefficient)×(New pivot row)
In our case the computations proceed as follows (rows are written in the column order z, x_{1}, x_{2}, s_{1}, s_{2}, s_{3}, s_{4} | BFS):
- Replace s_{1} in the 'Basic' column with x_{1}.
- New pivot x_{1} row = Current s_{1} row ÷ 6 = (0, 1, 2/3, 1/6, 0, 0, 0 | 4)
- New z row = Current z row − (−5)×New x_{1} row = (1, 0, −2/3, 5/6, 0, 0, 0 | 20)
- New s_{2} row = Current s_{2} row − (1)×New x_{1} row = (0, 0, 4/3, −1/6, 1, 0, 0 | 2)
- New s_{3} row = Current s_{3} row − (−1)×New x_{1} row = (0, 0, 5/3, 1/6, 0, 1, 0 | 5)
- New s_{4} row = Current s_{4} row − (0)×New x_{1} row = (0, 0, 1, 0, 0, 0, 1 | 2)
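The row operations listed above amount to one Gauss-Jordan pivot, which can be checked numerically. A sketch in Python using exact rationals; the function name `pivot` is illustrative:

```python
from fractions import Fraction

def pivot(rows, og, os):
    """One Gauss-Jordan step: divide the pivot row (index og) by the
    pivot element, then subtract each other row's pivot-column
    coefficient times the new pivot row from that row (z row included)."""
    kentro = rows[og][os]
    rows[og] = [v / kentro for v in rows[og]]
    for i in range(len(rows)):
        if i != og:
            f = rows[i][os]
            rows[i] = [v - f * p for v, p in zip(rows[i], rows[og])]
    return rows
```

Applying it to the first table (rows ordered z, s1, s2, s3, s4; pivot row 1, pivot column 1) reproduces the row vectors listed above.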
Thus our new table is:
Basic | z | x_{1} | x_{2} | s_{1} | s_{2} | s_{3} | s_{4} | BFS
z     | 1 | 0     | −2/3  | 5/6   | 0     | 0     | 0     | 20
x_{1} | 0 | 1     | 2/3   | 1/6   | 0     | 0     | 0     | 4
s_{2} | 0 | 0     | 4/3   | −1/6  | 1     | 0     | 0     | 2
s_{3} | 0 | 0     | 5/3   | 1/6   | 0     | 1     | 0     | 5
s_{4} | 0 | 0     | 1     | 0     | 0     | 0     | 1     | 2
The optimality condition now shows that x_{2} is the entering variable. The feasibility condition produces the ratios 4/(2/3) = 6, 2/(4/3) = 1.5, 5/(5/3) = 3 and 2/1 = 2, of which the minimum is 1.5, produced by dividing the BFS value 2 in the s_{2} row by 4/3. So s_{2} becomes the leaving variable. Gauss-Jordan elimination is applied again to get the following table:
Basic | z | x_{1} | x_{2} | s_{1} | s_{2} | s_{3} | s_{4} | BFS
z     | 1 | 0     | 0     | 3/4   | 1/2   | 0     | 0     | 21
x_{1} | 0 | 1     | 0     | 1/4   | −1/2  | 0     | 0     | 3
x_{2} | 0 | 0     | 1     | −1/8  | 3/4   | 0     | 0     | 3/2
s_{3} | 0 | 0     | 0     | 3/8   | −5/4  | 1     | 0     | 5/2
s_{4} | 0 | 0     | 0     | 1/8   | −3/4  | 0     | 1     | 1/2
Based on the optimality condition, none of the z-row coefficients of the nonbasic variables s_{1} and s_{2} is negative, hence the last table is optimal. The optimal solution can thus be read off as x_{1} = 3 and x_{2} = 3/2, with optimal value z = 21. Obviously the variables s_{1} and s_{2} are zero. Our original problem involving only x_{1} and x_{2} clearly has the same solution (just disregard the slacks).
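The numbers in the final table can be double-checked by redoing the second pivot by hand in a few lines. This is a sketch with exact rationals; the rows are taken from the second iteration's table:

```python
from fractions import Fraction

# Second iteration done by hand: entering variable x2, leaving s2.
# Rows in column order z, x1, x2, s1, s2, s3, s4 | BFS.
z_row  = [Fraction(v) for v in [1, 0, Fraction(-2, 3), Fraction(5, 6), 0, 0, 0, 20]]
s2_row = [Fraction(v) for v in [0, 0, Fraction(4, 3), Fraction(-1, 6), 1, 0, 0, 2]]

new_x2 = [v / Fraction(4, 3) for v in s2_row]     # new pivot row
f = z_row[2]                                       # z row's x2 coefficient
new_z = [v - f * p for v, p in zip(z_row, new_x2)]
```

The new x_{2} row ends in 3/2 and the new z row ends in 21, matching the final table.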
We have dealt with a maximization problem. In the minimization case, since min z = −max(−z) (when z is a linear function, which it is), we can either convert the problem into a maximization one or reverse the optimality condition: instead of selecting the entering variable as the one with the most negative coefficient in the z row, we select the one with the most positive coefficient. The rest of the steps are the same.
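The identity min z = −max(−z) is easy to see numerically. A toy illustration over an assumed finite feasible set; with a linear objective the same identity holds over the whole polytope:

```python
# min z = -max(-z): minimizing an objective equals negating the maximum
# of its negation.  The feasible points below are just sample data.
feasible_points = [(0, 0), (4, 0), (3, 1.5), (0, 2)]
z = lambda x1, x2: 5 * x1 + 4 * x2

min_z = -max(-z(x1, x2) for x1, x2 in feasible_points)
```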
3. How to Use the Program
The program is fairly easy to use. First, you must select the category of problem you want the program to solve, by clicking on the proper menu choice.
Then you should enter all the data of the problem: the constraints and the objective-function parameters. In the case of the production optimization problem you must enter the following data: the price of each good produced (needed for the program to find the solution that yields the highest profit), the quantity of each raw material required per unit of each good, and the quantity of each raw material available for production.
Press the button “Enter constraints” to enter the quantity of each raw material available and the quantity required for the production of each good.
Then press the button “Enter function to minimize / maximize” to enter the prices of the goods produced (that function, the total profit, is what the program actually optimizes).
After everything is set, press the Solve button and the program provides you with the optimal solution! Simple, isn't it? Simplex, to be exact!
Please use the message board below for any comments, bug reports or suggestions for future versions.