
Comments and Discussions



Hi
I'm trying to apply your LinearRegression class to a simple array of equally spaced values and weights, and I'm a little confused about what should go in the array X that is passed to the Regress method. Could you please clarify in simple language (I'm a software developer, not a mathematician!)?
Thanks
Marcus





To do a linear regression, you usually have (at least) 2 sets of numbers: the X and the Y values. If you have 1 set of numbers (Y) that are equally spaced with respect to some number (X), that is the number you would use for X.
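To make that concrete in code (a sketch, assuming the `Regress(double[] Y, double[,] X, double[] W)` layout used elsewhere in these comments, where `X[j, i]` holds the j-th regression term evaluated at the i-th point): for equally spaced Y values, the sample index itself serves as X.

```csharp
using System;

// Hypothetical equally spaced measurements (one per sample interval).
double[] Y = { 4.1, 4.4, 4.6, 4.9, 5.1 };
int n = Y.Length;

// For a straight-line fit y = c0 + c1*x, the X array gets two "terms":
// a constant 1.0 (for the intercept) and the sample index as x.
double[,] X = new double[2, n];
double[] W = new double[n];
for (int i = 0; i < n; i++)
{
    X[0, i] = 1.0;  // constant term -> c0
    X[1, i] = i;    // equally spaced x is just the index 0, 1, 2, ...
    W[i] = 1.0;     // all points weighted equally
}
// var lr = new LinearRegression();
// lr.Regress(Y, X, W);   // then c0 = lr.C[0], c1 = lr.C[1]
Console.WriteLine($"x runs 0..{n - 1}");
```

The data values and the commented-out calls are illustrative only; substitute your own Y array and however you instantiate the article's class.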
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





hello sir,
so glad to know your project.
I implemented your code.
here is my data,
daily energy consumption | humidity | temperature | sunday ... saturday
758875220001000
From Sunday to Saturday there are 200 rows, and it works well.
However, it didn't work after I eliminated the Saturday and Sunday data.
The error message comes from:
if (Big == 0)
{
    return false;
}
I was wondering if you have seen this issue before?
cheers,
amelie





That error condition means that the matrix diagonal is all zeros, so the matrix inversion cannot be done. Usually that means you do not have enough independent data points.
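To see why that happens in this case (a hypothetical illustration, not amelie's actual numbers): once the Saturday and Sunday rows are removed, the saturday and sunday dummy columns are zero in every remaining row, so the corresponding diagonal entries of the normal-equations matrix are zero and no inverse exists.

```csharp
using System;

// Three remaining weekday rows; the last column is a "saturday" dummy
// that is zero everywhere once the weekend rows have been dropped.
double[,] X = {
    { 1, 75, 0 },
    { 1, 88, 0 },
    { 1, 75, 0 },
};
int m = X.GetLength(0), p = X.GetLength(1);

// Form the normal-equations matrix X'X (unit weights for simplicity).
double[,] XtX = new double[p, p];
for (int j = 0; j < p; j++)
    for (int k = 0; k < p; k++)
        for (int i = 0; i < m; i++)
            XtX[j, k] += X[i, j] * X[i, k];

Console.WriteLine(XtX[2, 2]); // 0 — a zero diagonal entry, so no inverse exists
```

The fix is to drop the always-zero dummy variables from the regression, not just the rows.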
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





Walt - Beautiful work. Your article teaches on many levels.
One tiny error? You need to add Wj inside the B vector on the far right; you do show Wj in the formula above the matrix expansion.
Code provides a tool of greatest value; however, I also find numerical examples helpful. Formal writing rarely gives a step-by-step example, and I personally learn by monkey-see. :) The following example may be helpful to some.
for y = 0.1 + 0.5x + x^2; (x,y) = {(1, 1.6), (2, 5.1), (3, 10.6), (4, 18.1)}
Add a heavily weighted additional point (0, 0.25), observe the coefficients, and compare to Excel's method with a specified intercept of 0.25.
(x,y,W) = {(0, 0.25, 1000000), (1, 1.6, 1), (2, 5.1, 1), (3, 10.6, 1), (4, 18.1, 1)}
x  y     weight   x^0*W    x^1*W  x^2*W  x^3*W  x^4*W  x^0*y*W   x^1*y*W  x^2*y*W
0  0.25  1000000  1000000  0      0      0      0      250000    0        0
1  1.6   1        1        1      1      1      1      1.6       1.6      1.6
2  5.1   1        1        2      4      8      16     5.1       10.2     20.4
3  10.6  1        1        3      9      27     81     10.6      31.8     95.4
4  18.1  1        1        4      16     64     256    18.1      72.4     289.6
Sums:             1000004  10     30     100    354    250035.4  116      407
The weighted augmented matrix:
x^0 x^1 x^2 y
x^0: 1000004 10 30 250035.4
x^1: 10 30 100 116
x^2: 30 100 354 407
Operation - Divide each row by its first term to get a leading value of 1:
x^0 x^1 x^2 y
x^0: 1 9.99996E-06 2.99999E-05 0.2500344
x^1: 1 3 10 11.6
x^2: 1 3.333333333 11.8 13.56666667
Operation - Subtract row 1 from row 2 and row 3 to zero out their first terms:
x^0 x^1 x^2 y
x^0: 1 9.99996E-06 2.99999E-05 0.2500344
x^1: 0 2.99999 9.99997 11.3499656
x^2: 0 3.333323333 11.79997 13.31663227
Operation - Divide row 2 and row 3 by their respective x^1 values to get a leading value of 1:
x^0 x^1 x^2 y
x^0: 1 9.99996E-06 2.99999E-05 0.2500344
x^1: 0 1 3.333334444 3.783334478
x^2: 0 1 3.54000162 3.995001665
Operation - Subtract row 2 from row 3 to zero out its x^1 term:
x^0 x^1 x^2 y
x^0: 1 9.99996E-06 2.99999E-05 0.2500344
x^1: 0 1 3.333334444 3.783334478
x^2: 0 0 0.206667176 0.211667187
Operation - Divide row 3 by its x^2 value to get a leading value of 1:
x^0 x^1 x^2 y
x^0: 1 9.99996E-06 2.99999E-05 0.2500344
x^1: 0 1 3.333334444 3.783334478
x^2: 0 0 1 1.024193545
Operation - Subtract row 1's x^2 value times row 3 from row 1 to zero out row 1's x^2 value;
subtract row 2's x^2 value times row 3 from row 2 to zero out row 2's x^2 value:
x^0 x^1 x^2 y
x^0: 1 9.99996E-06 0 0.250003674
x^1: 0 1 0 0.369354856
x^2: 0 0 1 1.024193545
Operation - Subtract row 1's x^1 value times row 2 from row 1 to zero out row 1's x^1 value:
x^0 x^1 x^2 y
x^0: 1 0 0 0.249999980645164 - This y value is the x^0 coefficient (the constant).
x^1: 0 1 0 0.369354856 - This y value is the x^1 coefficient.
x^2: 0 0 1 1.024193545 - This y value is the x^2 coefficient.
The above is the weighted least squares solution.
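Tom's hand elimination can be checked mechanically. The sketch below (independent of the article's class) solves the same 3x3 weighted normal equations from the sums table by Gaussian elimination and reproduces the three coefficients.

```csharp
using System;

// The weighted normal equations assembled in the worked example above;
// rows: [Sx^0W Sx^1W Sx^2W | SyW], [Sx^1W Sx^2W Sx^3W | SxyW], etc.
double[,] a = {
    { 1000004, 10,  30,  250035.4 },
    { 10,      30,  100, 116      },
    { 30,      100, 354, 407      },
};
int n = 3;

// Forward elimination (no pivoting needed for this well-behaved example)
for (int col = 0; col < n; col++)
{
    for (int row = col + 1; row < n; row++)
    {
        double f = a[row, col] / a[col, col];
        for (int k = col; k <= n; k++)
            a[row, k] -= f * a[col, k];
    }
}

// Back substitution
double[] c = new double[n];
for (int row = n - 1; row >= 0; row--)
{
    c[row] = a[row, n];
    for (int k = row + 1; k < n; k++)
        c[row] -= a[row, k] * c[k];
    c[row] /= a[row, row];
}

Console.WriteLine($"c0={c[0]:F6} c1={c[1]:F6} c2={c[2]:F6}");
// c0 ~ 0.250000, c1 ~ 0.369355, c2 ~ 1.024194, matching the hand calculation
```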
===================================================================================
Plugging the original 4 (x, y) pairs into Excel and adding a second-order trend line to the graph of these points with an intercept of 0.25 produces the following:
y = 1.02419355x^2 + 0.36935484x + 0.25000000
R² = 0.99998130
Note how the constant term Excel generates is exactly 0.25.
Also, forcing a point off the perfect quadratic data points reduces the correlation coefficient. This is expected.
Slightly off-topic >> I have yet to figure out how to set up the augmented matrix to force a nonlinear least squares solution to exactly pass through a specified point.
If someone can explain how to build the augmented matrix for passing through a specified point ... I would be very grateful!
Walt's work was the first I found that explains in sufficient detail what I needed to successfully perform weighted least squares. Too many missing steps in so many articles make them mostly some sort of conversation amongst those that can, and frustrate those that can't. Often I find statements made by others that are simply wrong. There are many such statements online on the slightly off-topic task of forcing a nonlinear fit through a specific point.
The work that goes into producing code is a couple of orders of magnitude more difficult than providing a step-by-step basic math example. Throw in another order of magnitude for the choice of topic and the willingness to help others, plus an order of magnitude for the high quality of the document itself. Great job Walt! My salute to Walt, along with considerable appreciation.
Cheerful number crunching to all.
 Tom






There is a problem with the SymmetricMatrixInvert function if the symmetric matrix has all diagonal elements equal to zero. It always returns false, as if no inverse exists, and that's not true, as in the following 2x2 matrix:
0 1
1 0





Very good work!!!
I was trying to find an algorithm to predict future values of blood sugar levels in diabetic patients. I have the data of sugar levels from patients aiming at a goal (a normal value, e.g. 7). I know the measurement after every hour for fasting patients (x = level, y = time), like:
7AM -> 12.6
8AM -> 11.4
...
12PM -> 8.5
I know the critical levels are 3 or less (hypoglycemia) and 10 (hyperglycemia).
Considering the behavior of the body with the three factors (food, medication, exercise), and having measured these behaviors, I was trying to deduce the formula for each patient to calculate when in the future (Y) the goal level (X), the hyper, or the hypo will be reached.
Thanks for the solution to my problem.
I'm considering translating your code to Objective-C (my main programming language).
Have a nice day
Fernando Araujo
Montreal, Canada





This looked great, but when I tried it with a larger number of rows (around 3000), a weight of 1, and N=1 (just a simple linear regression), the numbers in Excel and in this regression start to show significant differences, especially in R-squared. For this I used the demo, just changed to be able to paste the rows.
When I tried large multiple linear regressions I got absolutely wrong results, again using a weight of 1. (I used the LinearRegression class controlled by another program, not the polynomial hard-coded in the demo.)
I suspect that there is something wrong with the matrix inversion subroutine, perhaps loss of precision?
Any ideas??





Walt, very nice article!
I have around 600 observations and around 18 variables; do you think your code can help me?
I'm asking because I saw one of the comments talks about some limitations for big N, but I didn't understand whether this N is the number of observations, the number of variables, or something else.
Thanks,
Eli





Eli,
Thanks for your comment and best of luck with your work.
There's no mathematical reason that the algorithm shouldn't work for 600 data points and 18 variables, but there may be numerical problems, depending on the data and the correlation between the variables.
If you have large correlation between variables, the matrix becomes near singular and so the results may be suspect. In that case you probably need to rethink the selection of variables, because some of them may be nearly redundant.
If you have a large number of data points and the values are large, then you can get a loss of precision in the floating point representation of the matrix terms, with a resulting loss of precision in the answers. In that case you should consider normalizing the data first.
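A minimal sketch of the normalization mentioned above (this helper is an illustration, not part of the article's class): rescale each variable to [0, 1] before building the regression terms, so that high powers or large sums do not overwhelm double precision.

```csharp
using System;

// Rescale x to [0, 1] before regressing; the fitted model must then be
// evaluated with the same scaling applied to any new x values.
double[] x = { 0, 50, 100, 150, 200, 250, 299 };
double min = x[0], max = x[0];
foreach (double v in x)
{
    if (v < min) min = v;
    if (v > max) max = v;
}

double[] xScaled = new double[x.Length];
for (int i = 0; i < x.Length; i++)
    xScaled[i] = (x[i] - min) / (max - min);

Console.WriteLine($"{xScaled[0]} .. {xScaled[^1]}"); // 0 .. 1
```

The same idea applies to the Y values (centering on the mean, for example); just remember to undo the transformation when reporting coefficients.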
For either of those cases, I'd recommend a book on numerical methods and regression analysis, because it's far beyond what I could post here.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





Hi, I find this article very interesting, but maybe I'm missing something.
If I have this system x1, x2, x3, the equation is:
y = c1x1 + c2x2 + c3x3
right?
But where is b?
In linear regression the equation is y = cx + b,
but here? How can I get this form?
y = c1x1 + c2x2 + c3x3 + b
Many thanks!





In your example case, you would add an x4 term and set the value of x4 always equal to 1.0, so then c4 would be the constant term that multiplies 1.
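A sketch of that setup (assuming the `Regress(double[] Y, double[,] X, double[] W)` layout used elsewhere in these comments, with `X[j, i]` as the j-th term at the i-th point; the data values are made up):

```csharp
using System;

// Fit y = c1*x1 + c2*x2 + c3*x3 + b by adding a fourth regressor
// that is always 1.0; its coefficient c4 is the intercept b.
double[] x1 = { 1, 2, 3, 4, 5 };
double[] x2 = { 2, 1, 4, 3, 6 };
double[] x3 = { 0, 1, 0, 1, 0 };
double[] y  = { 5, 6, 9, 10, 13 };  // made-up data

int n = y.Length;
double[,] X = new double[4, n];
double[] W = new double[n];
for (int i = 0; i < n; i++)
{
    X[0, i] = x1[i];
    X[1, i] = x2[i];
    X[2, i] = x3[i];
    X[3, i] = 1.0;  // constant term: its coefficient is the intercept
    W[i] = 1.0;
}
// var lr = new LinearRegression();
// lr.Regress(y, X, W);  // lr.C[3] would then be the intercept b
Console.WriteLine($"constant column: {X[3, 0]} ... {X[3, n - 1]}");
```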
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





Thanks a lot for sharing your article it's excellent
MC Miguel Salas Z.





Thanks, Walt... Been trying to get this functionality in place all day. I finally stumbled upon this article, and was done in five minutes






Walt  Thanks very much for posting this. It works great!
I added a simple Regress method that takes a double[,] XY and int Order as inputs to make it a bit easier to fit a set of x,y data. The code is below and actually calls the original Regress method that you have. This method also assumes the weights are all 1.
Scott Raymond
public bool Regress(double[,] XY, int order)
{
    int countXY = XY.GetLength(0);
    double[,] x = new double[order + 1, countXY];
    double[] y = new double[countXY];
    double[] w = new double[countXY];
    for (int i = 0; i < countXY; i++)
    {
        for (int j = 0; j <= order; j++)
        {
            x[j, i] = Math.Pow(XY[i, 0], j);
        }
        y[i] = XY[i, 1];
        w[i] = 1;
    }
    return Regress(y, x, w);
}





Thank you for the comment, and I'm glad you found it useful.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software








Would give a 5 but the code is very terse and tough to follow.





I feed in a simple symmetric 2x2 matrix, like (a b) (b c), and only the first value is inverted in the result: (1/a b) (b c). Same thing for 3x3.





My bad - it was a wrong translation to Java: double[][] V = new double[2][2]; V.length == 2; // and not 4 as in the algorithm





Glad you got it working!
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





Thanks for a great article and source code.
Eyal





Hi,
Thank you! This is really useful and simple code! Have you thought about creating something similar for weighted logistic regression as well?
best wishes,
Njaatur





Hi there,
If you are still looking for a C# Logistic regression implementation, you may check this article for ordinary Logistic Regression in C#[^]. It is not a weighted algorithm, though.
Cheers,
César
César Roberto de Souza
http://www.crsouza.com





Thanks! I bookmarked your page. I'm using the Newton-Raphson algorithm right now to fit the logistic model. It was quite easy to add weights to the values there.





Thanks for sharing your code and knowledge. It is a very valuable effort.
I am trying to apply the linear regression to a data set of 300 points. I found that the fit of the resulting Nth-order polynomial to the data is good while N is lower than or equal to 10. When evaluating higher orders (11, 12, ...), the fit of the last 50 points is not adequate.
As a simple test, I tried to apply linear regression to 300 points where x = 0, 1, 2, ..., 299 and y is constant (100). W is equal to 1.0 for all points. The fit of the resulting polynomial to the data is good until N = 10, but with N = 11 or above, the polynomial does not fit the last 50 points. Is this a natural limitation of this method? Could it be a result of information lost to quantization limitations?
By the way, this is the interpolation function that I added to the class in order to plot the resulting polynomial and compare it to the original data.
public double Interp(double x)
{
    double interp = this.C[0];
    for (int i = 1; i < this.C.Length; i++)
    {
        interp += this.C[i] * Math.Pow(x, i);
    }
    return interp;
}
modified on Friday, November 6, 2009 1:01 PM





I suspect that you are seeing the effect of a near-singular matrix. If you use multiple terms with a constant function, then all except the constant term should be zero, since the correct equation for your example would be y = 100, which doesn't depend on x at all. Attempting to add more than the constant term cannot increase the goodness of fit, so I suspect that the differences you are seeing come from the higher-order terms fitting minor random numerical variations.
For example, with a 10th-order polynomial, calculating 299 to the 10th power gives about 5.7E+24, and precision would be lost in the finite floating-point calculations.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





Would you recommend a different algorithm for higher-order polynomials in order to avoid this problem? I am translating some pattern-classification algorithms that were designed in Mathcad to C#, and I must use 12th-order fitting. The regression algorithm that Mathcad uses provides a better fit for higher orders, but I have no idea about its construction.
Regards, and thanks again for sharing your knowledge.
Victor Lopez





You could look at the routines for Singular Value Decomposition (SVD) in Numerical Recipes. Unfortunately there is no C# version of the book, only C and C++.





Hi, I was looking for an easy linear regression for a few hours and finally wrote this code based on what linear regression means:
/// <summary>
/// Returns linear regression values for the formula y = a + bx
/// </summary>
/// <param name="points">points we want to compute the linear regression for</param>
/// <param name="a">a in the formula y = a + bx</param>
/// <param name="b">b in the formula y = a + bx</param>
void LinearRegression(double[] points, out double a, out double b)
{
    double sumOfAllXSquared = 0;
    double sumOfAllY = 0;
    double sumOfAllX = 0;
    double sumOfAllXMultiplyY = 0;
    for (int i = 0; i < points.Length; i++)
    {
        sumOfAllX += (i + 1);
        sumOfAllXSquared += (i + 1) * (i + 1);
        sumOfAllY += points[i];
        sumOfAllXMultiplyY += (i + 1) * points[i];
    }
    a = (sumOfAllXSquared * sumOfAllY - sumOfAllX * sumOfAllXMultiplyY) / (points.Length * sumOfAllXSquared - sumOfAllX * sumOfAllX);
    b = (points.Length * sumOfAllXMultiplyY - sumOfAllX * sumOfAllY) / (points.Length * sumOfAllXSquared - sumOfAllX * sumOfAllX);
}
Enjoy!





This model seems great - there are so few examples that let you input lists of the predictor variables and weights.
However, there seems to be a problem. I'm using 4 predictor variables and my Y's, but the model it calculates has only 4 coefficients - shouldn't there also be a y-intercept (a beta_0 in addition to beta_1 through beta_4)? As a sanity check I ran the exact same data through Excel's regression model and I get the answer I'm looking for, with very different coefficients for the 4 X's AND a Y intercept.
Perhaps I'm not understanding how to use this package correctly.
Any suggestions?





If I understand correctly, you are fitting y vs. x1, x2, x3, and x4, in the form C1*x1 + C2*x2 + C3*x3 + C4*x4. If I got that right, then inputting data for (y, x1, x2, x3, x4) and fitting would just give the 4 linear coefficients of the x1, x2, x3, x4 terms.
To get a constant term, i.e. C0, you would need to put a constant term in the regression. In other words feed it data in the form (y, x0=1, x1, x2, x3, x4). Then the coefficient of the x0=1 term would be the intercept.
Some regression packages automatically include the constant term, but I chose not to include it, since at times it's not desired.
Thanks for the comment and let me know if that answered your question. If not, I'll try again.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





Do you mean enter a vector for x0 of all 1's? I actually tried this first, but it didn't work.
Something else that does seem to work better is to take all your Y values and transform them before and after the regression: just subtract the C0 term from each Y value, run the regression, and then use it in your final equation (y = c0 + c1*x1 ...).
The only problem with this is you usually won't know what your y-intercept is ahead of time, but just using your first y value seems to be a good approximation (at least for my purposes).
I would still be interested in hearing a clarification on the "inputting constant x0's" issue. Have you tested this?





MattF@ID wrote: Do you mean enter a vector for x0 of all 1's? I actually tried this first but it didn't work.
...
I would still be interested in hearing a clarification on the "inputting constant x0's" issue. Have you tested this?
Hmmmm, yes, I've done the 1's trick many times and never had a problem. It's a standard procedure and shown in several regression books including the one I referenced. In fact the example project uses 1 for all the constant terms in the polynomials. Can you show me the actual points you are using in the regression?
For the example shown at the top of the article with a quadratic polynomial, the points fed to the regression would be in (1, x, x^2, y) form:
(1, 0, 0, 4.098...), (1, 0.1, 0.01, 4.396...), (1, 0.2, 0.04, 4.614...), etc.
In other words, x0 = (1, 1, 1, ... ), x1 = (0, 0.1, 0.2, ...), x2 = (0, 0.01, 0.04, ... ), y = (4.098..., 4.396..., 4.614..., ...).
As you can see, the C0 value is correctly estimated very close to 4.0 which was used to generate the points.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software





It's nice when someone knowledgeable takes the time to simplify a complicated topic. I've seen too many programmers do the reverse.
Good job on the article and the code!
Dorian





When I use your regression code on data generated using the simple function y = x + 27, I expect a perfect correlation with a coefficient of 1.0. There is one independent variable and all of the weights are 1.
That is not what I obtain. However, if the Y and X arguments to Regress() are set to be deviations from the mean ((X[i] - XBar), for example), then the expected results are obtained.
My preference would be to add code to Regress() to calculate the X and Y means and set the covariance array (V) using deviations of the Xs and Ys from their means.
The calculated Y values would also have YBar added back in.
Thanks for your contribution.





Hello
I did not check the code because I just downloaded it; however, I would like to know whether it does stepwise regression or not.
regards
Abbas





No, the code does not implement a stepwise regression. I'm sure you know that a stepwise regression is simply a procedure for applying a sequence of regressions while adding parameters or removing parameters from the regression equation. The code here can be used to do each of the regressions, but you'd have to enclose it in your own algorithm to automatically add or remove parameters.
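A rough sketch of such a wrapper (forward selection) is below. The `Score` delegate is a placeholder, not part of the article's class: in practice it would build the X matrix for the trial subset, call Regress, and return your chosen fit statistic (adjusted R-squared, for example).

```csharp
using System;
using System.Collections.Generic;

// Forward selection: repeatedly add the candidate variable that most
// improves the score, stopping when no candidate improves it.
// Score(subset) is a placeholder only; plug in a real regression here.
Func<List<int>, double> Score = subset => subset.Count;

var remaining = new List<int> { 0, 1, 2, 3 };  // candidate variable indices
var selected = new List<int>();
double best = double.NegativeInfinity;

bool improved = true;
while (improved && remaining.Count > 0)
{
    improved = false;
    int bestVar = -1;
    foreach (int v in remaining)
    {
        var trial = new List<int>(selected) { v };
        double s = Score(trial);          // one regression per trial subset
        if (s > best) { best = s; bestVar = v; }
    }
    if (bestVar >= 0)
    {
        selected.Add(bestVar);
        remaining.Remove(bestVar);
        improved = true;
    }
}
Console.WriteLine(string.Join(",", selected));
```

Backward elimination is the mirror image: start with all variables and repeatedly remove the one whose removal hurts the score least.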
The PetroNerd
Walt Fair, Jr.
Comport Computing
Specializing in Technical Engineering Software





Thank you for your article. You asked if you need more code comments - I don't think so because, quite frankly, one needs to be familiar with applied statistics in order to understand it, and it's pretty straightforward code.





Hello,
Though the effort merits consideration, this article in its current state does not, in my opinion, deserve more than 2 stars.
I believe a few things should be pointed out:
- First, and probably most important of all, this is NOT a robust linear regression method at all. More can be found on this subject in Numerical Recipes, or on the net: search for Robust Linear Regression + LMS / LTS.
- The source code is really poorly commented. As an example, the class and its methods are not commented.
- The method SymmetricMatrixInvert is marked as public, when I am not sure it should be in that class in the first place.
- The method SymmetricMatrixInvert has a parameter, V, which is also a private member of the class. This is because no convention has been used at all to identify private members, local variables, and function parameters.
- It is difficult to understand what the RunTest method is there for at all.
- Nothing is done concerning exceptions. As an example, this part of the code (see below) could throw a division-by-zero exception. Nothing has been done to either prevent it or to give the caller better information:
double WSUM = 0;
for (int k = 0; k < M; k++)
{
    YBAR = YBAR + W[k] * Y[k];
    WSUM = WSUM + W[k];
}
YBAR = YBAR / WSUM; // <= here, if all W are zero then WSUM is equal to zero
So to conclude, this code may be good to start with, but a lot can be improved to raise it to the standard expected for 5 stars.
All the best,
MM





Thanks for your comments. I will certainly take those into account and perhaps revise the article and source code. I'm always looking for ways to make it better and appreciate some candid comments. That's much more constructive than a simple bad vote leaving the author not knowing why!
As far as the robustness, I will check into that, since it's been a while since I reviewed the Numerical Recipes code. As I recall, when I was doing fits of a few million data points and had about 100 correlation coefficients, the Numerical Recipes code wasn't all that great, but I'll recheck. Thanks for pointing that out.
Actually in my library, the matrix inversion is in a totally separate class that contains many more matrix and vector methods. For the purpose of this article, I decided to include it in this class, rather than having to write about a whole other class, 99% of which wouldn't be mentioned in the article. I apologize if that causes you grief.
Actually I follow normal conventions. All methods not specifically marked otherwise are always private in C#, so marking them private is redundant. Again, if you only use the matrix inversion for regression, one could obviously remove the parameter in the call to the inversion method and just access the class member. Please feel free to make those modifications if you choose to implement anything with the provided code.
As far as the RunTest method, I mentioned in the article that it wasn't covered in this article. I can remove it from this article, if you prefer. It's used for checking the plausibility of a regression model, which a Google for "Run Test" would show. In writing the article, I had to make a decision as to how much to cover in my available writing time. I guess I should have deleted that method, just as I did some of the others, but then you would probably ask why some of the variables were defined and calculated, but never used. Such is life.
As far as the exception, actually that exception can't occur, since setting all the W[i]'s to zero would cause a singular matrix problem that would be trapped much earlier in the code. In fact, the matrix inversion would not even be attempted and the method would return false before even attempting to do any further calculations. I don't consider redundant checks to be good practice, since they are equivalent to dead code, but perhaps some comments earlier in the code could clarify that for those who don't understand the significance of a singular matrix?
One question:
I agree the code comments are somewhat sparse if you're not real familiar with applied statistics. All of the variable names are the same as used in Draper and Smith, as well as other statistical books. If I add the comments and update the article, would you change your rating, or would it be a wasted effort?
I write articles and other web contributions as I have time.
Thanks again for the comments. It's always nice to understand how and why people evaluate things as they do.
The PetroNerd
Walt Fair, Jr.
Comport Computing
Specializing in Technical Engineering Software





When you say: "I agree the code comments are somewhat sparse if you're not real familiar with applied statistics"
I am not sure this point is relevant. I would say I am pretty much able to understand such basic statistics. The point I am making is not about readers' ability to understand the code, but rather about readers' ability to use the code, and about other people understanding WHY things are done in a given way. Comments should say why, not really what the code does, which anyone can read more or less. It is best practice to comment the code as much as possible/reasonable, for everyone's sanity, and especially when publishing on a website such as CodeProject. This code may be of interest to a lot of people who may just want to use it as a black box. In this case commenting the methods is very valuable to everyone. And with comments, the "RunTest" method would be clear to everyone without having to come back to the website to understand what the heck that method is for. You see the point here?
FYI, in Numerical Recipes it is explained that regression methods fail as soon as there is "bad" data, which in practice happens far more often than anyone would like/hope. If I remember well, in the book the authors propose to minimize the median of the errors instead of the sum of the squared errors of the residuals. Their code is not perfect either and can fail, but in my experience it fails less often than plain MLR. I too run, and have written, tools that calculate hundreds of thousands of multiple regressions per day per user. And I am well aware of all the problems we have with bad instrument readings, wrong input, decoding errors, etc., and hence the need for much more robust statistical methods (LMS/LTS) as implemented in S-Plus and the like.
If I add the comments and update the article, would you change your rating, or would it be a wasted effort?
Yes, I do not see why not.
I write articles and other web contributions as I have time.
Yes and it is a nice to see such effort and contributions.
All the best,
MM





Thanks again for the reply. I think we have a basic philosophical difference, alas. When you said:
Michael Moreno wrote: This code may be of interest to a lot of people who may just want to use it as a black box.
that makes my skin crawl. I would respond by saying: Please, if one does not understand multiple regression, please don't use my or anyone else's code thinking it will save the day! Actually I've made a nice living over the years cleaning up after people who used the wrong technique at the wrong time.
As far as more robust methods, again, that is a different problem. If least squares should not be used, then obviously applying it would not be advisable. I have other algorithms for doing nonlinear regressions, which I may write about if I have time some day. In my experience, there is no best general NLP method, but which works best depends heavily on the problem at hand and the character of the data.
When I have a few moments, I may try to add some comments to the code, but I sure won't be able to teach people who want to use it as a black box all the intricacies!
Regards,
Walt
The PetroNerd
Walt Fair, Jr.
Comport Computing
Specializing in Technical Engineering Software





Sure.
On the other hand we use the .Net framework every day. It is a complete black box but since it is commented we can use it.
People who use MatLab and the like also use black box and they do not care much. They use building blocks to do smarter things than they could have if they have to write all the code from scratch themselves.
All the best,
MM





Well, comparing a simple linear regression algorithm to the NET framework or to MatLab doesn't make much sense to me.
I do agree that people do indeed use tools they don't understand all the time. Sometimes it works out OK. On the other hand, I've made quite a nice living over the years cleaning up after cases where it didn't work out so well.
I guess my advice would be, if someone wants to use black-box technology without understanding it, please keep me in mind. I charge double for cleaning up messes compared to doing it right the first time, and I've never been out of work!
Regards!
The PetroNerd
Walt Fair, Jr.
Comport Computing
Specializing in Technical Engineering Software





Very good point Walt. Black Box programs simply don't exist in the World of inverse problems. The authors of Numerical Recipes state this over and over again, that is why there are so many different types of codes and variations for handling the different types of problems, and even then, no guarantees. There is no substitute for understanding the problem you are working with, and the limitations of the tools at your disposal.








Type: Article
Licence: CPOL
First Posted: 17 Apr 2008
Views: 112,596
Downloads: 4,501
Bookmarked: 84 times

