|
He could have meant it was faster to write than doing the foreach and the if statements, not that it runs faster. And I find it is much faster to write.
|
|
|
|
|
No, he meant faster to execute. I set up some experimental code that looped many times using the various methods, a native foreach and linq, and timed them. I also pointed out that the linq code was using anonymous methods, and that those have overhead too.
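A timing harness along those lines is straightforward to sketch; the code below is illustrative rather than the original experiment (the data, sizes, and method names are invented):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class LoopTimings
{
    static void Main()
    {
        int[] data = Enumerable.Range(0, 1_000_000).ToArray();

        // warm up both paths so JIT compilation isn't counted in the timings
        SumEvensForeach(data);
        SumEvensLinq(data);

        var sw = Stopwatch.StartNew();
        long a = SumEvensForeach(data);
        sw.Stop();
        Console.WriteLine($"foreach: {sw.Elapsed.TotalMilliseconds:F2} ms");

        sw.Restart();
        long b = SumEvensLinq(data);
        sw.Stop();
        Console.WriteLine($"linq:    {sw.Elapsed.TotalMilliseconds:F2} ms");

        Console.WriteLine(a == b); // sanity check: both paths agree
    }

    static long SumEvensForeach(int[] data)
    {
        long sum = 0;
        foreach (int n in data)
        {
            if (n % 2 == 0) sum += n;
        }
        return sum;
    }

    static long SumEvensLinq(int[] data)
    {
        // the lambdas become delegates; invoking them per element is the
        // anonymous-method overhead mentioned above
        return data.Where(n => n % 2 == 0).Sum(n => (long)n);
    }
}
```

On typical runs the foreach version comes out ahead, but the gap depends heavily on the JIT and on the work done per element, which is why measuring beats guessing.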
|
|
|
|
|
Writing all of your code in one big Main function is faster than any of this "object-oriented" nonsense.
And using C or assembly will be much faster than this JIT-compiled C# nonsense.
Of course, it will take a lot longer to write, and be much harder to debug. But premature optimization is much more important than sleep!
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
It's about trade-offs though. There is nothing wrong with going for a worse-performing technology\method if you gain elsewhere and the gain is an acceptable trade. But using linq over a foreach gives no real gain in the kind of situations we're talking about, so for no gain you are paying in performance.
|
|
|
|
|
F-ES Sitecore wrote: using linq over a foreach gives no real gain
Except for more readable* and concise code.
* For those of us who have been assimilated.
|
|
|
|
|
That's a matter of opinion. In one of the articles posted in this thread, MS explain why there is no ForEach extension method for IEnumerable, the reason being that it is less readable and offers no real advantage over the native foreach.
You also forget "harder to debug".
|
|
|
|
|
As others have pointed out, ForEach is the odd man out here. A foreach loop over the sequence returned from LINQ is the better option.
But the bulk of LINQ is about telling the compiler what to do, not how to do it. And that makes the code much more readable (for some).
Imagine you start with a big block of code in a single method. It takes a list, filters it, sorts it, groups it, filters it some more, projects it, and then processes it. You've got a fairly complex method which is specific to one task. If you need to repeat any of those operations, you have to duplicate the code.
The first thing you would do is refactor the code, to move some of the common operations out into separate, simpler, reusable methods. You could then write simple unit tests for those methods, without having to set up the more complicated data for your original method, and without having to work out which part of the original method failed if the tests failed.
Then, you would reuse those simpler methods elsewhere when you needed to do the same thing. Need to filter a list? Call SomeClass.FilterAList. Need to group a list? Call SomeClass.MakeSomeGroups.
Pretty soon, you end up with a collection of utility methods that you're reusing everywhere. But the syntax is quite nasty:
var source = GetAList();
var filtered = SomeClass.FilterAList(source, SomeFilter);
var sorted = SomeClass.MakeItSorted(filtered, SomeSortingCondition);
var grouped = SomeClass.MakeSomeGroups(sorted, SomeGroupingCondition);
var filteredAgain = SomeClass.FilterAList(grouped, AnotherFilter);
var result = SomeClass.ProjectAList(filteredAgain, SomeProjection);
Or, nesting the calls instead of using intermediate variables, it's even worse:
var result = SomeClass.ProjectAList(
SomeClass.FilterAList(
SomeClass.MakeSomeGroups(
SomeClass.MakeItSorted(
SomeClass.FilterAList(
GetAList(),
SomeFilter),
SomeSortingCondition),
SomeGroupingCondition),
AnotherFilter),
SomeProjection);
To tidy it up, you would like to be able to call each utility method as if it were defined on the IEnumerable&lt;T&gt; interface. You can't add the methods to the interface, since that would break everything that implemented it. So instead, you introduce extension methods, and the syntax becomes:
var result = GetAList()
.FilterAList(SomeFilter)
.MakeItSorted(SomeSortingCondition)
.MakeSomeGroups(SomeGroupingCondition)
.FilterAList(AnotherFilter)
.ProjectAList(SomeProjection);
Now it's much easier to see what's going on, which condition applies to which operation, etc.
Change the method names, and you've effectively reinvented LINQ.
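For comparison, here is that same five-stage shape written with the real operators, over small invented data so it runs as-is:

```csharp
using System;
using System.Linq;

class LinqPipelineDemo
{
    static void Main()
    {
        int[] source = { 5, 3, 8, 1, 8, 3, 5, 5 };

        // filter, sort, group, filter again, project - the same shape
        // as the FilterAList/MakeItSorted/MakeSomeGroups chain above
        var result = source
            .Where(n => n > 2)                      // FilterAList(SomeFilter)
            .OrderBy(n => n)                        // MakeItSorted(...)
            .GroupBy(n => n)                        // MakeSomeGroups(...)
            .Where(g => g.Count() > 1)              // FilterAList(AnotherFilter)
            .Select(g => $"{g.Key} x{g.Count()}");  // ProjectAList(...)

        Console.WriteLine(string.Join(", ", result)); // 3 x2, 5 x3, 8 x2
    }
}
```

Each stage is lazy; nothing is evaluated until the result is enumerated, which is part of what "what, not how" buys you.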
F-ES Sitecore wrote: You also forget "harder to debug"
Only if you're trying to debug the framework code. If you stick to debugging your own code, it's easier to debug, because there's less of it.
|
|
|
|
|
I'm talking about using linq to iterate a collection vs using foreach. As I said in my post, it is fine to use linq if you are getting advantages, such as in the example you just posted, but I thought I made it pretty clear that was not the kind of code I was talking about, and also that I never said to never use linq.
Richard Deeming wrote: If you stick to debugging your own code, it's easier to debug, because there's less of it
var result = GetAList()
.FilterAList(SomeFilter)
.MakeItSorted(SomeSortingCondition)
.MakeSomeGroups(SomeGroupingCondition)
.FilterAList(AnotherFilter)
.ProjectAList(SomeProjection);
That line throws a null reference exception... can you look at the line that threw the exception and know what the issue is?
|
|
|
|
|
F-ES Sitecore wrote: That line throws a null exception...can you look at the line that threw the exception and know what the issue is?
Assuming I'm using LINQ, the most likely culprit would be GetAList returning null.
Failing that, I'd have a stack trace to tell me where the exception occurred.
|
|
|
|
|
But you agree it wouldn't be immediately obvious like it would be if you weren't using linq?
|
|
|
|
|
If I wasn't using LINQ, then I'd be able to identify which line in the massive complicated method the exception was thrown from.
Whether it would be obvious why the exception was thrown is a different matter.
|
|
|
|
|
So it's easier to debug without linq?
|
|
|
|
|
No, because you've still got a massive overly-complicated method to dig through to find the cause of the problem.
|
|
|
|
|
You instantly know the code that threw the error, though, so that's a big starting point. Let me give you a better example:
string mytext = mydata.Where(a => a.Name != "Admin" && a.ID < 1000).OrderBy(b => b.Surname).SelectMany(c => c.Role).FirstOrDefault(d => d.Updated.Year == DateTime.Now.Year);
We've all seen code like this, right? Let's say it throws a null reference exception; good luck finding out what is null. If you split your code into functions\loops you don't have that issue.
|
|
|
|
|
Start by changing the code to:
string mytext = mydata
.Where(a => a.Name != "Admin" && a.ID < 1000)
.OrderBy(b => b.Surname)
.SelectMany(c => c.Role)
.FirstOrDefault(d => d.Updated.Year == DateTime.Now.Year);
Your stack trace will include a line number, which will tell you exactly which line you need to look at.
|
|
|
|
|
I guess null exceptions aren't a particularly great example of debugging; despite what you might read on CP, they're not the hardest issues to track down. When it comes to logic issues with chains of linq statements, if you want to debug them to find out why you're getting\not getting the results you want, you often have to isolate the steps and look at them in turn, which is an additional faff you wouldn't have otherwise.
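For what it's worth, isolating those steps is mostly mechanical: materialize each stage into a named intermediate and inspect it. The sample data and properties below are invented for illustration:

```csharp
using System;
using System.Linq;

class DebugChainDemo
{
    static void Main()
    {
        // invented sample data standing in for "mydata"
        var mydata = new[]
        {
            new { Name = "Bob",   ID = 5,    Surname = "Jones" },
            new { Name = "Admin", ID = 1,    Surname = "Root"  },
            new { Name = "Ann",   ID = 2000, Surname = "Smith" },
        };

        // each stage is a named, inspectable value instead of one opaque chain
        var filtered = mydata.Where(a => a.Name != "Admin" && a.ID < 1000).ToList();
        Console.WriteLine($"after Where:   {filtered.Count}");

        var sorted = filtered.OrderBy(b => b.Surname).ToList();
        Console.WriteLine($"after OrderBy: {sorted.Count}");

        var first = sorted.FirstOrDefault();
        Console.WriteLine(first?.Name ?? "<none>");
    }
}
```

Note that the ToList calls turn lazy evaluation into eager evaluation, so the behaviour being debugged can differ subtly from the chained version; that is the "additional faff" in action.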
|
|
|
|
|
I still think that's easier to do if you're reusing small methods that do one clearly-defined thing, and which have been thoroughly tested, than if you've lumped all of the implementation into one giant method.
|
|
|
|
|
Regardless, debugging is still harder with chains of linq statements, that's the only point I was making.
|
|
|
|
|
Richard Deeming wrote: Writing all of your code in one big Main function is faster than any of this "object-oriented" nonsense.
That is almost certainly not true. With one big function, the optimizer practically has to shut down. Many smaller functions can be highly optimized.
Richard Deeming wrote: And using C or assembly will be much faster than this JIT-compiled C# nonsense.
Again, real-world examples have shown that letting the computer do things like managing your resources is much faster than trying to do it yourself manually.
Truth,
James
|
|
|
|
|
I would be a bit surprised if your first point was true. Please give me a couple of examples of optimizations that depend on function size.
|
|
|
|
|
Optimization largely depends on tracking the lifetime of variables:
for (int i = 0; i < 100; ++i)
{ ... }
will be better optimized than this:
int i;
for (i = 0; i < 100; ++i)
{ ... }
just because the compiler knows that "i" is never used again outside that for loop. In the latter, space must be allocated for i on the stack, and it must be stored there. In the former, "i" may live out its entire existence in a register.
Now, in an example as small as the above, a good compiler may still realize that even the second "i" is not used again, but the larger that function gets, with more things to track, the more the optimizer begins to give up.
|
|
|
|
|
That might be true, but it comes at the cost of jumps and stack management for function calls. I think you will be hard-pressed to find an example of one block of code that runs slower than similar code split into more functions.
|
|
|
|
|
But then, what about library functions? Are you going to inline every call to ToUpper() or Trim()?
If you do, you have an unmanageable mess.
If you don't, then you're back to the costs of jumps and stack management, so what's a few more?
|
|
|
|
|
"What's a few more" is often significant overhead.
I don't know what argument you think I'm making. The comment I was responding to simply said that writing your code in one big Main function is faster than all this object-oriented stuff, but is probably a bad idea. I'm agreeing with that.
|
|
|
|
|
Mike Marynowski wrote: I would be a bit surprised if your first point was true. Please give me a couple examples of optimizations that depend on function size.
Small functions can be in-lined by the optimiser, so this code
foreach (string x in y)
{
    // the foreach iteration variable is read-only, so use a local
    string s = x.Trim().ToLower();
    DoSomething(s);
}

void DoSomething(string s)
{
    if (s.StartsWith("hello"))
    {
        s = "test";
    }
}
might be optimised to this
foreach (string x in y)
{
    string s = x.Trim().ToLower();
    if (s.StartsWith("hello"))
    {
        s = "test";
    }
}
thus avoiding a code jump\stack update etc.
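On the C# side, inlining is a JIT decision rather than something the language exposes directly, but you can hint at it with MethodImplOptions.AggressiveInlining (a real attribute; the Normalize helper here is invented):

```csharp
using System;
using System.Runtime.CompilerServices;

static class InlineDemo
{
    // a hint, not a guarantee: the JIT may still decline to inline
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static string Normalize(string s) => s.Trim().ToLower();

    static void Main()
    {
        Console.WriteLine(Normalize("  Hello ")); // hello
    }
}
```

Without the attribute, the JIT applies its own heuristics, which are based largely on method size, which loops back to the point about small functions being easier to optimize.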
|
|
|
|