|
std::auto_ptr was deprecated as far back as C++11, and it was removed outright in C++17, so it's not valid C++ anymore.
And RAII is not the same thing as smart pointers.
modified 9-Feb-21 7:53am.
|
|
|
|
|
I mostly agree, except about PLINQ. If LINQ is like shoveling money into a brazier, PLINQ is like hiring a gang of temps to shovel the money in even faster.
|
|
|
|
|
Hahaha, that's fair I suppose, but at least it scales out, allowing you to throw hardware at the problem since the software sucks as a matter of course.
Real programmers use butterflies
|
|
|
|
|
LINQ was engineered by the same person (or people?) who engineered inline SQL in FoxPro.
That was the part I missed the most after abandoning FP / VFP ... and I rejoiced when LINQ arrived.
If you write LOB apps, particularly ERP, you will understand.
LINQ sucks for those who think "partitioning" a problem is for weenies. They also mangle their SQL, assuming the "optimizer" will always sort out the mess.
(soap box off)
Paradox: sometimes you have to write "more" code to get better performance.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
modified 8-Feb-21 15:46pm.
|
|
|
|
|
honey the codewitch wrote: "It creates too many objects too quickly"
Can you elaborate?
As far as I know you have an extra enumerator per operation, so for example:
var filtered = new List<Whatever>();
foreach (var whatever in whatevers)
{
    if (whatever.IsValid)
    {
        filtered.Add(whatever);
    }
}

has one enumerator, while

var filtered = whatevers.Where(x => x.IsValid).ToList();

has two enumerators (the Where will call "the original" enumerator, while the ToList will call the WhereEnumerator).
Other than that it's the same, except that the LINQ example has an extra anonymous function (which isn't anonymous after compilation) and an extra function call for each iteration; but if the where clause is complicated enough you may end up doing that in the first example too.
That's hardly a performance penalty, and you just saved seven lines of code and made it more readable to boot.
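You can actually watch the laziness and the per-element delegate call for yourself. A minimal sketch (the int list and logging predicate are just stand-ins for whatevers and IsValid):

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var whatevers = new List<int> { 1, -2, 3 };

        // Where() only builds a lazy wrapper; nothing is enumerated yet.
        var query = whatevers.Where(x =>
        {
            Console.WriteLine($"predicate called for {x}");
            return x > 0; // stand-in for IsValid
        });

        Console.WriteLine("before ToList");

        // ToList() pulls the Where enumerator, which in turn pulls the
        // List<int> enumerator: two enumerators, but still one pass.
        var filtered = query.ToList();
        Console.WriteLine($"kept {filtered.Count} items");
    }
}

"before ToList" prints before any of the predicate lines, which shows the deferred execution, and each element is visited exactly once despite the two enumerators.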
The readability further increases when you do stuff like
whatevers.Where(x => x.IsValid)
    .Select(x => new { Name = x.Name, Age = x.Age })
    .OrderBy(x => x.Name)
    .Take(10)
    .ToList()

at the expense of three extra enumerators.
Object instantiation is cheap, or so I've been told.
You also missed one: LINQ to Entities (or LINQ to SQL), which is also LINQ, but won't enumerate in memory at all, because the entire query is compiled to an expression tree and translated into SQL.
Let's not talk about the performance implications of that one
For most operations it's not significantly slower though, while it saves a ton of SQL development time and makes your database access strongly typed, meaning fewer bugs, etc.
The extra milliseconds the user is waiting are won back in hours of development time
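To make the LINQ to Entities point concrete: the reason it doesn't enumerate in memory is that the lambda is captured as an expression tree rather than a compiled delegate, so a provider can walk it and emit SQL. A minimal sketch of the difference (no actual database involved):

using System;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        // A delegate: compiled IL, opaque, can only be invoked.
        Func<int, bool> asDelegate = x => x > 0;
        Console.WriteLine(asDelegate(5)); // True

        // An expression tree: data describing the lambda, which a
        // LINQ provider can inspect and translate into SQL.
        Expression<Func<int, bool>> asTree = x => x > 0;
        Console.WriteLine(asTree.Body); // prints "(x > 0)"
    }
}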
|
|
|
|
|
Great. Now try using it to generate a LALR(1) table or for that matter, even compute FIRST and FOLLOWS sets.
You'll see my issue with LINQ really quickly. Especially if you benchmark it.
I guess it all depends on what kind of code you're writing. These days I don't do a lot of bizdev, and I haven't touched a real database in years.
To add to that, I don't think you're considering all the extra overhead from LINQ not combining operations that could be combined into a single iteration, e.g. iterating twice where once would do. It just isn't smart enough.
It's also problematic (and this isn't specific to LINQ, but more of a general problem with functional programming) to do certain kinds of queries, because some queries can be orders of magnitude faster if you're allowed to keep some state around. There's just no facility for that in LINQ. I don't blame LINQ for that, since it's more of a functional programming paradigm issue, but it still keeps me from being able to use it for a lot of what I would like to use functional programming constructs for.
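To put a concrete shape on the "keep some state around" point (my own illustration, not from the post above): finding the first duplicate in a sequence is a single pass if you can carry a mutable set across iterations, while a formulation built purely from query operators tends to re-scan. You can smuggle the set into a closure, of course, but then you've left the functional style the operators are designed around.

using System;
using System.Collections.Generic;
using System.Linq;

static class Dupes
{
    // Hand-rolled: one pass, O(n), thanks to mutable state.
    public static int? FirstDuplicate(IEnumerable<int> items)
    {
        var seen = new HashSet<int>();
        foreach (var item in items)
            if (!seen.Add(item)) // Add returns false if already present
                return item;
        return null;
    }

    // Stateless query operators: re-enumerates the prefix for every
    // element, turning O(n) into O(n^2).
    public static int? FirstDuplicateLinq(List<int> items) =>
        items.Select((x, i) => new { x, i })
             .Where(p => items.Take(p.i).Contains(p.x))
             .Select(p => (int?)p.x)
             .FirstOrDefault();
}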
Real programmers use butterflies
|
|
|
|
|
|
Sander Rossel wrote: "For my customers a few extra objects, iterations and even MB's are no issue at all (they don't even know it exists), but my hourly rate is"
Yeah, I can understand that, but I'm also glad that's not my situation. I like having to cram as much functionality as I can into modest hardware.
Real programmers use butterflies
|
|
|
|
|
My customers like having me cram as much functionality as I can into modest invoices
|
|
|
|
|
My customers are willing to pay for me because the alternative is worse.
Real programmers use butterflies
|
|
|
|
|
My customers are willing to pay for me, but they'll never admit it
Although I've had one saying "Sander, I trust you. If you send me an invoice I can be sure you're charging an honest price and that it's not too expensive."
Which of course raised his invoices by 10% (no really, I'm joking!)
I recently had a talk with two customers (from the same company) and one of them said "Sander, it's very nice what you've made for us, but that invoice was quite high."
To which the other person replied "Think of it like this, [name], if it wasn't for Sander we wouldn't have it at all and we really need it."
Despite me costing a lot of money, I don't think I'm expensive.
I add direct value with my custom software, and most of my competition is slower and/or more expensive
All in all it's a fun job and haggling is just a part of it.
|
|
|
|
|
I'm a jerk, I guess.
I don't haggle. I charge based on how much I like the project, and I don't take jobs I don't like.
I am fair about my invoices, and I itemize my time, but again, I don't haggle. Pay me or find someone else.
I cost what I cost, and I'm always told I'm worth it when I'm told anything.
One of my current clients actually told me I "walk on water" so I had to reduce his expectations for fear of drowning.
Real programmers use butterflies
|
|
|
|
|
I wish I had that luxury, but to me (work > no work), and many of my potential customers would rather work with Excel, an old system, or even paper than pay "too much" for their custom software
My hourly rate is fine though, so there's some room for haggling, but only for specific projects.
I don't doubt that if I'd cut my rate in half they'd still haggle though
And sometimes bringing a customer in is more important than money if I know they'll bring me more work in the future.
At the end of the day I've got bills to pay, and the only way to pay them is with paying customers.
|
|
|
|
|
Object instantiation is cheap, or so I've been told.
It can be (depends on the object of course), but the problem starts when the garbage collector comes calling
|
|
|
|
|
Right? I didn't want to get into it and potentially start an argument over GC arcana, but basically the concept behind a GC isn't so much that you save on allocations as that you pay for them after-the-fact.
I recently had a project that needed fast pointer-increment allocation like a GC has, but I didn't want to pay for the object sweeping, so I simply made it so my little heaps could be defined to a fixed size (out of which allocations would come) and you couldn't delete objects at all. You could clear the entire heap in one sweep, though, invalidating all the data at once. Practically free, and the use case was such that it handled everything I needed.
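For what it's worth, here's a minimal sketch of that kind of bump allocator in C# (my illustration, not the actual project code): allocation is just an offset increment, individual frees don't exist, and resetting invalidates the whole heap at once.

using System;

class Arena
{
    readonly byte[] _buffer;
    int _offset;

    public Arena(int size) => _buffer = new byte[size];

    // Pointer-increment allocation out of the fixed-size heap.
    public Span<byte> Allocate(int size)
    {
        if (_offset + size > _buffer.Length)
            throw new OutOfMemoryException("arena exhausted");
        var span = _buffer.AsSpan(_offset, size);
        _offset += size;
        return span;
    }

    // "Clear the entire heap in one sweep": practically free,
    // but it invalidates everything handed out so far.
    public void Reset() => _offset = 0;
}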
GCs aren't all that and a bag of chips.
But then i'm not telling you anything you don't already know.
*hides from @SanderRossel*
Real programmers use butterflies
|
|
|
|
|
|
Assuming PLINQ's implementation is not terrible, you're probably incurring locking overhead. It doesn't make sense to try to use any kind of parallelization in the following scenarios:
a) Your problem has interdependent components, such that you can't decouple the work done by B from the result of A, and C depends on the result of both, so you're elephanted.
b) It doesn't do you a heck of a lot of good to query the same source in parallel with itself. It's hard to give a good example in PLINQ, but you want parallel op A to use a different data source than B. In an RDBMS this principle is easier to understand: if I run a join across two tables, there's not a lot I can do to make it parallel *unless* each table is on a separate drive ("spindle" in DB parlance), meaning the read operations on table A aren't waiting on the read operations from table B, since two different drive controllers are working in parallel. The same basic idea applies to PLINQ.
If a or b is an issue, you'll probably end up incurring more overhead than you gain in throughput
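For contrast, here's the kind of case where PLINQ does tend to pay off (a hedged sketch of mine): CPU-bound work, no interdependencies (so (a) doesn't apply), and an in-memory source with no shared bottleneck (so (b) doesn't either).

using System;
using System.Linq;

class Program
{
    // CPU-bound and independent per element: safe to parallelize.
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    static void Main()
    {
        var primes = Enumerable.Range(2, 1_000_000)
                               .AsParallel()
                               .Where(IsPrime)
                               .ToList();
        Console.WriteLine(primes.Count);
    }
}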
Real programmers use butterflies
|
|
|
|
|
I tried it with some code from my long-past physics PhD that integrates some equations over time... I had a multidimensional field, and each dimension was in its own thread...
Mmm... come to think of it now, there was coupling between some variables, I think. I wonder if that was the cause of the slowdown... no matter... not sure where this code even is now ^^
|
|
|
|
|
It's very likely. It can be really easy to miss interdependencies in formulas.
Real programmers use butterflies
|
|
|
|
|
Is LINQ bad-ish, or orders of magnitude slower than those hand-written operations? I ask because I wonder about the performance implications myself, while also weighing code read-/maintainability (after all, if performance were all I cared about, I'd hand-optimize everything in assembly; engineering is all about trade-offs).
|
|
|
|
|
Usually it's not terrible. Not orders of magnitude slower for what most people seem to use it for - queries in business software.
However, don't use it for what I'd call "real" functional programming.
If you're going to write a parser generator or scanner generator, for example, you don't want to compute your tables using LINQ. In that case it *will* be orders of magnitude slower than most anything you could write by hand.
And I guess now you can tell what kind of software I write.
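If you want to see where a given workload falls, a quick Stopwatch comparison is a reasonable first sanity check (a rough sketch of mine; for anything serious, a proper harness like BenchmarkDotNet is more trustworthy):

using System;
using System.Diagnostics;
using System.Linq;

class Program
{
    static void Main()
    {
        var data = Enumerable.Range(0, 10_000_000).ToArray();

        var sw = Stopwatch.StartNew();
        var a = data.Where(x => (x & 1) == 0).Sum(x => (long)x);
        Console.WriteLine($"LINQ: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        long b = 0;
        foreach (var x in data)
            if ((x & 1) == 0) b += x;
        Console.WriteLine($"loop: {sw.ElapsedMilliseconds} ms");

        // Same result, different constant factors:
        Console.WriteLine(a == b);
    }
}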
Real programmers use butterflies
|
|
|
|
|
Ah, I understand. Thank you for the explanation! Full disclosure: I don't write generators. I prefer shouldering the burden of driving something data-driven from the get-go rather than going through the intermediate step of writing a generator (that generates something that gets the actual job done). I find the one-step approach way easier to debug, to adapt to future changing requirements (which will of course change, because that's what requirements simply do), and to teach to someone freshly joining the team.
Should I ever be explicitly required to write a generator (instead of getting things done one way or another), I'll heed your words.
|
|
|
|
|
It's not so much about the code generation per se, but about the type and number of iterations you'll be doing.
Consider the following source file that generates an LR table. The code is ugly because the algo is ugly; there's not much of a way around it. See the accompanying article for an explanation of the algo if you want:
Downloads: GLR Parsing in C#: How to Use The Most Powerful Parsing Algorithm Known[^]
The point in showing you this is the iteration code to generate things like the LRFA state graph.
When I say generate above, I'm not talking about code generation, but simply computation of tables.
Trying to do those things, i.e. massive recursive iteration, with LINQ is just a mug's game.
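To give a flavor of the iteration shape, here's a minimal sketch of FIRST-set computation (my illustration, not the article's code; nonterminals are strings and "" stands for epsilon). It's a fixpoint loop over mutable sets, and that repeated, stateful re-iteration is exactly what LINQ's operators don't model well:

using System;
using System.Collections.Generic;
using System.Linq;

static class FirstSets
{
    const string Epsilon = "";

    // rules: nonterminal -> productions, each production a symbol array.
    // Every nonterminal used on a right-hand side must appear as a key.
    public static Dictionary<string, HashSet<string>> Compute(
        Dictionary<string, List<string[]>> rules,
        Func<string, bool> isTerminal)
    {
        var first = rules.Keys.ToDictionary(nt => nt, _ => new HashSet<string>());
        bool changed;
        do // iterate to a fixpoint: stop only when no FIRST set grew
        {
            changed = false;
            foreach (var (nt, prods) in rules)
                foreach (var rhs in prods)
                {
                    bool allNullable = true;
                    foreach (var sym in rhs)
                    {
                        int before = first[nt].Count;
                        if (isTerminal(sym))
                        {
                            first[nt].Add(sym);
                            changed |= first[nt].Count != before;
                            allNullable = false;
                            break;
                        }
                        first[nt].UnionWith(
                            first[sym].Where(s => s != Epsilon).ToArray());
                        changed |= first[nt].Count != before;
                        if (!first[sym].Contains(Epsilon))
                        {
                            allNullable = false;
                            break;
                        }
                    }
                    // Empty production, or every symbol was nullable:
                    if (allNullable && first[nt].Add(Epsilon))
                        changed = true;
                }
        } while (changed);
        return first;
    }
}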
Real programmers use butterflies
modified 24-Feb-21 8:22am.
|
|
|
|
|
Speaking of ugly code due to an ugly algo: the VIF tables in the M-Bus standard are a bloody nightmare.
|
|
|
|
|