No, I don't. And you don't. But ... we've both seen it done, and sometimes in production code.
I think we have to accept that developers are not what they used to be (and "Hoorah!" for that in some cases), projects generally are a lot larger and more complex than they were, and that we have to change to languages which facilitate that.
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
But ... we've both seen it done, and sometimes in production code.
I have seen it many times in production code. And it would have been more, except that I started reviewing other people's code and catching it. I worked at a place with formal code reviews, and I was still the only one finding them.
Not to mention that these sorts of problems impact the server far more.
I think we have to accept that developers are not what they used to be
But applications now are not the same either. When I started I wrote the UI and the back end. Now I can't even effectively review the UI code and the developers cannot write the back end code.
Oh, I had a programmer returning stack pointers.
The third time he claimed it worked fine when he tested it (inside the debugger, duh), I fired him. The fact that he could not comprehend that a stack frame only exists for the duration of the function's invocation...
And that is part of the problem. People understand less and less. Coding used to be: understand first, then analysis, then development. Now it feels more like "crank out code"... And nobody realizes how deep the problems are.
Then there's the often hidden fact: development is only 20% of the budget; maintenance is 80% over the long haul. So SIMPLE, CLEAN code that is easier to maintain wins. When you need performance, do that: do it right with the right tools, and keep it clean and simple. And if it is unmanaged code, then you must run a memory analyzer/leak finder, or your code is not properly tested!
Having worked on a project with 250K lines of code, I was told it had a memory leak. A quick review of the code and I labelled it a Memory Sieve! The design guaranteed a leak: nothing would ever be freed, because children referenced their parents!
I did a little experiment a while ago - calculating prime numbers in C++ and in a handcrafted assembly version, same algorithm obviously. The C++ one ran quicker; I never found out why. CPUs are so damn complicated these days. In the glory days you could look at your instruction and know how many clock cycles it took. I think the on-chip caches change all that.
In my assembly days you'd get everything into registers and keep it there as long as possible. Perhaps these days the cache is good enough. Reluctantly I concede it's best left to the compiler.
The main thing in the C#/C++ debate for me, though, is that the C++ you see is so damn cryptic. I don't think it needs to be, but it usually just is. Maybe it's the mindset.
I suspect that an average C# coder will produce code that is less efficient than a skilled C++ coder to do the same job - but I also suspect that he'll produce it in less time, and it'll be more easily maintainable by an average developer. And with the performance of modern machines that's a critical factor in most cases. Additionally, I suspect that a skilled C# developer will produce better, faster code than an average C++ dev, and get it out the door quicker as well.
By the same token, with newer machines having so much more memory than before, you could argue that garbage collection / freeing unused memory can also be ignored in short-running programs, or where you're only allocating space for relatively small buffers/structures. [Which could also have dramatic results for performance.]
OTOH, using the excuse "today's machines are faster / have bigger memory... so optimisation and/or garbage collection matters little" is how that code gets copied or extended from small, short-running programs into big, long-running ones.
OP's "sound" app may work fine in optimised C# [and/or without cleaning up unused memory] but when added to a video editing suite would that still be true?
On my bicycle I can match (if not beat) the bus on a 10 mile commute and get away without refueling on the way, but let's make that 100 miles.
If performance can be improved just by throwing more hardware at it, it probably makes more sense to have an average developer write good, maintainable code than having a rock star developer write code that only he has the ability to read and maintain, even if that code manages to squeeze every little bit of performance out of the hardware.
Simply because developer time is a lot more expensive than hardware.
No, not really. I think Microsoft likes to occasionally upload your entire browsing history so it can 'improve your Windows experience', and if that were to happen at the same moment you're filling your buffers, you may come unstuck. Ultimately, we're all slaves to the thread scheduler.
But that's just Windows. I've mucked about with thread and process priorities, but it doesn't seem to make much difference.
In terms of the driver, I am using ASIO. And so far it's all single threaded too.
I used unmanaged methods with C# to do some image processing.
For fun I was writing something to detect hand-drawn rectangles and circles - so that you can turn a hand-drawn diagram into vectors.
I found that the unmanaged methods took around a third of the time of the managed code - so it does make a bit of a difference.
“That which can be asserted without evidence, can be dismissed without evidence.”
System.Numerics is better than nothing (I recommend it over "doing nothing" at least), but it's really difficult. By which I mean more difficult than using SIMD intrinsics in C++. The API is full of holes (no right shift?? no shuffle? no special operations?) and landmines (you'd think that, e.g., multiplying a vector by a scalar is equivalent to broadcasting that scalar and then doing a vector multiply, but no, you have to do the broadcast manually and ban scalar*vec from your code).
Of course programming is an exercise in "poking the code until the asm looks good" either way, but the System.Numerics API makes it harder to get it right. Sometimes impossible.
The newer SIMD API in .NET Core 3 (System.Runtime.Intrinsics) is more complete and will probably be better. It's still tricky to get good codegen in all cases, though. For example, it's not so easy to force it to broadcast from memory and avoid the load-then-broadcast anti-pattern (which costs a shuffle µop on port 5 in addition to the load, whereas a broadcast from memory only costs a load).
C++ is really a systems language, for large systems, where lots of incremental performance gains (or losses) add up. And of course, if you looked under the hood of the C# libraries, I'm guessing there's probably a lot of C++ down in there.
Languages like C# are good if you are sitting at the top of the food chain. If your code needs to live from the metal up to the UI, then something like C++, with all its issues, is likely more practical. It's one of those languages that maybe doesn't do any one thing the absolute best, but it does a broad spectrum of things well. If the code base has to cover that broad spectrum, it adds up.
If you end up having to do lots of unsafe C#, you sort of give up the biggest advantages of C# without getting the real benefits of C++.
Explorans limites defectum