You should think about performance all the way through the project, but don't become obsessed with it. Code for flexibility and modularity so changes can be made down the line without causing too many ripples. Measure frequently throughout development; you don't want to end up trying to make an impossibly slow product shippable five minutes before the deadline.
No, not really. Premature optimization can in many cases lead to quite unreadable code, and it also eats programming time that might be better spent elsewhere. But a lot of people miss both of these points.
Amen! Most programmers don't know where to optimize anyway... I've seen many a gyration where people think they're being clever by tightening up the code in a (way too large) function while the function itself is making its own copy of a (very) large object... and reinitializing it before it's used... sometimes more than once.
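The copy-before-use waste described above is easy to picture in C++. This is a sketch only; the names (`BigRecord`, `sum_wasteful`, `sum_efficient`) are invented for illustration:

```cpp
#include <vector>

// Hypothetical "very large" object, as in the anecdote.
struct BigRecord {
    std::vector<double> samples;
};

// Wasteful: the pass-by-value parameter copies the whole object,
// and the local copy duplicates it again before it is ever read.
double sum_wasteful(BigRecord rec) {      // copy #1 (by value)
    BigRecord local = rec;                // copy #2 (needless)
    double total = 0.0;
    for (double s : local.samples) total += s;
    return total;
}

// Better: a const reference makes no copy at all.
double sum_efficient(const BigRecord& rec) {
    double total = 0.0;
    for (double s : rec.samples) total += s;
    return total;
}
```

Both functions return the same sum; the difference only shows up in memory traffic, which is exactly why this kind of waste tends to survive code review.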
....do you think that after you write the entire code, you can optimize it? Just maybe by tweaking a few bits?
Optimization begins with clever coding practice and goes deep into the data and code structure. You cannot do that later; you can only do it at the beginning. Once the code and data structures are laid down, you won't go back and change them, because that would mess up everything else.
And the statement most people inexperienced in optimization make, that "optimized code is messy and hard to read", is just a myth. Optimized code is clean, neat, and usually well structured, because doing things well means doing things fast.
In my opinion, you're describing efficient code, not optimized code.
In some cases, once the code is implemented as efficiently as it can be it still doesn't reach performance goals (or the goal is to be as fast as possible). That's where optimization comes in and it generally leads to questions of "why was it done this way?" and unreadability.
For example, using loops and methods makes your code easier to read and more maintainable, but it may not be as fast. An optimizing compiler can therefore unroll the loops and inline the methods to yield faster compiled code. (You get the best of both worlds.) If you unroll your loops and inline your methods yourself rather than letting the compiler do it, anyone who reads your code will think you're an idiot, and they could be right.
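A sketch of that trade-off in C++ (function names are invented): the plain loop is what you should write; the hand-unrolled version is roughly what an optimizing compiler may generate for you behind the scenes.

```cpp
#include <cstddef>

// Readable version: let the optimizing compiler unroll this.
long sum_plain(const int* a, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) total += a[i];
    return total;
}

// Hand-unrolled by a factor of 4: what the compiler might emit,
// and what a human should usually NOT write by hand.
long sum_unrolled(const int* a, std::size_t n) {
    long total = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        total += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; ++i)          // leftover elements
        total += a[i];
    return total;
}
```

Both compute the same result; the unrolled form just trades readability for fewer branch instructions, which is precisely the kind of transformation best left to the compiler.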
At any rate, if high performance is essential, you should be writing in assembly, not managed code.
Performance takes effort and resources. Sometimes those resources are better spent in other areas.
You will spend those resources now, or later as soon as your users discover that performance falls short, and that will hurt your users' respect and your business's image. You will get bogged down solving bugs rooted in performance problems, and the minimum hardware requirements will raise the cost of your application. Many factors suffer when you neglect performance.
The money you saved on resources at the beginning you will pay later, and you may need to spend even more to rebuild your whole application to solve the performance issues.
Performance is a very important issue when developing medium to large applications, and it should get a share of your project budget in order to minimize future performance problems with customers.
I care for performance all the time.
But I can't STOP development in the first phase just because everything must work with the highest performance.
Things must be done using good performance practices first, but only after it all really works should time be spent optimizing problematic calls. After all, if it is well done, it is only a matter of changing some methods without changing everything (and that's why the design phase must care about performance).
In my experience, it is always a team effort... I mean, a good team can deliver a good result, while a good engineer on a bad team has to struggle a lot. For software projects, performance matters from day one. Sometimes it mainly depends on decision-making skills, and a good decision can change the result.
I never use the optimiser in Visual Studio; in my experience it makes code worse, not better.
But I worry about performance a lot, mainly because my target is a lot slower than my development machine (it may be slow, but you can drive a tank over it and it still works). Over the years I have got into the habit of writing code to be efficient anyway. I always try to save space as well as time; you can tell I used to write assembler. I save every microsecond I can.
So as a rule my code is pretty much as fast as it can be, but if it is too slow for the target I can use the debugger to find where it's wasting time and do something about it. There is always something you can do.
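For finding where the time is wasted when a full profiler isn't available on a slow target, a minimal bracketing stopwatch is often enough. This is a sketch in C++, not any particular vendor's tooling:

```cpp
#include <chrono>

// Minimal stopwatch (steady_clock, so it's monotonic) for narrowing
// down a hot spot by bracketing suspect sections of code.
class Stopwatch {
    std::chrono::steady_clock::time_point start_ =
        std::chrono::steady_clock::now();
public:
    double elapsed_ms() const {
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - start_).count();
    }
};

// Usage: bracket a suspect section and log the result.
//   Stopwatch sw;
//   do_suspect_work();
//   log(sw.elapsed_ms());
```

Bisecting with a few of these around suspect calls usually narrows a slowdown to one function, after which a debugger or disassembly view can take over.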
Erm, I'm assuming you mean the C#/VB.NET compiler and JIT optimizations when you say "optimiser in Visual Studio". And if that is really true, I think it's ludicrous!
Any optimizations you make purely in C#/VB.NET code are at a very high level. You cannot make the kinds of optimizations the compiler can, no matter how hard you try, because many IL constructs can only be emitted by the compiler itself (unless you write IL directly). The JIT makes even lower-level optimizations, at the machine-code level, tuned specifically for the operating environment (physical hardware, virtual resource allocations, etc.) in which the code is actually running (and those aspects may change from run to run). No matter how much you optimize at the code level, you cannot foresee every operating environment and account for every single optimization.
Optimizing at a high level is only one level of optimization, and it will only take you so far. There are additional levels of optimization that SHOULD be done, but which a developer should not have to worry about (hence the reason we use high-level languages). I think it's a load of crap that you don't use the compiler optimizations, and I think it's humorous that you actually think you're doing a better job than the compiler and JIT by "saving every microsecond you can."
Give me a break. Use the compiler for what it is, a tool to reduce your workload and handle significant, lower-level optimizations for you.
I should have said, at the moment I use Visual Studio only for C++. I just got a new version (version 9 SP1). The version before that had some problems using the optimisers, I had memory leaks and strange crashes. Colleagues said try disabling optimisation, and suddenly things got a lot better. Maybe the new version is better. I will give it a try when I get time, since you recommend it. Saving microseconds is a habit of mine, not a work ethic. Actually, some of my datasets are very large indeed, and shaving off a few microseconds on every loop can make a measurable difference. The customers are really interested in seconds; I gather their survival depends on it. Soon we will get a more modern processor and everything will be faster.
I use other tools for tidying my code, like Together and DevPartner. All very, very useful. My code is safety critical and I have to prove I have no memory leaks, no untested code, no uncalled code, no uninitialised variables, etc, etc. We have long lists of guidelines, which we can enter in the tools to check our code for us. We don't get to choose our tools, they are sent from above.
Despite its quirks Visual Studio is a fine thing. I came from Unix originally, and Visual Studio, along with Visual Basic for Applications, was what persuaded me I could work in Windows after all. I'm not sure I would want to go back to vi and Emacs now. I liked the total control, but Visual Studio takes a lot of work off my hands, the interface to external stuff just works, I don't have to mess about with it.
VBA has a better debugger though, the ability to move the pointer back and try again is brill.
Some time later. I switched on Full Optimise for release mode and it seems to work OK. The older colleagues think it is risky; the younger ones don't see the problem, they optimise every time. Well, I have to run a test programme this week, so I shall leave the optimisation on and see how it goes.
Contrary to what my husband says, I do listen to advice!
Well, when it comes to C++, I can't say. Visual Studio has never had the greatest C++ support. This is where C# truly shines. Like VB, it has some stunning debugger capabilities, but that isn't the best part about it. Visual Studio and C# were designed to go together, and the static analysis of C# is truly amazing. You generally don't need additional tools to verify the vast bulk of your code for correctness (Having ReSharper is a big bonus and offers some improvements, but Visual Studio itself covers most of it.) C# also has a progressive compiler/optimizer which does phenomenally well.
While you may be a C++ gal, I highly recommend looking into C#. You won't have a problem with the syntax, and the functional improvements it has over C++ will likely improve your productivity another order of magnitude beyond what you have now (similar to the difference between vi/Emacs and Visual Studio). You still need to write optimal code to gain the best performance, but doing so isn't nearly as much of a chore as it is with C++.