The version before that had some problems with the optimisers: I had memory leaks and strange crashes. Colleagues said to try disabling optimisation, and suddenly things got a lot better.
Wow! I'm amazed by your reasoning. Not to mention the fact that you wrote a significant portion of code without trying to build your project and run it in release mode! And you're working on a safety-critical application?
Well, of course it was a lot more complicated than that... I do indeed test in release mode too. We all do, and on the target as well. Actually we spend a lot of time testing.
As it happens, a lot of my code is inherited C code (and a few other languages); there are many things in it I don't like and can't change, and a lot of things I have improved over the years. It's not bad as safety-critical stuff goes. I have attended hardware trials and I am still alive...
When I did my "Convert from Unix to Visual Studio" course some years ago, it was received wisdom that one didn't use the optimisers. Things have changed since then. Me too, if you read the posts here. I am now a convert and am preaching to my colleagues.
... and, on a charitable day, the performance of the team and company I work for.
With regards to performance as it applies to software, this is best handled as part of everyone's continuing education in software development and not part of any specific project. After all, do people really think to themselves 'well, I was going to go with the bubble sort but for performance reasons I'd better not'?!?
In my experience, most performance problems are a result of algorithm design. Sometimes the optimizer can save you, but most times it can't. If debug builds show poor performance, then it's time to start thinking about fixing the problem right then rather than waiting.
What about performance problems that are not algorithmic but architectural? The use of chatty, envious interfaces over less chatty, more encapsulated ones. Going single-threaded because it is easier (algorithmically and conceptually) than using multiple threads, thereby wasting the benefit of the multiple cores/processors that are most likely available. Implementing a system that requires all components to be deployed locally, rather than using a distributed, service-oriented approach that could offer significantly greater scalability, composability, etc.
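To make the "chatty vs. chunky" point concrete, here is a minimal, hypothetical C sketch (the names `get_item` and `get_items` are invented for illustration). Pretend each call crosses a process or network boundary with a fixed per-call cost: the chatty interface pays that cost once per element, while the chunky one pays it once per batch.

```c
#include <assert.h>
#include <stddef.h>

#define NITEMS 100

static int store[NITEMS];

/* Chatty: one round-trip per item. */
static int get_item(size_t i) { return store[i]; }

/* Chunky: one round-trip for the whole batch. */
static size_t get_items(int *out, size_t max)
{
    size_t n = max < NITEMS ? max : NITEMS;
    for (size_t i = 0; i < n; ++i)
        out[i] = store[i];
    return n;
}
```

Both interfaces return the same data; the difference is that fetching all 100 items costs 100 boundary crossings with the first and one with the second, and that ratio, not any inner-loop tweak, is what dominates in a distributed deployment.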
Not all performance issues are purely algorithmic. Algorithms solve business problems, but they don't really solve technical, non-functional problems. Algorithms can indeed be written poorly (e.g. BubbleSort vs. QuickSort), but that won't matter that much if you need to perform that algorithm on dozens, hundreds, or thousands of independent sets of data simultaneously.
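As a concrete illustration of the algorithmic point, here is a sketch in C putting a hand-rolled bubble sort next to the standard library's `qsort` (typically a quicksort-family algorithm, O(n log n) on average). Both produce the same answer; only the cost differs, and no optimizer flag closes that gap once n gets large.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* O(n^2): compares and swaps adjacent pairs until the array is sorted. */
static void bubble_sort(int *a, size_t n)
{
    for (size_t i = 0; i + 1 < n; ++i)
        for (size_t j = 0; j + 1 < n - i; ++j)
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}

/* Comparator for qsort; avoids overflow that plain subtraction risks. */
static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}
```

At n = 8 the two are indistinguishable; at n = 10 million, the bubble sort does on the order of 10^14 comparisons against qsort's roughly 2 × 10^8, which is the kind of gap no optimizer can recover.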
Point well made. That's what I had in mind, but it certainly isn't what I wrote.
FYI, there's another class of design-related performance bug I've had to find and fix -- blowing the CPU's data or instruction cache while processing in a loop. It is simply amazing how much performance you can lose doing either. Sometimes the optimizer will save you, but a lot of times it can't.
In my experience, optimizers are good at making good code run just a little faster or beat on the CPU just a little less. They rarely fix true performance problems.
Good point about the CPU cache... that's a level of optimization that most programmers rarely think about. I guess it could be considered a drawback of high-level languages. In that case, proper algorithmic tuning is key, as a blown cache can really destroy algorithmic performance.
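A minimal sketch of how traversal order alone can blow the data cache: the two functions below do identical arithmetic over the same array, but one walks memory sequentially while the other strides across it.

```c
#include <assert.h>
#include <stddef.h>

#define DIM 1024

/* Row-major walk: consecutive addresses, so each cache line fetched
 * serves several iterations. */
static double sum_row_major(double a[DIM][DIM])
{
    double s = 0.0;
    for (size_t i = 0; i < DIM; ++i)
        for (size_t j = 0; j < DIM; ++j)
            s += a[i][j];          /* cache-friendly */
    return s;
}

/* Column-major walk: strides DIM doubles (8 KB) between accesses, so
 * for large DIM nearly every load can miss the cache. Same O(DIM^2)
 * work, same result -- very different wall-clock time. */
static double sum_col_major(double a[DIM][DIM])
{
    double s = 0.0;
    for (size_t j = 0; j < DIM; ++j)
        for (size_t i = 0; i < DIM; ++i)
            s += a[i][j];          /* cache-hostile */
    return s;
}
```

A profiler will happily tell you the program spends all its time in the inner loop of the second version, but only a hardware-counter view (cache-miss events) shows *why*, which is part of what makes this class of bug hard to spot.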
When moving large amounts of data, using the movnt assembler instruction (and its relatives) can reduce time by 50% or more: movnt writes data to memory without writing it back to the cache, whereas the standard mov instructions write data to memory AND the CPU cache.
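For the curious, here is a sketch of what non-temporal stores look like from C rather than raw assembler, using the SSE2 intrinsic `_mm_stream_si128` (which compiles to MOVNTDQ). This assumes an x86 target, 16-byte-aligned pointers, and a size that is a multiple of 16; a production version would handle unaligned heads and tails and fall back to memcpy for small copies, where streaming stores can actually be slower.

```c
#include <assert.h>
#include <emmintrin.h>   /* SSE2: _mm_stream_si128 compiles to MOVNTDQ */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy `bytes` bytes without filling the cache with data that will
 * not be re-read soon. Requires 16-byte-aligned src/dst and
 * bytes % 16 == 0 (illustrative sketch only). */
static void copy_nontemporal(void *dst, const void *src, size_t bytes)
{
    __m128i *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;
    for (size_t i = 0; i < bytes / 16; ++i) {
        __m128i v = _mm_load_si128(s + i);  /* normal (cached) load  */
        _mm_stream_si128(d + i, v);         /* MOVNTDQ: bypass cache */
    }
    _mm_sfence();  /* order the streamed stores before later reads */
}
```

The `_mm_sfence()` at the end matters: non-temporal stores are weakly ordered, so without the fence a subsequent reader is not guaranteed to see the copied data.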
"In which phase of your software project do you actually care about performance?"
Obviously you should "care about" performance all the way through from the very beginning, but that doesn't mean you actually do anything about it.
"When do you decide that the go-faster pedal needs to be applied?"
You can't hit the "go-faster pedal" until you have at least a working prototype to which the performance can be compared. And you can't truly judge performance until you're in the production environment.
On my last job I had a Windows Service that read data from a text file created by a third-party product and inserted it into an SQL Server database.
In production it could process one hundred rows per second, but in test it could only process ten rows per second -- and therefore couldn't keep up with the data. Should I have tried to improve the system so that it could keep up even though there was no problem in production? I don't think so.
There is a bit of a problem with the wording of the answers in that there is more than one “correct” answer (IMO) …
One should keep “performance” in mind when coming up with the initial design, but there is a problem in trying to “optimize” before the system is operational and a regular profile has been established.
Any “optimizations” attempted before that time will most likely drain resources and can prove ineffective or even detrimental until a realistic load pattern is established (after going live).
If there is a requirement for specific performance targets, I'll make it something I think about from the start. The whole design will include performance considerations right from the beginning.
On the other hand, if there aren't any specific performance requirements, I'll pay less attention to it until we have implemented some functionality.
It also depends on the level of risk of bad performance. Server-based apps have a higher risk of performance issues than basic client-side-only GUI apps. These factors will affect where I plan to start considering performance. (Server apps get consideration from the start of the requirements phase; simple client apps with nothing complex probably don't get any consideration at all unless something actually doesn't perform well.)
As stated in previous posts (by someone else), performance is a factor within the bounds of the implementation. It's not possible to fully consider performance at the design stage, for several reasons, the biggest being that the design is driven by the feature set of the application, which is usually set by the client's needs.
Everybody agrees that performance is about 'doing things the best possible way', and that usually involves terms like 'Speed', 'Ease of use' and 'Simplicity'. But things get a bit complicated when those same words come from the mouth of an 'End User', a 'Developer', or a 'Manager'.
Tip: when discussing 'Performance', make sure you speak the 'same language' as the other guy(s)!
PS: After a few years of trying, I've come to the inevitable conclusion that performance is located at the very spot where the outer boundaries of the 'acceptable software' criteria of everyone involved meet.
It can be done beforehand at large corporations that meet for months before the project is given to a room of 20 programmers. But in the 'real world', you are absolutely correct. I'm given a project from the start. I have to design it, code it, and distribute it in a specified time period. I don't have time to optimize anything until I start actually beta testing at a few of our locations.
I am not advocating any approach, just observing that in reality a lot of projects are challenged, and that doesn't seem to align with what people say they are doing. It is obviously best in theory to think about performance all the time, but that is not real life. The more skilled the developer, the more they can cheat and get away with it. By cheat I mean take shortcuts. Shortcuts are for those with battle scars and knowledge.
I think the phrasing of the 1st choice explains its popularity:
<i>I care about performance during both the design and implementation phases</i>
It just says that you <i>care</i>, or are mindful of performance during the design phase.
It doesn't say anything about actually optimizing the code.
It's a pretty safe assumption to think that most developers at least care about the idea of performance during the design phase.
I suspect that the percentage would be much lower if it were phrased in terms of actually doing something about it (iteratively testing and optimizing the code during development).
Wonder what percentage of the 54% who say they optimize their code during design and implementation are actually enhancing the truth?
Wonder what percentage of the 54% think the design phase is bouncing a few ideas off their coworkers while getting coffee before they sit down and type in a pile of code. In which case there is always someone who says something like, "No, don't do that, use the STL as it's already optimized and tested", so there you go: optimization during the design phase.
... and the initial design (if you still take your time for such quaint oddities) has the most influence on final performance - perceived and real.
Personally, I love the idea that Raymond spends his nights posting bad regexes to mailing lists under the pseudonym of Jane Smith. He'd be like a superhero, only more nerdy and less useful.