It slowed things down to a dead crawl, and gave me over 8000 warnings. Of those, roughly 7800 were complete rubbish, yet considered so Very Very Very important (in someone's imagination) that they couldn't be turned off. Pitched it after a few hours.
It seems to me that there's a reasonable argument that the thing that creates the most bugs in software is the fear of introducing bugs into the software. The fear is not unreasonable either. If you do semi-major surgery on a large piece of code, almost guaranteed you'll introduce new bugs, and possibly even some big, hairy ones with painful consequences.
But that fear of semi-major (up to major) surgery so often seems to push folks towards making only incremental and/or localized improvements. And that localized approach also tends to create islands or layers of disparate style, technique, and tools. Newer code wants to move forward but can't pull the rest along. That doesn't mean folks are ignoring the cracks in the mortar between the parts; they are just so loath to pull it all apart and put it back together because of the possible damage.
In the end, does that ultimately lead to worse software? I think it does. But of course companies don't sell software in the end, they sell it now and have to deal with the consequences of that. And this is one of those scenarios where there kind of isn't a middle path. The middle kind of becomes the muddle that I describe above, and there's really no moderate way to fundamentally re-tool and you just have to take the pain and get it over with.
The optimal thing would be to just start a new code base, taking all the lessons learned and building it right. But that's probably a fool's paradise. It hardly ever happens that way. The expense, the complexity, the teasing out of all the intricate details that have been woven into the code and perhaps not really remembered by anyone, etc... Not being able to give up the folks who really understand the current code base, so maybe different folks work on the new one and don't really have that deep understanding of what went wrong the first time, rolling their eyes at the old-school crowd and their outdated concerns. And the time scale would likely have to be way too short in order to be commercially viable, so it may end up just being a new and expensive collection of different compromises.
Or of course Version 2 Syndrome with a vengeance is a possibility, and it takes twice as long to have enough meetings to decide on further ways to explore possibilities for a deeper understanding of possible modalities for scaling leverage of something or another, than it would have to just have given a small crew of talented people a room and a lot of coffee and left them alone.
Maybe in the end, the above does happen, it's just that a different company does it. Anyhoo, I'm rambling while waiting for my coffee...
It shows, as in I usually just refactor things to my liking. But this refactoring tendency of mine has been known to irk people... because, you know, we don't do things "that way" here
(whatever "that way" is... I think it stands for "I wouldn't have written it that way")
A thoughtful post. A former colleague of mine called it "verification inertia", which I think was very apt. Software would be refactored more frequently if it were easier to retest. But tests are too often lacking, or the test environment is too difficult to set up.
Wholesale rewriting of code is fraught with peril. It's not often attempted, and I've seen it fail more often than succeed. Here's something I wrote about it last month: [^]
I have re-written things a couple of times and in my experience it was worth every second I invested in it.
Performance increases, load reductions, clarity and ease of maintenance...
But I was lucky: I was mostly a one-developer team (in two of those projects I had a couple of additional programmers I was training), I had fought the bugs of the legacy versions long enough to be 100% sure of what was definitively wrong, and I was pretty confident in what to change and how to change it before I got started.
I am now facing a huge rewrite of a more or less unknown system... the original programmer (over 15 years developing it) is still here and agrees with me 100% that we can't add some new features the way it is now.
I think we are a good team; we think in similar ways and can talk about almost anything. We each listen to the other's ideas and analyze all aspects. We both like to think about the structure first and turn ideas around to look at them from different points of view.
He has been here a long time and knows everything about the current version, so he can say from the very beginning whether something is worth trying or is not going to work, but he is open-minded. And I am new to the project, so I don't have old habits and am not blocked / blinded by established structures, so many of my "why not...?" or "have you considered...?" questions are a kind of fresh air that he likes and that brings him new ideas.
I really think we can make a huge improvement and expand the software's reach into several new fields, if (a big IF) we are allowed to do it. We are the only developers in the department, with our pile of daily routines / tasks to keep us busy, and we estimate that we will need at least 1.5 years for the rewrite (at around 30% to 40% dedication alongside the everyday work), and maybe another year to add the features we are sure we need to keep or even increase our current market share in the mid-term future. And we also know that without that rewrite we might become "obsolete", because we won't be able to meet the increasingly demanding requirements of future projects / use cases.
But... sadly not our decision.
Let's see what happens
If something has a solution... why worry about it? If it has no solution... why worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
I have a few personal projects I have been tinkering with for years. One is an interpreter/VM for a language I designed 30 years ago - over the years I have rewritten it in: Fortran IV, PL/1, C, C++, Delphi, and C#. I understand the code and it's far from trivial, so I use it as a way to learn a new language.
I've known plenty of developers who were afraid of touching code.
A new feature was always "glued" to already existing code, no matter if it fit.
The result is almost always some Frankenstein's monster.
I say, we have source control so we can always roll back or see how it was.
We have advanced developer tools showing us where what code is used.
Just rewrite the thing if necessary and make sure you test well (unit tests, if possible, and manual tests).
Never forget to manually test everything you've touched.
I know some developers don't test manually because "that's a tester's job".
In the end it's my code and I'm responsible for it.
I'm the master of my code (even if I inherited it) and not the other way around.
The last few years I've acquired responsibility for a number of our products that were written by someone else now laid off, retired, dead, or some combination thereof. One of the earliest lessons I learned in this is: if you ever want to get back to your real job and the code you wrote:
Don't. Mess. With. It.
Don't rewrite bits you don't like, don't reformat the code to your style, don't refactor unless it's part of fixing the bug. Once you start down that path you look up one day to find you're the schmuck who works on all the old crap. You're also bitter and depressed from looking at poorly-written shït all day long, and the temptation to just fix it becomes irresistible.
I spent over 200 man-hours a couple of years ago debugging some embedded code, parts of which had been around since the mid-1990s. Ultimately I found and fixed a bug in the TCP/IP stack we'd bought from a third party in 1995. The fix was about 20 lines of code. I probably could have rewritten the whole thing in six months, but then it would have been mine forever.
I'd say fear of breaking code is the symptom. The cause is not being able to read, synthesize, and understand what the code is doing in order to introduce a new mechanism which is stronger/more efficient/more readable/whatever your reason for refactoring is.
Sometimes, that's understandable due to a system being horrifically designed and having layers of dependency (the old house-of-cards or ball-of-mud syndrome). It's tough to refactor one piece of data when it's put into session and potentially used anywhere in the application, for example.
A lot of the time, though, it's simply due to hiring bad developers. Some people simply don't have the horsepower to do serious analysis.
I think you are simplifying pretty badly there. It doesn't take incompetence or lack of foresight or anything of that nature to get to this situation. It only requires the realities of commercial development where people leave the company and take their knowledge with them, where just keeping up with the competition and customer requirements takes all your time plus some, etc...
A colleague was just rewriting some code that broke due to new/different behavior in an updated JVM. He was nervous about doing it since it was a critical function, but it had to be done.
I recommended that he enumerate all of the affected classes (hundreds) and dump the state with the old JVM. Apply the update and code changes and dump the state again. If there really was no impact, the before and after would be the same.
This is a typical practice that I do all of the time when refactoring to add a feature. The refactor should NOT change the results at all. Once the refactor is complete, then you add a new feature and you should see the expected changes.
He did that and found where his new code was producing better results, because it fixed a rare but unreported issue in the older code. Now the new code can run on either JVM and produce the same results.
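The before/after state-dump workflow described above is essentially a characterization ("golden master") test. A minimal sketch in Python, where the `compute` function and the sample inputs are hypothetical stand-ins for the real classes being enumerated:

```python
# Characterization ("golden master") sketch: dump state before the change,
# dump again after, and diff. A pure refactor must produce identical dumps.
import json

def compute(order):
    # Hypothetical stand-in for the critical function being refactored.
    return {"id": order["id"], "total": round(order["qty"] * order["price"], 2)}

def dump_states(inputs, fn):
    # Serialize every result deterministically (sort_keys) so dumps are diffable.
    return [json.dumps(fn(i), sort_keys=True) for i in inputs]

inputs = [{"id": 1, "qty": 3, "price": 2.5},
          {"id": 2, "qty": 1, "price": 9.99}]

before = dump_states(inputs, compute)   # old JVM / old code
# ... apply the JVM update and the refactor, then dump again ...
after = dump_states(inputs, compute)    # new code, same inputs

# Any difference here is either a bug in the refactor or, as in the story
# above, a real behavioral fix that now needs to be explained.
assert before == after
```

Only once the dumps match is the refactor done; the new feature is added afterwards, where a diff is expected.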
That's only doable on certain types of software. In highly user configurable, very broad software products, there's no practical way to do that.
And of course we aren't just talking about refactoring. In many cases it will require jettisoning out-of-date tools that have deep roots in the code base, and/or rewriting large chunks of it that might contain really complex domain knowledge.
Explorans limites defectum