Lessons from a Life in a Chair

22 Jun 2019
Things I've learned, or at least come to believe, after 30+ years of hardcore software development

Introduction

I've been programming professionally since 1988, and for a number of years non-professionally before that. I started in the age of floppies, and my high school computer class was BASIC programs on a mainframe terminal. So I've put in a good number of miles and hours in this thing of ours since then. I just wanted to throw out some things that I've learned, or at least come to believe, from spending 50-ish man-years in a programming chair and exploring a pretty wide variety of problem domains. Accept or ignore as you wish.

Complexity is the Devil

Ultimately, for those of us on the decidedly non-trivial end of the software spectrum, complexity is the devil. And, sadly, both optimization and flexibility are indirect tools of Satan. I say sadly because there's no way we can really avoid them, and in appropriate measure they are very useful or absolutely necessary. But they are clearly big contributors to complexity that are not just the results of bad decisions or an infinitely uncaring universe. They are significant sources of complexity that even the best-designed systems can't avoid, and we create them purposefully.

Obviously, on the very local scale, the tools can help a lot (see Say What You Mean below). But on the larger scale, nothing is going to save us that I can see. As a practical matter, we create linkages between disparate bits of code such that we can't really express to any tools enough information for them to be sure we are doing the right thing, both initially and over time. And we inevitably create situations where the internal state of objects is purposefully not coherent at all times, either to avoid overhead that may never be required or to support some required or desired functionality (move semantics in C++, perhaps).
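
As a toy illustration of that last point (this is my own sketch, not from any particular codebase), here is a C++ buffer class whose move constructor deliberately hollows out the source object to avoid a copy. The moved-from object is still valid, but its state is no longer what the rest of the class normally assumes, and every other member now has to be written with that possibility in mind:

    #include <cstddef>

    class Buffer
    {
    public:
        explicit Buffer(std::size_t size) :
            m_data(new char[size]), m_size(size) {}

        // Steal the source's allocation rather than copy it. The source
        // is purposefully left in a hollowed-out state, which is exactly
        // the kind of deliberate incoherence described above.
        Buffer(Buffer&& src) noexcept :
            m_data(src.m_data), m_size(src.m_size)
        {
            src.m_data = nullptr;
            src.m_size = 0;
        }

        Buffer(const Buffer&) = delete;
        Buffer& operator=(const Buffer&) = delete;

        ~Buffer() { delete[] m_data; }

    private:
        char*       m_data;
        std::size_t m_size;
    };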

I have no answer for this at all, after 30+ years. I don't think that there is an answer. Unless someone makes a 'Do Exactly What My Idea Needs in a Can' framework that has all of the issues pre-worked out, every significant undertaking is going to face those complexity issues, and it seems to me that nothing but extreme human vigilance can ultimately manage them. And we humans aren't all that great at that.

The Persistence of Persistence

One thing I've learned the hard way is to always, always make sure that any data you persist is versioned and carefully structured to allow for extension. Failure to do this almost always bites you in the butt somehow, somewhere. Once you get something stored without a version, and that data type is in turn persisted as part of a bunch of other types, the only way to fix an issue may be to do something in every one of those containing types, since you may have to depend on the version of the containing type to know whether you are dealing with the old, bad format or the new, fixed one.

I made a few of these mistakes way, way back in the early days of my system, and I wouldn't even want to consider making up for them now, so they are still there and will likely have to remain unchanged forever.

Another mistake I made early on was to write out the contained data values first, then the stuff about the containing data type itself. That means you can't even use the version of the containing data type to correct errors in the persistence of unversioned contained values, because you can't get to that version until you've already read all the contained values. That one bit me hard early on in a couple of cases. So the lesson is that if the data is hierarchical, make sure your versioning info matches that hierarchy (pre-order, I guess that would be). That way, you always have an out to correct errors in contained value persistence.
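
To make that concrete, here is a minimal sketch of pre-order versioning. The BinOutStream and BinInStream classes are hypothetical stand-ins for whatever binary streaming mechanism is in use (interfaces only, bodies omitted); the point is just that the container writes its own format version before any contained values, so on read-back the version is known up front and can drive how older data is interpreted or repaired:

    #include <cstdint>
    #include <string>

    // Hypothetical binary stream interfaces, bodies omitted
    class BinOutStream
    {
    public:
        BinOutStream& operator<<(std::uint16_t);
        BinOutStream& operator<<(int);
        BinOutStream& operator<<(const std::string&);
    };
    class BinInStream
    {
    public:
        BinInStream& operator>>(std::uint16_t&);
        BinInStream& operator>>(int&);
        BinInStream& operator>>(std::string&);
    };

    struct Widget
    {
        // Bump this whenever the persisted format changes
        static constexpr std::uint16_t FmtVersion = 2;

        void StreamOut(BinOutStream& strm) const
        {
            strm << FmtVersion;        // Container version first (pre-order)
            strm << m_x << m_y;
            strm << m_title;           // Added in format version 2
        }

        void StreamIn(BinInStream& strm)
        {
            std::uint16_t version;
            strm >> version;           // Known before any contained data
            strm >> m_x >> m_y;
            if (version >= 2)
                strm >> m_title;
            else
                m_title = "untitled";  // Fix up pre-version-2 data
        }

        int         m_x = 0;
        int         m_y = 0;
        std::string m_title;
    };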

An Empire in my Underwear

No, not that. I should be so lucky. No, this one is related to the fact that one of the great things about software is that it's one of those all too few undertakings where you can sit in your bedroom in your undies and potentially change the world, or create the seed that grows into a business empire that massively changes your bank balance and creates a lot of jobs. There are some others, of course, though many of those are more of a creative nature, where the value of your labor is much more a matter of opinion and fashion than of demonstrated utility, as it is more likely to be with software.

With a computer, a compiler, and a concept, you can potentially make a real difference in the world in some way. 

Say What You Mean

There is, of course, a continuum of software endeavors, and some work on the lighter end of that spectrum may not be all that sensitive to this or that choice, because it just isn't that complex. But for those things that do reach the level where complexity itself becomes the enemy, the most explicit statement of intent is going to be best.

There is probably always some amount of pressure on languages to make it easier to write code fast. It's the same with most products. The ones that someone can sit down with and get to the happy-clappy demo stage quickest will likely have an acceptance advantage. And that can be a great advantage for those things on the lighter end of the spectrum, or for creating the demo that gets the VCs to make you into a golden unicorn.

But for complex systems that you are creating to sell, it's very much a matter of writing it once and suffering for that sin forever more afterwards. So anything that increases speed of development at the cost of explicit expression of semantics (which is what tells the tools what you really mean, which in turn is the only thing that lets the tools tell you that you are indeed doing what you intend) is not a good tradeoff. In my opinion, all languages should prioritize increasing the ability to express semantics. Rust, for example, is doing some interesting things on this front, though I disagree with some of its other decisions.
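
As a small C++ illustration of the kind of explicitness I mean (the names here are mine, purely illustrative), each annotation below tells the compiler something it can actually check on your behalf, instead of leaving the intent implicit:

    #include <cstdint>

    enum class Baud : std::uint32_t { B9600 = 9600, B115200 = 115200 };

    class SerialPort
    {
    public:
        // 'explicit' rules out silent conversion from a raw integer
        explicit SerialPort(Baud baud) : m_baud(baud) {}

        // '[[nodiscard]]' makes silently ignoring the status a warning
        [[nodiscard]] bool Open() noexcept { return true; }  // Stand-in body

    private:
        Baud m_baud;
    };

    void Example()
    {
        SerialPort port(Baud::B115200);  // SerialPort port(115200) won't compile
        if (!port.Open())                // Dropping the result would warn
        {
            // Handle the failure...
        }
    }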

The Inevitability of Middle Age

Languages seem to follow a similar life arc to the one people generally do. They start off fairly lean and focused. Then they slowly put on more and more poundage. Partly, I guess, it's the 'swim or die' thing, where you feel like you have to add features constantly or you will be perceived as falling behind and becoming irrelevant. Partly it's from trying to be more and more things to more and more people, to increase appeal and applicability. Partly it's from dealing with users who are constantly arguing for the often widely varied or mutually exclusive bits and bobs they are particularly obsessed about.

In the end, the language ends up as the overweight middle-aged guy in a Speedo. It's kind of bloated, too complex, too diffuse, trying to be too many things to too many people. It has become the very thing it may, in some cases, have been created in opposition to, something often true in the wider world in general. It leaves me sort of hoping for a punk revolution sometimes.

Somewhat of a related argument to this is that I think languages have to have the courage once in a while to just say, that's it. This is the end of this line of evolution. It will be maintained but nothing more. We need to drop a lot of evolutionary baggage and set up a new base camp at considerably higher altitude. Obviously that's hard, but the consequences of not doing it are pretty obvious with ever-accumulating evolutionary baggage, and probably whole areas of the language that effectively become immune to fundamental improvement because they can't realistically be rebuilt while the house is occupied.

The Mistakes of the Past Become the Promise of the Future

Wait around long enough, and the things that were proven in real-world practice to be ill-formed and sub-optimal, and then corrected at great sacrifice, will come back around as the radical new future vision. Once you have enough people who started their careers after that solution was painfully implemented, they will grow up in a world where the only target for their frustration is the thing that was introduced to fix the original problem.

Then you eventually get a critical mass of people who have no memory of how bad it was before. They only see the issues that exist for them now; they see the results of bad human decisions and blame the tools and techniques, or believe that inherent problem/people complexity is actually tools-and-techniques complexity. So they start to argue that the old stuff is really the answer to all the problems they see around them, because it has to be better than the current paradigm. They often push these ideas as modernist when in fact they may be quite retrograde. They don't realize that, if they go back, they will still have all those bad human decisions and all that same inherent problem and people complexity, but now in the context of a set of techniques that were soundly rejected years ago for good reason.

Entrepreneurs vs. Mercenaries

It seems to me that there are two fundamental classes of developers: those who want to create something of their own to sell, and those who work for other people. That's pretty obvious, and might seem unrelated to development, but I think these two undertakings create very different views of the software world. For the former, the language is mostly just a tool, a means to an end. There's no point chasing the latest and greatest language features, because your customers couldn't care less. All they care about is features and quality. So, to the degree that a new language feature doesn't really contribute to the product or the code quality, the entrepreneur may not care at all.

Mercenaries, on the other hand, seem to me to be more obsessed with the language itself, because they believe (often justifiably) that knowing all the latest features is important to getting their next job. That is, the language is perhaps as much a tool of career advancement to them. Therefore, they are perhaps more likely to adopt new features just because they are there, and presumably because they might get asked about them in their next interview.

This, it seems to me, is why you see so many people in language-oriented forums arguing about the finer points of really new language features, or even stuff not likely to happen for years yet, if at all, while actual companies are probably mostly not even fully using features from two revisions back.

I think this creates a bit of dissonance in online discussions, because the underlying views of the participants may be so different that neither side really understands the other's point of departure. And it has to be said that the bulk of people in programming forums seem to tend towards the mercenary type, so the entrepreneurial view is often not well received or understood.

Let the X Percenters Take Care of Themselves

We all grow up learning about the evils of premature optimization. And those lessons are correct. You can spend months and months optimizing code and introducing lots of extra complexity for little gain, while a simple tweak in a very limited area of the code might ultimately provide orders of magnitude more performance. And plenty of programs have no significant performance constraints at all.

But, in the C++ world for example, there seems to be a current obsession with performance optimization that sometimes results in lots of extra complexity in the underlying infrastructure and applications, when it is probably only actually required in a very small percentage of programs, and even then only within small areas of those programs. There are gasps of horror from the audience at virtual methods or runtime inheritance, when the actual difference these will make in the bulk of programs is not even worth worrying about, and what's used should be driven purely by what works best for you from an implementation point of view.

Obviously general purpose code does have some extra obligation on this front, but ultimately I think that introducing large amounts of complexity to heavily optimize even general purpose code is not a win overall. That code becomes far harder to maintain, harder to move forward quickly with safety, takes more brain cycles that could be applied to other things, and is more likely to have bugs. So it's really stealing from the 90% to serve the 10%, or whatever the actual relative percentages might be.

I say let those folks with exceptional performance requirements take care of themselves, and let those few places in the more average program that really need significant optimization be dealt with specifically. That doesn't necessarily mean every one of them has to roll their own, but they should at least use specialized tools for those programs, or small bits of programs, that really need it. That means more time goes into the stuff that will benefit the bulk of us more, and all our code is less likely to have bugs introduced over time.
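
As a sketch of what I mean by dealing with those few places specifically (the example and its names are purely illustrative), the bulk of the code keeps the simple, obvious form, and only the one routine that profiling has actually shown to matter gets a tuned version, kept in one place where its extra complexity can't leak outward:

    #include <cstddef>
    #include <vector>

    // The simple form, fine for the 90% of call sites that don't matter
    double SumSimple(const std::vector<double>& vals)
    {
        double total = 0;
        for (double v : vals)
            total += v;
        return total;
    }

    // The tuned form, justified by a profile and confined to one spot.
    // Here it just unrolls by four as a stand-in for whatever the real
    // hot-spot work would be.
    double SumHotPath(const double* vals, std::size_t count)
    {
        double t0 = 0, t1 = 0, t2 = 0, t3 = 0;
        std::size_t i = 0;
        for (; i + 4 <= count; i += 4)
        {
            t0 += vals[i];
            t1 += vals[i + 1];
            t2 += vals[i + 2];
            t3 += vals[i + 3];
        }
        double total = t0 + t1 + t2 + t3;
        for (; i < count; i++)       // Handle the leftover elements
            total += vals[i];
        return total;
    }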

Take Project Structure Seriously From Day One

Though it's easy to say, oh, we can always restructure it later, we all know how it goes in the real world, with fairly sizeable teams, large code bases, and probably a fair bit of gnarly code that no one wants to touch anymore. Actually doing any significant restructuring can be really difficult to justify to the powers that be when you are working hard just to keep putting down the tracks in front of the train. It can even be difficult to justify to yourself, knowing it really needs to be done, given that you will get paid the same whether you risk a heart attack or not.

So I'd argue for taking project structure seriously from the start. Think ahead to some fairly worst-case scenarios and plan for a lot of growth. If that growth never happens, it doesn't cost you much. If it does, you will be more ready for it. Even if it seems like you are getting too fiddly at first, you probably won't regret it in the end.

Obviously this isn't the biggest issue in the world. I just mention it because it's all too easy to start a project thinking, well, let's get something working and then see where to go from there. Then you get something working, business reality kicks in, and suddenly it's years later and it's a mess that will be brutal to straighten out, and you now have to do it while the train is moving fast.

You still likely won't get it perfect up front, but some amount of serious preparatory thought and a bit more up-front infrastructure setup work is generally worth it for non-trivial projects. I'm a bit better off than most, being a lone wolf developer, so it's easier for me to stop and just wholesale make changes across the whole code base. But those changes can be soul (and brain cell) destroying, and possibly avoidable time wasted that could be spent on far more productive things.

Separation of Data and Presentation

This is an obvious one, but it's still easy to get wrong. When you are starting a new big chunk, and it's a struggle just to get it done to begin with, it can be easy to forget things like this, and it's often a lot more work to set them up right from the start. I've suffered from this one well enough. In my defense, most of my mistakes (some of which I still haven't dealt with fully, because of the difficulty of doing so) were made long ago, before this issue was something drilled into all of us on a daily basis.

In my case, the big one is in my automation system's touch screen interface. It's very complex. Just getting the original bits of it done was a monster undertaking, and it's grown massively since. Like many such things, it's a set of graphical, and often interactive, widgets that you can place on the screen via a designer and configure to look and act/react as you want. I ended up with the data that configures those widgets being part of the class hierarchy that does the actual display of them, so the two are tied together.

Very sub-optimal, though fifteen-ish years ago, when I did the original work, I wasn't so aware of those types of issues. I can untie that knot, but it will be a lot of work at the expense of other important things, and there are only so many supermodel parties I can miss.
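
For what it's worth, the untied version would look something like the sketch below (names hypothetical, not CQC's actual classes): the widget's configuration is plain data that can be streamed, versioned, and edited on its own, while the display logic lives in a separate class that merely consumes it:

    #include <string>

    // Pure data: what the designer edits and what gets persisted
    struct VolumeKnobCfg
    {
        std::string title;
        int         x = 0;
        int         y = 0;
        int         minVal = 0;
        int         maxVal = 100;
    };

    // Pure presentation: draws and reacts, but owns no configuration
    // beyond a reference to the data that drives it
    class VolumeKnobWidget
    {
    public:
        explicit VolumeKnobWidget(const VolumeKnobCfg& cfg) : m_cfg(cfg) {}

        void Redraw(/* graphics context */)
        {
            // Draw using m_cfg.title, m_cfg.x, m_cfg.y, and so on
        }

    private:
        const VolumeKnobCfg& m_cfg;
    };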

Never Give Up, Never Surrender

In the end, software is the perfect challenge for the techno-geek like me, and probably like many who are reading this (assuming anyone actually reads this, or reads this far). It's you against the dark forces of chaos. It's got all of the intellectual challenges of something like math or pure logic, but with (potentially at least) practical consequences. And it generally pays a lot better than either of those as well.

If you are just starting out down this road, just stick with it. Like any sort of open ended endeavor, nothing but time spent at the grindstone is going to make you better. You can't really think your way through it. You have to just get your hands bloody and ultimately sacrifice a good chunk of your time in this mortal coil if you want to really become a master of the art.

For some of us, that's not a bad trade-off, because we aren't necessarily that comfortable in the mortal coil to begin with. But, either way, you aren't going to become a master via casual effort. It will require a considerable commitment. Still, most anything in this life does to one degree or another if you want to get paid well for it, since if it were easy, everyone could do it.

If you have to put in the time anyway, I'd argue that it's a good choice because it offers a lot of intellectual, geographical, and problem domain portability. Almost everyone needs software as part of whatever they are doing, and wherever they are. So you can either concentrate your whole life in one area, or dive into a number of different worlds with a set of skills that are useful in a lot of places. Throw in above average compensation, the ability in a lot of cases to work from home, and the lack of injuries and sore muscles at the end of the day, and it could be a lot worse.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Dean Roddey
Founder, Charmed Quark Systems
United States

Dean Roddey is the author of CQC (the Charmed Quark Controller), a powerful, software-based automation platform, and of the large open source project (CIDLib) on which it is based.

www.charmedquark.com
https://github.com/DeanRoddey/CIDLib
