If someone walks into a shop and cannot name the (commonplace) items on the shelves, they should not be allowed to go shopping on their own, let alone write code.
Likewise, layers upon layers of (poorly named) abstractions, along with countless anonymous lambdas and closures, might look neat at the small scale but eventually lead to unmaintainable code. Giving an unnamed temporary an explicit and meaningful name greatly improves readability and eases maintenance for almost no cost. Sadly I know I've already lost that debate...
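As a minimal sketch of the point, in Python (the data and names here are hypothetical, the pattern is what matters):

```python
users = [
    {"name": "ada lovelace", "active": True, "banned": False},
    {"name": "charles babbage", "active": False, "banned": False},
]

# Anonymous everything: correct, but the reader has to decode the intent inline.
names = list(map(lambda u: u["name"].title(),
                 filter(lambda u: u["active"] and not u["banned"], users)))

# The same logic with the temporaries named: each name documents a decision.
def is_displayable(user):
    return user["active"] and not user["banned"]

displayable_users = [u for u in users if is_displayable(u)]
names = [u["name"].title() for u in displayable_users]
```

Same behaviour, but the second version reads like the requirement it implements.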
I think some people notice the entropy in everything more than others.
Other people notice the adaptation more than the entropy.
So if you ask them a general question of an organic quality like this, you'll get answers of roughly a couple of different strains.
I think it says more about the person answering the question than it does about coders in general.
For my part, I think I've gotten better as a coder, but on the other hand, coding is a growing field, which suggests there are more beginners than experts. The faster it grows, the more skewed that ratio becomes.
So maybe we're experiencing some growing pains as a field.
But taken as a whole I think people tend to get better at their craft until the point where they really should retire. Since we tend to stand on the shoulders of those who came before us, my hope is that the field is, or at least will be, getting back to being better/more professional/more proficient as a whole.
But I don't know, since I've been out of mainstream development circles for years now. I code professionally, but the work I do is solitary and niche. I don't have my finger on the pulse, so to speak.
On the one hand there is the force of progress lifting the boat of programming practice, thanks to modern computer languages, sophisticated IDEs and dev tools, formal methodologies for team development ...
On the other hand: the awkward elaborate dances to interface apps to the web, and browsers, and the thousand-and-one frameworks ... many aggregating other frameworks ... imho pull the boat back down.
The new "holy grail" of WORA (write once, run anywhere) multi-platform apps (same myth, new coat of shiny metaphors) remains as much a bloody graveyard of knights-errant slain by the dragons of pixels ... as ever.
«One day it will have to be officially admitted that what we have christened reality is an even greater illusion than the world of dreams.» Salvador Dali
Unit tests provide automated regression testing and code coverage to spot untested areas.
IDEs provide templated suggestions and highlight potential defects.
Widely read books help form consistency among the community as a whole which improves readability.
Design patterns, likewise, provide recognizable patterns that are well known.
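To make the first of those concrete: a regression test is just a pinned-down expectation, so once a bug is fixed it stays fixed. A minimal pytest-style sketch (the function and the bug are hypothetical):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

def test_typical_discount():
    assert apply_discount(100.0, 25.0) == 75.0

def test_full_discount_regression():
    # Suppose a 100% discount once produced a negative price; this test
    # keeps that bug from silently coming back in a future refactor.
    assert apply_discount(10.0, 100.0) == 0.0
```

Coverage tooling can then report which branches no test ever exercises.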
Just have a look at the Q&A sections of SO or other online forums...
- Many newbies are either incapable of doing basic research or too lazy to even try on their own.
- Many teachers are not good enough to explain anything beyond reading from the script.
- The software landscape transforms continually and grows too fast to keep up to date on everything. But people still try to use all the new crap before getting the last one right.
- In many places, Legal or Marketing departments get more money than R&D.
- Managers are getting way greedier and try to squeeze every damn penny at any cost, and what mostly gets sacrificed is the salary to get an experienced dev, the time to get the job done, and the budget to get the right tools.
For me it is no wonder that software quality is getting worse; the wonder is that there are still good products out there...
If something has a solution... why do we have to worry about it? If it has no solution... what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
Modern tools allow us to write (and run) code in quantities that would have been unimaginable 30 years ago, but IMO the quality of this code is not as high.
This is not necessarily because developers are less professional, but because of the development methodology used. With few exceptions, no attempt is made to produce bug-free code; bugs that don't actually crash the system are tolerated in the initial release because they can easily be fixed in the field using some sort of patch mechanism.
The result is that we now have multiple daemons running in the background, eating CPU cycles and memory, dialling home periodically to check for updates. Is this really the best we can do?
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
Most of the comments I read were focused on readability/maintainability of the code as a measure of code quality. If this is the measure, then the automated tools are helping to improve that (for someone's definition of more readable code).
I think a more important measure of code quality is how robust the code is, especially in this age when finding exploitable holes has become the favorite pastime of many.
Coupled with that is flexibility of the code - how easy is it for someone else to expand the code to add new features. Readability plays an important role in this, sure, but the basic design and framework is, in my opinion, more important.
In these last two areas, the quality of code has declined in my experience. The focus is on speed and low cost, starting with shorter training programs and more budget-constrained projects.
It really does depend on how you are defining "good code", but you still really can't make a fair comparison.
I still remember developing under the 640k limit. We would spend weeks prior to release shrinking memory use, trying to cut it as close to half as possible. Modern programs don't seem to put as much care into memory usage. I don't really blame the developers so much. The tools and the dev environments don't really appear to be geared toward it. There is also the argument that memory is cheap, and it isn't a concern anymore. I can't argue with that either. I think I would rather see developers focusing more on security than memory usage myself.
It really is hard to compare coding from then and now. The focus has changed, and keeps changing over time. Thus the standards for evaluating code are also changing. It is kind of like comparing contemporary athletes to the "early greats". You really can't in a fair manner, since rules and even physical venues have also changed over time.
Money makes the world go round ... but documentation moves the money.
Working in the embedded world where hardware doesn't get updated all that often, we are still concerned with overall memory usage, but I agree that security is also an important consideration these days.
It seems like most code is getting easier to read and possibly maintain, but on the other hand the glut of code seems to be growing astronomically.
At the beginning of my career project executables were measured in kilobytes, but the code was dense and fragile, and it could take hours of review for simple little changes, to see how they might affect other code.
Newer projects are huge in comparison, with much more to keep in your head when making changes. I still think a lot of code bases are fragile, but they are basically layers of abstraction protecting the devs.
While I do think that we (as a profession) are putting out more and better quality code than ever before, I also think that the answer is different if you look at a smaller scale. I've noticed two antithetical trends:
- Good code is getting better
- Bad code is getting worse
At the "good" end, newer tooling like linters, intellisense, automated tests, security scanners, and CI/CD pipelines are enabling developers to write better code faster. We can focus on the important parts of the code and have more time to think through our solutions, because the overhead of coding is greatly reduced. When an entire day of manual testing can be replaced with 30 seconds of automated test cases, the time savings really start to add up.
On the other hand, these same tools that help us write better code can also be misused. I've met developers who use their tools as an excuse to ignore best practices entirely. "If the tests show green, then the code is good" and similar attitudes are way too common. But no tool is perfect - so if you expect your IDE to warn you that you've created a god-object that will be a ball of spaghetti code by this time next year, then you're in for a bad time.
There's probably a lot more to it than this, and I could be wrong entirely. I'd love to hear any feedback, disagreement, or alternate explanations.
The DoD (Definition of Done) often reads like a checklist, one of the points being "...and unit tests green".
BUT... no definition of the tests.
The AC (Acceptance Criteria) often read like another checklist, and then some devs walk that road saying "each topic in the AC leads to a unit test and if it is green, the code is fine".
No more active thinking, no more looking around a bit more than this one exact topic. It's green. Job done.
A real problem. But due to the (at some places) very high job rotation, with people changing jobs every year or two, those problems arise not for the devs who wrote the code, but for their replacements. They have to fix things they never broke. And how do they do it?
They write a bug ticket.
Define ACs. Follow the DoD.
Then it's green.
...after a year, they move to the next company...
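To make that failure mode concrete, a hypothetical sketch: the AC says "the invoice total is the sum of the line items", and the single test mirrors the AC word for word.

```python
def invoice_total(line_items):
    """Sum the line item amounts: exactly what the AC asked for, nothing more."""
    return sum(item["amount"] for item in line_items)

def test_acceptance_criterion():
    # One test per AC topic. It's green, so by the checklist the job is done.
    assert invoice_total([{"amount": 10}, {"amount": 5}]) == 15

# Nobody thought past the AC: empty invoices, negative amounts, missing keys,
# currency rounding... all of that lands on whoever inherits the code.
```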
I believe that we are writing much better, more bug-free code. When I started in this field, COBOL, FORTRAN and BAL (IBM Basic Assembly Language, for you youngsters) were the big languages. Writing bug-free code was hard. Today, the object-oriented, scope-controlled languages are designed to minimize the writing of buggy code. Libraries of existing code make coding the problem solution relatively quick and easy.
The problem is not the code. It is all the steps leading up to the coding. Over the years, I have seen newbies (i.e. recent graduates) and junior engineers come to the profession with less and less ability in solving problems. This starts with identifying the problem to be solved, understanding it, and then developing a solution with built-in resilience. By resilience, I mean handling irrational or unexpected inputs, network access failures, hardware failures, credential revocation, malicious attacks, and so forth.
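A minimal sketch of one form of that built-in resilience (the names are hypothetical, and the fetch function is injected so nothing depends on a particular library):

```python
import time

def fetch_with_retry(fetch, url, attempts=3, backoff_s=1.0):
    """Call fetch(url), retrying transient failures a bounded number of times."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch(url)
        except (ConnectionError, TimeoutError):
            if attempt == attempts:
                raise  # out of retries: fail loudly, never silently
            time.sleep(backoff_s * attempt)  # back off a little longer each time
```

The coding is trivial; the thinking (which failures are transient, how often to retry, when to give up) is the problem-solving skill I mean.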
Many have trouble simply extracting the problem to be solved from the textual description. This is a skill that should be learned in high school and college math and science courses. Dividing the problem into solvable pieces should be taught in engineering courses, especially the lab courses. I found that I spent too much time mentoring new junior co-workers, and each year it seemed to get worse and worse. The girls were usually somewhat better than the boys, but afraid to act and speak. The boys often boasted of skills and abilities (book learning) they had yet to actually learn to use. Almost all of them eventually did well and were either promoted or moved on to other employers.
My judgement is based on the fact that we have to do less in regards to writing our bodies of code. We can pull in a large array of libraries and call their methods to do the work, as opposed to creating this code ourselves. In my mind we are tradesmen, and it is how you use the tools that counts; it is also a lot easier to write very bad code, as we have an abundance of resources that might point us in the wrong direction.
In my experience, more senior people these days are missing the point, and fail to grasp that the demon we're fighting is called complexity.
It's a worrying trend, because I don't like cleaning up after seniors. They get annoyed and snarky.
On the other hand, the work environment keeps improving year after year.
I'm in a mature fintech company now, and apart from the occasional mansplaining, people have stopped assuming I know nothing just because I'm a woman, which is nice.
No ability to handle general solutions (if the dev didn't know about it, it doesn't exist).
Concentration of efforts on form rather than substance.
Written, ever more often, by those lacking a real attention span, except for grasping their cell phones in one of their paws.
Driven by business models that want a product, even if they know it's defective.
And did I mention "Agile" ?
Debugging via the clients (gotta meet those deadlines!)
Start of a new project:
"We do SCRUM here, so Agile basically, which means we tell you want we want every other week.
For you that means you just build what we've asked you, and you don't have to worry about details or documentation.
Just make it work. Let us worry about the details."
3 months later:
"What does this button do? I don't remember asking anything like that. That doesn't sound like anything I would ask.
I mean, if you think about it, it doesn't even make sense to have a button like that. Just remove it."
6 months later:
"Well, I don't get it. I mean, I get the idea, obviously, but I don't understand how it's all supposed to come together in a way that makes sense to the client.
Let's redesign the interface, this time, I'll micro-manage all the design decisions."
9 months later:
"At a trial run at a potential customer, we got the remark that our core functionality was somehow missing. How is that even possible?
I clearly remember seeing it at some point in the past. Where did it go? Who is cutting features without consulting me?"
The change I see is not so much in overall code quality (it's always the same: experienced seniors write stable code "as they did for so much time" and juniors with little to no experience "write code where the cursor just happens to be").
So many (young and old!) devs totally underestimate how important correct formatting, naming and indenting are.
I call that "polite code". Code that can be read by someone else.
Not that messy mixture of spaces-or-tabs, sometimes 4-space indents, sometimes 2...
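A tiny before/after sketch of what I mean (both functions are hypothetical):

```python
# Impolite: legal code, but cryptic names, 2-space indents, no help for the reader.
def c(x,y):
  t=x*y
  return t

# Polite: consistent 4-space indents and names that say what things are.
def rectangle_area(width: float, height: float) -> float:
    """Area of a rectangle, readable by the next person. That's the point."""
    return width * height
```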
The real problem these days arises when a web-dev meets a desktop-dev, or a so-called "script dev" meets an "oop dev" (don't you dare tell me PHP is an OO language! It has classes, but that doesn't make it an OO language).
There are really GREAT frameworks and design toolkits and a thousand great libraries out there that allow you to create cool websites in minutes.
Same is true (to some extent) for mobile devs, and a bit less for desktop devs.
One can like or dislike platform-independent languages which produce output for all of them, that's not the point.
The point is, while some (experienced and newbs alike) see coding these days more like building a Lego house, just putting pre-made pieces together and hoping it will all work out, others don't like all that high-level stuff and prefer to code as low-level as possible, doing everything by hand.
While it's true the latter have more control over what happens, the former are super-fast. Managers often see only "time and costs" and will go for the high-level path.
8 out of 10 websites look the same, have the same bugs, because they are done with the same toolkit by the same kind of people, who no longer care about "the stuff behind the scenes".
But they were fast and therefore cheap.
So, yes, the overall quality is going down. Lightning-fast. But it looks amazingly good while it does that.