I've done this two or three times. I've written something over the course of a few months and been so in the groove by the end that I produced prodigious amounts of amazingly complex code to perform miracles of processing. Then I look at it a year later and wonder: who wrote this? Me? I can't do that! ...but apparently I did.
I wrote an expert system, in FORTRAN, that was still being used, unchanged, over twenty years later. Apparently someone had ported it from one mainframe to another but hadn't changed it in any basic way; they also said they weren't entirely sure how it worked, it just did.
I am currently looking at some code that I have ported from one platform to another three times since I first wrote the basic system in 1992. I started in Rexx on an IBM mainframe, then ported it to C on a PC, then to C++ as a web-hosted application that communicated via email, then to C# as a heavily interactive web page. I am now making it a hybrid web/desktop application for performance reasons, but a lot of the innards (in a DLL) are a mystery to me now! Mostly I am changing and expanding the user interface.
There are a couple of other complex projects I have done that I probably would have no idea how to even start on now.
- I would love to change the world, but they won’t give me the source code.
I watched someone using one of my tools once. He would start it, and due to the size of the project it would take about 20-30 minutes to complete. He was literally just playing with his phone for three-quarters of his day instead of doing other things.
I got so angry I literally rewrote the whole thing an entirely different way while consuming an entire bottle of scotch. I woke up with no recollection of what I had done, but it worked and that same task only took 30 seconds! Years later I did unravel what I did and still don't understand how I made something relatively nice while hammered drunk.
Clearly my first implementation was not a good one, but in my defense it wasn't meant to be run on excessively large data sets at the time.
I can provide a well-aged perspective here. Most of the software I write professionally has a long life. Development typically takes a couple of years and then maintenance lasts for a decade or more. I have an internal tool that got its start in 2000 and I'm still actively developing and maintaining it. In other words, I get to go back and look at code I wrote a long time ago - a lot.
Over time my coding and commenting style have matured. I write things as simply as possible. I avoid being clever, and I avoid 'clever' language features. I name things carefully, especially when they are tied to a particular part of the product or hardware. I use comments only to say things the code can't and to link the code to hardware or documentation when necessary.
During maintenance I refactor to simplify things or just to improve readability. The worst case is that I branch the code in question and refactor the branch until the cognitive dissonance from it is tolerable. Usually it doesn't go that far, as I'm probably trying to fix a problem, and I'll find it during the refactoring. Often I can just discard the branch, fix the bug in the original code, and go on. Sometimes I'll keep some of the refactoring. It depends upon the scope of the problem, the fix, and the risk associated with them.
The end result is that it's fairly rare for me to look back at something I wrote and be baffled or confused by it. The point here is that you only acquire this skill by doing it. If you only write code that you never revisit, you've lost the learning opportunity that arises when you do.
I'm not confused by it. It's actually fairly easy for me to understand, considering what it does.
And what it does is complicated, any way you slice it. For starters, it relies on an LALR algorithm, which is confusing no matter how you break it down. LALR is just complicated.
Furthermore, on top of that, I have a non-deterministic worker that finds all possible trees for a parse based on an ambiguous grammar. Again, it's just complicated, but unlike LALR it can be simplified a bit, and I have simplified it.
Finally, it has to take all of this and generate code in most major .NET languages (usually C# or VB.NET).
It's just a complicated project. I avoid "clever" as well when I don't have to, but some of the features my generator has are implemented cleverly because the alternative is far larger in terms of code, and slower to execute.
In my case, the complication arises from the number of agents acting on a variety of time scales. We build commercial ink-jet printing systems. At one scale, you have a 40,000-foot roll of paper that may take up to an hour to be printed through the machine. At the opposite end of the scale, you are generating and tracking over a billion drops of ink per second, each measuring 6-9 pL in volume. In between, that paper is moving through the press at 17 feet per second and a user is navigating a touch screen. The agents I mentioned include PLCs, custom processors and hardware managing the press, the actual ink-jet, image-quality cameras, and system timing. Our product consists of a UI application and several Windows services which divvy up responsibilities. All of them, including the UI, are heavily multithreaded.
My point in all this is that complexity in a given project can arise for any number of reasons. My experience has been that the key to managing that complexity is professionalism and craft. I'm afraid your work hits something of a nerve with me. I've had a couple of unfortunate experiences with folks whose work was more computer science than engineering, and who had a generally low opinion of coders in the trenches.
I feel I need to clarify that I don't have a low opinion of coders in the trenches. I used to be one.
That said, the code I post here isn't bizdev code, or even team-developed. I code for the situation I'm in. My professional business software source doesn't look like the source I write in my free time, where I can make it look and perform how *I* want. It takes me less work to do it my way, and I find the freedom liberating.
I think it's weird that you consider my code more CS than engineering, since I've never taken a CS course in my life.
I think it's weird that you consider my code more CS than engineering
My bad. I'm stereotyping your code based on the subject matter: parsers and the surrounding ecosystem. That area of expertise has always seemed to be dominated by academics, in my experience.
I occasionally do work on the side from my M-F/8-4 job. One job was for a university professor who used graduate students as slave labor. They needed a multithreaded app to set up and control some hardware they were developing for sale outside the university. There was quite the culture shock when I started submitting code to them. They were used to using and writing code that started with the bare minimum necessary to perform some function, and then layered error handling and UI on top. The notion of architecting a solution in advance that kept these considerations in mind was utterly foreign to them.
The more noteworthy job was software to run a prototype machine. An intern at the company had written hardware control primitives that were quite good. A scientist wrote code that performed the detailed mathematics required to execute the machine's actual function. The scientist was a good mathematician, but a terrible programmer. I was hired to write a test bed application to let the company demonstrate the hardware to their customer. I wrote the UI and integrated the intern's hardware primitives in short order. Integrating the mathematics was a disaster. I routinely set the warning level on my compilers to maximum just to ensure that the stupid mistakes are caught. The scientist's code wouldn't compile clean, even at warning level zero. Lots of ill-advised pointer arithmetic, a global misunderstanding of type casting, random switching between float and double, a firm belief that array indices in C started at 1 (see the pointer arithmetic), and so on. I tried to work around the problems for a while, but finally gave up.
The scientist wouldn't give me a copy of his design notes for the mathematics, so I finally went to the head of the project for them. Between those notes and reverse-engineering his code, I was able to replace the mad scientist code with something a lot more robust. Interestingly this was one time in my career where my courses in numerical methods in college really came in handy. I replaced some of the scientist's integration and differentiation code with other algorithms to address precision issues. I even found some operational errors in the design as I coded the replacement. Since this was a prototype and a demonstrator, it wasn't too hard to make the code switchable to demonstrate the original mathematics versus mine. Since the original math crashed the app well over half the time, or took minutes to produce a result, and my code took a couple of seconds and never crashed, it made an impression.
The funny part of the whole thing was that I didn't know anything about the problem domain that the math was being used in. I just knew when the syntax of the operations being performed didn't make sense (multiplying a 5x7 matrix by a 4x3, for example), or that the order of operations was likely to cause an overflow or underflow, or that units conversions were not being handled correctly.
I'm stereotyping your code based on the subject matter: parsers and the surrounding ecosystem. That area of expertise has always seemed to be dominated by academics, in my experience.
It is, but only because I never went to school for software development, so now that I have time, I'm picking up on the CS fundamentals I never learned. It's not to make my code more academic, but it's to round out my knowledge.
Gary R. Wheeler wrote:
The scientist wouldn't give me a copy of his design notes for the mathematics, so I finally went to the head of the project for them. Between those notes and reverse-engineering his code, I was able to replace the mad scientist code with something a lot more robust.
That doesn't surprise me, actually. This might be my failing, assuming code produced by academics has no place in production, but that's where I'm at and how I feel. We may even share that opinion.
Still, I don't want to be too hard on them, and I think being able to give your algorithms formal mathematical treatment has its place, especially with really complicated algorithms.
In all seriousness, I've been told I'm good before, but I figure as long as there is room for improvement I'd rather think of myself as still learning - I've also been told that humility is the seed of wisdom.
A lot of my code is pretty lean though, and sometimes elegant. My documentation is spotty when left to my own devices but I'm getting better about it (again).
That's not to say I haven't written a lot of WTF code. In fact, my first attempt at doing anything non-trivial in terms of an application is often garbage. I even plan for that. I consider my first attempt a draft. It's that bad sometimes.
The legendary Fred Brooks, in The Mythical Man-Month (1975), p. 116:
The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers. Seen this way, the answer is much clearer. Delivering that throwaway to customers buys time, but it only does so at the cost of agony for the user, distraction for the builders while they do the redesign, and a bad reputation for the product that the best redesign will find hard to live down.
Hence, plan to throw one away; you will, anyhow.
Ain't that the truth!
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012