The Lounge is rated PG. If you're about to post something you wouldn't want your
kid sister to read, then don't post it. No flame wars, no abusive conduct, no programming
questions, and please don't post ads.
I was a little too young to learn programming in the 60s, but had my university education around 1980. That was an age where a lot of exciting things happened in the language world - every man had his own language. I can't count the languages I have been programming in.
So when people asked me how many languages I was familiar with, I said: maybe three or four.
Second comes the "workspace" model, which is like a sandpit where you throw in and take out functions and objects over time. Smalltalk falls in this class; my main experience is with APL, which is also quite different with its extremely array-based data structures. Yet the most essential difference from the algorithmic group is the workspace model.
The third group is the predicate languages, of which I have used only three variants: SQL, Prolog and SNOBOL. You could say that SNOBOL is a semi-algorithmic language; it has flow control and at the top level is rather sequential, but the more familiar you are with the language, the more you leave to predicate logic. I never became friendly with XSLT, and am happy that I managed to sneak out of it, unlike the regex I have to do every now and then.
Fourth language: Essentially Lisp, which I have only used for programming Emacs. I can see that some people like it, but it doesn't give you much help in offloading the conceptual model into the saved file; essentially you must free up space inside your brain to hold the model.
For a couple of projects, we peeked at functional programming, but I wouldn't count that as number five. We just studied the syntax of Erlang (and peeked at others from a distance), but I never really tried it in practice.
I would rather like to add as number five: data modelling - not a complete language by itself, as it usually won't provide definitions of operations, only the interface to them. I think this language (group) is the most underrated one! A good data model, whether you use ER or ASN.1 or an XML schema, is essential to understanding the problem domain. You may say that if you use the whole of the OSI communication protocol stack toolbox, you do operations modelling very much in the same spirit as you do data modelling.
We may add yet another language group: state/event programming. Look at how the OSI session layer is programmed: tables with current state along one axis, event along the other, each square stating predicates and actions. No other language could possibly come close to that state/event description of the logic, when it comes to clarity, conciseness and freedom from ambiguity.
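For illustration, a table-driven dispatcher in that spirit might look something like this - a minimal sketch of my own (states, events and actions are all made up, not taken from any OSI implementation):

```javascript
// State/event table: current state along one axis, event along the other.
// Each cell names the action to take and the state to move to next.
function note(machine, msg) { machine.log.push(msg); }

const TABLE = {
  idle: {
    connect:    { action: m => note(m, "open link"),  next: "connected" },
    disconnect: { action: m => note(m, "ignored"),    next: "idle" },
  },
  connected: {
    connect:    { action: m => note(m, "ignored"),    next: "connected" },
    disconnect: { action: m => note(m, "close link"), next: "idle" },
  },
};

// The entire control flow is one lookup - no if/else ladder anywhere.
function dispatch(machine, event) {
  const cell = TABLE[machine.state][event];
  cell.action(machine);
  machine.state = cell.next;
}

const m = { state: "idle", log: [] };
dispatch(m, "connect");
dispatch(m, "disconnect");
// m.state is back to "idle"; m.log records the actions taken
```

The point is the shape of the thing: adding a state or an event means adding a row or a column, and any missing cell shows up as a visible hole in the table rather than a forgotten branch.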
- - -
Gee, did we have a plethora of languages in those days! Now we have chopped off eighty percent of the algorithmic group and 100% of the rest. State/event is squeezed into a switch. Data modelling is squeezed into a C struct. The workspace concept is squeezed into C malloc, or by its modern name: new. List processing is squeezed into C linked structures ... And so on. One bad thing is that today's young programmers never ever question the () around if/while conditions (whether they program in a C variant, Python, Java, ...). When they see Pascal code, they wonder: didn't you forget the parentheses? Talk to them about workspace models, and they worry how efficient the malloc will be in such an environment, and when a Cobol decimal field is 6 digits wide they figure that a long long is needed to hold the value range.
Really, today's programming world is a lot more primitive than in the 80s.
And you could say the same about OSes. About file systems. About communication protocols. About user interfaces. Maybe the 80s were a chaos, but it certainly wasn't that primitive monoculture of today, where 95+% of all programming is done in C-like languages, 95+% of all data traffic is limited by the Internet Protocol family at least at some level or stage, 95% of all data is stored in a file system that provides no high level support, and for 95% of process/thread modelling the entire world uses one of two models - and those two certainly do not differ very much.
True: we have got dozens of variations on C. Dozens of variations on U*ix. Dozens of variations of sh. Dozens of variations on fork. But those are essentially small ripples on the surface. Fundamentally different ways of thinking and of doing things have weathered away. I think that loss is just as big a problem as the ripples are.
As stated, it's been like this for decades. However, the tools that exist, and that people want, are far greater in number than they were even 10 or 20 years ago.
As a dev approaching 15+ years of work and 20+ years of programming experience, you basically have two semi-smart choices to make as you grow up in this field. One is to position yourself to grow from dev to middle management and beyond. Not everyone is managerial material, or wants to be, so this isn't always the best option.
The other smart path: for the first 5-10 years of your career, look for the BIG tech tools that every big company uses and become an expert amongst experts. Then, after that tech has been dead for 10+ years, dig up all of your old resources, load up some old VMs and re-familiarize yourself with the "legacy" tech.
There are a lot of major corporations that are not in IT that still have old COBOL/RPG mainframes. There are also a ton of VB6/very-old Microsoft Access databases that are in use in even BIG companies. These companies often get to the point where this stuff breaks or they realize they have to throw a TON of money to upgrade their systems. Software specs tend to not exist within these companies (or were long lost) so they need legacy gurus to come in, dissect the existing system(s) and then fix it or spec-out a new system from old code. This is "consultation" work and can pay very well, if you are familiar with the right tech.
This isn't "sexy", but by golly if you get bored with the "rat" race of being perpetually obsolete, let your most obsolete expertise start to work for you. I'm not there yet, but I'm probably going to start pushing "WinForms" development around 2021. I plan to work hard at staying on top of what is relevant today, but in 2021 I'll be 40 and that's a good age to start pulling out the old familiar tech that none of the "new kids", fresh out of school, have even heard of or will laugh at when they still see it in use. Suckers... it's the desperate corps that pay the best, who are scrambling to find the grey-beard experts.
Yup. I believe the latest buzzword today is "full stack" developer. Sorry, I don't buy that designation AT ALL. You could re-brand that "jack of all trades, master of none". I'm sorry, but the designation is pure B.S.
I've been developing code for 40 years and I think I've developed some good proficiency in that time and know a few good technologies to use in my development. My code gets answers and it runs FAST. (I've had more than one employer ask me in an incredulous tone "why does your stuff run so fast"?). Er, maybe it's because I don't haul in a couple of gigs of library code to run my executables...
I don't even apply for positions that are looking for "full stack" developers because, IMHO, they are completely disillusioned as to what software development is really about. I believe that would be ... solving problems? Full stack ... seriously?
If you think hiring a professional is expensive, wait until you hire an amateur! - Red Adair
Back in the day, the progression was:
Sr. Analyst (Or Business Analyst)
&lt;some level of management&gt;
I always preferred adding Analyst. First you learn the syntax, and the environment.
As a Jr. Programmer, you often took someone's scribbles of code on punch cards, and punched them.
The person reviewed them. One programmer could keep a few Jr. Programmers busy. (things changed).
Usually it was teams of both...
After the language/syntax and environment were learned, you moved up.
The real interplay is in taking business needs and getting to computer solutions.
My favorite job interview was where I was competing with someone with 5 years of Clipper experience for a Clipper job. I had SEEN Clipper code, and had done a little bit of dBase code. But I had great analytical skills.
The guy interviewing me for a part-time position was convinced he would hire the "Pro", and not me, but already had my interview scheduled.
I simply explained that it is in the analysis where all the failures begin. The syntax of the language is easy enough to learn, if you are solving the right problem. I asked him to think about the "fixes" he had to have the previous guy make. What percentage were:
- Did not understand the goal properly
- Logic Error (Did not express the goal properly)
- Lack of testing
- Lack of User Sign off
- User Error/User Confusion
- Bad Syntax/Failure to use the programming language correctly?
I explained to him, that if he hired me, I would drive the first few items to ZERO occurrences, and that my biggest fear was programming myself out of a job, because the current guy was constantly fixing his own mistakes. He laughed. He thought... He Hired...
One year later, he apologized that he ran out of work for me to do. Wrote me a 2 page letter of recommendation, and gave me a minimum number of hours each week to do whatever I wanted.
I want to hire creative problem solvers who know how to solve problems and express them in code.
Then the importance of the language is reduced. The rework is reduced.
Nobody wants to help that person by giving them a little time to learn a technology they may need.
That's crazy. Good problem solvers are hard to find. Great programmer/analysts are hard to find.
So old companies would make them!
Heck, even into the early '90s you could "get by" with just a few good skills. I think retraining hell is companies' revenge for having to pay us so well. I have a Despair Inc coffee mug that says "Just because you're necessary, doesn't mean you're important." That sums it up nicely.
I agree with you that there is a lot more to know now than in the 60s. More importantly, I think, things change a lot faster now.
But to be fair, there's a lot they had to know back then that most of us don't have to think about at all anymore. In particular, we don't usually need to think nearly as carefully about hardware issues (memory constraints, timing issues) or lower-level software issues (how to write a quicksort algorithm or a garbage collector). We don't need to cram 8 different boolean values into a single byte that we xor to read the value from. We don't need to write code that modifies itself or overlays itself to save memory. And we don't have to wait fifteen minutes or more for an edit/compile/run cycle.
First, I readily admit I should have known better.
So I'm populating a jqxTree control with items, and I set the ID to the record's ID. There are multiple parents, and they relate to different tables (therein sort of lies one of my design flaws but I wanted separate tables even though the items are all basically similar, because at some point their similarities will diverge.)
Anyways, so of course, the record IDs can be the same because they are from different tables. Which means of course that a child under one parent can have the same id='19' as a child under a different parent.
Which means that when you click on one of the children, jqxTree's click event gives you the item you clicked on, but because the IDs are the same, it gives you the last item with that ID, which is a node in a different branch of the tree!
OK, I consider that a bug in jqxTree, it should give me the damn element I clicked on, whether they have the same ID or not.
For pity's sake, I wouldn't have to deal with this BS in a WinForms app.
I'll report it as a bug in jqxWidgets forum though. They tend to be good at either fixing things or responding with "it works as intended."
To speak in code it would be something like this...
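(A rough sketch of my own - record and field names here are made up for illustration, not the actual jqxTree source format:)

```javascript
// Records from two different tables can share the same numeric id,
// so prefix each one with a table tag before it goes into the tree.
const scripts   = [{ Id: 19, Name: "Item A" }];
const referrals = [{ Id: 19, Name: "Item B" }];

function toItems(prefix, records) {
  return records.map(r => ({ id: prefix + "-" + r.Id, label: r.Name }));
}

const items = [...toItems("sx", scripts), ...toItems("rx", referrals)];
// items[0].id === "sx-19", items[1].id === "rx-19" - no more collisions

// Strip the namespace again when you need the original record id:
function recordId(itemId) {
  return Number(itemId.split("-")[1]);
}
```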
with IDs like "table-id" to namespace them in essence.
Exactly. So now I have id: "sx" + st.Id and similar for the different types (sx, rx, hx, etc.) and item.id.substring(2) to strip off the "namespace." My original rant took longer to write than the refactor of the code.
Sounds more like a band-aid than a proper solution. I could be wrong... I usually am.
All items in your tree should have separate and unique IDs based on your schema. I would double check to make sure this isn't something you did, versus it being the control vendor's fault.
It hasn't been 3 full weeks since I posted a message here in the Lounge praising the current state of Linux. The short version of that post is that I have an old machine dedicated to watching media, hooked up to a projector, and it had been sluggish with Windows 10, which had also been giving me trouble with updates, so I installed Lubuntu on it (a lightweight version of Ubuntu) and it performed a lot better--at least in terms of always remaining responsive. Everything "just worked" and the machine served its (single) purpose again beautifully.
Cut to today. Now that I've had a few weeks of "real use" out of it, the review isn't so glowing anymore.
a) Video drivers. Even though the machine is old, it played back 1080p video just fine, so long as I had Nvidia's ION driver installed, which is trivial to install on Windows. Nvidia has a version for Linux, but it looks like it's a few kernel versions behind, so it doesn't install on the current Lubuntu (17.10). I'm no Linux kernel developer, which you apparently have to be in order to figure out how to get things working in this sort of situation. I had to give up on that, which is pretty much a showstopper as, without hardware acceleration, 1080p video stutters all over the place and is basically unwatchable. 720p, with just the basic video driver, "works", but it's definitely glitchy here and there.
b) LAN connections. Again, I'm no Linux expert, but I do know enough so that using "smb://machinename" in the file browser was all that was needed to access shares on other machines on my LAN. It worked well for about a week. Then it simply refused to access anything from the machine hosting my media files (some generic timeout error, even though it's clearly not spending any time waiting for a response, as the error is immediate). However, other machines on my LAN remained accessible to it. The consistent fix was to reboot the machine hosting my media files, even though other (Windows) machines could read everything with no issue. That machine is also hosting other files that are needed elsewhere, and rebooting all the time is going to upset some processes, so that's not a long-term solution.
c) The straw that broke the camel's back: One day the machine booted at 640x480 only, and refused to go back to whatever native resolution my projector is using (it's not a monitor+projector configuration - only the projector is hooked up to it). No amount of rebooting would change it back, and given the other two problems I already had, I knew that even if I managed to solve this by messing around with video configuration files (which I've done exactly once, years ago), this machine was destined anyway to get flattened/rebuilt from scratch.
I just so happened to have a recent version of CentOS on a bootable USB stick, so I figured, why not try another distribution altogether? The installation went fine; only, after the first reboot, I was looking at a blank screen (not even a blinking cursor). I've used the same image to set up VMs, so I know it works.
Now...I'm the guy who will spend hours, if not days, of his spare time digging into technical problems and keep looking long after others have given up. However, this is a machine I use when I simply want to kick back, turn off my brain, and watch something for an evening--in other words, this is the machine I use when I want to walk away from problems I'm trying to solve. So in this particular case, I simply have no patience for fiddling with OS settings.
So this week, back to Windows 10 I went. It's not without its own problems, but I can at least get it in a state where it's functional, it'll play 1080p flawlessly, and then I can leave it alone. Updates be damned if they start breaking - this machine isn't used to access the internet.
Linux...you're so close, yet still so far from actually being usable.