The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
In the end, it's all about breaking down and solving problems. I'll gladly learn a new stack/framework if it solves a problem at hand. (makes or saves $) That said, I usually don't (aside from maybe reading articles) invest in learning something new just to add a feather to my cap.
On another topic: with the answers to the universe at our fingertips these days, getting by on your wits is much easier than it used to be. Either I've done it (or something like it) and can re-use the code/logic, or I can usually find something useful in less than 10 seconds using Google. This is why I haven't bought a real programming book/manual in more than 5 years. These days the only mastery required is in phrasing search terms.
One cannot simply learn everything that the industry uses, and that could apply to other fields too. Considering the remarkable impact that computing and engineering have made on other disciplines, and considering the modern trends in the industry, older systems and technologies get replaced by newer systems and programming languages... On a long enough timeline, the survival rate for everyone drops to zero...
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
To digress a little... Time was, an intelligent and educated person could know just about everything there was to know. Literally. And from that grew the stereotype of the lone scientist in his lab coming up with some new invention to change the world... For a while, such people could exist, but not any longer. No one can know everything, not even within one subject area. The most anyone can be is a master of one or two (or more, maybe) disciplines within a subject; there is that much knowledge now.
So science now, and in the future, is and will be a collaborative affair. The big advances now - take nuclear fusion (if it ever happens), quantum computing, or a myriad of medical advances - these aren't and won't be made by our stereotypical white-coated lone scientist in a lab, but by the collaborative efforts of different research groups around the world.
We all have to stand now on the giant collective shoulders of those around us in order to see anything.
The code I am trying to maintain began with a well-defined message-based API. It was a little slow on the under-powered embedded system we started with, so "optimizations" were made. This is my current contract interface: API[^]
I now have logic in some key processing that depends on the string content of a global value. Rather than just look at the interface, I am reduced to searching through all project files for all references of said global string variable. I have code from multiple targets in the wrong files, etc. I've never seen entropy attack so fast.
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Don't try to learn anything new until you need to. For instance, if knowing A, B, and C gets the job done where you're at now, then become proficient in A, B, and C.
When you're considering trying to find a new job (or have to because you got fired/laid off), look at what's currently in use, and learn enough to at least be familiar with the general territory. When you do get a job doing the new tech, become proficient in it.
Little secret - all programming languages are essentially the same. The primary stuff you have to learn is the topology of the framework(s) du jour. THAT is where the steep learning curves exist.
I've been doing this crap for 40 years, and that's the way I've been doing it the entire time.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
I was a little too young to learn programming in the 60s, but had my university education around 1980. That was an age where a lot of exciting things happened in the language world - every man had his own language. I can't count the languages I have been programming in.
So when people asked me how many languages I was familiar with, I said: maybe three or four.
First, the algorithmic languages - Pascal, C, Fortran and their kin - where you spell out the flow of control. Second comes the "workspace" model, which is like a sandpit where you throw in and take out functions and objects over time. Smalltalk comes in this class; my main experience is with APL, which is also quite different with its extremely array-based data structures. Yet the most essential difference from the algorithmic group is the workspace model.
The third group is the predicate languages, of which I have only used three variants: SQL, Prolog and SNOBOL. You could say that SNOBOL is a semi-algorithmic language; it has flow control and at the top level is rather sequential, but the more familiar you are with the language, the more you leave to predicate logic. I never became friendly with XSLT, and am happy that I managed to sneak out of it; the regexes I have to do every now and then are enough.
Fourth group: essentially Lisp, which I have only used for programming Emacs. I can see that some people like it, but it doesn't give you much help in offloading the conceptual model into the saved file; essentially you must free up space inside your brain to hold the model.
For a couple of projects, we peeked at functional programming, but I wouldn't count that as number five. We just studied the syntax of Erlang (and peeked at others from a distance), but I never really tried it in practice.
I would rather like to add as number five: data modelling. It is not a complete language by itself, as it usually won't provide definitions of operations, only the interface to them. I think this language (group) is the most underrated one! A good data model, whether you use ER or ASN.1 or an XML schema, is essential to understanding the problem domain. You may say that if you use the whole of the OSI communication protocol stack toolbox, you do operations modelling very much in the same spirit as you do data modelling.
We may add yet another language group: state/event programming. Look at how the OSI session layer is programmed: tables with current state along one axis and event along the other, each square stating predicates and actions. No other language could possibly come close to that state/event description of the logic when it comes to clarity, conciseness and freedom from ambiguity.
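To make the table idea concrete, here is a minimal sketch of table-driven state/event programming. This is not the actual OSI session layer; the states, events, guards and actions are all made up for illustration.

```javascript
// One row per state, one column per event; each cell holds an optional
// guard (predicate) and an action that returns the next state.
const table = {
  idle: {
    connect: { guard: (ctx) => ctx.ready, action: () => "connecting" },
  },
  connecting: {
    accept: { action: () => "open" },
    reject: { action: () => "idle" },
  },
  open: {
    close: { action: () => "idle" },
  },
};

function step(state, event, ctx) {
  const cell = table[state] && table[state][event];
  if (!cell) return state;                          // event not valid here
  if (cell.guard && !cell.guard(ctx)) return state; // predicate failed
  return cell.action(ctx);                          // action picks next state
}
```

The whole protocol is visible at a glance in the table; the interpreter (`step`) never changes when you add a state or an event.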
- - -
Gee, did we have a plethora of languages in those days! Now we have chopped off eighty percent of the algorithmic group and 100% of the rest. State/event is squeezed into a switch. Data modelling is squeezed into a C struct. The workspace concept is squeezed into C malloc, or by its modern name: new. List processing is squeezed into C linked structures... And so on. One bad thing is that today's young programmers never ever question the () around if/while conditions (whether they program in a C variant, Python, Java, ...). When they see Pascal code, they wonder: didn't you forget the parentheses? Talk to them about workspace models, and they worry how efficient the malloc will be in such an environment; and when a Cobol decimal field is 6 digits wide, they figure that a long long is needed to hold the value range.
Really, today's programming world is a lot more primitive than in the 80s.
And you could say the same about OSes. About file systems. About communication protocols. About user interfaces. Maybe the 80s were a chaos, but it certainly wasn't that primitive monoculture of today, where 95+% of all programming is done in C-like languages, 95+% of all data traffic is limited by the Internet Protocol family at least at some level or stage, 95% of all data is stored in a file system that provides no high level support, and for 95% of process/thread modelling the entire world uses one of two models - and those two certainly do not differ very much.
True: We have got dozens of variations on C. Dozens of variations of U*ix. Dozens of variations of sh. Dozens of variations on fork. But those are essentially small ripples on the surface. Fundamentally different ways of thinking and of doing things have weathered away. I think that is just as big a loss as the ripples are a problem.
As stated, it's been like this for decades. However, the tools that exist and that people want are far greater in number than they were even 10 or 20 years ago.
As a dev approaching 15+ years of work and 20+ years of programming experience, I'd say you basically have two semi-smart choices to make as you grow up in this field. One is to position yourself to grow from dev to middle management and beyond. Not everyone is managerial material, or wants to be, so this isn't always the best option.
The other smart path is this: for the first 5-10 years of your career, look for the BIG tech tools that every big company uses and become an expert amongst experts. After that tech has been dead for 10+ years, dig up all of your old resources, load up some old VMs and re-familiarize yourself with the "legacy" tech.
There are a lot of major corporations that are not in IT that still have old COBOL/RPG mainframes. There are also a ton of VB6/very-old Microsoft Access databases that are in use in even BIG companies. These companies often get to the point where this stuff breaks or they realize they have to throw a TON of money to upgrade their systems. Software specs tend to not exist within these companies (or were long lost) so they need legacy gurus to come in, dissect the existing system(s) and then fix it or spec-out a new system from old code. This is "consultation" work and can pay very well, if you are familiar with the right tech.
This isn't "sexy", but by golly, if you get bored with the "rat race" of being perpetually obsolete, let your most obsolete expertise start to work for you. I'm not there yet, but I'm probably going to start pushing "WinForms" development around 2021. I plan to work hard at staying on top of what is relevant today, but in 2021 I'll be 40, and that's a good age to start pulling out the old familiar tech that none of the "new kids", fresh out of school, have even heard of, or will laugh at when they still see it in use. Suckers... it's the desperate corps that pay the best, the ones scrambling to find the grey-beard experts.
Yup. I believe the latest buzzword today is "full stack" developer. Sorry, I don't buy that designation AT ALL. You could re-brand that "jack of all trades, master of none". I'm sorry, but the designation is pure B.S.
I've been developing code for 40 years, and I think I've developed some good proficiency in that time and know a few good technologies to use in my development. My code gets answers and it runs FAST. (I've had more than one employer ask me in an incredulous tone, "why does your stuff run so fast?") Er, maybe it's because I don't haul in a couple of gigs of library code to run my executables...
I don't even apply for positions that are looking for "full stack" developers because, IMHO, they are completely deluded as to what software development is really about. Which would be... solving problems? Full stack... seriously?
If you think hiring a professional is expensive, wait until you hire an amateur! - Red Adair
Back in the day, the progression was:
Sr. Analyst (Or Business Analyst)
<some level of management>
I always preferred adding Analyst. First you learn the syntax, and the environment.
As a Jr. Programmer, you often took someone's scribbles of code and punched them onto punch cards.
The person reviewed them. One programmer could keep a few Jr. Programmers busy. (things changed).
Usually it was teams of both...
After the language/syntax and environment were learned, you moved up.
The real interplay is in taking business needs and getting to computer solutions.
My favorite job interview was one where I was competing with someone with 5 years of Clipper experience for a Clipper job. I had SEEN Clipper code, and had done a little dBase code. But I had great analytical skills.
The guy interviewing me for a part-time position was convinced he would hire the "Pro", and not me, but already had my interview scheduled.
I simply explained that it is the analysis where all the failures begin. The syntax of the language is easy enough to learn, if you are solving the right problem. I asked him to think about the "fixes" he had to have the previous guy make. What percentage were:
- Did not understand the goal properly
- Logic Error (Did not express the goal properly)
- Lack of testing
- Lack of User Sign off
- User Error/User Confusion
- Bad Syntax/Failure to use the programming language correctly?
I explained to him, that if he hired me, I would drive the first few items to ZERO occurrences, and that my biggest fear was programming myself out of a job, because the current guy was constantly fixing his own mistakes. He laughed. He thought... He Hired...
One year later, he apologized that he ran out of work for me to do. Wrote me a 2 page letter of recommendation, and gave me a minimum number of hours each week to do whatever I wanted.
I want to hire creative problem solvers who know how to solve problems and express them in code.
Then the importance of the language is reduced. The rework is reduced.
Nobody wants to help that person by giving them a little time to learn a technology they may need.
That's crazy. Good problem solvers are hard to find. Great programmer/analysts are hard to find.
So old companies would make them!
Heck, even into the early '90s you could "get by" with just a few good skills. I think retraining hell is companies' revenge for having to pay us so well. I have a Despair Inc coffee mug that says "Just because you're necessary, doesn't mean you're important." That sums it up nicely.
I agree with you that there is a lot more to know now than in the 60s. More importantly, I think, things change a lot faster now.
But to be fair, there's a lot they had to know back then that most of us don't have to think about at all anymore. In particular, we don't usually need to think nearly as carefully about hardware issues (memory constraints, timing issues) or lower-level software issues (how to write a quicksort algorithm or a garbage collector). We don't need to cram 8 different boolean values into a single byte that we mask and shift to read values from. We don't need to write code that modifies itself or overlays itself to save memory. And we don't have to wait fifteen minutes or more for an edit/compile/run cycle.
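For anyone who never had to do it, the byte-packing trick looks something like this. The flag names are made up for illustration; the bitwise operations are the point.

```javascript
// Packing boolean flags into one byte, the way memory-starved code once
// did routinely: each flag owns one bit position.
const FLAG_READY   = 1 << 0;  // 0b00000001
const FLAG_DIRTY   = 1 << 1;  // 0b00000010
const FLAG_VISIBLE = 1 << 2;  // 0b00000100

let byte = 0;
byte |= FLAG_READY;                           // set a flag with OR
byte |= FLAG_VISIBLE;
byte &= ~FLAG_READY;                          // clear a flag with AND-NOT
const isDirty   = (byte & FLAG_DIRTY) !== 0;  // test a flag with AND
const isVisible = (byte & FLAG_VISIBLE) !== 0;
byte ^= FLAG_DIRTY;                           // toggle a flag with XOR
```

Eight booleans in the space of one, at the price of every read and write being a masking expression instead of a plain variable access.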
First, I readily admit I should have known better.
So I'm populating a jqxTree control with items, and I set the ID to the record's ID. There are multiple parents, and they relate to different tables (therein sort of lies one of my design flaws but I wanted separate tables even though the items are all basically similar, because at some point their similarities will diverge.)
Anyways, so of course, the record ID's can be the same because they are from different tables. Which means of course that a child under one parent can have the same id='19' as a child under a different parent.
Which means that when you click on one of the children, jqxTree's click event gives you the item you clicked on, but because the IDs are the same, it gives you the last item with that ID, which is a node in a different branch of the tree!
OK, I consider that a bug in jqxTree, it should give me the damn element I clicked on, whether they have the same ID or not.
For sake, I wouldn't have to deal with this BS in a WinForm app.
I'll report it as a bug in jqxWidgets forum though. They tend to be good at either fixing things or responding with "it works as intended."
To speak in code, it would be something like using IDs of the form "table-id" to namespace them, in essence.
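A minimal sketch of that namespacing idea, assuming nothing about jqxTree itself; the helper names and table prefixes here are my own invention.

```javascript
// Namespace tree-item IDs with their source table so that rows from
// different tables can never collide in the DOM.
function makeItemId(tablePrefix, recordId) {
  return tablePrefix + "-" + recordId;      // e.g. "orders-19" vs "users-19"
}

// Recover both halves in the click handler.
function parseItemId(itemId) {
  const sep = itemId.indexOf("-");
  return {
    table: itemId.slice(0, sep),            // which source table
    recordId: itemId.slice(sep + 1),        // the original record ID
  };
}
```

Now two children whose records both have ID 19 get distinct element IDs, and a click handler can still tell which table the record came from.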
Exactly. So now I have id: "sx" + st.Id and similar for the different types (sx, rx, hx, etc.), and item.id.substring(2) to strip off the "namespace". My original rant took longer to write than the refactoring of the code.