I recently replaced two nested loops full of index arithmetic with a single function call. I even have a theory of why this old code was in place, but I still feel dumbfounded at having spent several minutes deciphering what the code was supposed to do when, really, a single call to a single well-documented function would have done the job.
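To make that concrete, here's the flavor of it. This is a hypothetical C++ reconstruction, not the actual code I was looking at, and copyBlock is an invented name:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Before: two nested loops with index arithmetic, copying a row-major
// W x H block one element at a time:
//
//     for (int r = 0; r < H; ++r)
//         for (int c = 0; c < W; ++c)
//             dst[r * W + c] = src[r * W + c];
//
// After: the region is contiguous, so one well-documented call does the job.
void copyBlock(const std::vector<int>& src, std::vector<int>& dst, int W, int H)
{
    std::copy(src.begin(),
              src.begin() + static_cast<std::ptrdiff_t>(W) * H,
              dst.begin());
}
```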
We all admit that this happens, but we never stop to think about why it happens. Let me put forth a hypothesis - deadlineitis!
Programmers have been bamboozled into believing that software development should be an engineering discipline. While there are some things that engineering can offer to software development, there is much that the engineering discipline cannot "solve" for software development.
Let me defend this. "Engineering" is the discipline that says we can build massive structures by applying known smaller structures that have mathematical descriptions, which can be added together to produce a mathematical model of the desired structure.
In software development, so-called "software engineers" repeatedly encounter instances where there is NO smaller software structure - merely methods that "almost" work, methods which are "not quite" what is required, or even a complete lack of smaller structures that come close to what is required. Add to this that combining all these "iffy" structures leads to coupling problems - emergent behavior unanticipated by anybody. (And if you are looking for mathematical precision, well, good luck with that.) So software development winds up being a series of "experiments" - tweaking prior structures, and even developing your own structures, with the "requirements" for those structures being imprecise and unclear.
Under "project management", this is seldom, if ever, accounted for, because, like so much other stuff, the concept of "project management" was imported into the domain of software development from the engineering disciplines.
Imagine, if you will, the plight of a bridge engineer if he had to redefine what a "truss" was multiple times during the life of the project.
It's relatively easy to put together a bunch of well-known structures to complete a much larger structure on time, on budget, and safe (yet even then we have major cost overruns and major delays, and even failures of the final product).
The problem with software is the fact that compared to "physical" products, there is no conceptual limit to what an individual piece of software might be. You want it to do A, B and C. Why not? Could you get it to do D as well? No problem. How about E and F? Well, that may strain our resources a bit. Oh, we can't have that. How about if we eliminate C and D - could we get E and F for the same cost and in the same time?
And on and on it goes. So what happens when a software developer has to develop one of these base widgets, or tweak another? The programmer does not have a mathematical model to rely on, but just a natural language description of what has to be done. This is, of course, ambiguous, and the first kick at the cat fails. Two choices - tweak, or total rewrite. Over the course of development of this one widget, total rewrite is an option at the start, but hardly one over the course of time. And the deadline is looming. (Deadlines ALWAYS loom.) So it comes down to tweaking. Finally, unit testing succeeds. Hallelujah. You know that you should go back and refactor, and tighten up the code, but you're scheduled to work on another widget, so you start on that while the widget you've completed sits in the integration testing queue.
And just when you are deeply into the next widget, you are hauled away because your widget failed integration testing. You are already behind on that other widget, so you get all the information you can on how this widget failed integration testing, and you tweak the code, and unit test it, and put it back into the integration testing queue, and you go back to that other widget, and on, and on, and on, and on.
And finally, after all the developers have collectively torn their hair out by the roots, have lost an average of 20 pounds because "there's no time to eat", can't remember the last time they sat down with their significant others, and have bloodshot eyes from putting in 18 to 20 hour days for the last stretch of the project, QA approves it for release.
And six months from now, some new hire in maintenance is calling you up on the phone complaining that your code is too complex!
Though I've open sourced a butt-load of code, the current trends that I see around open source are not good. I see all these people screaming about how great open source is and how it's stupid to be proprietary these days and how companies are finally waking up and becoming more consumer-centric.
But that's exactly the opposite of what is happening. All of this is part of an inevitable move to copy Google's example. They have worked out a system where they make massive amounts of money off of us, but without having any obligations to us, because they don't sell us stuff, they sell us.
That has created a world where all the other big companies are going the same direction. Stop being a company that makes software to sell, and become a company that gives away software as a gateway drug to getting customers addicted to their cloud based services. The inevitable end of that road takes us back to the 60s, with a huge, air conditioned machine that we have to rent time on.
If you don't sell the software itself, then you make your money by spying on users and selling the data, by charging rent to use your software, or by pushing ads. Are any of those scenarios actually better for us as customers? It ultimately means that they have no obligation to you beyond the end of the month. They can cut you off any time, or drop any product, and you have no leg to stand on because you never bought it.
And of course it'll go meta as well, where you'll be renting software from people who are renting cloud-based services from larger companies, services that they use in the software they rent to you. If large companies can create features that everyone feels they need in order to be competitive, and they can keep that competitive level so compute-intensive and complex that it can't reasonably be replicated for local use, then folks will use those cloud-based services in their own software because they feel they have to.
Speech recognition is a good current example. My CQC automation system has an all local voice control system, but it really can't compete with the Echo, which uses state of the art DNN technologies backed by massive amounts of training data and computing resources. So we also have to support the Echo to be competitive. And the odds aren't great this will change.
As these companies become enormously profitable not selling software, they are sucking up massive numbers of top engineers to help them make that cycle go faster, with salaries and benefits that others can't compete with. Even if you have some lingering doubts about what these companies are creating long term, it's very hard to turn down the pay.
Anyway, I'm obviously not against open source per se, having open sourced more code than your average hundred developers combined. But I just think that there's a lot of naivete out there about all of this and about why companies like Google open source so much code. It's not largesse, it's long-term strategy.
Of course there's naivete out there, way too much of it. I've spoken with several people myself about topics such as open source software, bitcoin (or cryptocurrency in general), and similar stuff. My main conclusion is that people tend to fall in love with the idea, with the ideals behind the idea, and completely ignore such puny details as real-world ramifications. When kids make that mistake, it's cute. In the case of adults, though, it quickly gets awkward.
In retrospect the name was maybe not a good choice. It's short for Charmed Quark Controller, but always gets shortened to CQC. If you search for CQC though, in addition to close quarters combat, you get all kinds of stuff: Camden Chess Club, Care Quality Commission, California Quality Collaborative, China Compulsory Certification, Community Quality Council, Canadian Quilting Club, and on and on.
I think you missed the point. Google has never ever sold software. They are a services company. The same can be said of Facebook. And there are plenty of companies selling software AND contributing to open source. The two are not related at all.
What I do see is that more and more companies are releasing the portions of their software not directly tied to their business. Facebook has released React, for example (and Twitter, Bootstrap). Such projects help them get closer to their goals, but they are not the core product.
Now there is also a movement toward selling the data gleaned from providing "free" software as well. And on phones, some have found they make far more from advertising than they can make from selling ad-free versions. But the two issues are separate.
Sure, I was just saying that Google sort of set the 'standard' (or the sub-standard as I would consider it) of making money by selling your customers, instead of selling messy things like products which you have to support and can't just drop any time you want. And they were so successful that it's pushed everyone in that direction.
Other companies, who were actual software vendors, want to go the same way, but they can't do it exactly the same way. So it's all now push everything into the cloud. More and more of the software products we use will become things we have to rent and can't use if our internet connection is down.
I think you are conflating the idea of "free" software with "open source" software.
There is a big difference between "open source" and "free". Many companies are moving toward a service contract model, which I believe is the right way to go.
Off the top of my head I can think of Canonical (Ubuntu), MongoDB, and Meteor. All of these companies have FOSS (Free and Open Source Software), but they sell contracts to businesses that need them to keep maintaining the software.
Additionally, software can be "open source" but not "free".
I don't think "open source" software has anything to do with consumers being the product. You could use that argument for "free" software on the other hand, like Facebook and Google (neither of which is "open source")
I'm not conflating them, I'm just saying it's all part of a common pattern. Not open source per se, but the recent trend of companies to start open sourcing so much stuff. If your goal is to get out of the selling-software business and into the selling-customers or renting-software business, then suddenly some things aren't important anymore the way they would have been before. Even the (client side) OS isn't important anymore, because all you care about is getting more people using your cloud services. If support for your cloud-based services is built into the OS, then giving it away, or almost so, also becomes a strategic move.
If it makes you look beneficent at the same time, all the better.
I'm going to play devil's advocate and offer a counterpoint. Companies like Google and Facebook deal with such massive amounts of data that they had to approach solutions from a different perspective. The methods they use weren't being taught in schools, and the information wasn't readily available. By open sourcing it, the community of programmers around the world now has access to it. This way, when they hire someone to work on these complicated projects, they can filter out candidates who never bothered to study what they make or how it works, and they don't have to train new engineers on these things. And if they're going to give away their most prized solutions and algorithms, why not give all of it away?

And yes, they do suck people into being dependent on them for services and the cloud. But the alternative is that everyone builds their own proprietary systems and their own cloud, or, if they can't afford it, builds nothing. Imagine instead of a handful of cloud providers you have hundreds of them, most of them closed to the public. Developers that change jobs would then need to learn a completely new cloud environment instead of taking their skills with them.
Companies like Microsoft and Amazon are actually lowering the barrier to entry to developers who need a cloud infrastructure, and yes they profit as a result, but it's not like they're gouging people or not providing a valuable service. All of the open source code serves as a model for how things can be done, you can take it as is and be dependent or use it as a starting point to understand how you might do it on your own, or better even.
Not exactly open source, but related: when MFC was introduced, many years ago, I considered it and rejected it, because I saw that the functionality, although great, meant that if I designed my solution around MFC it would tie me to MS solutions far more than I wanted. So I chose more low-level libraries with less support, but created applications that could be far more easily ported to different systems.
By the way: "Open-source lockin" is a much underestimated issue. Believing that you can freely incorporate some open source into your solution very often leads you to accept this required library for this, the other library for that ... often recursively. And, data formats defined by that open source library fits nicely in with that class of open source libraries, but not neccessarily with your application; reshaping the data may require significant effort. Too often a specific UI style is assumed, e.g a (synchronous) CLI interface onto which you have to map your (asynchronous) GUI.
Open source may be great for learning how to implement or use some technique. But I prefer to read the source code, understand it, and copy the good elements of it into my own solution, the way it suits me, rather than blindly accept the way the original binds me to a whole lot of other open source solutions that I do not have the resources to treat the same way.
That's why my system is a fully integrated, monolithic system. No mixing and matching of bits and pieces that may or may not fit well together. It's all of a piece. No STL/standard library stuff because that's just another piece that you can't make fit into anything that it doesn't already understand.
A good principle is to make sure you always know well what is going on at the first abstraction level below the one you are working at. Obviously, you do not write your own sine function because you do not trust a standard library.
We may have different opinions on how detailed your understanding should be - e.g. if you use a compression library, do you need to know the details of the compression algorithm? As long as how it does it doesn't affect my code, it can be treated as a well-defined black box; I know quite well the principal ideas behind the various types of compression (lossless, lossy, various application-specific variants, ...), and that is sufficient for me. But I am not satisfied with functions of the kind SolveTheProblemForMe() when I don't have a clue how the problem is solved.
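For instance, a black-box compression wrapper might look like this; a minimal sketch assuming zlib as the library, with compressBytes being an invented name:

```cpp
#include <stdexcept>
#include <vector>
#include <zlib.h>   // the black box: we rely only on its documented contract

// Hypothetical wrapper: callers see bytes in, bytes out, and our own error
// reporting; how deflate works internally never leaks through this interface.
std::vector<unsigned char> compressBytes(const std::vector<unsigned char>& input)
{
    uLongf outLen = compressBound(static_cast<uLong>(input.size()));
    std::vector<unsigned char> output(outLen);
    const int rc = compress(output.data(), &outLen,
                            input.data(), static_cast<uLong>(input.size()));
    if (rc != Z_OK)
        throw std::runtime_error("compression failed");
    output.resize(outLen);
    return output;
}
```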
Your reply may be read as a rejection of all sorts of standard libraries - and if that is your intention, I hope it is ironic. If you understand what a library function will do for you, accept it. If you know how it could be done, you can save yourself a lot of coding work. But if you pick up some library or open source code because you don't understand how to solve your own problem, and leave it to someone else to handle, then you are on the wrong track.
Unfortunately, you too often see people picking up free solutions because they don't understand their own problem.
I'm not sure I understand the point you are making. But my general reply is that it has nothing to do with understanding or saving work. My system is the way it is because it's about tight integration. Unless you've really worked in a system like that, and almost no one has in the C++ world, you can't really appreciate what it does for you.
If you want a truly integrated system, you can't just use random libraries. I can't make such a library use my logging system, my exceptions, my statistics system, my threading system, etc... Those are things that just wrapping some black box of code won't deal with.
And of course if something goes wrong in the field, these things are NOT black boxes to me. So I just don't have the sorts of issues that are so common when you use a bunch of black boxes. Where you have to upgrade one to fix some problem and realize that has created five more. Or something goes wrong in the field and it's very difficult to figure out why. If something goes wrong in my version, it can log something to my logging system, which can also log to a centralized log server among other things.
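To sketch the idea (all the names here are invented for illustration; the real system's facilities obviously look different):

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// Stand-ins for the in-house facilities every part of the system shares.
struct MyException : std::runtime_error
{
    explicit MyException(const std::string& reason) : std::runtime_error(reason) {}
};

void logToServer(const std::string& facility, const std::string& msg)
{
    // A real version would forward to the centralized log server.
    std::printf("[%s] %s\n", facility.c_str(), msg.c_str());
}

// Assumed third-party contract: returns < 0 on failure.
int thirdPartyOperation(int arg) { return arg >= 0 ? arg : -1; }

// The adapter at the boundary: foreign return codes are translated into our
// exception type and routed through our logger, so the rest of the system
// never sees an alien error scheme.
int integratedOperation(int arg)
{
    const int rc = thirdPartyOperation(arg);
    if (rc < 0)
    {
        logToServer("Widget", "thirdPartyOperation failed, rc=" + std::to_string(rc));
        throw MyException("widget operation failed");
    }
    return rc;
}
```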
If all you want to do is get something working, then of course you can do it with a bunch of pieces and parts. But that's not what I'm trying to do.
I'm not sure I understand the point you are making.
My point was that in some cases you have to trust the code of others - be that trig functions, drivers, or whatever. A few years ago, our main products were 8051 based; we wrote the monitor ("OS") ourselves, but even then we had to trust the drivers supplied by the developers of peripherals. Today we are on more modern hardware, but the increased complexity means we become more and more dependent on software developed by others.
We are not using "just random libraries". We are using libraries, drivers etc. from subcontractors where we are familiar with the QA procedurs. We know exactly what the functions are supposed to do. We do have access to the source code for inspection (partially under NDA contracts). When done that way, I can defend using code developed and maintained by others.
That was the point I was making. For anything but trivial systems, you will have to trust code obtained from others, to some degree. We are in the process of introducing an alternative to our proprietary "OS", based on an open-source embedded OS - but that is one where we actively take part in the further development of it, in close contact with other stakeholders.
"Tight integration", in a technical sense, I see as a quite trivial matter. In the embedded world, you always deliver to the customer a complete, self-contained code image. In the IoT world, there is very little of dynamic linking and over-the-network retrieval of missing modules.
My IT childhood is so long ago that, e.g., Python's quiet downloading of dependencies you are not aware of, fetching some arbitrary version that could have been a different one yesterday and may be a different one tomorrow ... that gives me the shivers. In our company, this is relevant for test tools only, not for delivered software, and we have tools to handle it reasonably well. It does cause problems every now and then, when developers go behind the tools to retrieve "the latest and greatest" version, but for the main production line, we control it.
Again, my point is know who you want to trust. If you know that you can trust them, fine. Use the code, even if it isn't written by you. If you pick up some code from wherever, just because you think it solves a problem that you don't know how to ... You asked for it, you got it. Or if you like: you may be in deep sh*t.
Even if you obtained the source code and "integrated" it into your system - that is not the point of it. Any code you integrate is your responsibility, whether you understand it or not.
Obviously I'm not going to write an operating system or anything. There's no practical recourse but to use the OS APIs. I'm not terribly concerned that they are going to be flakey, since we only use the lowest level OS APIs we can, and those are very widely banged on. And they mostly are just APIs, i.e. single calls to do one thing. That's something you can wrap cleanly.
What's important is that all of the code on my side of the line be very tightly integrated to the extent that's possible. That means it's all written in terms of my interfaces, participates in my standard system functionality as appropriate, etc... So there's never any 'impedance mismatch' between parts of the code, you never have to translate from this scheme to that scheme, you never have to use inconsistent styles or mechanisms to interact with any of the code, because it is all of a piece.
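A minimal sketch of that kind of clean wrap, using POSIX file calls purely for illustration (the SysFile name and the error handling are invented; this is not from any actual code base):

```cpp
#include <cstddef>
#include <fcntl.h>      // ::open
#include <unistd.h>     // ::read, ::close
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical facade: the one place that touches the raw OS file APIs.
// Everything above this layer sees only our own types, our own errors,
// and our own conventions - no impedance mismatch.
class SysFile
{
public:
    explicit SysFile(const std::string& path)
        : fd(::open(path.c_str(), O_RDONLY))
    {
        if (fd < 0)
            throw std::runtime_error("cannot open " + path);
    }
    ~SysFile() { if (fd >= 0) ::close(fd); }

    SysFile(const SysFile&) = delete;
    SysFile& operator=(const SysFile&) = delete;

    // One OS call, wrapped cleanly: failures become our exceptions.
    std::vector<char> readSome(std::size_t maxBytes)
    {
        std::vector<char> buf(maxBytes);
        const ssize_t got = ::read(fd, buf.data(), buf.size());
        if (got < 0)
            throw std::runtime_error("read failed");
        buf.resize(static_cast<std::size_t>(got));
        return buf;
    }

private:
    int fd;
};
```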
And if by some decision Google closes Google Drive and Google Photos and makes our files inaccessible, we can do nothing but suck our thumbs. It's a free service; we have no grounds to sue them. So we cry in the corner instead.