The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
In retrospect the name was maybe not a good choice. It's short for Charmed Quark Controller, but always gets shortened to CQC. If you search for CQC though, in addition to close-quarters combat, you get all kinds of stuff. Camden Chess Club, Care Quality Commission, California Quality Collaborative, China Compulsory Certification, Community Quality Council, Canadian Quilting Club, and on and on.
I think you missed the point. Google has never ever sold software. They are a services company. The same can be said of Facebook. And there are plenty of companies selling software AND contributing to open source. The two are not related at all.
What I do see is that more and more companies are releasing the portions of their software not directly tied to their business. Facebook released React, and Twitter released Bootstrap, for example. Both projects help move their creators closer to their goals. Neither is the core product.
Now there is also a movement toward selling the data gleaned from providing "free" software as well. And on phones some have found they make far more from the advertising than they can make from selling ad-free versions. But the two issues are separate.
Sure, I was just saying that Google sort of set the 'standard' (or the sub-standard as I would consider it) of making money by selling your customers, instead of selling messy things like products which you have to support and can't just drop any time you want. And they were so successful that it's pushed everyone in that direction.
Other companies, who were actual software vendors, want to go the same way, but they can't do it exactly the same way. So it's all now push everything into the cloud. More and more of the software products we use will become things we have to rent and can't use if our internet connection is down.
I think you are conflating the idea of "free" software with "open source" software.
There is a big difference between "open source" and "free". Many companies are moving toward a service contract model, which I believe is the right way to go.
Off the top of my head I can think of Canonical (Ubuntu), MongoDB, and Meteor. All of these companies produce FOSS (Free and Open Source Software), but they sell support contracts to businesses that need them to keep maintaining the software.
Additionally, software can be "open source" but not "free".
I don't think "open source" software has anything to do with consumers being the product. You could make that argument for "free" software, on the other hand, like Facebook and Google (neither of which is "open source").
I'm not conflating them, I'm just saying it's all part of a common pattern. Not open source per se, but the recent trend of companies to start open sourcing so much stuff. If your goal is to get out of the selling software business and get into the selling customers business or renting software business, then suddenly some things aren't important anymore like they would have been before. Even the (client side) OS isn't important anymore, because all you care about is getting more people using your cloud services. If support for your cloud based services is built into the OS, then giving that away, or almost so, also becomes a strategic move.
If it makes you look beneficent at the same time, all the better.
I'm going to play devil's advocate and offer a counterpoint. Companies like Google and Facebook deal with such massive amounts of data that they had to approach solutions from a different perspective. The methods they use weren't being taught in schools and the information wasn't readily available. By open sourcing it, the community of programmers around the world now has access to it. This way, when they hire someone to work on these complicated projects, they can filter out candidates who never bothered to study what they make or how it works, and they don't have to train new engineers on these things. If they're going to give away their most prized solutions and algorithms, why not give all of it away?

And yes, they do suck people into being dependent on them for services and the cloud. But the alternative is everyone builds their own proprietary systems and their own cloud, or if they can't afford it they build nothing. Imagine instead of a handful of cloud providers you have hundreds of them, most of them closed to the public. Developers who change jobs would then need to learn a completely new cloud environment instead of taking their skills with them.
Companies like Microsoft and Amazon are actually lowering the barrier to entry to developers who need a cloud infrastructure, and yes they profit as a result, but it's not like they're gouging people or not providing a valuable service. All of the open source code serves as a model for how things can be done, you can take it as is and be dependent or use it as a starting point to understand how you might do it on your own, or better even.
Not exactly open source, but related: when MFC was introduced, many years ago, I considered it and rejected it, because I saw that the functionality, although great, meant that if I designed my solution around MFC it would tie me to MS solutions far more than I wanted. So I chose more low-level libraries with less support, but created applications that could be far more easily ported to different systems.
By the way: "open-source lock-in" is a much underestimated issue. Believing that you can freely incorporate some open source into your solution very often leads you to accept this required library for this, the other library for that ... often recursively. And the data formats defined by that open source library fit nicely with that class of open source libraries, but not necessarily with your application; reshaping the data may require significant effort. Too often a specific UI style is assumed, e.g. a (synchronous) CLI interface onto which you have to map your (asynchronous) GUI.
Open source may be great for learning how to implement or use some technique. But I prefer to read the source code, understand it, and copy the good elements of it into my own solution, the way it suits me, rather than blindly accept the way the original binds me to a whole lot of other open source solutions that I do not have the resources to treat the same way.
That's why my system is a fully integrated, monolithic system. No mixing and matching of bits and pieces that may or may not fit well together. It's all of a piece. No STL/standard library stuff because that's just another piece that you can't make fit into anything that it doesn't already understand.
A good principle is to make sure you always know well what is going on at the first abstraction level below the one you are working at. Obviously, you do not write your own sine function just because you do not trust the standard library.
We may have different opinions on how detailed your understanding should be - e.g. if you use a compression library, do you need to know the details of the compression algorithm? As long as how it does it doesn't affect my code, it can be treated as a well-defined black box; since I know quite well the principal ideas behind the various types of compression (lossless, lossy, various application-specific variants, ...), that is sufficient for me. But I am not satisfied with functions of the kind SolveTheProblemForMe() when I don't have a clue about how the problem is solved.
Your reply may be read as a rejection of all sorts of standard libraries - and if that is your intention, I hope it is ironic. If you understand what a library function will do for you, accept it. If you know how it could be done, you can save yourself a lot of coding work. But if you pick up some library or open source code because you don't understand how to solve your own problem, and will leave it to someone else to handle, then you are on the wrong track.
Unfortunately, you too often see people picking up free solutions because they don't understand their own problem.
I'm not sure I understand the point you are making. But my general reply is that it has nothing to do with understanding or saving work. My system is the way it is because it's about tight integration. Unless you've really worked in a system like that, and almost no one has in the C++ world, you can't really appreciate what it does for you.
If you want a truly integrated system, you can't just use random libraries. I can't make such a library use my logging system, my exceptions, my statistics system, my threading system, etc... Those are things that just wrapping some black box of code won't deal with.
And of course if something goes wrong in the field, these things are NOT black boxes to me. So I just don't have the sorts of issues that are so common when you use a bunch of black boxes. Where you have to upgrade one to fix some problem and realize that has created five more. Or something goes wrong in the field and it's very difficult to figure out why. If something goes wrong in my version, it can log something to my logging system, which can also log to a centralized log server among other things.
If all you want to do is get something working, then of course you can do it with a bunch of pieces and parts. But that's not what I'm trying to do.
I'm not sure I understand the point you are making.
My point was that in some cases you have to trust the code of others - be that trig functions, drivers or whatever. A few years ago, our main products were 8051 based; we wrote the monitor ("OS") ourselves, but even then we had to trust the drivers supplied by the developers of peripherals. Today we are on more modern hardware, but the increased complexity means we become more and more dependent on software developed by others.
We are not using "just random libraries". We are using libraries, drivers etc. from subcontractors where we are familiar with the QA procedures. We know exactly what the functions are supposed to do. We do have access to the source code for inspection (partially under NDA contracts). When done that way, I can defend using code developed and maintained by others.
That was the point I was making. For anything but trivial systems, you will have to trust code obtained from others, to some degree. We are in the process of introducing an alternative to our proprietary "OS", based on an open-source embedded OS - but that is one where we actively take part in the further development of it, in close contact with other stakeholders.
"Tight integration", in a technical sense, I see as a quite trivial matter. In the embedded world, you always deliver to the customer a complete, self-contained code image. In the IoT world, there is very little of dynamic linking and over-the-network retrieval of missing modules.
My IT childhood is so long ago that e.g. Python's quiet downloading of dependencies you are not aware of - pulling some arbitrary version that could have been a different one yesterday, and may be a different one tomorrow - gives me the shivers. In our company, this is relevant for test tools only, not for delivered software, and we do have tools to handle it reasonably well. It does cause problems every now and then, when developers go behind the tools to retrieve "the latest and greatest" version, but for the main production line, we control it.
Again, my point is know who you want to trust. If you know that you can trust them, fine. Use the code, even if it isn't written by you. If you pick up some code from wherever, just because you think it solves a problem that you don't know how to ... You asked for it, you got it. Or if you like: you may be in deep sh*t.
Even if you obtained the source code and "integrated" it into your system - that is not the point of it. Any code you integrate is your responsibility, whether you understand it or not.
Obviously I'm not going to write an operating system or anything. There's no practical recourse but to use the OS APIs. I'm not terribly concerned that they are going to be flakey, since we only use the lowest level OS APIs we can, and those are very widely banged on. And they mostly are just APIs, i.e. single calls to do one thing. That's something you can wrap cleanly.
What's important is that all of the code on my side of the line be very tightly integrated to the extent that's possible. That means it's all written in terms of my interfaces, participates in my standard system functionality as appropriate, etc... So there's never any 'impedance mismatch' between parts of the code, you never have to translate from this scheme to that scheme, you never have to use inconsistent styles or mechanisms to interact with any of the code, because it is all of a piece.
And if by some decision Google closes Google Drive and Google Photos and makes our files inaccessible, we can do nothing but suck our thumbs. It's a free service; we have no grounds to sue them. So, cry in the corner instead.