I'm not conflating them, I'm just saying it's all part of a common pattern. Not open source per se, but the recent trend of companies open sourcing so much stuff. If your goal is to get out of the business of selling software and into the business of selling customers or renting software, then suddenly some things aren't as important as they would have been before. Even the (client side) OS isn't important anymore, because all you care about is getting more people using your cloud services. If support for your cloud based services is built into the OS, then giving it away, or almost so, also becomes a strategic move.
If it makes you look beneficent at the same time, all the better.
I'm going to play devil's advocate and offer a counterpoint. Companies like Google and Facebook deal with such massive amounts of data that they had to approach solutions from a different perspective. The methods they use weren't being taught in schools, and the information wasn't readily available. By open sourcing it, the community of programmers around the world now has access to it. That way, when they hire someone to work on these complicated projects, they can filter out candidates who never bothered to study what they make or how it works, and they don't have to train new engineers on these things. If they're going to give away their most prized solutions and algorithms, why not give all of it away?
And yes, they do suck people into being dependent on them for services and the cloud. But the alternative is that everyone builds their own proprietary systems and their own cloud, or builds nothing if they can't afford it. Imagine that instead of a handful of cloud providers you have hundreds of them, most of them closed to the public. Developers who change jobs then need to learn a completely new cloud environment instead of taking their skills with them.
Companies like Microsoft and Amazon are actually lowering the barrier to entry for developers who need a cloud infrastructure, and yes, they profit as a result, but it's not like they're gouging people or failing to provide a valuable service. All of the open source code serves as a model for how things can be done: you can take it as is and be dependent, or use it as a starting point to understand how you might do it on your own, or even better.
Not exactly open source, but related: when MFC was introduced, many years ago, I considered it and rejected it, because I saw that the functionality, although great, meant that if I designed my solution around MFC it would tie me to MS solutions far more than I wanted. So I chose lower-level libraries with less support, but created applications that could be far more easily ported to different systems.
By the way: "open-source lock-in" is a much underestimated issue. Believing that you can freely incorporate some open source into your solution very often leads you to accept this required library for this, the other library for that... often recursively. And the data formats defined by that open source library fit nicely with that class of open source libraries, but not necessarily with your application; reshaping the data may require significant effort. Too often a specific UI style is assumed, e.g. a (synchronous) CLI interface onto which you have to map your (asynchronous) GUI.
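That last mismatch - a synchronous, CLI-style entry point that has to serve an asynchronous GUI - is usually bridged with a small adapter layer, and you end up paying for one per call. A minimal C++ sketch of the idea; ThirdPartyConvert is a made-up stand-in for whatever blocking call the library actually exposes, not a real function:

```cpp
#include <functional>
#include <string>
#include <thread>

// Made-up stand-in for a blocking, CLI-style library call.
std::string ThirdPartyConvert(const std::string& input)
{
    return "converted:" + input;   // the real library would do actual work here
}

// Adapter: run the blocking call on a worker thread and hand the result to a
// callback, so a GUI thread never stalls waiting on the library.
void ConvertAsync(std::string input, std::function<void(std::string)> onDone)
{
    std::thread([input = std::move(input), onDone = std::move(onDone)]()
    {
        std::string result = ThirdPartyConvert(input);
        // A real GUI would marshal this back to the UI thread
        // (PostMessage, signal/slot, dispatcher, ...) instead of calling directly.
        onDone(std::move(result));
    }).detach();
}
```

Multiply that little adapter by every style difference - data formats, error handling, threading assumptions - and the "free" library starts to cost real effort.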
Open source may be great for learning how to implement or use some technique. But I prefer to read the source code, understand it, and copy the good elements of it into my own solution, the way it suits me, rather than blindly accept the way the original binds me to a whole lot of other open source solutions that I do not have the resources to treat the same way.
That's why my system is a fully integrated, monolithic system. No mixing and matching of bits and pieces that may or may not fit well together. It's all of a piece. No STL/standard library stuff because that's just another piece that you can't make fit into anything that it doesn't already understand.
A good principle is to make sure you always know well what is going on at the first abstraction level below the one you are working at. Obviously, you do not write your own sine function just because you do not trust the standard library.
We may have different opinions on how detailed your understanding should be - e.g. if you use a compression library, do you need to know the details of the compression algorithm? As long as how it does it doesn't affect my code, it can be treated as a well defined black box, and since I know quite well the principal ideas behind the various types of compression (lossless, lossy, various application-specific variants, ...), that is sufficient for me. But I am not satisfied with functions of the kind SolveTheProblemForMe() when I don't have a clue how the problem is solved.
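In code, that kind of black-box boundary usually shows up as a small interface that hides the library entirely. A rough C++ sketch; the Compressor name and methods are illustrative, not taken from any particular library:

```cpp
#include <cstdint>
#include <vector>

// The application's view of compression: a well defined black box.
// How the bytes actually get smaller is the implementation's business.
class Compressor
{
public:
    virtual ~Compressor() = default;
    virtual std::vector<std::uint8_t> Compress(const std::vector<std::uint8_t>& data) = 0;
    virtual std::vector<std::uint8_t> Decompress(const std::vector<std::uint8_t>& data) = 0;
};

// A concrete class would wrap whichever library you settled on (zlib, zstd, ...);
// the rest of the application only ever sees the Compressor interface, so the
// choice of algorithm never leaks into your own code.
```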
Your reply may be read as a rejection of all sorts of standard libraries - and if that is your intention, I hope it is ironic. If you understand what a library function will do for you, accept it. If you know how it could be done, you can save yourself a lot of coding work. But if you pick up some library or open source code because you don't understand how to solve your own problem, and will leave it to someone else to handle, then you are on the wrong track.
Unfortunately, you too often see people picking up free solutions because they don't understand their own problem.
I'm not sure I understand the point you are making. But my general reply is that it has nothing to do with understanding or saving work. My system is the way it is because it's about tight integration. Unless you've really worked in a system like that, and almost no one has in the C++ world, you can't really appreciate what it does for you.
If you want a truly integrated system, you can't just use random libraries. I can't make such a library use my logging system, my exceptions, my statistics system, my threading system, etc... Those are things that just wrapping some black box of code won't deal with.
And of course if something goes wrong in the field, these things are NOT black boxes to me. So I just don't have the sorts of issues that are so common when you use a bunch of black boxes. Where you have to upgrade one to fix some problem and realize that has created five more. Or something goes wrong in the field and it's very difficult to figure out why. If something goes wrong in my version, it can log something to my logging system, which can also log to a centralized log server among other things.
If all you want to do is get something working, then of course you can do it with a bunch of pieces and parts. But that's not what I'm trying to do.
I'm not sure I understand the point you are making.
My point was that in some cases you have to trust the code of others - be that trig functions, drivers or whatever. A few years ago, our main products were 8051 based; we wrote the monitor ("OS") ourselves, but even then we had to trust the drivers supplied by the developers of the peripherals. Today we are on more modern hardware, but the increased complexity means we become more and more dependent on software developed by others.
We are not using "just random libraries". We are using libraries, drivers etc. from subcontractors whose QA procedures we are familiar with. We know exactly what the functions are supposed to do. We do have access to the source code for inspection (partially under NDA contracts). When done that way, I can defend using code developed and maintained by others.
That was the point I was making. For anything but trivial systems, you will have to trust code obtained from others, to some degree. We are in the process of introducing an alternative to our proprietary "OS", based on an open-source embedded OS - but that is one where we actively take part in the further development of it, in close contact with other stakeholders.
"Tight integration", in a technical sense, I see as a quite trivial matter. In the embedded world, you always deliver to the customer a complete, self-contained code image. In the IoT world, there is very little of dynamic linking and over-the-network retrieval of missing modules.
My IT childhood is so long ago that, e.g., Python's quiet downloading of dependencies you are not aware of - downloading some arbitrary version that could have been a different one yesterday, and may be a different one tomorrow - gives me the shivers. In our company, this is relevant for test tools only, not for delivered software, and we do have tools to handle it reasonably well. It does cause problems every now and then, when developers go behind the tools to retrieve "the latest and greatest" version, but for the main production line, we control it.
Again, my point is know who you want to trust. If you know that you can trust them, fine. Use the code, even if it isn't written by you. If you pick up some code from wherever, just because you think it solves a problem that you don't know how to ... You asked for it, you got it. Or if you like: you may be in deep sh*t.
Even if you obtained the source code and "integrated" it into your system - that is not the point of it. Any code you integrate is your responsibility, whether you understand it or not.
Obviously I'm not going to write an operating system or anything. There's no practical recourse but to use the OS APIs. I'm not terribly concerned that they are going to be flakey, since we only use the lowest level OS APIs we can, and those are very widely banged on. And they mostly are just APIs, i.e. single calls to do one thing. That's something you can wrap cleanly.
What's important is that all of the code on my side of the line be very tightly integrated to the extent that's possible. That means it's all written in terms of my interfaces, participates in my standard system functionality as appropriate, etc... So there's never any 'impedance mismatch' between parts of the code, you never have to translate from this scheme to that scheme, you never have to use inconsistent styles or mechanisms to interact with any of the code, because it is all of a piece.
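As a rough illustration of what wrapping a single low-level OS call in your own interfaces can look like - the SysError and LogError names below are invented stand-ins for the kind of in-house exception and logging facilities being described, and a POSIX call is used only for concreteness:

```cpp
#include <fcntl.h>      // ::open
#include <cerrno>
#include <cstring>
#include <iostream>
#include <stdexcept>
#include <string>

// Invented stand-ins for the in-house error and logging facilities.
struct SysError : std::runtime_error
{
    explicit SysError(const std::string& msg) : std::runtime_error(msg) {}
};

void LogError(const std::string& msg)
{
    std::cerr << "[error] " << msg << '\n';   // a real system might also ship this to a central log server
}

// Thin wrapper around one OS call: the rest of the code only ever sees the
// system's own error scheme, never raw errno values or OS-specific details.
int OpenReadOnly(const std::string& path)
{
    int fd = ::open(path.c_str(), O_RDONLY);
    if (fd < 0)
    {
        const std::string msg = "open('" + path + "') failed: " + std::strerror(errno);
        LogError(msg);
        throw SysError(msg);
    }
    return fd;
}
```

The wrapper is trivial precisely because the OS API is a single call doing one thing; the value is that every failure surfaces through the same logging and exception machinery as the rest of the code.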
And if, by some decision, Google closes Google Drive and Google Photos and makes our files inaccessible, we can do nothing but suck our thumbs. It's a free service; we have no grounds to sue them. So, cry in the corner instead.
"I infected you with my private malware, (RAT) / (Remote Administration Tool), a few months back when you visited some website where my iframe was placed and since then, I have been observing your actions.
The malware gave me full access and control over your system, meaning, I can see everything on your screen, turn on your camera or microphone and you won't even notice about it.
I have also access to all your contacts, private pictures, videos, everything!
I MADE A VIDEO showing you (through your webcam) STATISFYING YOURSELF!
You got a very good taste! Hahaha...
I can send this video to all your contacts (email, social network) and publish all your private data everywhere!
Only you can prevent me from doing this!
To stop me, transfer exactly 1600$ with the current bitcoin (BTC) price to my bitcoin address."
Interestingly, that was a password from a very long time ago, like 10+ years, that I have used on fairly junk sites, like LinkedIn, etc.
So it seems as if the rumours of the hack are right.
But anyway, I can't wait to see your reaction to seeing my vid! (Be kind, I am not as young as I once was.)