Enterprises are the worst offenders when it comes to keeping up to date. We've all heard the horror stories about enterprises being unable to move away from XP for one reason or another. When MS kept pushing back the XP support cutoff date, it wasn't because of home users.
So I'd have to think that testing against multiple OSes is NOT an edge case, but should be done by any developer or tester who wants to sell/use software in any business that's in that boat.
(disclaimer: I work for a tiny company, but we sell primarily to larger enterprises with tens of thousands of servers)
Testing against multiple OS types is not how enterprises commonly use VMs; in the enterprise it's very much an edge case, regardless of your own personal usage of VMs in your work environment - which, no argument, is perfectly valid.
Enterprises typically use VMs to lighten their physical server requirements, which is good, but in doing so have embraced standing up a new VM for whatever whim the management teams happen to have (like a unique web server per department, for instance), which is bad. My complaint here is the time and resources wasted managing (and securing) that bloat of extraneous VMs - workloads that would be BETTER served by containers.
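To make the contrast concrete: where a per-department web server currently means a whole VM to patch and monitor, the same workload can often be a single container sharing the host kernel. A minimal sketch (names and ports here are made up):

    # One lightweight container per department instead of one VM each.
    docker run -d --name sales-web -p 8081:80 nginx
    docker run -d --name hr-web    -p 8082:80 nginx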
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
I can upvote you only once, but that's the exact point of this...
Since Docker (and other containers) came into focus, it became a matter of 'fashion' to hammer VMs and glorify containers...
It's like we would make cakes with fresh vegetables instead of sugar from now on...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
I never understood why a large fraction of Docker buffs shows panic reactions every time someone suggests that Docker is a kind of virtualization. Of course there are things that, say, VMware will do that Docker won't do - and the other way around. So Docker isn't identical to VMware.
Yet, the concept of virtualization has been applied in lots of other ways. VMware is not The Only Definition of virtualization. When I, on my Windows machine, run an Ubuntu application in a Docker container that operates in its own network world and sees a Unix-style file system rather than the physical NTFS file system underneath - of course those are examples of virtualization!
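That point is trivial to demonstrate, assuming a Windows host with Docker installed (the image choice is arbitrary):

    # The container sees a Unix-style root file system,
    # not the NTFS volume that actually backs it.
    docker run --rm ubuntu ls /
    # bin  boot  dev  etc  home  lib  ...  usr  var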
It seems to me like Docker buffs really are trying to say: Forget about competing alternatives - this is something completely different. You shouldn't even consider making any feature-by-feature comparison, because they are so different. Virtualization is out. Containers provide an operating environment which is independent of the underlying hardware, but that isn't virtualization. You can run different base layers (e.g. different OS kernels) in simultaneously running containers on a single host, but that isn't virtualization. You can create multiple fully independent networks for groups of containers to communicate among themselves; these networks have separate, independent network address spaces so they don't interfere with each other even if they use identical addresses, but that isn't virtualization. Multiple containers running from the same image have identical local file systems at startup, but if they make modifications, one container's changes are invisible to the other containers, even with identical file names - but that isn't virtualization.
We have decided to use a different terminology - we refer to the local file system as a "union FS" to mark a distance from a virtual file system. We call it a "named" network to distinguish it from a virtual network. Hey, they have different names, how could they then represent similar concepts?
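Both behaviors take about four commands to demonstrate on a stock Docker install (all names here are illustrative):

    # "Named" networks: web-a and web-b each get their own address
    # space and DNS, and cannot reach each other.
    docker network create netA
    docker network create netB
    docker run -d --network netA --name web-a nginx
    docker run -d --network netB --name web-b nginx

    # Union FS: a write in one container is invisible to a sibling
    # started from the same image.
    docker run --rm ubuntu sh -c 'touch /i-was-here && ls /i-was-here'
    docker run --rm ubuntu ls /i-was-here   # fails: no such file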
Docker containers realize one set of virtualization concepts, VMware another one. It sure seems to me that Docker has made a great selection for a fairly lightweight kind of virtualization. Nevertheless it is creating virtual environments, virtual resources and mapping these onto more or less arbitrary physical hardware. Just like all virtualization does.
Because Docker essentially ignores all I/O facilities other than IP and Unix-style file systems, it has a somewhat easier job than those VMs that take full responsibility for I/O (and other hardware access). So Docker can say "Why do /xxx/ to provide a virtual resource - Docker doesn't need it?" Sure, when you do not provide e.g. general I/O, then you don't need it. Still, you are virtualizing those resources that you do provide!
I think Docker is great for a large subset of tasks. But why should it displace other virtualization methods for other tasks? Docker is not universal: It cannot handle arbitrary I/O. It cannot handle arbitrary OSes - the base layer (usually an OS kernel) has a rather limited set of APIs to the host for realizing its own provisions, in particular with respect to I/O and device access. Say, if you need to run one container providing a Windows GUI, one running a macOS application and a few running Linux applications, the Linux Docker implementation cannot handle the first two. VMware can. So for that use, why shouldn't I "be allowed to" run VMware?
This seems to me very much like a turf war, where terms and definitions are used as mechanisms to push competitors away. If Docker could take over all tasks, it would make more sense, but since there are lots of issues Docker cannot handle, it will never fully replace VMs, only a certain fraction of them. So why not make clear where Docker is suitable, and leave it at that?
I'm supposed to be looking into this as an option for migrating a legacy LOB (line-of-business) desktop application. As soon as I get my Server 2016 box ready for production, I'll be able to check it out. I'm not about to pay for a hosted environment unless I absolutely have to.
It is a technology we are looking at where I am employed.
Really haven't touched on it too much, I have real (not virtual) things to do; so to borrow from Rune Haako I will "send a droid", or in my case an intern.
We had one of them there walking discussions yesterday, and I was kinda like "so Docker is like the new Java, we just have containers instead of jars". But it was pointed out to me today that you can run Java inside of Docker, but not vice versa.
I worked with it a fair bit in the past. Docker isn't the only way to achieve any of the things it offers, and in my experience it's downright counterproductive if you end up in an organisation that tries to fit everything into Docker containers.
I'd recommend reading about The Twelve-Factor App if you're completely unfamiliar with Docker or containers - it's basically a set of 'best practices' for using Docker. They're ideas that actually make Docker a nice enough development experience, but downright stupid if you were to try to use them all outside of Docker (e.g. all config should be environment variables), although a few of them are just good common-sense rules.
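As an example, the 'config in environment variables' factor looks like this in practice - a minimal sketch, with the image and variable names invented:

    # Settings are injected at run time rather than baked into the
    # image, so one image can serve dev, test and production.
    docker run -d \
      -e DATABASE_URL=postgres://db.internal/app \
      -e LOG_LEVEL=info \
      myorg/myapp:1.4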
All in all, my conclusion was that well-architected software, along with decent automation scripts for managing infrastructure and dependencies, can be easier than trying to maintain dozens of different container images. And hardly anyone I know who advocates using Docker is actually conscious of the fact that security updates still need to be installed into their images - along with all the testing that comes with that.
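To spell that out: a running container never patches itself; the image has to be rebuilt and redeployed. A hedged sketch of a periodic rebuild (the tag scheme is invented):

    # --pull refreshes the base image; --no-cache forces the package
    # installation steps to re-run and pick up security updates.
    docker build --pull --no-cache -t myorg/myapp:$(date +%Y%m%d) .
    # ...then re-test and redeploy the new tag.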
We are in the process of introducing Docker, but for somewhat restricted use, and we will not use all facilities available.
The primary use is for compile/build jobs: Our various teams each have their own requirements for compiler versions, library versions etc. For about five years, we have been running a system where each job calls a toolbox utility which switches symbolic links around, in some cases uninstalls the current tool version and installs the one required by the job, redefines environment symbols and PATH variables etc. The utility itself has gradually become quite stable, but we still have difficulties making people use it properly (using Bamboo, you have to repeat the environment changes for every single job step!)
So now we will set up a new set of build agents with no build tools installed at the OS level. All tool suites shall be Dockerized. We had a tug-of-war between those who wanted each tool to be a separate Docker image, to be used like old-style individual executables activated one by one from a build script outside Docker, and those who wanted to put a complete set of build tools into one quite large image, activating the various tools from a script interpreted by the shell within the running container.
The second group won: We will make complete toolboxes with a complete, coherent set of tools as one image. If any single tool comes in a new version, a new, complete image must be built - you cannot just update the Docker image of that tool and leave the rest. This will hopefully have a moderating, stabilizing effect on the uncontrolled proliferation of tool versions that has ridden us for the last year or two.
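As a rough sketch of what such a toolbox image could look like (the base image and tool versions below are invented for illustration, not our actual set):

    # One coherent, versioned tool set per image; upgrading any single
    # tool means building and tagging a complete new image.
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            gcc-12 g++-12 cmake ninja-build git \
        && rm -rf /var/lib/apt/lists/*
    # Build steps run as a shell script inside the container.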
Some projects demanded their own build machines because they were using tools that could not be version-switched by the toolbox utility. We had a lot of unused processing power on those machines, while the general pool was overloaded. Now, with everything in Docker images, we will have one common pool of build machines, all of them capable of running any Docker image, and everything specific will be hidden within that image: When the job step has completed, the host is perfectly "clean", and the next invocation of the same image is virginal: Nothing from the previous run affects it. (We will not be using local volumes; all permanent storage is mounted as host volumes at container startup.)
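A job step then reduces to something like the line below (image tag and paths invented): --rm throws the container away afterwards, and the only thing that survives is whatever was written to the mounted host volume.

    # Source and output live on the host; the container leaves no
    # trace on the build machine once the step completes.
    docker run --rm -v "$PWD:/src" -w /src toolbox:2024.1 make all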
Our second use is similar, but different: Our target testing, running on physical target hardware, will need several MacGyver-style solutions for the interfacing. The problem with the (pre-Docker) toolbox utility is that some of the device drivers refuse downgrading, and/or installation/activation of another version requires manual intervention. So every time we replace the target hardware with another setup - very often an older setup, to verify that our new software works on the old hardware, or that the old software works on new hardware - reconfiguring the test environment may take hours of effort. We will now strive to put as much of the configuration as possible into Docker images that are self-contained and isolated and can be loaded in a few seconds.
We do not expect that it will be possible to Dockerize all the testing software, so the host will not be completely "clean" (like the compile/build machines): A test run may have some steps running in Docker, other steps running non-Dockerized software. The setups will be so tightly bound to the surrounding physical environment that the Docker images will certainly not be "portable" (not even among our various test setups). We use Docker simply as a tool for easy reconfiguration of a single machine.
We are in the final planning stages of this change, so I cannot yet tell how well it will work. Most likely we will have surprises.
I've no need for it (and can't see where it would have been helpful on any of the past projects I've worked on). Started downloading it as per the Docker Challenge here on CP to have a preliminary play, but fortunately read the "warning" comment about it only running on Win10, so cancelled out of that...
Is there anybody out there who still doesn't have a need to look into this?
I have had no need as of yet. Thus far, I have treated it as the "religion du jour": the newest concept that will solve ALL of the world's problems. Introduced by someone, evangelized by some, trade media goes crazy, management gets excited, eventually decreeing that we must go all in - at just about the time the press starts covering the various problems with implementing the concept.