Am I the only one who can't get enthusiastic about Docker, or is it just me?
I haven't seen any advantage, though we (at the company) might be looking at Docker with AWS to spool up additional servers as load increases. The problem with that is we're heavily embedded in ASP.NET and given the above, running Windows in Docker is probably a horrid idea.
Back in the day, we could write an executable that consumed less than 500 bytes that could take down a major city's power grid.
Ahh, those were good times indeed.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
In the old days of the Univac 1108 you could take down the entire mainframe with a single instruction...
In the indirect addressing format, the sign bit would indicate: this is not the address, but a pointer to the address. That address might itself have the indirect bit set, pointing to yet another address - or to itself. An infinite addressing loop. Using indirect addressing required no privilege; any arbitrary programmer/user could do it.
The bad thing was that the hardware design of the addressing logic required interrupts to be disabled (the 1108 had no paging, no page faults). There was no way to bring the machine out of the infinite loop, short of a full reboot.
It's neat for running servers, as you can nuke/rebuild your server in no time if anything goes wrong. But it's rather pointless for anything else.
Docker, just like any neat technology, has one or a couple of sensible use cases and a whole army of nutjobs following the cult because it's cool. Heck, not so long ago, some coffee company included the term "Blockchain" in its name and the share value soared. Not kidding - merely mentioning the current hot fad was enough to draw attention from said nutcases.
Once they sort out the "how do we make sure our 100000 docker images are all patched to the latest security level" issue, it is going to be quite useful, but probably not for running applications on desktops. That was never the goal (though you never know what people will hack in - I guess with an X11 server it can already run Linux client software with the right settings).
Now think of a server farm that needs to run 1000 services based on the Windows Nano framework. Sure, you can install whatever is needed by the 1000 services on all your machines. Good luck keeping that running on 20+ machines. Oh, you need to add more machines to scale out? Sure, that will be ready in 2 days once it is all installed... what do you mean, it is too late?
You can run the 1000 services in virtual machines. Just think about the size and memory overhead of that.
With Docker, every service will share the same (relatively limited number of) base images. They will run in the same kernel - consuming less overhead memory as well. Need to scale out? Bring a docker host online, add it to your container management software... and you are done.
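To make the sharing concrete, here is a rough sketch (the image tag and the service binaries are purely illustrative): two of those services built on the same Nano Server base image. The FROM layer is stored and loaded only once per host; each service only adds its own thin layer on top.

    # Dockerfile for service A (illustrative names and tag)
    FROM mcr.microsoft.com/windows/nanoserver:1809
    COPY ServiceA.exe /app/
    CMD ["C:\\app\\ServiceA.exe"]

    # Dockerfile for service B - only the top layer differs; the base layer is shared
    FROM mcr.microsoft.com/windows/nanoserver:1809
    COPY ServiceB.exe /app/
    CMD ["C:\\app\\ServiceB.exe"]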
There are a few other useful things. For example, your software can contain the docker definition for building your product. Need to build a hotfix for a version released 3 years ago? It will take a bit longer as the build agent will have to restore some old docker images - but then it will just run with the build agent looking like it did 3 years ago - and not as the current one with the incompatible versions of the build tools installed.
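As a hedged sketch of that idea (the tag and paths are made up, not anyone's actual setup), the build environment lives in a Dockerfile that is versioned together with the code, with every base image tag pinned rather than "latest":

    # Dockerfile kept in the product repository, next to the sources it builds
    FROM mcr.microsoft.com/dotnet/core/sdk:2.2.104
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /out

Checking out the 3-year-old branch then also checks out the 3-year-old build agent definition; rebuilding the hotfix is just a docker build on that revision, and Docker pulls the matching old base image if it is no longer cached.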
If your usage does not match this, then Docker simply isn't built to address any of the issues you have. Introducing it in that case will just add problems - not solve them.
I haven't used Docker much, but to me the idea of Docker is that everyone running your container has the same environment. It's like running a virtual machine, but with less overhead, as it shares the currently installed Linux kernel.
Here you're on Windows, and you use a Windows Nano Server image, meaning that your container will be at least that size, I guess. So to me it sounds more as if you were just using a virtual machine.
Docker is working great for me.
I have a .NET Core application that needs to be tested on different .NET Core runtimes (2.0, 2.1, 2.2). So I created 3 Docker containers with the different .NET Core runtimes running alongside my application. I just run those Docker images and see if I broke something.
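Something along these lines (repository, tag and file names are assumptions, not my exact setup, and the app is assumed to be published to a publish/ folder first): one Dockerfile with the runtime tag as a build argument, built once per runtime you want to test against.

    # build three variants of the same app image:
    #   docker build --build-arg RUNTIME=2.0 -t myapp:2.0 .
    #   docker build --build-arg RUNTIME=2.1 -t myapp:2.1 .
    #   docker build --build-arg RUNTIME=2.2 -t myapp:2.2 .
    ARG RUNTIME=2.2
    FROM mcr.microsoft.com/dotnet/core/runtime:${RUNTIME}
    WORKDIR /app
    COPY publish/ .
    ENTRYPOINT ["dotnet", "MyApp.dll"]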
Another case: I have a local .NET Core application that uses PostgreSQL or MySQL. Now, instead of running a shared database for testing, I can just send my Docker images with PostgreSQL or MySQL to our QA/testers to run on their local machines, alongside my .NET Core application in a Docker container.
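A minimal sketch of the database half of that (database name, user and file names are invented): the official postgres image runs any *.sql files it finds in /docker-entrypoint-initdb.d/ the first time the container starts, so the test schema and seed data can travel inside the image you hand to QA.

    FROM postgres:11-alpine
    ENV POSTGRES_DB=appdb POSTGRES_USER=app POSTGRES_PASSWORD=app
    # executed automatically on first start of the container
    COPY schema.sql seed-data.sql /docker-entrypoint-initdb.d/

(The official mysql image supports the same /docker-entrypoint-initdb.d mechanism.)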
Docker makes more sense with Linux as a target, as you have distros like Alpine that have been designed to be really minimal (a base Alpine image is 5MB). I reworked an old Windows server of ours (very old - it was running Server 2003 R2!!), which hosted a web server, Git & Mercurial repo access, Redmine and MediaWiki, into a set of 5 Docker images totalling somewhere around 100MB. By separating each application into its own container, I can update each of them without worrying about breaking the others through some common dependency. docker-compose makes it pretty easy to build, connect and run a set of containers that are meant to run in unison.
But for distributing desktop apps? Doesn't make too much sense just yet, especially with Windows, unless you have a large, involved environment you want to make available... However, something like Windows Sandbox is a stepping stone towards using containers or containerisation technology for running desktop applications.
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
We are in the process of introducing Docker, now that we are moving a lot of development activity over to Linux. Our Linux build nodes will essentially have no utilities or tools at the OS level; everything will be put into Docker images. Since Docker on Windows can run Linux containers, and everything is in the container, you can run it on your Windows desktop (assuming that you have 64-bit Windows 10 Pro or another version that can run Hyper-V).
We are going to run our build nodes under Linux, but the developers may run the same Docker images on their desktops. In the Docker world, there are two schools: One school is to wrap a single tool into a container, so a build script using five different tools would run in the OS, and activate the five images one by one. The other school says to put the complete build environment, all required tools including a command shell, into one huge container, and run the script in (or give commands interactively to) the shell inside the container. We have selected the second approach. So nothing depends on the host OS; the scripts (or interactive commands) are identical on the Linux build nodes and on the Windows desktop.
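A rough sketch of that second approach (the package list and names are illustrative, not our actual image): one image that contains the shell plus the whole toolchain, so the same commands run on a Linux build node and on a Windows desktop running Linux containers.

    # "everything in one container" build environment
    FROM ubuntu:18.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            build-essential cmake git python3 \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /work
    # run the build the same way on either host, e.g.:
    #   docker run --rm -it -v <source dir>:/work build-env:2019.1 bash build.sh
    CMD ["bash"]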
Docker on Windows has a significant startup time, regardless of whether we are in Linux or Windows mode. Once the container is running, there are no significant delays. The system is stable; we haven't had many problems (none that I can remember that weren't caused by our own inexperience!). Installation went without problems. Our experience is with Windows Pro; the implementation is somewhat different on Windows Server.
In our first step, the images are of the Linux flavor, running Linux tools. We may go on to the next step: making Windows-flavor containers, running Windows tools. But since Docker is suitable only for command-line tools (and optionally X11, but X11 is virtually unknown in Windows), you are limited to tools that have a decent CLI. That is a rather strict limitation in a Windows environment; lots of good tools require a GUI. For any given non-trivial development task, you are likely to want at least one GUI tool, so you can't put the entire tool set into a container but must do part of the job in containers, part of it outside.
What are our reasons for going into Docker?
Our build nodes run jobs for a multitude of projects, requiring different tool versions, library versions and what have you. You never know what state the previous job on the same node left behind, or which tool versions it used.
So for a number of years we have (in Windows) had this utility that switches between software configurations: Every job presents a list of the software versions it requires, and if not in place, the utility takes care of switching to the requested version(s). Depending on the tool, that could be to move a symbolic link, change the order in $PATH, ... and in the worst case: Uninstall the current version and install the requested one, before the job continues.
This reconfiguration can take significant time. Also, some tools cause lots of trouble, e.g. they refuse to run via a symbolic link. Or installers for plugins or libraries insist on installing into the newest version of the main tool, no matter what the links and paths and whatever say is the currently active one. When we resort to uninstall/install, the installer must be able to run silently, and not all of them can. And so on and so on... To be frank: I am sick of it!
With a (partial) move to Linux, we start with blank sheets and can rethink it all. We will put everything in containers, so that the previous run on the same node leaves a "blank" machine, with no traces of any software version. The next job picks up its Docker image, which is ready built, ready to run.
Also, we have had some "disciplinary" problems under Windows: Projects have "secretly" (from the build scripts) installed software not declared in their toolbox, so the build configuration is not controlled; builds cannot be perfectly reproduced. This can also cause problems for the next job: Uninstall is sometimes very version dependent, and the toolbox utility doesn't know how to uninstall an unknown version. Or, it may try to move a symbolic link to another version, but the "secret" version wasn't installed to a symbolic link ... and so on.
In principle, a build running a script in a Docker container may install any software available on the net. But once the container terminates, the installation is gone; it must be repeated every time the image is run. You cannot preserve the current state of the container for the next run (you could, in principle, dump it to a file outside the container, but that is not very realistic).
So we expect to be able to maintain a far better discipline by use of Docker. This comes in from another angle as well:
Small, single-tool Docker images usually start out from an OS base image, and all the rest is specified in that one Dockerfile. Many complex images do it the same way, going all the way from the OS up to the complete tool set in a single Dockerfile. Some of our tools are updated every two weeks or so (those are our own internal ones), so every two weeks you build yet another version of a huge image, for every tool set that includes the updated tool. The building takes quite some time, the registry fills up with lots of huge images, and the benefits of a shared image cache, shared layer storage on disk, and shared memory for common layers are lost, because the layers are not common any more.
We will make a quite strictly managed tree structure of images: all the (comparatively) stable tools go into the lower layers, creating another base image. On this base we put layers of semi-stable tools, making a larger base image. Here we do some branching: we make one second-level image for C/C++ development, another one for Python tasks, a third one for documentation, ... On top of each of these we put further layers, with less stable tool versions. These can still be used as base images for an image that adds the updated-biweekly tools. When those tools are updated, we select the appropriate third-level base image; the Dockerfile only includes the updated utility, and builds in a few seconds. It requires very little extra disk space, and all the layers below are shared with a great number of other users.
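To illustrate the idea (all registry, image and tool names here are invented): only the topmost Dockerfile is rebuilt when the biweekly tool changes; the lower levels are referenced as pinned base images and stay shared.

    # level 1 (stable tools)   -> registry.example.com/dev-base:1.0
    #   FROM ubuntu:18.04
    #   RUN apt-get update && apt-get install -y git make curl
    #
    # level 2 (C/C++ branch)   -> registry.example.com/dev-cpp:1.0
    #   FROM registry.example.com/dev-base:1.0
    #   RUN apt-get update && apt-get install -y gcc g++ gdb cmake
    #
    # level 3: the biweekly-updated internal tool; only this image is rebuilt
    FROM registry.example.com/dev-cpp:1.0
    COPY internal-tool-2.14.5/ /opt/internal-tool/
    ENV PATH=/opt/internal-tool/bin:$PATH

Only the level-3 Dockerfile is rebuilt every two weeks; the two levels below stay untouched and remain shared by every image that builds on them.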
When someone asks for an update to a lower layer, that could affect all other users of it as a base image - the entire subtree of higher images - so we will make sure that the update is in accordance with the future plans of all those projects. Maybe they will suggest: "Why don't you update xxx as well, so we combine that into a single update?" So before a project requests a tool update, they know that it is a bigger undertaking, and will request it only if they have a real need for it. (In the Windows environment we haven't been strict on this; we have been too willing to accept yet another tool version just because someone argued that 'the new version is better', without giving details.) We expect to have far less version proliferation with our tree of (base) images.
IF we introduce the Windows flavor of Docker images, we will follow the same principles, for the same reasons.
Wow, that's more of an article than a reply.
One of my colleagues is already building Docker Linux containers for a small board controller; these are tiny, only about 3 MB. Hence my boss's enthusiasm, I think - he probably thinks Windows containers will be that size too, but that will obviously be a disappointment to him...
Thanks for taking the time to reply!
One thing to keep in mind is that the base image is immutable and is shared among all Docker services on the machine that use the same base image.
So if you had 5 services that consisted solely of 1mb executables, and they all used the same Windows Nano base image, you'd only use up about 405MB of disk space. The images built on top of the base image are just stored as a set of diffs from the base image. Though I think if you ask Docker how big each image is, it'll report 401MB - the size of the base image plus the diff - so unless you know about the immutable bit, it'll look like you're using up more space than you actually are.
This can make Hello World apps look huge, but if you keep in mind that you're only going to have one copy of that base image shared across all apps that use it, it's not so bad. If you take care to ensure that all of your apps and services use the same base image, it can be a pretty sane way to deploy your apps to servers, because you get the benefit of having your apps completely self-contained without needing to install each one in its own VM.
The massive Hello World isn't as much of an issue on the Linux side of things if you build on top of an Alpine Linux image. I've packaged up a few server apps written in Go, which statically links everything into a single executable, and my whole image (base + diff) was under 10 megabytes.
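For what it's worth, a minimal sketch of one way to get an image like that (versions are assumptions, and the source is assumed to need only the standard library or vendored dependencies): a multi-stage build, where the Go toolchain stays in the build stage and only the static binary lands on the Alpine base.

    # build stage: compile a statically linked binary
    FROM golang:1.12-alpine AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /server .

    # final image: Alpine base (~5 MB) plus the binary, well under 10 MB total
    FROM alpine:3.9
    COPY --from=build /server /usr/local/bin/server
    ENTRYPOINT ["/usr/local/bin/server"]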