If by "Windows application" you mean one with a native Windows GUI: no, that is not possible.
Windows applications without any GUI, communicating through either a web or SSH interface, are certainly possible - the typical backend / web services.
The bottom layer of a Windows Docker image - the one containing all the OS functionality that the upper layers have access to - is a "Windows nanokernel" of, believe it or not, almost a gigabyte. (One might wonder what size a megakernel would be!) My guess is that the services offered by the nanokernel (which does not include any GUI functions!) really could be provided in a fraction of the size, but the various modules are so deeply intertwingled that shaving off all the stuff that contributes nothing to the API would require man-years of effort. Since this layer is shared between all running containers, and code that is never used is never paged in from disk, they probably figure "a GB of disk space is nothing, so shaving it down further isn't worth the cost". Sure, I am just guessing, but to me that looks like a reasonable explanation for that GB-sized bottom layer.
You know, I just spent 10 minutes looking at the Docker site and I have no clue at all what it is or what it is supposed to do.
"Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud. Today’s businesses are under pressure to digitally transform but are constrained by existing applications and infrastructure while rationalizing an increasingly diverse portfolio of clouds, datacenters and application architectures. Docker enables true independence between applications and infrastructure and developers and IT ops to unlock their potential and creates a model for better collaboration and innovation."
There is so much technobabble in there that I'm not sure who the target audience is.
You have to get past that nonsense - it put me off too.
Best way to explain it is as being like a lightweight VM, but instead of storing an image you store how to create the image. The image definition is like an onion - you build it up layer by layer. E.g. we start with a plain Linux box, install say node (one layer), global npm packages (second layer), webpack (another layer), until the box can be used for what we want. The layers are all cached when built and stored so you can pull them later - really, the "lengthy" build process only needs to happen once. Then you can run the image, mounting parts of the host file system (in our case the source to build) and running scripts inside it, say a webpack build. The images are "universal", so code running inside the container gets consistent behaviour whether the host is Windows or Linux.
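The layer-by-layer build described above might look something like this as a Dockerfile (the base image tag and package choices are just illustrative):

```dockerfile
# Base layer: a plain Linux box with node preinstalled
FROM node:18-slim

# Second layer: global npm packages we want everywhere
RUN npm install -g npm@latest

# Another layer: webpack and its CLI
RUN npm install -g webpack webpack-cli
```

Each instruction becomes a cached layer; change one line and only the layers from that point on are rebuilt, everything above it comes from the cache. You would then run it with something like `docker run --rm -v "$PWD":/src -w /src myimage webpack` to mount the host source into the container.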
We're using it to migrate our build process; we've managed to get it so our build-agent infrastructure is just a Linux box with Docker installed, and we bypass the corporate stuff about getting our infra bods to install/upgrade dependencies. Also, the Dockerfiles (image definitions) are all just text scripts, so you can even put them under source control.
That reads exactly like Microsoft's technobabble: I've given up even trying to find out which version of whatever Windows Server is called this minute is appropriate for any given type of business, as even the technical pre-sales guff doesn't ever actually tell you why anyone might want to use the system or what for. As a MS partner, I have campaigned quite strongly against this kind of useless guff, but to no avail.
In the end, I just go with whatever the 'standard' version is called and add on any bits I might need (e.g. Exchange, SQL Server, etc.).
I look after a charity that is currently using the very short lived 2011 version of Small Business Server (which is not supported by VMWare BTW!). Anyone care to guess what the off-the-shelf replacement for that is? The charity is contemplating moving everything to Google instead, because there isn't a direct equivalent it seems, and the licensing makes something that matches their requirements horrendously expensive...
Maybe I'm just a grumpy old git, but Docker et al. appear to me to implement a lightweight virtualisation platform - so far so good, but I have yet to find any business case for using it rather than a solid hypervisor like vSphere etc. Everywhere I've deployed virtualisation so far, the key thing has been to reduce hardware dependence and maintenance whilst allowing easier backup and fail-over in the event of a hardware issue. I cannot see that Docker offers any advantages over the other well-established hypervisors in such cases.
And don't start quoting performance issues at me - it's been a long time since that was a serious consideration for smaller deployments! 8)
Many Docker guys try to sell Docker as just a way to pack things up, sort of like a fancy MSI or CAB file but more self-contained. Making it appear very simple and lightweight is essential for marketing.
Really, it is a complete, more or less closed, execution environment. A huge black box. You stuff your code into it through a Docker image builder; that makes it enter the black box, and the only thing you can do with it is act as a user, either through a web or SSH interface. You don't see its file system. You don't see its internal network, its processes or threads. You can request some information about what's going on through an SSH interface to the daemon controlling the whole thing, but that is very indirect, with far less control and available information than you are used to.
After you have learned the basics, you go into "orchestration": having a dynamic set of maybe a thousand running containers serving your web users. So you add other black boxes controlling all those black boxes - Swarm and Kubernetes are the most common orchestration tools. They create their own closed worlds, too, and the infrastructure has become so complex that you can forget everything about "lightweight"...
Don't expect to understand any of Docker in 10 minutes. Not even 10 hours. If you spend 10 days intensely studying it, with the aid of some good instructors, you will begin to understand what it is. After 10 months of practical use, you may have a feeling of beginning to master it.
You put everything that is to be included in the new image (except the base image) into a subdirectory on the host. (Keep everything else out of that subdirectory!) In the root of that subdirectory you save the build script (the "Dockerfile"), written in the script language described at Docker Build Documentation[^].
Using the CLI interface to the daemon, you give a build command, naming your build script (note that other Docker users will frown if it is named anything other than "Dockerfile", with no extension). This will not do the build on the host; it will copy the entire subdirectory to the Docker daemon, and the daemon will do the build.
The Dockerfile language is really primitive. Conceptually, the script loads the specified base image (e.g. a Linux base), RUNs one or more executables (typically some installer), and COPYs files from the directory tree you specified into the file system of the new image, one command line at a time. When all the RUN / COPY commands have been performed, the current state, with the newly installed software, is saved as a new image. There is not much more to it - just minor details such as the command to run when the container is started, naming, and other optional things.
Your first Dockerfile could consist of three lines
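Something along these lines (the base image tag and the installer's silent-install switch are assumptions - substitute whatever base and switches your installer actually expects):

```dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY myprogram-installer.exe .
RUN myprogram-installer.exe /quiet
```

Built with the CLI command from above, e.g. `docker build -t myimage .` from the root of the subdirectory.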
"myprogram-installer.exe" would be placed in the directory tree that is copied to the Docker daemon.
"myprogram.exe" lives in the file system of this image only, making up a new layer of your image. It exists inside the Docker daemon only, even there invisible to other images, unless they are built using your image as base).
That's it. There is not much to learn, as long as you know how to run installers and start the application...
Note that since the entire build is done in a black box outside your control, there is no way for you to supply any sort of parameters through a dialog. All choices must be specified as arguments at the RUN line (possibly by naming a parameter file that you have COPYed to the image earlier in the Dockerfile, if the installer can be parametrized that way).
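To illustrate the parameter-file approach (file names and installer switches here are hypothetical; your installer must support being driven this way):

```dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Ship an answer file alongside the installer
COPY install-params.ini .
COPY myprogram-installer.exe .
# All choices must be specified up front; no dialogs are possible inside the build
RUN myprogram-installer.exe /quiet /config:install-params.ini
```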
Are you using Docker or similar technologies today?
Yes, in fact it's currently my main task
Nish Nishant wrote:
What's been your experience like?
Nish Nishant wrote:
What stack do you use it on?
Linux - the host machine doesn't matter too much; the containers are either Alpine or Debian. From there we build the container to do what we want, such as "be a gulp environment" or "be a cordova build system". Not tried it on Windows; the other team seems to think it's fine. Doesn't work fully on Macs - it runs inside an actual VM, but can be used for testing.
Images were local at first; we're now using AWS ECR to store the images themselves. We don't have any need to persist instances at the moment.
We're also running the image pull and run in .sh scripts; if you have the choice, I'd suggest Python as an alternative if you just want something portable to run on a *NIX system. Developing has been fine; the biggest problem has been the stack we've adopted - Docker, AWS-ECR and AWS-SSM (which we're using as a secure param store) are all new to the team, and the kiddiewinks I work with have barely any BASH exposure, so there has been a learning curve.
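A sketch of that Python alternative to the .sh wrappers (image names and mounts are made up). Keeping the command construction in a pure function means you can unit-test the wrapper without Docker installed:

```python
import subprocess


def docker_run_cmd(image, tag="latest", mounts=None, args=None):
    """Build the argv for `docker run`; mounts maps host paths to container paths."""
    cmd = ["docker", "run", "--rm"]
    for host_path, container_path in (mounts or {}).items():
        cmd += ["-v", f"{host_path}:{container_path}"]
    cmd.append(f"{image}:{tag}")
    cmd += list(args or [])
    return cmd


def pull_and_run(image, tag="latest", **kwargs):
    # Pull first so a stale local tag doesn't mask a newer image in the registry
    subprocess.run(["docker", "pull", f"{image}:{tag}"], check=True)
    subprocess.run(docker_run_cmd(image, tag=tag, **kwargs), check=True)
```

For example, `pull_and_run("myimage", mounts={"/home/me/src": "/src"}, args=["webpack"])` would pull `myimage:latest` and run a webpack build with the source mounted, much like the .sh scripts do.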
Running locally has been fine from a debugging/dev perspective - the tooling is pretty much your favoured IDE around whatever you've decided to wrap this in (in our case BASH); the only stuff you won't be familiar with is the Dockerfiles (not difficult) and the Docker framework you'll need to spin the thing up (in our case the Docker CLI, but stuff is available for Python and .NET). We also don't really attach a debugger anywhere, as the code is all BASH, so YMMV if you go down a different route. TBH it's not much different from scripting on a Linux OS; our main problem has been testing in our build manager (TeamCity), where we've got the latency of build agents spinning up etc.