|
Good info, thank you.
|
|
|
|
|
Thanks, that clears things up a lot.
I knew Docker is not a virtual machine, but did not know what else to call it; maybe "containerization platform" would fit the bill?
|
|
|
|
|
AFAIK, Docker for WindoZe is mostly meant for development purposes and is not yet recommended for production(*) (at least, last time I checked).
(*) Just as much as WindoZe recommended for production isn't either ...
|
|
|
|
|
|
I like how docker is a 'demon' and not daemon
|
|
|
|
|
honest: that was a genuine typo, it was not intended
|
|
|
|
|
|
|
|
Don't they have Windows support now?
|
|
|
|
|
If anyone ever managed to run a WinForms application on Docker, I would like to know!
|
|
|
|
|
If by "Windows application" you mean one with a native Windows GUI: no, that is not possible.
Windows applications without any GUI, communicating through either a web or SSH interface, are certainly possible - the backend / web-services kind.
The bottom layer of a Windows Docker image - the one containing all the OS functionality that the upper layers have access to - is a "Windows nanokernel" of, believe it or not, almost a gigabyte. (One might wonder what size a megakernel would be!)
My guess is that the services offered by the nanokernel (which does not include any GUI functions!) really could be done in a fraction of the size, but the various modules are so deeply intertwingled that shaving off all the stuff that does nothing for the API would require man-years of effort. Since this layer is shared between all running containers, and code that is never used is never paged in from disk, they probably figure "a GB of disk space is nothing, so shaving it further down isn't worth the cost". Sure, I am just guessing, but to me that looks like a reasonable explanation for that GB-sized bottom layer.
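For anyone curious what that looks like in practice, here is a minimal sketch of a Windows-container Dockerfile. It assumes Docker is running in Windows-container mode; the base image is Microsoft's real nanoserver image, but the tag, the service name and the paths are illustrative:

```dockerfile
# The FROM line pulls that near-gigabyte bottom layer discussed above.
# nanoserver is the stripped-down base; servercore is larger but
# supports more of the Win32 API (still no GUI subsystem in either).
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022

# Copy a headless console application into the image and run it.
WORKDIR C:\\app
COPY MyService.exe .
ENTRYPOINT ["C:\\app\\MyService.exe"]
```

Since the base layer is shared, building ten different images on top of it costs the gigabyte only once on a given host.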
|
|
|
|
|
Why would you want to run a UI app in Docker?
|
|
|
|
|
Mainly for testing purposes, so that our tester has a ready-to-run Windows testing environment that can be produced by our Continuous Integration pipeline.
|
|
|
|
|
|
If you can do it as a web application, with an HTML-based GUI: yes.
If you want a native Windows GUI: No.
|
|
|
|
|
You know, I just spent 10 minutes looking at the Docker site and I have no clue at all what it is or what it is supposed to do.
"Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud. Today’s businesses are under pressure to digitally transform but are constrained by existing applications and infrastructure while rationalizing an increasingly diverse portfolio of clouds, datacenters and application architectures. Docker enables true independence between applications and infrastructure and developers and IT ops to unlock their potential and creates a model for better collaboration and innovation."
There is so much technobabble in there, I'm not certain who's the target audience.
I'd rather be phishing!
|
|
|
|
|
Maximilien wrote: There is so much technobabble in there, I'm not certain who's the target audience.
The marketing department. And your next deliverable will use it.
Signature ready for installation. Please Reboot now.
|
|
|
|
|
Maximilien wrote: There is so much technobabble in there
You have to get past that nonsense - it put me off too.
The best way to explain it is as being like a lightweight VM, but instead of storing an image, you store how to create the image. The image definition is like an onion - you build it up layer by layer: e.g. start with a plain Linux box, install say node (one layer), global npm packages (second layer), webpack (another layer), until the box can be used for what we want. The layers are all cached when built, and stored so you can pull them later - the "lengthy" build process really only needs to happen once. Then you can run the image, mounting parts of the host file system (in our case the source to build), and run scripts inside it, say a webpack build. The images are "universal", so running inside the container gives consistent behaviour whether the host is Windows or Linux.
We're using it to migrate our build process: we've managed to get it so that our build-agent infrastructure is just a Linux box with Docker installed, and we bypass the corporate hoops of getting our infra bods to install/upgrade dependencies. Also, the dockerfiles (image definitions) are all just text scripts, so you can even source-control them.
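The onion analogy maps directly onto a Dockerfile, where each instruction produces one cached layer. A sketch of the build box described above (the node tag and package versions are illustrative, not our actual setup):

```dockerfile
# Layer 0: a plain Linux box with node preinstalled.
FROM node:18

# Layer 1: global npm packages (cached until this line changes).
RUN npm install -g npm

# Layer 2: webpack, installed globally.
RUN npm install -g webpack webpack-cli

# At run time, mount the host's source tree and run the build, e.g.:
#   docker build -t build-env .
#   docker run --rm -v "$PWD":/src -w /src build-env webpack --mode production
```

Editing an early line invalidates the cache for everything below it, which is why the slow-changing layers go at the top.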
|
|
|
|
|
Wow, I know what it is and even I'm confused
|
|
|
|
|
That reads exactly like Microsoft's technobabble: I've given up even trying to find out which version of whatever Windows Server is called this minute is appropriate for any given type of business, as even the technical pre-sales guff doesn't ever actually tell you why anyone might want to use the system or what for. As a MS partner, I have campaigned quite strongly against this kind of useless guff, but to no avail.
In the end, I just go with whatever the 'standard' version is called and add on any bits I might need (eg Exchange, SQL server etc).
I look after a charity that is currently using the very short lived 2011 version of Small Business Server (which is not supported by VMWare BTW!). Anyone care to guess what the off-the-shelf replacement for that is? The charity is contemplating moving everything to Google instead, because there isn't a direct equivalent it seems, and the licensing makes something that matches their requirements horrendously expensive...
Maybe I'm just a grumpy old git, but Docker et al. appear to me to implement a lightweight virtualisation platform - so far so good, but I have yet to find any business case for using it rather than a solid hypervisor like vSphere etc. Everywhere I've deployed virtualisation so far, the key thing has been to reduce hardware dependence and maintenance whilst allowing easier backup and fail-over in the event of a hardware issue. I cannot see that Docker offers any advantages over the other well-established hypervisors in such cases.
And don't start quoting performance issues at me - it's been a long time since that was a serious consideration for smaller deployments! 8)
|
|
|
|
|
Many Docker guys try to sell Docker as just a way to pack things up, sort of like a fancy MSI or CAB file but more self-contained. Making it appear very simple and lightweight is essential for marketing.
Really, it is a complete, more or less closed execution environment. A huge black box. You stuff your code into it through a Docker image builder; that makes it enter the black box, and the only thing you can do with it is act as a user, either through a web or SSH interface. You don't see its file system. You don't see its internal network, its processes and threads. You can request some information about what's going on by querying the daemon controlling the whole thing, but that is very indirect, and with far less control and available information than you are used to.
After you have learned the basics, you go into "orchestration": having a dynamic set of maybe a thousand running containers serving your web users. So you add other black boxes controlling all those black boxes - Swarm and Kubernetes are the most common orchestration tools. They create their own closed worlds too, and the infrastructure has become so complex that you can forget everything about "lightweight"...
Don't expect to understand any of Docker in 10 minutes. Not even 10 hours. If you spend 10 days intensely studying it, with the aid of some good instructors, you will begin to understand what it is. After 10 months of practical use, you may have a feeling of beginning to master it.
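To make "orchestration" slightly less abstract: with Docker's built-in Swarm mode, running many container replicas behind a load balancer is a handful of commands. A sketch, assuming a machine with Docker installed; the service name, replica counts and the nginx image are illustrative:

```shell
# Turn this machine into a one-node swarm (manager).
docker swarm init

# Run 50 replicas of a web image behind Swarm's built-in load balancer.
docker service create --name web --replicas 50 -p 80:80 nginx

# Scale up or down on demand.
docker service scale web=200
```

That is the simple face of it; the complexity the post describes arrives when those replicas span many hosts, with overlay networks, secrets, rolling updates and health checks layered on top.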
|
|
|
|
|
Yeah, it's not easy to get a proper technical article amidst a lot of marketing content.
|
|
|
|
|
Yeah, we use it in dev to run PostgreSQL. I use it with Kitematic and so far it's been fine.
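In case it helps anyone else doing the same: spinning up a throwaway PostgreSQL for local dev is essentially a one-liner with the official image. A sketch; the container name, password and version tag are placeholders:

```shell
# Official postgres image; data lives inside the container, so it's disposable.
docker run --name dev-pg -e POSTGRES_PASSWORD=devonly -p 5432:5432 -d postgres:15

# Connect from the host as usual:
#   psql -h localhost -U postgres

# Tear it down when done.
docker stop dev-pg && docker rm dev-pg
```

Mapping port 5432 to the host means existing tools (pgAdmin, psql, your app's connection string) work unchanged.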
|
|
|
|
|
Thank you. How do you do local dev/debugging? Always connect to the remote docker image/instance?
|
|
|
|