|
Maximilien wrote: There is so much technobabble in there
You have to get past that nonsense - it put me off too.
Best way to explain it is as being like a lightweight VM, but instead of storing an image you store how to create the image. The image definition is like an onion - you build it up layer by layer, e.g. we start with a plain linux box, install say node (one layer), global npm packages (second layer), webpack (another layer), until the box can be used for what we want. The layers are all cached when built and stored so you can pull them later - really the "lengthy" build process only needs to happen once. Then you can run the image, mounting parts of the host file system (in our case the source to build), and run scripts inside it, say a webpack build. The images are "universal", so running inside the container gives consistent behaviour whether the host is Windows or Linux.
We're using it to migrate our build process; we've managed to get our build-agent infrastructure down to just a Linux box with Docker installed, and we bypass the corporate process of getting our infra bods to install/upgrade dependencies. Also the Dockerfiles (image definitions) are all just text scripts, so you can even source control them.
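To make that concrete, a layered image definition looks something like this (an illustrative sketch, not our actual Dockerfile - the image and package names are just examples):
FROM debian:bookworm-slim                              # plain linux box
RUN apt-get update && apt-get install -y nodejs npm    # node (one layer)
RUN npm install -g typescript                          # global npm packages (second layer)
RUN npm install -g webpack webpack-cli                 # webpack (another layer)
WORKDIR /src
CMD ["npx", "webpack"]
You then run it with the source mounted from the host, e.g. docker run --rm -v "$(pwd)":/src my-build-image (the image name is made up).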
|
|
|
|
|
Wow, I know what it is and even I'm confused
|
|
|
|
|
That reads exactly like Microsoft's technobabble: I've given up even trying to find out which version of whatever Windows Server is called this minute is appropriate for any given type of business, as even the technical pre-sales guff never actually tells you why anyone might want to use the system, or what for. As an MS partner, I have campaigned quite strongly against this kind of useless guff, but to no avail.
In the end, I just go with whatever the 'standard' version is called and add on any bits I might need (eg Exchange, SQL server etc).
I look after a charity that is currently using the very short-lived 2011 version of Small Business Server (which is not supported by VMware, BTW!). Anyone care to guess what the off-the-shelf replacement for that is? The charity is contemplating moving everything to Google instead, because there doesn't seem to be a direct equivalent, and the licensing makes anything that matches their requirements horrendously expensive...
Maybe I'm just a grumpy old git, but Docker et al. appear to me to implement a lightweight virtualisation platform - so far so good, but I have yet to find a business case for using it rather than a solid hypervisor like vSphere. Everywhere I've deployed virtualisation so far, the key thing has been to reduce hardware dependence and maintenance whilst allowing easier backup and fail-over in the event of a hardware issue. I cannot see that Docker offers any advantages over the other well-established hypervisors in such cases.
And don't start quoting performance issues at me - it's been a long time since that was a serious consideration for smaller deployments! 8)
|
|
|
|
|
Many Docker guys try to sell Docker as just a way to pack things up, sort of like a fancy MSI or CAB file but more self-contained. Making it appear very simple and lightweight is essential for marketing.
Really, it is a complete, more or less closed execution environment. A huge black box. You stuff your code into it through a Docker image builder; that makes it enter the black box, and from then on the only thing you can do with it is act as a user, either through a web or SSH interface. You don't see its file system. You don't see its internal network, its processes and threads. You can request some information about what's going on through the CLI interface to the daemon controlling the whole thing, but that is very indirect and gives far less control and information than you are used to.
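For what it's worth, those indirect queries are standard docker CLI commands; the container name below is just a placeholder:
docker ps                          # list running containers
docker logs some-container         # whatever it has written to stdout/stderr
docker inspect some-container      # low-level configuration and state, as JSON
docker exec -it some-container sh  # open a shell inside it (if it has one)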
After you have learned the basics, you go into "orchestration": having a dynamic set of maybe a thousand running containers serving your web users. So you add other black boxes controlling all those black boxes; Swarm and Kubernetes are the most common orchestration tools. They create their own closed worlds too, and the infrastructure becomes so complex that you can forget everything about "lightweight"...
Don't expect to understand any of Docker in 10 minutes. Not even 10 hours. If you spend 10 days intensely studying it, with the aid of some good instructors, you will begin to understand what it is. After 10 months of practical use, you may have a feeling of beginning to master it.
|
|
|
|
|
Yeah, it's not easy to get a proper technical article amidst a lot of marketing content.
|
|
|
|
|
Yeah, we use it in dev to run PostgreSQL. I use it with Kitematic and so far it's been fine.
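For anyone curious, the CLI equivalent of what Kitematic sets up is roughly this (a sketch - the password, port and version are illustrative):
docker run -d --name dev-postgres -e POSTGRES_PASSWORD=devpass -p 5432:5432 postgres:15
# then point the app (or psql) at localhost:5432 as usual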
|
|
|
|
|
Thank you. How do you do local dev/debugging? Always connect to the remote docker image/instance?
|
|
|
|
|
Someone here set up some scripts to create the docker images and configure them. From what I understand I connect to the local docker instance.
|
|
|
|
|
|
You put everything that is to be included in the new image (except the base image) into a subdirectory on the host. (Keep everything else out of that subdirectory!) In the root of that subdirectory you save the build script (the "Dockerfile") in the script language described in the Docker Build Documentation[^].
Using the CLI interface to the daemon, you give a build command, naming your build script (note that other Docker users will frown if it is named anything other than "Dockerfile", with no extension). This does not do the build on the host; it copies the entire subdirectory into the Docker daemon, and the daemon does the build.
The Dockerfile language is really primitive. Conceptually, the script loads the specified base image (e.g. a Linux base), then RUNs one or more executables (typically some installer) and COPYs files from the directory tree you specified into the file system of the new image, one command line at a time. When all the RUN / COPY commands have been performed, the current state, with the newly installed software, is saved as a new image. There is not much more to it, just minor details such as the command to run when the container is started, naming and other optional things.
Your first Dockerfile could consist of three lines:
FROM some-baseimage
RUN myprogram-installer.exe
CMD myprogram.exe
"myprogram-installer.exe" would be placed in the directory tree that is copied to the Docker daemon.
"myprogram.exe" lives in the file system of this image only, making up a new layer of your image. It exists inside the Docker daemon only, even there invisible to other images, unless they are built using your image as base).
That's it. There is not much to learn, as long as you know how to run installers and start the application...
Note that since the entire build is done in a black box outside your control, there is no way for you to supply any sort of parameters through a dialog. All choices must be specified as arguments on the RUN line (possibly by naming a parameter file that you have COPYed into the image earlier in the Dockerfile, if the installer can be parameterised that way).
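For completeness, the CLI side of that is short (a sketch; "myprogram" is just an illustrative tag):
docker build -t myprogram .    # copies the directory (the "build context") to the daemon, which runs the Dockerfile
docker run --rm myprogram      # starts a container from the new image and runs its CMD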
|
|
|
|
|
It's all so confusing. I checked here [^] and found none of them suitable for acquisition.
|
|
|
|
|
Nish Nishant wrote: Are you using Docker or similar technologies today?
Yes, in fact it's currently my main task
Nish Nishant wrote: What's been your experience like?
Pretty much wholly positive - though we're almost a classic use case. We're containerising our build system, which is a mix of gulp and webpack builds across JavaScript and TypeScript code bases (deprecating the former in favour of the latter), as well as Cordova apps. The platform (.NET) team at the other end of the corridor is doing something similar.
Nish Nishant wrote: What stack do you use it on?
Linux - the host machine doesn't matter too much; the containers are either Alpine or Debian. From there we build the container to do what we want, such as "be a gulp environment" or "be a Cordova build system". Not tried it on Windows; the other team seems to think it's fine. Doesn't work fully on Macs - it runs inside an actual VM, but can be used for testing.
|
|
|
|
|
How's your development/debugging experience? Do you create local docker containers, or do you connect to a remote docker image/instance (e.g. on AWS/Azure)?
|
|
|
|
|
Images were local at first; we're now using AWS ECR to store the images themselves. We don't have any need to persist instances at the moment.
We're also running the image pull and run in .sh scripts; if you have the choice, I'd suggest Python as an alternative if you just want something portable to run on a *NIX system. Developing has been fine; the biggest problem has been the stack we've adopted - Docker, AWS ECR and AWS SSM (which we're using as a secure param store) are all new to the team, and the kiddiewinks I work with have barely any BASH exposure, so there has been a learning curve.
Running locally has been fine from a debugging/dev perspective - the tooling is pretty much your favoured IDE around whatever you've decided to wrap this in (in our case BASH); the only stuff you won't be familiar with is the Dockerfiles (not difficult) and the framework you'll need to spin the thing up (in our case the Docker CLI, but stuff is available for Python and .NET). We also don't really attach a debugger anywhere, as the code is all BASH, so YMMV if you go down a different route. TBH it's not much different from scripting on a Linux OS; our main problem has been testing in our build manager (TeamCity), where we've got the latency of build agents spinning up etc.
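For anyone wondering what "the image pull and run in .sh scripts" amounts to, it's roughly this (a sketch assuming AWS CLI v2; the registry, region and repo names are made up):
#!/bin/bash
set -euo pipefail
REGISTRY=123456789012.dkr.ecr.eu-west-1.amazonaws.com   # hypothetical ECR registry
IMAGE=$REGISTRY/build-env:latest                        # hypothetical repository/tag
# authenticate the local docker client against ECR, then pull and run the build image
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin "$REGISTRY"
docker pull "$IMAGE"
docker run --rm -v "$(pwd)":/src "$IMAGE"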
|
|
|
|
|
Thank you, Keith, appreciate the details.
|
|
|
|
|
No worries, glad to help!
|
|
|
|
|
Haven't used it yet, but I'd very much like to.
I'm sure it has its problems, but it would sure as hell solve a lot of them too!
The company I currently work for has been talking about it, but they're still on Windows 7 so it's a no-go for them (for now).
They do have lots of services that all depend on one another though, so Docker would certainly help in automated testing and deployment.
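For reference, the usual way to describe a set of inter-dependent services for local testing is a compose file - an illustrative sketch with made-up service names:
version: "3.8"
services:
  web:
    image: example/web-app:latest   # hypothetical application image
    depends_on:
      - db                          # start the database first
    ports:
      - "8080:8080"
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: devpass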
|
|
|
|
|
|
|
Thank you. It will be good to read some critical, not-so-positive write-ups too, I suppose.
|
|
|
|
|
Nish Nishant wrote: Will be good to read some critical and not so positive write-ups too I suppose.
Oh, don't get me wrong. Docker running Linux on a Win10 box is great. It's just that Docker for Windows sucks.
|
|
|
|
|
I use Docker to create encapsulated, versioned development environments for my projects. That way I can have different package versions installed side by side, and can easily roll back environment changes. See my Luffer project for details.
|
|
|
|
|
Nice - thank you. Can you talk about your local dev experience? Do you create a docker image locally and work against that?
|
|
|
|
|
Yes, I run the docker container locally in detached mode, with the base directory of my project mapped into it, and use docker exec to send it commands. This is all automated / abstracted away by Luffer, the tool I mentioned earlier.
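In other words, the pattern is roughly this (a sketch of the approach, not Luffer's actual commands - the image and command names are made up):
docker run -d --name project-env -v "$(pwd)":/workspace my-dev-image sleep infinity   # keep the container alive in the background
docker exec project-env npm test     # send commands into the running container
docker stop project-env              # tear it down when finished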
|
|
|
|
|
Excellent. Thank you for the info.
|
|
|
|
|