The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
It's a "container" that specifies the OS and the applications that you want to run. As such, it's much easier, faster, and far smaller to ship the "container" than an entire VM image, because all you're really "shipping" is the specification for what the container holds. Hence the logo: a whale with a bunch of shipping containers on its back. Why does that work? Because on the first run, when you "launch" the container, it downloads all the pieces that you specified and runs the various configuration/setup scripts. When it's done, you have what behaves like a VM with the specified OS and applications that you can "talk to" -- as in, many Docker apps are servers; there's no UI.
It makes it really easy to test stuff, because you can reset the VM back to its original state at any time. From my very limited experience, it works best with Linux and Linux apps because Linux natively doesn't include all the UI bloat that Windows does.
You don't interface with the apps in a container through a UI; you interface with them through a terminal app like PuTTY (or WinSCP for file management), or, if the apps in the container provide a web API, you go that route.
And the cool thing is, the applications running in the container are completely isolated from the host machine. Sure, something malicious might blow away the VM, but your host machine is safe. Furthermore, unless you do something really dumb, the only apps that run in the container are the ones specified in the container configuration file -- so you know what you're putting into it.
And the really, really cool thing is that once the container image is initialized, you can quickly launch multiple instances of it, each isolated from the others.
Because scripting languages like Python are easily specified as "I want to install Python version x.xx", it's really easy to create containers with custom code. And one of the reasons I think Microsoft has put a lot of effort into getting some of its servers and frameworks to run under Linux is that they are easily containerized with the Linux OS.
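To make that concrete, here's a minimal sketch of a Dockerfile that pins a Python version and bundles some custom code (the image tag and `app.py` are placeholder examples, not anything from this thread):

```dockerfile
# Start from an official Python image pinned to a specific version
FROM python:3.8-slim

# Copy in your custom code (app.py is a placeholder name)
WORKDIR /app
COPY app.py /app/app.py

# Run the script when a container is started from this image
CMD ["python", "app.py"]
```

You'd build it once with something like `docker build -t myapp .` and then launch as many isolated instances as you like with `docker run myapp`.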
But Docker for Windows truly sucks, the last time I tried it.
Most Windows software is GUI based, and the only graphical user interface that you can make with Docker is a web interface. So almost all MS software is useless in a Docker environment.
You can make Docker images running programs with a 1970s-style command-line user interface, such as compilers, MSBuild and similar tools. These are typically freely available, such as in the vs_buildtools package, with "no" licensing restrictions. (There is a license agreement, but none of the restrictions is likely to affect you, whatever you use it for.) So for all practical purposes, Windows licensing is not an issue.
The disadvantage of a free package is that there is no support - earlier today, when I asked MS support for a list of the IDs of the modules in the vs_buildtools package, I was told to raise a support case, paid for on a case-by-case basis, with an expected cost of around 300 Euro. I turned that "offer" down, and later today I found the URL listing the IDs. I am happy that I didn't pay MS 300 Euro to provide a URL to one of their own web pages!
Essentially, Docker images create closed environments where the only interfaces to the outer world are IP-based protocols, such as SSH, HTTP, etc. They are sort of nice when you live in a command-line-oriented world - so *nix geeks love them. The isolation is also sort of nice, as long as you do not expect any non-IP interface to the outer world. But if you change a single detail, such as updating the compiler version, you have to create another closed world containing that compiler version.
On the other hand... We have had a long discussion whether to make "one tool, one Docker image" or "one complete set of tools, one Docker image". In the first alternative, the build script is essentially interpreted outside Docker, each Dockerized tool being invoked more or less like another executable. In the second alternative, the build script is essentially passed to the command interpreter of a running Docker container, and the various tools in the container are invoked in turn, inside the container.
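For what it's worth, the second alternative can be sketched as a single image holding the whole toolchain (the base image and package names below are illustrative only, not our actual setup):

```dockerfile
# One image providing a complete, centrally controlled toolchain
FROM debian:bullseye-slim

# Install the whole tool set in one image; a real toolchain image
# would pin exact package versions here
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc \
        make \
        cmake \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /src
# The build script is handed to the shell *inside* the container,
# e.g.:  docker run -v "$PWD":/src toolchain sh build.sh
CMD ["sh"]
```

In the first alternative you'd instead have one tiny image per tool, and the outer build script would call `docker run gcc-image ...`, `docker run cmake-image ...` and so on, one tool at a time.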
In our company, we have essentially gone for the second alternative, because it allows us centralized control over the entire tool chain. We offer the developers a unified set of tools, rather than a pick-and-choose development environment. If some project insists on, say, a newer compiler version, it requires us to update an entire tool chain, which is not a five-minute job. (Well, technically it might be, but we force it to be something that is considered in a larger framework, e.g. by consulting with other users of the old version.) So larger Docker images providing complete toolchains, rather than smaller images providing individual tools, are one way of controlling version proliferation across projects within the company (which certainly has been a problem the last few years).
The closed world of a Docker container (to keep terminology clear: a Docker image is similar to an executable file, with each "layer" similar to a DLL, while a Docker container is similar to a process - an instance of an image, or if you like, an instance of an executable file) means that unless a specific bug affects your program, there is no need to "maintain and patch" the image. Maybe other images are based on newer, patched-up base images, but that won't affect your older image. If you really need to update, because a patch is required for your image to run, then you will create a new Docker image, a new isolated world with that patched-up base image. But that is only if your software depends on those fixes. If it doesn't: keep running your old version. That's what isolation is about.
It's a pair of purpose-designed clippers to remove the tails from lambs so the shi- er, poo doesn't stick to their tails.
Saves them from "fly strike", which (particularly in Aus & NZ) happens when flies lay eggs in the stuck poo, from which maggots emerge to basically eat the lamb's ass; while not every lamb will get this, those that do will suffer in pain and (over many months to years) eventually die a horrible death.
PETA of course insists the practice is cruel, and lambs should be allowed to retain the much increased chance to experience lingering painful diseases.
Should you care? Hell yes! If only to be up-to-date with the latest developments in software, your bread and butter.
I actually can't believe you (and others) didn't already do a five-minute read-up years ago when it became hot.
Although if you're doing UI development it may not be that applicable.
In 7 years as a developer I never shipped anything that required any kind of toy like that. There isn't only the web - in fact there is a definite lack of low level programmers because everyone and their dogs launch themselves on the latest trends, as if programming was a popularity contest, rather than doing the hard stuff.
In 7 years as a developer I never shipped anything that required any kind of toy like that.
Neither did I to be honest, but it's good knowing what's out there even if you're not using it.
As I said, you don't need Docker if you're doing UI development, or low level, as you said, but surely you know about Rust when you're a C(++) developer even if you're not using it? (And just in case you don't, Rust should be THE new and easy replacement for C, or so I'm told, I don't do C(++)).
I once worked with a web developer who never heard of Node.js.
I find that absolutely amazing (and not in a good way) that some people care so little about their trade that they miss such industry changing tools.
Especially as a consultant I can't imagine coming to a client and not knowing about Node.js or Docker, I'd be out of a job in no time!
So it looks like Micro$oft has updated my rig's OS to this build that seems to be getting all sorts of bad reviews.
I'm trying to sync up some backups, and the readings for the sizes of a particular directory are all over the map. I also use a great free utility called WinDirStat (although it takes a while to refresh), which gives the ONLY reading in which the directories (i.e., across a pair of drives) are the same. If it makes any difference, the files here are mostly audio files, with a few auxiliary files, all downloaded via qBittorrent.
WinDirStat: 183,979,917,768 (both drives)
Explorer (at the directory level): F: 148,637,098,513 C: 170,778,902,832
Explorer (within the directory): F: 183,202,426,289 C: 173,874,406,509
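Part of the spread is likely down to what each tool counts: Explorer's figures can differ depending on whether it's reporting "Size" or "Size on disk", and whether it can see hidden/system files. One way to get an independent third opinion is to sum the apparent file sizes yourself (this is GNU `find` syntax, so a *nix-ism; on Windows you'd do the equivalent in PowerShell, and the path is a placeholder):

```shell
# Sum the apparent sizes (in bytes) of every file under a directory.
# Hidden files are included, unlike some Explorer views.
find /path/to/music -type f -printf '%s\n' | awk '{ total += $1 } END { print total }'
```

If this matches WinDirStat on both drives, the Explorer numbers are the odd ones out.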
A few years ago there was a beer that was sold in bottles and on tap, with a particular fruity flavour (redolent of pears). I am sure that the image had a kingfisher in the picture. I cannot find it anywhere (pub or supermarket), nor can I remember the name. Anyone in the UK know the one I mean?
To rule out the obvious, you're not talking about the Indian Kingfisher beer???
"Anything that is unrelated to elephants is irrelephant" - Anonymous
"The problem with quotes on the internet is that you can never tell if they're genuine" - Winston Churchill, 1944
"Never argue with a fool. Onlookers may not be able to tell the difference." - Mark Twain