The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
Well, JVM/dotNET do virtualize some aspects - they are virtualization techniques. You can say the same about a lot of computer concepts: Any compiler virtualizes the instruction set of the CPU. A file system driver creates a virtual storage unit where you don't have to handle sectors, tracks, and surfaces. And so on.
I see a lot of computer people who seem to think that virtualization is one specific thing: Creating a complete virtual hardware CPU / memory / IO environment. If you don't provide all that Hyper-V or VMware provides, it is not virtualization. If you provide something not found in Hyper-V/VMware, then it has nothing to do with virtualization.
I beg to differ. Virtualization can cover an arbitrary set of virtualized aspects. Bytecode is one aspect. Memory paging is another. File system drivers are a third. Yes, you are right that containers can run C# applications but not vice versa. You could say something similar: A file system driver can be realized in bytecode, but a bytecode interpreter cannot be realized by a file system driver, so they are completely different things. Yet both are virtualizations.
I have been arguing with Docker gurus who consistently insist that containers are NOT virtualization! But Docker does create virtual networks, a virtual address space, virtual disks... It is not the entire set of VMware virtualizations, but ... No, Docker gurus insist that Docker is lightweight, efficient, nothing like resource hogs like VMware/Hyper-V! Whatever Docker does must be called something else - even if it is exactly the same as virtualization.
So even people who are working with such issues more or less full time do not have a comprehensive understanding of what virtualization is in a more general sense, but stick to specific instances of it. It should come as no surprise that a less experienced fellow has problems keeping things straight.
I was talking about the runtime virtual machines that the languages use to execute the IL. Assuming I am running in a container (and of course not everything does) and have complete control of the software environment, doesn't that take away operating system uncertainties, leaving only the hardware to be abstracted? In which case, do we need to use a virtual machine at runtime? Of course, that tightly couples the code to a specific runtime environment, which is not ideal, so that would be a good argument not to.
In that case I see what you mean in that it could replace the JVM, although that's likely not going to happen anytime soon.
There are a few problems.
One, as you mention, containers run on the OS while VMs can have their own OS.
Second, so far, running UI applications in containers isn't possible (well, I think it is, but you'll have to go through a lot of trouble and do some hacking and tweaking).
For that reason, I don't think containers will replace VMs; they do different things and have different purposes, even though one looks like the other.
.NET code doesn't run in a virtual machine, as far as I know.
Instead, .NET's IL is compiled by a JIT compiler into machine code.
The JIT compiler is just an application running directly on the user's machine.
The best known container solution, Docker, is primarily suited for back-end servers with a command line interface. You may run a web server in a container to get a sort of GUI, but at a performance cost, and with functionality/flexibility far below what you would expect from a native GUI application.
Also, HTML specs are so fuzzy that we are still fighting with browser incompatibilities. Now that we no longer have IE6 as a scapegoat, no one wants to reveal that there are, and have always been, incompatibilities among the other browsers. (It seems to be much more proper to say "Can't you just tell your users to use Google Chrome?" than it was to say "Can't you just tell your users to use IE6?", even though the logic is the same.)
You can run an X.11 client in a Docker container, but X.11 servers (i.e. front ends - X.11 terminology is somewhat strange) are not very widespread nowadays, in particular in Windows environments. X.11 handles mouse/screen only; any other I/O requires a different model. Adapting a GUI application from almost any other framework to X.11 is likely to require a major rewrite.
Docker is essentially a *nix thing. The interface with the host is very much according to *nix structure and philosophy. There is a Windows Docker, but the MS guys had to give up mapping all Windows functions onto that *nix host interface, and made their own. But this host interface is far from stable, and is updated with every new Windows release, so every six months you have to rebuild all your Windows Docker images to fit the new host OS version. Not much virtualization there... And even with that Windows-specific host interface, you can only run CLI Windows applications - no GUI. (Windows Docker can run Linux containers, though, but of course not the other way around: The Linux community won't touch the Windows variant with a ten foot pole.)
Even if you stick to Linux: Docker provides no virtualization of the CPU. The executable code is "bare" and runs directly on the CPU. You can't run 64 bit code on a 32 bit CPU, or an ARM container on an Intel CPU.
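A quick way to see this for yourself: the code in a container reports the host's real CPU architecture, because nothing is emulated. (The `docker` lines below are illustrative sketches, left as comments; the image name `alpine` and the `--platform` flag are standard Docker, but the exact error text can vary.)

```shell
# A container shares the host kernel and CPU; its binaries run natively.
uname -m    # prints the host architecture, e.g. x86_64

# The same check inside a container returns the same value -- no CPU
# emulation is happening (illustrative, not run here):
# docker run --rm alpine uname -m            # -> x86_64 on an x86-64 host

# An image built for another architecture simply fails to exec:
# docker run --rm --platform linux/arm64 alpine uname -m
#   -> "exec format error" unless qemu binfmt emulation is installed
```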
A container is exactly identical every time it starts up. It has a file system, but any changes made during execution are temporary, disappearing when the container terminates. You cannot set preferences, maintain a list of last files processed etc. in the container; all data to be modified permanently must be maintained outside the container, either by mapping a host directory at run time (which creates certain problems when the OS differs from one environment to the other), a database, or a file system external to the container but maintained by the Docker engine. You must adapt your application to this. E.g. if you keep user information in the Windows registry HKCU, you not only must move all of that into an external database, but you must provide some login or user identification mechanism: Unless instructed otherwise, a given Docker image always runs as a given user.
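As a minimal sketch of the host-directory workaround described above (the directory `/tmp/appdata`, the file `prefs.ini`, and the image name `myapp` are all made-up examples; the actual `docker run` line is left as a comment):

```shell
# Mutable state lives outside the container, in a host directory...
mkdir -p /tmp/appdata
echo 'last_file=report.txt' > /tmp/appdata/prefs.ini

# ...which is bind-mounted in at run time (illustrative, not run here):
# docker run --rm -v /tmp/appdata:/data myapp
# The container reads and writes /data; anything it writes there
# survives after the container terminates, unlike the container's
# own file system.
cat /tmp/appdata/prefs.ini
```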
And so on. Simple Docker demos are simple to make. If you really believe that you have any sort of "virtualization", and try to make use of it, you will soon be out of luck. Docker traps a number of system calls, and sets up memory management so that you cannot address anything outside the set of "layers" making up the container (plus making use of the bottom-layer host interface) - that's all the "virtualization" it does. It sets up fences.
The major difference between JVM/dotNET and Docker is that Docker packs a stack of DLLs ("layers" in Docker lore) into one tight package, identified by a SHA256 checksum so nothing can be updated/changed, and no memory references whatsoever are permitted outside this package, neither to code nor data. JVM/dotNET allow late DLL binding with fuzzy versioning, and provide no simple-to-use way to pack DLLs together and bind them to one unique version (in the SHA256 sense) of all the other DLLs in the pack.
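The content-addressing mentioned above can be illustrated without Docker at all: a layer's identity *is* the SHA-256 digest of its bytes, so changing a single byte produces what Docker treats as a different layer.

```shell
# Docker identifies layers by the SHA-256 of their contents.
# Change one byte and you get a completely different digest,
# i.e. a "different" layer -- nothing can be silently updated.
printf 'layer contents v1' | sha256sum
printf 'layer contents v2' | sha256sum   # unrelated digest
```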
This packing of specific DLL versions into one unit, prohibiting all external references, does have arguments going for it. But the way it is done, it has far too many limitations. As long as you are in a command line world, and all your tools are Linux based, you can work around the restrictions and limitations. For back-end servers, it may be fine. You cannot move images among different hardware architectures. You can run Docker images of any color that you want, provided that you want it in black (Linux, that is). If you want any other color, then you are stuck.
If you develop Windows end user applications, Docker is certainly not for you. You will be forced into a command line Linux style world. You certainly do not gain any sort of flexibility or freedom from host restriction comparable to e.g. what VMware virtualization gives you.
There were rumors about significant updates to native Windows virtualization (Hyper-V) coming up for Windows 10X - I didn't get the details, but got the impression that it would be more lightweight. If anyone knows more about this, please provide links! I guess that if you want virtualization for Windows applications, this is a far more viable alternative than Docker.
Docker is the only container technology I have used (/fought with). Maybe there are others that are better suited, but today it seems as if most people consider containers and Docker to be more or less synonymous.
So let's assume my code runs in a container. Not all code will, fair enough. And the "end of C# and Java" question was a little bit provocative.
So anyway, I have code that I run in a container. For that code I have complete control over the software environment. I can't control the underlying hardware, but I can control the software. So the question is this: in this case, are there advantages in allowing the runtime to assume a specific software environment? If I do that, doesn't the runtime virtual machine essentially become a hardware abstraction? And in that case, do we need it in its present form?
I can think of a few reasons why I may not want to do this, but it's an argument I have heard a few times and am looking for opinions.
Surely you mean "VMs" unless you are referring to something they possess.
From 1989 to 2002, along with doing development, I was a system manager for various systems running 5.x, 6.x, and 7.x.
And don't forget to mention 9-track tape reels.
Now I have four small OpenVMS systems purchased via Ebay running versions 7.2 (AlphaServer 800), 7.3 (MicroVAX 3100), 8.3 (AlphaServer DS10L), and 8.4 (Integrity rx1620, Itanium) to keep from getting too rusty.
Hmm, I have a colleague who is a great believer in containers. However, I have yet to find any use case for them in my work, even though I use VMs extensively and have done for many, many years.
It has been a very long time since any of the true hypervisors consumed significant amounts of the host's available resources (certainly the bare-metal ones anyway, like ESX). The sheer hassle of coming up with a working Docker image of, say, my dev environment that I can replicate easily between my various workplaces and machines is much greater than just cloning a complete VM and spinning it up. For ongoing development I just use Nextcloud to replicate the work between multiple (virtual and/or physical) machines to make sure they all keep in step, and write occasional updates to my Git repository by way of additional backup (itself running in a Linux VM that is hosted on one of my ESXi hosts - which also hosts my DC, my SQL server, a mail server for some of my clients, a 3CX phone exchange and a Nextcloud instance that I and some of my clients use - all on an old i7 with 32 GB RAM).
I use VMs to replicate the entire working structure of one of my clients, so I can develop in a replica of their production environment without risk to their setup and yet be confident that when I deploy, things will work.
Despite the incredible hype surrounding Docker (and to some extent Kubernetes), I have yet to find any instance where Docker was a better fit for me. My colleague, despite insisting that containers would be much better and more productive, has never been able to explain exactly how they would help me.
But don't containers also do that, by allowing us to have whatever OS we want independent of the underlying OS?
A container completely depends on the underlying OS. If you built your program on Win7, you have to deploy the container image on the same OS. Contrast that with a compiled C# program, which can run ANYWHERE you have .NET installed (which is far smaller than a whole OS).