The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
One of the main advantages of C# and Java is their use of a virtual machine. It abstracts the dependency on the underlying hardware. But don't containers also do that, by allowing us to have whatever OS we want independent of the underlying operating system? So why do we continue to use VMS in a world of containers? And if the use of VMS goes, does that mean the writing is on the wall for languages that use them, such as C# and Java, or will we simply see a move away from the VM and revert to having the code more tightly coupled to the underlying OS?
If "abstracting the dependency on the underlying hardware" is the criterion for a VM, then PDF readers are VMs, and even some word processors -- in fact, it could be said that anything that transports commands to OS peripheral interfaces is a VM.
For me, being in a purist mood, a VM has to effectively sidestep the underlying OS of the computer, by running files on a different OS on top of the underlying OS.
Do C# and Java do this? Not so far as I know, they don't; they may abstract things a tiny bit further than a PDF reader does, but it's still only abstraction.
They are programs that allow you to open, run, and use certain files.
Notepad does that much, for Heaven's sake!
So stop calling spades shovels, and the "problem" highlighted by the article disappears.
I wanna be a eunuchs developer! Pass me a bread knife!
Of course they do: the code is compiled to intermediate language in .NET, and that is executed in the runtime virtual machine. It is this virtual machine that may be impacted in the case of code running in containers, since the software environment is now controlled. The hardware isn't, but the software is.
In that case, are the benefits of using the runtime virtual machine as compelling? Of course not all code runs in containers, and it never will, but in the case of containers, are there any advantages to be gained by having control over the software environment? Potentially, do we need the VM in its current form (in containerised apps)?
If we don't need the VM (and I am not saying we don't; I am mulling over the question), then isn't that a bit of a kick in the teeth for languages that use one, such as C#, VB.NET, Java, Python, etc.? Will we see the emergence of a language more suited to containerised apps?
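The same compile-to-intermediate-code-then-execute pipeline is easy to see in CPython, one of the VM languages mentioned: source is compiled to bytecode, and the interpreter loop (the runtime VM) executes it. A minimal illustration using the standard `dis` module:

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the function body to bytecode ahead of execution;
# the raw bytecode is just bytes attached to the code object.
assert isinstance(add.__code__.co_code, bytes)

# The interpreter (the runtime VM) executes those instructions.
assert add(2, 3) == 5

# dis shows the intermediate instructions, analogous to .NET IL.
dis.dis(add)
```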
the code is compiled to intermediate language in .NET, and that is executed in the runtime virtual machine. It is this virtual machine that may be impacted in the case of code running in containers, since the software environment is now controlled. The hardware isn't, but the software is.
I'd call that a sandbox.
To me, a VM has to allow the hardware and peripherals to be governed by a different OS (or another instance of the same OS).
In all honesty, many people replying have been banging on about VMS (or VMs, for the pedants) as if I was talking about the traditional virtual server. I'm not; I am talking about the runtime virtual machines integral to program execution in many languages. Not really a sandbox: they are referred to by the term virtual machine, hence the confusion. My bad for not being clearer.
That doesn't make any sense, they're completely different things.
Both containers and VMs can run C# and Java applications, but not vice versa.
C# and Java can be used to create new applications, while VMs and containers, well, can't, because they're very different things.
VMs can be used for work computers, servers, sandboxes, etc. and give you a complete OS on top of your OS.
Containers just run a piece of (non-UI) software on the existing OS.
If you really have to ask this I suggest you do some reading on the topics.
Well, JVM/dotNET do virtualize some aspects - they are virtualization techniques. You can say the same about a lot of computer concepts: any compiler virtualizes the instruction set of the CPU. A file system driver creates a virtual storage unit where you don't have to handle sectors and tracks and surfaces. And so on.
I see a lot of computer people who seem to think that virtualization is one specific thing: creating a complete virtual hardware CPU / memory / IO environment. If you don't provide all of what Hyper-V or VMware provides, it is not virtualization. If you provide something not found in Hyper-V/VMware, then it has nothing to do with virtualization.
I beg to differ. Virtualization can cover an arbitrary set of virtualized aspects. Bytecode is one aspect. Memory paging is another. File system drivers are a third. Yes, you are right that containers can run C# applications but not vice versa. You could say something similar: a file system driver can be realized in byte code, but a byte code interpreter cannot be realized by a file system driver, so they are completely different things. Yet both are virtualizations.
I have been arguing with Docker gurus who consistently insist that containers are NOT virtualization! But Docker does create virtual networks, a virtual address space, virtual disks... It is not the entire set of VMware virtualizations, but ... No, Docker gurus insist that Docker is lightweight, efficient, nothing like resource hogs like VMware/Hyper-V! Whatever Docker does must be called something else - even if it is exactly the same as virtualization.
So even people who are working with such issues more or less full time do not have a comprehensive understanding of what virtualization is in a more general sense, but stick to specific instances of it. It should come as no surprise that a less experienced fellow has problems keeping things straight.
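To make the "bytecode is one virtualized aspect" point concrete, here is a minimal sketch of a stack-machine interpreter, with hypothetical opcodes invented for illustration (not any real VM's instruction set). It is the essential shape of what the JVM or the .NET runtime does at a vastly larger scale:

```python
# A toy stack machine: it virtualizes exactly one aspect,
# the instruction set. Opcodes here are made up for the example.
PUSH, ADD, MUL = "PUSH", "ADD", "MUL"

def run(program):
    stack = []
    for op, *args in program:
        if op == PUSH:
            stack.append(args[0])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, expressed as "bytecode" for the toy VM
program = [(PUSH, 2), (PUSH, 3), (ADD,), (PUSH, 4), (MUL,)]
assert run(program) == 20
```

The interpreter neither knows nor cares what CPU it runs on; that is the single aspect it virtualizes.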
I was talking about the runtime virtual machines that the languages use to execute the IL. Assuming I am running in a container (and of course not everything does) and have complete control of the software environment, doesn't that take away operating system uncertainties, leaving the hardware to be abstracted? In which case, do we need to use a virtual machine at runtime? Of course, that tightly couples the code to a specific runtime environment, which is not ideal, so that would be a good argument not to.
In that case I see what you mean in that it could replace the JVM, although that's likely not going to happen anytime soon.
There are a few problems.
One, as you mention, containers run on the OS while VMs can have their own OS.
Second, so far, running UI applications in containers isn't possible (well, I think it is, but you'll have to go through a lot of trouble and do some hacking and tweaking).
For that reason, I don't think containers will replace VMs, they do different things and have different purposes, even though one looks like the other.
.NET code doesn't run in a virtual machine, as far as I know.
Instead, .NET's IL is compiled by a JIT compiler into machine code.
The JIT compiler is just an application running directly on the user's machine.
The best known container solution, Docker, is primarily suited for back end servers with a command line interface. You may run a web server in a container, to get a sort of GUI interface, but at a performance cost, and with functionality/flexibility far below what you would expect from a native GUI application.
Also, HTML specs are so fuzzy that we are still fighting with browser incompatibilities. Now that we no longer have IE6 as a scapegoat, no one wants to reveal that there are, and have always been, incompatibilities among the other browsers. (It seems to be much more proper to say "Can't you just tell your users to use Google Chrome?" than it was to say "Can't you just tell your users to use IE6?", even though the logic is the same.)
You can run an X.11 client in a Docker container, but X.11 servers (i.e. front ends - X.11 terminology is somewhat strange) are not very widespread nowadays, in particular in Windows environments. X.11 handles mouse/screen only; any other I/O requires a different model. Adapting a GUI application from almost any other framework to X.11 is likely to require a major rewrite.
Docker is essentially a *nix thing. The interface with the host is very much according to *nix structure and philosophy. There is a Windows Docker, but the MS guys had to give up mapping all Windows functions onto that *nix host interface, and made their own. But this host interface is far from stable, and is updated with every new Windows release, so every half year you have to rebuild all your Windows Docker images to fit the new host OS version. Not much virtualization there... And even with that Windows-specific host interface, you can only run CLI Windows applications - no GUI. (Windows Docker can run Linux containers, though, but of course not the other way around: the Linux community won't touch the Windows variant with a ten foot pole.)
Even if you stick to Linux: Docker provides no virtualization of the CPU. The executable code is "bare", and run directly on the CPU. You can't run 64 bit code on a 32 bit CPU, or an ARM container on an Intel CPU.
A container is exactly identical every time it starts up. It has a file system, but any changes made during execution are temporary, disappearing when the container terminates. You cannot set preferences, maintain a list of last files processed etc. in the container; all data to be modified permanently must be maintained outside the container, either by mapping a host directory at run time (which creates certain problems when OSes differ from one environment to the other), a database, or a file system external to the container but maintained by the Docker engine. You must adapt your application to this. E.g. if you keep user information in the Windows registry HKCU, you not only must move all of that into an external database, but you must provide some login or user identification mechanism: unless instructed otherwise, a given Docker image always runs as a given user.
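A common way to adapt an application to this is to take the location of persistent state from the environment, so the operator can map it to a host volume. A minimal sketch (names like `DATA_DIR` and the mount path are hypothetical), assuming the container is started with something like `docker run -v /host/data:/data -e DATA_DIR=/data ...`:

```python
import json
import os
import tempfile

# DATA_DIR is expected to point at a mounted volume; anything written
# elsewhere in the container's file system vanishes when it stops.
# The fallback to a throwaway directory mimics that ephemerality.
DATA_DIR = os.environ.get("DATA_DIR", tempfile.mkdtemp())

def save_prefs(prefs):
    path = os.path.join(DATA_DIR, "prefs.json")
    with open(path, "w") as f:
        json.dump(prefs, f)
    return path

def load_prefs():
    with open(os.path.join(DATA_DIR, "prefs.json")) as f:
        return json.load(f)

save_prefs({"last_file": "report.txt"})
assert load_prefs() == {"last_file": "report.txt"}
```

The point is the inversion: the application no longer decides where state lives; the environment outside the container does.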
And so on. Simple Docker demos are simple to make. If you really believe that you have any sort of "virtualization", and try to make use of it, you will soon be out of luck. Docker provides a trapping of a number of system calls, and sets up memory management so that you cannot address anything outside the set of "layers" making up the container (plus making use of the bottom layer host interface) - that's all the "virtualization" it does. It sets up fences.
The major difference between JVM/dotNET and Docker is that Docker packs a stack of DLLs ("layers" in Docker lore) into one tight package, identified by a SHA256 checksum so nothing can be updated/changed, and no memory references whatsoever are permitted outside this package, neither to code nor data. JVM/dotNET allow late DLL binding with fuzzy versioning, and provide no simple-to-use way to pack DLLs together and bind them to one unique version (in the SHA256 sense) of all the other DLLs in the pack.
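The content addressing is just a cryptographic hash over the layer's bytes, so "nothing can be updated" falls out for free: any change produces a different identity. A sketch of the idea (not Docker's actual layer format):

```python
import hashlib

def layer_digest(content):
    # Identify a layer by the SHA-256 of its bytes, Docker-style:
    # the name *is* the content, so a changed layer is a new layer.
    return "sha256:" + hashlib.sha256(content).hexdigest()

v1 = layer_digest(b"libfoo 1.2.3")
v2 = layer_digest(b"libfoo 1.2.4")
assert v1.startswith("sha256:")
assert v1 != v2  # any modification yields a different identity
```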
This packing of specific DLL versions into one unit, prohibiting all external references, does have arguments going for it. But the way it is done, it has far too many limitations. As long as you are in a command line world, and all your tools are Linux based, you can work around the restrictions and limitations. For back end servers, it may be fine. You cannot move images among different hardware architectures. You can run Docker images in any color that you want, provided that you want it in black (Linux, that is). If you want any other color, then you are stuck.
If you develop Windows end user applications, Docker is certainly not for you. You will be forced into a command line Linux style world. You certainly do not gain any sort of flexibility or freedom from host restriction comparable to e.g. what VMware virtualization gives you.
There were rumors about significant updates to native Windows virtualization (Hyper-V) coming up for Windows 10X - I didn't get the details, but got the impression that it would be more lightweight. If anyone knows more about this, please provide links! I guess that if you want virtualization for Windows applications, this is a far more viable alternative than Docker.
Docker is the only container technology I have used (/fought with). Maybe there are others that are better suited, but today it seems as if most people consider containers and Docker to be more or less synonyms.
So let's assume my code runs in a container. Not all code will, fair enough. And the end-of-C#-and-Java question was a little bit provocative.
So anyway, I have code that I run in a container. For that code I have complete control over the software environment. I can't control the underlying hardware, but I can control the software. So the question is this: in this case, are there advantages in allowing the runtime to assume a specific software environment? If I do that, doesn't the runtime virtual machine essentially become hardware abstraction? And in that case, do we need it in its present form?
I can think of a few reasons why I may not want to do this, but it's an argument I have heard a few times and I am looking for opinions.
Surely you mean "VMs" unless you are referring to something they possess.
From 1989 to 2002, along with doing development, I was a system manager for various systems running 5.x, 6.x, and 7.x.
And don't forget to mention 9-track tape reels.
Now I have four small OpenVMS systems purchased via Ebay running versions 7.2 (AlphaServer 800), 7.3 (MicroVAX 3100), 8.3 (AlphaServer DS10L), and 8.4 (Integrity rx1620, Itanium) to keep from getting too rusty.