The best-known container solution, Docker, is primarily suited for back-end servers with a command-line interface. You may run a web server in a container to get a sort of GUI interface, but at a performance cost, and with functionality/flexibility far below what you would expect from a native GUI application.
Also, HTML specs are so fuzzy that we are still fighting browser incompatibilities. Now that we no longer have IE6 as a scapegoat, no one wants to admit that there are, and always have been, incompatibilities among the other browsers. (It seems to be much more acceptable to say "Can't you just tell your users to use Google Chrome?" than it was to say "Can't you just tell your users to use IE6?", even though the logic is the same.)
You can run an X11 client in a Docker container, but X11 servers (i.e. front ends; X11 terminology is somewhat strange) are not very widespread nowadays, in particular in Windows environments. X11 handles mouse/screen only; any other I/O requires a different model. Adapting a GUI application from almost any other framework to X11 is likely to require a major rewrite.
Docker is essentially a *nix thing. The interface with the host very much follows *nix structure and philosophy. There is a Windows Docker, but the MS guys had to give up on mapping all Windows functions onto that *nix host interface, and made their own. But this host interface is far from stable, and is updated with every new Windows release, so every half year you have to rebuild all your Windows Docker images to fit the new host OS version. Not much virtualization there... And even with that Windows-specific host interface, you can only run CLI Windows applications - no GUI. (Windows Docker can run Linux containers, though, but of course not the other way around: the Linux community won't touch the Windows variant with a ten-foot pole.)
Even if you stick to Linux: Docker provides no virtualization of the CPU. The executable code is "bare", and runs directly on the CPU. You can't run 64-bit code on a 32-bit CPU, or an ARM container on an Intel CPU.
A container is exactly identical every time it starts up. It has a file system, but any changes made during execution are temporary, disappearing when the container terminates. You cannot set preferences, maintain a list of recently processed files, etc. in the container; any data to be modified permanently must be maintained outside the container, either by mapping a host directory at run time (which creates certain problems when host OSes differ from one environment to the other), a database, or a file system external to the container but maintained by the Docker engine. You must adapt your application to this. E.g. if you keep user information in the Windows registry HKCU, you not only must move all of that into an external store, but you must provide some login or user identification mechanism: unless instructed otherwise, a given Docker image always runs as a given user.
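To make that concrete, here is a minimal Python sketch of the usual workaround: the application reads its state directory from the environment, so at run time you can point it at a bind-mounted host directory. All names here (`APP_STATE_DIR`, `prefs.json`) are hypothetical, not any particular API.

```python
import json
import os

# Hypothetical pattern: keep mutable state (preferences, recent files) on a
# path supplied from outside the container, e.g. a bind-mounted host dir
# passed in when the container starts. Writes then land on the mounted
# volume and survive container restarts.
STATE_DIR = os.environ.get("APP_STATE_DIR", "/data")

def save_prefs(prefs):
    os.makedirs(STATE_DIR, exist_ok=True)
    with open(os.path.join(STATE_DIR, "prefs.json"), "w") as f:
        json.dump(prefs, f)

def load_prefs():
    try:
        with open(os.path.join(STATE_DIR, "prefs.json")) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # fresh container, no state mounted yet
```

Anything written anywhere else in the container's file system simply evaporates on the next start.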
And so on. Simple Docker demos are simple to make. If you really believe that you have any sort of "virtualization", and try to make use of it, you will soon be out of luck. Docker traps a number of system calls, and sets up memory management so that you cannot address anything outside the set of "layers" making up the container (plus the bottom-layer host interface) - that's all the "virtualization" it does. It sets up fences.
The major difference between JVM/dotNET and Docker is that Docker packs a stack of DLLs ("layers" in Docker lore) into one tight package, identified by a SHA256 checksum so nothing can be updated/changed, and no memory references whatsoever are permitted outside this package, neither to code nor data. JVM/dotNET allow late DLL binding with fuzzy versioning, and provide no simple-to-use way to pack DLLs together and bind them to one unique version (in the SHA256 sense) of all the other DLLs in the pack.
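The "identified by a SHA256 checksum" part is just content addressing: the identity *is* the hash of the bytes, so nothing can change without the identity changing. A tiny Python sketch (illustrative layer contents, not real Docker layers):

```python
import hashlib

# Content addressing in miniature: two "layers" that differ by one version
# string get entirely different identities. You cannot silently swap one
# for the other, because the name would no longer match the bytes.
layer_a = b"contents of some DLL, version 1.0"
layer_b = b"contents of some DLL, version 1.1"

digest_a = hashlib.sha256(layer_a).hexdigest()
digest_b = hashlib.sha256(layer_b).hexdigest()

assert digest_a != digest_b  # any change at all produces a new identity
```

This is why a Docker image pinned by digest is immutable in a way that "version 1.x or later" DLL binding never is.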
This packing of specific DLL versions into one unit, prohibiting all external references, does have arguments going for it. But the way it is done, it has far too many limitations. As long as you are in a command-line world, and all your tools are Linux based, you can work around the restrictions and limitations. For back-end servers, it may be fine. You cannot move images among different hardware architectures. You can run Docker images of any color that you want, provided that you want it in black (Linux, that is). If you want any other color, then you are stuck.
If you develop Windows end user applications, Docker is certainly not for you. You will be forced into a command line Linux style world. You certainly do not gain any sort of flexibility or freedom from host restriction comparable to e.g. what VMware virtualization gives you.
There were rumors about significant updates to native Windows virtualization (Hyper-V) coming up for Windows 10X - I didn't get the details, but got the impression that it would be more lightweight. If anyone knows more about this, please provide links! I guess that if you want virtualization for Windows applications, this is a far more viable alternative than Docker.
Docker is the only container technology I have used (/fought with). Maybe there are others that are better suited, but today it seems as if most people consider containers and Docker to be more or less synonyms.
So let's assume my code runs in a container. Not all code will, fair enough. And the "end of C# and Java" question was a little bit provocative.
So anyway, I have code that I run in a container. For that code I have complete control over the software environment. I can't control the underlying hardware, but I can control the software. So the question is this: in this case, are there advantages in allowing the runtime to assume a specific software environment? If I do that, doesn't the runtime virtual machine essentially become a hardware abstraction? And in that case, do we need it in its present form?
I can think of a few reasons why I may not want to do this, but it's an argument I have heard a few times and I am looking for opinions.
Surely you mean "VMs" unless you are referring to something they possess.
From 1989 to 2002, along with doing development, I was a system manager for various VMS systems running 5.x, 6.x, and 7.x.
And don't forget to mention 9-track tape reels.
Now I have four small OpenVMS systems purchased via eBay running versions 7.2 (AlphaServer 800), 7.3 (MicroVAX 3100), 8.3 (AlphaServer DS10L), and 8.4 (Integrity rx1620, Itanium) to keep from getting too rusty.
Hmm, I have a colleague who is a great believer in containers. However, I have yet to find any use case for them in my work, even though I use VMs extensively and have done for many, many years.
It has been a very long time since any of the true hypervisors have consumed significant amounts of the host's available resources (certainly the bare-metal ones, like ESX). The sheer hassle of coming up with a working Docker image of, say, my dev environment, that I can replicate easily between my various workplaces and machines, is much greater than just cloning a complete VM and spinning it up. For ongoing development I just use Nextcloud to replicate the work between multiple (virtual and/or physical) machines to make sure they all keep in step, and write occasional updates to my Git repository by way of additional backup (itself running in a Linux VM that is hosted on one of my ESXi hosts - which also hosts my DC, my SQL server, a mail server for some of my clients, a 3CX phone exchange, and a Nextcloud instance that I and some of my clients use - all on an old i7 with 32 GB RAM).
I use VMs to replicate the entire working structure of one of my clients, so I can develop in a replica of their production environment without risk to their setup and yet be confident that when I deploy, things will work.
Despite the incredible hype surrounding Docker (and to some extent Kubernetes), I have yet to find any instance where Docker was a better fit for me. My colleague, despite insisting that containers would be much better and more productive, has never been able to explain exactly how they would help me.
But don't containers also do that, by allowing us to have whatever OS we want independent of the underlying host OS?
A container completely depends on the underlying OS. If you build your program on Win7, you have to deploy the container image on the same OS. Contrast that with a compiled C# program, which can work ANYWHERE you have installed .NET (which is far smaller than a whole OS).
You're kinda hovering around the idea; close, but no cigar.
Java and .NET Core (not really C#) are meant to be platform agnostic, as they compile to an intermediate language (bytecode for Java and CIL for .NET) before being executed by their runtimes. All other hardware is seen through a Hardware Abstraction Layer (HAL), which hides the details and intricacies from the software.
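As an aside, Python works the same way, and makes the idea easy to poke at: functions are compiled to a platform-neutral bytecode that the runtime interprets, analogous to Java bytecode or CIL (an analogy only, not CIL itself):

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled add() to an instruction stream; the raw
# bytes live on the code object, and dis renders them human-readable.
print(add.__code__.co_code)  # raw bytecode bytes
dis.dis(add)                 # disassembly, e.g. LOAD_FAST / BINARY ops
```

The same bytecode runs unchanged on any machine with a compatible interpreter, which is exactly the portability claim being made for the JVM and .NET.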
VMware and EC2, on the other hand, abstract a full PC environment into a virtual solution which also includes the operating system. It is this last component that really makes a difference, and the reason these solutions will never go away.
Docker, on the other hand, is simply a simulation created on Linux to trick software into thinking it is isolated from the hardware, when it is actually just a Linux process shielded with lots of cgroups and chroots. This does not make the binary portable, nor does it create a full operating system; rather, it uses the OS from the host. Malware loves this approach, and that's the reason it can't be used for dubious workloads.
I guess in the end, Docker will change things, and Java (or any equivalent) in the future will look more like Docker, where you'll create both the Java Virtual Machine and a Docker container to accompany it, with a single command.
You're kidding, right? First, containers are a specialized kind of VM. End of story, as they abstract an operating system. As for the end of C#: the future has never been brighter. We are now for the first time seeing C# penetrating the Linux world. Setting up a project to run in a virtualized environment is super easy today. Between containers and app engines running the language of choice, C# has never been easier to scale.
I honestly don't know about Java - but I wouldn't count it out anytime soon.
In one of the gaming forums that I frequent, they like to post so-called "unpopular opinion" threads, and then proceed to say something everybody agrees on...
Anyway, while idly driving to work this morning I was struck by an idea, and I am going to try my own take on the unpopular-opinion meme. See whether I am also, mistakenly, posting what is in fact a popular opinion! ^_^
So.. here I go.
From a C# dev to another C# dev. I hate interfaces.
OK, OK, sorry, I've got nothing against interfaces in principle. They can be quite useful. It's just that in practice I have seen so many projects with zillions of interfaces with zillions of methods which are only implemented once. Worse, sometimes some of those method implementations can be derived from the other methods, and if one were to implement the interface twice there would be a lot of copy-paste :/
But what really takes the cake, and what I unambiguously despise, is the argument that it helps "testability" (using mocks! oh god, mocks, I hate thee so).
From what I have seen, those people are sticklers for "unit tests" (as opposed to "integration tests") which basically only test the mock that you spent hours writing, make refactoring difficult, and don't really test the application... And it is often white-box testing (I know it's implemented this way, that's why I write that test), which contributes to making refactoring a pain.
This is worse than a waste of time. It also makes future developers waste time.
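The "you're only testing the mock" complaint can be sketched in a few lines. Here it is in Python with `unittest.mock` (hypothetical `total_price`/`get_items` names, purely illustrative):

```python
from unittest import mock

def total_price(repo):
    # Code under "test": sums prices fetched from some repository.
    return sum(item.price for item in repo.get_items())

# The "unit test": every collaborator is a mock we configured ourselves.
repo = mock.Mock()
repo.get_items.return_value = [mock.Mock(price=10), mock.Mock(price=15)]

assert total_price(repo) == 25       # passes...
repo.get_items.assert_called_once()  # ...but mostly we verified our own
                                     # mock wiring, not how a real
                                     # repository actually behaves.
```

If the real repository returns items lazily, throws, or names the method differently after a refactor, this test stays green (or breaks for the wrong reason), which is exactly the complaint above.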
I'd say that many C# developers -- if given a choice -- would rather have multiple-inheritance.
I doubt many C# developers are of the opinion that Interfaces are just the bestest thing ever.
I'd prefer to have both.
What most developers don't understand is that Interfaces enforce the "like a duck" requirement for Duck Typing.
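That "like a duck" requirement is easiest to show in a language where duck typing is native. In Python, `typing.Protocol` plays the interface role: membership is decided by shape, not by a declared base class (names here are illustrative):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    # The "interface": anything with a quack() method qualifies.
    def quack(self) -> str: ...

class Duck:  # note: does NOT declare Quacker as a base class
    def quack(self) -> str:
        return "quack"

class Brick:
    pass

# Structural check: quacks like a duck, therefore it's a Quacker.
assert isinstance(Duck(), Quacker)
assert not isinstance(Brick(), Quacker)
```

A C# interface makes the same "has these members" contract explicit at compile time; the Protocol version just makes the duck-typing reading of it visible.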
Partial interfaces... the new thing with default method implementations for interfaces?
Looks good, better than extension methods!
Unfortunately I can't quite use them with .NET 4.7.2, I think (mm... I think there is a project setting to use them with .NET 4.7.2, but I have cold feet on that ^_^ )
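For anyone who hasn't met the feature: a default method lets the interface supply an implementation derived from its abstract members, so the copy-paste complaint earlier in the thread goes away. A rough Python analogy using an ABC (an analogy only, not C# semantics; names are made up):

```python
from abc import ABC, abstractmethod

class Greeter(ABC):
    @abstractmethod
    def name(self) -> str: ...

    # Default implementation built from the abstract member - roughly what
    # C# 8 default interface methods allow, so every implementer doesn't
    # have to re-implement greet() identically.
    def greet(self) -> str:
        return f"Hello, {self.name()}!"

class World(Greeter):
    def name(self) -> str:
        return "World"

print(World().greet())  # -> Hello, World!
```

The implementing class only supplies the one method the default cannot derive.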