The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
The best-known container solution, Docker, is primarily suited for back-end servers and command-line interfaces. You may run a web server in a container to get a sort of GUI, but at a performance cost, and with functionality/flexibility far below what you would expect from a native GUI application.
Also, HTML specs are so fuzzy that we are still fighting with browser incompatibilities. Now that we no longer have IE6 as a scapegoat, no one wants to admit that there are, and have always been, incompatibilities among the other browsers. (It seems to be much more acceptable to say "Can't you just tell your users to use Google Chrome?" than it was to say "Can't you just tell your users to use IE6?", even though the logic is the same.)
You can run an X11 client in a Docker container, but X11 servers (i.e. the front ends - X11 terminology is somewhat counterintuitive) are not very widespread nowadays, in particular in Windows environments. X11 handles mouse/screen only; any other I/O requires a different model. Adapting a GUI application from almost any other framework to X11 is likely to require a major rewrite.
Docker is essentially a *nix thing. The interface with the host follows *nix structure and philosophy. There is a Windows Docker, but the MS guys had to give up mapping all Windows functions onto that *nix host interface and made their own. That host interface is far from stable, though, and is updated with every new Windows release, so every half year you have to rebuild all your Windows Docker images to fit the new host OS version. Not much virtualization there... And even with that Windows-specific host interface, you can only run CLI Windows applications - no GUI. (Windows Docker can run Linux containers, though, but of course not the other way around: the Linux community won't touch the Windows variant with a ten-foot pole.)
Even if you stick to Linux: Docker provides no virtualization of the CPU. The executable code is "bare" and runs directly on the CPU. You can't run 64-bit code on a 32-bit CPU, or an ARM container on an Intel CPU.
A container is exactly identical every time it starts up. It has a file system, but any changes made during execution are temporary, disappearing when the container terminates. You cannot set preferences, maintain a list of recently used files, etc. in the container; all data to be modified permanently must be maintained outside the container, either by mapping a host directory at run time (which creates certain problems when the OS differs from one environment to the other), in a database, or in a file system external to the container but maintained by the Docker engine. You must adapt your application to this. E.g. if you keep user information in the Windows registry under HKCU, you not only must move all of that into an external store, but you must provide some login or user identification mechanism: unless instructed otherwise, a given Docker image always runs as a given user.
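To illustrate the kind of adaptation I mean (a minimal sketch; the names APP_DATA_DIR and UserPrefs are hypothetical): instead of HKCU, the application keeps its preferences in a file under a directory supplied from outside the container, e.g. a host directory mounted at run time.

```
// Minimal sketch: preferences live in a JSON file under a directory
// supplied from outside the container (e.g. a volume the host mounts),
// not in HKCU or the container's own ephemeral file system.
// APP_DATA_DIR and UserPrefs are hypothetical names.
using System;
using System.IO;
using System.Text.Json;

public class UserPrefs
{
    public string? LastFileOpened { get; set; }

    // Resolve the storage directory from an environment variable, so the
    // same image works wherever the host chooses to mount the volume.
    private static string PrefsPath =>
        Path.Combine(
            Environment.GetEnvironmentVariable("APP_DATA_DIR") ?? "/data",
            "prefs.json");

    public static UserPrefs Load() =>
        File.Exists(PrefsPath)
            ? JsonSerializer.Deserialize<UserPrefs>(File.ReadAllText(PrefsPath))
                  ?? new UserPrefs()
            : new UserPrefs();

    public void Save()
    {
        Directory.CreateDirectory(Path.GetDirectoryName(PrefsPath)!);
        File.WriteAllText(PrefsPath, JsonSerializer.Serialize(this));
    }
}
```

Everything written through this class survives container restarts only because the directory behind APP_DATA_DIR lives outside the container; the same code, pointed at the container's own file system, loses its data on termination.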
And so on. Simple Docker demos are simple to make. But if you really believe that you have any sort of "virtualization" and try to make use of it, you will soon be out of luck. Docker traps a number of system calls and sets up memory management so that you cannot address anything outside the set of "layers" making up the container (plus the bottom-layer host interface) - that's all the "virtualization" it does. It sets up fences.
The major difference between the JVM/.NET and Docker is that Docker packs a stack of DLLs ("layers" in Docker lore) into one tight package, identified by a SHA256 checksum so nothing can be updated/changed, and no memory references whatsoever are permitted outside this package, neither to code nor to data. The JVM/.NET allow late DLL binding with fuzzy versioning, and provide no simple-to-use way to pack DLLs together and bind them to one unique version (in the SHA256 sense) of all the other DLLs in the pack.
Packing specific DLL versions into one unit and prohibiting all external references does have arguments going for it. But the way it is done, it has far too many limitations. As long as you are in a command-line world and all your tools are Linux-based, you can work around the restrictions and limitations. For back-end servers, it may be fine. But you cannot move images among different hardware architectures, and you can run Docker images in any color you want, provided that you want them in black (Linux, that is). If you want any other color, you are stuck.
If you develop Windows end-user applications, Docker is certainly not for you. You will be forced into a command-line, Linux-style world. You certainly do not gain any sort of flexibility or freedom from host restrictions comparable to, e.g., what VMware virtualization gives you.
There were rumors about significant updates to native Windows virtualization (Hyper-V) coming up for Windows 10X - I didn't get the details, but got the impression that it would be more lightweight. If anyone knows more about this, please provide links! I guess that if you want virtualization for Windows applications, this is a far more viable alternative than Docker.
Docker is the only container technology I have used (/fought with). Maybe there are others that are better suited, but today it seems as if most people consider containers and Docker to be more or less synonymous.
Surely you mean "VMs" unless you are referring to something they possess.
From 1989 to 2002, along with doing development, I was a system manager for various systems running 5.x, 6.x, and 7.x.
And don't forget to mention 9-track tape reels.
Now I have four small OpenVMS systems purchased via eBay running versions 7.2 (AlphaServer 800), 7.3 (MicroVAX 3100), 8.3 (AlphaServer DS10L), and 8.4 (Integrity rx1620, Itanium) to keep from getting too rusty.
In one of the gaming forums that I frequent, they like to post so-called "unpopular opinion" threads, and then proceed to say something everybody agrees with...
Anyway, while idly driving to work this morning I was struck by an idea, and I am going to try my own take on the unpopular-opinion meme. See whether I am also, mistakenly, posting what is in fact a popular opinion! ^_^
So.. here I go.
From a C# dev to another C# dev. I hate interfaces.
OK, OK, sorry, I've got nothing against interfaces in principle. They can be quite useful. It's just that in practice I have seen so many projects with a zillion interfaces, each with a zillion methods, which are only implemented once. Worse, sometimes some of those method implementations could be derived from the other methods, and if one were to implement the interface twice there would be a lot of copy-paste :/
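Something like this hypothetical example is what I keep running into - SaveAll could be derived from Save, yet every implementation has to repeat the same loop (all names invented):

```
// Hypothetical illustration: IStore.SaveAll is derivable from Save,
// but each implementer has to write the loop again.
using System.Collections.Generic;

public interface IStore
{
    void Save(string item);
    void SaveAll(IEnumerable<string> items);
}

public class FileStore : IStore
{
    public void Save(string item) { /* write to disk */ }
    public void SaveAll(IEnumerable<string> items)
    {
        foreach (var item in items) Save(item);   // copy-paste #1
    }
}

public class DbStore : IStore
{
    public void Save(string item) { /* write to database */ }
    public void SaveAll(IEnumerable<string> items)
    {
        foreach (var item in items) Save(item);   // copy-paste #2
    }
}
```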
But what really takes the cake, and what I unambiguously despise, is the argument that interfaces help "testability" (using mocks! oh god, mocks, I hate thee so).
From what I have seen, those people are sticklers for "unit tests" (as opposed to "integration tests") which basically only test the mock that you spent hours writing, make refactoring difficult, and don't really test the application... And it is often white-box testing (I know it's implemented this way, so that's why I write this test), which contributes to making refactoring a pain.
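Here is the kind of test I mean (a hedged sketch using Moq and xUnit; every domain name here is made up). The assertions largely restate the Setup lines, so the test pins down the current implementation rather than any real behaviour:

```
// Sketch of the pattern being criticized (Moq + xUnit; names invented).
using Moq;
using Xunit;

public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class Cart
{
    private readonly IPriceService _prices;
    public Cart(IPriceService prices) => _prices = prices;

    public decimal Total(params string[] skus)
    {
        decimal total = 0;
        foreach (var sku in skus) total += _prices.GetPrice(sku);
        return total;
    }
}

public class CartTests
{
    [Fact]
    public void Total_sums_prices()
    {
        var prices = new Mock<IPriceService>();
        prices.Setup(p => p.GetPrice("A")).Returns(10m);
        prices.Setup(p => p.GetPrice("B")).Returns(5m);

        var cart = new Cart(prices.Object);

        Assert.Equal(15m, cart.Total("A", "B"));

        // White-box: this asserts *how* Total is implemented, so a
        // harmless refactoring (e.g. a bulk GetPrices call) breaks it.
        prices.Verify(p => p.GetPrice(It.IsAny<string>()), Times.Exactly(2));
    }
}
```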
This is worse than a waste of time: it also makes future developers waste time.
I'd say that many C# developers -- if given a choice -- would rather have multiple inheritance.
I doubt many C# developers are of the opinion that Interfaces are just the bestest thing ever.
I'd prefer to have both.
What most developers don't understand is that Interfaces enforce the "like a duck" requirement for Duck Typing.
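A minimal sketch of what I mean (hypothetical types): C# `dynamic` gives true duck typing, checked at run time, while an interface makes the compiler enforce the "like a duck" shape up front.

```
// Duck typing vs. interface enforcement (hypothetical types).
public interface IDuck { string Quack(); }

public class Mallard : IDuck { public string Quack() => "Quack!"; }
public class Robot { public string Quack() => "Beep!"; }  // quacks, but is no IDuck

public static class Pond
{
    // Duck typing: compiles against anything; fails at run time
    // if the object has no Quack().
    public static string Dynamic(dynamic bird) => bird.Quack();

    // Interface: only accepts types that declared themselves ducks.
    public static string Typed(IDuck bird) => bird.Quack();
}
```

Pond.Dynamic(new Robot()) works; Pond.Typed(new Robot()) won't even compile - the interface is the compiler-enforced duck test.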
Partial interfaces... the new thing with default method implementations for interfaces?
Looks good - better than extension methods!
Unfortunately I can't quite use them with .NET 4.7.2, I think (mm... I think there is a project setting to use them with .NET 4.7.2, but I have cold feet on that ^_^)
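For reference, this is what the feature looks like (C# 8 default interface methods). As far as I know they need runtime support that only .NET Core 3.0+ / .NET 5+ provide, so no project setting will make them run on .NET Framework 4.7.2:

```
// C# 8 default interface method: the derived method lives on the
// interface, so implementers only write Save once.
using System.Collections.Generic;

public interface IStore
{
    void Save(string item);

    // Default implementation derived from Save - no per-class copy-paste.
    void SaveAll(IEnumerable<string> items)
    {
        foreach (var item in items) Save(item);
    }
}

public class FileStore : IStore
{
    public void Save(string item) { /* write to disk */ }
    // SaveAll is inherited from the interface.
}
```

One wrinkle: the default member is only reachable through the interface type, i.e. `IStore store = new FileStore(); store.SaveAll(items);` - calling SaveAll on a FileStore variable directly won't compile.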
I agree that interfaces shouldn't categorically be seen as a best practice. Doing something because it makes mocks easier to implement is putting the cart before the horse. And there's already enough boilerplate that obfuscates the code.
Unit testing is great for libraries: a collection of disparate things. But if you're building a system whose components all cooperate, integration testing should be paramount.
However, (pure) virtual functions are vital in object models where polymorphism and/or inheritance are important. But that's a design abstraction similar to code reuse: if it happens only once, the abstraction isn't needed! If it happens a second time, you start thinking about it. And if it happens a third time, abstraction is called for, just like finding a way to have one instance of the code that would otherwise be copy-pasted into multiple locations.
I totally agree but I’m more of a believer in the YAGNI principle.
Rarely do abstractions spring to mind fully formed and ready for battle. It's better to wait until the natural flow of the project forces you to create those abstractions. On the other hand, if you wait too long you end up with many almost-repeating pieces of code. Knowing when to do it is the difference between a good designer and a mediocre one.
“The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.” - Michelangelo
I had to look up YAGNI (Martin Fowler: You Ain't Gonna Need It). I haven't read what he says about it, so I'll just say that sometimes abstractions can precede applications. There's a spectrum for this:
When applications in the current release are being designed and it's clear that some abstractions are in order.
When you've read specifications that will be implemented in the next release and can foresee the abstractions.
When you can anticipate where the product will go. This is getting a bit dubious, so I usually stick to the first two.
The abstractions can then be made available before the applications are implemented. In the absence of this, refactoring will be needed later, which is great if the culture supports it. But managers usually favor the "If it ain't broke, don't fix it" rule and would prefer everyone to be beavering away on new features. You're lucky if you've got management that even believes in building a framework in the first place.
Interfaces, like all sorts of "contracts", defeat the agile philosophy. Maybe not if you ask a philosopher, but certainly if you ask an agile code developer.
Defining an interface / contract ties your hands and feet. You do not have the freedom to change that API whenever you feel like it, to whatever you think it should be today. Contracts are like the waterfall model: an attempt to foresee what the solution will look like before you start coding.
Setting up contracts / interfaces requires planning. It requires problem analysis and defining a solution architecture before you start coding. Such elements are devastating to the very idea of 'agile'.
On the other hand: I am not personally an agile evangelist. So I think setting up contracts, including interfaces, is an important part of the solution architecture work, done before you start coding.
In the agile congregations of today, you will rarely get acceptance for any such thought. "Solution architecture" is what your code looks like when you have completed it. "Interface" is the API you finally ended up with - for this version, that is. Hey, it is just a function declaration! You can't let that restrict what we do in the next version!
I met that pr1ck when he was trying to build a career as a TV "personality". A little self-obsession is normal in the entertainment field, but this guy genuinely believed that the world revolved around him and the Sun shone out of his @rse (so even then he had zero grasp of science).
He quit his TV endeavours because he was too good for TV (he arrived at that conclusion about a year after everyone stopped hiring him because of his combination of being both useless at the job and unbearable to work with), and started on the conspiracy theory trail, which is the best way for incompetent bullsh1tters to pick up mentally deranged worshippers.
He's the kind of self-absorbed, "truth is less important than I am" w@nker who makes the world worse no matter what he does, because everything he does is to get attention for himself, and he doesn't care who gets hurt in the process.
I do hope that the police are looking into conspiracy to commit criminal damage charges.
I wanna be a eunuchs developer! Pass me a bread knife!