The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
Hmm, I have a colleague who is a great believer in containers. However, I have yet to find any use case for them in my work, even though I use VMs extensively and have done so for many, many years.
It has been a very long time since any of the true hypervisors consumed significant amounts of the host's available resources (certainly the bare-metal ones anyway, like ESX), and the sheer hassle of coming up with a working Docker image of, say, my dev environment that I can replicate easily between my various workplaces and machines is much greater than just cloning a complete VM and spinning it up. For ongoing development I just use Nextcloud to replicate the work between multiple (virtual and/or physical) machines to make sure they all keep in step, and write occasional updates to my Git repository by way of additional backup (itself running in a Linux VM hosted on one of my ESXi hosts, which also hosts my DC, my SQL server, a mail server for some of my clients, a 3CX phone exchange and a Nextcloud instance that I and some of my clients use, all on an old i7 with 32 GB of RAM).
I use VMs to replicate the entire working structure of one of my clients, so I can develop in a replica of their production environment without risk to their setup and yet be confident that when I deploy, things will work.
Despite the incredible hype surrounding Docker (and to some extent Kubernetes), I have yet to find any instance where Docker was a better fit for me. My colleague, despite insisting that containers would be much better and more productive, has never been able to explain exactly how they would help me.
But don't containers also do that, by allowing us to have whatever OS we want independent of the underlying host OS?
A container depends completely on the underlying OS. If you made the program on Win7, you have to deploy the container image on the same OS. That's the opposite of a compiled C# program, which can work ANYWHERE you have installed .NET (which is way smaller than a whole OS).
You're kinda hovering around the idea; close, but no cigar.
Java and .NET Core (not really C#) are meant to be platform agnostic, as they compile to an intermediate language (bytecode for Java, CIL for .NET) before being executed by their runtimes. The underlying hardware is seen through a Hardware Abstraction Layer (HAL), which hides the details and intricacies from the software.
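To make that concrete, here's a throwaway sketch (my own trivial example, nothing more): the compiler emits CIL once, and the same output runs wherever a matching runtime is installed.

```csharp
// Minimal sketch of the "compile once, run on any runtime" point above.
// The C# compiler turns this into CIL; the same DLL runs on Windows, Linux
// or macOS, as long as a matching .NET runtime is installed there.
using System;
using System.Runtime.InteropServices;

class Program
{
    static void Main()
    {
        // The CIL is identical everywhere; only the runtime/JIT differs per OS.
        Console.WriteLine($"Hello from {RuntimeInformation.OSDescription}");
    }
}
```

Build it with `dotnet build` and the resulting DLL runs with `dotnet <name>.dll` on any OS that has the runtime, no recompilation needed.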
VMware and EC2, on the other hand, abstract a full PC environment into a virtual solution which also includes the operating system. It is this last component that really makes a difference, and the reason these solutions will never go away.
Docker, on the other hand, is simply a simulation created on Linux to trick software into thinking it is isolated from the hardware, when it is actually just a Linux process shielded with lots of namespaces, cgroups and chroots. This does not make the binary portable, nor does it create a full operating system; rather, it uses the OS of the host. Malware loves this approach, and that's the reason it can't be used for dubious workloads.
I guess in the end, Docker will change things, and Java (or any equivalent) will in the future look more like Docker, where you'll create both the Java Virtual Machine and a Docker container to accompany it with a single command.
You're kidding, right? First, containers are a specialized version of a VM. End of story; they abstract an operating system. As for the end of C# - the future has never been brighter. We are now, for the first time, seeing C# penetrate the Linux world. Setting up a project to run in a virtualized environment is super easy today. Between containers and app engines running the language of your choice, C# has never been easier to scale.
I honestly don't know about Java - but I wouldn't count it out anytime soon.
On one of the gaming forums that I frequent, they like to post so-called "unpopular opinion" threads, and then proceed to say something everybody agrees on...
Anyway, while idly driving to work this morning I was struck by an idea, and I am going to try my own take on the unpopular opinion meme. See whether I am also, mistakenly, posting what is in fact a popular opinion! ^_^
So.. here I go.
From a C# dev to another C# dev. I hate interfaces.
OK, OK, sorry, I've got nothing against interfaces in principle. They can be quite useful. It's just that in practice I have seen so many projects with zillions of interfaces with zillions of methods which are only ever implemented once. Worse, sometimes the implementation of some of those methods could be derived from the other methods, and if one were to implement the interface twice there would be a lot of copy-paste :/
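Something like this, to sketch the pattern I mean (all the names are invented for illustration):

```csharp
using System.Collections.Generic;
using System.IO;

// Two methods, but WriteLines could be derived from WriteLine; every class
// that implements the interface has to repeat that loop.
public interface IReportWriter
{
    void WriteLine(string line);
    void WriteLines(IEnumerable<string> lines);   // derivable from WriteLine
}

public class FileReportWriter : IReportWriter
{
    private readonly StreamWriter _writer;
    public FileReportWriter(string path) => _writer = new StreamWriter(path);

    public void WriteLine(string line) => _writer.WriteLine(line);

    // A second implementer ends up copy-pasting exactly this loop.
    public void WriteLines(IEnumerable<string> lines)
    {
        foreach (var line in lines) WriteLine(line);
    }
}
```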
But what really takes the cake, and what I unambiguously despise, is the argument that it helps "testability" (using mocks! Oh god, mocks, I hate thee so).
From what I have seen, those people are sticklers for "unit tests" (as opposed to "integration tests"), which basically only test the mocks that you spend hours writing, make refactoring difficult, and don't really test the application... And it is often white-box testing (I know it's implemented this way, that's why I write that test), which contributes to making refactoring a pain.
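A caricature of what I mean, with a hand-rolled fake instead of a mocking library (all the types are invented for the example):

```csharp
using Xunit;

public interface IOrderRepository
{
    Order? Find(int id);
}

public record Order(int Id, decimal Total);

// Hours spent teaching the fake what the real repository would return...
public class FakeOrderRepository : IOrderRepository
{
    public Order? Find(int id) => id == 42 ? new Order(42, 100m) : null;
}

public class OrderLookupTests
{
    [Fact]
    public void Find_ReturnsTheOrderTotal()
    {
        IOrderRepository repo = new FakeOrderRepository();

        var order = repo.Find(42);

        // ...and then the assertion mostly verifies the fake itself, not the
        // real data access or the real application behaviour.
        Assert.Equal(100m, order!.Total);
    }
}
```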
This is worse than a waste of time. It also makes future developers waste time.
I'd say that many C# developers -- if given a choice -- would rather have multiple-inheritance.
I doubt many C# developers are of the opinion that Interfaces are just the bestest thing ever.
I'd prefer to have both.
What most developers don't understand is that Interfaces enforce the "like a duck" requirement for Duck Typing.
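A quick sketch of that point (made-up types): in C# it isn't enough that a type happens to have the right methods; it has to declare the interface to be treated "like a duck".

```csharp
using System;

public interface IDuck
{
    void Quack();
}

public class Mallard : IDuck
{
    public void Quack() => Console.WriteLine("Quack!");
}

public class RubberDuck              // has the right shape, but no IDuck
{
    public void Quack() => Console.WriteLine("Squeak!");
}

public static class Pond
{
    public static void MakeItQuack(IDuck duck) => duck.Quack();

    // Pond.MakeItQuack(new Mallard());     // fine
    // Pond.MakeItQuack(new RubberDuck());  // compile error: not an IDuck
}
```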
Partial interfaces... the new thing with default method implementations for interfaces?
Looks good, better than extension methods!
Unfortunately I can't quite use them with .NET 4.7.2, I think (mm... I think there is a project setting to use them with .NET 4.7.2, but I have cold feet on that ^_^)
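For anyone who hasn't seen them, roughly what they look like (reusing the made-up IReportWriter idea from above; they need a runtime with C# 8 default interface member support, e.g. .NET Core 3.0 or later):

```csharp
using System;
using System.Collections.Generic;

public interface IReportWriter
{
    void WriteLine(string line);

    // Default implementation lives on the interface itself, so the
    // derivable method no longer has to be copy-pasted by implementers.
    void WriteLines(IEnumerable<string> lines)
    {
        foreach (var line in lines) WriteLine(line);
    }
}

public class ConsoleReportWriter : IReportWriter
{
    public void WriteLine(string line) => Console.WriteLine(line);
    // No WriteLines needed here; the interface's default body is used
    // when the object is accessed through IReportWriter.
}
```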
I agree that interfaces shouldn't categorically be seen as a best practice. Doing something because it makes mocks easier to implement is putting the cart before the horse. And there's already enough boilerplate that obfuscates the code.
Unit testing is great for libraries: a collection of disparate things. But if you're building a system whose components all cooperate, integration testing should be paramount.
However, (pure) virtual functions are vital in object models where polymorphism and/or inheritance are important. But that's a design abstraction similar to code reuse: if it happens only once, the abstraction isn't needed! If it happens a second time, you start thinking about it. And if it happens a third time, abstraction is called for, just like finding a way to have one instance of the code that would otherwise be copy-pasted into multiple locations.
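The classic textbook case where the abstraction does earn its keep looks something like this (a throwaway sketch, not anyone's production code): the same abstract method has several genuinely different implementations, used polymorphically from one place.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public abstract class Shape
{
    public abstract double Area();   // no sensible single implementation
}

public class Circle : Shape
{
    public double Radius { get; init; }
    public override double Area() => Math.PI * Radius * Radius;
}

public class Rectangle : Shape
{
    public double Width { get; init; }
    public double Height { get; init; }
    public override double Area() => Width * Height;
}

// One piece of code handles all of them; no copy-paste per shape type.
public static class Report
{
    public static double TotalArea(IEnumerable<Shape> shapes) =>
        shapes.Sum(s => s.Area());
}
```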
I totally agree but I’m more of a believer in the YAGNI principle.
Rarely do abstractions spring to mind fully formed and ready for battle. It's better to wait until the natural flow of the project forces you to create those abstractions. On the other hand, if you wait too long you end up with many almost-repeating pieces of code. Knowing when to do it is the difference between a good designer and a mediocre one.
“The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.” - Michelangelo
I had to look up YAGNI (Martin Fowler: You Ain't Gonna Need It). I haven't read what he says about it, so I'll just say that sometimes abstractions can precede applications. There's a spectrum for this:
When applications in the current release are being designed and it's clear that some abstractions are in order.
When you've read specifications that will be implemented in the next release and can foresee the abstractions.
When you can anticipate where the product will go. This is getting a bit dubious, so I usually stick to the first two.
The abstractions can then be made available before the applications are implemented. In the absence of this, refactoring will be needed later, which is great if the culture supports it. But managers usually favor the "If it ain't broke, don't fix it" rule and would prefer everyone to be beavering away on new features. You're lucky if you've got management that even believes in building a framework in the first place.
Interfaces, like all sorts of "contracts", defeat the agile philosophy. Maybe not if you ask a philosopher, but certainly if you ask an agile code developer.
Defining an interface / contract ties your hands and feet. You do not have the freedom to change that API whenever you feel like it, to whatever you think it should be today. Contracts are like the waterfall model: an attempt to foresee what the solution will look like before you start coding.
Setting up contracts / interfaces requires planning. It requires problem analysis and defining a solution architecture before you start coding. Such elements are devastating to the very idea of 'agile'.
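A sketch of what "tying your hands" means in practice (hypothetical types): once an interface has shipped and several parties implement it, almost any change to it is a breaking change for all of them.

```csharp
// The contract as first published:
public interface IPaymentGateway
{
    void Charge(decimal amount);
}

// Shipped and in production, possibly owned by other teams or customers:
public class CardGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* ... */ }
}

public class InvoiceGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* ... */ }
}

// "This version's" whim: add a currency parameter.
// public interface IPaymentGateway
// {
//     void Charge(decimal amount, string currency);
//     // Every existing implementer now fails to compile until it is updated.
// }
```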
On the other hand: I am not personally an agile evangelist. So I think setting up contracts, including interfaces, is an important part of the solution architecture work, done before you start coding.
In the agile congregations of today, you will rarely get acceptance for any such thought. 'Solution architecture' is what your code looks like when you have completed it. 'Interface' is the API you finally ended up with. For this version, that is. Hey, it is just a function declaration! You can't let that restrict what we do in the next version!