|
How many grammer nazis does it take to change a lightbulb ? Ans: Too
|
|
|
|
|
Their isn't a hell to hot for you...
Software Zen: delete this;
|
|
|
|
|
Hmm, I have a colleague who is a great believer in containers. However, I have yet to find any use case for them in my work, even though I use VMs extensively and have done for many, many years.
It has been a very long time since any of the true hypervisors (certainly the bare-metal ones like ESX) consumed significant amounts of the host's available resources. And the sheer hassle of coming up with a working Docker image of, say, my dev environment that I can replicate easily between my various workplaces and machines is much greater than just cloning a complete VM and spinning it up. For ongoing development I use Nextcloud to replicate the work between multiple (virtual and/or physical) machines to make sure they all keep in step, and write occasional updates to my Git repository by way of additional backup. That repository itself runs in a Linux VM hosted on one of my ESXi hosts, which also hosts my DC, my SQL server, a mail server for some of my clients, a 3CX phone exchange, and a Nextcloud instance that I and some of my clients use, all on an old i7 with 32 GB of RAM.
I use VMs to replicate the entire working structure of one of my clients, so I can develop in a replica of their production environment without risk to their setup and yet be confident that when I deploy, things will work.
Despite the incredible hype surrounding Docker (and to some extent Kubernetes), I have yet to find any instance where Docker was a better fit for me. My colleague, despite insisting that containers would be much better and more productive, has never been able to explain exactly how they would help me.
So my answer is "No"
8)
|
|
|
|
|
I am talking about the runtime virtual machines that the .NET, Java and Python languages target.
|
|
|
|
|
But don't containers also do that, by allowing us to have whatever OS we want independent of the underlying operating system?
NOT.
A container depends completely on the underlying OS. If you build your program on Windows 7, you have to deploy the container image on the same OS family. Contrast that with a compiled C# program, which can run ANYWHERE you have installed .NET (which is far smaller than a whole OS).
|
|
|
|
|
.NET doesn't even use a VM. 🤣
|
|
|
|
|
All .NET languages, Java and Python (and probably others) target runtime virtual machines. App code is compiled to an intermediate form for execution on the runtime virtual machine.
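To make that concrete in Python (one of the languages mentioned): CPython compiles source code to bytecode for its own stack-based runtime VM, and the standard `dis` module lets you inspect that intermediate form directly.

```python
import dis

def add(a, b):
    """A trivial function; CPython compiles it to bytecode for its VM."""
    return a + b

# Show the intermediate form the runtime VM actually executes.
dis.dis(add)

# The compiled code object, not the source text, is what gets run.
print(type(add.__code__.co_code))  # the raw bytecode is a bytes object
```

The Java and .NET equivalents would be inspecting class files with `javap -c` or assemblies with `ildasm`, respectively.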
|
|
|
|
|
You're kind of hovering over the idea; close, but no cigar.
Java and .NET Core (not really C#) are meant to be platform agnostic: they compile to an intermediate language (bytecode for Java, CIL for .NET) before being executed by their runtimes. The hardware is seen through a Hardware Abstraction Layer (HAL) which hides the details and intricacies from the software.
VMware and EC2, on the other hand, abstract a full PC environment into a virtual solution which also includes the operating system. It's this last component that really makes a difference, and the reason these solutions will never go away.
Docker, on the other hand, is simply a mechanism on Linux for tricking software into thinking it is isolated from the hardware, when it is actually just a Linux process shielded with namespaces, cgroups and chroots. This does not make the binary portable, nor does it create a full operating system; it uses the kernel of the host OS. Malware loves this approach, and that's the reason it can't be used for untrusted workloads.
I guess in the end Docker will change things, and Java (or any equivalent) in the future will look more like Docker, where you create both the Java virtual machine and a Docker container to accompany it with a single command, e.g.
java container start com.example.HelloWorld -baseimage ubuntu:breezy
|
|
|
|
|
The VMs I was talking about were the C#, Java and even Python virtual machines. I can see how my initial question was not clear, but it has been kind of funny reading some of the replies knowing that.
|
|
|
|
|
You're kidding, right? First, containers are a specialized kind of VM; end of story, as they abstract an operating system. As for the end of C#: the future has never been brighter. We are now, for the first time, seeing C# penetrating the Linux world. Setting up a project to run in a virtualized environment is super easy today. Between containers and app engines running the language of your choice, C# has never been easier to scale.
I honestly don't know about Java - but I wouldn't count it out anytime soon.
|
|
|
|
|
I emailed my son a newsletter from the local cultural centre, with some stuff that might entertain his kids for a bit. His reply:
I'm not saying the girls are missing their friends, but I caught [Ms 6] playing multiplayer minecraft earlier. Player #2 was her teddy bear. I'm sure there'll be more...
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
It’s not rare the non-English royal is in brother (8)
Not (6,6)
|
|
|
|
|
On one of the gaming forums that I frequent, they like to post so-called "unpopular opinion" threads, and then proceed to say something everybody agrees with...
Anyway, while idly driving to work this morning I got struck by an idea, and I am going to try my own take on the unpopular-opinion meme. See whether I am also, mistakenly, posting what is in fact a popular opinion! ^_^
So.. here I go.
Short story:
From one C# dev to another: I hate interfaces.
Long story:
OK, OK, sorry, I've got nothing against interfaces in principle. They can be quite useful. It's just that in practice I have seen so many projects with a zillion interfaces exposing a zillion methods which are only ever implemented once. Worse, sometimes some of those method implementations could be derived from the other methods, so if one were to implement the interface twice there would be a lot of copy-paste :/
But what really takes the cake, and what I unambiguously despise, is the argument that it helps "testability" (using mocks! Oh god, mocks, I hate thee so).
From what I have seen, those people are sticklers for "unit tests" (as opposed to "integration tests") which basically only test the mock that you spent hours writing, make refactoring difficult, and don't really test the application... And it is often white-box testing ("I know it's implemented this way, that's why I write that test"), which contributes to making refactoring a pain.
This is worse than a waste of time. It also makes future developers waste time.
modified 7-Apr-20 20:24pm.
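The "only tests the mock" complaint can be made concrete with a small Python sketch (all names hypothetical): the test below passes no matter what any real implementation does, because the only behaviour exercised is the canned answer given to the mock.

```python
from unittest.mock import Mock

# Hypothetical service interface with, in practice, a single real implementation.
class PriceService:
    def price_of(self, item: str) -> float:
        raise NotImplementedError

def total(service: PriceService, items: list) -> float:
    # Code under "test": sums the prices the service reports.
    return sum(service.price_of(i) for i in items)

# A "unit test" built entirely on a mock: it verifies the canned value we
# configured a moment ago, not any real pricing logic.
mock = Mock(spec=PriceService)
mock.price_of.return_value = 2.0
assert total(mock, ["apple", "bread"]) == 4.0
```

If the real `PriceService` were broken, this test would still be green, which is the poster's point about such tests not really testing the application.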
|
|
|
|
|
I'd say that many C# developers -- if given a choice -- would rather have multiple-inheritance.
I doubt many C# developers are of the opinion that Interfaces are just the bestest thing ever.
I'd prefer to have both.
What most developers don't understand is that Interfaces enforce the "like a duck" requirement for Duck Typing.
Super Lloyd wrote: only implemented once
I do that.
I'm with you on the rest.
I also write partial Interfaces. Muhahaha!
modified 7-Apr-20 20:35pm.
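The "like a duck" point translates directly to Python, where there is no compiler-enforced interface at all; anything with the right method shape is accepted (a minimal sketch, names made up for illustration):

```python
class Duck:
    def quack(self):
        return "quack"

class Person:
    # Not a Duck, declares no interface, but quacks like one.
    def quack(self):
        return "I'm quacking"

def make_it_quack(thing):
    # No interface required: any object with a .quack() method works.
    return thing.quack()

print(make_it_quack(Duck()))    # quack
print(make_it_quack(Person()))  # I'm quacking
```

A C# interface makes that "has the right shape" requirement explicit and compiler-checked, which is the enforcement being described above.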
|
|
|
|
|
Partial interfaces... the new thing with default method implementations for interfaces?
Looks good, better than extension method!
Unfortunately I can't quite use them with .NET 4.7.2, I think (mm... I think there is a project setting to use them with .NET 4.7.2, but I have cold feet about that ^_^ )
|
|
|
|
|
Super Lloyd wrote: I think there is a project settings to use them with .NET 4.7.2
There isn't. You can enable C# 8 features in a .NET Framework project, so long as you're using a recent version of VS2019. But that doesn't mean everything will work.
Default interface members required changes to the runtime, which were not back-ported to .NET Framework. They will only work with .NET Core (including .NET 5 when it arrives).
C# 8.0 and .NET Standard 2.0 - Doing Unsupported Things[^]
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
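As a loose analogy only (not the C# mechanism itself): Python abstract base classes can ship a default method body that implementers inherit unless they override it, which is roughly the convenience that default interface members bring to C#.

```python
from abc import ABC, abstractmethod

class Greeter(ABC):
    @abstractmethod
    def name(self) -> str:
        ...

    # Default implementation: implementers get this "for free",
    # much like a C# default interface member.
    def greet(self) -> str:
        return f"Hello, {self.name()}!"

class World(Greeter):
    def name(self) -> str:
        return "World"

print(World().greet())  # Hello, World!
```

The difference noted above is that C# needed runtime changes to support this on interfaces, whereas Python ABCs are ordinary classes and have always allowed it.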
|
|
|
|
|
But hey, Windows Forms still hasn't died yet, so you can rest easy that staying on the last forward-moving version of the full Framework will carry you until close to retirement.
|
|
|
|
|
I agree that interfaces shouldn't categorically be seen as a best practice. Doing something because it makes mocks easier to implement is putting the cart before the horse. And there's already enough boilerplate that obfuscates the code.
Unit testing is great for libraries: a collection of disparate things. But if you're building a system whose components all cooperate, integration testing should be paramount.
However, (pure) virtual functions are vital in object models where polymorphism and/or inheritance are important. But that's a design abstraction similar to code reuse: if it happens only once, the abstraction isn't needed! If it happens a second time, you start thinking about it. And if it happens a third time, abstraction is called for, just like finding a way to have one instance of the code that would otherwise be copy-pasted into multiple locations.
|
|
|
|
|
I totally agree but I’m more of a believer in the YAGNI principle.
Rarely do abstractions spring to mind fully formed and ready for battle. It's better to wait until the natural flow of the project forces you to create those abstractions. On the other hand, if you wait too long, you end up with many almost-repeating pieces of code. Knowing when to do it is the difference between a good designer and a mediocre one.
“The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.” - Michelangelo
|
|
|
|
|
I had to look up YAGNI (Martin Fowler: You Ain't Gonna Need It). I haven't read what he says about it, so I'll just say that sometimes abstractions can precede applications. There's a spectrum for this:
- When applications in the current release are being designed and it's clear that some abstractions are in order.
- When you've read specifications that will be implemented in the next release and can foresee the abstractions.
- When you can anticipate where the product will go. This is getting a bit dubious, so I usually stick to the first two.
The abstractions can then be made available before the applications are implemented. In the absence of this, refactoring will be needed later, which is great if the culture supports it. But managers usually favor the "If it ain't broke, don't fix it" rule and would prefer everyone to be beavering away on new features. You're lucky if you've got management that even believes in building a framework in the first place.
|
|
|
|
|
Isn't YAGNI also known as "The constant need for refactoring principle"?
|
|
|
|
|
Member 7989122 wrote: Isn't YAGNI also known as "The constant need for refactoring principle"?
My experience is that lack of refactoring is cancer to a project. It should probably stop when the project dies.
|
|
|
|
|
Interfaces, like all sorts of "contracts", defeat the agile philosophy. Maybe not if you ask a philosopher, but certainly if you ask an agile code developer.
Defining an interface / contract will bind you hand and foot. You do not have the freedom to change that API whenever you feel like it, to whatever you think it should be today. Contracts are like the waterfall model: an attempt to foresee what the solution will look like before you start coding.
Setting up contracts / interfaces requires planning. It requires problem analysis and defining a solution architecture before you start coding. Such elements are devastating to the very idea of 'agile'.
On the other hand: I am not personally an agile evangelist. So I think setting up contracts, including interfaces, is an important part of the solution architecture work, done before you start coding.
In the agile congregations of today, you will rarely get acceptance for any such thought. 'Solution architecture' is what your code looks like when you have completed it. 'Interface' is the API you finally ended up with. For this version, that is. Hey, it's just a function declaration! You can't let that restrict what we do in the next version!
|
|
|
|
|
<InsertObligatoryToolsCanBeMisusedObservation />
Software Zen: delete this;
|
|
|
|
|
I love interfaces, but the problem isn't interfaces; it's that any good thing can be used wrongly/badly/uglily. Just because it can be done badly doesn't mean it's bad.
I've had colleagues who would never write a class in C# without a corresponding interface, most of which were never actually used for any proper purpose. That's just extra maintenance with zero benefit.
And one often sees the wrong things put in interfaces, e.g. implementation-specific info which makes no sense for other implementations of the interface.
And often interfaces contain too many things that should be separated out into multiple interfaces.
What I do find cool is that I can implement multiple interfaces in a single class.
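That last point has a direct Python equivalent: a single class can satisfy several small abstract interfaces at once, which is also one argument for keeping each interface narrow (a sketch with made-up names):

```python
from abc import ABC, abstractmethod

class Readable(ABC):
    @abstractmethod
    def read(self) -> str:
        ...

class Writable(ABC):
    @abstractmethod
    def write(self, data: str) -> None:
        ...

# One class implementing two narrow interfaces.
class Buffer(Readable, Writable):
    def __init__(self):
        self._data = ""

    def read(self) -> str:
        return self._data

    def write(self, data: str) -> None:
        self._data += data

buf = Buffer()
buf.write("hello")
print(buf.read())                 # hello
print(isinstance(buf, Readable))  # True
print(isinstance(buf, Writable))  # True
```

Code that only needs to read can then accept a `Readable`, ignoring the write half entirely, which avoids the "interfaces containing too many things" problem described above.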
|
|
|
|