The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
1.1. It's trying to be useful for everything: an egg-laying wool-milk sow. To do that, it relies too heavily on libraries for anything and everything. A 'one size fits all' approach, while convenient, can get a little bit wasteful. On a PC they easily get away with this, but on a smaller computer with a more modest processor and far less memory things get tough, and you don't have enough control over the computer's resources to get very far.
4. Too much comfort makes programmers ignorant and lazy. When an almighty framework does everything for you, you don't have to waste a thought on anything yourself, right? Wrong: the moment you rub the framework the wrong way, you have to start improvising to correct the problem.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil lasagna. It didn't kill anyone, and it actually tasted pretty good.
Too much comfort makes programmers ignorant and lazy.
I've said the same thing for years, in different words. Pointers, namespaces, enumerations: they all use the dot separator. For those who don't know better, meaning those who learned this before C/C++, they are all the same. They look the same, don't they?
Double colons and ->? They're not so terribly hard to type, and they keep one informed of what the hell's going on.
Yes, deleting allocated objects is one of the places where lots of bugs are found. Many of these bugs are very difficult to track down. A slow memory leak is something that eats up lots of support time and turns off many customers.
deleting what you allocate is far too much to ask of programmers
Actually, it is. "Use after free" is one of the biggest security risks in C/C++. Also, not releasing what's no longer in use eventually leads to memory exhaustion.
These two problems have been known issues since at least 1958, when Lisp was first developed. This is also why all high-level business languages, as opposed to those for embedded or operating-system development, include at least some memory garbage collection. Almost all early languages (COBOL, BASIC, FORTRAN, APL, Algol, etc.) had some concept of garbage collection for some data types. What changed with Java was that all data types are garbage collected unless the programmer explicitly tells the compiler not to do so.
A big part of the problem is that it’s not always clear who bears the responsibility for releasing the allocation. If you think otherwise, perhaps it’s you who needs to consider an alternate career. Or prepare yourself for a big shock if you are just getting started and have simply assumed it is that simple.
Exactly the kind of rebuttal I would expect from someone who doesn’t have a lot of experience.
Your pronouncement would be more defensible if you had written “somebody” didn’t do it right, but it’s not necessarily the someone who is writing code today, and the question of what exactly “it” is has a couple of potential answers. It could be, for example, that a library author meant to conform to a specific predefined protocol and failed. Or it could be that they were implementing something new and the documentation they provided is incomplete or incorrect. In especially old code, perhaps they *were* correctly following a known protocol, but the protocol itself ended up redefined. Or one of my favorites: a library has multiple functions that accept an allocation as a parameter. Some consume the allocation and others just reference it, and there’s a convention to help you, the library user, recognize which are which. But there’s also an old function that doesn’t follow the convention; its behavior is grandfathered in because it is used in existing systems, and the footnote mentioning this legacy deviation is cropped off the bottom of the photocopied documentation you were given. I’ve run into all of those scenarios in large-scale production systems that I was trying to interface with.
It’s easy to make the simplistic assertion that the only reason this is an issue is that somewhere, sometime, somebody did something wrong. You may be 100% correct about that. But you’re making the very point you’re arguing against. Things like this absolutely happen, and in real life this is one of the most common sources of program misbehavior. We know from decades of experience that this *will* go wrong and that it *will* result in system instability and/or security exposures. So we can cross our fingers and hope, as systems continue to increase in complexity, that coders as a population will become perfect at it; or we can automate this tedious, error-prone task for essentially perfect behavior today and let developers spend their time and energy on the real meat of their projects.
but you can do unmanaged stuff in C#, and you can interface fairly easily with any C++ components
Yes, you could write the UI (for example) of the system tools in C#, but why bother? It just adds another requirement (and another failure point) to the system.
The investment can be made over time, and #1 could help here. Is there a cost? Absolutely, but far less than migrating to completely different tools.
I did not say that it could not be done. I did say that because of the cost it is unlikely to be done, citing the prevalence of Cobol as an example of a similar case.
Can you give examples?
It's not that learning the libraries is more difficult than learning the libraries available for other languages. The issue is the conversion cost: you have to take a productive programmer, an expert in C or C++, and turn them into a novice C# programmer. It is true that they will eventually learn the C# way of doing things, but in the meantime they will be less productive. Many companies are unwilling to pay this cost, or can't afford to.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
It's also valid in the opposite direction: with some time invested you can do the job in C++, a more performant language that has its own huge collection of (mostly) cross-platform libraries.
I would also add to the list of issues the lack of compatibility between different .NET versions.
Maybe. The idea that C++ is highly efficient comes from the fact that C is. However, using many of the features of C++ can make a C++ application take more memory and run slower than a managed language. With the advent of .NET Core 5.0, the performance and cross-platform issues mostly become moot. The only real thing lacking is WinForms for non-Windows environments. The WPF approach has some advantages for gaming and graphics applications. It fails, however, when it comes to the line-of-business class of applications that funds most development.
My question would be why do people still think Java (or its cousins like Kotlin) make sense. My take is that there is still a culture that is anti-Microsoft.
The idea that C++ is highly efficient comes from its zero-cost abstraction design, not from anything else. Besides, performance is not everything: languages like C and C++ can offer deterministic execution time as well as deterministic object lifetime, which is impossible in managed languages by design and makes them a poor choice for system programming. The fact that .NET Core doesn't support WinForms makes C# a poor choice for cross-platform GUI application development as well, so it ends up being a niche platform for people who find themselves needing to program on Linux but don't want to learn anything besides C#.
"However, if using many of the features of c++ can make a c++ application take more memory and run slower than a managed language." Really? Do you have an example?
1. For all its advantages, C#, like Java, is unsuited to system-level programming. The kernel in both Windows and Linux is programmed in C and ASM.
C# isn't really designed for system-level programming. It's designed for building applications. In that regard after 12 years of using it, I find it's remarkably fluent and concise. That said, I have used it for several Windows services with no trouble.
Daniel Pfeffer wrote:
2. Many organizations have an investment in C and C++ code. Conversion to C# would require a major investment. Note that this is also one of the reasons that companies keep using Cobol, so I don't see this changing in the near future.
That's true for any language, not just C#.
Daniel Pfeffer wrote:
3. C# does have a serious learning curve - not for the language, but for its libraries. If you have learnt to do things in C or C++, converting to C# is far from simple.
True. When I started in C#/WPF back in 2008, it took me quite a while to grasp one of the fundamentals of .NET programming: it's in there. C++ and MFC require that you build some application basics yourself. Many of those basics are already present in .NET and whatever UI framework you choose. Instead of saying to yourself "OK, how do I wrap the primitive crap in something elegant in order to make this work", like you do so often in C++, Windows API, and MFC, it's "there's got to be something to do this in .NET; the question is where?"
I agree. As an embedded developer (now retired), neither C# nor Java would have been a viable option for the products I worked on. In the last 20 years of my career, I think the largest amount of memory I had was 256K of RAM and 1M of flash (a TI DSP). Most of my projects had far less than this, so C# or Java were nonstarters.