and leaving C++ in a situation where even now you can't write even a modest practical application without third party libraries.
Not quite sure what you mean by "modest practical", but I haven't written an application in at least 25 years, in any language, without relying on third party libraries.
I wouldn't even agree to that unless whoever was paying the bills agreed to, at a minimum, a much larger project timeline. And I would use that extra time to re-engineer the existing libraries, very likely using those third party library API definitions to replicate them.
Certainly 25 years ago I can remember creating my own logging library and implementing a testing framework as two examples of libraries that I consider essential now. I see no point in re-implementing those, especially given that I know the difficulties I had getting just those right then.
I write huge applications without third party libraries. If I could do it by myself, then clearly the collective C++ community could have (in all these years) managed to get standard, portable, and reasonable subsets of at least a large core set of commonly required functionality into the language itself, in order to be more competitive with newer languages like Java and C#.
collective C++ community could have (in all these years) managed to get standard and portable and reasonable subsets of at least a large core set of commonly required functionality into the language itself
Rather certain that the C++ community specifically does not want to do that.
There was a magazine called the 'C++ Users Journal' which had at least one columnist, and perhaps two, who were active participants (and perhaps chairs) on the ANSI C++ committee. From what they wrote over the years, as I recall, there were specific attempts to move additional libraries into the core language, and those were rejected basically unanimously. Even getting templates and the template libraries in there was a fight.
Yes and no. The biggest problem by far is C++'s compatibility with C and modern C++'s compatibility with ancient C++. Meaning if you look for tutorials or ask in forums, you may, and very much will, come across information from the days of old, when C++ had all the disadvantages of low-level C and high-level languages combined, without any of the advantages. Well, this sentence is somewhat exaggerated, but the point stands: there's too much reading material on old C++ and too many C++ programmers stuck in the past. To take advantage of modern C++, you need to recognize when you're facing old C++ and avoid it.
That said, modern C++ itself isn't quite as easy to use as Python as you still have the static typing system, but once you learn to use it properly, it's a) actually darn easy to use (and you can kill a huge lot of difficulties by typing everything as auto) and b) the compiler catches tons of errors due to said static typing and the overall more static nature of the language.
Short: It's more complicated to quickly prototype in but the investment pays back huge when you build complex software that needs to bloody hell run.
Still, the overhead of avoiding all the legacy crap is rather substantial. I dearly wish the C++ committee came up with a modern mode. Let's say, unless a code file contains a #pragma(IAmStuckInThePast), every non-modern construct for which there's a modern replacement is a compiler error.
I think you've hit the nail on the head. The vast baggage that comes along with C++'s attempts to retain (at least superficially) compatibility with past iterations makes it incredibly difficult to ensure you are up to date and using the 'correct' constructs. It also means, unless you can avoid using/calling legacy code, that there are so many possible ways of doing things that it has become 'too difficult' to use unless you are immersed in it.
I gave up developing in C++ for the most part when I realised it was taking me more time and effort to understand and use correctly the various constructs that made the language most effective than it was to solve the generally non-time critical problems I was working on. Obviously others will have different experiences and hence viewpoints, I'm not saying mine is the only one.
To my mind C++ has effectively evolved into a new language, so much so that someone coming to it fresh is probably in a much better place than someone like me who started with assembly language and has moved through C, C++ etc. over the years. I think it is past time really for the latest iteration of C++ to drop all the 'legacy/compatibility' stuff and stride out as a new language without all the baggage.
This assumes that you believe that all of the modern stuff is actually better, which plenty of folks don't. Some of it is clearly useful, but some is very much a matter of opinion. And of course you have to distinguish between the language and the library. A lot of the stuff that most anyone writing new code wouldn't want to use is the old library stuff, while a lot of the new language stuff is much more debatable as to whether it's better or just different, or whether any advantage it does have is outweighed by different problems it introduces.
Let's use an example then. Is it really up for debate/belief that a static_cast&lt;T&gt; is better than its counterpart, the classic cast? The cast where the compiler can check whether the cast makes sense, and tell me if it doesn't, really isn't better than a simple "shut up and shove the memory", is it? Of course a static_cast isn't always applicable, but then there's dynamic_cast&lt;T&gt; and the rest of the family, and while one member of that family ain't any better than the cast of olden days, it doesn't have to be used in every case where you need a cast. With the old system, there's only one option, the rather dangerous one. Another example: MMA, manual memory allocation. I simply don't believe you that there are advantages to it when a smart pointer or a library container does the job as well. Sure, when talking to a memory-mapped device, static memory is all there is, and laying a library container on top will only screw things up because I don't control the memory layout anymore. But that's an edge case; when it comes to business logic, letting the compiler/library do the job is in most cases by far superior. Yet the oldtimers insist on doing things the old way no matter whether it's a better idea or not.
The issue I am talking about is that the oldtimers don't bother distinguishing between cases; they go with old-and-tried (and difficult to use, and dangerous) for the sake of it, not because it's a better idea. And really, unless you write low-level code (which, let me just put that as a claim, Python programmers usually don't do), the new ways of more abstraction are superior.
But so much allocation of memory happens within a class already. There's nothing really gained by wrapping those things in a smart pointer; it's just more moving parts and syntax and generated code. The memory is already owned and managed by the object that allocated it, and of course deleting those things in the destructor allows you to catch any errors and log them for in-the-field diagnosis, which you can't do if you are just letting a destructing member delete the memory.
If you are passing allocated objects around, then of course it makes a lot more sense, but that's nothing new. It's been going on for a long time and isn't modern per se.
I certainly agree that stuff like override, =delete, =default, static_cast and such are significant improvements, and they don't bring lots of baggage with them that you don't want, unlike a lot of other new stuff. Things which improve type safety and the ability to express semantics are always a good thing in my opinion. I can't believe that we still don't have the option for explicit parameter direction indication.
But that's the problem: you didn't say explicitly what it should be. If the right side gets accidentally changed, nothing is going to complain if what it got changed to supports the same interface (not terribly uncommon in the modern world of lots and lots of operator-driven stuff.)
If you explicitly say what it's supposed to be, then two things have to get simultaneously broken in the same way. If you don't, then only one has to get broken for potential silent errors.
If that were the case I wouldn't use auto. The situation you describe is actually a feature of the auto keyword. It's pretty useful to only change the initializer, without having to change the type declaration, during refactoring.
I don't think the point of a language should be to make it easy to refactor without really thinking about what you are doing and the potential silent errors it could introduce. Significant refactoring isn't common, and it should be approached very carefully. Being explicit is always safer.