The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or otherwise) will result in your account being removed.
In fact I am not in love with C++, nor do I want deep control, but
I want C++ to be as easy as the C# + XAML stack
I want to use DirectX seamlessly
and speaking of which, Kenny Kerr's work in the upcoming SDK 17025 seems to be it. Really impressed. The upcoming C++17 standard helps too.
I would like C++ to be as easy as C# to integrate with other .NET work, for the off chance that I hit a scenario where it offers critically more performance.
And, could you believe it, I hit that scenario today at work. In a UWP app we were accessing some low-level hardware, and one operation was 100x faster in C++. I believe that is because any high-level device operation requires reading and writing lots of small arrays, which incurs a huge cross-ABI-boundary cost.
Luckily, writing a C++ UWP component usable by my C# project proved easy. Although I couldn't get the C++ component to reference a C# interface, so I ended up making it standalone.
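The "lots of small arrays" cost above comes down to paying a fixed per-call overhead many times. A minimal, self-contained sketch of the batching fix (all names here are hypothetical; a global counter stands in for the real cross-ABI marshalling cost, and no actual interop is involved):

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Stand-in for the fixed cost of crossing the C#/C++ ABI boundary:
// we just count crossings instead of measuring marshalling time.
static int g_boundaryCrossings = 0;

// Chatty interface: one boundary crossing per small array.
uint32_t ReadRegisterChatty(const std::vector<uint8_t>& smallBuf)
{
    ++g_boundaryCrossings;
    return std::accumulate(smallBuf.begin(), smallBuf.end(), 0u);
}

// Batched interface: many operations behind a single crossing,
// which is what moving the loop into the C++ component achieves.
uint32_t ReadRegistersBatched(const std::vector<std::vector<uint8_t>>& bufs)
{
    ++g_boundaryCrossings;  // one crossing for the whole batch
    uint32_t sum = 0;
    for (const auto& b : bufs)
        sum += std::accumulate(b.begin(), b.end(), 0u);
    return sum;
}
```

Same result either way, but the batched version crosses the boundary once instead of once per array, which is where the 100x-style difference tends to come from.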
Finally thanks for your feedback. It was interesting and relevant.
Most of my C# coding has been GUI and file system related. Timing comparisons are relevant only on selected parts with no user interaction. For file system operations, disc waiting time may easily be ten times the processing time - and that waiting is roughly programming language independent, unless the language libraries use different buffering strategies (and then you are comparing buffering strategies, not languages).
So, for a long time I had overlooked the "Optimize code" checkbox in the VC project properties: for the GUI/disk-related projects it really didn't matter. When I checked it, CPU-bound code became approximately 4 times as fast(!) (but cannot be properly debugged). Now I would certainly like to rewrite some of that CPU-bound code in C++ to see if there is anything to gain. I guess that it will be very little!
(By the way: there is no noticeable speed difference between x64 code produced directly by the C# compiler and CIL generated by the compiler with the final x64 code generated by .NET on the first run. I didn't expect there to be - but lots of people believe that just-in-time code generation imposes a heavy performance penalty.)
As you say: C++ (or just plain C) has its place as a CPU-independent (high-level) assembly language, e.g. for interfacing to hardware. In the old days, libraries were written in assembly, nowadays in C/C++. They make the interface available to e.g. C#, but you won't get completely away from assembly/C to create the interface library.
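That interface-library pattern usually boils down to exporting functions with C linkage so managed code can bind to them. A minimal sketch (the function name and signature are made up for illustration):

```cpp
#include <cstdint>

// A hypothetical hardware-style routine exported with C linkage so a
// C# program could bind to it via P/Invoke. extern "C" suppresses C++
// name mangling, giving the flat symbol name .NET expects to find.
extern "C" int32_t SumSamples(const int32_t* samples, int32_t count)
{
    int64_t sum = 0;  // widen during accumulation to avoid overflow
    for (int32_t i = 0; i < count; ++i)
        sum += samples[i];
    return static_cast<int32_t>(sum);
}
```

On the C# side, this would be declared roughly as `[DllImport("mylib.dll")] static extern int SumSamples(int[] samples, int count);` - assuming the function is built into a native DLL named `mylib.dll`.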
There is a discussion from 1998-99 (on NetNews/Usenet...) with this guy insisting that high-level languages are a fad: they will never be able to compete with properly written assembly code. Well, they can. GC can compete with most hand-written heap management code.
I am rather certain that I was looking at an automated GC for C++ in the 90s. It was an add-in library, of course, and one needed to adapt to certain ways of working.
But of course in C# and Java, both with automated GC, one must adapt to the reality that computers have many resources, of which memory is just one. So one must adapt to certain ways of working - one must always explicitly manage everything except memory. With C++, however, one can manage other resources via scope (dtors) and do so with complete control.
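Managing a non-memory resource via scope is the RAII idiom the post is describing. A minimal sketch, using a file handle as the resource (the class name is made up; in practice one would often reach for `std::fstream` or a smart pointer with a custom deleter):

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// Minimal RAII wrapper: the FILE* is released by the destructor when
// the object goes out of scope - on normal exit or via an exception -
// with no explicit close/Dispose call needed at the call site.
class ScopedFile {
public:
    ScopedFile(const std::string& path, const char* mode)
        : f_(std::fopen(path.c_str(), mode))
    {
        if (!f_) throw std::runtime_error("cannot open " + path);
    }
    ~ScopedFile() { if (f_) std::fclose(f_); }

    // Forbid copying so two objects never close the same handle.
    ScopedFile(const ScopedFile&) = delete;
    ScopedFile& operator=(const ScopedFile&) = delete;

    FILE* get() const { return f_; }

private:
    FILE* f_;
};
```

In C# the equivalent would be an `IDisposable` plus a `using` block at every call site; here the release is tied to the scope once, in the type itself.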
Member 7989122 wrote:
But it won't be mainstream again.
Certainly for my definition of "mainstream" it is in fact exactly that. And it hasn't changed much in the last 20 years.
one must always explicitly manage everything except memory.
Most certainly not.
When we were students, some OSes/file systems (notably: an IBM mainframe we ran a big project on) required you to manage disk space explicitly: You had to reserve the amount the file would require, as one contiguous chunk on the physical disk. (Certainly, we had been working on other systems with automatic disk space management for more than three years, and refused to believe it when this IBM guy gave us a lesson in how to create a file, but it was true!)
I am too young to have first-hand experience with setting up a job mix where one job would use the CPU while another did I/O, but a couple of my older textbooks discuss this. In any case, we have had automated CPU management for several decades.
Even communication has gone from statically switched lines to statistical multiplexing, where every user is automatically allocated capacity according to need.
So memory management is certainly not the only area for automatic management.
There are scaled-down OSes for embedded use as well. The Wikipedia "Embedded operating systems" category page has quite a few entries. Certainly, a lot of the entries are not OSes: some are proprietary, some may not qualify as "embedded" and some may not qualify as OSes at all.
The one I have been in touch with is Zephyr, which certainly qualifies as an OS, and certainly as embedded (it can well run on, say, IoT chips with less than 256K); it is open source and implemented on a number of typical embedded-class CPUs.
Yep, every embedded processor I've developed for in the last 20 years has had a C/C++ compiler available for it (and not much else). Unfortunately, they've also been years behind the curve. The current TI C++ compiler (I do a fair amount of DSP work) is only at the C++03 level.
Three of my last four jobs have involved C++. Two involved both C++ and C#. My current position is as a C++ software developer. I always hear it is dying, but I don't see that really happening. We are just not as flashy as the Web devs.
A niche where C++ could fit, I think, is in the field of web APIs. Picture a RESTful service that does the heavy processing efficiently, as only a good piece of C++ code can. Front ends (for web or mobile) can be done using C# or Java or Swift. When you install these web APIs to the cloud, it would mean (at least theoretically) less memory and probably less CPU usage (if the C++ code is optimized), which would translate into fewer resources and a less expensive Azure bill.
But I haven't seen a framework for C++ web APIs apart from half-baked projects that stopped active development long ago. So unless you want to go creating ISAPI extensions or good old CGIs, I don't see it happening anytime soon.
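For a sense of what such a framework's core would have to provide, here is a toy sketch of a request router in plain standard C++ (all names are hypothetical, and the networking layer is deliberately omitted so the dispatch idea stays self-contained):

```cpp
#include <functional>
#include <map>
#include <string>

// Toy routing core: maps "METHOD /path" keys to handler callbacks,
// the way a C++ web-API framework would before adding sockets,
// threading, and serialization on top.
class Router {
public:
    using Handler = std::function<std::string(const std::string& body)>;

    void Get(const std::string& path, Handler h)  { routes_["GET " + path] = std::move(h); }
    void Post(const std::string& path, Handler h) { routes_["POST " + path] = std::move(h); }

    std::string Dispatch(const std::string& method, const std::string& path,
                         const std::string& body = "") const
    {
        auto it = routes_.find(method + " " + path);
        return it != routes_.end() ? it->second(body) : "404 Not Found";
    }

private:
    std::map<std::string, Handler> routes_;
};
```

The heavy processing the post envisions would live inside the handlers, with only thin C#/Java/Swift clients in front.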
As for desktop development, Microsoft won't upgrade MFC nor open-source it, and neither Qt nor C++/CX is actually standard C++ (and WinRT seems too complicated to invest in for a sandboxed environment), so I don't think C++ will come back to the desktop anytime soon. Plus, Microsoft doesn't seem interested in desktop development aside from UWP, essentially dooming (IMO) desktop apps.
I don't think MS is dooming desktop apps; the Web & Cloud is.
Besides, there are heavy apps that can only work properly on desktops, e.g. Photoshop, CAD packages, C++ IDEs.
And, you wouldn't make such apps in WinRT.
Also, there are apps which are simply incompatible with UWP, e.g. browsers & file managers.
To summarize, UWP is OK as long as you don't fall into one of the above categories, but I really don't see a lot of C++ devs jumping into that boat; it's definitely for .NET-ers.
BTW, your statement about Qt not being standard C++ is kind of strange: Qt is not a C++ dialect, it's a framework. The C++ in Qt is as standard as your compiler ensures.
Do they at least still include the part where you slap someone in the face with a glove and agree upon a meeting with witnesses at dawn? That used to be the best part when someone violated the code of conduct.
"You, sir, are a scoundrel and a liar. Choose your weapons."
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
I'm thinking of something like a wedding where some folks could be interested in watching the live stream, perhaps because they are too busy or not well enough (e.g., someone infirm) to attend. At a bare minimum, it would take a cameraman, a key grip / M.C., and a network tech who makes sure the stream is working properly and can answer calls from viewers having issues. It would seem that this could be done by any tech shop, at a price of no more than $250/hr. I wonder how such a place would be advertised.