Yes, and probably with lower maintenance requirements. C# also probably requires a development team with an overall lower payroll. These are factors that really matter in real-world development. And if the result is "good enough", then call it done and move on.
Rule #1 of benchmarking code like this is to make damn sure both pieces of code are on the same playing field. As you've already been told, they are not.
NEVER EVER EVER put output code (WriteLine, cout, printf, setting a TextBox.Text property, ...) inside your timing code. If you need to print result strings, put the result values in a plain old, preallocated(!), array. Arrays work pretty much the same across all languages, so there isn't much of an overhead difference between implementations. Output your data to the screen after you exit the timing code. This way, you're not timing the efficiency of the output code on top of your algorithm code. As you've seen, the differences between console, stream and even visual control implementations can be HUGE.
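A minimal C++ sketch of that pattern (the dummy workload, names and array size here are mine, not the original benchmark):

#include <array>
#include <chrono>
#include <cstdio>

int main()
{
    constexpr int N = 1000000;
    // Preallocated result storage, filled inside the timed region; output waits until after.
    static std::array<long long, N> results{};

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        results[i] = static_cast<long long>(i) * i;   // the work being measured
    auto stop = std::chrono::steady_clock::now();

    // All output happens only after the timed region has ended.
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    std::printf("elapsed: %lld ms, last result: %lld\n",
                static_cast<long long>(ms), static_cast<long long>(results[N - 1]));
}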
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
As many others have also shown, your main issue is that the cout streaming in C++ is less than optimal (to say the least). There are also things like compiling with production settings (i.e. all optimizations enabled) instead of debug, ensuring thread priority is set to high, and setting CPU affinity so that no other processes interfere and cause a cache reload during the test. All these things can throw off any such benchmark.
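On Windows, for example, the priority/affinity part might look roughly like this (a sketch only; error handling omitted, and pinning to core 0 is an arbitrary choice):

#include <windows.h>

int main()
{
    // Do this before entering the timed section.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    SetThreadAffinityMask(GetCurrentThread(), 1);   // pin this thread to CPU 0

    // ... run the benchmark here ...
}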
But just to be nitpicky: you're using a simple clock to time your benchmark. In both C++ and C#, that has previously been seen to have lots of inaccuracies due to optimizations in the clock's implementation. To ensure the most accurate timing (if you're not using a profiler instead - which BTW you should rather do), use:
In C++ it depends on the system: under Linux, clock_gettime; under Windows, QueryPerformanceCounter (QPC) is the better alternative.
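For illustration, the two calls look roughly like this (a sketch; pick the branch for your platform):

#ifdef _WIN32
#include <windows.h>

// Returns seconds from a high-resolution, monotonic source.
double now_seconds()
{
    LARGE_INTEGER freq, counter;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&counter);
    return static_cast<double>(counter.QuadPart) / static_cast<double>(freq.QuadPart);
}
#else
#include <time.h>

double now_seconds()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   // monotonic, unaffected by wall-clock adjustments
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
#endif

// Usage: take a reading before and after the work and subtract:
//   double t0 = now_seconds(); /* ... work ... */ double t1 = now_seconds();
//   double elapsed = t1 - t0;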
All the comments are like "but you didn't do it this way" or "you didn't do it that way".
So that pretty much just says that you can easily write C# code that is fast, and in many cases faster than C++. However, if you want to obsess over your code optimization and use uncommon coding practices, you can make C++ faster than C# in many cases.
I copied and pasted your exact code and compiled them with the Visual Studio 2008 (v9.0) compilers (cl for C++, csc for C#) and got wildly different results--C++ reported a time of 2.868 seconds, versus 4 for C#. My exact command lines were:
cl /EHsc cpptest.cpp
cpptest.exe > cppoutput.txt
csc cstest.cs
cstest.exe > csoutput.txt
As for why people say C++ is faster than C#: there are a great many factors, including how the code was written, what the code is doing, how it was compiled, the system (hardware and OS) it's run on, compilers, etc. that can make a difference, and no doubt there may be examples of C# performing some things faster in some instances. In general, I think it has more to do with the overhead inherent in C#, but that's just me speculating. Maybe we can get a real answer from someone who knows more than I do about the internals of the languages (and whether managed C++/CLI would have had results similar to the C# code above).
I compiled and ran the program at home using g++.
The printing version took 5.6 seconds and the non-printing version took 4.62 seconds, much better than both of those times.
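(If it helps anyone reproduce this: the exact flags weren't stated, but a typical optimized g++ build would be something along the lines of the following, reusing the cpptest.cpp name from earlier in the thread.)

g++ -O2 cpptest.cpp -o cpptest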
C# also has the overhead of the .NET runtime, while the C++ program only has the C++ standard library for overhead. C++ is fully compiled ahead of time; C# is compiled to intermediate bytecode that still needs to be processed by the JIT at run time.
Because "everyone" doesn't know what they are talking about when then make a statement like that.
First, of course: in the pantheon of programming, speed is of small consequence. Microsoft, Google and Apple are not successful because of speed but rather because they make money. Businesses that don't make money do not last. And developers who think technology is more important than sales end up working for companies that don't make money.
Second, of course there can be actual business reasons where 'speed' is important, but in the vast, vast majority of cases actually producing speed depends on factors besides just code and language.
Third, when something does in fact need to be faster, it is experience, not code, that actually leads to faster solutions. Someone with years of experience is much more likely to produce a 'fast' system than someone without that experience, regardless of technology choice. Keep in mind, however, that that is an average and not an absolute.
Fourth, benchmarks, which are what you have, are pretty much useless for determining speed as far as it means anything in the business world. They often reflect nothing but one small aspect of the language and platform on which they run. That is, if they are done well. Poorly done, they often reflect a misunderstanding of how languages, platforms and even benchmarks work. Very rarely, they even reflect a biased attempt to achieve a specific result.
Fifth, if one really wants to focus on professional programming, then one should focus on understanding the basics of what it means to create a system, and not language specifics.
Basic one would be to understand that performance is impacted in the following order:
1. Requirements (most impact)
2. Design (both explicit and implicit)
3. Code
4. Language (least impact)
Basic two would be to understand that if one wants to get more performance out of an application (not a system), and one can only focus on 4 in the above, then one must do the following (one concrete workflow is sketched after the list):
1. Learn how to use an application profiler
2. Learn how to simulate real business data in the application
3. Run the profiler and determine where it might actually be possible to improve performance.
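As one possible illustration of that workflow (assuming g++ on Linux and the classic gprof profiler, with the cpptest.cpp file from earlier in the thread standing in for the application; any other profiler works just as well):

g++ -pg -O2 cpptest.cpp -o cpptest     # build with gprof instrumentation
./cpptest                              # run with representative data; writes gmon.out
gprof cpptest gmon.out > profile.txt   # see where the time actually went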
Focusing on 1/2 in the above can achieve orders of magnitude impacts on performance while 4 can only achieve marginal percentage increases.
After all of that, my personal opinion is that if one had an unlimited amount of time, if there were an optimal design and requirements, if two programmers were very, very experienced with similar backgrounds in application design, and if the application was limited to a very narrow subset of functionality, then it is possible that the C++ programmer might produce a solution that is marginally faster than the C# programmer's.
But since that is a complete fantasy which has nothing to do with reality, it is pointless to discuss it.
C#, or rather the .NET runtime, does a number of things to make code "safer": bounds checking, type safety, and so on. Some of that happens at compile time, but of necessity a lot of it happens at run time, which obviously takes time. Much of it can be "turned off" if you want to, though you may not want to. If you want very tight control of timing, then C++ (and likely straight C) is a better bet. For a variety of problems, .NET (and therefore C#) is performant enough and "safer" than C or C++. It is good to have options.
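For contrast, C++ makes that same tradeoff opt-in rather than opt-out; for example, std::vector offers both unchecked and checked element access (a trivial sketch):

#include <cstdio>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};

    int a = v[1];      // unchecked: fast, but an out-of-range index is undefined behavior
    int b = v.at(1);   // checked: throws std::out_of_range on a bad index

    try {
        v.at(99);      // demonstrates the runtime bounds check
    } catch (const std::out_of_range&) {
        std::puts("caught out-of-range access");
    }
    std::printf("%d %d\n", a, b);
}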