|
Rick York wrote: With auto the compiler knows exactly the appropriate type to use and gives you an error if it can't figure it out.
If it "knows exactly", how can it not "figure it out"? Sounds like a chick/egg scenario to me.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
If it knows exactly what the type should be, it'll substitute the appropriate type. If it doesn't know exactly what the type should be, it knows exactly that it doesn't know what the type should be.
How's that a chicken/egg-scenario? Either the compiler knows the type or it doesn't. That's the beginning of the causality chain. If the compiler doesn't know the type, it throws you an error. That's the end of the causality chain.
|
|
|
|
|
The compiler knows. The developer doesn't (necessarily).
|
|
|
|
|
That's pretty much the point. In most cases, namely when the type doesn't matter as long as it works, the developer doesn't need to know. In edge cases, such as auto i=1 where i is required to be, let's say, an unsigned value somewhere later down the line, the developer can still forgo the auto and make it an unsigned int i=1, or at least an auto i=static_cast<unsigned int>(1).
An example from my own work: I've been using API functions like GetTickCount quite a lot, and instead of looking up the exact return type, a simple auto s=GetTickCount() does the job. API functions returning some value are documented as "Returns 0 if the operation succeeded"; in that case, an if(0==s) is still enough, I don't need to know the type.
|
|
|
|
|
Even "auto i=1" can be made explicit with "auto i=1U".
|
|
|
|
|
It's laziness: same as var in C#. Yes, you need it (in C# you can't do LINQ without it, pretty much) but when all you ever see is
var x = 666;
var y = "Hello World";
var z = DoSomething(x, y);
It's just the coder* saying "I can't be bothered to think about it - you work it out for yourself"
* Note that I didn't use "developer" here
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
At first glance var does seem lazy; I use it regularly while working on a large codebase with a lot of 'technical debt'.
I use it quite a lot in my professional code development having been encouraged to do so.
There again I work in an environment where comments are frowned upon, the thinking being that well written code should not have to be documented - a philosophy which I don't agree with.
I think the use of var fits in with this 'no comments' philosophy, as the type of the variable is not explicitly stated and you have to figure it out with IntelliSense or by inspecting the method's return type.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
If your code has a lot of 'technical debt' it is probably not well-written code, and thus arguing about whether well-written code should be commented or not is irrelevant for that particular case.
---
I agree with a lot of the philosophy you don't like... I have been anti-comment and pro var for a long time.
I believe comments should not say what (names are for that) or how (the instructions are for that)... yet I think comments that explain why and for what are good. In the end, the motivation for having fewer comments is that comments are not checked and can be forgotten in refactoring, so there is a risk that they become outdated... sure, we can argue discipline, yet we use strictly typed languages for a reason. Thus, instead, we want to express in code what we would have said in comments.
With that said, I can tell you that using var as an extension of a no-comments philosophy is retrograde. The idea is to make the code express as much as it can (so it is explicit; that is what they mean by well-written code, please do not confuse that with verbose), so that we do not have things left to communicate in comments... from that point of view, var is counterproductive.
Let us be clear, var is not dynamic typing. Yes, names can help with knowing the type※... yet, no, I am not advocating for hungarian notation either. So how can I be anti-comment and pro var if they are at odds?
I believe in the use of var as a way to protect the code from reasons to change. The same goes for auto. And yeah, I use it virtually everywhere. It eases refactoring (if I change the return type, using auto avoids a maintenance ripple of updating types everywhere the code is used) and thus increases maintainability.
Addendum: You know what, I do realize it goes both ways, because if I did a poor job and returned something bad, auto will not complain. Although, I would expect it to break where we actually try to use the value.
If the code follows the robustness principle ("Be conservative in what you send, be liberal in what you accept"), we will not be using auto for the return type; instead, the return type should be as specific as possible (without breaking encapsulation, if any). On the other hand, we want to assign the return value to a variable, and the return type is probably much more specific than you actually need in client code. In that situation we probably should not care about the particular type... in fact, we can argue that in that situation - if possible - it often makes sense to cast to a base class/interface that expresses what we need from the return type... and yes, you can use auto with a cast. Yet, I will not be forcing you or anybody to use auto.
---
※: if I write auto highPriority = is_high_priority(w); the type of highPriority could be anything. However, it is expected that we can do if(highPriority){}, or else somebody did a very bad job at naming (somebody has been writing very bad code). Do not tell me that names mean nothing; names are for the people who read the code. If the name does not give you a clue and you need documentation to know what everything does, that is poorly written code. In fact, there could (and arguably should) be naming conventions that cover this. Yes, I understand that you want to see bool there; it gives people peace of mind. As I said above, I will not be forcing you or anybody to use auto.
So, nah, I'm not trying to convince you to use auto everywhere. I just wanted to give some insight on why one could advocate for zero comments and var everywhere. I felt misrepresented.
|
|
|
|
|
For no particular reason, I've adopted var in the case of your third example, but not the first two. I always use an explicit type for native/built-in types like string or int...but when it comes to classes (assuming your DoSomething() returns a class rather than a native type), I'll use var...especially when I might not even do anything with it other than forward it to somebody else (ie, I might not even need to look at any of its properties or members).
|
|
|
|
|
var in Javascript relates to scope rather than the type of the object.
I think you may be referring to C#
Scope is perhaps the one thing in Javascript that gives me a taste of what hell might be like.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
auto is abused for simple POD types.
When you start using more advanced C++ idioms (templates, lambdas...), it can be a soul-saving tool.
I'd rather be phishing!
|
|
|
|
|
This. A thousand times this!
auto is a huge benefit for iterators and such. Do you really want to be typing out std::vector<Something*>::iterator... when auto can save you time and typing? You would twist your fingers and brain up remembering the correct syntax for more complex structures, such as maps.
Don't use it for your data definitions, but use the hell out of it everywhere else! You will save a huge amount of time. And it will be easy to tell the intent by the way it is used: auto it = someVector.begin();
|
|
|
|
|
It is getting stupid, but then again much of modern C++ is some sort of attempt to try to make it into exactly what it really isn't. It's like we've been infected with people whose idea of software is a JavaScript web site.
When you are writing serious code that you will have to support and upgrade over decades, being as explicit as you can is always a good thing. You'll write it once, but you'll have to read and modify it many, many times. Auto makes it way too easy to make silent mistakes during modifications, because it just takes on whatever you assign to it. If the type you wrongly assign is syntactically similar enough, and that's not hard given how much people do with operators and other templates and such, it will just silently change the code.
If you explicitly indicate the type, you have to screw up in two different ways at once, which makes it that much less likely to happen silently.
Explorans limites defectum
|
|
|
|
|
I'm tellin ya, every time I read stuff on r/cpp, I start to suspect more and more that Russia is putting stupid pills in our water. I mean there are people now arguing for stuff that was so utterly bad in the 1980s that pretty much an entire industry switched to OOP to get rid of it. And they are arguing for this stuff like it's some sort of modern, magic hipster technology to fix all of the evils of OOP.
Explorans limites defectum
|
|
|
|
|
That is very representative of society today : thinking lazy, being lazy. With the consequences we know...
|
|
|
|
|
The use of auto is not laziness, nor is it abusive. It is correct and idiomatic modern C++. Bjarne Stroustrup and most members of the ISO C++ standardization committee actively advocate for its use, to the point where AAA - Almost Always Auto - has become a common mantra. The simple fact is that, most of the time, the compiler is smarter than you, and understands your code on a level that you never could. Allowing the compiler to determine the type automatically, as often as possible, allows for optimizations that may not be possible if you coerce an explicit type.
People who reject evolutionary features of C++ are the same sort of people who would reject fuel injection on cars, because they learned how to drive a car with a carburetor, so everyone else should be fine with it.
Technology advances. Try to keep up, or be left behind.
|
|
|
|
|
Andy Hoffmeyer wrote: The simple fact is that, most of the time, the compiler is smarter than you, and understands your code on a level that you never could. Allowing the compiler to determine the type automatically, as often as possible, allows for optimizations that may not be possible if you coerce an explicit type.
So, by that logic, if I was using an IDE with good IntelliSense, hovered over a variable declared initially with auto (which showed me what the omniscient compiler decided the type should be), and then explicitly declared the variable to be that exact type, it would somehow break the multi-dimensional optimization the compiler would perform.
Are you seriously saying that or did I misread your comment??
|
|
|
|
|
You're performing a good bit of mental gymnastics to arrive at that interpretation of what I said. Clearly, if you know the exact type that would be deduced, there would be no penalty for explicitly using it.
|
|
|
|
|
But you always DO know the type, well 99.9999% of the time. So clearly there's no penalty. So how exactly does the compiler know more than us, particularly enough to risk the potential silent bugs that auto could introduce?
Explorans limites defectum
|
|
|
|
|
Can you give an example of such silent bugs? In fact, I think the ISO C++ committee would probably be interested in hearing about these bugs so they could address them in the next release.
|
|
|
|
|
I gave one below and I'm sure that they know about them and they cannot address this, because it's fundamental to why auto is dangerous.
auto whatever = GetSomething();
while (somecondition)
    whatever++;
If you accidentally change the right side to anything that provides a ++ operator (anything that is syntactically valid for the loop), the compiler will never know that's wrong, because you are not providing the compiler with information about your intent. The compiler is only being given SYNTACTICAL guidance when you use auto, not SEMANTIC guidance, which is what it needs to help you in this situation.
If you provide the actual type, then you are telling the compiler what your intent is, i.e. semantic information, and so you have to make two parallel errors in most cases for this to silently cause a bug. Otherwise, you won't know until you somehow realize that something isn't getting incremented as it should be, which could most likely have been caught at compile time with explicit typing.
Explorans limites defectum
|
|
|
|
|
That sounds like your API has issues. Functions are interface contracts and if you don't know that function's return type changed, then the interface has been broken. That's outside the scope of language keywords, in my opinion.
|
|
|
|
|
So you are saying only one class in the entire code base can have a ++ operator? Or a += operator? or an add() method or a push() method?
Explorans limites defectum
|
|
|
|
|
I never said anything even close to that. I am saying if the maintainer of the GetSomething() function changes the return type without informing the consumer, that's a major problem that has nothing to do with language features.
|
|
|
|
|
So you are saying that mistakes shouldn't happen? Of course GetSomething()'s return could be changed by accident, and that would get caught also. But the more likely scenario is that someone accidentally changes the call, either by editing the wrong thing, or by search and replace, so that something besides GetSomething() is being called.
Either of those things would become silent failures that could be taken care of by using an explicit type. Are they going to happen every day? No, they won't. But it's those types of silent errors that are the killers. Those are the ones where, six months later, the code suddenly stops working in the field and no one understands why.
Explorans limites defectum
|
|
|
|
|