|
My god... That was impressive. I really think you should try to listen to people like Tim Smith, Christian Graus, ... because you really need to understand one thing: think twice before posting such a stupid and useless remark about HN.
1) HN is used as a base for most of the existing programming conventions. It doesn't matter if you use it or not; you have to choose a convention before starting any project, even if you're the only "hardcoder".
2) Well, if you don't use a prefix, how do you name your variables? Let's take an MFC example: you create a CListBox object that will list some names. Do you name it ListBoxName or lbName? It doesn't matter, as long as you choose a convention! I prefer lb because it's shorter... but if you choose ListBox, well... it's your choice!
3) I totally agree with you; that's why I never use s or l, I simply use "n" for all my (n)umbers. But not real numbers :p
4) So you only read that document? I can give you some good references if you're willing to listen.
The example code you submitted is the best gift you could give us, defenders of the programming conventions, because it doesn't represent at all what HN is about. No spaces, no tabs, stupid names, pointers, stupid algorithms, C-in-C++ crap... I told you it's really impressive!
I don't have much time to rewrite it correctly (in my opinion), so let's just say that you should try to be more... careful.
Jean-Marc Molina
Email: jmmolina@ifrance.com
Web: http://goa.ifrance.com
|
Jean-Marc,
You should really read a message, and try to understand what the person was trying to say, before you reply to it. You should also try not to be so insulting.
Jean-Marc Molina wrote:
Because it doesn't represent at all what HN is about. No spaces, no tabs, stupid names, pointers, stupid algorithms
THAT was my point exactly. The example code was quite unreadable, and the person who wrote the code
a) thought that the code was not only okay, but good enough to use as an example.
b) invented HN
So I said:
Warren Stevens wrote:
Look at the code and then ask yourself "is the person who wrote this unreadable mess really someone I should be taking advice from?"
I know there are a bazillion articles on HN; I was just picking one (that the inventor of HN wrote) to show that he might not be the best person to take advice from.
As you said:
let's say that you should try to be more... careful
|
If you condemn HN as obsolete due to modern compilers, you should consider that HN helps the programmer read the code, not the compiler.
I personally use a variant with types capitalized, variables minuscule, and additional semantic prefixes like Lim, Max, Min.
This way I can use the same name for type/class and variable; e.g:
Color color;
The somewhat formal naming procedure actually simplifies the task for me, because in 80% of the cases I immediately know how a new variable should read; and 100 lines down I can reference the variable without looking it up again.
Another thought: whenever too many prefixes add up and the resulting name grows ugly, I automatically feel an urge to reconsider whether this complicated access is really unavoidable. And simplifying things is usually good programming practice.
|
Has this question been posted as a poll before? If not, I'm seeing a winner here!
|
It's nice to see something like this; someone advocating Hungarian without going into the full "goulash" that's created when everything has its own prefix.
Here are some suggestions of mine:
- s_ for a static scoped variable.
- str for a string (from MFC)
- u for unsigned integer
- i for signed integer
- s for short, signed integer
- Use all uppercase for constants (something my Mac programmer colleagues seem to have a constant problem (kProblem) with :')
- Most important: avoid the temptation to give every data type its own prefix! When this happens the code starts to become incomprehensible because there are too many "standard prefixes" to keep track of.
|
Jim A. Johnson wrote:
- u for unsigned integer
- i for signed integer
- s for short, signed integer
- Most important: avoid the temptation to give every data type its own prefix! When this happens the code starts to become incomprehensible because there are too many "standard prefixes" to keep track of.
Is this meant to be ironic?
FWIW, I use n for any number, so that if I need to change the type, I don't need to change it in a bajillion places.
Christian
The tragedy of cyberspace - that so much can travel so far, and yet mean so little.
"I'm somewhat suspicious of STL though. My (test, experimental) program worked first time. Whats that all about??!?!"
- Jon Hulatt, 22/3/2002
|
I also use n for integer numbers. Any more and it just becomes a nightmare. I used to use l for longs and i for ints, but it didn't make the code more readable.
Michael
|
I used to use b for bugs but then I started coding in VC++ and bugs are a thing of the past.
|
cheers,
Chris Maunder
|
Christian Graus wrote:
- Most important: avoid the temptation to give every data type its own prefix! When this happens the code starts to become incomprehensible because there are too many "standard prefixes" to keep track of.
Is this meant to be ironic?
Not at all. I've seen several projects where every class has its own little prefix... some from MS. I can't think of examples now, but in the MS case they mostly have to do with handles: hwndMain, hbmpSomething, hpalSomething, rather than hMainWnd (or hMainWindow), hSomeBitmap, hSomePalette.
Christian Graus wrote:
FWIW, I use n for any number, so that if I need to change the type, I don't need to change it in a bajillion places.
That's where it's confusing. There are too many types of numbers - integers, floats, and bytes are all numbers. I use types in my notation specifically so that I can watch for trouble like:
cSomething = bySomething; // Why am I stuffing a byte into a char?
Kind of a contrived example, of course... the compiler will catch the nastiest ones (comparison of signed vs. unsigned); but knowing the sizes of things helps me avoid problems.
|
At my office, we've implemented nearly identical standards.
|
You have written the following:
Bitmap | IDB_ | IDC_ARROW
I think that should be IDB_ARROW
And maybe another one:
Windows message | Msg | msgCut
Should msg start with a capital letter or not?
And one in your last sentence
Free free to ...
I think it should be "Feel free to ..."
Bye,
Gertschi
|
I'd like to say you'd found my deliberate mistakes,
but as they were not deliberate... ;)
I'll fix those right away.
Thanks,
Bryce
|
Great to have a list like that.
Does anybody have something similar for the coding styles for C# and other .NET languages?
Michael
|
Microsoft covers it nicely themselves; see the Design Guidelines for Class Library Developers. This document covers names for public things, amongst many other things, but when considering local variable names, note particularly the section on parameter names. In short, things have changed a lot: camel casing and no Hungarian notation.
--
-Blake (com/bcdev/blake)
|
Arghh, 10 years of coding style down the drain.
Thanks for the link.
Michael
|
I understand you and I feel the same.
But wait...
To me it DOES make sense that Microsoft has changed their conventions to some sort of mixture of Pascal and camel casing, but I would only expect this for the .NET framework and managed coding.
In this case (like C#) all built-in types inherit from the same base class, 'object'. I would say that it is no longer so extremely important to have a sort of visual confirmation of the actual type (since all types are basically the same).
In native C and C++, though, I believe it is still extremely important to know the actual type everywhere (not to forget scope: 'm_', 's_' and 'g_' make a hell of a difference in avoiding name clashes with locals), and Hungarian notation is a sort of tool / help to make the code much clearer to read (just like someone here wrote earlier).
So, two other things about Hungarian notation have been discussed here:
1. What if I change the actual type and don't change the notation?
Well, I say, then you are a sloppy coder and should do something other than programming. It is always one of the responsibilities of a programmer to maintain his code and keep it tidy. I can't personally remember ever having forgotten that. And besides, how often do you REALLY change the type of a variable?
2. The difference in size of integers ('n' for both shorts and ints).
Well, I admit this is bad, but where I worked we had long ago resolved this issue by simply having different notations for all possible types. Yes, this is possible. Just sit down for a few hours at the beginning of a project, discuss the coding guidelines, style, rules and file/folder naming and structure, and you will find solutions to things like this.
I have rather recently moved to a different company, and in the current (C/C++) embedded project I'm working on, we settled for camel casing everywhere with the addition of '_p' at the end (!) of names for pointers.
Well, I don't like it at all, but if you are a team member you simply have to accept the conventions that most of the team members prefer. I guess that's how democracy works: it will never be perfect for everyone, but for most.
All in all, my experience tells me that if you have become used to using HN, you will find that you make your own code much clearer in its intentions and typing, and you will also find it much easier to read other people's code written using HN.
Thanks for the article, even though I think it should be a little more complete (what about static members, STL types, COM/ATL types, Windows controls, etc.) and should also address the HN issues I mentioned in point 2, because this is not so good.
|