But you wouldn't believe how many people talk about 'kWh per hour'.
Sure, it could be meaningful if the power varies, but then the 'kWh per hour' figure might vary too, and cannot be treated as a single value. And lots of people refer to kWh/hour even when the power is constant.
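For what it's worth, the arithmetic behind the objection fits in a few lines of C (the numbers are invented, purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* 'kWh per hour': the hours cancel, leaving plain kW (average power). */
        double energy_kwh = 12.0;   /* energy metered over the period */
        double period_h = 4.0;      /* length of the period, in hours */
        printf("average power: %.1f kW\n", energy_kwh / period_h);  /* 3.0 kW, not '3 kWh/h' */
        return 0;
    }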
kW per hour per hour? In which situations does that unit occur?
When does kW per hour occur? A constantly rising (or falling) power, at a rate of x kW/h?
And then you want a unit for how much, in kW, the power has risen in one hour.
You don't want to simply call it kW: these are not 'absolute' kilowatts but a change in power - a change per hour, over a period of an hour - which is a different kind of kW unit.
Then you want the unit of the increase in power for each minute, right?
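To make the ramp concrete, here is a minimal C sketch of a power rising linearly at a constant rate (all values invented):

    #include <stdio.h>

    int main(void)
    {
        double p0_kw = 2.0;          /* power at t = 0, in kW */
        double rate_kw_per_h = 1.5;  /* the 'x kW/h' ramp rate */
        /* Power after h hours: P(h) = P0 + rate * h. The rate itself is the
           kW-per-hour quantity; the instantaneous values are plain kW. */
        for (int hour = 0; hour <= 3; hour++)
            printf("t = %d h: %.1f kW\n", hour, p0_kw + rate_kw_per_h * hour);
        return 0;
    }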
You come across the strangest units if you look around.
When I was a student, we did some filter calculations where I (after years of wondering) saw how a frequency correction could be given by a time value (European FM pre-emphasis is 50 µs, US radios use 75 µs - or is it the other way around?), but it never really sank in for me; it is just a strange artifact of unit arithmetic!
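For the record, that time value is just the RC time constant of the pre-emphasis filter, and the corresponding corner frequency is f = 1 / (2 * pi * tau). A quick C check (assuming my recollection of the 50/75 µs split is the right way around):

    #include <stdio.h>

    int main(void)
    {
        const double pi = 3.141592653589793;
        double tau_eu = 50e-6, tau_us = 75e-6;  /* time constants, in seconds */
        printf("50 us -> %.0f Hz\n", 1.0 / (2.0 * pi * tau_eu));  /* ~3183 Hz */
        printf("75 us -> %.0f Hz\n", 1.0 / (2.0 * pi * tau_us));  /* ~2122 Hz */
        return 0;
    }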
If you change that 60 to NumberOfMinutesPerHour, the question is: Are these really the same MinutesPerHour as when you measure 'absolute' time progression? Or do these MinutesPerHour have slightly different semantics from the wall clock's minutes per hour (just as a kW value describing a change in power per hour has different semantics from an instantaneous, or constant, power value in kW)? Maybe it should be called NumberOfMinutesPerHourWhenCalculatingChangesInPowerOverTime?
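Whatever the constant ends up being called, the conversion itself is just a division; a sketch with invented names:

    #include <stdio.h>

    #define MINUTES_PER_HOUR 60.0  /* arguably the same 60 as on the wall clock */

    int main(void)
    {
        double ramp_kw_per_hour = 6.0;  /* made-up ramp rate */
        /* (kW/h) / (min/h) = kW/min - the hours cancel here, too. */
        printf("%.2f kW per minute\n", ramp_kw_per_hour / MINUTES_PER_HOUR);
        return 0;
    }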
When we updated our programming guidelines, our project leader immediately granted an exception from the 80-character maximum line length: our rules for how to construct 'const' names (this was K&R C) led to several identifiers more than 80 characters long.
Finally, there is the famous Xerox Fortran manual quote:
"The primary purpose of the DATA statement is to give names to constants;
instead of referring to pi as 3.141592653589793 at every appearance, the
variable pi can be given that value with a DATA statement and used instead
of the longer form of the constant. This also simplifies modifying the
program, should the value of pi change."
In such a situation, the constant is badly mis-named.
Calling it a constant, if you intend to vary it, is a small detail.
Setting MILLISECONDS_PER_SEC to 2000? Seriously?
Which time-dependent operations would that affect - the ones where these milliseconds are used, the ones where other milliseconds are used, or all of them? Would it double or halve the speed?
A properly named semi-constant value would be something like SLOWDOWN_FACTOR or VIRTUAL_TIME_TO_REAL_TIME.
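Something along these lines (all names invented) keeps the genuine unit conversion honest and puts the tweak where it belongs:

    #include <stdio.h>

    #define MILLISECONDS_PER_SEC 1000  /* a real constant: leave it alone */
    #define SLOWDOWN_FACTOR 2          /* the knob you actually meant to turn */

    int main(void)
    {
        int timeout_sec = 5;
        int timeout_ms = timeout_sec * MILLISECONDS_PER_SEC * SLOWDOWN_FACTOR;
        printf("effective timeout: %d ms\n", timeout_ms);
        return 0;
    }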
And while I am at it: I really hate the C-style CONSTANTS_IN_ALL_UPPER_CASE rule - largely because I have seen too many cases where a functional extension required the symbol definition to be changed to a variable to adapt to other situations, but the old upper-case name was used in so many source files and so much documentation that changing it would cost too much, so the upper-case name was retained for the variable.
(I also had my first serious programming training in Pascal, and when switching to C, I really missed the option to replace a semi-const/variable definition with a (parameterless) function call: parentheses were not required in Pascal, but in C you have to go through the code and add () to every use of the symbol.)
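A sketch of that pain in C (the name and the config lookup are made up): once the value has to be computed at run time, the symbol must grow parentheses at every point of use, and - as complained about above - it often keeps its old upper-case name:

    #include <stdio.h>

    /* Before: #define MAX_USERS 100, referenced in many files. */
    /* After: the value comes from somewhere at run time, so every bare
       use of MAX_USERS must be hand-edited into MAX_USERS(). */
    static int MAX_USERS(void) { return 100; /* imagine a config lookup here */ }

    int main(void)
    {
        printf("limit: %d\n", MAX_USERS());  /* note the added () */
        return 0;
    }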
If I'm analysing a time-based data stream, I may wish to record how many milliseconds' worth of data I am processing per second. This information would be crucial for scaling the services doing the processing.
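A sketch of that metric (names and numbers invented): the ratio of stream time processed to wall-clock time tells you whether one instance keeps up.

    #include <stdio.h>

    int main(void)
    {
        double stream_ms_processed = 4500.0;  /* ms of stream data chewed through */
        double wall_clock_ms = 3000.0;        /* real time it took to do so */
        /* > 1.0 means the service keeps up; < 1.0 means you must scale out. */
        printf("real-time factor: %.2f\n", stream_ms_processed / wall_clock_ms);
        return 0;
    }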
MILLISEC_PER_SEC is actually a conversion factor between some underlying time measurement and seconds - the name suggests it is milliseconds, but it might not be, and it might change. Some old computer hardware only kept time at the power-line frequency of 50 or 60 Hz, and some future hardware (or OS) might keep time intrinsically in nanoseconds. Having a symbolic constant for this conversion, instead of a magic number 1000, makes it perfectly clear what the semantics of the conversion are, which is a good thing.
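On that reading, the constant really names the tick rate of the underlying clock, whatever that happens to be; a hedged sketch (the 1000 is just today's value):

    #include <stdio.h>

    /* The underlying clock might tick at 50 Hz, 60 Hz, 1000 Hz or 1e9 Hz;
       only this one definition would need to change. */
    #define TICKS_PER_SEC 1000L

    int main(void)
    {
        long ticks = 4500L;  /* some elapsed tick count (made up) */
        printf("%.3f seconds\n", (double)ticks / TICKS_PER_SEC);
        return 0;
    }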
I disagree - I have done such things often.
If you are writing software for an embedded system, it will often happen that the basic time tick is close to, but not exactly, a millisecond: for instance, with a 1 MHz clock and a clock divider of 1024 you get 1.024 ms. You can still think of this as a millisecond, which is close enough for some purposes. But if you want to scale to a longer time period, you get errors. For instance, there are about 977 of your "milliseconds" in a second. So defining MILLISEC_PER_SEC as 977 is quite reasonable. Then 60 "seconds" is only off by 0.027 seconds. If you used the nominal 1000 "msec" per "sec", you'd be off by 1.44 seconds.
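Spelled out in C (1 MHz / 1024 = 976.5625 Hz, so one tick lasts 1.024 ms):

    #include <stdio.h>

    #define TICKS_PER_SEC 977  /* actual 1.024 ms ticks in one second, rounded */
    #define TICK_MS 1.024      /* real duration of one tick */

    int main(void)
    {
        /* Wait 'one minute' by counting ticks. */
        long ticks = 60L * TICKS_PER_SEC;                 /* 58620 ticks */
        double actual_s = ticks * TICK_MS / 1000.0;       /* 60.027 s: off by ~0.027 s */
        double naive_s = 60.0 * 1000 * TICK_MS / 1000.0;  /* 61.44 s: off by 1.44 s */
        printf("with 977/sec: %.3f s; with 1000/sec: %.3f s\n", actual_s, naive_s);
        return 0;
    }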
It may never actually be needed, but it's better than having a lurking magic value of 1000 repeated throughout the code. It communicates something about wherever it's being used - probably to convert a seconds value to/from milliseconds.
If you have measurement-illiterate members on your team, it's rather needed indeed. Sure, to everyone with even a bit of education in engineering, "milli" is clearly E-3. But to everyone else, not so much.
Heck, I've seen people stumbling over "1,2 k€" which is still rather clear to me.