I have also had to fix problems such as the MM-DD-YY problem you described.
Strong typing often causes as many problems as weak typing because people want to find a solution to their problem. I'm thinking along the lines of "definable" strong-type conversions. Using arithmetic conversions such as integer to character, I would simply like to say "string S = i", having previously declared "i" as an integer. Then add optional meta-data such as a format.
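Something along these lines is what I have in mind (a rough C++ sketch; the to_string_fmt helper and its format argument are entirely made up, just to show the idea):

#include <cstdio>
#include <string>

// Hypothetical "definable" conversion: declare once how an integer becomes
// a string, with optional format meta-data attached to the conversion.
std::string to_string_fmt(int i, const char* fmt = "%d")
{
    char buf[32];
    std::snprintf(buf, sizeof(buf), fmt, i);
    return buf;
}

int main()
{
    int i = 42;
    std::string S = to_string_fmt(i);              // the "string S = i" I'd like to write
    std::string padded = to_string_fmt(i, "%05d"); // same value, extra format meta-data
    std::printf("%s %s\n", S.c_str(), padded.c_str()); // prints: 42 00042
}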
Regarding pointers, C++ pointers have made debugging difficult, and having to cast causes even more confusion. I'm thinking the majority of pointer and casting problems are caused by less experienced or lazy developers trying to find a quick, workable (in most cases) solution to their problem.
I'm just wondering if there isn't perhaps a better solution.
I disagree - they are a different type of problem. Strong typing reduces problems because they can be caught at compile time, while your weak-typing problem exists because the developer hasn't thought about the code sufficiently. If strong typing forces him to look at what he's doing instead of assuming that the compiler will do the right thing, then that improves the code and reduces the chances of bad data.
You make typing mistakes, I make them - we all do. The more the compiler can pick up, instead of assuming it's correct and blindly converting the wrong data, the better - no?
For example, with automatic conversions the compiler will happily accept this:
string s = id;
...which is bad if you actually meant to do this:
string s = IansName;
Why is it such a problem for you to spell out what you want the compiler to do?
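To put it in C++ terms (a minimal illustration, nobody's real code):

#include <string>

int main()
{
    int id = 42;

    // std::string s = id;               // refuses to compile: "42" or "\x2A"?
    std::string s = std::to_string(id);  // spelled out: the number, as text

    std::string name;
    name = id;   // compiles! operator=(char) quietly turns 42 into '*' -
                 // exactly the kind of "helpful" conversion that puts bad
                 // data in your output.
    return 0;
}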
Adding formatting defaults and suchlike makes it more confusing if the developer doesn't realise what format is being used ... which is why you get dd/MM/yy and MM/dd/yy confusion because different users use different defaults!
Sent from my Amstrad PC 1640 Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff.
Good! I got a PhD by asking questions about things that everyone else assumed, which seemed weird at the time, but I was right to question the assumptions - it turns out they were wrong.
The weird problems are the most interesting.
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
Weird problems are the most fun, most satisfying, but usually take more time than the budget allows.
Result: People take shortcuts because somebody is crawling all over them to "get the job done".
I, like many people, am inherently lazy. But I work very hard at being lazy efficiently. I do not like solving the same problems over and over. I do like solving a problem--once! The second and subsequent times are a waste of my time.
I will never, ever, buy a self-driving car with the current state of software development.
In fact, it's so big, she counteracts the effect of the moon on local tide tables.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
Once you start writing significantly sized software, automatic conversions are the last thing you want. It's a minefield of errors waiting to happen, IMO.
As is often the case in life, you can sort of pick one: easy or good. You can't really have both unless it's a fairly modest undertaking. When it gets serious, the old saying of measure twice, cut once really applies. The time you spend being very specific to the compiler about what you want to happen will save you endless woe.
This is one of the things that really makes me shake my head at modern C++ where people are using 'auto' all over the place.
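A couple of made-up but representative lines showing why:

#include <cstdio>
#include <vector>

int main()
{
    // Automatic conversions: both lines compile, both quietly do the wrong thing.
    int    count  = 3.7;    // silently truncated to 3
    double letter = 'A';    // silently becomes 65.0

    // 'auto' hides what you actually got: size() is unsigned, so on an empty
    // vector this "harmless" subtraction wraps around to a huge number.
    std::vector<int> v;
    auto remaining = v.size() - 1;   // std::size_t, not int

    std::printf("%d %.1f %zu\n", count, letter, remaining);
}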
A colleague was just complaining about a new language+library that he was using for big data.
Too many "magical" conversions were taking place under the covers. Often processing the data in non-optimal ways, and not an easy way to force better ways to process it because of poor library design.
Per dates and formats: dates/DateTimes in code should be some sort of numeric type (possibly wrapped) and always in UTC. As someone else mentioned, display of dates (including timezones) should be a user preference or a replaceable component that mimics being a user preference.
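A rough sketch of what I mean (the Timestamp alias and format_for_user are hypothetical names, plain C++ just for illustration): the stored value is a UTC time point with no formatting attached, and the user's preferred format is only applied at the display edge.

#include <chrono>
#include <cstdio>
#include <ctime>
#include <string>

// Internal representation: a UTC time point, effectively a wrapped number.
using Timestamp = std::chrono::system_clock::time_point;

// Display edge: the format string is a user preference and nothing more.
std::string format_for_user(Timestamp t, const char* user_fmt)
{
    std::time_t tt = std::chrono::system_clock::to_time_t(t);
    char buf[64];
    std::strftime(buf, sizeof(buf), user_fmt, std::gmtime(&tt));
    return buf;
}

int main()
{
    Timestamp now = std::chrono::system_clock::now();
    std::printf("%s\n", format_for_user(now, "%Y-%m-%d %H:%M:%S UTC").c_str());
    std::printf("%s\n", format_for_user(now, "%d/%m/%Y").c_str());
}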
I faced a similar problem just recently. The solution was to let the compiler handle the endianness. Integers, Chars (multibyte/Unicode in that context), all work the same way in the same language.
I don't cast/convert between integers and chars, because one is numbers and the other is characters - letters. If a conversion is needed, the compiler offers conversion functions that work the same independent of the platform.
Same for data declarations: in C(++), a uint16_t is unambiguous. Across languages, however, I don't see how that's supposed to work. What's even the point of piping a C source file into a Delphi compiler or vice versa?
This technique naturally results in my tools behaving the same on every platform the compiler offers.
Your post reminds me of a nightmare from a coworker that I fixed a while ago. He was parsing a protocol with the byte stream containing, amongst other things, integers. The protocol is LE, the system the parser runs on is LE (Windows x86-64), and all things work splendidly - for 1-, 2- and 4-byte integers, that is. Just cast it! And to parse 3-byte integers, he built a monster of bit shifts and whatnot.
I've replaced his horrible mess with somewhat simple code that initializes the result to 0 and then adds byte by byte, multiplying accordingly. Voilà, problem solved: the function is simple and works for an arbitrary number of bytes.
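In C++ terms the replacement boils down to something like this (a sketch of the idea, not his actual code; decode_le is my own name for it):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Decode an unsigned little-endian integer of 'count' bytes (1..8):
// start at 0, add byte by byte, multiplying by the place value as you go.
// No casts, no bit-shift monsters, and 3-byte fields are nothing special.
std::uint64_t decode_le(const std::uint8_t* bytes, std::size_t count)
{
    std::uint64_t value = 0;
    std::uint64_t place = 1;
    for (std::size_t i = 0; i < count; ++i)
    {
        value += bytes[i] * place;
        place *= 256;
    }
    return value;
}

int main()
{
    const std::uint8_t wire[3] = { 0x01, 0x02, 0x03 };  // 0x030201 on the wire
    std::printf("%llu\n", (unsigned long long)decode_le(wire, 3)); // prints 197121
}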
Your 3 byte integer problem is exactly the kind of work-around that causes problems. I like your solution to the LE problem--simple and directly understandable.
But, what if we didn't have to think about it? What if the variable could be defined as "3 byte little endian integer" and the conversion to/from machine requirements took place automatically on all future references? How could something like that be implemented at the language, compiler or machine level?
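Something like this rough sketch, perhaps (the uint24_le type is completely made up, just to show the idea):

#include <cstdint>
#include <cstdio>

// Hypothetical "3 byte little endian integer": the wire layout is part of
// the type, and conversion to/from the machine's native integer happens
// automatically whenever the value is read or assigned.
struct uint24_le
{
    std::uint8_t bytes[3];

    operator std::uint32_t() const          // reading converts from the wire layout
    {
        return bytes[0] | (bytes[1] << 8) | (bytes[2] << 16);
    }
    uint24_le& operator=(std::uint32_t v)   // assigning converts back
    {
        bytes[0] = static_cast<std::uint8_t>(v);
        bytes[1] = static_cast<std::uint8_t>(v >> 8);
        bytes[2] = static_cast<std::uint8_t>(v >> 16);
        return *this;
    }
};

int main()
{
    uint24_le field{};              // as it would sit inside a protocol struct
    field = 197121;                 // 0x030201
    std::uint32_t native = field;   // automatic conversion on use
    std::printf("%02X %02X %02X -> %u\n",
                field.bytes[0], field.bytes[1], field.bytes[2], native);
}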
I cannot tell you how many times I've seen "char" definitions used inappropriately when a "byte" definition would make more sense. Or things like:
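(a made-up specimen of the pattern, not quoted from anyone's real code):

#include <cstdio>
#include <cstring>

int main()
{
    // "char" pressed into service as a raw byte: on platforms where plain
    // char is signed, 0xAB turns negative the moment anyone compares it.
    char header[4];
    std::memset(header, 0xAB, sizeof(header));

    if (header[0] == 0xAB)                       // false when char is signed!
        std::printf("matched\n");
    else
        std::printf("0xAB != %d\n", header[0]);  // prints: 0xAB != -85
}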
But we already have that! Let's forget the 3-byte integer for a moment; that's rather specific to that protocol and usually not needed. When I work with an int32_t (a rather common type), I don't have to care which endianness the underlying system uses. The compiler takes care of everything! Same goes for, let's say, uintptr_t. I don't have to care about endianness, bitness, none of that. The compiler does it for me. Well, I of course have to work with the compiler, but your world, the one where the programmer doesn't have to care, is already there.
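For example (a minimal sketch): nothing in ordinary int32_t code depends on the machine, and the one place endianness genuinely exists - the byte boundary of a file, socket or protocol - is where you spell it out, once.

#include <cstdint>
#include <cstdio>

int main()
{
    // Working with the value: nothing here depends on endianness or bitness.
    std::int32_t a = 70000;
    std::int32_t b = a * 3 - 1;

    // The byte boundary: least significant byte first, stated explicitly.
    std::uint8_t wire[4];
    for (int i = 0; i < 4; ++i)
        wire[i] = static_cast<std::uint8_t>(static_cast<std::uint32_t>(b) >> (8 * i));

    std::printf("%d -> %02X %02X %02X %02X\n", b, wire[0], wire[1], wire[2], wire[3]);
}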
The other topic here is that people will always find a way to circumvent the compiler and shoot themselves in the foot.
There are languages that make that easier or harder; C is the worst offender I've ever met (save for assembly, but that's in a league of its own). C doesn't even have a byte type! A char is a byte in C, so you can't blame the programmer (except for possibly a poor choice of C as a tool). Switch the language. C# or Delphi, on the other hand - those compilers yell at you when you're doing questionable things. And if that thing in question may work just fine while still remaining questionable, you'll at least get a warning.
Feel free to yell BULL, by the way. I've never done such a thing, because the question is not whether it'll blow up in my face but merely when. It will blow up sooner or later.
Fun fact: I've now spent about two weeks fixing a binary communication layer. Some predecessor of mine thought that using strings - data structures designed for text - to carry binary information would be a splendid idea. And then came Unicode. Trying to convert 99h to a Unicode string member yields 3Fh and some other byte I've forgotten. I bet it was a C programmer who grew up in the 60s riding the "learned it once, never relearn" mentality. Converted all of this nonsense to TArray<Byte> (Delphi nomenclature) and stuff works now.
I still wonder whether your topic is about programming in general or C in particular, as you seem insistent on issues that have long been solved by several programming languages (granted, both Delphi and C# still allow you to shoot yourself in the foot, but you have to fight hard against the compiler to do that).