Two bytes for the year, holding just the two digits, was common in the commercial world using languages like COBOL and PL/1, though even COBOL had a condensed mode (forget the exact keyword) to save values as BCD, which packed two digits into a single byte. In the scientific world we used languages like FORTRAN that store binary values, so a single byte could hold up to 256 distinct values; it was common to store the year as 1900 + byte, which was (and still is) OK from 1900 to 2155.
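To make the two encodings concrete, here is a minimal C sketch (C standing in for the COBOL/FORTRAN of the era): an offset-from-1900 year in one unsigned byte, and two decimal digits packed BCD-style into one byte, roughly as COBOL's packed decimal (COMP-3 in most dialects) stores them. The helper name pack_bcd is made up for the illustration.

#include <stdio.h>

/* Two decimal digits packed BCD-style into one byte: each digit
   occupies one nibble. (pack_bcd is a hypothetical helper name.) */
unsigned char pack_bcd(int tens, int ones) {
    return (unsigned char)((tens << 4) | ones);
}

int main(void) {
    /* Offset encoding: store the year as an unsigned byte counting
       from 1900, giving a range of 1900..2155 (1900 + 255). */
    unsigned char year_byte = 2024 - 1900;
    printf("stored %u, decoded year %d\n", year_byte, 1900 + year_byte);

    /* "99" packed into a single byte, two digits per byte. */
    unsigned char bcd = pack_bcd(9, 9);
    printf("BCD 0x%02X holds digits %d and %d\n",
           bcd, bcd >> 4, bcd & 0x0F);
    return 0;
}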
The ICL 1900 series dates were in the format ddmmmyy but they had the convention that yy >= 65 represented 1900 + yy, yy < 65 represented 2000 + yy. So, if there are any ICL 1900s still around, they are safe.
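The windowing convention is trivial to express; here is a hedged C sketch (expand_yy is a name I made up, not anything from the ICL world), which also shows why such a machine stays safe until 2064:

#include <stdio.h>

/* Sliding-window interpretation of a two-digit year, pivot 65,
   per the ICL 1900 convention described above. */
int expand_yy(int yy) {
    return (yy >= 65) ? 1900 + yy   /* 65..99 -> 1965..1999 */
                      : 2000 + yy;  /* 00..64 -> 2000..2064 */
}

int main(void) {
    printf("70 -> %d, 38 -> %d\n", expand_yy(70), expand_yy(38));
    return 0;
}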
But in the early days, byte addressability was rather uncommon. It came with the IBM 360 mainframe series, and their marketing people upset both competitors and customers by selling memory by the price per kilo: the audience eventually realized that IBM was talking about kilobytes of 8 bits, while the competitors were talking about kilowords, which might be 32 or 36 bits. (For small computers, the word might be 18 or 16 bits, but those machines were not competing against the 360 series.)
The 360 architecture did not immediately force byte addressability onto everybody. Take the Univac 1100, a significant competitor to the 360 (much less so in the US than in Europe, though): it went through the 60s, 70s and 80s without ever getting byte addressability. The later models in the series had a few instructions for register operations on a quarter word (9-bit bytes) or a sixth of a word (6-bit bytes), so you didn't have to do all the shifting and masking yourself. But there was no way to write anything less than a full 36-bit word back to RAM.
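For readers who never had to do it, here is roughly what the manual shifting and masking looked like, sketched in C with a 36-bit word simulated in the low bits of a uint64_t (all the names are mine, not Univac's). The same technique, with different field widths, applies to the 6-bit bytes of the ICL 1900 mentioned below. Note how storing one byte still means a read-modify-write of the whole word:

#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0xFFFFFFFFFULL   /* 36 one-bits */

/* Quarter words are 9 bits; quarter 0 is the most significant. */
uint64_t get_quarter(uint64_t word, int q) {
    int shift = (3 - q) * 9;            /* position of quarter q */
    return (word >> shift) & 0x1FF;     /* 9-bit mask */
}

uint64_t set_quarter(uint64_t word, int q, uint64_t value) {
    int shift = (3 - q) * 9;
    uint64_t mask = 0x1FFULL << shift;
    /* Read-modify-write of the full word: clear the field, insert. */
    return (word & ~mask & WORD_MASK) | ((value & 0x1FF) << shift);
}

int main(void) {
    uint64_t w = 0;
    w = set_quarter(w, 0, 'A');
    w = set_quarter(w, 3, 'Z');
    printf("q0=%c q3=%c word=%09llX\n",
           (char)get_quarter(w, 0), (char)get_quarter(w, 3),
           (unsigned long long)w);
    return 0;
}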
You could see similar things on a lot of machines, even those brought to market during the 70s. The DECsystem-10 and -20 were 36-bit word addressable (with some byte-related instructions, like on the late U1100). The PDP-11 had byte-addressable RAM, but the Nordic competitors from Norsk Data were 16-bit word addressable.
If you were in a Fortran environment and were able to address single bytes, my guess is that you were in the PDP-11 world. Early "scientific" IBM machines (like the 709 and 7090) were word-addressable 36-bit architectures. Even the ICL 1900 was 24-bit word addressable, but it actually had instructions for addressing 6-bit bytes within the word; the hard logic would do the masking and shifting when retrieving or updating only a quarter of a word. In RAM, a single byte would still take a full 24-bit word if you couldn't combine it with other single bytes. (But again: if you use five instruction words to save one byte of data space, you have lost the game!)
I was so happy when 2012 was over! I did lose a lot of hair from intense head scratching over dates like 01/02/03 or 01-02-03, which have at least three different interpretations. 01/02/13 reduced it to two, and the slashes raised the probability of one of them; 01-02-13 raised the probability of the other. With 13-01-02 you could almost be sure of the interpretation; 13/01/02 tended towards the other, but with less certainty. Still, you can't know for sure unless at least two of the fields are 13 or higher, but the period from 2001 to 2012 was really a nightmare wrt. interpreting dates.
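A crude way to see the ambiguity is to enumerate which field orders survive simple range checks (month <= 12, day <= 31). The following C sketch is mine, with made-up names, and it assumes a 20xx year purely for brevity:

#include <stdio.h>

/* Deliberately crude plausibility test: month and day in range. */
static int plausible(int d, int m, int y) {
    return m >= 1 && m <= 12 && d >= 1 && d <= 31 && y >= 0;
}

static void try_all(int a, int b, int c) {
    printf("%02d-%02d-%02d:", a, b, c);
    if (plausible(a, b, c)) printf("  d-m-y -> %04d-%02d-%02d", 2000 + c, b, a);
    if (plausible(b, a, c)) printf("  m-d-y -> %04d-%02d-%02d", 2000 + c, a, b);
    if (plausible(c, b, a)) printf("  y-m-d -> %04d-%02d-%02d", 2000 + a, b, c);
    printf("\n");
}

int main(void) {
    try_all(1, 2, 3);    /* all three readings survive           */
    try_all(13, 1, 2);   /* m-d-y ruled out: no month 13;
                            two readings still survive            */
    return 0;
}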
I really wish that the transition to ISO 8601 (i.e. the yyyy-mm-dd style) would go faster! I know that the International Organization for Standardization, ISO, makes a lot of USAnians stall: it wasn't invented here! But couldn't the American National Standards Institute provide an American standard with the same contents, but with a True American Standard reference that can be used to promote it in the USA?
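For what it's worth, emitting ISO 8601 dates needs nothing exotic; standard C's strftime has been able to produce the unambiguous form for decades:

#include <stdio.h>
#include <time.h>

/* Print today's date in ISO 8601 (yyyy-mm-dd) using only standard C. */
int main(void) {
    char buf[11];                      /* "yyyy-mm-dd" + NUL */
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    if (t && strftime(buf, sizeof buf, "%Y-%m-%d", t))
        puts(buf);
    return 0;
}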
But a lot of standards do have several standard numbers, because they have been developed in close cooperation between two or more standards organizations. Quite a few telecommunication standards have both an ISO number and an ITU "recommendation" number (like X.509). Some IEEE standards are identical to ISO standards. In Germany, DIN (Deutsches Institut für Normung) is the German member body of ISO. They were very early with some standards (like DIN 45500, which all old-time hifi freaks know well); parts of it were made into ISO standards with different numbers.
In a few cases, the standards have small "editorial" differences, such as whether the final part(s) are called an "appendix" or an "annex", or the mandatory definitions of certain terms such as MAY and MUST. There may be other formal requirements, such as ITU referring to specific regulatory bodies in the telecom world, which is against ISO principles, so those references are replaced by terms like "the management organization", not identifying a specific one. The technical content of the standard is completely unaffected by these differences.
Sometimes you may see national standards such as Norsk Standard 646: NS 646 is identical to ISO 646 ("ASCII") but with an addendum defining its use in Norway. Fortunately, 646 was unused in the NS number series; in other cases, the NS number differs from the ISO number for the same technical content. And for some standards, the English text isn't even translated to Norwegian for the NS version.