The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
I don't think a "system" counts as a program. Granted, SABRE is possibly the oldest civilian system in existence, but the title of longest continuously running software probably goes to NASA's various interstellar probes. Earth-bound software is replaced too frequently to even come close to the NASA stuff. 41 years (as of this August/September) and counting... They expect the probes to lose power some time in 2025.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
Millennium bug? A program storing dates with a 4-digit year (or actually anything other than the 2-digit "mod 100" style) encountered no problems with the new millennium. Assuming, of course, that the OS didn't crash or deliver the wrong values. My guess is that the OS used in 1958 was so primitive that it had few if any built-in calendar-related functions beyond reporting the current date and time. As long as that report didn't use a mod 100 year value, you'd be fine.
The University of Copenhagen ran a huge Univac 1100 mainframe, from the days when CMOS and battery-backed real-time clocks hadn't been invented yet. So if the machine was rebooted (which could be due to normal maintenance), the operator had to set the current time manually. At one reboot, the operator happened to mistype the year, setting the machine 10 years into the future. It wouldn't have been that dramatic if they hadn't - before the mistake was discovered - run the program deleting all files that hadn't been accessed for six months. ("On a clear disk, you can seek forever"...)
There is a second part to this story: The data wasn't actually deleted. Storage for large systems was heavily tape-based in those days. Univac had a very compact format where all the metadata - the catalog information with pointers to the data blocks - was kept on disk. Only the data blocks themselves were written to tape ... without any metadata. So all the data blocks were there, but with no pointers to them. No indication of which data blocks belonged to which file.
(This was a well-known "real life" story in my student days - my university had two huge Univac 1100 mainframes; the operators loved to tell about this incident. I never saw any "hard" documentation; if anyone can point me to reliable sources, I'd be happy!)
Trust me, back in those days, a mod 100 year would have been used - memory was small and damn expensive - you wouldn't waste a byte per date! (Your code might have worked with year >= 50 meaning 1900 + year and year < 50 meaning 2000 + year, but in the fifties that kind of windowing was very, very unlikely - and that was a big part of the Millennium Bug.)
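For anyone who hasn't met it: here is a minimal sketch of that windowing trick, in C for illustration. The function name and the pivot of 50 are mine; real systems each picked their own cut-off.

```c
#include <stdio.h>

/* Expand a two-digit "mod 100" year into a full year using a fixed
 * pivot: values >= pivot are taken as 19xx, values below it as 20xx.
 * A pivot of 50 gives a 1950..2049 window. */
int expand_year(int yy, int pivot)
{
    return (yy >= pivot) ? 1900 + yy : 2000 + yy;
}

int main(void)
{
    printf("%d\n", expand_year(95, 50)); /* 1995 */
    printf("%d\n", expand_year(5, 50));  /* 2005 */
    return 0;
}
```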
Bear in mind that in those days it was mostly punch cards - which had 12 rows, so a month could be encoded in a single column to save space!
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
Naah... You worry about those two bytes when you store time stamps in thousands or millions of records. An OS that provides a single "current date" value does not chop off two digits to save two bytes in the single date value presented to applications - the chopping off would take far more than two bytes of code! I have worked with several OSes from the early 1970s, and they all provided 4-digit year values. If two digits were chopped off, that was done by the application, not by the real-time clock in the OS.
I rather question whether a 1958-vintage system really had a real-time clock at all, or an OS that provided the current time in a dd-mm-yyyy format. My guess is that after reboot, the operator could set the startup date/time in a single well-known location, alongside a register counting machine cycles since startup. Remember that machines of those days did not have byte addressing, and yyyymmdd in decimal format fits well within a word, even on 32-bit machines (most were 36-bit at that time). Storing yymmdd in a single word saved no space compared to yyyymmdd.
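The arithmetic checks out: the largest decimal yyyymmdd value, 99991231, needs only 27 bits. A quick sketch of the idea (in C, names mine, purely illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a date as a plain decimal yyyymmdd integer. The largest
 * possible value, 99991231, needs 27 bits, so it fits in a 32-bit
 * word with room to spare (and trivially in a 36-bit word). As a
 * bonus, the values sort chronologically. */
uint32_t pack_date(int year, int month, int day)
{
    return (uint32_t)(year * 10000 + month * 100 + day);
}

int main(void)
{
    printf("%u\n", pack_date(1958, 9, 1)); /* 19580901 */
    return 0;
}
```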
Of course MOACS itself could choose to chop off digits to save space in millions of records, probably to save magnetic tape. I doubt that it held zillions of records in memory at the same time! Also note that it is written in COBOL, The Great Promoter of BCD - PACKED DECIMAL uses 4 bits per digit, so only a single byte would be saved by a yy format. BCD could save quite a few bytes per accounting record with lots of numeric values. Once you halve the space for numeric entities by using BCD, chances are lower that you'd go further and save a single byte by using mod 100 year values.
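To make the space argument concrete, here is a rough sketch of the 4-bits-per-digit idea (not real COBOL internals, just an illustration in C):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack two decimal digits into one byte, BCD style: high nibble
 * holds the tens digit, low nibble the units digit. A yy year
 * therefore costs one byte and yyyy costs two, so dropping the
 * century saves only a single byte per record. */
uint8_t pack_bcd(int value) /* 0..99 */
{
    return (uint8_t)(((value / 10) << 4) | (value % 10));
}

int unpack_bcd(uint8_t b)
{
    return (b >> 4) * 10 + (b & 0x0F);
}

int main(void)
{
    uint8_t y = pack_bcd(58);
    printf("0x%02X -> %d\n", y, unpack_bcd(y)); /* 0x58 -> 58 */
    return 0;
}
```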
Anyway: In lots of applications, a time stamp is just a label; you don't do arithmetic on it. In the days of Pascal, with enumeration values as a primary non-numeric data type, I argued with fervor that April is not half of September, but followers of this new "C" language protested: Why not? As long as a date is just a label, there is no millennium problem. A person reading the label will know from the context that "95" is ten years before "05" rather than ninety years later.
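The C side of that argument is easy to demonstrate; C happily does arithmetic on month "labels" (a toy sketch, enum names mine):

```c
#include <stdio.h>

enum month { JAN = 1, FEB, MAR, APR, MAY, JUN,
             JUL, AUG, SEP, OCT, NOV, DEC };

int main(void)
{
    /* C treats enum values as plain integers, so "half of
     * September" compiles and, by integer division, even
     * "equals" April - exactly the arithmetic-on-labels
     * that Pascal's enumeration types forbade. */
    if (SEP / 2 == APR)
        printf("C says April is half of September!\n");
    return 0;
}
```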
When we entered the new millennium, quite a few people (not limited to diehard preppers!) had filled their basements with canned food and water bottles, bought freestanding propane heaters etc., expecting society's entire infrastructure to break down at midnight. We know it didn't happen, even though numerous computer systems were NOT updated to handle year 2000 - none of those I worked on, none of those I depended on. That was either because they had never cared to save those two bytes (or nibble), or because they never did date arithmetic, or because they had long since been prepared for it, making the 100 years run from 1950 to 2050 (I saw that in a couple of systems long before the Millennium Panic).
The Millennium Panic was essentially driven by users who wanted to have their systems upgraded, but those sitting on the money said "No!". By creating a big panic that the money people understood nothing of, lots of both software and hardware got updated ahead of schedule, even if it wasn't at all affected by the year. (And which hardware was millennium-dependent? Lots of hardware was thrown out!) I consider at least half of the millennium issues to be fictitious, just a power tool to force through upgrades that would otherwise have come significantly later.
Two bytes for the year to hold only two digits was common in the commercial world using languages like COBOL and PL/I, but even COBOL had a condensed mode (I forget the exact keyword) to save as BCD, which gave two digits in a single byte. In the scientific world we used languages like FORTRAN that use binary values, so a single byte could hold up to 256 values; it was common to use 1900 + byte for the year, which was (and still is) OK from 1900 to 2155.
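In other words (a hedged sketch in C, not any particular FORTRAN runtime):

```c
#include <stdio.h>

/* Store the year as an unsigned byte offset from 1900.
 * 255 is the largest offset, so the scheme covers
 * 1900 + 0 = 1900 through 1900 + 255 = 2155. */
int year_from_offset(unsigned char offset)
{
    return 1900 + offset;
}

int main(void)
{
    printf("%d\n", year_from_offset(0));   /* 1900 */
    printf("%d\n", year_from_offset(255)); /* 2155 */
    return 0;
}
```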
The ICL 1900 series dates were in the format ddmmmyy but they had the convention that yy >= 65 represented 1900 + yy, yy < 65 represented 2000 + yy. So, if there are any ICL 1900s still around, they are safe.
But in the early days, byte addressability was rather uncommon. It came with the IBM 360 mainframe series - and their marketing people upset both competitors and customers by selling memory at a price per "kilo", when the audience realized that IBM was talking about kilobytes of 8 bits while the competitors were talking about kilowords, which might be 32 or 36 bits. (For small computers, the word might be 18 or 16 bits, but they were not competing against the 360 series.)
The 360 architecture did not immediately force byte addressability onto everybody. Take the Univac 1100, a significant competitor to the 360 (much less so in the US than in Europe, though) - it went through the 60s, 70s and 80s without ever getting byte addressability. The later models in the series had a few instructions for register operations on a quarter word (9-bit bytes) or a sixth of a word (6-bit bytes), so you didn't have to do all the shifting and masking yourself. There was no way to write back to RAM anything less than a full 36-bit word.
You could see similar things on a lot of machines, even those brought to market during the 70s. The DECsystem-10 and -20 were 36-bit word-addressable (with some byte-related instructions like on the late U1100). The PDP-11 had byte-addressable RAM, but the Nordic competitors from Norsk Data were 16-bit word-addressable.
If you were in a Fortran environment and were able to address single bytes, my guess is that you were in the PDP-11 world. Early "scientific" IBM machines (like the 709 and 7090) were word-addressable 36-bit architectures. Even the ICL 1900 was 24-bit word-addressable, but actually had instructions for addressing 6-bit bytes within the word; the hard logic would do the masking and shifting when retrieving or updating only a quarter of a word. In RAM, a single byte would still take a full 24-bit word if you couldn't combine it with other single bytes. (But again: if you use five instruction words to save one byte of data space, you lost the game!)
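For readers who never had to do it by hand, this is roughly the shifting and masking involved, shown for a 6-bit "byte" inside a 24-bit word. A C sketch under my own naming; the real machines did this in hard logic:

```c
#include <stdint.h>
#include <stdio.h>

#define BYTE_BITS 6
#define BYTE_MASK 0x3Fu /* low 6 bits */

/* Read quarter-word n (0 = most significant) out of a 24-bit word. */
unsigned get_sixbit(uint32_t word, int n)
{
    int shift = (3 - n) * BYTE_BITS;
    return (word >> shift) & BYTE_MASK;
}

/* Write quarter-word n back. Note that this is a full-word
 * read-modify-write, which is exactly why a lone 6-bit byte
 * still occupied a whole 24-bit word in RAM. */
uint32_t set_sixbit(uint32_t word, int n, unsigned value)
{
    int shift = (3 - n) * BYTE_BITS;
    word &= ~(BYTE_MASK << shift);
    word |= (value & BYTE_MASK) << shift;
    return word;
}

int main(void)
{
    uint32_t w = 0;
    w = set_sixbit(w, 0, 052);        /* octal, as the old hands wrote it */
    printf("%u\n", get_sixbit(w, 0)); /* 42 */
    return 0;
}
```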
I was so happy when 2012 was over! I did lose a lot of hair from intense head scratching over dates like 01/02/03 or 01-02-03, which have at least three different interpretations. 01/02/13 reduced it to two, and the slashes raised the probability of one of them; 01-02-13 raised the probability of the other. With 13-01-02, you could almost be sure of the interpretation; 13/01/02 tended towards the other, but with less certainty. Still, you can't know for sure unless two of the parts are 13 or above, so the period from 2001 to 2012 was really a nightmare wrt. interpreting dates.
I really wish that the transition to ISO 8601 (i.e. yyyy-mm-dd style) would go faster! I know that ISO, the International Organization for Standardization, makes a lot of USAnians stall: It isn't invented here! But doesn't the American National Standards Institute provide an American standard with the same contents, but with a True American Standard reference that can be used to promote it in the USA?
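In the meantime, producing ISO 8601 dates yourself is trivial in most languages; in C, the standard strftime call does it directly (a minimal example):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[11]; /* "yyyy-mm-dd" plus terminating NUL */
    time_t now = time(NULL);
    /* %Y-%m-%d is exactly the ISO 8601 calendar date format. */
    strftime(buf, sizeof buf, "%Y-%m-%d", localtime(&now));
    printf("%s\n", buf);
    return 0;
}
```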
But a lot of standards do have several standard numbers, because they have been developed in close cooperation between two or more standards organizations. Quite a few telecommunication standards have both an ISO number and an ITU "recommendation" number (like X.509). Some IEEE standards are identical to ISO standards. In Germany, DIN (Deutsches Institut für Normung) is the German member body of ISO. They were very early with some standards (like DIN 45500, which all old-time hifi freaks know well) - parts of it were made into ISO standards with different numbers.
In a few cases, the standards have small "editorial" differences, such as whether the final part(s) are called an "appendix" or an "annex", or mandatory definitions of certain terms such as MAY and MUST. There may be other formal requirements, such as ITU referring to specific regulatory units in the telecom world, which is against ISO principles, so those are replaced by terms like "the management organization" - not identifying a specific one. The technical content of the standard is completely unaffected by these differences.
Sometimes, you may see national standards such as Norsk Standard 646 - NS 646 is identical to ISO 646 ("ASCII") but with an addendum defining its use in Norway. Fortunately, 646 was unused in the NS number series; in other cases, the NS number differs from the ISO number, for the same technical content. And for some standards, the English text isn't even translated to Norwegian for the NS version.