Looking at ProjectEuler and the OEIS website, it seems that some programming languages offer shorter methods of doing something (mathematical) (and I'm not just talking about the obfuscatory syntax). One language that comes to mind is J; another is Maple. It seems that you can do a lot with just a few short calls. Why isn't that added to most languages? Is there a repository that adds the most optimal method to do something into a library for that language so that it may be used?
What is the longest running program? I assume the programs developed for Voyager 1 and 2 are pretty much up there. What other types are there? Is there a never-ending program that is calculating all the primes and storing them in a library somewhere?
Did you actually read him?... His posts don't make much more sense than yours.
If something has a solution... why worry about it? If it has no solution... what's the point of worrying about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
Is there a never-ending program that is calculating all the primes and storing them in a library somewhere?
Well, there was Deep Thought, which determined the answer to life, the universe and everything... but that did eventually complete.
Now [given the answer] they're trying to figure out the ultimate question; however, that will take a much larger computer.
#1. It is odd that some language-specific features don't spread more broadly (APL's matrix manipulation is another example). I guess that the general language users don't need them, so they don't migrate into the more common languages. Some stuff does migrate (quicksorts and whatnot), so I assume it's just need vs. experimental languages.
I don't think a "system" counts as a program. Granted, SABRE is possibly the oldest civilian system in existence, but the longest continuously running software probably goes to NASA's various interstellar probes. Earth-bound software is replaced too frequently to even come close to the NASA stuff. 41 years (as of this August/September) and counting... They expect the probes to lose power some time in 2025.
Millennium bug? A program storing dates with a 4-digit year (or actually anything other than a 2-digit "mod 100" style) encountered no problems with the new millennium. Assuming, of course, that the OS didn't crash or deliver the wrong values. My guess is that the OS used in 1958 was so primitive that it had few if any built-in calendar-related functions beyond reporting the current date and time. As long as that report didn't use a mod 100 year value, you'd be fine.
The University of Copenhagen ran a huge Univac 1100 mainframe, from the days when CMOS and battery-backed real-time clocks hadn't been invented yet. So if the machine was rebooted (which could be due to normal maintenance), the operator had to set the current time manually. At one reboot, the operator happened to mistype the year, setting the machine 10 years into the future. It wouldn't have been that dramatic if they hadn't - before the mistake was discovered - run the program deleting all files that hadn't been accessed for six months. ("On a clear disk, you can seek forever"...)
There is a second part to this story: The data wasn't actually deleted. Storage for large systems was heavily tape based in those days. Univac had a very compact format where all the metadata, the catalog information with pointers to the data blocks, were kept on disk. Only the data blocks themselves were written to tape ... without any metadata. So all the data blocks were there, but with no pointers to them. No indication of which data blocks belonged to which file.
(This was a well known "real life" story in my student days - my U had two huge Univac 1100 mainframes; the operators loved to tell about this incident. I never saw any "hard" documentation. If anyone can point me to reliable sources, I'd be happy!)
Trust me, back in those days, a mod 100 year would have been used - memory was small and damn expensive - you wouldn't waste a byte per date! (Your code might have worked with year >= 50 == 1900 + year, year < 50 == 2000 + year, but in the fifties that was very, very unlikely - that was a big part of the Millennium Bug)
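The fixed-window trick mentioned above can be sketched in a few lines of Python. The pivot of 50 is just the hypothetical value from the post; real systems chose their own pivots (see the ICL 1900 story below, which used 65):

```python
def expand_year(yy, pivot=50):
    """Expand a two-digit year using a fixed window.

    Two-digit years at or above the pivot are assumed to belong
    to the 1900s; years below it to the 2000s. The pivot value
    here is illustrative, not a universal standard.
    """
    return 1900 + yy if yy >= pivot else 2000 + yy

print(expand_year(58))  # 1958
print(expand_year(5))   # 2005
```

Note that the window only buys another hundred years: with pivot 50, this scheme breaks again in 2050.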
Bear in mind that in those days it was mostly punch cards - which had 12 rows, so a month could be encoded in a single column to save space!
Naah... You worry about those two bytes when you store time stamps in thousands or millions of records. An OS that provides a single "current date" value does not chop off two digits to save two bytes in the single date value presented to applications. The chopping off would take far more than two bytes of code! I have worked with several OSes from the early 1970s, and they all provided 4-digit year values. If two digits were chopped off, that was done by the application, not by the real-time clock in the OS.
I rather question whether a 1958 vintage system really had a real-time clock at all, and an OS that provided the current time in a dd-mm-yyyy format at all. My guess is that after reboot, the operator could set the startup date/time in a single well-known location, alongside a register counting machine cycles since startup. Remember that machines of those days did not have byte addressing, and yyyymmdd in decimal format fits well within a word, even on 32-bit machines (most were 36 bits at that time). Storing yymmdd in a single word saved no space compared to yyyymmdd.
Of course MOACS itself could choose to chop off digits to save space in millions of records, probably to save magnetic tape. I doubt that it held zillions of records in memory at the same time! Also note that it is written in COBOL, The Great Promoter of BCD - PACKED DECIMAL uses 4 bits to the digit, so only a single byte would be saved by a yy format. BCD could save quite a few bytes per accounting record with lots of numeric values. Once you halve the space for numeric entities by using BCD, chances are smaller that you go further to save a single byte by using mod 100 year values.
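To illustrate the "4 bits to the digit" point, here is a minimal Python sketch of packing decimal digits two to a byte. This is a simplification: real COBOL PACKED-DECIMAL (COMP-3) also carries a sign nibble in the last byte, which is omitted here:

```python
def pack_bcd(n):
    """Pack a non-negative integer into packed-BCD bytes:
    two decimal digits per byte, four bits per digit.
    (Simplified: no COMP-3 sign nibble.)"""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:
        digits.insert(0, 0)  # pad to an even digit count
    return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

# A two-digit year fits in one byte; a four-digit year needs two,
# so dropping the century saves exactly one byte per date.
print(pack_bcd(99))    # b'\x99'
print(pack_bcd(1999))  # b'\x19\x99'
```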
Anyway: In lots of applications, a time stamp is just a label; you don't do arithmetic on it. In the days of Pascal, with enumeration values as a primary non-numeric data type, I argued with fervor that April is not half of September, but followers of this new "C" language protested: Why not? As long as a date is just a label, there is no millennium problem. A person reading the label will know from the context that "95" is ten years before "05" rather than ninety years later.
When we entered the new millennium, quite a few people (not limited to diehard preppers!) had filled their basements with canned food and water bottles, bought freestanding propane heaters etc., expecting the entire infrastructure of society to break down at midnight. We know it didn't happen, even though numerous computer systems were NOT updated to handle year 2000 - none of those I worked on, none of those I depended on. That was either because they had never cared to save those two bytes (or that nibble), or they never did date arithmetic. Or they had long since been prepared for it, making the 100 years run from 1950 to 2050 (I saw that in a couple of systems long before the Millennium Panic).
The Millennium Panic was essentially driven by users who wanted to have their systems upgraded, but those sitting on the money said "No!". By creating a big panic that the money people understood nothing of, lots of both software and hardware was updated ahead of schedule, even if it wasn't at all affected by the year. (And which hardware was millennium dependent? Lots of hardware was thrown out!) I consider at least half of the millennium issues to be fictitious, just a power tool to force through upgrades that would otherwise have come significantly later.
Two bytes for the year to hold only two digits was common in the commercial world using languages like COBOL and PL/1, but even COBOL had a condensed mode (I forget the exact keyword) to save as BCD, which gave two digits in a single byte. In the scientific world we used languages like FORTRAN that use binary values, so a single byte could hold up to 256 distinct values; it was common to use 1900 + byte for the year, which was (and still is) OK from 1900 to 2155.
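The binary-offset scheme above can be sketched in Python; the function names are just illustrative. An unsigned byte holds 0..255, so the representable range is 1900 through 1900 + 255 = 2155:

```python
def year_to_byte(year):
    """Store a year as an unsigned-byte offset from 1900.
    Covers 1900 through 2155 (1900 + 255)."""
    offset = year - 1900
    if not 0 <= offset <= 255:
        raise ValueError("year outside representable range 1900-2155")
    return offset

def byte_to_year(b):
    """Recover the year from its one-byte offset."""
    return 1900 + b

print(byte_to_year(year_to_byte(1970)))  # 1970
print(byte_to_year(255))                 # 2155
```

Unlike the mod-100 window, this encoding is unambiguous within its range; it simply runs out in 2155 rather than rolling over silently.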
The ICL 1900 series dates were in the format ddmmmyy but they had the convention that yy >= 65 represented 1900 + yy, yy < 65 represented 2000 + yy. So, if there are any ICL 1900s still around, they are safe.