He has himself given that explanation. Of course he may have made it up.
Yet you present a different context, "64K was a lot of memory", which would make a reference to ten times as much memory even more reasonable (virtual memory on PCs was unknown at that time). But I find the statement rather unlikely in that context; frankly, it would make far less sense there. And yet, if it was made in that context, it would be much easier to defend.
I have found no sources documenting it from 1981 - the earliest reference is 1985.
But that is the fun of undocumented quotes: they can be argued over forever! And some day the laugh sticks in your throat... Like the famous Thomas J. Watson (IBM CEO for ages) quote about the world needing maybe five computers: if you claimed today that five publicly available cloud offerings, or five different social networks, were sufficient, no one would laugh at you.
Interesting. I found many references to 1981. Here's a sample: Google[^]
Regardless, after thirty-plus years it can sometimes be difficult to find definitive references. Given my (foggy) memory, I can understand that. I used to have stacks and stacks of old EE Times and IEEE Transactions on Microprocessors, but I had to unload them all three moves ago.
Talking about 64K ... but make that 64K bits, please...
Around 1980, RAM chips grew from 16K bits to 64K bits. However, the 64K chips were badly plagued by cosmic alpha radiation, causing the microscopic (in those days) dynamic-RAM capacitors to discharge, which produced a lot of bit errors. I worked on a 16-bit machine that had self-correcting memory: each 16-bit word was protected by 6 error-correcting bits.
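That "16 + 6" arrangement matches a classic single-error-correct, double-error-detect (SEC-DED) code: five Hamming check bits cover the 16 data bits, and a sixth overall parity bit tells single-bit errors apart from double-bit ones. Here is a minimal sketch of the idea in TypeScript; the bit layout is my illustration, not the actual machine's documented design:

```typescript
// SEC-DED sketch: 16 data bits + 5 Hamming check bits + 1 overall parity
// bit = 22 bits per word, matching the "16 + 6" layout described above.
// Positions 1..21 form the Hamming codeword; the powers of two
// (1, 2, 4, 8, 16) are check bits, the rest carry data. Position 0 holds
// the overall parity bit that distinguishes single from double errors.

function encode(data: number): number[] {
  const code = new Array<number>(22).fill(0);
  let d = 0;
  for (let pos = 1; pos <= 21; pos++) {
    if ((pos & (pos - 1)) !== 0) code[pos] = (data >>> d++) & 1; // data bit
  }
  for (let p = 1; p <= 16; p <<= 1) {
    let parity = 0;
    for (let pos = 1; pos <= 21; pos++) if (pos & p) parity ^= code[pos];
    code[p] = parity; // even parity over every position containing bit p
  }
  code[0] = code.reduce((a, b) => a ^ b, 0); // overall parity of the word
  return code;
}

// Returns the data word, correcting any single flipped bit on the way.
function decode(code: number[]): number {
  let syndrome = 0;
  for (let p = 1; p <= 16; p <<= 1) {
    let parity = 0;
    for (let pos = 1; pos <= 21; pos++) if (pos & p) parity ^= code[pos];
    if (parity !== 0) syndrome |= p; // failing checks point at the bad bit
  }
  const overall = code.reduce((a, b) => a ^ b, 0);
  if (syndrome !== 0 && overall !== 0) code[syndrome] ^= 1; // single error: flip back
  // (syndrome != 0 with overall == 0 would signal an uncorrectable double error)
  let data = 0, d = 0;
  for (let pos = 1; pos <= 21; pos++) {
    if ((pos & (pos - 1)) !== 0) data |= code[pos] << d++;
  }
  return data >>> 0;
}

// An alpha hit discharging one capacitor is just one flipped bit:
const word = encode(0xBEEF);
word[9] ^= 1;                            // simulate the upset
console.log(decode(word).toString(16));  // prints "beef" again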
For several years, people feared that we had reached the limit of RAM density: that alpha radiation made it impossible to build denser chips with smaller geometries.
After several years, it struck me that I hadn't heard those worries for a long time, and by then there were 256K RAM chips on the market. To this day, no one has been able to tell me what happened. How can we today make Gbit-size RAM chips that are not knocked out by alpha radiation? Are today's chips built with a shield that stops alpha particles? Or was the alpha explanation wrong, and there was another, curable, reason for the random discharge of capacitors?
Not all silicon is created equal. In the old days you are talking about, CMOS was just beginning to appear and was not yet widely accepted.
The little processor I have been using all along must have been the first CMOS processor ever. That gave it some unique properties, like using very little power and having higher radiation resistance. There were even special radiation-hardened versions made in a special process called silicon-on-sapphire.[^] These properties made it the first processor to be used in space.
Today practically everything is CMOS or a more advanced variant of CMOS; otherwise most devices would go up in flames from power requirements a thousand times higher. My best guess is that higher radiation resistance was yet another reason why CMOS 'won'.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
In 1980 I built myself a UK101 kit computer, with a 6502 processor. The motherboard had 8 pairs of sockets, 1KB per pair (4-bit chips), giving a grand total of 8KB RAM. Pretty soon this was a limitation on what I wanted to do, so I created a solution: buy another 16 chips, bend up the "chip select" pin on each one through 90 degrees (carefully; do it too fast and the pin will snap) so it stuck out sideways; then (carefully) solder the remaining pins directly onto a memory chip in the socket below it. Carefully, as too much heat will wreck the chip. Then take a wire and connect the 16 "sticking-out" pins to the next pin-out of the main addressing chip, so that the "extra" chips occupy the next 8KB of memory space.
I used the same technique to double the 1KB of display memory to 2KB, and with a couple of cuts on the motherboard and another jumper wire, doubled the video access rate to the memory and extended the address range. Each video character was now half the height it was previously, giving 32 rows of 64 characters instead of just 16.
Oh, and the 6502 (by the time I built my UK101) was quite capable of running at 2MHz, twice the UK101's design speed of 1MHz. Again, one cut of the motherboard and a jumper wire to the next pin-out of the main timer chip and hey presto: double the clock speed. I did the same (with a rotary selection switch) for the RS232 output, speeding up tape cassette output from 300 baud to 600 or 1200.
Never experienced any overheating issues, but then it was running as a "naked" board with no case...
I will have to wean myself from <table>s and replace them with <div>s.
I wonder about that.
Unless <table> is being deprecated, I would use whichever is most convenient and most easily controlled at the time. The table model is rather convenient for PHP-generated output from database record sets; very predictable rendering (see the sketch after this post).
Rather than wean yourself away, just master both methods of handling the problem and use what you think is best.
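As a hedged illustration of that record-set-to-table pattern (written in TypeScript rather than PHP just to keep one language in this thread; the record shape and field names are made up):

```typescript
// Hypothetical record shape; in practice this would come from a database query.
interface Row { id: number; name: string; price: number; }

// Render a record set as an HTML table: one <tr> per record, one <td> per
// field. Each cell lands in a fixed grid position, which is the "predictable
// rendering" the table model gives you.
function recordsToTable(rows: Row[]): string {
  const header = "<tr><th>ID</th><th>Name</th><th>Price</th></tr>";
  const body = rows
    .map(r => `<tr><td>${r.id}</td><td>${r.name}</td><td>${r.price.toFixed(2)}</td></tr>`)
    .join("");
  return `<table>${header}${body}</table>`;
}

console.log(recordsToTable([{ id: 1, name: "Widget", price: 9.95 }]));
```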
As I delve deeper and deeper into W3.CSS, I find myself confronted with difficulties when laying out the structure of the page using <div>s. This is the same problem I've faced before. I'm leaning toward performing layout with <table>s and ignoring W3.CSS.
I disagree totally with "<table>s for tabular data; <div>s for layout." That paradigm has been invalid since it was first uttered. It is semantics, a play on words. I use <table>s both for layout and for tabular data. My experience has been that <div>s are far more costly than <table>s for layout.
I'm reminded of a student of mine who claimed that a binary search was always the most efficient way to search a table. Unfortunately, he neglected the cost of sorting the table and of maintaining the sorted order during CRUD operations.
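To make the neglected cost concrete, a small sketch (illustrative only, not the student's code): the O(log n) lookup is only half the story, because keeping an array sorted makes every insert O(n).

```typescript
// The cheap part: O(log n) lookup in a sorted array.
function binarySearch(a: number[], key: number): number {
  let lo = 0, hi = a.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (a[mid] === key) return mid;
    if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
  }
  return -1; // not found
}

// The hidden part: O(n) per insert, because every element after the
// insertion point has to shift to keep the array sorted.
function sortedInsert(a: number[], key: number): void {
  let i = 0;
  while (i < a.length && a[i] < key) i++;
  a.splice(i, 0, key);
}
```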
In the past, I've suggested that the "table" tag should have been named "grid". Then this foolish non-argument would never have arisen.
The <table> versus <div> arguments were raised by proponents of a strict interpretation of the separation of structure from presentation from behavior. With the advent of CSS grid, that separation no longer holds.
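For reference, here is what "layout without tables" looks like with CSS grid; a minimal sketch assuming a browser environment, written as TypeScript so the grid properties are visible inline, though ordinarily they would live in a stylesheet:

```typescript
// A two-column sidebar/content layout, the kind people once built with a
// <table>, expressed as a CSS grid on a plain <div>.
const page = document.createElement("div");
page.style.display = "grid";
page.style.gridTemplateColumns = "200px 1fr"; // fixed sidebar, fluid content
page.style.gap = "8px";

const sidebar = document.createElement("div");
sidebar.textContent = "sidebar";
const content = document.createElement("div");
content.textContent = "content";

page.append(sidebar, content);
document.body.append(page);
```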
HTML has a group of elements called semantic elements (like TABLE) whose names define what they are for. DIV and SPAN are the non-semantic elements, and for that reason are perfect candidates for layout, especially if you need a responsive layout.
It is true that you can redefine the behavior of every element, if you go deep enough, but even then you can't break the expected parent-child hierarchy of certain semantic elements, like TABLE.
While it is absolutely true that building a TABLE-based layout is very quick and clean, it won't hold up the moment you move to small screens (responsiveness)...
I have over 15 years of experience with these things and have tried every option: DIVs are the best for layout...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
Come on. A table can easily be made responsive through clever CSS classes on its rows and columns. If you want to move, say, a column into the next row, all you need is a not-so-little JS function that can basically rewrite the HTML based on screen size.
See how easy it is.
Now where is the "let me write a senseless solution while pretending to be serious and a genius" icon?