|
I see. Thanks for the info. I never dealt with the 68010 at all. I encountered practically every other one in the family up to the 68040 but not the 68010. Mostly that was with various VME bus systems and some STD bus systems for the older ones.
|
|
|
|
|
Now that you remind me of the PowerPC - I had almost forgotten that one!
It certainly deserves to be remembered: a very clean hardware architecture, with core modules that always had to be present, and then a very regular and highly standardized internal interconnect for extending the core - similar to an I/O bus (although quite different in the details), but much closer to the core. The developer could add modules for, say, hardware implementation of matrix multiplication, FFT, transcendental functions... depending on the specific application area for that chip variant, and leave "unnecessary" modules out, without compromising the RISC nature of the core, in a very clean and systematic way.
Some designers of ARM based (and probably other) embedded style chips have implemented on-chip "peripherals" (which may have nothing to do with I/O, but e.g. handle encryption), in ways that conceptually resemble the Power architecture, but never nearly as closely integrated with the core as in the Power.
Years ago, there was a Windows NT implementation for the PowerPC. At that time, I was still hoping for it to squeeze out the x86 (anything that could squeeze out the x86 would be great!)... It failed. I was hoping for Apple to go for the PowerPC: they made a try, but decided that accepting the x86 mess would give them higher profits... However, IBM and maybe others made a number of highly parallelized mainframe CPUs based on arrays of PowerPC chips. That didn't help to squeeze out the x86, though...
Then, when I check Wikipedia, to my surprise I see that as late as November 2015, a new version of the Power ISA standard was published. It is unclear to me whether IBM still develops Power based machines, but at least they did in 2015. So the architecture is certainly not completely dead - but I guess that the primary users of Power based machines couldn't care less about the processor architecture, as long as their problems are solved. (Which is also the reason the x86 mess has survived 30 years longer than it deserved!)
Frequently, I see people point to VHS as a prime example of the second best (or third, or fourth...) winning the battle. In the CPU world, the x86 is more like the fifth or sixth. It still won.
(Youngsters out there: If "VHS" is Greek to you, ask your grandpa about it )
|
|
|
|
|
Rick York wrote: Back then no one had heard of the word "overclocked." That's not true. The same little 8 bit processor that I'm working with was often overclocked. It is a CMOS processor, which makes it very flexible with regard to input voltage and clock frequency. At 10 V it went up to 6.4 MHz without being overclocked. Not bad for a processor from 1976.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Yes, it is true. While it did happen - CPUs were sometimes clocked higher than their rating - the word "overclocked" was not used. It was also not that common, because CPUs usually did not have heat sinks back then. The IBM PC did not have a heat sink on its CPU for several generations, not until the 80486 became the standard processor. I remember when I bought a higher clocked 80287, they called it "turbo-speed" I believe. The standard chips were clocked at 6 MHz and this one at 12 MHz, and it was unique in that it had a heat sink attached to the FPU. The stock ones did not, nor did the 80387 chips.
|
|
|
|
|
I worked with the Z8000 for about 5 years, and it was a damn good chip (at least compared to the Z80 I was also working with - much better memory access, for starters).
Pity it was overshadowed in the media by the 68-series; it could have fared much better if it had been more widely used. Very nice instruction set!
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I was studying computer architecture in graduate school at the time, and we spent many class periods discussing the various chips and instruction sets of the day. I recall the Z8000 did have some attractive attributes, but I can't remember the specifics now.
|
|
|
|
|
If you really want to be impressed by fancy architectures, try to get hold of a detailed description of the iAPX 432, which came to market in 1981. A fully object oriented CPU - to the degree that if you mailed an object to another process (both the communication and process concepts were realized by the CPU), you didn't mail a copy: you lost access to it yourself!
The architecture was extremely fancy; the implementation not quite as successful. Rumours were that a complete 432 CPU software simulator running on an 8086 was faster than the first physical CPUs sent to market. (Consider this a rumour; I can't document it.) But Intel people have been quite clear that even though the 432 was a total flop, they learned so much about how not to do things that when they soon after developed the MMU for the 386, they got it right.
Sometimes I think: what if Intel picked up the ideas from the 432 and set out to make an object oriented CPU the right way today? Of course you couldn't just implement the original 432 architecture (e.g. it could handle at most 8K objects), but you could scale it up to fit today's needs, with all the protection and safety of a capability based architecture. In 1975, when the 432 project started, only a few academics knew of OO; now it is mainstream. Running Fortran on a 432 would be rather meaningless; running .NET could be great!
I am not holding my breath waiting for it to happen, though. It is just that I was extremely fascinated by that chip in the early 1980s. I sure would like to re-experience that same fascination!
|
|
|
|
|
I remember that one fairly well. I had quite a bit of documentation on it. It was a multi-chip module as I recall, with three chips in the module. I remember that it was very, very innovative and completely unsuccessful. At least the Itanium was a little more successful than that one but not by much.
|
|
|
|
|
Things were dictated by many factors in those days... IBM picked Intel because of price, availability, 8 bit support (to work with mature 8 bit equipment) and the existing code base... The 68000 also got its share via Atari and Amiga; their success pushed the CPU too...
The Z8000 was relatively slow, working on 16 bits with no future plans, and most importantly it lacked the monetary backing Intel and Motorola could provide...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942-2018
|
|
|
|
|
So you've re-invented EMS. Your design is superior because it's designed in the hardware from the beginning, while LIM EMS was a retro-fitted kludge.
Nice going!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
It's simpler. I don't have a real MMU and caches between the CPU and the memories, just some simple logic to extend the address lines.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
LIM EMS wasn't a "real MMU", just a mechanism for enabling one of the boards plugged into the address bus and disabling the others within that same address range. All controlled by software.
At the time when LIM EMS was The Standard, I was truly fascinated by it. In 2010 I switched jobs and started programming a modern implementation of the 8051, with on-chip bank switching: the lower 48 Kbyte was fixed; for the upper 16 K, four different banks could be switched in, for a total of 112 Kbyte.
To be frank: I hated all the complications it led to! Bank switching was one of the greatest hassles, but not the only one: the 8051 is a true 8 bitter, not 16. You had to be extremely careful with arithmetic operations, sign extension when mixing 8 and 16 bit entities, etc. When we a couple of years later switched to ARM CPUs, it was such a relief - even if we started out with the M0. You'll never get me back to an 8 bit CPU again!
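The 48 K fixed plus four 16 K banks layout described above (112 Kbyte total behind a 16 bit address) can be sketched as a simple address translation. This is an illustrative model only; the constants and function name are assumptions, not the actual part's register interface:

```python
FIXED_SIZE = 48 * 1024   # 0x0000-0xBFFF, always mapped
BANK_SIZE = 16 * 1024    # 0xC000-0xFFFF, the switchable window
NUM_BANKS = 4            # 48K + 4 * 16K = 112K total

def translate(linear):
    """Map a linear address (0 .. 112K-1) to (bank, 16-bit CPU address).

    Addresses below 48K need no bank; above that, pick the bank and
    fold the offset back into the 0xC000-0xFFFF window.
    """
    assert 0 <= linear < FIXED_SIZE + NUM_BANKS * BANK_SIZE
    if linear < FIXED_SIZE:
        return (None, linear)  # fixed region, bank register irrelevant
    off = linear - FIXED_SIZE
    return (off // BANK_SIZE, FIXED_SIZE + off % BANK_SIZE)
```

The hassle the poster describes comes precisely from the fold: two routines at the same 16 bit address can be entirely different code depending on the bank register.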
|
|
|
|
|
Member 7989122 wrote: I hated all the complications it led to! Bank switching was one of the greatest hassles
I think I can avoid the worst of it for code, less so for data.
I have a smaller unswitchable RAM for the stack, so that's no problem. Then I have subroutines which call and return from subroutines. These routines see to it that parameters and registers are saved on and retrieved from the stack. They go into ROM and are never switched away. By incorporating the bank switching into them, the program is almost totally unaware that the subroutine it is calling has been loaded into another memory page.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: This way the code will not be aware that it's running in paged memory. I can call anything at any time and the code will not notice anything of the bank switching.
How do you do that if a call instruction or a stack pop of the PC only works with a 16 bit register?
|
|
|
|
|
The processor does not have a fixed program counter, nor does it have instructions for calling or returning from a subroutine.
Instead, I can load an address into any of its 16 registers and simply make that register the current program counter to call a routine. To return, I simply make the previous register - which still points back to the address where the caller left off - the program counter again.
That's the simplest technique. It does not involve any use of the stack at all. The stack, by the way, works in a similar fashion. I can load an address into any register at any time and make this register the current stack pointer.
Implementing a stack protocol for subroutines means writing two routines using this basic calling technique, one to call another routine and the other one to return. I have to pass the address of the routine that is to be called, plus the parameters. Adding a further parameter for the memory page of the routine and doing the switching in the calling routine is actually very simple: the page of the calling routine is saved on the stack, along with the return address, and both are restored when returning.
Both the stack(s) and the routines for calling and returning must not be in paged memory; then everything is fine for the code and for calling subroutines. In the beginning, logical pages will be identical to the physical pages the code is loaded into, to keep it simple. Later I may use an allocation table to automatically translate logical page numbers to physical page numbers.
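As a rough illustration of that call protocol - the class and names below are hypothetical, just modelling the idea of saving the caller's page alongside the return address so the bank switch is invisible to the caller:

```python
class BankedMachine:
    """Toy model of a CPU with a bank register and a stack in unswitched RAM."""

    def __init__(self):
        self.page = 0      # currently mapped bank for the switchable window
        self.pc = 0x0000   # where execution would resume
        self.stack = []    # lives in unswitched RAM, so it survives bank switches

    def call(self, target_page, target_addr):
        # The ROM-resident call routine: save the caller's page together
        # with the return address, then switch banks and "jump".
        self.stack.append((self.page, self.pc))
        self.page = target_page
        self.pc = target_addr

    def ret(self):
        # The ROM-resident return routine: restore both the return address
        # and the caller's page, so the caller never notices the switch.
        self.page, self.pc = self.stack.pop()
```

For example, a caller at page 0 can call into page 3 and, after `ret()`, find both its page and its resume address exactly as they were.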
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Still looks exaggerated to me, but, very nice indeed.
|
|
|
|
|
It is, but the board is expensive enough that I don't want to waste any space. I don't have to fill the sockets for the memories to the brim; even installing only a single memory IC would work, but the ICs are not that expensive anymore.
I also want to try my luck at implementing multitasking and simply assigning memory pages to tasks and switching them away as needed may be helpful.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
64k ought to be enough for anybody.
|
|
|
|
|
I love to point out the context of the original quote - about 640K. It turns out that most people who refer to the 640K do not know it.
The 8086 could address up to 1 MByte of physical RAM. The OS needs quite some space to keep the significant parts of its code resident in RAM. Out of that 1 MByte, how large a fraction should be reserved for the OS, drivers etc., and how much should be offered to user programs?
Give 384K to the OS and drivers, 640K to user applications. 640K should be enough for anybody.
In that context, the remark makes perfect sense. But of course it also takes the fun out of quoting it.
|
|
|
|
|
I don't think that is the correct context. He said it in 1981, and the PC was announced in August of 1981. The original, baseline configuration of the PC had no drives, 16K of RAM, and a cassette interface. This was in an era when most home enthusiasts used S100 bus systems, and 64K was a lot of memory for those.
In fact, what put Microsoft on the map originally was their BASIC that ran on those S100 systems. FWIW, my second job out of school was at a company that made a robot; its controller used MS BASIC, embedded in ROM, as its programming language. They had printed out a complete listing of BASIC and it was a foot-high stack of paper. You could see Gates' and Allen's names throughout the code.
|
|
|
|
|
He has himself given that explanation. Of course he may have made it up.
Yet you present a different context - "64K was a lot of memory" - which would make a reference to ten times as much memory seem even more reasonable (virtual memory on PCs was unknown at that time). Frankly, I find it rather unlikely that the statement was made in that context; but if it was, it would be much easier to defend.
I have found no sources documenting it from 1981 - the earliest reference is 1985.
But that is the fun of undocumented quotes - they can be argued forever! And some day your laugh gets stuck in your throat... Like the famous Thomas J. Watson (IBM CEO for ages) remark about the world needing maybe five computers: if you today claim that five publicly available cloud offerings, or five different social networks, are sufficient, no one will laugh at you.
|
|
|
|
|
Interesting. I found many references to 1981. Here's a sample: Google[^]
Regardless, after thirty-plus years, it can sometimes be difficult to find definitive references. Given my (foggy) memory, I can understand that. I used to have stacks and stacks of old EE Times and IEEE Transactions on Microprocessors, but I had to unload them all three moves ago.
|
|
|
|
|
A computer will never need more than 64K and other truths!
|
|
|
|
|
64k ought to be enough for everybody?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Talking about 64K ... but make that 64K bits, please...
Around 1980, RAM chips grew from 16K bits to 64K bits. However, the 64K chips were badly plagued by cosmic alpha radiation, causing the microscopic (for those days) dynamic RAM capacitors to discharge, leading to a lot of bit errors. I worked on a 16 bit machine that had self-correcting memory: each 16 bit word was protected by 6 error correcting bits.
For several years, people feared that we had reached the limit for RAM density - that the alpha radiation made it impossible to make denser chips with smaller geometries.
After several years, it struck me that I hadn't heard those worries for a long time - and there were 256K RAM chips on the market. To this day, no one has been able to tell me what happened. How can we today make Gbit-size RAM chips that are not knocked out by alpha radiation? Are today's chips built with a shield that stops alpha particles? Or was the alpha explanation wrong, and there was another, curable, reason for the random discharge of the capacitors?
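The "16 data bits plus 6 check bits" scheme is a classic single-error-correcting, double-error-detecting (SECDED) Hamming code. A minimal sketch, assuming the textbook layout with parity bits at power-of-two positions plus one overall parity bit - the actual machine's bit ordering may well have differed:

```python
PARITY = (1, 2, 4, 8, 16)
# Data bits go to the 16 non-power-of-two positions in 1..21.
DATA_POSITIONS = [p for p in range(1, 22) if p & (p - 1)]

def secded_encode(data):
    """Encode a 16-bit word into 22 bits: 5 Hamming parity + 1 overall parity."""
    assert 0 <= data < 1 << 16
    code = [0] * 22  # code[0] is the overall parity; code[1..21] is Hamming
    for i, pos in enumerate(DATA_POSITIONS):
        code[pos] = (data >> i) & 1
    for p in PARITY:  # each parity bit covers positions whose index has bit p set
        code[p] = sum(code[i] for i in range(1, 22) if i & p) & 1
    code[0] = sum(code[1:]) & 1  # overall parity enables double-error detection
    return code

def secded_decode(code):
    """Return the 16-bit word, correcting one flipped bit, rejecting two."""
    syndrome = 0
    for p in PARITY:
        if sum(code[i] for i in range(1, 22) if i & p) & 1:
            syndrome |= p  # failed checks add up to the error position
    overall_odd = sum(code) & 1
    if syndrome and overall_odd:          # single-bit error: correct in place
        code = code[:]
        code[syndrome] ^= 1
    elif syndrome and not overall_odd:    # two errors: detectable, not correctable
        raise ValueError("double-bit error detected")
    return sum(code[p] << i for i, p in enumerate(DATA_POSITIONS))
```

Any single flipped bit in the 22-bit word decodes back to the original data, while two flipped bits raise an error instead of silently corrupting the word - which is exactly what made those memories "self-correcting" against random capacitor discharges.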
|
|
|
|
|