|
Member 7989122 wrote: Logically, the MMS of the 386 handled 32 bit addresses, but if my memory is right, it had only 25 physical address lines, so it could handle at most 32 MByte physical RAM.
There were various variants of the 80386. The original version (later renamed 80386DX) was a full 32-bit processor, with a 32/32-bit data/address bus. The 80386SX used the same instruction set, but had a 16/24-bit data/address bus. Other variants had a 16/26-bit data/address bus. Lastly, there was a 486-compatible chipset that replaced an 80386DX/80387DX combination with a single chip that did most of the work, and an additional chip that provided the FERR signal and not much else.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Member 7989122 wrote: Some of the early 8-bit CPUs had only 8 address lines, yet addressed 64K: The most significant 8 bits were sent out first, and one clock cycle later, the least significant 8 bits followed on the same lines. I believe some (quasi) 16-bit CPUs used similar methods for transferring 16-bit data values over 8 data lines. Multiplexing: the CPU I'm building this memory board for does the same with its address bus. It may have been a blessing back then, but it certainly made everything more interesting for me now.
Latching the upper 8 address bits is easy to do, but some I/O chips from the CPU's family want to latch and decode the address lines themselves. The interrupt controller, for example, hogs 4 KB of precious memory space for its registers, but would only need about 32 bytes if the address lines were properly decoded externally. Back in the day this may have helped to build small systems without much glue logic, but today it makes a proper design hard to realize.
Still, these and other microcontrollerish features made it the first CPU in space. Its low power requirements, being CMOS, may also have helped.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Yes, I know that. Externally it was not, though. The 68010 was kind of like the 8088 in that it had multiplexed 16-bit data lines, whereas the 68000 did not. The 68000 had an eight-bit data bus. (brain fartage)
modified 21-May-18 16:35pm.
|
|
|
|
|
Rick York wrote: The 68000 had an eight-bit data bus
The 68000 had a 16-bit data bus (internally 32-bit) and a 24-bit address bus...
There were those (embedded controller) versions that could work with an 8-bit data bus to fit the environment better...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
|
|
|
|
Sorry - had a brain fart.
I meant the 68010 was the chip with the multiplexed data bus. The 68000 and 68020 had non-multiplexed data busses, 16 and 32-bits wide respectively.
|
|
|
|
|
I believe that you are confusing it with the 68008, which was a 68K with a narrow 8-bit data bus.
The first really fully grown 68K CPU was the 68030, with on-chip cache and MMU. Besides, it was significantly faster than its predecessors, primarily because it required fewer clock cycles for most instructions, and it could be clocked higher. That is the CPU that really deserved to win the market - but IBM was too strong, and Apple had far from the strength it has today. Besides, there was a RISC wave coming, blinding people so they didn't see that the 68K was as close to a RISC architecture as you can possibly get while being a CISC. So several of the former 68K engineering workstation-type manufacturers (e.g. Sun) abandoned the 68K in favor of "pure" RISC designs. Which is a pity. None of those RISCs survived. If they (including Apple) had stayed with the 68K, we could have had much cleaner architectures today.
|
|
|
|
|
I think you are right about the 68008. Didn't the 68010 have an on-chip MMU or something like that?
I remember fairly well the RISC pseudo-revolution that gave rise to the SPARC and the 88000, Sun's and Motorola's RISC chips. Intel even made something of an attempt at one with the 80860. I think the only RISC chip that has survived, at least somewhat, is the PowerPC, but it has changed a fair amount over the years. The MIPS instruction set has survived also, in an evolved form.
|
|
|
|
|
Wikipedia is usually fairly correct when it comes to unarguable technical details. If that holds for the 68K, the on-chip MMU came with the 68030. The 68010 could use an external MMU chip (thanks to upgraded fault handling), but it was not on-chip. The 68010 also had a couple of extensions (including one not perfectly backwards compatible) for coordinating multiple CPUs within one machine.
|
|
|
|
|
I see. Thanks for the info. I never dealt with the 68010 at all. I encountered practically every other one in the family up to the 68040 but not the 68010. Mostly that was with various VME bus systems and some STD bus systems for the older ones.
|
|
|
|
|
Now that you remind me of the PowerPC - I had almost forgotten that one!
It certainly deserves to be remembered: A very clean hardware architecture, with core modules that always had to be present, and then a very regular and highly standardized internal interconnect for extending the core, similar (although quite different in the details) to an I/O bus, but much closer to the core. The developer could add modules for, say, hardware implementations of matrix multiplication, FFT, transcendental functions... depending on the specific application area for that chip variant, and leave "unnecessary" modules out, without compromising the RISC nature of the core, in a very clean and systematic way.
Some designers of ARM-based (and probably other) embedded-style chips have implemented on-chip "peripherals" (which may have nothing to do with I/O, but e.g. handle encryption) in ways that conceptually resemble the Power architecture, though never as closely integrated with the core as in the Power.
Years ago, there was a Windows NT implementation for the PowerPC. At that time, I was still hoping for it to squeeze out the x86 (anything that could squeeze out the x86 would be great!) ... It failed. I was hoping for Apple to go for the PowerPC: They gave it a try, but decided that accepting the x86 mess would give them higher profits... However, IBM and maybe others made a number of highly parallelized mainframe CPUs based on arrays of PowerPC chips. That didn't help to squeeze out the x86, though...
Then, checking Wikipedia, I see to my surprise that as late as November 2015, a new version of the Power ISA standard was published. It is unclear to me whether IBM still develops Power-based machines, but at least they did in 2015. So the architecture is certainly not completely dead - but I guess that the primary users of Power-based machines couldn't care less about the processor architecture, as long as they have their problems solved. (Which is also the reason for the x86 mess surviving for 30 years more than it deserved!)
Frequently, I see people point to VHS as a prime example of the second best (or third, or fourth...) winning the battle. In the CPU world, the x86 is more like the fifth or sixth. It still won.
(Youngsters out there: If "VHS" is Greek to you, ask your grandpa about it.)
|
|
|
|
|
Rick York wrote: Back then no one had heard of the word "overclocked." That's not true. The same little 8-bit processor that I'm working with was often overclocked. It is a CMOS processor, which made it very flexible with its input voltage and its clock frequency. At 10 V it went up to 6.4 MHz without being overclocked. Not so bad for a processor from 1976.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Yes, it is true. While CPUs were sometimes clocked higher than their rating, the word "overclocked" was not used. It was also not that common, because CPUs usually did not have heat sinks back then. The IBM PC did not have a heat sink on its CPU for several generations, not until the 80486 became the standard processor. I remember when I bought a higher-clocked 80287, they called it "turbo-speed" I believe. The standard chips were clocked at 6 MHz and this one at 12 MHz, and it was unique in that it had a heat sink attached to the FPU. The stock ones did not, nor did the 80387 chips.
|
|
|
|
|
I worked with the Z8000 for about 5 years, and it was a damn good chip (at least compared to the Z80 I was also working with - much better memory access, for starters).
Pity it was overshadowed in the media by the 68-series; it could have been much better if it had been more widely used. Very nice instruction set!
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I was studying computer architecture in graduate school at the time, and we spent many class periods discussing the various chips and instruction sets of the day. I recall the Z8000 did have some attractive attributes, but I can't remember the specifics now.
|
|
|
|
|
If you really want to be impressed by fancy architectures, try to get hold of a detailed description of the iAPX 432, which came to the market in 1981. A fully object-oriented CPU - to the degree that if you mailed an object to another process (both communication and process concepts were realized by the CPU), you didn't mail a copy: You lost access to it yourself!
The architecture was extremely fancy; the implementation not quite as successful. Rumours were that a complete 432 CPU software simulator running on an 8086 was faster than the first physical CPUs sent to market. (Consider this a rumour; I can't document it.) But Intel people have been quite clear that even though the 432 was a total flop, they learned so much about how not to do things that when they soon after developed the MMS for the 386, they got it right.
Sometimes I think: What if Intel picked up the ideas from the 432 and set out to make an object-oriented CPU the right way today? Of course you couldn't just implement the original 432 architecture (e.g. it could handle at most 8K objects), but scaling it up to fit today's needs, with all the protection and safety of a capability-based architecture, could work. In 1975, when the 432 project started, only a few academics knew of OO; now it is mainstream. Running Fortran on a 432 would be rather meaningless; running .NET could be great!
I am not holding my breath waiting for it to happen, though. It is just that I was extremely fascinated by that chip in the early 1980s. I sure would like to re-experience that same fascination!
|
|
|
|
|
I remember that one fairly well. I had quite a bit of documentation on it. It was a multi-chip module as I recall, with three chips in the module. I remember that it was very, very innovative and completely unsuccessful. At least the Itanium was a little more successful than that one but not by much.
|
|
|
|
|
Things were dictated in those days by a lot of factors... IBM picked Intel because of price, availability, 8-bit support (to work with mature 8-bit equipment) and the existing code base... The 68000 also got its share via Atari and Amiga; their success pushed the CPU too...
The Z8000 was relatively slow, a 16-bit design with no future plans, and most importantly it did not have the monetary backing Intel and Motorola could provide...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
|
|
|
|
So you've re-invented EMS. Your design is superior because it's designed into the hardware from the beginning, while LIM EMS was a retro-fitted kludge.
Nice going!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
It's simpler. I don't have a real MMU and caches between the CPU and the memories, just some simple logic to extend the address lines.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
LIM EMS wasn't a "real MMU", just a mechanism for enabling one of the boards plugged into the address bus and disabling the others within that same address range. All controlled by software.
Back when LIM EMS was The Standard, I was truly fascinated by it. In 2010 I switched jobs and started programming a modern implementation of the 8051, with on-chip bank switching: The lower 48 Kbyte was fixed; for the upper 16K, four different banks could be switched in, for a total of 112 Kbyte.
To be frank: I hated all the complications it led to! Bank switching was one of the greatest hassles, but the 8051 is a true 8-bitter, not 16. You had to be extremely careful with arithmetic operations, sign extension when mixing 8- and 16-bit entities, etc. When we switched to ARM CPUs a couple of years later it was such a relief - even though we started out with the M0. You'll never get me back to an 8-bit CPU again!
|
|
|
|
|
Member 7989122 wrote: I hated all the complications it led to! Bank switching was one of the greatest hassles
I think I can avoid the worst of it for code, less so for data.
I have a smaller, unswitchable RAM for the stack, so that's no problem. Then I have routines which handle calls to and returns from subroutines. These routines see to it that parameters and registers are saved to and retrieved from the stack. They will go into the ROM and not be switched away. By incorporating bank switching into them, the program remains almost totally unaware that the subroutine it is calling has been loaded into another memory page.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: This way the code will not be aware that it's running in paged memory. I can call anything at any time and the code will not notice anything of the bank switching.
How do you do that if a call instruction or a stack pop only loads a 16-bit PC register?
|
|
|
|
|
The processor does not have a fixed program counter, nor does it have instructions for calling or returning from a subroutine.
Instead, I can load an address into any of its 16 registers and simply make it the current program counter to call a routine, and then make the previous register, which still points back to the last address where it left off, the program counter again to return.
That's the simplest technique. It does not involve any use of the stack at all. The stack, by the way, works in a similar fashion. I can load an address into any register at any time and make this register the current stack pointer.
Implementing a stack protocol for subroutines means writing two routines using this basic calling technique, one to call another routine and the other one to return. I will have to pass the address of the routine that is to be called and the parameters. Adding a further parameter for the memory page of the routine and doing the switching in the calling routine actually is very simple. The page of the calling routine is saved on the stack, along with the return address. Both are restored when returning.
Both the stack(s) and the routines for calling and returning must stay out of the paged memory; then everything works out for the code and for calling subroutines. In the beginning, logical pages will be identical to the physical pages the code is loaded into, to keep it simple. Later I may use an allocation table to automatically convert logical page numbers to physical page numbers.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Still looks exaggerated to me, but, very nice indeed.
|
|
|
|
|
It is, but the board is expensive enough not to waste any space. I don't have to fill the sockets for the memories to the brim; even installing one single memory IC would work, but the ICs are not that expensive anymore.
I also want to try my luck at implementing multitasking, and simply assigning memory pages to tasks and switching them out as needed may be helpful.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|