|
Member 7989122 wrote: Another factor: This CPU, does it provide any hardware signal indicating whether it is fetching instructions (code), accessing data, whether the stack register is involved in the address calculation, or if you are in an interrupt handler? Many CPUs do, and you might use these signals as bank selectors. The 16 bit address could effectively be extended to 18 bits: one 64Ki code space, one 64Ki data space, one 64Ki stack space, and one 64Ki interrupt space. (So it would switch the lower 32Ki as well.) Where DMA fits in would depend on how the signals are set during DMA. Each of these four spaces might be banked. (I guess that stack and DMA would have only a single bank, but if it is treated as such, the banker should be able to map the DMA bank with data buffers as a data bank accessible to "ordinary" (non-DMA/interrupt) code.)
Interesting. I have to think about that. The processor indeed has signals of that sort, showing what kind of bus cycle it is currently executing: fetch, execute, interrupt and DMA. Unlike many other CPUs, it does not give up its bus for DMA. Instead, it acts as its own DMA controller during DMA cycles and does the memory addressing itself. The device that requested the DMA just has to put its byte on the bus (or read it) as soon as the CPU responds with a DMA cycle.
Interrupt cycles get a little hairier. The interrupt cycles are only used to respond to an interrupt. The interrupt routine itself is executed with regular fetch and execute cycles.
Fetch cycles themselves are unproblematic, but execute cycles are not. They can access the stack, data or even code (in branching instructions). And then there still is the matter of calling code on another page. It's like sawing off the branch on which you are sitting and usually results in a nice crash.
Maybe I can work out a hybrid of both ideas. At first glance this looks good for DMA, but adds more problems for the other bus cycles.
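To make the idea concrete, here is a minimal sketch of how a small external decoder could turn the bus-cycle signals into bank-select bits. All names and the exact bit layout are illustrative assumptions, not taken from any real datasheet:

```c
#include <stdint.h>

/* Hypothetical model of a bank decoder: the CPU's bus-cycle type
 * signals (fetch, execute, interrupt, DMA) select one of four
 * 64 KiB spaces, extending the 16-bit address to 18 bits.
 * Enum values and layout are illustrative only. */
enum cycle { CYCLE_FETCH = 0, CYCLE_EXECUTE = 1,
             CYCLE_INTERRUPT = 2, CYCLE_DMA = 3 };

/* 18-bit physical address: the two high bits come straight
 * from the cycle type, the low 16 bits from the CPU bus. */
static uint32_t translate(enum cycle c, uint16_t addr)
{
    return ((uint32_t)c << 16) | addr;
}
```

In hardware this is little more than wiring the two cycle-type lines to the upper address inputs of the RAM, which is why a few PALs or a handful of gates would do.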
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
This reminds me of the horror that was "Large Model" programming with FAR pointers and all of that. That was so annoying. We can still see lingering traces of it today in the Win32 API with variable types for pointers often prefixed with L. Just seeing those Ls is still annoying for me today and I alias away every one of them that hasn't already been. There are still a few around.
That's what I was wondering about - how can this be handled automatically for software by you and/or the OS? I doubt the compiler for that CPU has any knowledge of segment registers or anything other than what was called "small model."
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
The basic problem is that of pointers.
The Windows design made a serious attempt at abandoning them, replacing them with handles, where the programmer (and the user as well) is unconcerned about where the object is physically located: a binary identifier, a synonym for the variable name. In the Win16 days, with 8086 CPUs, there was no hardware support, so we had to pin the object to a location while we were accessing it. But our box of objects could fill a hundred megabytes, even though the address space was limited to 1 megabyte: the Windows runtime "paged in" the objects to the address space when we pinned an object to use it.
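The pin-while-using pattern can be illustrated with a toy handle table. This is a sketch of the idea, not the actual Win16 API; all names here are made up for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Toy model of the handle-and-pin idea: code holds an opaque
 * handle, and only a lock/unlock pair yields a temporarily
 * stable pointer, so the runtime is free to move or swap out
 * unlocked objects. Names are illustrative, not the real API. */
#define MAX_OBJECTS 16

struct slot { void *mem; size_t size; int locks; };
static struct slot table[MAX_OBJECTS];

typedef int handle_t;            /* index into the handle table   */

handle_t obj_alloc(size_t size)
{
    for (handle_t h = 0; h < MAX_OBJECTS; h++)
        if (table[h].mem == NULL) {
            table[h].mem   = malloc(size);
            table[h].size  = size;
            table[h].locks = 0;
            return h;
        }
    return -1;                   /* table full */
}

void *obj_lock(handle_t h)       /* pin: pointer valid until unlock */
{
    table[h].locks++;
    return table[h].mem;
}

void obj_unlock(handle_t h)      /* unpin: runtime may now move it  */
{
    if (table[h].locks > 0)
        table[h].locks--;
}
```

The real Win16 pattern looked much the same in spirit: allocate a moveable block, lock it to get a pointer, use it, unlock it again so the system could shuffle memory underneath you.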
Windows was created in an age when this was high fashion. There were lots of experimental CPU designs which fully supported handle addressing in hardware. Some tried to make it in the industrial, commercial world, such as the Intel iAPX 432, which was 100% object oriented: you provided a handle (aka capability ID) to an object and an offset within that object. The memory address was completely hidden and inaccessible to the programmer.
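In software terms, what such hardware did on every access was roughly the following. The table layout and names below are an illustrative sketch, not the actual 432 descriptor format:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of capability addressing: the program names an object by
 * capability ID plus an offset; the hardware looks the ID up in a
 * descriptor table, bounds-checks the offset, and only then forms
 * the (hidden) physical address. Layout is illustrative only. */
struct descriptor { uint32_t base; uint32_t length; };

static struct descriptor cap_table[] = {
    { 0x10000, 256 },            /* object 0: 256 bytes at 0x10000 */
    { 0x20000, 64  },            /* object 1:  64 bytes at 0x20000 */
};

/* Returns the physical address, or 0 to model a hardware fault. */
static uint32_t cap_translate(unsigned id, uint32_t offset)
{
    if (id >= sizeof cap_table / sizeof cap_table[0])
        return 0;                /* no such capability             */
    if (offset >= cap_table[id].length)
        return 0;                /* out of bounds: hardware fault  */
    return cap_table[id].base + offset;
}
```

The point is that an out-of-bounds or forged reference faults in hardware before any address ever exists, which is where the security of capability machines comes from.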
Java tried to introduce handles in software, to be run on a virtual machine that could trap object references and page them in. Somewhat later C# arrived, with references rather than pointers.
If software developers had fully embraced these concepts, abandoning pointers and programmer-known memory addresses, we could have avoided both far and near pointers. But we were not ready. We were thinking in terms of memory addresses, and to some degree we still are: Even in C#, you may have to relate to pointers when addressing Win32 functions - and a fair share of classical developers rejoice. Here they get something concrete, something solid that they can get a grasp on.
Pointerless software is slowly gaining acceptance. Very slowly. For 30+ years we have had these performance pi**ing contests: my program runs 3% faster than yours! Then you cannot waste time on having the hardware look up a handle in a capability table; the hardware must work directly on addresses, pointers.
Today, for large application areas, CPU speeds are "high enough". A generation ago, video playback could saturate the CPU; today it might take one or two percent of one core. Measured over an hour of use, a typical home PC probably uses far below 1% of its processing capacity. Or its disk I/O capacity. We could probably afford (in terms of performance) the cost of getting rid of pointers, with truly object oriented CPUs.
The original iAPX 432 design was, from a functional viewpoint, fully satisfactory. Revitalizing it today would be a joke, e.g. its limit of 8Ki objects per process. In 1980 it was like "640K should be enough for everybody". But the experience from the 432 taught Intel countless lessons for building the 386 memory management system (the segment tables have inherited a lot from the 432 capability table). I really wish that Intel would repeat this exercise, using 35 years of experience to develop a 432 Mark II implementing in hardware the dotNET object model, similar to the original 432 (but certainly different).
Since lots of dotNET software does p/invoke to pointer-based code, I guess a 432 Mark II would require a co-implemented x64 core. Hopefully, x64 could be phased out with time: when DEC introduced the VAX 780, it could execute legacy PDP-11 code natively. Later models, such as the 8600, apparently could do the same, but in reality it required a DECwriter system console: PDP-11 code caused an interrupt so that VMS could ship that code module over the serial line to the system console for interpretation on the LSI-11 that processed keypresses on the console. Unplug the console, and the 8600 loses its ability to run PDP-11 code! If we got something like 432 Mark II machines, maybe five years later we will have to insert a USB stick with an x64 CPU running Win10 if the dotNET application on the main processor makes p/invoke calls to pointer-based code.
I think this would be a great development. I always liked the security provided by capability-based machines. In principle, I think that we are ready for it now - both the technology and the dotNET software developers. Yet I am not holding my breath. I am afraid that I will see nothing like that in my lifetime (and I am seriously planning to live for quite a few more years).
|
|
|
|
|
I remember the i432. It was somewhat interesting. I found the i860 far more interesting, though. It was built for chewing through numbers, which I was, and still am, interested in; except now I get to do it on the job.
I am not at all interested in a world of software without pointers. I can see the value of the concept in certain realms but, thankfully, I am not required to participate in those. I am quite happy where I am.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
Rick York wrote: That's what I was wondering about - how can this be handled automatically for software by you and/or the OS? I doubt the compiler for that CPU has any knowledge of segment registers or anything other than what was called "small model."
It's actually very easy. The processor comes from the time of home-built single board computers. It was a little ahead of its time in several ways, but was very limited by slow and small memories and expensive storage devices. With more memory and some sort of mass storage to fill that memory, it can really eat any 8 bit processor of its time for breakfast and seriously stray into 16 bit territory.
It's true, the processor does not know anything about segment registers. That's ok for the stack segment. It should only be changed under certain conditions, so that will be done by the code and only when these conditions are met.
The data segment can and should be changed as needed. I have little choice but to leave that to the code as well.
At least I can do something for the code segment. Due to its RISC architecture, the processor does not have instructions to call subroutines or return from them. Instead, it loads the address of a subroutine into any one of its 16 working registers and makes that register the new program counter. Returning is just as easy: leave the original program counter alone in the subroutine and make it the program counter again.
Usually you have only two such simple procedures. One is used to call subroutines with a more elaborate protocol for passing parameters and saving registers. The other one handles returning from a subroutine, restoring the registers that were saved in the calling procedure and passing return values. Simply by modifying these procedures to save, change and restore the segment registers of the code and data segments, I can instantly call subroutines anywhere in the code segment. No other processor with fixed call/return instructions can do that.
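The modified call/return pair can be modeled in a few lines of C. This is a sketch of the mechanism under my own naming assumptions (bank register, return stack), not the actual routines:

```c
#include <stdint.h>

/* Model of the banked call/return idea: since call and return are
 * ordinary software routines on this CPU, they can also save,
 * switch and restore a code-bank register, making cross-bank calls
 * transparent to the caller. Names and sizes are illustrative. */
struct far_addr { uint8_t bank; uint16_t offset; };

static uint8_t  code_bank;            /* current code-bank latch    */
static struct far_addr ret_stack[32]; /* saved banks + return PCs   */
static int      sp;

/* "Call": push the caller's bank and return offset, switch banks,
 * and hand back the new program counter value. */
static uint16_t far_call(struct far_addr target, uint16_t return_off)
{
    ret_stack[sp].bank   = code_bank;
    ret_stack[sp].offset = return_off;
    sp++;
    code_bank = target.bank;          /* may select another bank    */
    return target.offset;             /* new program counter        */
}

/* "Return": pop and restore the caller's bank and offset. */
static uint16_t far_return(void)
{
    sp--;
    code_bank = ret_stack[sp].bank;
    return ret_stack[sp].offset;
}
```

Because every call goes through these two routines anyway, the bank handling costs only a few extra instructions per call, and ordinary code never notices which bank it runs in.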
As things were, you wrote machine code. An assembler was a luxury and also wanted its share of your memory. There were various BASIC interpreters, but I never was really interested. They were just too limited and wasteful with the scarce memory. The better ones at least tokenized the code, making the memory hunger a bit smaller and the parsing at runtime a little faster. There were other interpreters, but these languages usually suffered from similar problems. Compilers were not much of a thing at all, as on most 8 bit computers. The reason for this was again memory and mass storage.
There is one exception: FORTH. It scales and adapts very well, from tiny microcontrollers to modern processors. It also is quite fast, because it can't quite decide whether to be an interpreter or a just-in-time compiler. It even solves the problem of what to do with the OS. In the old days there was none at all, and FORTH has a tendency to become the OS itself by keeping track of every bit of code you wrote. All that makes it a good candidate from the old times to adapt to my memory model.
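The interpreter-or-compiler ambiguity comes from FORTH's threaded code: a compiled word is just a list of addresses, and the "inner interpreter" is a trivial loop that jumps through them. A minimal sketch (the stack and word set here are made up for illustration; real FORTHs usually use indirect threading):

```c
/* Minimal sketch of FORTH-style threaded code: a compiled word is
 * a list of function pointers, and the inner interpreter is a loop
 * that calls each one -- halfway between interpretation and
 * compiled code. Stack and word set are illustrative. */
static int stack[16], depth;

static void push2(void) { stack[depth++] = 2; }
static void push3(void) { stack[depth++] = 3; }
static void add(void)   { depth--; stack[depth - 1] += stack[depth]; }

typedef void (*word_fn)(void);

/* The inner interpreter: execute a NULL-terminated thread. */
static void execute(const word_fn *thread)
{
    while (*thread)
        (*thread++)();
}
```

"Compiling" a new word is nothing more than appending addresses to such a list, which is why a FORTH system fits in a few kilobytes and why it ends up owning all the code in the machine.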
Today we also have cross assemblers, a C compiler and an emulator/debugger. The cross assembler is open source and I might adapt it myself. The author of the emulator does his best to emulate all the little computers that use this old processor. I already had contact with him when he wanted my permission to include a little game that I wrote 40 years ago on the old computer. I think he will also include my new memory model in his emulator, once I have something to show.
And the C compiler? It needs a new project type, similar to compiling a DLL instead of an executable. And it has to use my modified call and return procedures.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I would probably do something similar to what DEC (Digital Equipment) did some 50 years ago in its PDP-11 product line and RSX11D/M operating system: build a small MMU (it takes a few PALs or something similar) and support 8 segments of 8KB, each of them mappable into the physical memory space.
Of those 8 segments:
one is a permanent "system" segment holding small parts of OS code, OS data, OS stack (say 4KB); the code in there can do whatever it takes to get more OS code and more OS data mapped in and out.
the remaining 7 are "user" segments, that can be used for code, data or stack as required by each individual process.
So each process could dynamically choose concurrent access to 8/16/24/32 KB of code plus 8/16/24/32 KB of data plus 8/16... KB of stack totaling up to 56KB.
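The translation itself is just a table lookup on the top address bits. A sketch, with illustrative register contents (the real PDP-11 MMU also kept lengths and access bits per segment, omitted here):

```c
#include <stdint.h>

/* Sketch of the 8 x 8 KiB mapping described above: the top three
 * bits of a 16-bit virtual address pick a segment register, whose
 * base is added to the 13-bit offset. Register contents below are
 * illustrative; segment 0 is the resident "system" segment. */
static uint32_t seg_base[8] = {
    0x00000,                     /* 0: system code/data/stack      */
    0x10000, 0x12000, 0x14000,   /* 1-3: e.g. user code banks      */
    0x20000, 0x22000, 0x24000,   /* 4-6: e.g. user data banks      */
    0x30000,                     /* 7: e.g. user stack bank        */
};

static uint32_t mmu_translate(uint16_t vaddr)
{
    unsigned seg    = vaddr >> 13;        /* top 3 bits             */
    uint16_t offset = vaddr & 0x1FFF;     /* low 13 bits            */
    return seg_base[seg] + offset;
}
```

Remapping a process is then just reloading seven base registers on a context switch, which is why this fits in a few PALs.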
|
|
|
|
|
Does anyone here use the Pomodoro technique for coding? I've used it to excellent effect before and would like to get back to it.
That's why I'm asking for recommendations for a Pomodoro planning/tracking app. There are so many, and trying out individual ones is quite a hassle. What do you use/recommend?
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
|
|
|
|
Never heard of it before but did a quick scan...
It looks a bit... suspicious
Wouldn't it only work if you can split your tasks into equal segments of work? Let's say you have a 25 minute tomato... what happens when a piece of work takes 30 minutes, or 2 hours, etc.?
|
|
|
|
|
The humor of that seems to have pasta you by.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
W∴ Balboos, GHB wrote: pasta you by
Usually Spaghetti... what pasta you by?
|
|
|
|
|
I should have rotini my original post, orzo it seems.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
|
Well that is puree a matter of paste.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
If a task takes more work than your timer allows, e.g. 25 minutes, then either:
a) You haven't divided your main tasks into separate little tasks. This skill grows very quickly as you begin to practice this technique.
b) You stop working after 25 minutes, note that Pomodoro as unfinished, and after 5 minutes, you start a new Pomodoro and resume work on your ill-planned task.
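Rule (b) amounts to simple ceiling arithmetic: an overrunning task just carries into further Pomodoros. A toy helper, with slot and break lengths as parameters since the method doesn't fix them:

```c
/* Toy arithmetic for rule (b): a task that overruns its slot
 * carries into further Pomodoros, so the count needed is a
 * ceiling division, and the wall-clock time adds a break after
 * each slot except the last. Lengths are parameters. */
static int pomodoros_needed(int task_min, int slot_min)
{
    return (task_min + slot_min - 1) / slot_min;
}

static int wall_clock_min(int task_min, int slot_min, int break_min)
{
    int n = pomodoros_needed(task_min, slot_min);
    return n * slot_min + (n - 1) * break_min;
}
```

So a 30-minute task with 25-minute slots and 5-minute breaks occupies two Pomodoros and 55 minutes of wall-clock time.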
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
|
|
|
|
|
Thanks, that looks like a very helpful list, but before I go through each app on it, do you know of any Android apps that integrate with a desktop component? I can't imagine doing Pomodoro planning, with lots and lots of typing, only on my phone. It's a great phone, but not that great for typing.
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
|
|
|
|
|
Thanks, this looks like a great app, but I have one issue, and that's its "flat" todo list. I've just found an app that integrates with nearly all task management services/apps. E.g. I have just integrated it with Trello, which is a dedicated list manager. The app I've found has a desktop app, a Chrome extension, and a mobile app, and it's called Pomodone[^].
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
|
|
|
|
Write your own
|
|
|
|
|
Writing a Pomodoro tracker is very, very high on my todo list. I've been mentally planning one for years, but I stopped using the technique and forgot about it.
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
|
|
|
|
Should not an egg timer do the job?
It does not solve my Problem, but it answers my question
|
|
|
|
|
An egg timer will work fine, but tracking your "sprints" (Pomodoros) helps you improve your time planning and use of the Pomodoro technique. Just like some agile shops have metrics on their sprints and others just follow very basic SCRUM techniques.
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
—Aleister Crowley
|
|
|
|
|
Brady Kelly wrote: Does anyone here use the Pomodoro technique for coding? As long as you don't end up with spaghetti code...
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
If I were regularly interrupted like that I am fairly certain that would be the result.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
Orcs mut can often be heard eminating from Michael martins lips or observed resting on lopatairs chin. (7)
|
|
|
|
|