|
In my case it's more about the little differences between RISC and CISC processors. I have a nice set of general purpose registers, of which any can be made the current stack pointer at any time. While this is a cross assembler, it still generally assumes a CISC processor. I have two stacks, a parameter stack and a call stack. For every call it must be sorted out where the parameters come from (other registers or, rarely, memory) and which stack they need to go to.
The coolest thing is that the processor does the same trick with the program counter. This opens up a neat possibility. It's a typical 8 bit processor which can only address up to 64k. Of course you can add a larger banked memory, but doing the bank switching in code will get complicated and error prone.
By having multiple program counters, I can move the bank switching into the calling protocol. I can call any routine in any part of the banked memory without any complication.
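To make the idea a little more concrete, here is a rough C sketch of what "bank switching in the calling protocol" means. Everything in it is made up for illustration - bank_select() stands in for writing the memory expansion's bank latch, and the real mechanism is a few instructions of assembler around the call stack, not C:
#include <stdint.h>
typedef void (*routine_t)(void);
static uint8_t current_bank;
/* Hypothetical: writes the bank latch of the expanded memory. */
static void bank_select(uint8_t bank) { current_bank = bank; }
/* A "far call": the caller's bank is saved (on the call stack in the real
 * implementation), the callee's bank is switched in, and on return the
 * original bank comes back. The caller never has to think about banking. */
static void far_call(uint8_t bank, routine_t target)
{
    uint8_t caller_bank = current_bank;
    bank_select(bank);        /* map in the bank that holds the routine */
    target();                 /* call it as if it were local            */
    bank_select(caller_bank); /* restore the caller's view of memory    */
}
With something like far_call(3, draw_sprite) the calling code never mentions banks at all - that bookkeeping lives entirely in the call and return path.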
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: I have a nice set of general purpose registers, of which any can be made the current stack pointer at any time. You could go a lot further. What was the name of that TI chip (9900?) that had all its registers in RAM, with only a single "register block pointer" in the CPU? So general performance was mediocre, but interrupt response was excellent! A single clock cycle to set the register block pointer, and the interrupt handler could start using its private registers, with no need to save anything at all. Same for processes/threads: They all had their private register sets.
A less extreme variant: One of the first CPUs I programmed had 16 register blocks (each consisting of 16 special purpose registers), one block for each of the 16 interrupt priority levels. When an interrupt signal arrived, the first instruction of the handler was executing 900 ns later, which was quite hefty for a "PDP-11 class" 16 bit mini in the mid 1970s. But this was limited to interrupt handling: All ordinary user processes shared a single register block.
|
|
|
|
|
Just look at the cheapest PIC microcontrollers. They are called RISC, but they really are the good old Harvard architecture. What they call onboard RAM actually is a set of a few hundred to a few thousand 8 bit registers.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Some RISC processors can cause brain damage as well: try having a look at the "full Monty" ARM 7+ processor which describes itself as RISC.
It's actually a truly wonderful processor to work with, but by gawd it's complicated for RISC! Nearly every instruction has condition codes, and the addressing modes are ... extensive, shall we say.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
RISC is not what most people think it is.
The instruction set is not reduced to a minimal set of instructions. The scope of the instructions is reduced, so that they can be executed in two bus cycles. Fetch, execute, fetch, execute. With a pipelined architecture and memory caches you can reduce that down to one or two clocks execution time for each instruction.
The origin of RISC definitely lies in Harvard architecture, from which it inherited the fetch/execute type of operation and the large array of general purpose registers. The registers were the only place these computers could keep their data.
Microprocessors, on the other hand, always were of Princeton (aka Von Neumann) architecture and were built around accessing RAM, because both instructions and data were kept in external memory. At first, all microprocessors were CISC. They only had dedicated registers and many addressing modes. They were all about addressing their memory.
RISC reunited these two philosophies. RISC processors basically are the good old Harvard architecture, but their registers are now also used as memory pointers, so that they can fetch instructions from memory and, here and there, read or write some data.
Modern processors usually are hybrids between RISC and CISC, trying to get the best of both worlds. That comes at a price, because a pure RISC processor, even with caches and pipelines, is much simpler and needs fewer transistors. That means less heat and power consumption, and it also reduces the programmer's brain damage.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
The so-called Advanced "RISC" Machine also has push and pop instructions that take a list of registers as operands to push or pop. I can't think of any definition of RISC that makes that make sense.
|
|
|
|
|
Pipelines and caches have watered down that principle a little. If they are efficient enough, you can afford such un-RISKy things while sacrificing the other principle of keeping the design simple. There are times when I wish I had such an instruction on my old CDP1802, but then again I can also run that processor on batteries for months. Simplicity has its advantages.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: RISC is not what most people think it is. The instruction set is not reduced to a minimal set of instructions. That should be obvious from its name: "Reduced Instruction Set computer" can't possibly have anything to do with a reduced instruction set ...
Or: it started out that way. RISC became high fashion, a marketing concept. RISC is good! But after the first wave of architectures that had a truly reduced instruction set, development went on, adding lots of sometimes very complex instructions (such as 4*4 matrix multiplication instructions). From a marketing point of view, admitting that "the instruction set has grown so complex that it can no longer be called a RISC" was totally out of the question. Rather, "RISC" was redefined to allow for arbitrarily complex instruction set architectures.
During the 80s and 90s, we saw a multitude of alternate RISC redefinitions. Coming to mind are "No microcode", "Single cycle instruction execution" (with the obvious exception for those that could not complete in a single cycle...), "A large number of general registers", "A regular instruction set where a given bit(group) serves the same function in all instructions", "No complex operand address formats", ... Oh, there were more. All of them to draw attention away from the ever more complex instruction set.
Actually, chips like the 6800, and later 68K, satisfied most RISC criteria. As did the PDP-11. But they had been branded as CISC (by RISC adherents), and never succeeded in washing the CISC stain off - they had to develop a new architecture under a new name. At least it helped them take a certain market share for a few years.
I never worked with IA64, but the x86 architecture is such a mess that I never understood how it could survive. And even less can I understand how they make it spin around that fast. No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. Today it seems quite amazing that it didn't take more to implement a complete CISC architecture!)
|
|
|
|
|
Quote: Actually, chips like the 6800, and later 68K, satisfied most RISC criteria. I never used the 6800, but spent a lot of time with the 68000. It very much followed the beaten CISC path, but at least it had arrays of data and address registers that you could use freely in those roles.
The only 8 bit processor that I can think of that was really a RISC processor was the old CDP1802. An 8 bit RISC processor is an unlikely thing, since you are stuck with 256 opcodes. They had to shoehorn in a few instructions at the expense of other practical but not essential ones. Also, branching instructions that carry a full 16 bit address were a problem, because they did not fit into the neat fetch/execute scheme. But look at the programming model! Very few dedicated registers, not even a program counter or a stack pointer, but sixteen 16 bit registers that you can use as you wish. That little processor was a RISC processor before the term was officially invented.
- No microcode? Check.
- Single cycle execution? Well, two cycles is the best you can do without pipelines. Check, except for the already mentioned 'long branch' instructions with three cycles.
- A large number of general registers? Check, but they did not go far enough for my taste. It would have been glorious if they had pulled off the same trick for the accumulator as they did for the program counter or the stack pointer.
- A regular instruction set where a given bit(group) serves the same function in all instructions? Check, except for the shoehorned instructions.
- No complex operand address formats? Check! To access memory, you had to load the address into any one of the working registers and use it as a memory pointer. How you got your address was your business.
Fits the description very well so far. Have a look: CDP1802 handbook[^]
Quote: but the x86 architecture is such a mess that I never understood how it could survive I think that's mostly because it was Intel that defined many people's first impression of what a microprocessor is supposed to be like. Let's begin with the 8080, then the very popular Z80 (which was an improved 8080 bootleg), and then look at the 8086. These processors were the standard, all others were seen as imitations. Not that Intel would ever try to keep that way of thinking alive.
Quote: No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. It's not that bad. If you can believe Wikipedia, then these numbers are more correct:
CDP1802: 5000 transistors. That's a little misleading, because it's a CMOS processor where the MOSFETs always come in pairs.
Intel 8080: 6000 transistors.
68000: 68000 transistors (!).
Intel 8086: 29000 transistors
Intel 80286: 134000 transistors
Intel 80386: 275000 transistors
Intel 80486: 1180235 transistors
Pentium: 3100000 transistors
You have to go up to the later multicore processors to hit 1 billion or more transistors.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: Quote: No wonder it takes a billion transistors to do it. (The 68000 was said to have about that many transistors. It's not that bad. If you can believe Wikipedia, then these numbers are more correct: What I intended to say was that "the 68000 was said to have about 68000 transistors" - as you confirm in your comment.
I am impressed that they managed to do the entire 68K architecture in 68K transistors, rather than a few million. Or a billion.
|
|
|
|
|
The number of transistors is not a good measure. For example, CMOS always uses transistors in pairs. One opens up, the other closes. A current only flows in the brief moment when they switch. That's why CMOS devices go much easier on your phone's battery.
The number of basic logic gates would be a far better metric for the complexity of a processor. Even that could be misleading. Theoretically you could build a processor just with NAND or NOR gates, but you would need more gates than if you went out of your way to use the right type of gate as needed.
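As a small illustration of that last point (plain C standing in for gates, nothing more): an XOR built only from NANDs needs four of them, where a dedicated XOR gate would be a single one.
/* Sketch only: each call to nand() stands for one physical gate. */
static int nand(int a, int b) { return !(a && b); }
/* XOR from nothing but NAND: four gates instead of one dedicated XOR gate. */
static int xor_from_nand(int a, int b)
{
    int t = nand(a, b);
    return nand(nand(a, t), nand(b, t));
}
So a "NAND-only" design and a "right gate for the job" design can differ considerably in gate count while being logically identical.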
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
From my very limited experience of Intel assembly compared to 68K, I would say there wasn't much difference. Assembly always struck me as asking for trouble.
|
|
|
|
|
I don't know about trouble, but it certainly leaves you with no excuses - you can't blame the framework, the compiler, or the optimiser: it's exactly what you told it to do and you can't wriggle out of that!
And while that gives you no-one else to blame, it also gives you incredible freedom - and, if you know what you are doing, wonderful speed and code density to boot.
Yes, it's slower to develop (though debuggers are available for assembler these days, they weren't when I started) and slower to code (you've basically got 8 or 16 variables you actually want to use and the overhead when going beyond that can be extensive). It's certainly harder to maintain than a high level language!
But it's seriously rewarding. When you get a tight bit of assembler running maybe a thousand times faster than the best the compiler can manage, and it provides a true square wave as the data clock instead of the compiler's asymmetric one, it's a big rush!
I don't code in it any more, but ... I do miss it some days.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Like much in life, it has its uses but can shoot you in the foot (or head, depending on what you are doing). It does give you freedom - in my experience, the freedom to drink coffee and draw flow charts to figure out what is or isn't happening. C gives you freedom with only a minor performance hindrance.
It's like cars: Formula 1 has lots of speed and no airbag; a Bentley comes with a cup holder but won't do Monza at the same speed. The last time I coded in it was PIC assembler, for a product that was coded before the company bought a C compiler.
|
|
|
|
|
Nothing like building your own computer and then bringing it to life.
OriginalGriff wrote: Yes, it's slower to develop Slower? Not really. With a little discipline you can make your life much easier. I write libraries for and against everything. That's a universal way to cut development time in any programming environment.
But I'm getting old. Doing more elaborate math in assembly is a mess. Too much going on with the stack and the calls to math routines; you can't see the formula for all the instructions anymore. Here I now use macros more than before. With all that calling and the stack operations out of the way, it's much easier to concentrate on what you are actually trying to do.
Quote: though debuggers are available for assembler these days, they weren't when I started How old are you? Did the processors still need an oil change once in a while? Actually a debugger was the first software I ever bought and that was in 1979. It cost me something like 15$, almost a month's income back then.
Quote: It's certainly harder to maintain than a high level language! Discipline, libraries and macros, again.
Quote: But it's seriously rewarding. Yes, a 'I made fire' moment every five minutes. Extremely addictive.
I made fire![^]
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
It is slower to develop, if only because you have to type more, and have to think more carefully about what you are doing. For example, in C / Z80 code:
for (int i = 0; i < 10; i++)
PUSH BC         ; preserve the caller's BC
LD B, 10        ; B = loop counter
loop:
...
DJNZ loop       ; decrement B, jump back while not zero
POP BC
And if the iteration count exceeds a byte, you have to think about that as well:
PUSH BC
LD BC, 1000     ; 16 bit loop counter
loop:
...
DEC BC          ; DEC BC sets no flags...
LD A, B         ; ...so test BC for zero by hand
OR C
JP NZ, loop
POP BC
But on the other hand, you can copy a string or bytes in one instruction:
copy:
LD HL, source   ; HL = source address
LD DE, dest     ; DE = destination address
LD BC, count    ; BC = number of bytes
LDIR            ; block copy (HL) -> (DE), BC times
deleteOneChar:
LD HL, source+1 ; copy from one byte further on...
LD DE, source   ; ...back over the start, closing the gap
LD BC, count
LDIR
And so on.
Don't get me wrong, I still remember my assembler decades with great fondness - but from a general development POV a high level language lets you get a lot more done in a much shorter timescale, and with a much more maintainable result.
But if you have restricted RAM and ROM (and most of mine was done using 4K RAM, 8K ROM) then assembler is the best way to go!
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
My impression is that this concern for instruction sets from "ordinary" programmers (excluding those writing OS kernels and bottom level drivers) comes from an idea that you will be more clever than the compiler, writing faster code.
You won't. In the proceedings from one "History of programming languages" conference (in the previous millennium) the developers of Fortran II - the first compiler piloting a large number of now "classical" optimization techniques - reported that they repeatedly spent hours understanding "How the elephant did the compiler find out that it could do that?? But it works!"
While I still did some assembly coding, I also did extensive timing to see how much execution time I could save, compared to a high level language. True enough: sometimes, for a very simple, very tight loop, I might be able to increase execution speed for that loop by, say, 30%. Profiling the application typically showed that loop to take a percent or less of the total execution time. For the total run, I almost never could measure any difference, whether I built with my "optimized" assembler or with the HLL code. So I gave up assembly. If it can be done in a HLL, do it in a HLL!
Compilers of today do a lot more clever optimizing than Fortran II. You can't outsmart them. Assembly should be limited to those cases where you can't benchmark it against HLL - because the problem is impossible to solve in a HLL.
If you are not assembly coding, why would you be concerned about RISC vs. CISC? It affects your software about as much as the microcode word length. Or the semiconductor technology. Or the internal buses within the CPU chip. That is: not at all.
|
|
|
|
|
Wrong algorithm = slow program. No compiler can optimize that away. That's where most of the optimization happens. Also, that's why I like my RISC processors so much. There often is only one way to do something, which makes things very straightforward. The clever part is your algorithm.
Quote: Compilers of today do a lot more clever optimizing than Fortran II. You can't outsmart them.
I already had that discussion with a professor long ago. First he thought that I was an arrogant (censored). A few talks later on how I could possibly beat the compiler, he always wanted me to write papers on these ideas. Most of them were quite evil hacks which go against many other holy commandments and dogmas. If used with caution, you can get away with that and go where no compiler can follow.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: Wrong algorithm = slow program. No compiler can optimize that away. That's where most of the optimization happens. Amen. So code that algorithm in a HLL. I can't think of an algorithm that can be coded in assembly only. All algorithms can equally well be coded in a HLL.
Re discussions with professors: When I was teaching at a tech college, "readable" code was one of my main focuses. The basic "Computer Architecture" course was the only one with assembly coding, so that the students could see stuff like registers and instruction sets in real use. To zero AX, you move a zero into AX, right? MOV AX, 0. One of the students insisted that "no real programmers" would do it that way; they would rather use XOR AX, AX, which is faster! That he knew for sure - he wouldn't sacrifice performance for readability! So I dug up the timing diagrams to show him that although he was right for the 8086, where the less readable code could save you a single clock cycle, on the 186, 286 and 386 (which was state of the art at that time) the two instructions required the same number of clock cycles. That didn't move him: he insisted on writing code that would run at maximum speed on the 8086, even though the 8086 at that time was beyond obsolete.
For the hand-ins, this student provided two solutions: One large comment block with some of the dirtiest assembly code I have seen, headed by the text "This is how a REAL programmer would do it:", followed by a block of (not commented out) assembly code that was neat and readable, headed by "But this is how we are forced to write the code in this course:".
I found it kind of cute. Deep down in my old "archives", I still have a photocopy of that hand-in.
|
|
|
|
|
trønderen wrote: All algorithms can equally well be coded in a HLL. Not every device is a state of the art PC with the strongest processor and plenty of memory. Think of the other end of the spectrum, like microcontrollers. Things like serial communication with a terminal in software without a UART, just bit banging two I/O pins.
Or generating a video signal in software. On my oldest computer this is really done that way. The graphics chip only provides the correct timing and the CPU acts as interrupt and DMA controller to provide the video data on the bus just at the moment it is expected.
Such things require very careful timing and a HLL usually does not give you enough control over the resulting code to do that.
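To make the bit-banging example a bit more concrete, here is a minimal C sketch of an 8N1 software transmitter. The two helpers, pin_write() and delay_one_bit(), are hypothetical; on a part this small they would really be hand-counted assembly, which is exactly the point - the compiler gives you no grip on those cycle counts.
#include <stdint.h>
/* Hypothetical hardware hooks - in practice a port write and a delay
 * tuned to exact instruction cycles (about 104 us per bit at 9600 baud). */
extern void pin_write(int level);
extern void delay_one_bit(void);
/* Send one byte as 8N1 serial: start bit, 8 data bits LSB first, stop bit. */
void uart_tx_byte(uint8_t byte)
{
    pin_write(0);              /* start bit pulls the line low */
    delay_one_bit();
    for (int i = 0; i < 8; i++) {
        pin_write(byte & 1);   /* data bits, least significant first */
        byte >>= 1;
        delay_one_bit();
    }
    pin_write(1);              /* stop bit, line back to idle */
    delay_one_bit();
}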
And not all processors do even fundamental things the same way. My old processor, for example, does not have any instructions to call or return from a subroutine. You have to use small procedures with two separate program counters to call a subroutine or to return.
How primitive, right? The processor is just as flexible with the stack pointer as it is with the program counter, so let's use two stacks: a call stack and a parameter stack, to make passing parameters a little less complicated. Or let's add some memory management to dynamically load (or even compile just in time) the requested code module. By the way, that also opens the way to expanding the memory far beyond the usual addressing range of the processor. The page address is stored on the call stack along with the return address. The processor does not notice anything.
You see where this is going. Most high level languages take the usual calling conventions for granted. They would not let me use this processor's abilities other than in the usual way. Leaving the beaten paths is one of the most interesting things a programmer can do and high level languages don't easily let me go there because they are built on these beaten paths.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: Things like serial communication with a terminal in software without a UART, just bit banging two I/O pins. Or generating a video signal in software. Sure, but those are very good examples of the kind of software I was referring to when writing "(excluding those writing OS kernels and bottom level drivers)".
Those I/O pins that you are bit banging are not available in a HLL. You cannot solve that problem without help from low level assembly code. If you can't control timing with sufficient precision (is that a question of algorithm?) in a HLL, then HLL is not a viable option.
If the timing precision is the only argument against using a HLL, then you claim that it is impossible for a compiler to generate the same instructions as those you handcraft using an assembler. I would like to see the arguments for defending that claim. If you say "that won't happen in practice - so the code generated by the compiler doesn't realize the same algorithm as the one I assembly code", then you have made the definition of the algorithm dependent on the compiler: a fully optimizing compiler makes a different algorithm from a non-optimizing one, from the same HLL code. That does not agree with my idea of an algorithm.
CodeWraith wrote: And not all processors do even fundamental things the same way. I once read something by a fellow named Turing, but he could of course be wrong ...
I still maintain: If assembly and HLL are both viable choices, don't go for assembly for performance reasons, use a HLL. You won't beat the compiler.
If it can't be done in a HLL, then don't code it in HLL.
|
|
|
|
|
Quote: If the timing precision is the only argument against using a HLL, then you claim that it is impossible for a compiler to generate the same instructions as those you handcraft using an assembler. I would like to see the arguments for defending that claim.
I can try, but it will not be easy to show you all the traps in this code which a compiler would have to evade.
This is the datasheet of the ancient CDP1861 graphics chip: Datasheet[^]
It's just a year younger than the famous Altair and the ability to add graphics to your computer for about 20$ was a small wonder. Just hook up this chip to your bus, send the output signals to a composite monitor, include a small interrupt routine and you are ready to go. Of course that only works if you have a CDP1802 processor, because these ICs work together closely via interrupt and DMA.
You will find these interrupt routines on the last pages of the datasheet.
The upper part is all about initialisation. It already has some pitfalls, the worst of which is that the graphics chip gives us only a certain number of bus cycles before it starts requesting display data via DMA. If the initialisation is not complete by then, we are already out of sync before we have begun. How is a compiler to know this? Will it read the datasheet? Other devices may give us more or less time.
The real problem comes in the second half, from the DISP label on. The graphics chip has begun to display graphics data line by line. It gets these bytes via DMA, but the CPU never gives up control of the bus. Instead, it adds an additional DMA bus cycle at the end of the current instruction and does the memory addressing itself. The CPU acts like a DMA controller and uses register 0 as the DMA pointer.
The lower part of the interrupt routine is about reducing the vertical resolution. The graphics chip always requests 128 lines every frame. If you repeat each line two or four times, you can reduce the memory requirements of the video buffer and also get better aspect ratios for the pixels.
Again we must execute an exact number of bus cycles per line and at the same time manipulate register 0 while it is also altered by exactly 8 DMA requests per line. Do you know any compiler that could deal with this? And why does it even matter? Because when hardware and software interact this closely, all the knowledge of the instruction set in the world is not enough.
And yes, this CDP1861 is obsolete and has been out of production for 30 years now. It's a museum piece. That did not keep some people from building their own replacements, some even with higher resolutions. But nobody ever even tried to implement any of the interrupt routines in a high level language. And I recently posted them a little graphics library with modified interrupt routines that support double buffering and configurable vertical resolution. And sprites, text output...
Yes, we have a C compiler that I could have used for that. The performance was ok, but the compiled code was about 1/4 - 1/3 longer. Not acceptable on a computer with as little as 4k memory. Just as I said before. Not everything you can program has the resources of a state of the art PC.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I have been working with embedded processors for about ten years, and was heavily involved in the core bare-bones software when we switched from 8051 to Cortex M0. Even on the 8051, only a few core functions for the hardware interface were assembly code; less than a handful of coders managed it. The rest was C. The M0 was similar: very few of the developers of e.g. ANT or Bluetooth protocols ever touched assembly functions.
As we progressed to more advanced ARM variants, and even more so to more advanced on-chip peripherals, the tiny group of programmers handling assembly coded core functions stayed the same. The protocol and application group grew quite a lot, but none of them needs to know the instruction set of the M33/M4s we are using nowadays.
We are currently in a transition from our proprietary bare-bones monitor, written almost entirely in C, to an open-source embedded OS written in C, with only very low-level, architecture dependent drivers in assembly. I would guess that 99+% of our system-on-chip code is C. And 99.99% of the application code for the SoC is C, C++ or other HLLs.
We are still talking about SoCs with 64Ki RAM, 256Ki flash - but not 30 years obsolete 4Ki/16Ki units. Nor are we talking about the need for the CPU to regularly refresh dynamic RAM, relate to magnetic core memory or synchronize to mercury memory tubes.
Where do we draw the limit for what is relevant today? At mercury tubes? At 74 chips? Should 74 be forgotten, but CDP18xx taken as a relevant influence on the choice of assembly vs. HLL code development?
There are two primary ways of getting old. Either you can turn into a grumpy old man, like Jeff Dunham's Walter, or you may lean back, saying, "Oh well, if that is the way the next generation wants it, then let'em!" So let them have agile and github and google appstore and facebook and whathaveyou. For the part which is software development, it is HLL, whether you condone or condemn it.
My practical experience is that for embedded code, once you have got the (very limited) assembly functions required for hardware interfacing, C and other HLLs are most certainly suitable even for embedded programming.
|
|
|
|
|
Quote: Where do we draw the limit for what is relevant today? At mercury tubes? At 74 chips? Should 74 be forgotten, but CPD18xx taken as relevant influence on the choice of assembly vs. HLL code development? Draw the line at the day Moore's law finally fails. Technology may stagnate, the expectations will not. Many old approaches come back when there is no more easy way out. It ain't over until the fat lady sings.
Quote: There are two primary ways of getting old. Either you can turn into a grumpy old man, like Jeff Dunham's Walter, or you may lean back, saying, "Oh well, if that is the way the the next generation wants it, then let'em!" So let them have agile and github and google appstore and facebook and whathaveyou. For the part which is software development, it is HLL, whether you condone or condemn it. I have often enough profited from those who are helpless without their tools, frameworks and compilers. So, produce more of them by all means.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I beg to differ: here are the execution times of the same functions, in microseconds, with the same algorithm programmed in C and in assembler using SSE, on an i5-3610 (averages over 10000 repetitions with adequate cache clearing between tests):
Function         C (us)    Assembler (us)
------------------------------------------
F1              333.297          209.641
F2              804.771          219.726
F3             1441.889          280.273
F4             1452.625          281.373
F5             1435.306          658.708
F6             1450.495          663.955
F7             1439.217          596.668
F8             1454.818          612.861
The only one with just a minor improvement is the first, which is a simple memcpy. Code was compiled with VS2008, but tests with 2015, 2017 and even Intel's own compiler gave the exact same running times.
And this is on a modern CPU. I won't even talk about embedded programming in realtime systems, where you have a 40 MHz microcontroller managing the PWM control of a three-phase motor with a resolution of 125 microseconds AND handling communication on the CAN bus plus the control system.
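For readers who have not seen this kind of gap before, here is a rough sketch of the sort of data-parallel rewrite involved - using SSE intrinsics in C rather than the hand-written assembly measured above, and with made-up function names. It processes four floats per iteration instead of one; a modern compiler may well auto-vectorize a loop this simple, but it shows where the headroom comes from.
#include <xmmintrin.h>   /* SSE intrinsics */
#include <stddef.h>
/* Plain C version: one float per iteration. */
void add_arrays_c(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}
/* SSE version: four floats per iteration. Assumes n is a multiple of 4 and
 * 16-byte aligned pointers; a real routine would also handle the tail. */
void add_arrays_sse(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(&a[i]);
        __m128 vb = _mm_load_ps(&b[i]);
        _mm_store_ps(&dst[i], _mm_add_ps(va, vb));
    }
}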
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|