Unless there's an ALTernate method at hand. The key to this is to ESCape from the hum-drum responses and maintain ConTroL.
By the way, are people aware of the Ctrl-F5 true refresh?
Ravings en masse^
---
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
---
W∴ Balboos wrote:
By the way, are people aware of the Ctrl-F5 true refresh?
A true refresh is only achieved by rebooting.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
---
You're out of Ctrl, dude!
/ravi
---
You're pretty spaced out yourself, Ravi!
"If we don't change direction, we'll end up where we're going"
---
Yeah, I was thinking the same thing. Maybe he should go Home early, have a lovely beverage like Tab[^], and watch some F1 racing.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
---
How do I Esc from this thread? I'd like this to End, please.
---
No - the very action of pushing down the F5 key is depressing!
Socialism is the Axe Body Spray of political ideologies: It never does what it claims to do, but people too young to know better keep buying it anyway. (Glenn Reynolds)
---
I'm sure it will bounce back
---
It does but I repress it.
---
Yes
And as a side note, I found a use for AutoHotKey today. I use Navicat for connecting to our Postgres databases, and its keyboard shortcut for running a query is Ctrl+R. I'm used to the usual F5 to refresh, so I used AHK to send Ctrl+R in Navicat when pressing F5.
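That remap can be sketched in a few lines of AutoHotkey (v1 syntax); the process name `navicat.exe` is an assumption here - check the actual one with AHK's Window Spy tool:

```autohotkey
; Remap F5 to Ctrl+R, but only while Navicat is the active window.
; "navicat.exe" is an assumed process name - verify with Window Spy.
#IfWinActive ahk_exe navicat.exe
F5::Send ^r          ; F5 now sends Ctrl+R (run query)
#IfWinActive         ; end of the context-sensitive hotkey section
```

Scoping the hotkey with `#IfWinActive` keeps F5 working normally everywhere else.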
---
One of the things I like about Live Mail is that Send/Receive is by default bound to F5 - with Outlook it was sodding F9!
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
---
I prefer CTRL-F5 as it satisfies the freak in me.
---
Not as refreshing as a vacation would be. But that's probably just a | dream.
---
Finding a hundred different benchmarks comparing the performance of various smartphones is easy. It is a lot harder to find comparisons with traditional desktop CPUs, or for the GPU: desktop graphic cards of various classes.
Obviously: A smartphone processor cannot consume 50-100 W of power (or more, for extreme desktop/gaming PCs), so you can't expect the performance to be at the same level. Yet, it is well known that the ARM cores give a lot of performance per watt, usually better than the X86/X64 family.
That aside: In absolute performance, if you port a "heavy" classical desktop application to a smartphone app, maintaining the same algorithms etc, and run the smartphone at max performance without worrying about battery life, how would it compare to a modern desktop CPU and graphics card?
Since I am mostly curious about the CPU/GPU performance, I assume that the desktop PC for comparison has a flash disk like the smartphone, no power saving features reducing performance etc.
I guess that the results would depend a lot on the kind of task, e.g. whether it is CPU-bound, GPU-bound, or I/O-bound, how well the code can utilize multiple cores, etc. So I am not expecting a single numeric factor for the relative performance. I am looking for benchmarks showing the performance factors of various classes of workloads, on PCs and on smartphones. Where can I find that?
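Lacking published cross-device numbers, one pragmatic option is to run the same small CPU-bound workload on both machines yourself (on Android, e.g. under Termux). A minimal sketch in Python; the workload and iteration count are arbitrary choices, not a standard benchmark:

```python
import time

def cpu_workload(n: int) -> int:
    """A purely CPU-bound task: integer arithmetic in a tight loop."""
    total = 0
    for i in range(1, n):
        total += i * i % 1000003   # keep the CPU busy with integer math
    return total

def benchmark(n: int = 2_000_000) -> float:
    """Return wall-clock seconds for the workload on this machine."""
    start = time.perf_counter()
    cpu_workload(n)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Run the same script on the desktop and on the phone, then compare times.
    print(f"elapsed: {benchmark():.3f} s")
```

The ratio of the two elapsed times gives a rough single-core factor for this one class of workload; GPU-bound and I/O-bound tasks would need their own tests.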
---
Back in the early days of the ARM processor, the company used to show it off to potential customers by demonstrating it performing the same task as a Pentium processor.
The only difference was that the ARM was powered from the waste heat emitted by the Intel device ...
It's a RISC chip (in theory, have a look at the instruction set some day and you may start to doubt that) and they are generally faster and more efficient than the traditional CISC devices fitted to desktops.
But ... you are comparing apples and oranges to a large extent: the OS running on the chip makes a HUGE difference to perceived performance (compare a Linux setup to a Windows 10 one on similar hardware and you'll see what I mean), and smartphone OSes are generally tightly coupled to the hardware they run on, unlike desktop devices, which have to cope with a huge variety of hardware environments. And GPUs are different too - smartphones don't have or need the kind of processing a modern PC graphics card will have (heck, the latest Nvidia devices have 46 cores, and 8 GB of RAM!).
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
---
OriginalGriff wrote: It's a RISC chip (in theory, have a look at the instruction set some day and you may start to doubt that) and they are generally faster and more efficient than the traditional CISC devices fitted to desktops.
A common misconception is that it's the number of instructions that is being reduced. It's actually the number of addressing modes, and the number of variants of each instruction that use these addressing modes, that are reduced.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
---
For all practical purposes, that is the situation today. The CISC addressing modes had grown into a huge mess as the various processor families evolved (I will not mention x86 in particular).
If you go back in history, RISC was Reduced INSTRUCTION SET computers, not Reduced ADDRESSING MODES computers. RISC was more than a reduced number of instructions: There was a reduction in the number of instruction formats: All instructions being x bits wide, all having the operand spec in the same bits, etc. The regularity was just as important as the count. It led to far more direct hardware decoding of the instruction / operand codes into signal lines within the CPU, avoiding (possibly multiple layers of) microcode decoding, making faster instruction execution possible.
Now, ARM is certainly not as regular as the classical RISC chips (or, for that matter, the 68K). And when microprocessors started adopting pipelining, speculative execution, etc., and interrupt handling became more sophisticated, the ideal of direct decoding from instruction code bits to internal signals began breaking down. You don't see very many references to RISC architectures today, because very few chips follow the RISC principles of the 1980s and 1990s.
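The decoding advantage of a fixed instruction format can be illustrated with a toy example. Assuming a made-up 32-bit encoding where the opcode and register fields always sit in the same bit positions, decoding is just a couple of shifts and masks - the field layout below is invented for illustration and does not match any real ISA:

```python
def decode(word: int) -> tuple[int, int, int, int]:
    """Decode a toy fixed-width 32-bit instruction word.

    Invented layout: opcode in bits 31-26, rd in bits 25-21,
    rs1 in bits 20-16, immediate in bits 15-0. Because every
    instruction uses the same field positions, decoding needs no
    table lookups or microcode - in hardware it is just wiring.
    """
    opcode = (word >> 26) & 0x3F
    rd     = (word >> 21) & 0x1F
    rs1    = (word >> 16) & 0x1F
    imm    = word & 0xFFFF
    return opcode, rd, rs1, imm

# Example: opcode=1 (say, a hypothetical ADDI), rd=2, rs1=3, imm=100
word = (1 << 26) | (2 << 21) | (3 << 16) | 100
print(decode(word))  # -> (1, 2, 3, 100)
```

A variable-length CISC encoding, by contrast, must first determine the instruction's length and format before it even knows where the operand fields are.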
---
Member 7989122 wrote: If you go back in history, RISC was Reduced INSTRUCTION SET computers, not Reduced ADDRESSING MODES computers. RISC was more than a reduced number of instructions: There was a reduction in the number of instruction formats: All instructions being x bits wide, all having the operand spec in the same bits, etc. The regularity was just as important as the count. It led to far more direct hardware decoding of the instruction / operand codes into signal lines within the CPU, avoiding (possibly multiple layers of) microcode decoding, making faster instruction execution possible.
I see that more in microcontrollers, where instruction memory need not be organized in bytes and any number of bits can be chosen as the instruction word size. This way the processor only has to fetch one instruction word of n bits instead of several bytes, and can execute most instructions in a single cycle.
Did you ever see an 8-bit RISC CPU (as opposed to microcontroller)? I still like to program on the granddaddy of all RISC (and CMOS) CPUs, and I can assure you that the addressing modes are extremely reduced there. Some people went so far as to call it one of the earliest and most radical RISC implementations ever. I think this spartan design was not due to a radical design philosophy. It was probably the low number of gates available on the die, because it was also an early CMOS design and CMOS gates need transistor pairs.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
---
I can hardly think of even a 16-bit RISC CPU! You could, at least to a certain degree, say that the RISCs cleared the way for 32-bit microprocessors: Due to their lower architectural complexity (including, as you point out, addressing modes), it was possible to fit a 32-bit CPU on a single chip, given the technology of the 1980s.
There were microprocessors labeled as CISC which had far more regular, simpler addressing modes than the x86: When I see what modern RISCs have come to, I repeat once more: The M68K was as close to a RISC as a CISC could possibly get. If you consider it as somewhat RISCy: The first models had external 16-bit buses (or even 8-bit, for the 68008), but the internal architecture was 32-bit from day 1.
---
I loved the 68K back in the day. It was an actual pleasure to write programs for it in assembly language.
---
That and all Intel family chips, including those from AMD, have been RISC since the Pentium processor (and possibly the 486 and 386 as well). The first stage of these chips is to convert the x86/64 CISC instruction set into a series of RISC instructions.
---
I know, but externally they behave like CISC processors. When it comes down to writing code in assembly or even machine code, you will quickly learn to value a RISC processor. Intel processors are a pain to write assembly code for by now, even if they are RISC processors somewhere deep in their black hearts.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
---
OriginalGriff wrote: the OS running on the chip makes a HUGE difference to perceived performance (compare a Linux setup to a Windows 10 one on similar hardware and you'll see what I mean)
So assume that the desktop PC runs Linux when it e.g. compiles a million lines of code, or converts from one video format to another, or generates an animation movie from a script, or ... Android is Linux based, so even though a number of adaptations to smartphone hardware have been made, this shouldn't affect pure CPU / GPU performance that much.
Obviously there are lots of different Intel/AMD desktop chips and GPU chips, and there are lots of Snapdragon models. That should not make it impossible to say that an Intel-so-and-so at X GHz running FFmpeg will convert MPEG2 to H.264 x times faster than a Snapdragon-so-and-so at Y GHz, also running FFmpeg! (FFmpeg is available for ARM; I guess that also includes Snapdragon.)
Similarly obvious: Smartphone CPUs/GPUs are specialized for their assumed needs. But a Snapdragon is Turing complete, so in principle it can do anything that a desktop PC can do. The reason why I ask for relative performance under various workloads is to learn what kinds of tasks fit into the smartphones' intended application area (that is, good performance) and which fall outside it (that is, poor performance compared to the desktop PC).
I have several friends who went from desktop PCs to portable PCs to smartphones - they haven't owned a desktop machine for eight years, not a portable for three. They do all their tasks on their phone, even video editing. (Don't ask for my comments on the result of that video editing, though...) Portable PCs have essentially been almost as closed, fully controlled hardware environments as the smartphones (today, you connect the same crowd of USB devices to smartphones as you do to portables). We still can compare performance with desktop PCs. Smartphones gradually take over a far more varied set of tasks, software-wise becoming more and more similar to PCs. Today, the fruit salad is a mix of apples and oranges.
Five years ago we could evade the question of relative performance by pointing out differences in tasks and environment. Today, getting to know hard performance factors is highly relevant. If there aren't any available, it is about time someone started producing them.
---
Member 7989122 wrote: I have several friends who went from desktop PCs to portable PCs to smartphones - they haven't owned a desktop machine for eight years, nor a portable for three. They do all their tasks on their phone, even video editing. (Don't ask for my comments on the result of that video editing, though...)
This is actually a key item in my mind on the difference between the two platforms.
The last phone I had was an HTC One M7. It took pretty good pictures. HTC decided to up its game and made the software better. OK, it now took better pictures. Then worse ones, and now they are nearly worthless. The improved software still works fine; however, the processing power required generated too much heat within the camera sensor, and the sensor now takes all pictures in a wonderful shade of purple. HTC did become aware of the problem and was replacing the module at no cost. The phone was already falling apart and needed to be replaced anyway - which it was.
So while the phone was fully capable of taking and processing high-quality pictures, it was self-destructive in nature because the heat generated exceeded the cooling capacity.
So what is the manufacturer to do? Lower the quality of the resulting image, or throttle the processing?
The answer they came up with, in my eyes at least, was to make the phones bigger so they had higher cooling capacity.
Director of Transmogrification Services
Shinobi of Query Language
Master of Yoda Conditional
---
Member 7989122 wrote:
Obviously: A smartphone processor cannot consume 50-100 W of power (or more, for extreme desktop/gaming PCs), so you can't expect the performance to be at the same level. Yet, it is well known that the ARM cores give a lot of performance per watt, usually better than the X86/X64 family.
Why? A processor does not do any physical work. More than 99% of the power is simply converted to heat, which is not what we want and must even get rid of.
A processor's power requirements depend on the number of transistors and the clock frequency. Leakage is at least one order of magnitude lower, so we can safely ignore it. Assuming I can optimize the processor's hardware implementation and reduce the number of transistors or lower the clock frequency, I might get the same performance for less power. So it's not as simple as judging performance by looking at power consumption.
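The scaling behind this point is the standard dynamic-power relation for CMOS logic, P ≈ α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). A small sketch with invented figures, just to show why lowering voltage and frequency together pays off disproportionately:

```python
def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    """Approximate CMOS dynamic power in watts: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Invented, illustrative figures - not measurements of any real chip.
desktop = dynamic_power(alpha=0.2, c_farads=2e-9, v_volts=1.2, f_hz=4e9)  # ~2.3 W
mobile  = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.8, f_hz=2e9)  # ~0.26 W

# Halving the frequency alone halves the power; dropping V from 1.2 to 0.8 V
# additionally scales it by (0.8/1.2)^2, i.e. roughly 0.44.
```

Because voltage enters squared, a core designed to run slower at lower voltage can deliver much better performance per watt than the same work done at desktop clocks.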
Comparing two processors with very different architectures is very hard. Benchmarks are notoriously misleading. The manufacturers of course like to use benchmarks that favor their architecture. Then there is the problem that many benchmarks represent abstract scenarios that have little bearing on any real application. How many applications have you seen that need as many floating point operations per second as possible? Or the other way around: What 'real' application would be a fair test of any possible processor?
A RISC processor (like the ARM) generally needs fewer transistors, and the reduced instruction set tends to need fewer clock pulses per instruction than a CISC processor. So it's a fast processor, even if you have to run it at a lower clock frequency, right?
Maybe it is, maybe it's not. It may execute more instructions per second, even at a lower clock frequency. On the other hand, it may also need more instructions to do the same thing as a CISC processor. Anyway, a fair test would reveal both strengths and weaknesses of the two processors, so it's your turn to tell me what such a fair test looks like and how we weigh all results into a final figure that tells us that processor A has X percent of the performance of processor B. Any time, any place, any circumstances.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.