|
Shouldn't that be like a few hours' work in this day and age?
And shouldn't they do that BEFORE you show up for your first day at work?
If it were me I'd have another job after a week, because things probably aren't going to get better after that.
|
I work for another three-letter organization, newly half-American, which produces cars. I'm basically in the same situation, except that I started on Sept 24th and have an urgent release to be done by Oct 31.
I can't even access the SVN server.
Nor do I have permissions to unlock most of the functionalities of the very program I'm developing.
GCS d-- s-/++ a- C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- ++>+++ y+++* Weapons extension: ma- k++ F+2 X
|
Do you find pressing F5 refreshing?
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
Yes! Effing refreshing!
"If we don't change direction, we'll end up where we're going"
|
Very re-F-F-F-F-F-reshing
Thanks & Regards
Puneet Goel
Save Paper >> Save Tree >> Save Humanity
|
Unless there's an ALTernate method at hand. The key to this is to ESCape from the hum-drum responses and maintain ConTroL.
By the way, are wireless people aware of the Ctrl-F5 true refresh?
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
W∴ Balboos wrote:
By the way, are wireless people aware of the Ctrl-F5 true refresh? A true refresh is only achieved by rebooting.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
You're out of Ctrl, dude!
/ravi
|
You're pretty spaced out yourself, Ravi!
"If we don't change direction, we'll end up where we're going"
|
Yeah, I was thinking the same thing. Maybe he should go Home early and have a lovely beverage like Tab[^] and watch some F1 racing.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
How do I Esc from this thread? I'd like this to End, please.
|
No - the very action of pushing down the F5 key is depressing!
Socialism is the Axe Body Spray of political ideologies: It never does what it claims to do, but people too young to know better keep buying it anyway. (Glenn Reynolds)
|
I'm sure it will bounce back
|
It does but I repress it.
|
Yes
And as a side note, I found a use for AutoHotKey today. I use Navicat for connecting to our Postgres databases, and its keyboard shortcut for running a query is Ctrl+R. I'm used to the usual F5 to refresh, so I used AHK to send Ctrl+R in Navicat when pressing F5.
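For anyone who wants the same trick, here's a minimal AutoHotkey (v1) sketch. The process name "navicat.exe" is my assumption, not something from Navicat's docs - check the real name with AHK's Window Spy tool.

    ; Remap F5 to Ctrl+R, but only while Navicat is the active window.
    ; "navicat.exe" is an assumed process name - verify it with Window Spy.
    #IfWinActive ahk_exe navicat.exe
    F5::Send ^r          ; ^r = Ctrl+R, Navicat's run-query shortcut
    #IfWinActive         ; end of the context-sensitive section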
|
One of the things I like about Live Mail is that Send/Receive is by default bound to F5 - with Outlook it was sodding F9!
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
I prefer CTRL-F5 as it satisfies the freak in me.
|
Not as refreshing as a vacation would be. But that's probably just a | dream.
|
Finding a hundred different benchmarks comparing the performance of various smartphones is easy. It is a lot harder to find comparisons with traditional desktop CPUs, or, for the GPU, with desktop graphics cards of various classes.
Obviously, a smartphone processor cannot consume 50-100 W of power (or more, for extreme desktop/gaming PCs), so you can't expect the performance to be at the same level. Yet it is well known that ARM cores give a lot of performance per watt, usually better than the x86/x64 family.
That aside: in absolute performance, if you port a "heavy" classical desktop application to a smartphone app, keeping the same algorithms etc., and run the smartphone at max performance without worrying about battery life, how would it compare to a modern desktop CPU and graphics card?
Since I am mostly curious about CPU/GPU performance, I assume that the desktop PC used for comparison has a flash disk like the smartphone, no power-saving features reducing performance, and so on.
I guess the results would depend a lot on the kind of task, e.g. whether it is CPU-bound, GPU-bound, or I/O-bound, and how well the code can utilize multiple cores. So I am not expecting a single numeric factor for the relative performance. I am looking for benchmarks showing the performance factors of various classes of workloads, on PCs and on smartphones. Where can I find that?
|
Back in the early days of the ARM processor, the company used to show it off to potential customers by demonstrating it performing the same task as a Pentium processor.
The only difference was that the ARM was powered from the waste heat emitted by the Intel device ...
It's a RISC chip (in theory; have a look at the instruction set some day and you may start to doubt that), and RISC designs are generally faster and more efficient than the traditional CISC devices fitted to desktops.
But ... you are comparing apples and oranges to a large extent: the OS running on the chip makes a HUGE difference to perceived performance (compare a Linux setup to a Windows 10 one on similar hardware and you'll see what I mean), and smartphone OSes are generally tightly coupled to the hardware they run on, unlike desktop systems, which have to cope with a huge variety of hardware environments. And GPUs are different too - smartphones don't have or need the kind of processing power a modern PC graphics card has (heck, the latest Nvidia devices have 46 streaming multiprocessors - thousands of cores - and 8 GB of RAM!).
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
OriginalGriff wrote: It's a RISC chip (in theory; have a look at the instruction set some day and you may start to doubt that), and RISC designs are generally faster and more efficient than the traditional CISC devices fitted to desktops. A common misconception is that it's the number of instructions that is being reduced. It's actually the number of addressing modes, and the number of variants of each instruction that use those addressing modes, that are reduced.
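To make the addressing-mode point concrete, here's an illustrative C snippet (the assembly in the comments is sketched from memory for a typical compiler and ABI, not taken from this thread): one array access folds into a single x86 instruction thanks to its rich addressing modes, while a classic load/store RISC spells the address arithmetic out.

    /* One C expression, two very different instruction sequences.
       Illustrative compiler output; exact registers depend on the ABI. */
    int element(int *a, long i)
    {
        return a[i];
        /* x86-64: one instruction, using a base + index*scale mode:
               mov  eax, [rdi + rsi*4]
           MIPS-style RISC: the address is built with ordinary instructions,
           because only a simple register+offset load exists:
               sll  $t0, $a1, 2        # scale the index by 4
               addu $t0, $a0, $t0      # base + scaled index
               lw   $v0, 0($t0)        # plain load */
    }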
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
For all practical purposes, that is the situation today. The CISC addressing modes had grown into a huge mess as the various processor families evolved (I will not mention x86 in particular).
If you go back in history, RISC meant Reduced INSTRUCTION SET Computer, not Reduced ADDRESSING MODES Computer. But RISC was about more than a reduced number of instructions: there was also a reduction in the number of instruction formats - all instructions being x bits wide, all having the operand spec in the same bits, and so on. The regularity was just as important as the count. It led to far more direct hardware decoding of the instruction and operand codes into signal lines within the CPU, avoiding (possibly multiple layers of) microcode decoding and making faster instruction execution possible.
Now, ARM is certainly not as regular as the classical RISC chips (or, for that matter, the 68K). And when microprocessors started adopting pipelining, speculative execution, etc., and interrupt handling became more sophisticated, the ideal of direct decoding from instruction code bits to internal signals began breaking down. You don't see very many references to RISC architectures today, because very few chips follow the RISC principles of the 1980s and 1990s.
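To make the "operand spec in the same bits" point concrete, here is a small C sketch. It uses the MIPS I R-type layout as the example (MIPS being the textbook 1980s RISC; my choice, not from the thread): because every field sits at fixed bit positions in every instruction, decoding is nothing but constant shifts and masks - in hardware, essentially just wires.

    #include <stdint.h>
    #include <stdio.h>

    /* MIPS I R-type format: opcode[31:26] rs[25:21] rt[20:16]
       rd[15:11] shamt[10:6] funct[5:0]. Fixed width, fixed fields:
       no length decoding, no microcode lookup. */
    typedef struct { unsigned opcode, rs, rt, rd, shamt, funct; } RType;

    static RType decode_rtype(uint32_t w)
    {
        RType d;
        d.opcode = (w >> 26) & 0x3F;
        d.rs     = (w >> 21) & 0x1F;
        d.rt     = (w >> 16) & 0x1F;
        d.rd     = (w >> 11) & 0x1F;
        d.shamt  = (w >>  6) & 0x1F;
        d.funct  =  w        & 0x3F;
        return d;
    }

    int main(void)
    {
        RType d = decode_rtype(0x012A4020);   /* add $t0, $t1, $t2 */
        printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u funct=%u\n",
               d.opcode, d.rs, d.rt, d.rd, d.shamt, d.funct);
        return 0;
    }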
|
Member 7989122 wrote: If you go back in history, RISC meant Reduced INSTRUCTION SET Computer, not Reduced ADDRESSING MODES Computer. But RISC was about more than a reduced number of instructions: there was also a reduction in the number of instruction formats - all instructions being x bits wide, all having the operand spec in the same bits, and so on. The regularity was just as important as the count. It led to far more direct hardware decoding of the instruction and operand codes into signal lines within the CPU, avoiding (possibly multiple layers of) microcode decoding and making faster instruction execution possible. I see that more in microcontrollers, where instruction memory did not need to be organized in bytes and any number of bits could be used as the instruction word size. That way the processor only had to fetch one instruction word of n bits instead of several bytes, and could execute most instructions in a single cycle.
Did you ever see an 8-bit RISC CPU (as opposed to a microcontroller)? I still like to program on the granddaddy of all RISC (and CMOS) CPUs, and I can assure you that the addressing modes are extremely reduced there. Some people went so far as to call it one of the earliest and most radical RISC implementations ever. I think this spartan design was not due to a radical design philosophy; it was probably down to the low number of gates available on the die, since it was also an early CMOS design and CMOS gates need transistor pairs.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
I can hardly think of even a 16-bit RISC CPU! You could, at least to a certain degree, say that the RISCs cleared the way for 32-bit microprocessors: due to their lower architectural complexity (including, as you point out, addressing modes), it was possible to fit a 32-bit CPU on a single chip with the technology of the 1980s.
There were microprocessors labeled as CISC which had far more regular, simpler addressing modes than the x86. When I see what modern RISCs have come to, I repeat once more: the M68K was as close to a RISC as a CISC could possibly get. If you consider it somewhat RISCy: the first models had external 16-bit buses (or even 8-bit, for the 68008), but the internal architecture was 32-bit from day 1.
|
I loved the 68K back in the day. It was an actual pleasure to write programs for it in assembly language.
|