|
(c) is related to (a) because some of the pieces would be a great soundtrack for rabbit porn.
|
|
|
|
|
wrote: a.) Greta Thunberg
Misguided autistic teenager who's been used and abused by the climate change hysterics.
wrote: b.) Climate
Been changing for 4 billion years. Solar activity is the direct correlation and cause, not carbon. The simple proof for this is that not a single carbon based model has accurately predicted historical numbers.
wrote: c.) Corona
A virus that is 99.9% survivable for anyone who has yet to reach human life expectancy (CDC statistics)
========
All three have been used by the power elite to take more and more control and freedom from the proletariat.
|
|
|
|
|
Oh look, another self-credentialed omni-scientist here to tell everyone how things really are.
|
|
|
|
|
Actually someone who does his own research looking for the raw numbers and ignoring the media and hype machine.
|
|
|
|
|
Of course you did.
And because you know everything about every scientific discipline there is, your totally unbiased internet research is equivalent to the expertise of people who have dedicated their careers to this stuff.
How lucky we are to have you in our presence.
|
|
|
|
|
Depends if Corona is the virus or the beer.
|
|
|
|
|
|
Sales driven tech-speak?
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
Maybe, but in the world of computer technology you should not belittle forecasts of the future!
Get me coffee and no one gets hurt!
|
|
|
|
|
Cp-Coder wrote: My neurons are smoking!
I'd like to know what that guy Woolley is smoking. He's living in la la land.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Speaking just as an individual...
All I know is that hard drive capacity seems to have stalled over the last few years. Capacity (for real disks, that exist right now, that I can buy today) still seems to only be increasing 2TB at a time. I have some 10TB drives, and all I've seen showing up on the market since are 12, 14, 16, 18 and 20TB drives. Anything larger has the physical dimensions of a pair of drives RAIDed together in an enclosure (yuck).
There was a point in time when every increase in storage capacity was a doubling. Where are the 32TB drives? The 64TB ones? Are the spinning-drive makers waiting for SSDs to catch up in capacity? 'cuz when that happens, it'll be game over for them...
Zettabytes, my ass. At this rate, I guess I should be grateful if a petabyte drive ever becomes a common consumer device in my lifetime.
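The stalled-growth complaint above can be put in numbers. A throwaway sketch, using only the capacities the post itself names:

```python
# Sketch: the doubling cadence the post remembers vs. the fixed 2TB
# steps it observes today (capacities in TB, from the post's examples).
doubling = [1]
while doubling[-1] < 32:
    doubling.append(doubling[-1] * 2)

linear = list(range(10, 22, 2))  # 10, 12, 14, 16, 18, 20 TB

print(doubling)  # [1, 2, 4, 8, 16, 32] - six steps reach 32 TB
print(linear)    # [10, 12, 14, 16, 18, 20] - six steps gain only 10 TB
```

Same number of product generations, wildly different endpoints; that is the whole gripe in two lists.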
|
|
|
|
|
dandy72 wrote: if a petabyte drive ever becomes a common consumer device Just the thing to store that every-second-of-every-day-of-your-life video your parents started when you were born.
Software Zen: delete this;
|
|
|
|
|
Here's one: Core i9 12900K processor review - Introduction[^]
According to that review, it is pretty darned fast. A bit faster than the 5950X in most single-threaded stuff and a bit slower in most multi-threaded stuff. Hopefully this will heat up the competition and drive prices lower. I hope it happens with GPUs too.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
All tech youtubers will switch from promoting AMD to promoting Intel.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
I am not so sure. This really wasn't a slam-dunk for Intel, although some sites (like Ars Technica) think it was. Regardless of the facts, you are likely to be right because there are an awful lot of Intel fan-boys around.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
These testing guys ought to set up a double-blind test: a KVM switch controlled by a PC with a random number generator, so that no one knows which machine the user is working against until that KVM control PC reveals its logs. The software on the machines should include useful applications, but no benchmarking software. Maybe (stop)watches should be treated as benchmarking tools as well.
If you can't tell which machine is faster from the "experienced performance" when using ordinary production software (i.e. not through any measurement giving you a numeric figure), then you certainly do not need to upgrade your PC, if your current one is, according to benchmarks, at the level of the slower one.
I will not believe people who claim to easily and reliably notice a performance difference of less than 10%. The experience of performance is very strongly affected by psychological factors, expectations and placebo. If all such factors are eliminated in a double-blind setup, you would be in a similar situation to HiFi freaks who claim to hear differences in sound quality from quite small differences in bitrate ... when they can read the bit rate from a display. If they cannot, they are about as good as a die at telling which of two sound samples has the higher bit rate.
I am quite sure that the situation for CPU performance would be very similar. Personally, I think a 10% difference is well within die-throwing reliability - but if I say 30%, I am sure that a lot of gamers will scream in protest. Like when I tell the HiFi freaks that they won't be able to hear the difference between 128 and 160 kbps AAC+. They insist that they can - and when the ABX test program says that a die scores just as well, they insist that the ABX program is broken.
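The double-blind protocol described above fits in a few lines. This is only an illustration, not real test harness code; the guesser callback and trial count are made up:

```python
import random

def run_blind_trials(guesser, n_trials=20, seed=0):
    """Double-blind sketch: a hidden random draw picks the machine for
    each session; the guesser experiences the session but never sees a
    number, then names the machine. Returns the fraction correct."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        truth = rng.choice(["fast", "slow"])  # hidden assignment
        correct += (guesser(truth) == truth)
    return correct / n_trials

# Someone with no real perceptible cue is effectively flipping a coin:
coin = random.Random(1)
accuracy = run_blind_trials(lambda _session: coin.choice(["fast", "slow"]))
# A genuinely perceptive guesser would score near 1.0 instead.
```

If the blinded accuracy hovers around 0.5, the claimed ability to "feel" the faster machine is indistinguishable from chance, which is exactly the post's point.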
|
|
|
|
|
trønderen wrote: If you can't tell which machine is faster from the "experienced performance" when using ordinary production software
What is ordinary? As an embedded developer I tend to use Yocto quite a bit and, trust me, on 5-7 hour builds it makes a lot of difference. The same goes for rendering.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
I am still reasonably sure that if your build cluster had its speed raised by 10% without telling anyone, it wouldn't cause everybody to go 'Wow!'. Or, if two build clusters were available, one 10% faster than the other, you couldn't tell which of them your five-hour build had been run on without some sort of timing - read: benchmarking. And: my experience of five-hour builds is that the build artifacts and logs are available when I arrive at work the next morning. If the log reveals that the build was completed at 3:30 am rather than at 3:00, it doesn't affect my experience at all. (If the build cluster is so overloaded that my build is not completed by 08:15 am, we certainly need more than a 10% upgrade!)
Maybe you have extremely stable build times. My experience is that the wall-clock time for a build to complete easily varies by more than 10% from one build to the next. Obviously, I am assuming fairly identical, clean and complete rebuilds. There will be a few source code updates - that's why you rebuild - but they are not responsible for any 10% variation in build time. Even if you rebuild with no source code change at all, 10% shorter or longer wall-clock times before the job completes is perfectly normal.
The context for my remark was the Core i9 12900K (with lots of references to the Ryzen 9 5950X) - not a typical processor you would choose for a build cluster. The typical application area is for interactive use, read: gaming. A gamer won't be able to distinguish between 100 and 90 frames per second, or between 50 and 45 ms response time. Or for other kinds of use: That huge file copying task completing in 20 or 18 seconds. Or your video being rendered in two minutes or one minute and 48 seconds.
Maybe you could tell by checking your watch, i.e. benchmarking it. But if a speedup (or slowdown) of 10% may or may not be activated, and you were to guess whether it is active or not, with no access to timing devices of any kind, do you think you could tell from the experience of a single run (of a game, a video rendering, a database query, ...)? From ten successive runs?
Methinks not. Neither from interactive user interfaces nor system builds.
|
|
|
|
|
The only time I've seen a truly notable improvement in build times was when we moved from a 6-drive RAID 5 array to a 24-drive RAID 5 array. Our 75-90 minute builds came down to under 20. We'd see further improvement using SSDs, obviously.
I'm not sure how RAID works with SSDs. We use RAID 5 mostly to increase capacity and partly for error recovery. I've replaced each of the drives in our older build machine at least once, and a couple of them two or three times. I've never had to re-image the machine after a drive failure yet.
Software Zen: delete this;
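For what it's worth, the capacity side of RAID 5 mentioned above is simple arithmetic: one drive's worth of space goes to parity. A minimal sketch (the 4TB drive size is an assumption for illustration):

```python
def raid5_usable(drives: int, size_tb: float) -> float:
    """RAID 5 stripes data plus parity across all drives; one drive's
    worth of capacity is spent on parity, so usable space is (n-1)*size."""
    if drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drives - 1) * size_tb

print(raid5_usable(6, 4.0))   # 20.0 TB usable from a 6 x 4TB array
print(raid5_usable(24, 4.0))  # 92.0 TB usable from a 24 x 4TB array
```

The bigger array also spreads I/O across four times as many spindles, which is the likelier source of the build-time win than capacity itself.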
|
|
|
|
|
I can't tap my AMD out in terms of pegging all the cores. It's too damned fast, but single-thread performance could be better, IMO. I peg one of the cores pretty routinely when I'm developing. It's unfortunate that compilation can't be more of a parallel operation. Too much interdependent mess. I mean, you can get around some of it in C# or in C and C++ by using a GLR parser on the front end, but that still leaves tree selection from the GLR process, and type/member resolution. At best, with GLR, you could brute-force try each tree on a different core, but you really can't parallelize beyond that. Meh.
Real programmers use butterflies
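The "too much interdependent mess" point can be shown with a toy dependency graph: only units whose dependencies are already built can compile in the same wave. The module names here are invented purely for illustration:

```python
# Invented module names; real compiler pipelines have messier graphs.
deps = {
    "lexer": [],
    "parser": ["lexer"],
    "typecheck": ["parser"],
    "codegen": ["typecheck"],
    "utils": [],  # independent, so it can build alongside the lexer
}

def build_order(deps):
    """Group units into waves; everything within a wave could be
    compiled in parallel, but the waves themselves are sequential."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = sorted(u for u, d in deps.items()
                      if u not in done and all(x in done for x in d))
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(wave)
        done.update(wave)
    return waves

print(build_order(deps))
# [['lexer', 'utils'], ['parser'], ['typecheck'], ['codegen']]
```

No matter how many cores you have, the long chain lexer → parser → typecheck → codegen runs one wave at a time; only `utils` gets a free ride.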
|
|
|
|
|
The stuff I am doing now can (practically) peg as many cores as I allow it to. I am using a 32-core Threadripper machine right now; I set my solver to use 61 threads and it pegs them all. We reserve the three other threads for communication and UI. This is really fun stuff. I wrote it to use either the CPU or a GPU, and it is very interesting. My 3090 GPU (with a 5900 CPU) is slightly faster than this Threadripper, but a 24-core EPYC CPU using 45 threads is faster than both of them. We have had two machines with EPYC CPUs on order since June, and there is no delivery date in sight.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
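The "reserve a few threads for UI and communication" scheme described above is easy to sketch. The reservation count of three comes from the post; everything else is an assumption:

```python
import os

def solver_thread_count(reserved: int = 3) -> int:
    """Leave a few hardware threads free for UI and communication
    (three, as described above); use the rest for the solver."""
    total = os.cpu_count() or 1  # cpu_count() may return None
    return max(1, total - reserved)

# On a 32-core/64-thread Threadripper this yields 61 solver threads.
```

Sizing the pool once at startup like this avoids oversubscription, which is usually what keeps "pegged" cores productive rather than thrashing.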
|
|
|
|
|
honey the codewitch wrote: I can't tap my AMD out in terms of pegging
Your choice of words is... disturbing. And strangely arousing. But mostly disturbing.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
For the record, you are the one that took it there, not me.
Just for anyone taking notes.
Real programmers use butterflies
|
|
|
|
|
Does CPU speed make that much difference? In what types of use does it really matter?
Don't the RAM, disk drives, etc., make a difference to overall computer performance?
My machines have always run faster than I can think.
Zaphod
|
|
|
|
|
It can; it largely depends on the nature of the problem. There is always some limiting factor involved, and all of those you mentioned can be possibilities.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|