|
Those are some cool ideas, and it's very cool that you've set all of that up.
Keep up the good work!
|
|
|
|
|
That looks awesome! We're all about the fun and learning, so keep sharing your photos and projects with us.
|
|
|
|
|
What, no selfie??
(and that kit looks way nicer than the one I have).
Well done! (and that goes to everyone else who had a crack at the challenge)
cheers
Chris Maunder
|
|
|
|
|
Chris Maunder wrote: What, no selfie??
Because selfies are all the rage here on CP with all the ENGINEERING NERDS, right?
I'd get drummed out of The Lounge...
Lounge Denizens would yell: "Take it to the soapbox, freak!"
|
|
|
|
|
We got a new machine in at work. This one is in a very deep 1U package and it has two processor chips, each with twenty hyperthreaded cores. This means it can handle eighty (that's 80) threads simultaneously. WOW!
Unfortunately, I seem to be seeing a bug with the OMP library. It doesn't seem to handle that many threads correctly.
Here's a screenshot from the task manager showing all of those little CPU usage graphs: https://i35.servimg.com/u/f35/17/98/38/10/taskma10.png[^]. I have never seen that many at once.
|
|
|
|
|
It's a bit underutilized ...
Sent from my Amstrad PC 1640
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
OriginalGriff wrote: It's a bit underutilized ...
Just install Symantec antivirus and it will take care of the rest.
|
|
|
|
|
Rajesh R Subramanian wrote: Just install Symantec antivirus and it will take care of the rest.
Ain't that the truth.
|
|
|
|
|
Niiiice.
I recall MS showing Task Manager displaying - I think - 128 individual graphs a few years ago.
Nice to see these sorts of machines are finally starting to be seen "in the wild".
[Edit]
This article from Mark Russinovich is from 2008...seems like MS had 64-CPU systems a decade ago.
|
|
|
|
|
Very interesting! For quite a while, Windows NT had a limit of 32 CPUs. I guess they extended that not long afterward. Back then, multi-CPU machines were a bit different. There was a company called Sequent, later bought by IBM, that made machines with multiple processors on a backplane bus, one processor per card slot.
|
|
|
|
|
Neat. So how many physical CPUs would that handle, total?
I "inherited" a server box a few years back with a second physical processor slot on the motherboard. Some IBM ThinkServer model. The machine had 16GB of RAM, but could handle up to 32 (that's back when that was still considered a lot). The problem is that, strangely enough, if you wanted to make use of that second half of the memory capacity, you had to get a second processor. Which could only be purchased through IBM, and cost more than an entire brand new 32GB system you could put together at the time.
That's the only time I was ever in possession of a multi-CPU machine. So I've never really had the opportunity to decide for myself whether a multi-CPU machine was worth the extra money. I'll stick with multi-core, hyperthreaded single CPUs, I guess.
|
|
|
|
|
Schwing!!
When I see a number as small as 80 cited, I harken back to the days when 5K was a lot of memory.
We're going to see KiloThreads someday.
We'll need another way to monitor them, if monitoring them is even necessary.
|
|
|
|
|
For most computer uses, 80 threads is a solution in search of a problem.
Some engineering and math problems are crying out for massive parallelism, with weather forecasting as the primary schoolbook example. All the top supercomputers in the world are hugely, massively parallel.
But for everyday desktop problems, it is next to impossible to split the task into 80 similar-sized, independent subtasks. One action follows the other, and if you manage to split it into eight or ten action sequences (or threads), most of the time a few of them will be idling, waiting for one of the others to catch up. The more threads you create, the greater the chance that a large fraction of them will be idling.
Then, if you do manage to run 80 threads at full speed for an extended period of time, in most cases they will block on some other resource, probably I/O capacity. When my old university got their first Cray supercomputer in the early 1980s, it didn't last more than a couple of years: the processing capacity of the CPU was more than sufficient for FEM and weather forecasting, but the CPU was idling, waiting for the raw data to come into memory; the I/O channels were not wide enough. Its replacement (a newer Cray) didn't have a much faster CPU, but significantly improved I/O, giving a dramatic improvement in throughput. Look at today's supercomputers: not only do they have massively parallel processing, but also massively parallel I/O. And by building the machine from several thousand processing nodes, the combined RAM access capacity is immense. The individual CPU chips are not very impressive at all.
|
|
|
|
|
My guess is that they're using it for VMs.
|
|
|
|
|
Joe Woodbury wrote: My guess is that they're using it for VMs.
I was going to point that out. I have dozens of VMs running on consumer hardware, and while they're never starved for CPU time, it sure would be nice to dedicate a couple of threads to each VM.
|
|
|
|
|
Member 7989122 wrote: For most computer uses, 80 threads is a solution in search of a problem.
Yes, this is true. In our case, we have a very well-defined problem that benefits nicely from lots of threads. Until now, we couldn't get enough threads in one box, so we distributed the calculations across multiple computers. We hope this one will allow us to use just one box at half the cost of the multiple machines and a small fraction of the rack space - 1U vs. four or five 4U boxes.
|
|
|
|
|
Rick York wrote: Unfortunately, I seem to be seeing a bug with the OMP library. It doesn't seem to handle that many threads correctly.
Are you actually trying to implement a single process with 80 threads...? If so... why?
Hopefully you are just testing to see what happens.
Best Wishes,
-David Delaune
|
|
|
|
|
Maybe I don't use 80 threads, but at times I have quite a few of them. Rendering, UI, message queue, application thread, async calculations, async database queries...
Just a game[^]
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
The single process will use around 65 or 70 threads. We do this because it will be a much more efficient solution than distributing the sixty-plus threads across four or five different machines as we do now. We want to eliminate the steps of distributing the calculation parameters across the network and collecting the results.
|
|
|
|
|
Randor wrote: Are you are actually trying to implement a single process with 80 threads... ? If so... why?
It all depends on what these threads are doing. Here's a real-world example:
I have a tiny utility sitting in my system tray that runs many small WMI queries across my LAN to refresh hardware configuration data from remote machines at startup. The payload is very small, so the LAN can take it, but WMI queries are inherently very slow, so it made sense here to dedicate not only one thread per machine, but one thread per query (each machine runs maybe a dozen WMI queries). Multiply that by a dozen machines, and it very quickly adds up.
What used to be a queued set of queries that took 10+ minutes to complete is now a bunch of threads starting in parallel and all completing within 30 seconds.
[Edit]
Of course this doesn't imply I need an 80-core machine to run this. Just saying it's not all that unreasonable to spawn this many threads, even if just for a limited time.
modified 29-Jun-18 10:36am.
|
|
|
|
|
That's cute.
AMD just gave Intel the finger with the announcement of the 2nd gen Threadripper. 32 cores, 64 threads, and 250W of heat to get rid of.
Drop a couple of those on a motherboard.
|
|
|
|
|
I would love to give one of those a try. I think it would work really well with our application. The application is known in our industry as "primary breakdown optimization." A Google search will turn up lots of results, none of which is us, because we are a privately held company and we do this for internal consumption only. It is a very mathematics-heavy application that gets its input data from laser scanners.
|
|
|
|
|
Ryzen is pretty cool, but it's not very good at 256-bit vector arithmetic (it gets split into two 128-bit operations), so for math-heavy applications it can easily disappoint.
|
|
|
|
|
Dave Kreskowiak wrote: and 250W of heat to get rid of.
I've only ever owned one AMD-based system. I called it the space heater.
I see they still haven't dealt with the one reason I was happy to get rid of it. With an expected high of 35C (excluding humidity) over the weekend, I'll happily continue ignoring AMD's latest offerings.
|
|
|
|
|
I hear that. I've got a 6700K in my machine, running 24x7, and it does a nice job keeping the office warm at 95W(?)
|
|
|
|