|
Mark_Wallace wrote: And you prefer a cloud solution for that?
Yep. Like this High Performance Computing | Microsoft Azure[^]
Mark_Wallace wrote: I can't think of any servers that are particularly optimised for graphical processing, anyway -- they're intended to mainly serve data;
Something like this.
NVIDIA GPU Optimized Servers - Thinkmate[^]
The hardware architecture should combine a cluster of nodes, each specialized for a different part of the workload. I'm no expert in this, actually. Looking to hear from you guys. Thanks.
|
|
|
|
|
Nand32 wrote: Yep. Like this High Performance Computing | Microsoft Azure[^]
Hmm. With something like that, I'd still want a local, heavy-duty server to channel clients into it, so it looks to me like an additional expense.
Nand32 wrote: Something like this. NVIDIA GPU Optimized Servers - Thinkmate[^]
Live and learn.
I'd want to do a lot of testing on how it handles load, how it assigns resources to connected clients, and how it handles clients that are asking too much, before considering going live with it. See if they'll lend you one that you fancy, for a PoC.
The first thing I'd want to know is if you've got 20 clients working on a server, and one of them does something that would freeze, lock-up, or crash a workstation, if the work were being done there, what happens to the other 49 clients?
That's easy (and fun!) to test.
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
I think I'll recommend that the boss go with server consultants like ThinkMate.
|
|
|
|
|
Mark_Wallace wrote: The first thing I'd want to know is if you've got 20 clients working on a server, and one of them does something that would freeze, lock-up, or crash a workstation, if the work were being done there, what happens to the other 49 clients?
Probably those 30 clients magically appearing out of thin air that crashed the system in the first place... damn those pesky magicians and their mysterious ways.
|
|
|
|
|
musefan wrote: Probably those 30 clients magically appearing out of thin air that crashed the system in the first place
I see that you've had solid experience in fault-finding.
(In this case, it's easy, because it's my fault)
|
|
|
|
|
Mark_Wallace wrote: (In this case, it's easy, because it's my fault)
User error. My experience tells me to look at the user first, and then at myself (the code) later (or, more often, not at all).
|
|
|
|
|
Are you sure those numbers are right?
Because 0.5 petabytes / week needs insane transfer rates: call it 7 gigabits per second of sustained upload, and that's some serious bandwidth for cloud!
To be honest, those look like numbers somebody plucked out of the air without thinking too much about them ...
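That rate is easy to sanity-check. A minimal sketch, assuming 0.5 PB means 0.5 × 10^15 bytes and a full seven-day week of continuous upload:

```python
# Sustained bandwidth needed to move 0.5 PB every week.
data_bits = 0.5e15 * 8              # 0.5 petabytes, in bits
week_seconds = 7 * 24 * 3600        # 604,800 seconds in a week
rate_gbps = data_bits / week_seconds / 1e9
print(f"{rate_gbps:.1f} Gbit/s sustained")  # about 6.6 Gbit/s
```

So "call it 7 gigabits per second" is about right, before any protocol overhead.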
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
And that's only for continuously transferring it... it needs to be processed, too.
M.D.V.
If something has a solution... why worry about it? If it has no solution... what's the point of worrying?
Help me understand what I'm saying, and I'll explain it better to you.
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
OriginalGriff wrote: Because 0.5 Petabyte / week needs insane transfer rates: call it 7 gigabit per second upload speed
It could be true. It could be a cumulative estimate of distributed data that doesn't all go to a single server. Some of it is media (e.g. hi-res video, image snapshots, and numerical data).
Think of it as data flowing in from the customers' facilities, through surveillance cameras deployed across the globe.
It's a deep-learning project for video captured in real time. I guess the client is close with the estimate.
But I'm not sure they did any real math to arrive at that number.
|
|
|
|
|
I'd want to work out the "real math" on that one, and then add a margin on top. You're talking about serious networking regardless of cloud vs. in-house (and really serious for cloud access), and the infrastructure for that is big money before you even count the storage and processing hardware. We aren't talking about doing this over a wireless link (or even 5G; that's technically just about capable, but in the real world you'll get nothing like the advertised rates).
I'd treble-check that the numbers are real before going any further: a tiny error has major consequences at this kind of scale.
|
|
|
|
|
I'm dialing this company. They usually put us through a questionnaire to grasp the requirements. I'm going to link the customer up with them and get it straightened out.
Application-Ready Solutions - Thinkmate[^]
|
|
|
|
|
Right.
But if those numbers are correct... that's exactly why the cloud is a non-starter: they'd be blowing their budget on bandwidth before getting anything else done.
|
|
|
|
|
Yep. You'd be talking about a 10 gigabit EAD leased line, with fibre optic direct to your building. Depending on where you are, that'll be expensive. As in "cheaper to build a new building somewhere else" expensive, I suspect.
|
|
|
|
|
OriginalGriff wrote: As in "Cheaper to build a new building somewhere else" expensive, I suspect
I've got it - move the building into one of the big cloud providers' data centers.
Save the bandwidth, the data never needs to be exposed to the public internet, and it's still cloud-based. Win-win-win.
|
|
|
|
|
That's brilliant!
Solves all the problems with the cloud I can think of...
|
|
|
|
|
In hindsight, I should've just called it "redefining what 'on-prem' means".
|
|
|
|
|
|
Sounds funny, but they're definitely talking about something similar. I also wonder if this is really all about testing our solution/architecture abilities. The easiest answer I had was to try HPC on Azure.
|
|
|
|
|
Just because you're doing graphics processing, why the need for so much storage?
Sounds like somebody got suckered by the sales goons, or somebody is trying to impress with big number-words.
For instance, city-wide traffic monitoring systems that record plates etc. don't need that much; city facial-recognition systems don't need that much.
1. The figures quoted are just nonsense; throw them away.
2. Get rid of the goons who came up with that crap: clueless, out of their depth, making irrational stuff up.
Then:
3. Be more specific about the nature of the processing, the volume, and the retention period, to get useful recommendations.
-- Without that, it's just other people's guesses.
pestilence [ pes-tl-uh ns ] noun
1. a deadly or virulent epidemic disease. especially bubonic plague.
2. something that is considered harmful, destructive, or evil.
Synonyms: pest, plague, people
|
|
|
|
|
Depends what the system does, though. CCTV, for example, would require the data to be kept, at least for some amount of time. No point in having CCTV if the video isn't there to review when you need it.
Imagine also a city-wide service that takes people's faces and puts a hat on them. Then at any point in time, a citizen can log in and see how good/bad they look in a hat... you've got to store all those images to do that. Silly example, but you see my point.
|
|
|
|
|
Yes, 0.5 petabytes per week seems too high. For example, I once visited the Nuclear Medicine department of a cancer hospital, where they have four PET-CT machines, each spewing out 300 MB of imaging data every 15 minutes or so (call it 5 GB per hour with all four machines running). With the hospital working 12 hours a day, that's 60 GB accumulated from that one department alone in a single day. The CT and MR machines themselves spew out comparable amounts, so we have about 150 GB per day from all departments put together. Taking six days as a week, that's about 1 TB per week.
Much less than 0.5 petabytes per week.
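The arithmetic above can be checked in a few lines. A sketch; the 150 GB/day all-departments figure is the rough estimate from the post, not measured data:

```python
# PET-CT: 4 machines x 300 MB (0.3 GB) every ~15 minutes (4 scans/hour each).
gb_per_hour = 4 * 0.3 * 4              # ~4.8 GB/hour, "call it 5"
gb_per_day_petct = gb_per_hour * 12    # 12-hour working day -> ~58 GB
gb_per_day_total = 150                 # rough total including CT and MR
tb_per_week = gb_per_day_total * 6 / 1000  # 6-day week
print(f"{gb_per_day_petct:.0f} GB/day PET-CT, {tb_per_week:.1f} TB/week total")
```

So the whole hospital lands around 0.9 TB/week, roughly 500 times smaller than the quoted figure.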
|
|
|
|
|
Amarnath S wrote: I had visited the Nuclear Medicine department of a Cancer hospital
Using big scary words doesn't equate to big scary amounts of data. You seem to suggest there should be a correlation between the importance of software and the amount of data it produces. That would probably make YouTube one of the most important bits of software in the universe.
|
|
|
|
|
Bingo.
|
|
|
|
|
0.5 petabytes per week, and you want to process and store it in the cloud?
OK: even on a dedicated 10 gigabit link (which you won't have), uploading 0.5 petabytes takes about 4.6 days of continuous transfer.
Yes, you'd spend roughly two-thirds of every week doing nothing but uploading that week's data.
-- On the fastest link currently available (18.2 Gb/s, in South Korea), it's still about 2.5 days per week.
-- 5G promises 100 Gb/s on paper; in the real world you'll see nothing like that.
Your figures and information are just ridiculous.
Clearly, whoever came up with them is totally clueless.
And you want to put it on the cloud, or the clown???
Don't care that you crossed it out.
Asking people for advice?
Then provide real, proper information, not this bogus crap.
BINGO THAT!
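Worth working the transfer times with actual numbers. A sketch of raw upload time, ignoring protocol overhead and link contention, assuming 0.5 PB = 0.5 × 10^15 bytes:

```python
# Days of continuous upload for 0.5 PB at various link speeds.
data_bits = 0.5e15 * 8  # 0.5 petabytes, in bits
for gbps in (10, 18.2, 100):
    days = data_bits / (gbps * 1e9) / 86400  # 86,400 seconds per day
    print(f"{gbps:>5} Gbit/s -> {days:.1f} days")
```

Even at the cited link speeds, the transfer alone eats a large fraction of each week.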
|
|
|
|
|
I never thought the word "petabyte" could trouble someone so much.
Please ignore the message if it didn't interest you, just like I'm doing with yours now.
And go home and have a chilled beer on me.
Bingo that?
|
|
|
|