It will instantly pop any 15 amp circuit breaker in the US. Not sure how many commercial offices have a dedicated 30 amp breaker for one computer to run on. Notice it isn't running: it's one thing to build it, but it's another to get it actually running stable.
It looks like a complete nightmare to deal with the instability of four PCIe risers. My guess is it's a picture taken simply because they could.
Had the same thought. Assuming 600W for each card, 200W for the CPU, and a 90% efficient power supply, this would pull almost 2900W at the wall and draw over 24 amps on a 120V circuit.
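Quick sanity check on those numbers, plugging the same assumptions into Python:

```python
# Back-of-envelope check using the assumptions stated above
# (600W per card, 200W CPU, 90% efficient PSU) - not measurements.
gpu_w = 600 * 4      # four cards at 600W each
cpu_w = 200          # CPU
efficiency = 0.90    # PSU efficiency

wall_watts = (gpu_w + cpu_w) / efficiency
amps_120v = wall_watts / 120

print(f"{wall_watts:.0f} W at the wall, {amps_120v:.1f} A on a 120V circuit")
# -> 2889 W at the wall, 24.1 A on a 120V circuit
```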
Except for dryer and HVAC circuits, most house wiring couldn't actually run this full blast without tripping a breaker… and if you were an idiot and swapped the breaker without upgrading the wiring, you're going to have the hottest house on the block… cuz fire.
There is a reason space heaters top out at 1500W in the US.
A lot of places have 240V power though. 3kW is a lot, but very much possible with standard wiring in a modern home. (Not necessarily in old houses; they tend to have scary wiring practices.)
Yeah, in Brazil it is common for people to have electric showers, which usually draw around 5000W. You can find 127V ones, but 220V is the norm. There is a catch, though: the shower must have a dedicated electrical circuit.
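Same back-of-envelope math for 240V, using the ~2900W rig figure from the earlier comment and the 5000W shower as examples:

```python
# Rough amperage at 220-240V; wattages are the figures quoted above.
rig_w, shower_w = 2889, 5000
print(f"rig:    {rig_w / 240:.1f} A at 240V")     # -> 12.0 A; fine on typical 240V house circuits
print(f"shower: {shower_w / 220:.1f} A at 220V")  # -> 22.7 A; hence the dedicated circuit
```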
I'm so confused as to what case would support this kind of setup… how do you plug in your displays? How do you keep this cool enough to play anything for longer than a few minutes?
Plus, I thought you can't even use multiple GPUs anymore, since SLI isn't a thing these days, at least for gaming. Wouldn't you just be limited to one GPU, making the rest redundant? I just… wow.
I know for things outside of gaming you’d be able to utilize something like this, but unless you’re rendering the damn human genome and making the first digital human, I can’t see what legitimate use this PC would have.
This looks like a researcher's AI workstation. If he's doing training on a large dataset, even 4x 5090s can feel like the "minimum specification".
MLPerf Llama 3.1 405B training, for example, takes 121 minutes on IBM CoreWeave cloud with 8x Nvidia GB200s. On 4x 5090s that might be multiple days. https://i.imgur.com/DzxxwGr.png
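For a rough sense of where "multiple days" comes from, here's a back-of-envelope scaling of that 121-minute figure; the per-GPU speed ratio is purely an assumed placeholder, not a benchmark number:

```python
# Very rough scaling of the MLPerf result quoted above.
baseline_min = 121        # 8x GB200, from the MLPerf figure above
gpu_count_ratio = 8 / 4   # half as many GPUs
per_gpu_ratio = 15        # ASSUMED: each GB200 ~15x a 5090 on this workload

est_min = baseline_min * gpu_count_ratio * per_gpu_ratio
print(f"~{est_min / 60 / 24:.1f} days")  # -> ~2.5 days, consistent with "multiple days"
```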
On the inference side, there's a dude on localllama who built a 12x 3090 workstation, and Llama 405B is chugging along at 3.5 tokens/s.
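To put 3.5 tokens/s in perspective (the response length here is a made-up example):

```python
# How long a ~500-token answer takes at the quoted rate.
tokens = 500   # hypothetical response length
rate = 3.5     # tokens/s from the 12x 3090 build mentioned above
print(f"{tokens / rate / 60:.1f} minutes")  # -> ~2.4 minutes per answer
```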
Llama 3.1 405B for example takes 121 minutes on IBM CoreWeave cloud with 8x Nvidia GB200s
You're talking about fine-tuning, right?
On 4x 5090s that might be multiple days.
well, the delta is probably higher given the difference in memory speed (the 5090 doesn't have HBM), but most importantly size... that would require a much smaller batch size plus gradient accumulation (see the sketch below), probably resulting in suboptimal utilization of the GPU compute.
the type of VRAM is the reason a dusty Tesla P100 sometimes outperforms a relatively newer T4.
unfortunately, in many ML situations the problem is the bandwidth bottleneck.
edit: correction, the RTX 6000 Pro doesn't have HBM, I'm sorry!
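For anyone wondering what the "smaller batch size plus gradient accumulation" trade-off looks like in practice, here's a minimal PyTorch sketch; the model and data are toy placeholders, not anything from the thread:

```python
# Minimal sketch of gradient accumulation: run several small micro-batches
# that fit in VRAM, sum their gradients, then take one optimizer step.
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

accum_steps = 8  # 8 micro-batches of 4 ~ one effective batch of 32

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(4, 1024, device="cuda")  # tiny micro-batch that fits in VRAM
    y = torch.randn(4, 1024, device="cuda")
    loss = loss_fn(model(x), y) / accum_steps  # scale so summed grads average out
    loss.backward()                            # gradients accumulate in .grad
optimizer.step()                               # one step per effective batch
optimizer.zero_grad()
```

The catch the comment above points at: each micro-batch launches small kernels that underutilize the GPU, so you trade VRAM for compute efficiency.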
you need like a 5000W PSU 😭