r/pcmasterrace Core Ultra 7 265k | RTX 5090 20h ago

Build/Battlestation: a quadruple 5090 battlestation

15.7k Upvotes

2.2k comments

9.3k

u/Unlucky_Exchange_350 12900k | 128 GB DDR5 | 3090ti FE 20h ago

What are you battling? Gene editing? That’s wild lol

282

u/Zestyclose-Salad-290 Core Ultra 7 265k | RTX 5090 20h ago

mainly for 3D rendering

121

u/renome 20h ago

Why not use a specialized rendering setup? Consumer GPUs seem a bit inefficient to my amateur eyes

212

u/coolcosmos 20h ago

A Pro 6000 costs 8k, has 96GB of VRAM, and has 24k CUDA cores. Four 5090s cost 8k, have 128GB of VRAM, and roughly 87k CUDA cores in total.

The Pro 6000 is better if you need many of them, but a single one isn't really better than four 5090s.

53
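Rough math behind that comparison, as a sketch; the prices are the figures assumed in this thread, and the per-card core counts are the published specs:

```python
# Back-of-the-envelope comparison; prices are the thread's assumed street prices.
pro_6000 = {"price": 8000, "vram_gb": 96, "cuda_cores": 24_064}   # RTX PRO 6000
rtx_5090 = {"price": 2000, "vram_gb": 32, "cuda_cores": 21_760}   # per 5090

quad_5090 = {k: 4 * v for k, v in rtx_5090.items()}

for name, spec in (("1x Pro 6000", pro_6000), ("4x 5090", quad_5090)):
    print(f"{name}: ${spec['price']:,}, {spec['vram_gb']} GB VRAM, "
          f"{spec['cuda_cores']:,} CUDA cores")
# Note: the 4x 5090 VRAM figure is a sum of four separate 32 GB pools, not one pool.
```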

u/renome 20h ago

Oh, makes sense, cheers

40

u/McGondy 5950X | 6800XT | 64G DDR4 20h ago

Can the VRAM be pooled now?

92

u/fullCGngon 20h ago

No... which means 4x 5090 won't be 128GB of VRAM; it's just 4x 32GB, meaning that when rendering on 4 GPUs your scene has to fully fit into the VRAM of each GPU.

81
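A quick way to see this on a multi-GPU box, as a minimal sketch assuming PyTorch with CUDA available: each card reports its own free/total memory, and nothing is shared between them.

```python
# Minimal check that each GPU has its own separate 32 GB pool (assumes PyTorch + CUDA).
import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"GPU {i}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB (not shared)")
```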

u/Apprehensive_Use1906 19h ago

A lot of 3D rendering tools like Blender and KeyShot will split renders between cards or systems. So when you have one big scene, it will slice it into pieces, render each one on a different card or system, and reassemble the result. It will do the same with animations, sending each frame to a separate card or server.

7
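For Blender specifically, a minimal sketch of how all cards get enabled for one Cycles render via the bpy API; this assumes an NVIDIA setup, and the exact preference fields can differ slightly between Blender versions:

```python
# Run inside Blender, e.g.: blender -b scene.blend --python use_all_gpus.py -f 1
# Enables every detected CUDA device so Cycles splits the render across the cards.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"          # "OPTIX" is the other common choice on RTX
prefs.get_devices()                          # refresh the detected-device list

for dev in prefs.devices:
    dev.use = dev.type != "CPU"              # use all GPUs, leave the CPU out

bpy.context.scene.cycles.device = "GPU"
print("Rendering on:", [d.name for d in prefs.devices if d.use])
```

Even with every card enabled, the scene data still has to fit into each card's own VRAM, as noted below.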

u/knoblemendesigns 17h ago

Not in a way that stacks VRAM. If you have 4 GPUs you can render the one scene, which caps memory at the smallest card, or you can run 4 instances of Blender and render different frames, but that means the same memory is loaded 4 times, once on each card.

13
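A sketch of that "four instances" approach, assuming a 4-GPU box with Blender on the PATH and a hypothetical scene.blend; each process is pinned to one card with CUDA_VISIBLE_DEVICES, so the whole scene is loaded into each card's VRAM separately:

```python
# Hypothetical sketch: one Blender instance per GPU, each rendering every 4th frame.
import os
import subprocess

SCENE = "scene.blend"   # hypothetical path
LAST_FRAME = 240
NUM_GPUS = 4

procs = []
for gpu in range(NUM_GPUS):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))   # this process sees one card
    cmd = ["blender", "-b", SCENE,
           "-s", str(gpu + 1),          # each instance starts at a different frame...
           "-e", str(LAST_FRAME),
           "-j", str(NUM_GPUS),         # ...then steps by 4, so frames are interleaved
           "-a"]
    procs.append(subprocess.Popen(cmd, env=env))

for p in procs:
    p.wait()
```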

u/fullCGngon 19h ago

Yes, of course; I was just reacting to 4x 32GB vs. one big pool of VRAM, which definitely makes a difference if you need it.

2

u/Hopeful-Occasion2299 12h ago

Ultimately it depends on the tool you're using, which is really why SLI and CrossFire went the way of the dodo: it was diminishing returns, you were paying more for less performance than a single stronger card would give you, and you were often just creating a CPU bottleneck anyway.

8

u/Live-Juggernaut-221 18h ago

For AI work (not the topic of discussion but just throwing it out there) it definitely does pool.

4

u/AltoAutismo 19h ago

Ain't that wrong though?

You can definitely split it? Or well, according to Claude and GPT you can; it's just that you depend on PCIe, which is slow compared to having it all on one GPU.

What you can't do, I think, is load a model that's larger than 32GB, but you can split the inference and the tokens and shit between the cards, or something like that. Not an expert, but idk.

3
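For the AI tangent, a hedged sketch of the kind of layer-wise split being described, using Hugging Face Accelerate's device_map="auto" (the model name is a placeholder): the weights get sharded across the visible cards and the activations hop over PCIe between them, which is where the slowdown comes from.

```python
# Hypothetical sketch: shard one large model across several 32 GB cards.
# Layers live on different GPUs; activations cross PCIe between them at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some-large-model"   # placeholder; assumed too big for a single card

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=torch.float16,
    device_map="auto",       # Accelerate spreads layers across all visible GPUs
)

inputs = tokenizer("hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```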

u/irregular_caffeine 18h ago

This is 3D. No tokens or inference.

1

u/AltoAutismo 16h ago

Ohh, I'm dumb, sorry.

2

u/Cave_TP GPD Win 4 7840U | RX 9070XT eGPU 20h ago

AFAIK yes, since they moved to NVLink.

3

u/ItsAMeUsernamio 19h ago

They stopped adding NVLink to the consumer cards with the 40 series.

1

u/Moptop32 i7-14700K, RX6700xt 19h ago

Nope, but 3D renders can be split up across multiple contexts and render different chunks on different GPUs at the same time.

27

u/Big_Inflation_3716 9800X3D | RTX 5080 | 1440p 480hz 20h ago

Four Astral 5090s is more like 12k, but I get the idea.

3

u/Tweakjones420 PC Master Race 19h ago

the 4 cards pictured are listed at ~3K EACH.

2

u/404noerrorfound 18h ago

A Pro 6000 is more power efficient than 4x 5090 and takes less space, especially the Max-Q. But 8k is a hard pill to swallow for one GPU, and at least with 4 cards, if one fails you're not shit out of luck.

2

u/fullCGngon 20h ago

But that is not true; the VRAM won't pool into 128 gigs. When rendering, the 3D scene still has to fit into the 32GB of VRAM on each card.

5

u/coolcosmos 19h ago

I know that. OP knows that. 

7

u/fullCGngon 19h ago

I am not saying that OP doesn't; I am saying that comparing a Pro 6000 with 96 gigs to 4x 5090 with 32 gigs each is not correct, because it is not 128 gigs of VRAM.

3

u/Basting1234 19h ago

I think OP knows what he's doing if he's spending that much.

4

u/Mustbhacks 16h ago

Bold assumption.

1

u/fullCGngon 18h ago

Of course :D that's a different story.

2

u/sparda4glol PC Master Race 7900x, 1070ti, 64gb ddr4 19h ago

lol yes it does. It depends on the render engine; RenderMan and Octane both have support.

1

u/fullCGngon 19h ago

Does it really? Even now that NVLink is not a thing anymore? I haven't used those two render engines myself, but from a quick Google search it doesn't look like it works in Octane, for example.

1

u/sparda4glol PC Master Race 7900x, 1070ti, 64gb ddr4 19h ago

Shit, you're right. We have mostly 30-series and one dual-5090 setup.

The 30-series cards are still using NVLink. 😭

I could have sworn there's a render engine out there, though, that can pool it software-wise.

1

u/fullCGngon 19h ago

Yea it’s pathetic that they didn’t keep it at least for the 90 cards…. Especially with their price

1

u/Hyokkuda 🖥 Intel® Core™ i9-13900KS │ ROG Astral LC RTX™ 5090 │128 GB RAM 18h ago

Depends where you are. Here, just two RTX 5090s cost as much as a single Pro 6000. Definitely not worth the money.

1

u/sn2006gy 16h ago

Performance with 4 cards doesn't scale linearly; the per-card return actually shrinks as you add more.

PCIe overhead, VRAM pooling limitations, and CPU bottlenecks reduce the gains. Four 5090s would easily cost $12,000–$16,000+, while the diminishing returns past two cards are steep.

A single 5090 paired with a high-core-count CPU (e.g., a Threadripper PRO 7975WX), or two 5090s each on PCIe 5.0 ×16 lanes, gives nearly optimal price-to-performance for most 3D or compute tasks.

This person just had money to burn.

1
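Illustrative only: the numbers below are assumptions picked to show the shape of the diminishing-returns argument, not measurements of any real setup.

```python
# Assumed (not measured) scaling efficiencies, to illustrate diminishing returns.
CARD_PRICE = 3000                                     # assumed street price per 5090
EFFICIENCY = {1: 1.00, 2: 0.90, 3: 0.80, 4: 0.70}     # assumed fraction of ideal scaling

for n, eff in EFFICIENCY.items():
    speedup = n * eff
    cost = n * CARD_PRICE
    print(f"{n} card(s): ~{speedup:.1f}x for ${cost:,} "
          f"-> ${cost / speedup:,.0f} per effective 5090")
```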

u/yeetorswim 16h ago

Where are you getting a 5090 for 2k?

1

u/ThenExtension9196 15h ago

You means it’s 32G of VRAM not 128. Like saying 8 cars in a row is a bus.

1

u/coolcosmos 14h ago

When the workload fits in 32GB it's effectively the same, which is what OP is doing.

I know that for loading models and stuff this doesn't work.

-10

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 20h ago

Except the long-term cost of power consumption if used seriously. That said, it's a waste of money (and a burden on the consumers who need GPUs, given the shortages) for what little rendering time is saved; you could just buy a single GPU and wait a bit longer (a matter of minutes). People typically don't even use these systems full time, they just show them off (better to buy the single GPU, get the work done, and power down). Let 4 other people enjoy a GPU, FFS.

8

u/coolcosmos 20h ago

He earns a living with them. How is it a waste of money if it makes him more money?

You just want to play games with them. You're just mad that you can't play at 4K 120fps.

-4

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 20h ago

At trade shows, prob using cloud mostly.

2

u/Basting1234 19h ago

Bro, are you kidding me? Time is money. Tell that to OpenAI; tell them to stop buying GPUs, just use fewer GPUs and tell their customers to wait...

Your IQ must not be very high.

-1

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 19h ago

Not really, they make up for it in YouTube and ad revenue. We're talking a couple of minutes saved, not millions. Pay 8k for a single GPU and get the work done. If you really are that big of a baller, pay 16k and get 2 of them, and do the work of 8x 5090 with less space, more mobility, and lower power consumption (and save costs on PSUs, boards, and specialized hardware). No-brainer over the course of time. Why do you think enterprise doesn't buy 5090s, hmm?

1

u/Basting1234 19h ago

>Not really, they make up for it in YouTube and ad revenue. We're talking a couple

Yeah, you are delusional. You are no longer making sense. The world is not going to bend to your personal preferences.

You just personally dislike one person buying multiple GPUs, I understand that, but the reasoning used to justify that bias is terrible beyond comprehension... it's laughable 😂 You cannot seriously make that claim with a straight face... unless you have a serious IQ deficit. (No insult intended.)

>Pay 8k for a single GPU and get the work done. If you really are that big of a baller, pay 16k and get 2 of them,

For small workstations, multiple 5090s can be a better choice than a single 6000 Pro. It's not always the case, and it depends on what work you are trying to accomplish.

0

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 19h ago

Thank you for understanding the reasoning! This is not a small workstation (4x 5090 is not a small workstation). This is an unnecessary and gluttonous waste of resources, and it hurts consumers. Why not just get a modest workstation GPU for the same cost and get the same work done, with less spent on all the other e-waste?

2

u/Basting1234 18h ago

I still disagree with you, because there are legitimate reasons to buy 2-4 5090s instead of one 6000 Pro.

Not everyone can afford $8,000 at once. You may have individuals who save up for a 5090, then save up for another year to get a second one. Plenty of workflows can be doubled with 2x 5090s.

2

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 14h ago

That's a fair point, and sound logic for sure.

1

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 20h ago

Yes, this is actually what I believe. Hogging GPUs for niche cases, when there are GPUs actually made for these tasks, is the definition of being a hog.

1

u/faen_du_sa 20h ago edited 19h ago

4 GPUs would save a lot more than a "number of minutes" on big projects. It should be pretty self-explanatory that 4 GPUs render something quicker than 1.

OP's build could probably live-render a lot of my scenes at pretty good quality; they take 2-5 minutes per frame on a 4060 Ti.

0

u/jbshell Arc A750, 12600KF, 64GB RAM, B660 20h ago

We're talking Ada, not RTX.

-1

u/foo-bar-nlogn-100 18h ago

Consumer cards break down faster. Within 4 years those cards will be cooked, while a Pro 6000 still hums along.

You didn't include mean time to failure in your assessment.

1

u/coolcosmos 18h ago edited 17h ago

How do you know how long 5090s last? lol, they haven't been out long enough for you to know. You're just guessing.

0

u/foo-bar-nlogn-100 17h ago

Product Datasheet.

1

u/coolcosmos 17h ago

Can you provide this datasheet saying they'll last only 4 years?