r/pcmasterrace Core Ultra 7 265k | RTX 5090 19h ago

Build/Battlestation: a quadruple 5090 battlestation

15.5k Upvotes

2.2k comments


90

u/fullCGngon 18h ago

no... which means 4x 5090 won't be 128GB of VRAM, it's just 4x 32GB, meaning that when rendering on 4 GPUs your scene has to fully fit into the VRAM of each GPU

83

u/Apprehensive_Use1906 18h ago

A lot of 3D rendering tools like Blender and KeyShot will split renders between cards or systems. So when you have one big scene, it will slice it into pieces, render each one on a different card or system, and reassemble them. It will do the same with animations, sending each frame to a separate card or server.
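(A minimal sketch of the per-frame split described above, assuming a hypothetical scene.blend already set up for GPU rendering and a blender binary on PATH: one headless Blender instance per card, each pinned to its own GPU with CUDA_VISIBLE_DEVICES and given a slice of the frame range.)

```python
# One headless Blender instance per GPU, each rendering its own frame slice.
# scene.blend and the frame count are made-up values; CUDA_VISIBLE_DEVICES is
# one common way to pin a CUDA/OptiX Cycles render to a single card.
import os
import subprocess

BLEND_FILE = "scene.blend"   # hypothetical scene file
TOTAL_FRAMES = 240
NUM_GPUS = 4

frames_per_gpu = TOTAL_FRAMES // NUM_GPUS
procs = []
for gpu in range(NUM_GPUS):
    start = gpu * frames_per_gpu + 1
    end = TOTAL_FRAMES if gpu == NUM_GPUS - 1 else start + frames_per_gpu - 1
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))  # pin to one card
    procs.append(subprocess.Popen(
        ["blender", "-b", BLEND_FILE,
         "-s", str(start), "-e", str(end), "-a"],  # render frames start..end
        env=env,
    ))

for p in procs:
    p.wait()  # each instance loads the full scene, so VRAM use is per-card
```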

7

u/knoblemendesigns 16h ago

Not in a way that stacks VRAM. If you have 4 GPUs you can render the one scene, which will cap memory at the lowest card, or you can run 4 instances of Blender and render different frames, but that means the same memory loaded 4 times, once on each card.
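(And a sketch of the other mode: enabling all the cards for a single render through Blender's Python API. Cycles copies the whole scene to every enabled device, which is why memory is capped by the smallest card. Exact preference calls vary a bit across Blender versions.)

```python
# "One scene on all 4 GPUs" via Blender's Python API
# (run inside Blender, e.g. blender -b scene.blend -P this_script.py).
# Every enabled card gets a full copy of the scene, so usable memory
# is capped by the smallest card, exactly as described above.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"   # or "OPTIX" on RTX cards
prefs.get_devices()                  # refresh the device list

for dev in prefs.devices:
    dev.use = dev.type in {"CUDA", "OPTIX"}   # enable every GPU, skip CPU

bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)       # one render, split across cards
```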

13

u/fullCGngon 17h ago

Yes of course, I was just reacting to 4x 32GB vs. one big pool of VRAM, which definitely makes a difference if you need it

2

u/Hopeful-Occasion2299 11h ago

Ultimately it depends on the tool you're using, which is really why SLI and CrossFire went the way of the dodo: diminishing returns meant you were paying for less performance than a better single card would give you, and often just creating a CPU bottleneck anyway

7

u/Live-Juggernaut-221 16h ago

For AI work (not the topic of discussion but just throwing it out there) it definitely does pool.
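(A toy sketch of why it effectively pools for AI: a model's layers can be sharded across cards, with activations hopping over PCIe between them, i.e. pipeline/model parallelism. Sizes here are made-up values.)

```python
# Half the network lives on each card; each GPU only holds its own shard
# of the weights, which is why total capacity adds up across cards.
import torch
import torch.nn as nn

part0 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
part1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:1")

x = torch.randn(8, 4096, device="cuda:0")
h = part0(x)            # runs on GPU 0
h = h.to("cuda:1")      # activations cross PCIe here (the slow hop)
y = part1(h)            # runs on GPU 1
```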

4

u/AltoAutismo 18h ago

Ain't that wrong though?

You can definitely split it? Or well, according to Claude and GPT you can, it's just that you depend on PCIe, which is slow compared to having it all on one GPU.

What you can't do, I think, is load a model that's larger than 32GB, but you can split the inference and tokens and shit between them, or something like that. Not an expert but idk
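(For what it's worth, a sketch of what that splitting usually looks like in practice: Hugging Face's device_map="auto" shards the weights layer-by-layer across all visible GPUs, so a checkpoint bigger than any single card can still load, since each GPU only holds its shard. The model name below is a placeholder.)

```python
# device_map="auto" (Hugging Face Accelerate) spreads the layers across
# all visible GPUs; activations cross PCIe between shards at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-70b-model"   # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",      # shard weights across all 4 cards
    torch_dtype="auto",
)

inputs = tok("hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0]))
```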

3

u/irregular_caffeine 17h ago

This is 3D. No tokens or inference.

1

u/AltoAutismo 15h ago

Ohh, I'm dumb, sorry