CDPR will never implement it and Nvidia will never make any driver profile for it. SLI is dead, that's why!

I am glad SLI is dead (it was inevitable and only a matter of time)!
We may be a minority, but we have major dreams and hopes.

Much needed feature? It's a feature that 99% of the people playing the game won't be able to take advantage of if it were implemented. I seriously doubt they would waste their time re-engineering the game to add something almost nobody will use when they clearly have much more important and useful things to be doing with their time and skill.
As a secondary note, your statement that 'almost everyone' has a spare GPU powerful enough to be useful just lying around is ridiculous. First of all, a sizeable chunk of people will sell an old graphics card if they buy a new one, and I'd say the majority of people will upgrade at a point where their old card would be pretty much useless as a second one. Ignoring that, not everyone even has a motherboard capable of holding multiple graphics cards. I sure don't, and my computer is half decent.
You seem to be mistaking your personal experience for that of the majority of people. I assure you, your issues are far from the issues of the majority.
Just tossing out there again that I don't think it's likely the game will receive any official multi-GPU support. I have no idea for sure, but games that require a lot of CPU-intensive processes don't make great candidates for multi-GPU. The reason is that lots of these types of games require specific frame timing for various functions. Multi-GPU, by its very nature, tends to "share" frames between the GPUs, meaning there's no one frame a game can make a call to. This is less of an issue with games that are GPU-intensive: action games, shooters, non-simulation racers, exploration, adventure, etc. It won't matter to those games whatsoever whether they're running at 60 FPS or 260 FPS -- they can just crank out the frames!
From what I can deduce, CP2077 is both CPU- and GPU-intensive, which is the best and worst of both worlds. Managing multi-GPU support would probably require game processes to be completely rewritten to manage data coming from specific multi-GPU hardware using a specific approach. Not only does the hardware vary greatly, but many setups offer numerous different rendering and drawing approaches that users can switch between. Thus... game functions would likely have to be rewritten to account for each different combination of these elements...
...and that's why I'd argue it's extremely unlikely.
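To make the "shared frames" point above concrete, here's a toy model of alternate-frame rendering (AFR), the most common multi-GPU mode, where GPUs take turns rendering whole frames. All names here are illustrative -- this is not any real graphics API, just a sketch of why frame ownership gets complicated:

```python
# Toy model of alternate-frame rendering (AFR): with N GPUs, frame i
# is rendered by GPU (i mod N). Purely illustrative, not a real API.

def assign_frames_afr(num_frames, num_gpus):
    """Return which GPU renders each frame under AFR."""
    return [frame % num_gpus for frame in range(num_frames)]

schedule = assign_frames_afr(8, 2)
print(schedule)  # [0, 1, 0, 1, 0, 1, 0, 1]

# The catch: if game logic needs to read data back from "the current
# frame" (occlusion queries, CPU-side simulation synced to frame N),
# that frame may live on either GPU, so the engine has to track
# per-frame ownership and synchronize across devices.
print(schedule[5])  # 1 -- frame 5 belongs to the second GPU
```

This is why CPU-heavy games with frame-specific timing are poor AFR candidates: every cross-frame dependency becomes a cross-GPU dependency.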
_______________
On the other side of this coin, there's no reason players can't get ~60 FPS gameplay. My specs are below:
i7 4790K
16 GB G.Skill Ripjaws DDR3 RAM
EVGA GTX 980 Ti, 6 GB GDDR5 VRAM
Samsung EVO 860 SSD
Windows 10
1920x1080, all settings at Ultra (no ray tracing, as it's not supported by the 980 Ti). Frame Limit at 56.
I normally get a pretty steady 45-56 FPS everywhere. Occasional tiny spots dip all the way down into the 20 FPS range, but those spots are very few and far between. (Like the Delamain garage area -- the amount of steam seems to drag the GPU to its knees. I can't imagine this will be as much of an issue for a >1080 or equivalent GPU.)
For people with more powerful cards, try locking frames at either 64 or 72. Not needing to constantly adjust the render/draw rate will likely smooth things out. Keeping it divisible by 8 seems to help sync frame timing with multithreading processes in the CPU. (This approach works for me across a lot of games as well -- not just Cyberpunk.) Since there's no multi-GPU support, hopefully this will help some players get a smoother experience.
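For anyone curious what those caps mean in engine terms, here's the per-frame time budget at each cap. (The "divisible by 8" heuristic is the poster's own observation, not something I can verify -- this just shows the fixed budget a steady cap gives the engine.)

```python
# Frame-time budget at a few frame caps. A steady cap gives the engine
# a fixed per-frame time target instead of a constantly moving one.

def frame_budget_ms(fps_cap):
    """Milliseconds available per frame at a given FPS cap."""
    return 1000.0 / fps_cap

for cap in (56, 60, 64, 72):
    print(f"{cap} FPS cap -> {frame_budget_ms(cap):.2f} ms per frame")
```

For example, a 64 FPS cap means the engine has exactly 15.625 ms to produce each frame; dropping the cap from 72 to 56 buys roughly 4 extra milliseconds per frame for heavy scenes.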
Thank you for your detailed reply. You are completely right about the CPU-intensiveness.
I am gaming on a 5K (resolution) monitor, BUT that's why I bought the freaking $2,000 RTX 3090 (and would happily buy another one for mGPU [DX12 mGPU doesn't need an NVLink connection, so I could even go for a 3080 or 3080 Ti for ~90% of the performance uplift and save some money]).
Just to make it clear: my goal is NOT to buy multiple thousand-dollar GPUs and Corsair AX1600i PSUs ($500) + pay the electricity bill!! just for the sake of it (although PC building / PC master race is somewhat fun).
My aspiration is just to reach higher FPS (higher than the 30 I am getting) WITHOUT having to lower the monitor resolution to a sub-native one or turn graphics settings down to medium or low (not in this game; in every other game, maybe, but in this graphical masterpiece it would be blasphemy).
If Nvidia (and AMD) could improve DLSS so that it gives us the graphical quality of "DLSS OFF" or "DLSS QUALITY" but with the framerate of "DLSS PERFORMANCE", then... I would delete this thread immediately and be on cloud nine. But since Nvidia's strategy is to leave people craving more (the next-gen 4000 series), this probably won't happen.
P.S. The 30 FPS I keep referring to is with "DLSS QUALITY" turned on. DLSS PERFORMANCE looks like crap, at least on my system, with terrible smearing artifacts when ANYTHING moves; DLSS QUALITY has them too, but it's much more acceptable. DLSS OFF gives me 15 FPS (unplayable), so don't anyone think I am not using all the technologies my expensive-as-a-second-hand-car GPU has to offer.
The attached screenshots (which I took on the day the game released) coincidentally describe my relationship with Nvidia perfectly. I love them for pioneering this amazing ray-tracing tech, but at the same time I hate and blame them for the pathetically weak cards and DLSS they are producing (considering the prices).
That's a lot of different concerns, aside from multi-GPU!
1.) There's no reason that you can't get 60 FPS gameplay with that card, with a few caveats (which will remain true across the board for all games):
2.) I'll argue that steady 60+ FPS at >2K is completely out of reach for most games released in the last 5 years or so. Probably a lot of titles from the last 10 years won't be smooth at that insane level of resolution. We're still entering the era of 1440p and 2K becoming standard resolutions -- they're not yet. The vast majority of games on the market in 2020 will still want to default to 1080p. Pushing things beyond what they're really designed for is possible, but not guaranteed to provide consistent or stable results. Realistically, I'd argue that top-of-the-line GPUs right now are capable of handling modern games at 2K, 60 FPS without ray tracing. If we engage ray tracing options or jack resolutions up to 4K+, I think we'll hit 30 FPS really, really fast. (I doubt multi-GPU would provide much improvement -- maybe 40-50 FPS, and players would probably still need to tone down settings to make even that stable.) And that's talking about games that are not specifically designed to operate in 4K (...which is still the vast majority of games and rendering techniques).
- Resolution is the single most impactful thing for FPS. If the hardware is not powerful enough to muscle through the numbers generated by higher resolutions, then you are going to experience performance issues. This is true even with multi-GPU support. Plus, rendering and drawing cost grows steeply with resolution -- roughly in proportion to pixel count, which climbs fast as both dimensions scale. Lowering it a step or two will often drastically improve performance, and raising it will often have an equal and opposite effect.
- A GPU doesn't have total and absolute control of graphical performance -- it simply does the rendering. This can still be throttled by either CPU functions or an engine that's doing a lot of stuff. We can see a lot of examples of this in many different titles, where going from 4K all the way down to 1600x900 may have no effect on graphical performance at all.
- In such situations, multiple GPUs will often have no effect. It's not always possible to "brute-force" through such issues, as "processing power" isn't the deciding factor. In fact, introducing multi-GPU may make performance worse or introduce severe instability, making it even harder for the CPU or game engine to get at what it needs when it needs it.
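A quick back-of-the-envelope on the resolution point above -- raw pixel counts relative to 1080p (the resolutions listed are just common examples):

```python
# Raw shading work scales roughly linearly with pixel count, and pixel
# count grows quadratically as both screen dimensions are scaled up.

resolutions = {
    "1600x900":  (1600, 900),
    "1920x1080": (1920, 1080),
    "2560x1440": (2560, 1440),
    "3840x2160": (3840, 2160),   # "4K"
    "5120x2880": (5120, 2880),   # "5K", as in the monitor discussed above
}

base = 1920 * 1080  # 1080p as the reference point
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:>10,} px = {pixels / base:.2f}x the work of 1080p")
```

The jump from 1080p to 4K is exactly 4x the pixels, and 5K is over 7x -- which is why a card that's comfortable at 1080p can fall to sub-30 FPS at native 5K even before ray tracing enters the picture.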
3.) I'm sooo behind DLSS. I am eagerly awaiting new updates. I bought a really slick monitor this past year without considering just how disparate the scaling would be if I wanted to bust the resolution down to 1080p. Playing many things now in a window... or with big, ol' black borders around my fullscreen space. Being able to fill the drawspace and get 1:1 scaling with virtually no blurring or pixelation would be wonderful. I also think this is a tech that probably would have been better to pursue years and years ago, instead of trying to iterate on heavily taxing AA techniques. But -- it's on its way!
These things, though, are really not going to benefit or suffer much from multi-GPU. For example... if I'm getting smooth FPS at 2K+ native... then there's no reason to activate DLSS at all. Unless my monitor is the size of a wall, 2K/4K resolution is too fine for the human eye to discern individual pixels anyway (given the average 15" - 30" screen). Using it would simply add weird pixel blocks around some edges. It would only be useful if trying to render something like 2K resolution on a huge 60", 8K screen.
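The pixel-density claim above can be sanity-checked with basic arithmetic -- pixels per inch for a few of the screen sizes mentioned (a rough back-of-envelope that ignores viewing distance):

```python
# Pixels-per-inch (PPI) of a display: diagonal pixel count divided by
# diagonal size in inches. Screen sizes below are example values only.
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixel density of a display in pixels per inch."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f'27" 1440p: {ppi(2560, 1440, 27):.0f} PPI')
print(f'27" 5K:    {ppi(5120, 2880, 27):.0f} PPI')
print(f'60" 8K:    {ppi(7680, 4320, 60):.0f} PPI')
```

At the same 27" size, 5K doubles the PPI of 1440p (roughly 218 vs. 109), while a 60" 8K panel lands in between -- which is the intuition behind "pixels too fine to discern on a normal desk monitor, but upscaling still matters on a wall-sized screen."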
THIS is what I am most afraid of. And yes, you figured me out: brute-forcing things is indeed a character trait of mine, from losing weight (100+ kg) by just doing an insane number of push-ups daily, to small things like always using the GCam app on my smartphone with the HDR+ 72 option and a tripod (which basically means the phone takes the photo 72 times and stacks them all on top of each other) just to take the best vacation photos. Yes, it takes forever, but then I have the bragging rights, because the photos rival my wife's DSLR camera.
- A GPU doesn't have total and absolute control of graphical performance -- it simply does the rendering. This can still be throttled by either CPU functions or an engine that's doing a lot of stuff. We can see a lot of examples of this in many different titles, where going from 4K all the way down to 1600x900 may have no effect on graphical performance at all.
- In such situations, multi-GPUs will often have no effect. It's not always possible to "brute-force" through such issues, as "processing power" isn't the deciding factor.
You are very wise.
I think everything on this topic has already been said in this thread.
But nonetheless, please don't delete this thread. Maybe someday, when CDPR are finished repairing all the bugs (which the majority of people [not me] find more important than raw FPS) and the programmers get bored, they may scroll through this forum and consider adding this feature (if Nvidia hasn't released DLSS 3.0 by then). Hope dies last.
P.S. The rest of my system (to see what hardware is NOT enough to make me happy),
besides the Strix RTX 3090:
- R9 3900X (wanting to upgrade to a 5900X, but waiting until it doesn't cost as much as a 5950X)
- Samsung 980 Pro 1TB NVMe (+ other, slower drives, but CP2077 is on this monster)
- 32 GB RAM at 3600 MHz
- Crosshair VIII Hero (doesn't make the game faster, but gives more stability)
- Corsair H1000w (which I will need to upgrade when going mGPU)
- a 3440x1440 monitor and the 5K one