Cyberpunk 2077 on 5950X and 6900XT

Hi all!

I'm not sure if this has been asked here before or not. Please forgive me if it has been asked and answered a zillion times, and just kindly link me to the current working solution.

So, I've just downloaded Cyberpunk 2077 on GOG Galaxy today, with the latest patch and all. I decided not to have a go at it on launch day after hearing about the bad performance everywhere, and I didn't really have any time to spare to play at launch anyway. But I figured that after a few days and a few patches, the game would at least be somewhat playable.

Anyway, I have the latest available driver for the 6900XT, version 20.12.1 according to Radeon Software, and the latest chipset driver for my X570 motherboard, version 2.10.13.408 according to AMD's website.

I use a 1080p monitor but tested a range of resolutions (720p, 1080p, 1440p, 2160p, and 2880p) using the Virtual Super Resolution feature, with FidelityFX CAS dynamic resolution scaling set to off, so the GPU actually rendered the full virtual resolution of my choice. I also tested each graphics preset (Low, Medium, High, Ultra).

Across all resolutions and graphics presets, I only get 25-35 fps. There are no RT features enabled for AMD cards yet, so this is with RT off. This alone is unusual behavior, as performance basically doesn't scale with resolution or graphics quality at all.

Across all resolutions and presets, the only thing that changes is my GPU usage: the lower the resolution/preset, the lower the GPU usage, and the higher the resolution/preset, the higher the GPU usage, but it is never pinned at 100% no matter what. GPU usage is around 10-15% at the 720p Low preset and goes as high as 65% at the 2880p Ultra preset, yet I never see more than 25-35 fps no matter what I do. It is as if my CPU is artificially capped at processing 25-35 frames per second and no more, which then causes the limited/low GPU usage.
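
To make the pattern concrete, here's a tiny Python sketch of the sanity check I'm applying; the numbers are rough values from my runs above, not exact measurements:

    # Rough (fps, GPU usage %) pairs from my testing above -- illustrative only.
    observations = {
        "720p Low":    (32, 12),
        "1080p High":  (30, 40),
        "2880p Ultra": (28, 65),
    }

    fps_values = [fps for fps, _ in observations.values()]
    gpu_values = [gpu for _, gpu in observations.values()]

    # Heuristic: if fps barely moves across wildly different GPU loads and the
    # GPU is never saturated, the frame rate is capped on the CPU/engine side.
    if max(fps_values) - min(fps_values) < 10 and max(gpu_values) < 95:
        print("Looks CPU/engine-bound: fps is capped while the GPU sits idle")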

I'm a bit lost at the moment: posting in several threads and subreddits on Reddit and on the GOG forum, and contacting GOG and CDPR support directly, has gotten me nowhere. I got little to no reply, but when I did get suggestions, I tried them all to no avail. Just to list them, so far I have tried the following fixes:
- Manually HEX editing the EXE with HxD
SOURCE: https://www.reddit.com/r/Amd/comments/kbuswu
- Manually changing the PC entries in the memory_pool_budgets.csv file. I tried setting them to half of my RAM capacity and to my entire GPU VRAM, and I also tried putting the value 0 for dynamic allocation. (A rough example of this edit is sketched just after this list.)
SOURCE: https://www.reddit.com/r/cyberpunkgame/comments/kccabx
- Performance overhaul patch from GitHub
SOURCE: https://github.com/yamashi/PerformanceOverhaulCyberpunk
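
For reference, the memory_pool_budgets.csv edit from the second item above looked roughly like this on my system (32GB RAM, 16GB VRAM). I'm reconstructing the rows from memory, so the exact row names, spacing, and console columns may differ by game version:

    --            PC        Durango     Orbis
    PoolCPU       16GB      1536MB      1536MB
    PoolGPU       16GB      3072MB      3072MB

(16GB in the PC column is just the value I tried; the stock entries there were much smaller.)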

Does anybody have any idea what else to do in this case or how to try to fix it?

That is, aside from waiting for CDPR to patch it or for AMD to release new drivers of course...

Any help would be greatly appreciated! Many thanks!

#Hardware:
* CPU: Ryzen 9 5950X 16C/32T
* GPU: AMD Radeon RX 6900XT 16GB
* RAM: 32GB Corsair Dominator Platinum 2800MHz CL16
* MOBO: ASUS ROG Crosshair VIII Impact
* STORAGE: Samsung 970 EVO Plus 1TB, Corsair Force MP510 1.92TB, Seagate Barracuda 5TB 2.5" HDD (Game is on an external HDD though)
* RESOLUTION (+ TARGET FPS): 1440p, 90fps
* PLATFORM: GOG
 
I have the same hardware as you: CPU 5950X and RX 6900 XT Red Devil. 32GB of 3500Ghz RAM.

The first time I tried the game, back in 2020, the frame rate was terrible, 25-30fps.

Now, I am getting a regular 77fps average on 5120 x 1440 at 240 Hz refresh, everything maxed out and no overclock.

Ask if you want to know my GPU settings; that's the only thing I've played with. I don't like HDR in this game, so that's the only thing I have turned off.
 
32GB of 3500Ghz RAM.
That's an interesting RAM Kit...

Now, I am getting a regular 77fps average on 5120 x 1440 at 240 Hz refresh, everything maxed out and no overclock.
I highly doubt that. Nothing against a 6900XT, but Cyberpunk with everything maxed out, RT included, and without any kind of image scaling (DLSS, FSR) will destroy this GPU. Heck, even a 3090 can't hold roughly 70fps in this game at 5120x1440 (about 88% of 4K's pixel count). (I do know that the 6900XT and the 3090 are comparable, but the 3090 has superior RT performance, which is what causes the most significant impact in Cyberpunk 2077.)

Also, RT maxed out would mean Psycho lighting, which is another topic entirely.
 

DC9V

Forum veteran
Did you try this?
  • Uninstall ASUS AI SUITE and make sure that it's actually uninstalled
  • Check CPU POWERPLAN in Windows settings and set it to HIGH PERFORMANCE (see the command-line version just below)
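
If the Settings app doesn't stick, the same change can be made from an elevated command prompt; the long GUID below is Windows' built-in High performance scheme:

    powercfg /getactivescheme
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

The first command just prints the active plan, so you can confirm the change took.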
 
I highly doubt that. Nothing against a 6900XT, but Cyberpunk with everything maxed out, RT included, and without any kind of image scaling (DLSS, FSR) will destroy this GPU.
Can confirm: CP2077 at 4K maxed without DLSS = 25 fps on a 3090 :D
 
For AMD Ryzen 3rd gen, doesn't the Infinity Fabric max out at 1900MHz? So you probably want 3800 RAM, not 2800; that can have a big effect on AMD CPU performance.

And put the game on one of your internal SSDs; the bottleneck from the external HDD will be horrible.
 

DC9V

Forum veteran
For AMD Ryzen 3rd gen, doesn't the Infinity Fabric max out at 1900MHz? So you probably want 3800 RAM, not 2800; that can have a big effect on AMD CPU performance.
I think it depends on the motherboard. Edit: apparently it depends on the CPU; some can do more than 1900MHz, while some are only stable at 1800MHz. But I agree: IMO, anything slower than DDR4-3200 CL16 doesn't make much sense in combination with a 5950X. (Edit: nowadays 3200 CL16 is unlikely to be Samsung B-die, so you actually want to aim for CL14.) But I think FCLK 1800MHz combined with DDR4-3600 is still the way to go, after having watched some recent benchmarks. DDR4 is much cheaper than DDR5.
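
To spell out the arithmetic behind the 1:1 ratio: DDR4 transfers twice per clock, so DDR4-3600 means an actual memory clock of 3600 / 2 = 1800MHz, and running FCLK at that same 1800MHz keeps the Infinity Fabric synchronised with memory. By the same math, the OP's 2800MT/s kit puts the memory clock at only 1400MHz.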

Anyway, since the OP is already digging into fine-tuning, here are my settings. I think it's a good mix between relatively low temps and great single-core performance. (Disclaimer: please only use these settings with a sufficient cooler, such as the be quiet! Dark Rock Pro 4, in order to avoid damaging the CPU. I cannot ensure that these settings are safe.)
  • PBO_ENABLED
  • SMT_OFF
  • XMP enabled, everything else on auto, except:
  • PBO Curve optimizer: Per core offset: Negative 12 on all cores, except first core is Negative 13
  • Socket power limits: PPT: 367 ; TDC: 195 ; EDC: 195
  • (For safer settings: Default power limits ; Offset: all cores, Negative 10.)
3Dmark score:
Time Spy CPU score: 15275
CPU profile, 16 threads: 12907
 
While those settings work for you, I dare say they probably won't increase performance drastically. Sure, it is always nice to optimise one's PC and also undervolt both CPU and GPU.

However, I strongly advise against copying the values here, because his CPU might be different from yours, and if you want to get into PBO and Curve Optimiser settings, you should also read up on them.
 
But I think FCLK 1800MHz combined with DDR4-3600 is still the way to go, after having watched some recent benchmarks.
AMD says the 5950X caps out at 3200MHz for memory, so it's probably best to stick no higher than that.

Power tweaking is always on a per-system basis; it has to be done for that particular system. And there are more obvious problems here, like the game sitting on external storage.
 

DC9V

Forum veteran
While those settings work for you, I dare say they probably won't increase performance drastically.
Yeah, it's probably just 1-2% more performance.
AMD says the 5950X caps out at 3200MHz for memory, so it's probably best to stick no higher than that.
Uhm, no they don't? At launch, performance maxed out at FCLK 1900...
It runs at 1800MHz, no problem:
[Attached screenshots: cpu-z.png, 1639301015401.png]


Anyway, I assume that OP's problem is ASUS AI Suite. Sometimes it overrides the CPU power plan or deactivates fans. It has been a problem for 7 years now. It's also very difficult to uninstall...
 