Frame drops since 1.6

Or they are, and they simply don't realize how well it used to run before this update. They are returning players, after all.
 
The XSX version runs at 48-60 fps. All the starting areas, like V's apartment, Kabuki Market, and Tom's Diner, are an fps-drop mess even on a VRR TV. It only looks good on YouTube when people don't turn the camera a lot. I want to play, I want 60 fps, not levitating people, running through cars, and these fps drops all the time.
 
[screenshots]

I have experienced performance issues for a while now, but it's a lot more frequent in 1.6. This is on a 5900X + 32GB RAM + 3060 Ti.

I am undervolting the GPU on the v/f curve and power limiting to 91% TDP. Since this is a 200W card, peak power draw tops out at around 180W. This issue also occurs at completely stock settings, with similar frequency, so I do not believe it is due to an unstable undervolt.

Whenever I go to the menu, inventory, or map, or use mirrors (to go to character creation), my GPU power drops heavily from the expected 180W. I become extremely power limited and my framerate dies.

I can usually fix this temporarily by going to the menu again, or by switching ray tracing off and then on again. It happens both with ray tracing on and off, although much more frequently with it on. The overlay below shows expected behaviour (not power limited). Note this is with RT Lighting = Medium and RT Reflections = On. Framerate is significantly worse after the undocumented changes to lighting in patch 1.5:

[screenshots: GPU overlay showing expected behaviour]

We will pretty much always be power limited on the GPU in this game, but note that power consumption is back up to roughly 90% TDP, where it should be on this card. You can see in the charts when I am idling in the menu or alt-tabbed, because the performance cap reason turns grey.

Going to the menu drops GPU utilization when frame capped. I have a 120 fps frame cap in the game because I don't want to idle in menus/inventory with my GPU sucking 180W to pump out 500+ pointless frames per second, unnecessarily blasting heat into my room:

[screenshot]


I don't think it's transitioning from low to high utilization states properly. It's like the GPU power just gets stuck until you screw around in menus and get it to unstick again.
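In case anyone wants to log this rather than eyeball an overlay, here's a minimal sketch in Python (assuming an NVIDIA card with nvidia-smi on the PATH; the query fields are standard nvidia-smi ones) that polls power draw, SM clock, utilization, and the active throttle-reason bitmask once a second:

```python
# Minimal GPU telemetry logger via nvidia-smi (assumed to be on PATH).
# Watch the power.draw column while entering/exiting menus; the
# throttle-reasons bitmask shows what the driver thinks is capping you.
# Stop with Ctrl+C.
import subprocess
import time

FIELDS = "timestamp,power.draw,clocks.sm,utilization.gpu,clocks_throttle_reasons.active"

while True:
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())
    time.sleep(1)
```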

Massive fps loss is always correlated with a massive GPU power drop. Anyone else observe similar behaviour?
 
Massive fps loss is always correlated with a massive GPU power drop. Anyone else observe similar behaviour?
I think I've been seeing people claim this often, tbh, even since earlier patches. Not sure I've been able to pinpoint any solution though. Hmm, I'm not sure it's the same issue, since most often it also limits clock speeds/usage. Yours seems not to drop even if it pulls lower wattage... Very weird.

Hmm, some questions: playing in windowed mode? If so, what happens if you play in full screen? Using secondary monitors too?
Seems a bit odd that you have to undervolt/power limit the card; if 200W is the limit, it should reach that at 100%, one might think? Does the same thing happen if you don't undervolt?
 
Clocks don't drop because GPU voltage doesn't drop and utilization is still near 100%. I expect the boost clock to be pegged at the absolute highest it can go at 1.1V.

Windowed/Fullscreen makes no difference. No secondary monitor.

I don't think it's odd at all to undervolt/power limit GPUs. The whole point of it is to lower power draw so less heat is produced, which frees up thermal headroom to boost more. That's why this 3060 Ti is able to hold a 1890MHz to 1925MHz boost clock. If you power limit a 200W stock card to 90%, you should expect to be power limited at 180W. You will always be power limited in some form or other in this game.

I have observed this pattern of behaviour over several months at this point, and yes, I have tested it for several days at a time at 100% stock settings. It still happens with about the same frequency, and the same things trigger it: going to menus/map etc. I do not believe this issue is related to the undervolt, and I do not believe this undervolt is unstable in Cyberpunk.
 
I don't think it's odd at all to undervolt/power limit GPUs. The whole point of it is to lower power draw so less heat is produced, which frees up thermal headroom to boost more. That's why this 3060 Ti is able to hold a 1890MHz to 1925MHz boost clock. If you power limit a 200W stock card to 90%, you should expect to be power limited at 180W. You will always be power limited in some form or other in this game.
True, but it can also give you stability issues, just like higher voltage can. That does not seem to be the issue here, though.
I have observed this pattern of behaviour over several months at this point, and yes, I have tested it for several days at a time at 100% stock settings. It still happens with about the same frequency, and the same things trigger it: going to menus/map etc. I do not believe this issue is related to the undervolt, and I do not believe this undervolt is unstable in Cyberpunk.
Are you using Windows 11? There has recently been an issue with some update there; I think it's fixed in the latest NVIDIA driver or GFE experimental build. It kind of sounds like what you said: it fails to go into the high power state.
Clocks don't drop because GPU voltage doesn't drop and utilization is still near 100%. I expect the boost clock to be pegged at the absolute highest it can go at 1.1V.
Yeah, I noticed that too, odd. Nothing but power draw drops; even utilization is pinned at max, but fps is cut in half or more. Never seen this happen, tbh.
 
Oh, I forgot to mention this is on Windows 10, so probably not related to the Windows 11 performance issues. It's the reason for the power drop that I can't explain. I just wondered if the framerate issues other people were experiencing have similar features.
 
I am undervolting the GPU on the v/f curve and power limiting to 91% TDP. Since this is a 200W card, peak power draw tops out at around 180W. This issue also occurs at completely stock settings, with similar frequency, so I do not believe it is due to an unstable undervolt.
Whenever troubleshooting, you need to return your entire system to factory defaults. This power limitation may be causing additional issues with the way your system is trying to split work between the CPU and GPU. This is very likely complicating the issue.


Going to the menu drops GPU utilization when frame capped. I have a 120 fps frame cap in the game because I don't want to idle in menus/inventory with my GPU sucking 180W to pump out 500+ pointless frames per second, unnecessarily blasting heat into my room:
The frame cap is a good idea, but if your performance is normally around 60, cap at or below 60. If your frames are normally around 50, cap at or below 48. Using the in-game Vsync vs. something like RTSS may yield different or better results.
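For the curious, the frametime arithmetic behind that advice is just 1000 ms divided by the cap. A quick illustrative sketch:

```python
# Frame budget per cap: 1000 ms / fps. A cap you can hold at all
# times (e.g. a steady 20.8 ms at 48 fps) paces better than an
# uncapped framerate bouncing between ~17 ms and ~22 ms around 50.
for cap in (60, 50, 48, 45):
    print(f"{cap:>2} fps cap -> {1000 / cap:.1f} ms per frame")
```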


I have experienced performance issues for a while now, but it's a lot more frequent in 1.6. This is on a 5900X + 32GB RAM + 3060 Ti.
We have very similar builds, but yours is higher spec. I'm running a 5600X + 16 GB RAM + 3060. I played last night for over 4 hours without a single hitch or stutter. FPS remained a fairly constant 45 at 1080p, RT on, DLSS Quality, settings at Ultra. Let's try this with matched settings and see how that works. The fastest way to match the graphics settings is to set the Quick Preset to "Ray Tracing Ultra", then go down to DLSS and switch it from "Auto" to "Quality":

[screenshots of the graphics settings]

Now, you may not be satisfied with 1080p, 48 FPS, and you may be able to get Screen Space Reflections Quality and RT Lighting up to Cyberpsycho with a 3060 Ti -- but for now what we're looking for is equal or superior performance, consistently, at these exact settings. A 3060 Ti should not be falling behind a standard 3060. The only possible caveat is that I'm on Windows 11, so that may add some variation. Win11 overall is not working the same for me as Win10 was in some spots. General performance, though, should not be that different. (Should not.)

If you don't see equivalent performance, then we're definitely dealing with a system config issue (Windows, DirectX, drivers, installation, BIOS, etc.). If you do see the same performance, then it's probably down to settings. (One thing I'm wondering is if people are trying to use both DLSS and FidelityFX at the same time. I don't think that works correctly, or is even allowed, in most games.)

Here's a bunch of in-game shots after an hour or so of playtime (FPS in upper left):
[in-game screenshots]


Last note: the FPS regularly dips to 45 when moving, but the performance is absolutely solid.
 
Whenever troubleshooting, you need to return your entire system to factory defaults. This power limitation may be causing additional issues with the way your system is trying to split work between the CPU and GPU. This is very likely complicating the issue.
I've already said multiple times that I've tested stock (extensively) and the same behaviour occurs. It's also not about leaving it on for an hour; it's about the power drop when you enter/exit menus. If I don't enter/exit a menu for an hour, I usually don't see any power drop.
 
I've already said multiple times that I've tested stock (extensively) and the same behaviour occurs. It's also not about leaving it on for an hour; it's about the power drop when you enter/exit menus. If I don't enter/exit a menu for an hour, I usually don't see any power drop.
I'm in and out of menus constantly with no drops. If you're willing to work with me, I can try to test things back and forth. It won't be fast, and it won't be fun, exactly, but since our systems are very close, we may be able to root out something that either corrects the issue on yours or causes it on mine.

Another thing that I had to deal with when I first got this system was a terrible UEFI configuration. Both AMD's Core Performance Boost and ASRock's Base Frequency Boost were set to auto-fire whenever the PC seemed to have power available... and they would compete. My general performance (across the board) was initially very inconsistent: stutters, hitching, and freezing in numerous titles. And heat levels / voltage were concerning, to say the least: 1.52 volts being pulled and the CPU jumping to 90°C while surfing the net. Ridiculous.

It did take about a week's worth of research and some trial and error, but I eventually turned off both of the "Boost" options altogether in the UEFI, while leaving the XMP settings enabled, as I was seeing no issues that smelled like RAM. Smooth sailing ever since: no hitching or stuttering at all, in any title. Temps stay below 75°C pretty much, even under load, on both CPU and GPU. Most games still run at a solid 120-144 FPS. You may want to look at that in particular.
 
Yeah, I saw some odd CPU settings in my auto BIOS too. Not that bad, but it was very, very power hungry and liked to ramp up voltage. With the CTR software I could get it to run 4.4 GHz all-core (3900X) stable while pulling less power, at higher power response settings (in everything except this game, for some reason?). It took a lot of searching before I found that.
 
Yeah, I saw some odd CPU settings in my auto BIOS too. Not that bad, but it was very, very power hungry and liked to ramp up voltage. With the CTR software I could get it to run 4.4 GHz all-core (3900X) stable while pulling less power, at higher power response settings (in everything except this game, for some reason?). It took a lot of searching before I found that.
I'm not following the logic of what they're trying to do. Rather than reading actual performance numbers, it seems to read voltage overhead and then push it all the way up even if there's no performance issue. Plus, the constant frequency fluctuations are never going to feel smooth. I don't think that these features are necessarily working as intended.

The biggest issue, however, is that they're almost wholly unnecessary. There are very, very few titles on the market that will benefit in any meaningful way from this sort of thing. If someone is running a top-range rig with aggressive cooling and trying to get 100+ FPS at 2K-4K resolutions without formal overclocking, it may help. Mid-range systems with standard cooling will run pretty much any title smoothly (60-144 FPS) at 1080p-1440p using stock settings. It serves almost no practical purpose but to burn out your hardware faster and create instability.

The power draw, especially, I'm going to call just plain unwise, if not unsafe, for the sorts of motherboards that come in systems like mine. This little mobo was not made for that type of constant cooking, and I should not be smelling hints of ozone running a system out of the box. To be honest, I was getting quite annoyed as I worked things out. I don't think it was malicious -- I think it was negligent, if not downright incompetent. More evidence of "buyer beware" whenever going with a prebuilt system. (Man, I wish I didn't have to go that route... but I had no choice.)
 
Tbh, it made little difference in any game, of course. It was for rendering and so on that I made all those changes on my CPU. When using just 1-2 cores it did ramp up quite a bit in voltage, which is totally meaningless though. At 4K you really don't need much of a CPU to keep up with the graphics card, since the fps will generally be lower. The only game where I noticed a CPU bottleneck at 4K was probably CS:GO or something like that.
 
Tbh, it made little difference in any game, of course. It was for rendering and so on that I made all those changes on my CPU. When using just 1-2 cores it did ramp up quite a bit in voltage, which is totally meaningless though. At 4K you really don't need much of a CPU to keep up with the graphics card, since the fps will generally be lower. The only game where I noticed a CPU bottleneck at 4K was probably CS:GO or something like that.
What I'm curious about now is whether this sort of thing might be throwing Cyberpunk for a loop. It seems that the BIOS / UEFI defaults these things to "Auto". As I noticed such a big, positive difference on my end by disabling them, I'm wondering if others will see any improvement. (I'm not sure if it will affect inventory issues so much as the big FPS drops some people are reporting.)
 
What I'm curious about now is whether this sort of thing might be throwing Cyberpunk for a loop. It seems that the BIOS / UEFI defaults these things to "Auto". As I noticed such a big, positive difference on my end by disabling them, I'm wondering if others will see any improvement. (I'm not sure if it will affect inventory issues so much as the big FPS drops some people are reporting.)
I think the first thing I tested was just using a voltage offset, -0.1 or something like that, and I saw improvements on multi-core loads and lower temps. If I went too low it would crash, though. But I think this is the easiest way for people who aren't willing to invest in loads of testing: offset -0.1, and once in Windows run some Cinebench or something to see that it does not crash. Might be a good idea to run it before too, to see if the score increases or decreases.
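If you want a quick sanity check before a full Cinebench session, a crude all-core load like the sketch below will at least expose a badly unstable offset quickly. It's plain Python, nothing vendor-specific, and no substitute for a real stress test:

```python
# Crude all-core CPU load for a first sanity check after applying a
# negative voltage offset. NOT a substitute for Cinebench/Prime95;
# it just keeps every logical core busy so an obviously unstable
# undervolt crashes or errors out quickly.
import math
import os
import time
from multiprocessing import Pool

def burn(seconds: int) -> float:
    end = time.time() + seconds
    x = 0.0001
    while time.time() < end:
        x = math.sqrt(x) * math.exp(math.sin(x))  # pointless FP churn
    return x

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    with Pool(cores) as pool:
        pool.map(burn, [300] * cores)  # ~5 minutes on every logical core
    print("Survived. Now run a proper stress test before calling it stable.")
```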
 
Another thing that I had to deal with when I first got this system was a terrible UEFI configuration. Both AMD's Core Performance Boost and ASRock's Base Frequency Boost were set to auto-fire whenever the PC seemed to have power available... and they would compete. My general performance (across the board) was initially very inconsistent: stutters, hitching, and freezing in numerous titles. And heat levels / voltage were concerning, to say the least: 1.52 volts being pulled and the CPU jumping to 90°C while surfing the net. Ridiculous.

What do you mean by AMD's Core Performance Boost? Do you mean Precision Boost Overdrive (PBO) 1 or 2?

In PBO2, it's normal for it to spike to 1.52V on your best core and hit 5.1GHz, even at idle. Zen 3 CPUs behave a bit like GPUs: they have a boost algorithm that does its own thing within pre-defined limits (temperature, SoC power, RMS/peak current).

It will keep boosting, depending on workload, until it hits one or more of those limits. So at idle it just sees a tonne of power and thermal headroom, and uses it to hit stupid clocks on one core. The voltage (and frequency) actually goes down if all cores are loaded, because there's less headroom.
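To sketch the idea in code, here is a toy model (not AMD's actual algorithm; the 142W/95A/140A figures are the published stock PPT/TDC/EDC limits for 105W Zen 3 parts like the 5900X, and the load coefficients are invented purely for shape):

```python
# Toy model of boost-until-a-limit-binds. Real Precision Boost is far
# more sophisticated, but the shape is the same: raise clocks while
# every limit (power, current, temperature) still has headroom.
STOCK_LIMITS = {"ppt_w": 142, "tdc_a": 95, "edc_a": 140, "temp_c": 90}

def boosted_clock(telemetry, limits=STOCK_LIMITS,
                  base_mhz=3700, step_mhz=25, fmax_mhz=5100):
    clock = base_mhz
    while clock < fmax_mhz and all(
        telemetry(metric, clock + step_mhz) < limit
        for metric, limit in limits.items()
    ):
        clock += step_mhz
    return clock

# A light 1-core load leaves heaps of headroom, so clocks (and voltage)
# run all the way up; an all-core load draws more per MHz, binds on PPT
# sooner, and settles lower. Coefficients are made up for illustration.
light_load = lambda m, clk: {"ppt_w": 0.012, "tdc_a": 0.008,
                             "edc_a": 0.012, "temp_c": 0.011}[m] * clk
heavy_load = lambda m, clk: {"ppt_w": 0.031, "tdc_a": 0.020,
                             "edc_a": 0.030, "temp_c": 0.017}[m] * clk

print(boosted_clock(light_load))  # 5100 -- pegs fmax, nothing binds
print(boosted_clock(heavy_load))  # 4575 -- PPT binds well below fmax
```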

It did take about a week's worth of research and some trial and error, but I eventually turned off both of the "Boost" options altogether in the UEFI, while leaving the XMP settings enabled, as I was seeing no issues that smelled like RAM. Smooth sailing ever since: no hitching or stuttering at all, in any title. Temps stay below 75°C pretty much, even under load, on both CPU and GPU. Most games still run at a solid 120-144 FPS. You may want to look at that in particular.

Which boost did you turn off?
 
What do you mean by AMD's Core Performance Boost? Do you mean Precision Boost Overdrive (PBO) 1 or 2?

In PBO2, it's normal for it to spike to 1.52V on your best core and hit 5.1GHz, even at idle. Zen 3 CPUs behave a bit like GPUs: they have a boost algorithm that does its own thing within pre-defined limits (temperature, SoC power, RMS/peak current).

It will keep boosting, depending on workload, until it hits one or more of those limits. So at idle it just sees a tonne of power and thermal headroom, and uses it to hit stupid clocks on one core. The voltage (and frequency) actually goes down if all cores are loaded, because there's less headroom.



Which boost did you turn off?
I'm not remembering offhand what each one was called exactly -- they were very similar, and this may differ across BIOSes and chipsets. On my rig, AMD's boost tech and ASRock's totally separate boost tech were both activated at the same time, even though they did the exact same thing in different ways. So what would happen is that one of them would kick in -- ASRock's seemed to run the show, and I could tell easily because of the voltage settings each was set to use.

So if I, for example, opened up a window in Firefox, ASRock's boost would immediately go, "User activity with plenty of power not being used! Max throttle!! Let's DO THIS!!!" and I would watch my voltage leap across the board on the mobo, CPU, and GPU, instantly and viciously. Fans would spin up like nuts -- like I've not heard since the 1990s -- and... a single window would open. Then everything would start winding down, which could take around 20 seconds or so. Just stupid.

In games, I can't begin to tell you the stuttering, chopping, and hitching this was creating. AMD would go first, then ASRock would go, "NO!!! I will control this!!!" and wrench the voltage and frequencies to the max, bringing the CPU into and over the 90°C range in a matter of seconds and leaving it there until it began to heat throttle. At that point it would turn off, and the AMD tech would say, "Ah, my turn, then!" and instantly crank the voltage and clocks back up to a more reasonable level. As soon as the power draw and heat were within ASRock's desired levels, it would start this process all over again. This pattern would occur about once every 5-10 seconds during any form of gameplay, especially under load.

By disabling just the ASRock stuff, the AMD boost was more tame, but it would still fire up any time things got even a little heavy, even if there was no need. This would, naturally, create a noticeable hitch or stutter in many games, but it wouldn't show up in the FPS, as it wasn't coming from the GPU. Once again, an issue that created noticeable grabbing even though there was no need. It just did it because it could.

By disabling both, I have silky-smooth performance with absolutely no loss or FPS drops at all. I often get better performance in most games, since the power and frequencies aren't jumping all over the place constantly. (How about that? Making things stable resulted in stability. Who'd have thunk it?) The end result is that for most end-users, this tech is simply not necessary.

For it to work as intended, a system really needs to be cooled very aggressively and be constantly under a pretty heavy load. For most end-users, the only thing that will happen is that their systems will fluctuate all over the place the minute they're under pressure, if both are available and activated at the same time.

This was my experience, however, with my extremely budget motherboard. Mileage will vary depending on the quality of the parts.


_______________


XMP seems to be a very safe tweak to leave on. From what it does, it simply allows a system to take advantage of the RAM's true speed, even if certain BIOS / UEFI setups are coded to clock RAM speeds certain ways under certain configurations. It's more a way of getting around actual hardware capabilities being choked by outdated software techniques than truly an "overclock". The user must ensure that they have their RAM in the correct slots, however, or this can cause wild instability. This will vary by motherboard, so the only way to know for sure is to read the mobo manual.
 
I'm trying to follow you, but it's very hard for me to understand what you mean without the specific names of these "boost tech" options in the BIOS.

Off the top of my head, there are:

1. Core Performance Boost (CPB) - this is Zen 3's stock/default boost behaviour.
2. Precision Boost (PB) - this is the successor to XFR (eXtended Frequency Range). It extends the frequency boost tables of a limited number of cores if there is PPT, TDC, EDC* headroom to allow them to reach those clock frequencies. This is also part of Zen 3's stock/default boost behaviour.
3. Precision Boost Overdrive (PBO2) - this is a kind of single/multi-core boost algorithm that can operate outside the stock PPT, TDC, EDC limits, paired with an undervolting utility (Curve Optimizer).

*PPT = Package Power Tracking, TDC = Thermal Design Current, EDC = Electrical Design Current

AMD just love their confusing acronyms. Nevertheless, stock Zen 3 boosts using an algorithm within pre-defined electrical and temperature limits. That's just how it works as standard.

XMP is overclocked RAM. Make no mistake about that. You are running it at voltages and frequencies well beyond the JEDEC memory specification.

You know what, to keep things simple, I'm just going to revert to a completely stock CPU and GPU and test again.
 
This was my experience, however, with my extremely budget motherboard. Mileage will vary depending on the quality of the parts.
I generally go with enthusiast-level boards and I did not have the same issues, just temps that were high even on water cooling. What CTR does is test your CPU with lower and lower voltage until it fails at a specific speed. Then you can kind of set up profiles for low/heavy/single-thread use that are more optimised. It does require specific settings in the BIOS to work exactly right, though, like fast switching and more power delivery, bla bla.
XMP seems to be a very safe tweak to leave on. From what it does, it simply allows a system to take advantage of the RAM's true speed, even if certain BIOS / UEFI setups are coded to clock RAM speeds certain ways under certain configurations. It's more a way of getting around actual hardware capabilities being choked by outdated software techniques than truly an "overclock". The user must ensure that they have their RAM in the correct slots, however, or this can cause wild instability. This will vary by motherboard, so the only way to know for sure is to read the mobo manual.
Depends. It is an OC, but it's a certified OC; that's why the voltage always increases to keep it stable (timings generally increase too). There are a lot of memory chip makers, all with different quality and so on. If you look at G.Skill DDR4-4000 CL14 memory, for example, you see really low CL/timings with fast RAM. It draws 1.55 volts though, so it's overclocked, very highly binned memory chips. There's a higher chance it will run at those speeds than if you buy random RAM and OC yourself, since they validate it. Same with the really high-spec CPUs Intel had that were guaranteed to OC to 6 GHz.
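For what it's worth, the latency arithmetic behind "low CL with fast RAM" is: first-word latency in ns = 2000 × CL / data rate in MT/s. A quick sketch comparing a JEDEC-spec stick with XMP kits (voltages in the comments are the typical rated ones):

```python
# First-word latency: 2000 * CAS / data_rate (MT/s) gives nanoseconds.
def first_word_latency_ns(cas: int, data_rate_mts: int) -> float:
    return 2000 * cas / data_rate_mts

for name, cas, rate in [
    ("JEDEC DDR4-2133 CL15", 15, 2133),  # in-spec, 1.20 V
    ("XMP DDR4-3200 CL16",   16, 3200),  # common XMP kit, 1.35 V
    ("XMP DDR4-4000 CL14",   14, 4000),  # the binned kit above, 1.55 V
]:
    print(f"{name}: {first_word_latency_ns(cas, rate):.1f} ns")
```

So the DDR4-4000 CL14 kit is roughly twice as quick to first word as a JEDEC stick, which is exactly why it needs binned chips and 1.55 V to hold it.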
1. Core Performance Boost (CPB) - this is Zen 3's stock/default boost behaviour.
2. Precision Boost (PB) - this is the successor to XFR (eXtended Frequency Range). It extends the frequency boost tables of a limited number of cores if there is PPT, TDC, EDC headroom to allow them to reach those clock frequencies. This is also part of Zen 3's stock/default boost behaviour.
3. Precision Boost Overdrive (PBO2) - this is a kind of single/multi-core boost algorithm that can operate outside the stock PPT, TDC, EDC limits, paired with an undervolting utility (Curve Optimizer).

AMD just love their confusing acronyms.
Yeah, this!! Pretty much what CTR does, but automatically.
 
I can attest to the performance drop between 1.5 and 1.6.

I ran benchmarks using my personal settings (basically a mix of medium-high, with volumetric fog, subsurface scattering, SSAO, and screen space reflections on low) and again with every setting on its lowest option (including off) in the menu. I've attached the results.

My system:
Intel i5-12400F
AMD Vega 64 Undervolted
16 GB RAM 3200 cl16
Intel NVMe SSD

All runs were done on the same system, with the same drivers and the same in-menu settings. When switching to 1.6, I reset the options to default before entering the graphics settings, like some people suggested. I tried this before and it didn't work, but figured I might as well do it to be safe. And since I'm on an AMD GPU, this shows it's not just an NVIDIA issue.

I'd be curious to know exactly what settings were changed; the patch notes, as usual, were pretty vague. Before running them side by side I'd have assumed it was the fog, but I honestly couldn't tell the difference visually.
 

Attachments: Cyberpunk 1.5 vs 1.6.jpg