Or they are and they simply don't realize how well it used to run before this update. They are returning players after all.
> Massive fps loss is always correlated to massive GPU power drop. Anyone else observe similar behaviour?

I think I've been seeing people claim this often tbh, even since earlier patches. Not sure I've been able to pinpoint any solution tho. Hmm, I'm not sure it's the same issue, since most often it also limits clock speeds/usage. Yours seems to not drop even if it pulls lower wattage... Very weird.
> I don't think it's odd at all to undervolt/power limit GPUs. The whole point of it is to lower power draw so less heat is produced, which frees up thermal headroom to boost more. That's why this 3060 Ti is able to get a 1890-1925 MHz boost clock. If you power limit a 200W stock card to 90%, you should expect to be power limited at 180W. You will always be power limited in some form or other in this game.

True, but it can also give you stability issues, just like higher voltage can. That doesn't seem to be the issue here, tho.
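To make the arithmetic in the quoted post concrete, here's a minimal Python sketch. The 200 W / 90% figures come from the posts in this thread; the anomaly check just encodes the symptom being discussed (near-full utilization but much lower draw), and the 25% tolerance is an arbitrary choice of mine:

```python
def expected_power_cap(tdp_watts: float, limit_pct: float) -> float:
    """Sustained board power you should expect under a software power limit."""
    return tdp_watts * limit_pct / 100.0

def looks_anomalous(power_w: float, util_pct: float, cap_w: float,
                    tolerance: float = 0.25) -> bool:
    """Flag a sample where utilization is pinned near 100% but the card is
    drawing far less than its cap - the symptom described in this thread."""
    return util_pct >= 95 and power_w < cap_w * (1 - tolerance)

cap = expected_power_cap(200, 90)        # 200 W card limited to 90%
print(cap)                               # -> 180.0
print(looks_anomalous(90, 99, cap))      # half the cap at 99% load -> True
print(looks_anomalous(175, 99, cap))     # normal power-limited operation -> False
```

Being power limited at 180 W with clocks holding steady is expected behaviour; the bug in this thread is the first anomalous case, where draw collapses while utilization stays pinned.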
> I have observed this pattern of behaviour over several months at this point, and yes, I have tested it for several days at a time at 100% stock settings. It still happens with about the same frequency. The same things trigger it - going to menus/map etc. I do not believe this issue is related to the undervolt. I do not believe this undervolt is unstable in Cyberpunk.

Are you using Windows 11? There has recently been an issue with some update there; I think it's fixed in the latest Nvidia driver or the GFE experimental build. It kinda sounds like you said it fails to go into a high power state.
> Clocks don't drop because GPU voltage doesn't drop and utilization is still near 100%. I expect the boost clock to be pegged to the absolute highest it can go at 1.1V.

Ye, I noticed that too, odd. Nothing but power draw drops; even utilization is pinned at max, but fps is cut in half or more -.- Never seen this happen tbh.
> I am v/f undervolting the GPU and power limiting to 91% TDP. Since this is a 200W card, peak power draw will top out at around 180W. This issue also occurs at completely stock settings, with similar frequency. I do not believe this issue is due to an unstable undervolt.

Whenever troubleshooting, you need to return your entire system to factory defaults. This power limitation may be causing additional issues with the way your system is trying to process between the CPU and GPU. This is very likely complicating the issue.
> Going to a menu drops GPU utilization if frame capped. I have a 120 fps frame cap in the game because I don't want to idle in menus/inventory with my GPU sucking 180W to pump out 500+ pointless frames per second, unnecessarily blasting heat into my room.

The frame cap is a good idea, but if your performance is normally around 60, cap at or below 60. If your frames are normally around 50, cap at or below 48. Using the in-game Vsync vs. something like RTSS may yield different or better results.
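If anyone wants numbers instead of eyeballing an overlay while tabbing in and out of menus, here's a rough Python logging sketch. It assumes `nvidia-smi` is on your PATH; the query fields are standard `nvidia-smi` options, but the 1-second interval and output format are just my choices:

```python
import subprocess
import time

def parse(csv_line: str) -> tuple:
    """Parse one 'power.draw, utilization.gpu, clocks.gr' CSV line into floats."""
    return tuple(float(x) for x in csv_line.strip().split(","))

def sample() -> tuple:
    """Query current power draw (W), GPU utilization (%) and graphics clock (MHz)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw,utilization.gpu,clocks.gr",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse(out)

def watch(period_s: float = 1.0) -> None:
    """Print one sample per period; run it while entering/exiting menus and
    watch for power dropping while utilization stays pinned. Ctrl-C to stop."""
    while True:
        power, util, clock = sample()
        print(f"{power:6.1f} W  {util:3.0f} %  {clock:4.0f} MHz")
        time.sleep(period_s)

# watch()  # uncomment to log while playing
```

A log like this would also settle the back-and-forth above, since both machines could record the same three numbers around the same menu transitions.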
> I have experienced performance issues for a while now, but it's a lot more frequent in 1.6. This is on a 5900X + 32GB RAM + 3060 Ti.

We have very similar builds, but yours is higher spec. I'm running a 5600X + 16 GB RAM + 3060. I played last night for over 4 hours without a single hitch or stutter. FPS remained a fairly constant 45 at 1080p, RT on, DLSS Quality, settings at Ultra. Let's try this with matched settings and see how that works. The fastest way to match the graphics settings is to set the Quick Preset to "Ray Tracing Ultra", then go down to DLSS and switch it from "Auto" to "Quality".
> Whenever troubleshooting, you need to return your entire system to factory defaults. This power limitation may be causing additional issues with the way your system is trying to process between the CPU and GPU. This is very likely complicating the issue.

I've already said multiple times that I've tested stock (extensively) and the same behaviour occurs. It's also not about leaving it on for an hour; it's about the power drop when you enter/exit menus. If I don't enter/exit a menu for an hour, I usually don't see any power drop.
> I've already said multiple times that I've tested stock (extensively) and the same behaviour occurs. It's also not about leaving it on for an hour; it's about the power drop when you enter/exit menus. If I don't enter/exit a menu for an hour, I usually don't see any power drop.

I'm in and out of menus constantly with no drops. If you're willing to work with me, I can try to test things back and forth. It won't be fast, and it won't be fun, exactly, but since our systems are very close, we may be able to root out something that either corrects the issue on yours or causes it on mine.
> Yea, I saw some odd CPU settings in my auto BIOS too. Not that bad, but it's very, very power hungry and liked to ramp up voltage. With the CTR software I could get it to run 4.4 GHz all-core (3900X) while pulling less power at higher power response settings, stable (in everything except this game, for some reason?). It took a lot of searching before I found that -.-

I'm not following the logic of what they're trying to do. Rather than reading actual performance numbers, it seems to read voltage overhead and then push it all the way up even if there's no performance issue. Plus, the constant frequency fluctuations are never going to feel smooth. I don't think that these features are necessarily working as intended.
> Tbh it made little difference in any game ofc. It was for rendering and so on that I made all those changes on my CPU. When using just 1-2 cores it did ramp up quite a bit in voltage, totally meaningless tho. At 4K you really don't need much of a CPU to keep up with the graphics card, since the fps will generally be lower. The only game I noticed a CPU bottleneck in at 4K was probably CS:GO or something like that ^^

What I'm curious about now is whether this sort of thing might be throwing Cyberpunk for a loop. It seems that BIOS/UEFI setups default these things to "Auto". As I noticed such a big, positive difference on my end by disabling them, I'm wondering if others will see any improvement. (I'm not sure if it will affect inventory issues so much as the big FPS drops some people are reporting.)
> What I'm curious about now is whether this sort of thing might be throwing Cyberpunk for a loop. It seems that BIOS/UEFI setups default these things to "Auto". As I noticed such a big, positive difference on my end by disabling them, I'm wondering if others will see any improvement. (I'm not sure if it will affect inventory issues so much as the big FPS drops some people are reporting.)

I think the first thing I tested was just using a voltage offset, -0.1 V or something like that, and I saw improvements in multicore loads and lower temps. If I went too low it would crash tho. But I think this is the easiest way for people who aren't willing to invest in loads of testing: offset -0.1 V, then in Windows run some Cinebench or something to see that it does not crash. Might be a good idea to run it beforehand too, to see if the score increases or decreases.
Another thing that I had to deal with when I first got this system was a terrible UEFI configuration. Both AMD's Core Performance Boost and ASRock's Base Frequency Boost were set to auto fire whenever the PC seemed to have power available...and they would compete. My general performance (across the board) was initially very inconsistent: stutters, hitching, and freezing in numerous titles. And heat levels / voltage were concerning to say the least. 1.52 volts being pulled and the CPU jacking to 90*C while surfing the net. Ridiculous.
It did take about a week's worth of research and some trial and error, but I eventually turned off both of the "Boost" options altogether in the UEFI, but left the XMP settings enabled, as I was seeing no issues that smelled like RAM. Smooth sailing ever since. No hitching or stuttering at all, in any title. Temps stay below 75*C pretty much, even under load, on both CPU and GPU. Most games still run at a solid 120-144 FPS. You may want to look at that in particular.
> What do you mean by AMD's core performance boost? Do you mean Precision Boost Overdrive (PBO) 1 or 2?

I'm not remembering offhand what each one was called exactly -- they were very similar, and this may differ between BIOSes and chipsets. On my rig, AMD's boost tech and ASRock's totally separate boost tech were activated at the same time, even though both of them did the exact same thing in different ways. So what would happen is that one of them would kick in -- ASRock's seemed to run the show, and I could tell easily because of the voltage settings each was set to use.

So if I, for example, opened up a window in Firefox, ASRock's boost would immediately go, "User activity with plenty of power not being used! Max throttle!! Let's DO THIS!!!" and I would watch my voltage leap across the board on the mobo, CPU, and GPU, instantly and viciously. Fans would spin up like nuts -- like I've not heard since the 1990s -- and...a single window would open. Then everything would start winding down, which could take around 20 seconds or so. Just stupid.

In games, I can't begin to tell you the stuttering, chopping, and hitching this was creating. AMD's would go first, then ASRock's would go, "NO!!! I will control this!!!" and wrench the voltage and frequencies to the max, bringing the CPU into and over the 90*C range in a matter of seconds and leaving it there until it began to thermal throttle. At that point it would turn off, and the AMD tech would say, "Ah, my turn, then!" and instantly crank the voltage and clocks back up to a more reasonable level. As soon as the power draw and heat were within ASRock's desired levels, it would start this process all over again. This pattern would occur about once every 5-10 seconds during any form of gameplay, especially under load.
In PBO2, it's normal for it to spike to 1.52V on your best core and hit 5.1 GHz, even at idle. Zen 3 CPUs behave a bit like GPUs: they have a boost algorithm that does its own thing within pre-defined limits (temperature, SoC power, RMS/peak current).
It will keep boosting depending on workload until it hits one or more of those limits. So at idle, it just sees a tonne of power and thermal headroom, so it uses it to hit stupid clocks on one core. The voltage (and frequency) actually goes down if all cores are loaded, because there's less headroom.
Which boost did you turn off?
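A toy model of the boost behaviour described above - this is not AMD's real algorithm, and every constant here (base/max clocks, a 142 W PPT, a cubic power-vs-frequency guess) is made up purely to illustrate why one lightly loaded core can boost far higher than an all-core load:

```python
def boost_clock(active_cores: int,
                base_mhz: int = 3700, fmax_mhz: int = 5100,
                ppt_w: float = 142.0, idle_w: float = 20.0) -> int:
    """Toy opportunistic boost: raise clocks in 25 MHz steps until the
    estimated package power would exceed the PPT limit. Per-core power is
    assumed to grow roughly with f^3 (voltage scales with frequency);
    all constants are invented for illustration."""
    f = base_mhz
    while f + 25 <= fmax_mhz:
        f_next = f + 25
        # crude per-core dynamic power, normalized so one core at fmax
        # roughly reaches the package limit
        per_core_w = (f_next / fmax_mhz) ** 3 * (ppt_w - idle_w)
        if idle_w + active_cores * per_core_w > ppt_w:
            break  # out of headroom: hold the current clock
        f = f_next
    return f

# one lightly loaded core boosts to fmax; an all-core load stays at base
print(boost_clock(1), boost_clock(12))  # -> 5100 3700
```

The same logic runs in reverse, which is the point made above: load every core and the power budget is spent at a much lower frequency, so voltage and clocks drop even though the chip is working harder.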
> This was my experience, however, with my extremely budget motherboard. Mileage will vary depending on the quality of the parts.

I generally go with enthusiast-level boards and I did not have the same issues, just temps that were high even on watercooling. What CTR does is test your CPU with lower and lower voltage until it fails at a specific speed. Then you can kinda set up profiles for low/heavy/single-thread use that are more optimised. It does require specific settings in the BIOS to be exact, tho, like fast switching and more power delivery, bla bla.
> XMP seems to be a very safe tweak to leave on. By looking at what it does, it simply allows a system to take advantage of the RAM's true speed, even if certain BIOS/UEFI setups are coded to clock RAM speeds certain ways under certain configurations. It's more like a way of getting around actual hardware capabilities being choked by outdated software techniques, not truly an "overclock". The user must ensure that they have their RAM in the correct slots, however, or this can cause wild instability. This will vary by motherboard, so the only way to know for sure is to read the mobo manual.

Depends. It is an OC, but it's a certified OC; that's why voltage always increases to keep it stable (timings generally increase too). There are a lot of memory chip makers that all have different quality and so on. If you look at G.Skill DDR4-4000 CL14 memory, for example, you see really low CL/timings with fast RAM. It draws 1.55V tho, so it's overclocked and very highly binned memory chips. There's a higher chance it will run at those speeds than if you buy random RAM and OC yourself, since they validate it. Same with the really high-spec CPUs Intel had that were guaranteed to OC to 6 GHz.
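As a side note on the quoted timings, the usual way to compare kits is "true latency" in nanoseconds: CAS cycles divided by the actual clock, which for DDR is half the MT/s rating. A quick sketch using the DDR4-4000 CL14 kit mentioned above (the 3200 CL16 comparison kit is just my own example, not from the thread):

```python
def true_latency_ns(mt_per_s: float, cas_cycles: float) -> float:
    """First-word CAS latency in ns. DDR transfers twice per clock, so the
    real clock in MHz is mt_per_s / 2; latency = cas / clock_mhz * 1000."""
    return cas_cycles * 2000 / mt_per_s

print(true_latency_ns(4000, 14))  # G.Skill DDR4-4000 CL14 -> 7.0 ns
print(true_latency_ns(3200, 16))  # a common DDR4-3200 CL16 kit -> 10.0 ns
```

This is why low CL on fast RAM is the expensive combination: both terms of the ratio improve at once.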
> 1. Core Performance Boost (CPB) - this is Zen 3's stock/default boost behaviour.
> 2. Precision Boost (PB) - this is the successor to XFR (eXtended Frequency Range). It extends the frequency boost table on a limited number of cores if there is PPT, TDC, EDC headroom to allow for it. This is also part of Zen 3's stock/default boost behaviour.
> 3. Precision Boost Overdrive (PBO2) - this is a kind of single/multi-core boost algorithm that can operate outside the stock PPT, TDC, EDC limits with an undervolting utility (Curve Optimizer).
> AMD just love their confusing acronyms.

Yea, this!! Pretty much what CTR does, but automatically.