Building a gaming PC

Supposedly they reduced power usage compared to the 7950X because of that. If you compare specs, its max temperature and TDP are both lower (though the latter is a poor metric).

What I haven't found so far is the max clocks for the cache-stacked CCD on the 7950X3D.

If I'm getting a 7950X, I'm planning to run it in 105W eco mode anyway, since as reviewers pointed out, the performance drop is very small for a big reduction in power usage.
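The trade-off is easy to quantify. A quick sketch with hypothetical but reviewer-typical numbers (97% retained performance is an assumption, and TDP is a power limit, not actual draw):

```python
# Hypothetical numbers for illustration only; real figures vary per review.
stock_tdp, eco_tdp = 170.0, 105.0  # watts: stock vs 105W eco mode limits
perf_retained = 0.97               # assumed eco-mode multi-core score vs stock

power_saved = 1 - eco_tdp / stock_tdp                 # fraction of power saved
eff_gain = perf_retained / (eco_tdp / stock_tdp) - 1  # perf-per-watt gain

print(f"power saved: {power_saved:.0%}")    # ~38%
print(f"perf/watt gain: {eff_gain:.0%}")    # ~57%
```

Even if the real performance drop is a few percent worse, the efficiency gain stays large.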
 
That's the thing: if there are different clock speeds between the CCDs, I wonder how that will work out. It will also be hard to remedy, since temps are harder to manage with cache on top of the CCD.
 
That's the thing: if there are different clock speeds between the CCDs, I wonder how that will work out. It will also be hard to remedy, since temps are harder to manage with cache on top of the CCD.
Those already exist, and the differences can be pretty dramatic; it's knowing which CCD games want/need that is tricky.
 
So far I'm skeptical anything would know it (i.e. schedulers). I expect random scheduling. If it will end up still beneficial, it might be worth buying.
 
Those already exist, and the differences can be pretty dramatic; it's knowing which CCD games want/need that is tricky.
Yeah, exactly what I meant. You'd need to tell the CPU which CCD to use for which tasks in some way. And for gaming, I wonder which would be faster: clock speed or cache?
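On Linux you can already experiment with this by hand using CPU affinity. A minimal sketch (which cores belong to the cache CCD is an assumption; here I just take the first half of the visible cores, but check `lscpu` on the actual system, since SMT sibling numbering varies):

```python
import os

def pin_to_ccd(pid: int, cores: set[int]) -> set[int]:
    """Restrict a process to the given logical cores; return the new mask."""
    os.sched_setaffinity(pid, cores)   # Linux-only: set the affinity mask
    return os.sched_getaffinity(pid)   # read back the effective mask

# Pin the current process (pid 0) to the first half of the visible cores,
# which on a hypothetical 7950X3D might be the V-cache CCD.
half = set(range(max(1, (os.cpu_count() or 1) // 2)))
print(pin_to_ccd(0, half))
```

The same thing can be done from a shell with `taskset -c 0-7 ./game`, without touching any code.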
 
With their next CPUs iteration they might get to 16 core one with fully stacked 3D v-cache already instead of this half / half solution.
I'm not sure that is going to happen. The problem is that cache (L1, L2, L3, V-cache, etc.) is static RAM, and SRAM stopped scaling at 7 nm, which is part of the reason AMD went to a chiplet-style design for their GPUs. Scaling down is not just about density; it's also about power efficiency. So while the CCD chiplets are 5 nm, the cache is, I believe, 6 nm (AKA 7nm+), which means it's not as power efficient and can't attain the same clock speeds as the 5 nm chiplets without overheating, hence the need to nerf the clocks (and kill manual overclocking).

The reason for the half-and-half design is so you don't kill the performance for most productivity applications. It's a compromise for people like me who didn't want to nerf productivity work to gain a few FPS in games. If all you are doing is gaming, then the 8-core parts with 3D cache are plenty; there is little to gain for gaming from 12 or 16 cores with full V-cache, and you would have to lower the clocks perhaps even more than with the 8-core parts.

As a general rule of thumb, the more cores or CUs a device has, the lower the boost clock in all-core workloads, because of thermals. You also have to take into account when reading the specs that the max boost frequency is based on a single-core workload and will be lower than the rated max in all-core workloads.
 
Yeah, all-core workloads are limited by thermals, so it's not expected to hit max frequency on all cores if the chip isn't cooled well enough.

And that's an interesting point about the cache using an older node. Why has memory stopped scaling down compared to CPU cores?

So far I'm leaning towards simply getting 7950X. But I'd still wait for benchmarks.

And a side note, why are these CPUs selling below MSRP now? It's really surprising.
 
So far I'm skeptical anything would know it (i.e. schedulers). I expect random scheduling. If it will end up still beneficial, it might be worth buying.
True, Windows (and Linux) will have to make their schedulers work with it, but it's really not any different from the Intel big/little design of the 12000 series, where the performance cores and efficiency cores run at two different clock speeds and the Windows scheduler has to account for that. Since AMD and Microsoft Azure have a big partnership going right now, I suspect both will want to get this right in short order.

The reason SRAM isn't scaling like logic is the topology. Generally the most basic logic block (a CMOS inverter) uses 2 transistors, while an SRAM cell uses 6, and because of that topology you just can't shrink the 6-transistor cell down as much. The transistors themselves also differ: logic transistors are FinFET, while SRAM transistors are built a bit differently.
 
I guess if current schedulers are already aware of the big/little idea, with cache size added to the mix, then they might work decently well out of the box with these new CPUs. But some work probably still needs to be done.

And if cache indeed doesn't scale down any more, then yeah, even newer models will be too limited to add more of it.
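On Linux you can actually see the cache topology the scheduler would have to work with, since the kernel exposes per-CPU cache sizes in sysfs. A small sketch (the 7950X3D output shown in the comment is an expectation, not something I've measured):

```python
from pathlib import Path

def l3_size_per_cpu() -> dict[int, str]:
    """Map each logical CPU to its reported L3 size, where the kernel exposes it."""
    sizes = {}
    for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        idx3 = cpu_dir / "cache" / "index3" / "size"  # index3 = L3 on most systems
        if idx3.exists():
            sizes[int(cpu_dir.name[3:])] = idx3.read_text().strip()
    return sizes

# On a hypothetical 7950X3D you'd expect two distinct values here,
# one per CCD; on a uniform CPU every core reports the same size.
print(l3_size_per_cpu())
```

If the two CCDs really do report different L3 sizes here, a scheduler (or a user script) has everything it needs to tell them apart.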
 
Well, they could add more and put it above both chiplets, but to do that they would have to reduce the clocks and power draw, which would kill productivity performance while not gaining much for gaming. Basically, the law of diminishing returns applies to gaming when using V-cache with more than 8 cores.

What I'd like to see is a 7600X3D variant, because having 25% fewer cores should give a little more headroom for clocks and power, so they might be able to do it without nerfing the clock speeds.
 
I started using it in this game not because I needed an FPS boost (no ray tracing; it's just not worth it until they fix the CPU threading problem), but because the latest version on the quality preset looks better than the TAA anti-aliasing.
 
I started using it in this game not because I needed an FPS boost (no ray tracing; it's just not worth it until they fix the CPU threading problem), but because the latest version on the quality preset looks better than the TAA anti-aliasing.
I've noticed that it generally lifts out smaller details, like thin lines in the distance that you can't see at native res. I'm pretty impressed with DLSS 2, tbh, but I really want to see what DLSS 3 can do in CP2077.
 
I'm pretty sure everyone is doing that to avoid oversupply. More importantly, is there a shortage or not? During the previous release it was impossible to buy GPUs for months, and prices did go crazy high. Now they look available all over, scalpers don't seem to be buying them all, and prices are at MSRP levels.
 
Sure, but the question is whether the current price is higher than it needs to be while still being profitable. I have no idea by how much.

Nvidia surely raised the price more than necessary because they rushed to use a newer, less established node and passed that cost on to the end user.

Prices would be better if there were a few more major GPU makers.
 
Got Sapphire Pulse RX 7900 XTX.

Good performance increase over 6800 XT.

6800 XT: (benchmark screenshot)

7900 XTX: (benchmark screenshot)
 
Better ray tracing performance too. Not as good as the Nvidia 4000 series, but as good as the Nvidia 3000 series cards, which means they are catching up and are only a generation behind now instead of two-plus generations. AMD still needs to do better with their drivers, though, because the bad reputation has caused them to lose market share with the 6000 series (21% down to 10%).

It's going to be hard to beat Nvidia's main advantage: they have an actual ecosystem that includes not just hardware but also a large software division that works hand in hand with it, and AMD just doesn't have that. Nvidia is also currently killing it when it comes to AI, which is why their GPU prices are so high right now; their consumer GPU division has to compete with their commercial cloud-computing and AI divisions.

 
I tried ray tracing, since it just started working for CP2077 on Linux with radv. Performance isn't really playable yet, even on the medium preset, but it's a fun benchmark to test:

 