Building a gaming PC

More like the 7900 XT should be priced lower. It's priced way too close to the 7900 XTX to be worth considering. But overall it's still cheaper than comparable Nvidia cards, so AMD has an edge there.

I think many reviewers pointed out that's the reason the 7900 XT isn't selling as well as the 7900 XTX.
Yeah, the XT feels like a ploy to make people pay for the XTX instead. At least at MSRP prices.

Think I finally found a stable OC on my 4090 that doesn't consume too much power:

2910/24000 with a 115% power target pulls around 450 W in the CP2077 benchmark and 450-480 W in game. Never seen it drop under 60 FPS in CP2077 with maxed-out graphics and Quality DLSS at 4K. Now I just need a waterblock to get the temps and the noise down a bit -.-

Edit: Oops, did a Time Spy Extreme bench... 515 W draw during part 2. Damn, that test always goes way beyond anything; even Port Royal never hits that high a power draw -.-
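That 515 W actually lines up with the raised limit. A quick back-of-the-envelope check, assuming the 4090's 450 W reference board power:

```python
# RTX 4090 reference board power is 450 W; a 115% power target raises the cap
stock_board_power_w = 450
power_target = 1.15
cap_w = stock_board_power_w * power_target
print(round(cap_w, 1))  # ~517.5 W, so a 515 W draw is just the card riding its limit
```

So Time Spy Extreme isn't doing anything magic; it's simply the one load heavy enough to push the card all the way to the new cap.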
 
Well that's just the normal progression of things.

Regarding the Zen4 X3D lineup: it's pretty wild. The 7800X3D is a straightforward successor to the 5800X3D, but the 7900X3D and 7950X3D are some Frankenstein stuff. One core chiplet with added v-cache, one without, but with ~15% higher max clock... scheduling will be wild, and I mean WILD.
Most likely only the chiplet without the V-cache will clock that high, and the one with the V-cache will only clock as high as the 7800X3D... If all you're doing is gaming, the 7800X3D is all you really need. The other two are only worth it if you do production work that needs the extra cores/threads but also want to game on the same machine. So unless you're also doing production work, the top two SKUs are a waste of money when you can save a few hundred bucks and get the same gaming performance from a 7800X3D.

It's why I opted for a 5800X over the 5800X3D: I didn't want to give up production performance to the lower clocks on the 5800X3D for minimal gains in the games I play, which are single-player with graphics cranked up as high as possible while still maintaining 60 FPS in worst-case scenarios. Plus the 5800X was only $310 at the time, the X3D was still $450, and there just wasn't $140 worth of advantage in my use case.

At least this time around I can bump up to a 7900X3D and keep my production performance up while still taking advantage of the gaming gains, and by the time I'm actually ready to upgrade to the AM5 platform in a year to a year and a half, these should be considerably cheaper, along with cheaper motherboards and DDR5.

Yeah, the XT feels like a ploy to make people pay for the XTX instead. At least at MSRP prices.

Think I finally found a stable OC on my 4090 that doesn't consume too much power:

2910/24000 with a 115% power target pulls around 450 W in the CP2077 benchmark and 450-480 W in game. Never seen it drop under 60 FPS in CP2077 with maxed-out graphics and Quality DLSS at 4K. Now I just need a waterblock to get the temps and the noise down a bit -.-

Edit: Oops, did a Time Spy Extreme bench... 515 W draw during part 2. Damn, that test always goes way beyond anything; even Port Royal never hits that high a power draw -.-
What I've been waiting for is for someone with a 7900 XT card with dual V-BIOS to flash an XTX V-BIOS and see if it ups the performance. I highly suspect AMD intentionally nerfed the clock speeds of the XT model like they did with the 5700 non-XT, where you could get a lot of the performance back by flashing it with an XT BIOS, which raised both the clocks and the power limit.
 
The question about the 7950X3D isn't vs the 7800X3D now, but vs the 7950X proper, which has higher clocks on more cores. Which is better even for gaming isn't clear yet.

And a funny thing about benchmarks: they'll probably need to be run many times on the 7950X3D for each title to even out the results, since the hardware is so asymmetrical that results will probably be very random, with big deviation.
 
The question about the 7950X3D isn't vs the 7800X3D now, but vs the 7950X proper, which has higher clocks on more cores. Which is better even for gaming isn't clear yet.

And a funny thing about benchmarks: they'll probably need to be run many times on the 7950X3D for each title to even out the results, since the hardware is so asymmetrical that results will probably be very random, with big deviation.
Tbh I'm guessing the 3D will be faster in games anyway, since games seem to love cache. Not sure the difference from the 7800X3D to the 7950X3D will be gigantic in games though; most games won't use the extra cores anyway. I'm kinda sad I missed out on them, but my 13900K is pretty fast, and at 4K the CPU matters very little.
 
Some games might like more cache, sure, especially if they are CPU limited with certain thread patterns. But even with the 5800X3D, not all games benefited from it. So it can be hit and miss in practice, with some games benefiting more from higher core clocks.

I guess the question can be rephrased as: what is the most balanced option for the money, the 7950X or the 7950X3D? I don't use my computer only for gaming, so having a good all-around CPU is a plus, and I was planning to get a 16-core one this time, upgrading from a 12-core one.
 
Some games might like more cache, sure, especially if they are CPU limited with certain thread patterns. But even with the 5800X3D, not all games benefited from it. So it can be hit and miss in practice, with some games benefiting more from higher core clocks.

I guess the question can be rephrased as: what is the most balanced option for the money, the 7950X or the 7950X3D? I don't use my computer only for gaming, so having a good all-around CPU is a plus.
The only real difference is in CPU-limited games, no matter what CPU you're using. Resolution also plays a part ofc, and there are other limiting factors. The 5800X3D was beating the 13900K in some games since those games made good use of the cache, and it was AMD's top CPU for games even with the 7950X out -.-

I'm guessing the 7950X3D will be wicked fast for productivity too, as long as there's a use for the extra cache.
 
From recent examples, the new TW3 is kind of weirdly CPU limited and doesn't even saturate the GPU for me in DX12 mode (in Wine + vkd3d-proton), so I wonder if it will benefit from this kind of CPU.
 
From recent examples, the new TW3 is kind of weirdly CPU limited and doesn't even saturate the GPU for me in DX12 mode (in Wine + vkd3d-proton), so I wonder if it will benefit from this kind of CPU.
Probably will help. I was surprised I got 100% GPU usage at 1080p in CP2077 tbh; 720p was around 80% usage, so hitting the CPU bottleneck.
 
From what I see, their new GPUs are pretty competitive. They didn't overtake Nvidia in ray tracing but they are catching up and in everything else for gaming I don't see Nvidia having any advantage now especially with their pricing.

So it's not like there is no competition.

For Linux gamers it's even better: AMD usage is continuously growing and Nvidia usage is dropping, according to user stats on GOL for example, and this month was the breakthrough moment.

Well, I wasn't in a rush to upgrade since they said 3D V-cache models are coming later this year, and that 16-core one looks really good:

https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d
The competition is not there though.
Most gamers buy Nvidia.
AMD share has dropped to 10% in 2022.
Their drivers are subpar. With Nvidia it's plug and play; with AMD it's always a hassle.
Their FSR is subpar, with DLSS being much better and more useful.
The same with ray tracing. Nvidia has left AMD really far behind.
 
Their FSR is subpar, with DLSS being much better and more useful.
I would not say it's subpar yet; with DLSS 3 it probably will be, though. But FSR runs on anything, so with Nvidia it seems like you get more options/games with upscaling. And the RT part is sadly very true; they are only catching up to Nvidia's last gen in RT performance, which is kinda -.- Here's hoping for next gen and them making a maxed-out card next time.
 
The competition is not there though.
Most gamers buy Nvidia.
AMD share has dropped to 10% in 2022.
Their drivers are subpar. With Nvidia it's plug and play; with AMD it's always a hassle.
Their FSR is subpar, with DLSS being much better and more useful.
The same with ray tracing. Nvidia has left AMD really far behind.

Try gaming on Linux: AMD drivers are better than Nvidia's by a huge margin.

I personally don't care much about ray tracing. Nvidia sells it as marketing Kool-Aid, making people think it's the most important feature. But the reason they do it is that in everything else the competition has caught up. I don't doubt AMD will catch up in ray tracing too. Nvidia will then come up with something new as the "essential feature" and forget how important ray tracing was supposed to be. It's all a marketing game. Try to think for yourself about what actually matters.

Also, as discussed above, I don't see a point in upscaling when gaming on higher-end hardware. And I disagree about DLSS being more useful: they are pretty comparable in usefulness, and neither is very useful for those who want the best image quality.
 
I'll agree on ray tracing. My thoughts on it, now that I have it and can largely use it, is mostly unchanged. It's pretty. I don't really miss it if I turn it off.

When it becomes standard, it will be lovely to have such accurate and immersive lighting. For now, it's mostly a performance sink, and it's sometimes hard to tell the difference between a rasterized scene and a ray-traced scene unless you're specifically looking for it.

On the whole, if I have to choose between 30 FPS gameplay with full ray tracing, or 80+ FPS with rasterized lighting, I'll go with FPS every time. It's just not worth that much performance loss in most games. But it is pretty.
 
Yeah, I have the same sentiment. The performance cost for it currently often isn't worth it, and the idea of using upscaling to compensate doesn't sound appealing either. When it gets to the point of good performance without needing any upscaling, it will be way more useful.
 
From recent examples, the new TW3 is kind of weirdly CPU limited and doesn't even saturate the GPU for me in DX12 mode (in Wine + vkd3d-proton), so I wonder if it will benefit from this kind of CPU.
Not very much, if at all... It's CPU bound for all the wrong reasons, mainly the game engine not being properly programmed for DX12... You might gain 1 or 2 FPS, but it's still going to stutter and have other problems, especially with RT enabled.
 
Not very much, if at all... It's CPU bound for all the wrong reasons, mainly the game engine not being properly programmed for DX12... You might gain 1 or 2 FPS, but it's still going to stutter and have other problems, especially with RT enabled.
Interestingly it doesn't even max out CPU cores for me. Really weird. It's like it's sitting doing nothing some of the time.
 
Yeah, ofc the second I upgrade they announce something new... just my luck
That's the way it is. I knew that perfectly well when I bought my 5800X3D on BF, but the deal was so sweet I wasn't gonna let it pass.

I'm planning on keeping it at least until the next desktop processors in the 3D V-cache series are launched. (9800X3D or whatever)

My guess is that these CPUs will be very expensive anyway and imo AM5 isn't really worth upgrading to atm. Let the platform mature a bit more.
 
With their next CPU iteration they might get to a 16-core one with fully stacked 3D V-cache instead of this half/half solution.
Yeah, they might; I'm a bit doubtful though. Read up some on the 7950X3D, and I'm not sure it's going to be that awesome tbh. If only half the CPU's cores have the V-cache, it's probably going to limit how much gain you can get, in games especially. There might also be big problems with CPU scheduling and so on. I get why they had to do it like this, but now I'm not that sad about missing out on it tbh :D It's probably going to be a bumpy ride for a while...
 
Yep, that was my question exactly. Scheduling will be all over the place. I highly doubt there will be schedulers any time soon that can use this kind of hybrid CPU efficiently. So the question is still what would be better: that, or the regular 7950X with its symmetric design :) I'll wait for some extensive benchmarks.
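Until the schedulers catch up, the crude manual workaround is pinning the game to the V-cache CCD yourself. A minimal Linux sketch, assuming (hypothetically) that the V-cache chiplet maps to logical cores 0-7; the real layout would need checking per system:

```python
import os

# Hypothetical layout: the V-cache CCD exposes logical cores 0-7.
VCACHE_CORES = set(range(8))

def pin_to_vcache_ccd(pid: int = 0) -> set:
    """Restrict a process (0 means the current one) to the V-cache cores."""
    available = os.sched_getaffinity(0)               # cores the OS lets us use
    target = (VCACHE_CORES & available) or available  # fall back to all cores
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

print(sorted(pin_to_vcache_ccd()))
```

On Windows the equivalent would be Process Lasso or `start /affinity`. Either way it's a band-aid; the whole point of buying a CPU like this is not having to babysit core assignments per game.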
 
Yep, that was my question exactly. Scheduling will be all over the place. I highly doubt there will be schedulers any time soon that can use this kind of hybrid CPU efficiently. So the question is still what would be better: that, or the regular 7950X with its symmetric design :) I'll wait for some extensive benchmarks.
Temps are probably the biggest hurdle with the V-cache tbh. It's why the 7800X3D has lower clocks and why they did this with the 7950X3D. I'm not sure they'll be able to solve this issue since the cache is basically sitting on top of the CPU cores. Will be interesting to see how it goes though, and next generation it might be better too.
 