Building a gaming PC

Hmm, I tested undervolting slightly; it seems to drop the score in Time Spy even though the clock is pretty much the same. Power draw is down a bit though, so I'm guessing it's like AMD CPUs: you achieve higher clocks but lose points at some voltages... Really odd, tbh.
 
I'm not sure why he is testing the reference model for anything. It's not what I'd ever use in general.

Let him publish such research on some custom model with a decent design.

But that's an interesting point about the position of the card potentially affecting the vapor chamber's performance. Kind of makes sense, but I've never heard about such issues before.
Post automatically merged:

Not sure about the current situation, but in the past, AMD released a limited number of reference models, and the rest of production just went to custom models until the next generation of GPUs. So overall I'd say this isn't a major issue, except for those who don't know to avoid reference models in general.
 
I just find it funny that they keep throwing jabs at their rival while also having major issues in their own design. It's not a good look for the company/brand and makes them look immature. Just focus on your own products ffs.
 
Yeah, that's just pointless marketing. In general, I don't get why they need to make reference models at all. Models with higher-quality cooling designs are just so much preferable.
 
I'm not sure why he is testing the reference model for anything. It's not what I'd ever use in general.

Let him publish such research on some custom model with a decent design.
But it's not a problem on the non-reference models, that's the whole thing. It's AMD's design that sucks, not the partners'... Also, in some places the custom cards have been non-existent, so people pretty much had to buy reference cards -.- Nvidia's reference models have gotten pretty good, so hopefully AMD can do the same in the coming years by learning from this kind of research. It's nice to know why an issue happens instead of people speculating, just as with the 12VHPWR melting :D

It's not good if a big part of the cards suffer from a design flaw that gimps the card; it's going to be hard to fix, and it will not reflect well on AMD. Especially when someone from AMD said it was normal -.- Nvidia fucked up the 3090 memory last time; they learned not to place the very hot GDDR6X on the back without any active cooling and with often-bad thermal pads. No issue now on the 4090 (since it's 2 GB modules now, and on the front).
 
It's surprising that these kinds of issues aren't discovered in testing. How extensively are they testing their designs?
Yeah, it's very odd. It's also kind of odd that the partners who made reference cards didn't discover the problem pre-sale... They should know more than AMD about cooler designs and so on.
 
But it's not a problem on the non-reference models, that's the whole thing. It's AMD's design that sucks, not the partners'... Also, in some places the custom cards have been non-existent, so people pretty much had to buy reference cards -.- Nvidia's reference models have gotten pretty good, so hopefully AMD can do the same in the coming years by learning from this kind of research. It's nice to know why an issue happens instead of people speculating, just as with the 12VHPWR melting :D

It's not good if a big part of the cards suffer from a design flaw that gimps the card; it's going to be hard to fix, and it will not reflect well on AMD. Especially when someone from AMD said it was normal -.- Nvidia fucked up the 3090 memory last time; they learned not to place the very hot GDDR6X on the back without any active cooling and with often-bad thermal pads. No issue now on the 4090 (since it's 2 GB modules now, and on the front).
It's a pity AMD can't get its act together in the GPU department.
We suffer from a lack of good competition like there is in the CPU market.
Nvidia has increased its GPU prices three times in a row.
In 2014 a GTX 980 had an MSRP of $550.
Now an RTX 4080 has an MSRP of $1,200.
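
Just to put a number on that jump, here's the raw math (MSRPs as quoted above, not adjusted for inflation):

```python
# Quick check of the MSRP jump quoted above (GTX 980 in 2014 vs RTX 4080 in 2022).
gtx_980_msrp = 550     # USD
rtx_4080_msrp = 1200   # USD
increase = rtx_4080_msrp / gtx_980_msrp - 1
print(f"MSRP is {increase:.0%} higher")  # roughly +118%
```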

I really would like to upgrade my GPU. My 6-year-old graphics card with 3 GB of VRAM is inadequate in 2022.
But there are no budget video cards anymore. An RTX 3050 is $350 and an RTX 3060 is $450 in my country. :sad:

Together with hardware prices getting ridiculous, the quality of games is getting lower.

This doesn't bode well for the future of gaming.
These big hardware and game companies seem to care less and less about their customers.

😔
 
It's a pity AMD can't get its act together in the GPU department.

From what I see, their new GPUs are pretty competitive. They didn't overtake Nvidia in ray tracing, but they are catching up, and in everything else for gaming I don't see Nvidia having any advantage now, especially with their pricing.

So it's not like there is no competition.

For Linux gamers it's even better - AMD usage is continuously growing and Nvidia usage is dropping according to user stats on GOL, for example, and this month was the breakthrough moment.
Post automatically merged:

Yeah, of course the second I upgrade they announce something new... just my luck.
Well, I wasn't in a rush to upgrade since they said the 3D V-Cache models are coming later this year, and that 16-core one looks really good:

https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d
 
Yeah, of course the second I upgrade they announce something new... just my luck.
Well that's just the normal progression of things.

Regarding the Zen4 X3D lineup: it's pretty wild. The 7800X3D is a straightforward successor to the 5800X3D, but the 7900X3D and 7950X3D are some Frankenstein stuff. One core chiplet with added v-cache, one without, but with ~15% higher max clock... scheduling will be wild, and I mean WILD.
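
For anyone curious, once these ship you should be able to see the asymmetry straight from sysfs. A rough sketch of how I'd poke at it (the paths are standard Linux sysfs, but treating cache/index3 as the L3 and the 16-core, no-SMT loop are assumptions; verify on the real chip):

```python
# Rough sketch: read per-core max clock and L3 size from sysfs to see the split.
# Assumptions: standard Linux sysfs paths, cache/index3 being the L3, and a
# 16-core part with SMT siblings ignored; verify on the actual chip.
from pathlib import Path

def core_info(cpu: int) -> tuple[str, float]:
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}")
    l3_size = (base / "cache/index3/size").read_text().strip()            # e.g. "32768K"
    max_ghz = int((base / "cpufreq/cpuinfo_max_freq").read_text()) / 1e6  # kHz -> GHz
    return l3_size, max_ghz

for cpu in range(16):
    l3_size, max_ghz = core_info(cpu)
    print(f"cpu{cpu}: L3={l3_size}, max clock={max_ghz:.2f} GHz")
```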
 
Well that's just the normal progression of things.

Regarding the Zen4 X3D lineup: it's pretty wild. The 7800X3D is a straightforward successor to the 5800X3D, but the 7900X3D and 7950X3D are some Frankenstein stuff. One core chiplet with added v-cache, one without, but with ~15% higher max clock... scheduling will be wild, and I mean WILD.

Does it mean one core block will have lower clocks but more cache and another will have higher clocks but less cache? That's really an unusual approach. I hope Linux CPU schedulers will implement some strategies for using that right.
 
Does it mean one core block will have lower clocks but more cache and another will have higher clocks but less cache? That's really an unusual approach. I hope Linux CPU schedulers will implement some strategies for using that right.
Exactly this. Thread scheduling will have to *know* which thread to put on which CCD, or even where to put it at which *precise* time, for this to yield the best possible performance.
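
Until schedulers actually know about the split, the stopgap is pinning things by hand. A minimal sketch on Linux with os.sched_setaffinity (which logical CPUs belong to which CCD is an assumption here, and SMT siblings are ignored; check your own topology first):

```python
# Minimal sketch of pinning a process to one CCD by hand on Linux.
# Assumption (not guaranteed): logical CPUs 0-7 belong to the V-cache CCD and
# 8-15 to the higher-clocked CCD; SMT siblings are ignored for brevity.
# Check the real layout with lscpu/lstopo before relying on this.
import os

VCACHE_CCD = set(range(0, 8))   # assumed cores backed by the extra L3
FREQ_CCD = set(range(8, 16))    # assumed higher-clocked cores

def pin_to_ccd(pid: int, cache_heavy: bool) -> None:
    """Keep a process on a single CCD until schedulers understand the split."""
    os.sched_setaffinity(pid, VCACHE_CCD if cache_heavy else FREQ_CCD)

# Example: pin the current process (pid 0 = self); games tend to be cache-heavy.
pin_to_ccd(0, cache_heavy=True)
```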
 
Intel, in a sense, has a similar issue since they mix low- and high-performance cores. I'd look into what work was done in Linux schedulers for these new AMD chips.
Post automatically merged:

From what I gather, no such work was done. I doubt it was even done on Windows.

And if you think about it - how would the scheduler even know which core complex to use, the one with more cache or the one with higher clocks? There is no easy way to figure that out for each thread.

It feels like an AI-grade problem, where an AI analyzes thread behavior and predicts which CCD to use based on that (a crude sketch of a non-AI version is below).

Which brings up another question: assuming you don't have such a scheduler available, which CPU would be better for gaming, the 7950X or the 7950X3D?

I'm primarily interested in the Linux context, but I think a similar question would apply to Windows.
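
To be clear, a dumb non-AI version of that idea is at least imaginable: sample how cache-bound a process looks with perf and pin it to the matching CCD. A hedged sketch; the event names, the threshold, and the assumed core ranges are purely illustrative, not anything the kernel or AMD actually ships:

```python
# Crude heuristic sketch: sample a PID's last-level-cache miss rate with perf
# and park it on the (assumed) V-cache CCD if it looks cache-bound.
import os
import re
import subprocess

def llc_misses_per_kilo_instructions(pid: int, seconds: int = 1) -> float:
    # perf stat prints its counters to stderr
    out = subprocess.run(
        ["perf", "stat", "-e", "LLC-load-misses,instructions",
         "-p", str(pid), "--", "sleep", str(seconds)],
        capture_output=True, text=True,
    ).stderr
    counts = {}
    for event in ("LLC-load-misses", "instructions"):
        match = re.search(r"([\d,]+)\s+" + event, out)
        counts[event] = int(match.group(1).replace(",", "")) if match else 0
    return 1000.0 * counts["LLC-load-misses"] / max(counts["instructions"], 1)

def place_on_ccd(pid: int, threshold: float = 5.0) -> None:
    cache_bound = llc_misses_per_kilo_instructions(pid) > threshold
    cores = range(0, 8) if cache_bound else range(8, 16)  # assumed CCD layout
    os.sched_setaffinity(pid, set(cores))
```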
 
Well, assuming scheduling were completely random, the 7950X3D should probably still have a slight advantage - one CCD will yield the same performance as those on the 7950X. The other will do somewhat worse in some tasks and up to (a completely theoretical upper limit) ~200% faster if everything were completely tied to cache size. The average benefit comparing two single CCDs with/without the extra cache should again be around 15%, as seen for the 5800X3D vs. the 5800X. The fun part is when tasks can jump freely between the two different CCDs or need more than 8 cores; that's where performance could become inconsistent without a scheduler that's aware of the differences.
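
Putting that into a quick expected-value form with the same ballpark numbers (pure back-of-the-envelope, not a measurement):

```python
# Back-of-the-envelope expected value for purely random CCD placement,
# using the ballpark numbers from this thread (assumptions, not measurements).
p_cache_ccd = 0.5          # 50/50 chance a thread lands on the V-cache CCD
cache_ccd_speedup = 1.15   # ~15% average benefit from the extra L3
plain_ccd_speedup = 1.00   # same as a regular 7950X CCD

expected = p_cache_ccd * cache_ccd_speedup + (1 - p_cache_ccd) * plain_ccd_speedup
print(f"expected speedup vs a plain 7950X: {expected:.3f}x")  # ~1.075x
```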
 
I think if AMD really wanted to make a dent in the GPU market they would have priced the 7900 XT and XTX $100 less each. That would have gotten me to seriously consider both of them. Instead I stumbled upon an MSRP RTX 4080 and snatched it up, because for the extra $200 I think it's worth it.
 
More like, the 7900 XT should be priced lower. It's just priced way too close to the 7900 XTX to consider it. But overall it's still cheaper than comparable Nvidia cards, so they have an edge there.

I think many reviewers pointed out that's the reason the 7900 XT isn't selling as well as the 7900 XTX.
 