Building a gaming PC

Posted a video: 4090, 4K, everything maxed (RT Psycho), and around 125-135 FPS with DLSS (on Performance, I think) and frame generation enabled :)
(So it's just a guess, but with a custom 4090, it could be even slightly more.)
It will depend on the area. I checked out the video again and it's generally at 70-85, dropping to 67 in Afterlife. But that's at Quality, so 1440p upscaled to 4K. Dropping it to Performance would do 1080p to 4K, which I'd guess would hit the cap at times, with drops in some spots. I would probably take the quality over the extra few FPS though; always staying over 60 would be fine for me.
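For context on what those modes mean: DLSS renders internally at a fraction of the output resolution and upscales. A quick sketch of the arithmetic, using the commonly cited per-axis scale factors (treat the exact factors as assumptions, not official figures):

# Rough sketch: DLSS internal render resolution per mode, assuming the
# commonly cited per-axis scale factors (Quality ~0.667, Balanced ~0.58,
# Performance 0.5). Illustrative only.
DLSS_SCALE = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def render_resolution(out_w, out_h, mode):
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in DLSS_SCALE:
    print(mode, render_resolution(3840, 2160, mode))
# quality     -> (2560, 1440)
# balanced    -> (2227, 1253)
# performance -> (1920, 1080)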
 
Depends on what you mean by high refresh? 120 Hz at 4K might be doable with a 4090, hitting the cap at times. Saw some dude testing out his setup and he got like 70-80 FPS in Afterlife with everything maxed at 4K and DLSS on Quality. DLSS 3 will increase that, and you could easily drop to Balanced without too much loss in quality. The new 240 Hz ones... no, just not going to happen. Laughed when I saw people say DP 2.1 on the new AMD cards is a gamechanger. Like, why? You're not getting those FPS anyway. Screens always seem to be a bit ahead of what can be achieved at 4K.

If I'm getting a high-end card, I'm not going to use upscaling :) That's why I'm not in a rush for 4K yet. Once 2560x1440 without upscaling starts maxing out my 180 Hz display, I'll think about 4K / 144+ Hz or so.

It's just very counterintuitive for me to pay for such cards and then rush to a high resolution that requires upscaling because they can't handle it natively at a sufficient framerate.

I.e. I think native image quality + lower resolution > upscaled image + higher resolution, even if the resulting framerate is the same.
 
Posted a video: 4090, 4K, everything maxed (RT Psycho), and around 125-135 FPS with DLSS (on Performance, I think) and frame generation enabled :)
(So it's just a guess, but with a custom 4090, it could be even slightly more.)
More than slightly, for sure. Something as minor as increasing the max power draw by 10%, without even touching clock speeds, can yield a notable uplift in a lot of titles. The 4090 is the reborn 1080 Ti enthusiasts have been waiting for. Needless to say, I was pretty salty this morning that DLSS 3 didn't ship with the update.
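If anyone wants to try that on their own card, here's a minimal sketch using the pynvml bindings (the nvidia-ml-py package); it assumes a single GPU at index 0, needs admin rights, and nvidia-smi can do the same thing, so take it as illustration rather than a recipe:

# Illustrative only: raise the GPU power limit by ~10%, clamped to the
# board's allowed maximum. NVML reports values in milliwatts.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the card is GPU 0

current = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)
_, max_allowed = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)

target = min(int(current * 1.10), max_allowed)
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target)  # needs root/admin
print(f"Power limit: {current / 1000:.0f} W -> {target / 1000:.0f} W")

pynvml.nvmlShutdown()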

The new 240 Hz ones... no, just not going to happen. Laughed when I saw people say DP 2.1 on the new AMD cards is a gamechanger. Like, why? You're not getting those FPS anyway. Screens always seem to be a bit ahead of what can be achieved at 4K.
I did as well. Watched a man have several meltdowns as AMD didn't even meet rumored expectations. He's gonna eat that hat, one way or another.
 
Yeah, DP 2.1 is nice but a gamechanger? Not really. It's more like we have more recent hardware because we can. Which is nice, but not a critical feature.

I'd imagine multi-display setups can benefit from it more than 8K and such.
 
If I'm getting a high-end card, I'm not going to use upscaling :) That's why I'm not in a rush for 4K yet. Once 2560x1440 without upscaling starts maxing out my 180 Hz display, I'll think about 4K / 144+ Hz or so.

It's just very counterintuitive for me to pay for such cards and then rush to a high resolution that requires upscaling because they can't handle it natively at a sufficient framerate.

I.e. I think native image quality + lower resolution > upscaled image + higher resolution, even if the resulting framerate is the same.
Hmm, native 4K with full settings in CP? Yeah, not going to happen anytime soon. Tbh, upscaling through DLSS isn't bad, it's even better than native on some things. Lower resolution really doesn't win there, for me at least. I just can't go back now; 4K looks so much better than even 1440p :S
I did as well. Watched a man have several meltdowns as AMD didn't even meet rumored expectations. He's gonna eat that hat, one way or another.
I'm still getting recommended vids saying "AMD wins this generation" and so on. Yeah, they are probably going to sell more 7900s than Nvidia can sell 4090s since it's cheaper. Some dude said 450 W was too much on the Nvidia but later was excited about the 450 W AMD cards -.- Starting to sound like the console wars v2.0. If I was going just for gaming, I would probably buy a 7900 XTX or a 4080 Ti whenever that comes. Sadly, I'm kinda stuck with the top-end stuff -.-
 
Yeah, DP 2.1 is nice but a gamechanger? Not really. It's more like we have more recent hardware because we can. Which is nice, but not a critical feature.

I'd imagine multi-display setups can benefit from it more than 8K and such.
Even though I picked up a dummy thicc green card, I would like to see something special from AMD again. My first and second builds had Radeons, not for the price, mind you; at the time they weren't cheap, they were the best.
 
AMD's ISA is arguably better than Nvidia's, but that's not always visible to the end user. I think it was way more pronounced with Nvidia's 1000 series, which works poorly with vkd3d-proton because of that.

But overall, yeah, AMD is often still playing catch-up, except maybe being ahead with chiplets.
 
AMD's ISA is arguably better than Nvidia's, but that's not always visible to the end user. I think it was way more pronounced with Nvidia's 1000 series, which works poorly with vkd3d-proton because of that.

But overall, yeah, AMD is often still playing catch-up, except maybe being ahead with chiplets.
Chiplets do absolutely nothing to increase performance and likely decrease it slightly. What they do really, really well is lower production costs, because you don't waste so much silicon. I'm expecting the 7900 XTX, or even the 7900 XT, to have as-good or slightly better rasterization performance than a 4080, but at a considerably lower price. Power efficiency looks to be a wash, as both the 4080 and 7900 XTX come in at 320 watts, but if the 7900 XTX is 10% faster then it would in turn be about 10% more efficient.
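To spell out that last sentence (hypothetical numbers only, since independent benchmarks aren't out yet):

# Hypothetical numbers: efficiency is just performance per watt, so at equal
# board power, 10% more performance means 10% better efficiency.
cards = {
    "RTX 4080":    {"relative_perf": 1.00, "board_power_w": 320},
    "RX 7900 XTX": {"relative_perf": 1.10, "board_power_w": 320},  # assumed +10%
}

for c in cards.values():
    c["perf_per_watt"] = c["relative_perf"] / c["board_power_w"]

ratio = cards["RX 7900 XTX"]["perf_per_watt"] / cards["RTX 4080"]["perf_per_watt"]
print(f"Efficiency ratio: {ratio:.2f}x")  # -> 1.10x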
 
Chiplets will allow better efficiency because it's a more flexible design. Different components can be controlled differently. The main problem with chiplets is the interconnect, which is more complicated than in a monolithic die. But I think AMD ironed that out with CPUs already, so they are way ahead of Nvidia in this.

I'm pretty sure Nvidia will use chiplets soon enough too.
 
Chiplets will allow better efficiency because it's a more flexible design. Different components can be controlled differently. The main problem with chiplets is the interconnect, which is more complicated than in a monolithic die. But I think AMD ironed that out with CPUs already, so they are way ahead of Nvidia in this.

I'm pretty sure Nvidia will use chiplets soon enough too.
Isn't the chiplet part pretty much just cache so far? It will be interesting to see where they take it, but I'm not that impressed with it so far. It will be a lot cheaper though; 4090 yields must be absolute shit compared to the 7900. It's a smarter way than the old way, for sure. The chiplets are even 6 nm, so they save even more, hence the 999 dollars (I'm interested to see those prices over here too; I'm guessing at least 1600).
 
Isn't the chiplet part pretty much just cache so far? It will be interesting to see where they take it, but I'm not that impressed with it so far. It will be a lot cheaper though; 4090 yields must be absolute shit compared to the 7900. It's a smarter way than the old way, for sure. The chiplets are even 6 nm, so they save even more, hence the 999 dollars (I'm interested to see those prices over here too; I'm guessing at least 1600).

I think it goes beyond simply saving money through better yields. But that one is a major benefit, for sure.
 
Chiplets will allow better efficiency because it's a more flexible design. Different components can be controlled differently. The main problem with chiplets is the interconnect, which is more complicated than in a monolithic die. But I think AMD ironed that out with CPUs already, so they are way ahead of Nvidia in this.

I'm pretty sure Nvidia will use chiplets soon enough too.

Actually, that's not true; the flexibility is all in keeping costs down by getting much better yields per piece of expensive silicon wafer. Anytime you divide things up and then have to use a separate interface to tie everything together, it's going to slow things down compared to a single piece of silicon, for a number of reasons; mostly, the interconnects make the paths between sections longer and thus less efficient.

At one time your L1 and L2 caches for a CPU were on a separate piece of silicon, but eventually they were integrated onto the same die as the rest of the CPU. Ditto for the FPU (floating-point unit): originally it was a separate coprocessor chip, and one of the big breakthroughs of the 486DX and then the Pentium line was having the L1 cache as well as the FPU on a single piece of silicon, making them faster than everything else out there at the time, even at the same clock speeds. Eventually the L2 cache was also integrated into the same die, and another layer, L3 cache, was added to it as well.

Intel is able to control things differently on a single piece of silicon, as the 12th and now 13th gen CPUs prove with their performance cores and efficiency cores running at completely different clock speeds and voltages. They also have an advantage with the memory controller built into the same die, which means they can clock SDRAM faster. For instance, Zen 3 effectively topped out at DDR4-3600 and Zen 4 tops out at DDR5-6000 due to the limitations of using a separate layer (Infinity Fabric), while 13th gen Intel has clocked memory as high as 8000 for practical usage and 10,000 for competitive overclocking, according to the latest information coming from G.Skill. This is because on AMD you are limited by the speed of the Infinity Fabric interconnect layer, while Intel is not.

Chiplets will have slightly degraded performance in every application; the key is to keep it in the 1-2% range, where you essentially don't notice the difference. The difference you will no doubt notice is the lower cost for about the same performance. But what would I know about things like that; I've only been an Electronics Engineer for 29+ years, and before that I worked as a component tester for Gateway Computers back in the Wild West days of the IBM compatibles in the late '80s.
 
Efficiency isn't about slowing down or not, it's about using less energy for the same function. Somehow I don't see these monolithic options starting to save energy. Only the opposite.
 
Actually, that's not true; the flexibility is all in keeping costs down by getting much better yields per piece of expensive silicon wafer.
Yeah, this is pretty much my understanding of chiplet design too. It's easier to get good yields and it's cheaper, but it has some inherent "problems" too. It was the same with the Ryzen CPUs; it's hardly noticeable now, but you can't run memory at max speeds, since at some speeds it will interfere with the Infinity Fabric clock and not give any benefit. DDR5-6000 is the max on the new Ryzen for a 1:1 clock.

Saw that, without the chiplets, the core in the 7900 is about half as big as the 4090's. That's around double the dies per wafer, which is a lot cheaper and probably gives better high-quality yields. Then the chiplets are on a cheaper 6 nm process on top of that. We shall see how it works IRL, but I'm guessing there's not much difference compared to a monolithic die; AMD has been doing this for some time, so they should know what they're doing.. There might be problems in some games, I'm guessing, since they're not exactly built with this in mind, but I think the CPUs had some issues early on too.
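Rough back-of-the-envelope on why smaller dies yield better (made-up defect density and die areas, just to show the shape of the math, not TSMC figures):

# Illustrative only: classic Poisson yield model with assumed numbers.
# yield = exp(-defect_density * die_area); smaller dies hit fewer defects.
import math

defect_density = 0.1   # defects per cm^2 (assumed)

def die_yield(area_cm2):
    return math.exp(-defect_density * area_cm2)

big_die = 6.0    # ~600 mm^2 class monolithic die (assumed)
small_die = 3.0  # ~300 mm^2 class graphics die (assumed)

print(f"Large die yield: {die_yield(big_die):.0%}")   # ~55%
print(f"Small die yield: {die_yield(small_die):.0%}") # ~74%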
 
I can confirm that RT works perfectly fine at 1080p with DLSS on at Quality. I've been using that setup for the gameplay. The difference with RT on is extremely noticeable compared to rasterized lighting, especially in the gradients of lighting and the reflections. There's no artifacting or anything due to the lower resolution.

I'm not sure what exactly others may be seeing, but if they're encountering a problem with RT and lower resolutions, I don't think it's a limitation of the technique. Probably more like a driver / DLSS issue.

Saw that, without the chiplets, the core in the 7900 is about half as big as the 4090's. That's around double the dies per wafer, which is a lot cheaper and probably gives better high-quality yields. Then the chiplets are on a cheaper 6 nm process on top of that. We shall see how it works IRL, but I'm guessing there's not much difference compared to a monolithic die; AMD has been doing this for some time, so they should know what they're doing..
Yeah, just looking at the power requirements and general setup of the AMD cards' design (though, grain of salt, since the hands-on is not really there yet) I find that to be much more sensible. Nvidia's design seems to be focused on pressuring high-end users to basically build whole new systems around this monstrosity of a card, then have to deal with the insane power draw for...what exactly?

I have to get into diminishing returns here, now that I've seen the 4090 in action. Given the tradeoffs required to actually install and use one of these things...it's pointless. The present and near-future market is not going to start making games that run in 4K resolutions natively and demand 12-16 GB VRAM just because Nvidia released a briefcase-sized GPU with a 12-pin power connector. Conversely, AMD's design looks much more sensible and market-practical, not to mention gaming-practical, for what's likely to appear in the next 3-5 years or so.

And once again:
By the time those 3+ years have passed, plenty of cheaper and more reasonable designs will have been delivered, the tech will have been improved, and games that can actually take advantage of these cards will have been made. It's always possible that Nvidia created another "GTX 1080", and this card will prove incredibly powerful long into the future...but I doubt it. To me, it looks like Nvidia is trying to further capitalize on the insane market gouging that began during the pandemic, furthering the illusion that the only way to get "high-end" gaming is to pay out the toofus for ridiculous hardware which will afford you basically the same experience in practice as a card worth literally half the price.

While gaming, the difference between 144 FPS delivered at 1440p and 450 W vs. 144 FPS at 4K and 600 W is going to be negligible to the eye and the brain. It's only going to have a potentially big impact on the ego.
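For scale, most of that extra power goes toward pushing 2.25x the pixels per frame; the arithmetic, for what it's worth:

# Simple arithmetic: pixels per frame at 1440p vs. 4K.
resolutions = {"1440p": (2560, 1440), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    print(f"{name}: {count:,} pixels per frame")
print(f"4K / 1440p = {pixels['4K'] / pixels['1440p']:.2f}x")  # 2.25x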
 
I have to get into diminishing returns here, now that I've seen the 4090 in action. Given the tradeoffs required to actually install and use one of these things...it's pointless.

Not just that, but a card with lower power draw will probably be close enough, regardless of whether some titles can saturate it to 100%. So it looks more like a marketing idea, not a practical one.

Something similar can be observed for CPUs. AMD's 170 W TDP is just pushing it beyond the optimal balance. But it's a good thing they provide "eco modes", i.e. a way to run the CPU at lower power. I'm going to use 105 W when I get one of the Zen 4 models.

There was a recent video about it:

 
While gaming, the difference between 144 FPS delivered at 1440p and 450 W vs. 144 FPS at 4K and 600 W is going to be negligible to the eye and the brain. It's only going to have a potentially big impact on the ego.
Ehh, gonna have to disagree with that. 4K looks way better than 1440p imo, even at only like 100 FPS. Sadly it's something you just can't show on YouTube or anything; it has to be seen IRL. Depends on a lot of factors too, ofc, like a good screen and so on.
The present and near-future market is not going to start making games that run in 4K resolutions natively and demand 12-16 GB VRAM just because Nvidia released a briefcase-sized GPU with a 12-pin power connector. Conversely, AMD's design looks much more sensible and market-practical, not to mention gaming-practical, for what's likely to appear in the next 3-5 years or so.
Yeah, I kinda agree, at least. The 7900 XTX isn't targeting the 4090 though; it's a 4080 competitor with around the same power draw, 320-350 W I think (reference models; the AIBs seem to go for 450 W on the 7950 XTX for more FPS, ofc). It releases next week, so no clue yet about which is faster; AMD is a bit cheaper, I'm guessing. Most people definitely don't need a 4090, I fully agree. If you wanna game at 4K with maxed settings and perhaps do some rendering/AI, it's the better choice. At least until the 4090 Ti comes out (supposed to have even more memory and a fully unlocked core).
Something similar can be observed for CPUs. AMD's 170 W TDP is just pushing it beyond the optimal balance. But it's a good thing they provide "eco modes", i.e. a way to run the CPU at lower power. I'm going to use 105 W when I get one of the Zen 4 models.
Well, the 170 W is really about 230 W of actual package power. Both the 7950X and the 13900K end up with pretty much the same power draw, ~230 W vs ~250 W. TDP means something completely different nowadays -.- another marketing gimmick. The eco mode is still pretty decent in the 105 W TDP mode though; diminishing returns have hit both GPUs and CPUs hard this generation, it seems. I've also seen that Intel has these kinds of limits, a 125 W mode and a 65 W mode; it's a bit worse in workloads than the 7950X there though.
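Where that ~230 W comes from: AMD's socket power limit (PPT) is commonly quoted as roughly 1.35x the advertised TDP. A rule-of-thumb sketch (the multiplier is the commonly cited figure, not something guaranteed for every part):

# Rule-of-thumb sketch: AMD package power limit (PPT) vs. advertised TDP,
# assuming the commonly quoted ~1.35x multiplier.
def ppt_from_tdp(tdp_w, multiplier=1.35):
    return tdp_w * multiplier

for tdp in (65, 105, 170):
    print(f"TDP {tdp:>3} W -> PPT ~{ppt_from_tdp(tdp):.0f} W")
# TDP  65 W -> PPT ~88 W
# TDP 105 W -> PPT ~142 W
# TDP 170 W -> PPT ~230 W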
 
Efficiency isn't about slowing down or not, it's about using less energy for the same function. Somehow I don't see these monolithic options starting to save energy. Only the opposite.
That would only apply to power efficiency. There is more than one kind of efficiency metric, for instance how long it takes a signal to get from point A to point B. The more interconnects and the longer the path, the more time it takes. Even things as subtle as making a PCB trace an arced corner rather than a 90-degree corner can make a difference.

Then, anytime you have parallel paths, such as memory, all the traces have to be the same effective length (but not the same exact length, due to things like capacitance and inductance). Look at the traces going between the CPU socket and the SDRAM slots and you will see some of the lines are squiggly, to increase their length so they match the longer traces. The squiggly lines also add some inductance, which slows the signal, and that also has to be taken into account so all the data on all the parallel lines arrives at the exact same time, or you'll have memory errors.

This same basic truth applies to the Infinity Fabric and ultimately makes it slower and less efficient. In my memory speed example, Intel memory can clock faster in part because the paths are shorter and more direct and only need 2 clocks rather than the 3 clocks Infinity Fabric needs, with your ultimate speed being set by the Infinity Fabric clock, which is typically the slowest of the 3.
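To put rough numbers on the trace-matching point (the propagation speed is an assumed typical FR-4 value; this is illustrative, not a layout guide):

# Illustrative only: signal skew from mismatched trace lengths, assuming a
# propagation velocity of roughly 15 cm/ns (about half the speed of light,
# a typical figure for FR-4 boards).
PROPAGATION_CM_PER_NS = 15.0  # assumed

def skew_ps(length_mismatch_mm):
    return length_mismatch_mm / 10 / PROPAGATION_CM_PER_NS * 1000  # picoseconds

bit_period_ps = 1e12 / 6.0e9  # one bit time at DDR5-6000 (~167 ps)
for mismatch_mm in (1, 5, 10):
    print(f"{mismatch_mm:>2} mm mismatch -> ~{skew_ps(mismatch_mm):.0f} ps "
          f"({skew_ps(mismatch_mm) / bit_period_ps:.0%} of a DDR5-6000 bit time)")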
 
Sorry in advance for my "unadvised" opinion, but I don't get the comparison between the Nvidia 4090 and the RX 7900 :)
For me, both clearly have a different goal and aim at a different market. With the 4090, Nvidia showcases what they're capable of, "no matter" the price, affordable only for those who can pay it. Unlike the RX 7900 cards, which are designed to aim at a larger public.
 
Well, the 170 W is really about 230 W of actual package power. Both the 7950X and the 13900K end up with pretty much the same power draw, ~230 W vs ~250 W. TDP means something completely different nowadays -.- another marketing gimmick. The eco mode is still pretty decent in the 105 W TDP mode though; diminishing returns have hit both GPUs and CPUs hard this generation, it seems. I've also seen that Intel has these kinds of limits, a 125 W mode and a 65 W mode; it's a bit worse in workloads than the 7950X there though.

What has happened for both CPUs and GPUs is that we have hit the efficiency wall, because we have effectively shrunk the node size down about as far as it can go due to basic physics. For the past 10 years, most of the power efficiency gains have come from shrinking the node size, with minor gains from architecture, and compute gains have come from being able to pack more transistors into the same sized space. So now the only way to gain more compute power is to add more electrical power, because you can't effectively just add more transistors when you can't make them any smaller. The so-called 3 nm and 4 nm nodes are really just slightly tweaked 5 nm that can pack a single-digit percentage more transistors.

These basic facts are why many top engineers are declaring that Moore's Law is dead, at least for traditional silicon-based devices. The only likely path forward is quantum computing, but we are likely to see a decade of stagnation before it becomes practical.
 