Building a gaming PC


DLSS 2.0 apparently delivers, big time.



Pretty shocked by this, considering how awful the original implementation looked. You'll get at least 20-25% more performance in games that support DLSS 2.0, with no hit to image quality.
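Back-of-envelope, here's what a 20-25% fps uplift looks like in frame times (illustrative numbers, not benchmarks):

```python
# Illustrative frame-time math for a 20-25% fps uplift (hypothetical numbers,
# not measurements): fps scales up, per-frame time scales down by the same factor.
def with_uplift(base_fps: float, uplift: float) -> tuple[float, float]:
    """Return (new_fps, new_frame_time_ms) for a fractional fps uplift."""
    new_fps = base_fps * (1.0 + uplift)
    return new_fps, 1000.0 / new_fps

print(with_uplift(60.0, 0.20))  # 60 fps + 20% -> 72 fps, ~13.9 ms per frame
print(with_uplift(60.0, 0.25))  # 60 fps + 25% -> 75 fps, ~13.3 ms per frame
```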
 
Yeah, an Xbox 360 one. But like 4RM3D, I also prefer using a mouse (and always do these days).

So, it's possible at least..?

I have tried it with my Xbox controller. It works, I guess, but a mouse is still the preferred way to go.

Ahh, actually, it's not the same interface handling, so it wouldn't work the same way. I guess it would just be inconvenient, unless a controller-support version were released on PC that works the way the console GUI does today.
 
It's ironic that Windows users have a lot more problems with AMD these days than Linux users do. On Linux, Navi cards have become very good. The only trend here is people switching away from Nvidia.
The other people I know are not so positive -

The AMD Linux drivers are a total mess. Especially their closed-source proprietary counterpart. At best you'll get artifacts (black screen'n'shit) and run into problems similar to what Windows 10 users suffer from, and at worst your carefully tuned Linux build will cease to launch altogether. And it will be the AMD driver's fault. Not to mention the functionality castration compared to the original Winblows driver - no HW acceleration in programs, updates come slower than for the Windows counterpart, and support is limited to the few distributions AMD programmers themselves prefer (good luck installing this crap on Fedora). The open-source driver is even more castrated - no OpenCL support (even though it's claimed to be supported, a marketing ploy), only partial Vulkan and OGL support, and updates and new features come even slower (FreeSync was added only recently). Seems like people don't actually like to work for free, hence the hideous result of open-sourcing.

Nvidia's proprietary driver is a feature-complete mirror of the Windows driver. There, mic is dropped, nothing else needs to be added.
 
When you make comments like "Switch to Linux," it throws a big fat wrench into determining where the problem is, because your typical, run-of-the-mill Linux user probably does think about those angles.

A Linux user has the benefit of a better-documented graphics stack. Since Linux is open source, you can learn how graphics and GPU drivers work, which gives you a better ability to troubleshoot things and report bugs.

The Windows mentality is a black-box blob. Good luck figuring out at what level of the driver something went wrong. The most that people do is report stuff like "this thing hangs for me, please help!". Linux users submit bug reports that are more informative. Some can even do code bisecting to find a regression. That's why I find using Linux a lot more enjoyable too. I wouldn't touch those blobs for gaming, no way. I wouldn't even be motivated to report any bugs in that case.

The other people I know are not so positive -

The AMD Linux drivers are a total mess. Especially their closed-source proprietary counterpart.

You aren't supposed to be using the closed AMD drivers on Linux. Even according to AMD themselves, you should be using amdgpu + Mesa for gaming, i.e. the open driver stack that AMD themselves officially support.

No idea why you'd need OpenCL for gaming, though. If you need it for other things, you can use ROCm, which is open source and provides OpenCL support.

https://rocm.github.io/QuickStartOCL.html

Vulkan and OpenGL support in Mesa is much better than what AMD's Windows drivers provide. So basically your info is completely outdated.
 
A typical Linux user has the benefit of a better-documented graphics stack. Since Linux is open source, you can learn how graphics and GPU drivers work, which gives you a better ability to troubleshoot things and report bugs.

There are no disagreements here. However, consider for a moment the typical Windows user. Regardless of expertise, switching to a completely different OS has growing pains. Even swapping between Linux distros can require an adjustment period. It's the same for any given piece of software. You don't have to convince me which OS is better; I already know. As with many things, compatibility adds another variable to the equation. Not just for gaming, either.

On an unrelated note, has anyone had to replace a fan on an MSI 980 Ti Lightning before? One of the fans on mine misplaced one of its blades at some point :).
 

Man, I can't wait for more companies to start implementing this. Cyberpunk 2077 especially, since they've already confirmed the RTX features. I'm not too hot on ray tracing yet, but any extra performance is always welcome.
 
Yeah.. At least 25% more performance with no visual drawbacks.


Roughly 12:20 onward covers the details of how they've done it.
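For context on where the speedup comes from: DLSS renders internally at a lower resolution and upscales to the output, so shading cost drops roughly with the pixel count. A rough sketch (the exact internal resolutions vary per quality mode; the 50%-per-axis factor here is the commonly cited "Performance" mode):

```python
# Rough pixel-count math behind DLSS-style upscaling: shade fewer pixels
# internally, then reconstruct the full output resolution. The scale factor
# is illustrative ("Performance" mode is commonly cited as 50% per axis).
def internal_pixels(width: int, height: int, scale: float) -> int:
    """Pixels actually shaded when rendering at `scale` per axis."""
    return int(width * scale) * int(height * scale)

native = 3840 * 2160                      # 4K output
perf = internal_pixels(3840, 2160, 0.5)   # 1920x1080 internally
print(f"native: {native}, internal: {perf}, ratio: {perf / native:.2f}")
```

A quarter of the pixels to shade is where a big chunk of the headline fps gain comes from; the tensor-core reconstruction pass then claws back the image quality.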
 
Well, to make framerates higher, they'd be better off investing in improving general-purpose GPU compute units instead of optimizing one narrow use case. That approach is a dead end - how many special use cases can they support that way? Physical space on the die is limited.

I.e. sure, in this example the framerate improved. In many others, this extra hardware won't have any effect. The question is what's the better trade-off.
 
Yeah, but it also improves image quality, like smoothing out the rough edges and shimmering that a native-res image leaves behind. And it has the potential to improve even further with the whole A.I. learning thing. I'm all for it, especially since I own an RTX card, lol.
 
My point is, they are focusing on narrow cases through dedicated hardware. That approach simply doesn't scale. I.e. today this thing is hot, tomorrow another thing, and so on. Are they going to keep growing the die size to address them all?
 
Fair point. But right now, I'm just happy the thing exists and my current hardware can take advantage of it. Especially when it comes to Cyberpunk.
 
The above was just to highlight that RTX is more of a marketing tool than a practical one. Nvidia wanted to show that "we have something unique", so they put in dedicated hardware for ray tracing. AMD reluctantly follows for the same marketing reasons. But in the long run, I don't see AMD and Intel using such an approach, for the reason I explained above.

Also, the push for "we have something unique" is an indication that AMD (and maybe Intel) are close to catching up to them in performance and power consumption in all GPU segments. When there is no argument for "we perform better", arguments like "we have something unique" start being used.
 
Nvidia is clearly the leader right now.

Marginally. Chip manufacturers don't design for the market in its current form; they plan several generations of their chips ahead. And Nvidia wouldn't have been investing money in such a questionable approach as ray-tracing ASICs if they were super confident they'd remain the leader for long.

What I'm trying to say is that the fact that they did is likely an indication they're preparing for AMD and Intel to catch up to them in the near future.
 
I mean, if we have similarly performing components and one manufacturer has something extra for the same price, then yeah, I understand their approach.

I will obviously choose the product that's generally faster, but if Nvidia can stay on top of the game in general performance, and deliver stuff like DLSS to further improve it, I don't see much reason to go for AMD.

And yeah, also having working drivers on top of it all.
 
I mean, if we have similarly performing components and one manufacturer has something extra for the same price, then yeah, I understand their approach.

That's what they're aiming at, yes. Likely not for the same price, though - all those ASICs naturally add to the cost. But it's also the reason AMD announced they're adding ray-tracing hardware: simply to counteract that marketing from Nvidia (even though they admit the benefit of the approach itself is rather minimal).

So it would be interesting to see how the competition stacks up later this year. My expectation is that RDNA 2 will catch up with (or, in the good case, beat) Nvidia in both performance and power consumption. Not sure what Intel is going to bring to the table this spring with their high-end gaming cards.

Personally, I'm not planning to buy one of those upcoming monster RDNA 2 cards aiming to match something like the 2080 Ti (or whatever Nvidia is preparing to replace it with). I'd be interested in something at the RX 5700 XT level, but using RDNA 2 for higher efficiency and performance.
 