Hardware/Software Technical Discussion Thread

Very flaky support in OpenGL drivers on Linux (Mesa didn't support it at all as far as I know). It was a poor design to begin with. The Vulkan approach is a lot better, but I think Mesa still lists it as not started as well (VK_KHR_device_group):

https://gitlab.freedesktop.org/mesa/mesa/blob/master/docs/features.txt#L423

UPDATE: Hmm. That's strange - features.txt shows it as missing, but the actual vulkaninfo output for radv shows it as supported:

Code:
 vulkaninfo | grep VK_KHR_device_group
        VK_KHR_device_group_creation        : extension revision  1
        VK_KHR_device_group                 : extension revision  1
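
If you want to check the same thing from code rather than through vulkaninfo, here's a minimal sketch (my own illustrative example, nothing official) that enumerates device groups through the Vulkan 1.1 core entry point that VK_KHR_device_group_creation was promoted into. It assumes a Vulkan 1.1 loader and headers are installed; the build command is just a guess for a typical Linux setup.

Code:
/* Illustrative sketch: list physical device groups (core in Vulkan 1.1,
 * previously VK_KHR_device_group_creation).
 * Assumed build: gcc list_groups.c -o list_groups -lvulkan */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
    app.apiVersion = VK_API_VERSION_1_1;          /* device groups are core in 1.1 */

    VkInstanceCreateInfo ici = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    /* First call gets the count, second call fills the structs. */
    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, NULL);
    if (count > 16) count = 16;                   /* small fixed array, sketch only */

    VkPhysicalDeviceGroupProperties groups[16];
    for (uint32_t i = 0; i < count; ++i) {
        groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        groups[i].pNext = NULL;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups);

    for (uint32_t i = 0; i < count; ++i)
        printf("device group %u: %u physical device(s)\n",
               i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, NULL);
    return 0;
}

If radv really does expose it, this should report at least one group - normally with a single device in it, unless you actually have a linked multi-GPU setup.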
Hello, Gilrond.
 
Those cards came and died on their arses. It doesn't solve a lot of the inherent issues of using 2 GPUs.

Mmmm... the old cards came and died, but from everything I remember, they were not really "multi-GPU". It was more like one processor to do the base-level teraflops and a slave to handle post-processing.

Now that we're heading toward 4K resolutions as standard -- bye-bye, anti-aliasing. BYE-BYE. SEE YA!

That's huge. Now, we're dealing with two processors that can share the load of raw teraflops and share the load of any post-process effects per frame, since there's nothing that's super-processor-intensive like AA anymore. (A general FXAA can completely smooth out 4K, and I really don't think even that will be necessary.)

But the biggest gain is the raw geometry that can be included in a title. That sort of floating-point data seems to be begging for hyperthreading. With no real overhead needed for major post-processing, I think it simply makes more sense to start cramming GPUs (or GPU cores) onto single cards. And, of course, developing wider buses to deal with it. "PCI-ExpreXX," and so forth. (Get the data processed as fast as possible, and negate the need for post-process bells and whistles. It's basically all about the pixels in the end.)
 
Very flaky support in OpenGL drivers on Linux (Mesa didn't support it at all as far as I know).

Ya, I've never used it on Linux; I've always just used Windows. I could definitely see it being an issue on Linux, though.

Now that we're heading toward 4K resolutions as standard -- bye-bye, anti-aliasing. BYE-BYE. SEE YA!

That's huge. Now, we're dealing with two processors that can share the load of raw teraflops and share the load of any post-process effects per frame, since there's nothing that's super-processor-intensive like AA anymore. (A general FXAA can completely smooth out 4K, and I really don't think even that will be necessary.)

But the biggest gain is the raw geometry that can be included in a title. That sort of floating-point data seems to be begging for hyperthreading. With no real overhead needed for major post-processing, I think it simply makes more sense to start cramming GPUs (or GPU cores) onto single cards. And, of course, developing wider buses to deal with it. "PCI-ExpreXX," and so forth. (Get the data processed as fast as possible, and negate the need for post-process bells and whistles. It's basically all about the pixels in the end.)

On AA, believe it or not, even at 4K you can still see the difference it makes. I'm currently replaying Far Cry 4, and after dialing the MSAA up to 4x @ 4K, you can definitely tell how much cleaner the lines are. It's not as obvious as it was at, say, 1080p, but it's definitely there. I was surprised, as I figured it wouldn't matter at that high a resolution.

Agree completely on putting multiple GPUs on a card. I was thinking about that last week when I was reading some hardware articles. We're at a point where NVidia is basically untouchable at the high end (hoping Intel can offer some competition when they release their Xe cards), and we're seeing very slow growth in terms of performance, exacerbated by the fact that NVidia's prices on the latest Turing line are just out-of-this-world insane because they have no competition. Since we aren't seeing competition with NVidia at the single-GPU high end, I'd love to see AMD / Intel really push this type of technology and make it very easy to put 2, 3, or 4 middle-tier GPUs together and have them scale well.

That's why I got into Crossfire initially, back when I was semi-poor - I couldn't afford the big dawg $500-700 high-end GPUs, but I could afford a couple of $150-200 cards that would net me nearly the same level of performance for less money. It's a really neat technology; it just needs to be pushed harder and adopted by developers.

With DX12 & Vulkan making it so developers can simply code for it in their games, it's a consumer-friendly feature that, IMO, should absolutely be implemented in all AAA titles.
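
For the Vulkan side of "coding for it", the explicit multi-GPU path basically comes down to creating one logical device that spans a whole device group. Here's a rough sketch under the assumption that a VkInstance with apiVersion 1.1 already exists; the function name and the queue-family choice are mine and purely illustrative, and the DX12 linked-adapter path is analogous but not shown.

Code:
/* Sketch only: build one logical VkDevice covering every GPU in the first
 * reported device group (Vulkan 1.1 explicit multi-GPU). Error handling
 * is trimmed for brevity. */
#include <vulkan/vulkan.h>

VkDevice create_device_spanning_first_group(VkInstance instance)
{
    /* Grab just the first device group. */
    uint32_t count = 1;
    VkPhysicalDeviceGroupProperties group = {
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES };
    vkEnumeratePhysicalDeviceGroups(instance, &count, &group);
    if (count == 0)
        return VK_NULL_HANDLE;                    /* no GPUs reported at all */

    /* One graphics queue on family 0 - an assumption for brevity; a real
     * app would query the queue family properties first. */
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queue = { VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO };
    queue.queueFamilyIndex = 0;
    queue.queueCount       = 1;
    queue.pQueuePriorities = &priority;

    /* This chained struct is what makes the device span the whole group. */
    VkDeviceGroupDeviceCreateInfo groupInfo = {
        VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO };
    groupInfo.physicalDeviceCount = group.physicalDeviceCount;
    groupInfo.pPhysicalDevices    = group.physicalDevices;

    VkDeviceCreateInfo dci = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    dci.pNext                = &groupInfo;
    dci.queueCreateInfoCount = 1;
    dci.pQueueCreateInfos    = &queue;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(group.physicalDevices[0], &dci, NULL, &device);
    return device;
}

From there the per-title work is mostly about splitting the frame: device masks (e.g. vkCmdSetDeviceMask in a command buffer) decide which GPUs in the group execute which commands, so scaling still depends on developers actually doing that split sensibly.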
 
Hello, Gilrond.

Hello, Snowflakez.

Now that we're heading toward 4K resolutions as standard -- bye-bye, anti-aliasing. BYE-BYE. SEE YA!

I don't think it's heading there yet. Putting several GPUs on one card, or using two cards, doesn't really matter much if the resulting beast produces heat like an oven (and eats energy accordingly). When 4K can be handled at high framerates without turning your desk into a space heater, then it will be practical. That will still take a while, I think.
 
Hello, Snowflakez.



I don't think it's heading there yet. Putting several GPUs on one card, or using two cards, doesn't really matter much if the resulting beast produces heat like an oven. When 4K can be handled at high framerates without turning your desk into a space heater, then it will be practical. That will still take a while, I think.
I can't seem to find your epic showdown with Sard anymore. Do you know what thread that is in? I like to re-read it once a month.
 
I hope the two delays will lead to good optimisation, because Ray Tracing on a GTX 1080 demands that.

Good ray tracing performance on a card without dedicated RT cores? I'd love to see that, and world peace too while we're at it :p All joking aside, I don't think even the current top RTX cards are capable of delivering good RT numbers unless you're gaming at 1080p with DLSS 2.0 enabled. Decent RT performance might be possible with the upcoming GeForce card lineup, though - we'll see :think:
 
I hope the two delays will lead to good optimisation, because Ray Tracing on a GTX 1080 demands that.
Good ray tracing performance on a card without dedicated RT cores? I'd love to see that, and world peace too while we're at it :p All joking aside, I don't think even the current top RTX cards are capable of delivering good RT numbers unless you're gaming at 1080p with DLSS 2.0 enabled. Decent RT performance might be possible with the upcoming GeForce card lineup, though - we'll see

^ This. Non-RTX cards, no matter how powerful they are in other areas, are not made to handle ray-tracing. And the same is true of Radeon equivalents.

Ray tracing is something that requires both the hardware and the software to work effectively / correctly. The 1060s and above technically support it, but they will not be able to handle it with anywhere near the speed of a true RTX card.

A good workaround for such situations is to lower your resolution. Dropping to something in the 1024x768 range may actually give much more playable performance if a game starts to chug.
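
If you'd rather have the game (or a little tool of your own) find this out up front instead of chugging first, the Vulkan way is simply to ask the driver which device extensions it exposes. A small sketch, assuming reasonably recent Vulkan headers; the function name is illustrative, and the D3D12 equivalent (a DXR feature-support query) isn't shown.

Code:
/* Sketch: report whether a physical device exposes the cross-vendor
 * Vulkan ray tracing extensions before letting the user switch RT on. */
#include <string.h>
#include <vulkan/vulkan.h>

int device_exposes_ray_tracing(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, NULL);

    VkExtensionProperties props[512];                /* fixed array, sketch only */
    if (count > 512) count = 512;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, props);

    int has_pipeline = 0, has_accel = 0;
    for (uint32_t i = 0; i < count; ++i) {
        if (strcmp(props[i].extensionName, "VK_KHR_ray_tracing_pipeline") == 0)
            has_pipeline = 1;
        if (strcmp(props[i].extensionName, "VK_KHR_acceleration_structure") == 0)
            has_accel = 1;
    }
    return has_pipeline && has_accel;
}

Keep in mind this only tells you RT can run on the card, not that it will run well - which is exactly the situation with the cards-without-RT-cores mentioned above.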
 
I agree with the points about true RTX cards and the synergies, but I think a key factor that needs to be taken into account is the availability of G-Sync or FreeSync compatible monitors with the refresh rate (Hz) and 4K capability to actually show those rays.

I think PC builders sometimes get so focused on the parts that they don't think about the complete package, and about the tweaks needed to get it all working effectively / correctly, as @SigilFey stated.

Without a truly synchronized system you will create inherent skips, lags, and bottlenecks in data, power, and communication between your parts. Similarly, without a truly synchronized monitor there is a gap between what the card can do and what you ultimately see - which, in the end, is what we are talking about: what rays are projected in the screen image, the clarity, and the speed of transitions as the image/light moves across the pixels.

I'm not going to be a salesperson for G-Sync or FreeSync; I think the monitor's refresh rate (144 Hz), panel type, and general specs are more important. That said, the key is getting a synergistic system: a proper allotment of matching RAM and good BIOS selections (XMP, not the auto profile), a good fast SSD, and a well-chosen Intel- or AMD-driven CPU and GPU combination. Don't forget a good cooler to keep everything happy and pampered, and make sure you have a power supply (and even a wall power source) strong enough for your computer and other components, etc...

Anyway, I'll get off my soapbox, but I see Reddit filled with posts about this internal component or that, and not about how it all works together or how it appears visually in the end.
 