Building a gaming PC

I entered a contest that Gigabyte posted on their site about a week ago (they were giving away 5 GTX 1080 Xtreme Gaming cards) and ended up winning one of the cards :). Now I'm thinking I'll SLI the two cards so I'm definitely glad I went with an 850 watt PSU lol.
 
Is this a new feature for AMD cards? I haven't used an AMD card in about 15 years or so... but I know that Nvidia cards have been doing this for a few generations already. It's a nice feature to have if you want a silent PC :).

It's not a feature of NV or AMD cards themselves, but something NV's and AMD's board partners have been implementing for the last ~2 generations of cards. None of the reference designs by NV or AMD do that.
 

I wasn't trying to turn it into a pissing contest between AMD and Nvidia; I was just curious whether past AMD cards also had this feature (regardless of whether they were reference or aftermarket designs), since I haven't used one in over a decade. Thanks for the info :).
 
My latest card is a GTX 680, so I'm not familiar with this feature. Just pointing out that it's neat.

I'm not sure if the software is available for Linux, but on Windows there are utilities such as EVGA Precision (Nvidia cards only) or MSI Afterburner (works with both Nvidia and AMD) that let you set custom fan curves, making the fan as loud or as quiet as you'd like at any temperature.
 
My current MSI GTX 970 uses passive cooling by default until the temperature hits 60°C. In my custom fan profile I set it to 30% fan speed for temps between 30°C and 40°C, and so on. At that speed the fan is practically inaudible (I seriously can't hear it start and stop, though my case has noise reduction) and it keeps the card much cooler, say 32°C while browsing (in the summer). Completely passive it would sit around 45°C or so, depending on how hot the room is. Older cards can maybe get away with passive cooling as long as they don't get too hot, which I guess is all that matters.
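
For what it's worth, something similar can be scripted on Linux. Here's a rough Python sketch of the same idea, assuming an Nvidia card with the proprietary driver and the Coolbits option enabled (otherwise nvidia-settings rejects manual fan control); the thresholds are made up to mirror the profile above.

```python
import subprocess
import time

# Fan-curve sketch: (upper temp in °C, fan speed in %), assumed values.
CURVE = [(40, 30), (50, 45), (60, 60), (70, 80)]

def read_temp():
    # nvidia-smi prints the core temperature as a bare integer.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.strip())

def set_fan(percent):
    # Enable manual control, then set the target speed.
    subprocess.run(
        ["nvidia-settings",
         "-a", "[gpu:0]/GPUFanControlState=1",
         "-a", f"[fan:0]/GPUTargetFanSpeed={percent}"],
        check=True)

while True:
    temp = read_temp()
    # Pick the speed for the first bracket the temperature falls into;
    # anything above the last bracket runs the fan flat out.
    speed = next((s for t, s in CURVE if temp <= t), 100)
    set_fan(speed)
    time.sleep(5)
```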
 

That's a bit misleading, apparently:

the card utilizes a Polaris 10 GPU and includes 2 PCIe 3.0 M.2 slots for adding flash drives to the card. These slots are then attached to the GPU (...) which the GPU can then use as an additional tier of storage.

Source: http://www.anandtech.com/show/10518/amd-announces-radeon-pro-ssg-polaris-with-m2-ssds-onboard

So it's a professional GPU with extra storage via M.2 SSDs. Still interesting technology, though, especially for high-throughput applications such as video or image rendering. I wonder if the SSDs can really be as fast as proper GDDR.
 
The transfer rate between GPU and SSD is no faster than for other M.2 SSDs. Still, by using it as a sort of additional cache below RAM, the latency for accessing this storage pool is a lot lower than going the normal route through your system drives. I think you can compare the situation to a hybrid HDD with a few gigabytes of flash memory for reduced access times.
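
To illustrate the analogy, here's a toy Python sketch (nothing like AMD's actual implementation): a small fast tier sitting in front of a big slow tier, so repeated reads skip the slow path.

```python
# Toy two-tier store: a small fast cache (standing in for the card's
# onboard flash) in front of a slow backing store (the normal route
# through the system drives).
class TieredStore:
    def __init__(self, slow_read, cache_size=4):
        self.slow_read = slow_read   # function fetching from the slow tier
        self.cache = {}              # fast tier, insertion-ordered
        self.cache_size = cache_size

    def read(self, key):
        if key in self.cache:        # fast path: low latency
            return self.cache[key]
        value = self.slow_read(key)  # slow path: full storage latency
        if len(self.cache) >= self.cache_size:
            # Evict the oldest entry (simple FIFO policy).
            self.cache.pop(next(iter(self.cache)))
        self.cache[key] = value
        return value
```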
 

In CUDA applications (and probably other GPGPU APIs) the latency of memory operations is hidden by properly scheduling reads and writes to and from device memory. No one (maybe students?) writes to device memory, reads back, and then loads files from slow storage such as hard drives. Worst case, if the data doesn't fit in RAM, it should be preloaded while the device is busy. I can't imagine an SSD being preferred over RAM, especially now that RAM is cheap and proper workstations and HPC nodes have hundreds of GBs of it. SSDs sit a step above mechanical hard drives in the memory hierarchy, but still below RAM.

So if slow, permanent storage is still necessary, why not just load every rendering node with a few massive SSDs, a few massive HDDs, and the fastest InfiniBand connection available to a humongous storage server? We're talking about multi-million-dollar investments here, not garage operations. Unless that's AMD's target audience, since Nvidia is all over the place in industry, research labs and universities.
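
To make the scheduling point concrete, here's a host-side Python sketch of the same pattern CUDA streams express on the device: the next chunk is prefetched from slow storage while the current one is being processed, so the I/O latency is hidden behind compute. load_chunk and process_chunk are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Double-buffered pipeline: overlap loading of chunk N+1 with
# processing of chunk N.
def run_pipeline(chunk_ids, load_chunk, process_chunk):
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(load_chunk, chunk_ids[0])  # preload first chunk
        for next_id in chunk_ids[1:]:
            data = pending.result()                    # wait for current chunk
            pending = io.submit(load_chunk, next_id)   # prefetch the next one
            process_chunk(data)                        # compute overlaps I/O
        process_chunk(pending.result())                # final chunk
```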

There are other, much more serious issues affecting the progress of GPGPU technologies for anything other than data-parallel applications with algebraic solutions: for instance, transparent and more direct memory operations, and more control over different segments of the SP array.
 
I'm waiting for the Von Neumann architecture to become obsolete. Imagine a computer where storage is as fast as RAM, as persistent as drives, and measured in terabytes. There would be no need to separate RAM and drives anymore; it would be one single virtual space. Technology is moving there, but it hasn't quite reached it yet.
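
You can already get a small taste of that model today with memory-mapped files: persistent storage addressed as if it were ordinary RAM. A minimal Python sketch (the file name is made up; real persistent-memory hardware would also make the mapping RAM-speed):

```python
import mmap

# Map a file into the address space: one virtual space for both
# memory and storage.
PATH = "persistent.bin"
SIZE = 1 << 20  # 1 MiB

with open(PATH, "wb") as f:
    f.truncate(SIZE)            # create a zero-filled backing file

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"         # write through the mapping as if it were RAM
    mem.flush()                 # make the change durable on the drive
    print(mem[0:5])             # b'hello' survives across runs
    mem.close()
```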
 
So far, the PowerColor RX 480 Red Devil is arguably the best custom 480: its cooling is more powerful and quieter than that of the other versions launched to date.
 