Building a gaming PC

The 900 series is not defective; that's FUD. The 970 is the only card affected by the problem. The 980 performs at full speed all the way up to its full 4GB of VRAM; there's nothing wrong with the 980.

It's not just that, though; there's also Nvidia's promise that Maxwell would be 20nm. So much for that.

Even though EVGA has long been nVidia's flagship manufacturer, and they still have the best warranties and customer support going, other manufacturers have better cooling implementations that run quieter and cooler. Gigabyte G1 and MSI 4G models are the best of these. ASUS, Palit, and Zotac are not far behind, though I really dislike Zotac support. With any high-end card, you should make sure it will fit your case; the Gigabyte is 312mm long.

Which would you go for between Gigabyte and MSI?
 
Between Gigabyte and MSI, I'd go for Gigabyte, unless space or quiet operation were a concern. Gigabyte's 3-fan cooler is a marvel, but it is also very long and far from silent. MSI is not far behind, though, and ASUS is not far behind MSI. EVGA got left at the gate on the 900 series.

nVidia got too optimistic on 20nm and got screwed when Apple and Qualcomm got all the 20nm capacity. We may not see 20nm consumer GPUs from them for years, or ever. They're going to 16nm with Pascal, but Pascal is for their monster render appliances.
 
Between Gigabyte and MSI, I'd go for Gigabyte, unless space or quiet operation were a concern.

I'd take that to mean MSI is quieter while Gigabyte's got better performance? What about thermal performance?

nVidia got too optimistic on 20nm and got screwed when Apple and Qualcomm got all the 20nm capacity.

There's that and then there's them delaying the GTX 980ti till 2016.
 
I'd take that to mean MSI is quieter while Gigabyte's got better performance? What about thermal performance?



There's that and then there's them delaying the GTX 980ti till 2016.

Thermal is better on the Gigabyte G1. Overclocking performance always depends on the individual GPU; you're as likely to get a good overclocker with one as the other.
 
http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation



The error, as NVIDIA explains it, is that in creating the GTX 970 reviewer’s guide, the technical marketing team was unaware of Maxwell’s aforementioned and new “partial disable” capabilities when they filled out the GTX 970 specification table. They were aware that the GTX 970 would have the full 256-bit memory bus, and, unaware of the ability to independently disable ROPs, they assumed that all 64 ROPs and the full 2MB of L2 cache were similarly available and wrote the specification table accordingly.

So much bullshit it hurts my head.

But this will come back to bite them in the ass at some point, most likely very soon.

 
^ That is a blunder on the order of the time, long ago, when Intel said the FDIV bug didn't matter. No, it's not a lie; nobody meant to deceive anybody. Somebody got or guessed the wrong number, and it didn't get caught by proofreaders. It may even have been an engineering specification that driver writers worked against. It's a breakdown in communication within the company, though it probably won't be a career-limiting mistake, given the out-of-control demand for engineers in Silicon Valley.

56 ROPs, not 64. And because the memory bus is tied to the ROPs in nVidia's architecture, that matches perfectly with the observed inability to address one-eighth of the memory at full speed.
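To put numbers on that, here's a back-of-the-envelope sketch in C++. It's purely illustrative arithmetic based on the figures reported in the AnandTech piece, not anything from NVIDIA's own documentation: with one of the eight ROP/L2 partitions disabled, one-eighth of the 4GB ends up behind the slow path.

```cpp
// Back-of-the-envelope arithmetic for the GTX 970 memory segmentation,
// based on the publicly reported figures (a sketch, not NVIDIA documentation).
#include <cstdio>

int main() {
    const int    rops_advertised = 64;   // what the spec table claimed
    const int    rops_actual     = 56;   // what the silicon actually exposes
    const double vram_gb         = 4.0;  // total GDDR5 on the card

    // The ROP/L2 partitions and the memory crossbar are coupled in this
    // generation, so disabling one of the eight partitions leaves one-eighth
    // of the VRAM reachable only through a slower, shared path.
    const int partitions_total  = 8;
    const int partitions_active = partitions_total * rops_actual / rops_advertised; // 7

    const double fast_gb = vram_gb * partitions_active / partitions_total; // 3.5 GB
    const double slow_gb = vram_gb - fast_gb;                              // 0.5 GB

    std::printf("Full-speed segment: %.1f GB\n", fast_gb);
    std::printf("Slow segment:       %.1f GB\n", slow_gb);
    return 0;
}
```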
 
Btw, there is a respectable rep named Peter on the Nvidia forums who is now genuinely trying to help customers get refunds etc. on the 970. If you're no longer satisfied with this GPU and your return request is getting denied by the individual seller, you can PM him the details and he'll talk to them on your behalf. I think Nvidia is now in full damage control mode, just like AnandTech said.

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4438090/#4438090

@Guy N'wah

Hey man, can you advise me on how sensible it would be to get into a dual-card setup now? I'm asking about a CrossFire setup; I have no experience with dual cards, so I'm not sure what to expect from it, a good experience or a bad one overall? I have a Sapphire R9 290 Tri-X, and since they're really cheap these days, I was thinking about getting another one before TW3.

Asking around online has been a mixed bag for me so far; some say it's great while others say it's not worth it and that staying with a single card is the best solution. I've also heard that the new XDMA CrossFire is much better than before, greatly improving frame pacing and giving better scaling overall.

What's your advice on all this: should I wait for the R9 300 series to appear or just get another R9 290 for a CrossFire setup?
 
Btw, there is a respectable rep named Peter on the Nvidia forums who is now genuinely trying to help customers get refunds etc. on the 970. If you're no longer satisfied with this GPU and your return request is getting denied by the individual seller, you can PM him the details and he'll talk to them on your behalf. I think Nvidia is now in full damage control mode, just like AnandTech said.

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4438090/#4438090
I'm still waiting for a reply from the retailer I bought the cards from, but if that fails, I'll contact him.

Also, regarding crossfire:

https://www.youtube.com/watch?v=pGN1na3F5do

Generally speaking, I've heard that SLI works better than CrossFire in most cases. That wouldn't surprise me, since Nvidia did pioneer that technology.
 
I'm still waiting for a reply from the retailer I bought the cards from, but if that fails, I'll contact him.

Also, regarding crossfire:

https://www.youtube.com/watch?v=pGN1na3F5do

Generally speaking, I've heard that SLI works better than CrossFire in most cases. That wouldn't surprise me, since Nvidia did pioneer that technology.

Thanks for the reply. Yes, I saw that video before, but so far everything I've heard about XDMA CrossFire is good; some say it's the right step to put CrossFire on par with SLI. I know SLI still has wider support in games and generally more consistent performance, but since I have an MSI Z97 G45 mobo with 3 PCI-E slots supporting either SLI or CF, I feel like I'm wasting that mobo with only one GPU.

For now I game only at 1080p, and for that my single R9 290 gives a consistent 60 fps most of the time, though I see 45-50 fps in demanding games like DAI, especially with AA. I'm thinking of going to 1440p, and for that I want 60 fps at the highest possible settings with some room for experimenting with AA.
 
Btw, there is a respectable rep named Peter on the Nvidia forums who is now genuinely trying to help customers get refunds etc. on the 970. If you're no longer satisfied with this GPU and your return request is getting denied by the individual seller, you can PM him the details and he'll talk to them on your behalf. I think Nvidia is now in full damage control mode, just like AnandTech said.

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4438090/#4438090

@Guy N'wah

Hey man, can you advise me on how sensible it would be to get into a dual-card setup now? I'm asking about a CrossFire setup; I have no experience with dual cards, so I'm not sure what to expect from it, a good experience or a bad one overall? I have a Sapphire R9 290 Tri-X, and since they're really cheap these days, I was thinking about getting another one before TW3.

Asking around online has been a mixed bag for me so far; some say it's great while others say it's not worth it and that staying with a single card is the best solution. I've also heard that the new XDMA CrossFire is much better than before, greatly improving frame pacing and giving better scaling overall.

What's your advice on all this: should I wait for the R9 300 series to appear or just get another R9 290 for a CrossFire setup?

If you had asked me a year ago, I would have said wait for a sufficiently powerful R9 300. But 290s in CrossFire run quite well. The big question is whether you intend to play DX9 titles, because AMD's frame pacing fix does not work in DX9. If all the demanding games you play are in DX10 or DX11, you're fine.
 
If you had asked me a year ago, I would have said wait for a sufficiently powerful R9 300. But 290s in CrossFire run quite well. The big question is whether you intend to play DX9 titles, because AMD's frame pacing fix does not work in DX9. If all the demanding games you play are in DX10 or DX11, you're fine.

Thanks man. Well, I have very few games left to play in DX9, and from here on all the games we'll get will be DX11 or later, so I don't think I'll go back to DX9. Even if I do, a single R9 290 will be enough for them (in case I have to turn off CF).

Btw, what's your take on DX12? I heard that for full DX12 features we'll need new GPUs, but AMD said DX12 will be supported on their GCN GPUs, so it's still okay to go for an R9 290 even if DX12 is coming soon? The performance improvement benefits will still apply to recent GPUs, right?
 
Feature levels: DirectX 12 has feature levels. The spec is still not finished, so whatever GPU claims to be DX12, it only has partial support.

That said, they've mentioned that it's not an issue for the main 'highlight' of DX12, which is better performance from utilizing the CPU properly, more draw calls, etc.
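For what it's worth, here is a minimal sketch of how feature levels are exposed through the Direct3D 12 API: you create a device against a minimum feature level and then ask the driver for the highest level it actually supports. It assumes the Windows 10 SDK headers, grabs the first adapter, and trims error handling, so treat it as an illustration rather than production code.

```cpp
// Minimal sketch: create a D3D12 device at the lowest feature level and ask
// which feature level the hardware/driver actually supports.
// Link against d3d12.lib and dxgi.lib; error handling trimmed for brevity.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);  // first adapter; a real app would enumerate

    // D3D12 devices are created against a *minimum* feature level...
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12-capable device found.");
        return 1;
    }

    // ...and then queried for the highest level the hardware/driver supports.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    std::printf("Max supported feature level: 0x%x\n",
                static_cast<unsigned>(levels.MaxSupportedFeatureLevel));
    return 0;
}
```

So a GCN card can still get a working D3D12 device (and the CPU-overhead benefits) even if its maximum feature level falls short of 12_1.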
 
I literally just purchased a GTX 970 G1 a week before this memory issue was discovered. I'm thinking I'll just keep it anyway; I can't justify an extra $250 for the GTX 980, and I might get a Pascal card later next year anyway. So far I have been very happy with the performance, and while I don't think NVIDIA should have lied about the specs, I'll still buy their products in the future.
 
Yeah, it's still the same card when it comes to performance, benchmarks, TDP, etc. I still would've bought it if it had been 3.5GB only, and I can't think of anything else in that price range offering the same performance and features. On the (somewhat) plus side, they said they're working on a driver that will help with how the 512MB segment gets allocated; here's hoping.

That said
http://www.guru3d.com/news-story/middle-earth-shadow-of-mordor-geforce-gtx-970-vram-stress-test.html

Concluding

Our product reviews in the past few months and their conclusions are no different from everything that has happened in the past few days; the product still performs the same as what we have shown you because, hey, it is in fact the same product. The clusterfuck that Nvidia dropped here is simple: they have not informed the media or their customers about the memory partitioning and the challenges they face. Overall you will have a hard time pushing any card over 3.5 GB of graphics memory usage with any game unless you do some freaky stuff. The ones that do pass 3.5 GB are mostly poor console ports or situations where you game in Ultra HD or DSR Ultra HD rendering. In that situation I cannot guarantee that your overall experience will be trouble-free, however we have a hard time detecting and replicating the stuttering issues some people have mentioned.
The Bottom line

Utilizing graphics memory beyond 3.5 GB can result in performance issues, as the card needs to manage some really weird stuff in memory; it's nearly load-balancing. But the fact remains it seems to be handling that well; it's hard to detect and replicate oddities. If you unequivocally refuse to accept the situation at hand, you really should return your card and pick up a Radeon R9 290X or GeForce GTX 980. However, if you decide to upgrade to a GTX 980, you will be spending more money and thus rewarding Nvidia for it. Until further notice our recommendation on the GeForce GTX 970 stands as it was: for the money it is an excellent performer. But it should have been called a 3.5 GB card with a 512MB L3 GDDR5 cache buffer.
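If you want to see the slow segment for yourself, the VRAM stress tests people have been running boil down to something like the sketch below: allocate the card's memory in fixed-size chunks and time a copy inside each one; on a 970, the chunks that land in the last 0.5 GB should show noticeably lower bandwidth. This is a rough host-side approximation using the CUDA runtime API, not the actual tool from the article, and the 256 MB chunk size is just an assumption for illustration.

```cpp
// Rough sketch of a VRAM stress probe: fill the card in 256 MB chunks and time
// a device-to-device copy within each chunk. Chunks in the slow segment should
// report lower bandwidth. Assumes the CUDA runtime (link against cudart);
// an illustration of the idea, not the benchmark from the article.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t chunk = 256ull << 20;  // 256 MB per allocation (assumed size)
    std::vector<void*> chunks;

    // Grab as much device memory as the runtime will give us, chunk by chunk.
    for (;;) {
        void* p = nullptr;
        if (cudaMalloc(&p, chunk) != cudaSuccess) break;
        chunks.push_back(p);
    }
    std::printf("Allocated %zu chunks (%zu MB)\n", chunks.size(),
                chunks.size() * (chunk >> 20));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time a copy within each chunk; bandwidth = bytes moved / elapsed time.
    for (size_t i = 0; i < chunks.size(); ++i) {
        char* base = static_cast<char*>(chunks[i]);
        cudaEventRecord(start);
        cudaMemcpy(base + chunk / 2, base, chunk / 2, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        double gbps = (chunk / 2) / (ms / 1000.0) / 1e9;
        std::printf("chunk %2zu: %.1f GB/s\n", i, gbps);
    }

    for (void* p : chunks) cudaFree(p);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

Keep in mind the display driver holds some VRAM of its own, so you won't get all 4 GB allocated, and results will be noisy; it's only meant to show the shape of the problem.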
 
Since a Google search gave me no results, I'm hoping somebody here knows whether Nvidia's GameWorks features like HairWorks etc. will run on a dedicated GPU, as is possible with PhysX.
 
Since a Google search gave me no results, I'm hoping somebody here knows whether Nvidia's GameWorks features like HairWorks etc. will run on a dedicated GPU, as is possible with PhysX.

AFAIK, only PhysX-based features can be offloaded to a dedicated GPU. Hairworks is DirectCompute, and there is no provision for a dedicated GPU in DirectCompute.
 