GTX 970 VRAM Segmentation Handled Properly in RED Engine?

That's right... Of course time will tell, and for now it's only my hunch, but even given that PC hardware continually becomes obsolete (more due to the market than anything else), and that there is therefore no real point in waiting for a sweet spot, I do believe there are moments every few years when you can make a better buy, simply because a major performance leap makes the old hardware uncompetitive. E.g. Sandy Bridge vs. the previous generation.
 
I'll wait for 16 nm Nvidia cards to come out, then I'll ask for my refund, hehehe. I hope Nvidia will allow me to do that :)

I doubt those cards will become available this year. I believe TSMC has begun production of those chips, but I doubt we will see a GTX card this year; if they are released this year, it will be near the end, in Q4.

And I don't know if Nvidia will allow users to ask for refunds that long after purchase.
 
But getting an AMD won't give you the NVidia eye candy D:

You should know that almost every GameWorks feature works on AMD cards; the only things that don't work are TXAA, MFAA and Apex Turbulence. As an AMD user I always stick to MSAA or SMAA, as they are supported in almost every game.

Other features used in TW3, like Hairworks/Fur and HBAO+, will work on AMD (tested many times), and Apex Clothing, Destruction and PhysX will be handled by the CPU, so there is no eye candy you will lose by switching to AMD; it's just a myth.
 
You should know that almost every GameWorks feature works on AMD cards; the only things that don't work are TXAA, MFAA and Apex Turbulence. As an AMD user I always stick to MSAA or SMAA, as they are supported in almost every game.

Other features used in TW3, like Hairworks/Fur and HBAO+, will work on AMD (tested many times), and Apex Clothing, Destruction and PhysX will be handled by the CPU, so there is no eye candy you will lose by switching to AMD; it's just a myth.

In theory, yes. In practice these features often run better on Nvidia GPUs. Maybe that's intentional, maybe not, but it's something to think about.
 
^ Yes, but according to my own testing, HBAO+ no longer hits AMD cards hard, and Hairworks in FC4 seems to have a similar hit on both AMD and Nvidia cards. Nvidia owners have also posted similar findings about Hairworks.
 
That's right... Of course time will tell, and for now it's only my hunch, but even given that PC hardware continually becomes obsolete (more due to the market than anything else), and that there is therefore no real point in waiting for a sweet spot, I do believe there are moments every few years when you can make a better buy, simply because a major performance leap makes the old hardware uncompetitive. E.g. Sandy Bridge vs. the previous generation.

Another thing is that your projected scenario only works if the Maxwell cards depreciate and the Pascal cards occupy the same price range. If Maxwells hold their price and Pascal's launch price is twice as much, then they'll continue to have their role in the market.
 
Another thing is that your projected scenario only works if the Maxwell cards depreciate and the Pascal cards occupy the same price range. If Maxwells hold their price and Pascal's launch price is twice as much, then they'll continue to have their role in the market.

I never said they don't have their role in the market. In fact they do, and many people study this. All I'm saying with that long story is that if I have to make a prediction, many hints seem to suggest that this generation will be outclassed by the next. One of those hints may be the relatively low price. That doesn't mean buying now is wasting money; in fact I'm planning to buy soon. But in the end it really depends on what kind of investment someone wants to make, whether you want to (or can) wait, and whether my predictions turn out to be right or wrong. Now, better to get back on topic before the mods bash us :)
 
You should know that almost every GameWorks feature works on AMD cards; the only things that don't work are TXAA, MFAA and Apex Turbulence. As an AMD user I always stick to MSAA or SMAA, as they are supported in almost every game.

Other features used in TW3, like Hairworks/Fur and HBAO+, will work on AMD (tested many times), and Apex Clothing, Destruction and PhysX will be handled by the CPU, so there is no eye candy you will lose by switching to AMD; it's just a myth.

I didn't know about all the other features, but I did know about AMD doing PhysX, because I tried it in Borderlands 2 when I had an AMD HD 50 series. It worked, but at the cost of a big FPS hit, and my CPU that was handling it wasn't that great either.
So when CDPR says they can't do these features on the consoles, I assume it's either because Nvidia wouldn't like it, or because Nvidia hardware just handles them much better, and enabling them on the consoles would mean sacrificing something else due to performance.
 
I've got an AMD 6970 and I'm a little worried I won't be able to play it. Everything else should be fine, though. What do you guys think?
 
I've got an AMD 6970 and I'm a little worried I won't be able to play it. Everything else should be fine, though. What do you guys think?

The 6970 is VLIW architecture, not GCN, but it would really surprise me if they did anything that made the game dependent on GCN. It's slightly behind the "minimum" 7870 in performance, so I think it will run the game, maybe in the lower half of expectations.
 
This whole issue is vastly exaggerated.

The supposed performance decrease when VRAM usage passes 3.5 GB is between 1% and 3%.
That is less than 1 FPS in most situations.

Reports of additional stuttering are making the rounds, but that is unconfirmed hearsay.
No respected reviewer has been able to reproduce it yet.

In fact, if you go on YouTube and search explicitly for "GTX 970 VRAM stuttering", all you find are videos of people testing it and coming to the conclusion that there is no additional "hiccup" issue beyond the already bad performance at that point.

The GTX 970 is a great card in every regard.
The only thing Nvidia did wrong was printing "4GB" on the box instead of "3.5GB+" or something.
 
The only thing Nvidia did wrong was printing "4GB" on the box instead of "3.5GB+" or something.

That's kind of my impression here too. That YouTube video with the toothless guy is funny as hell, but the message seems to basically be that if the card had 4 GB it would be the second coming of Christ, but since it has 12.5% less, it's actually complete garbage and everyone at Nvidia should be dragged to the guillotine. Of course they need to be called out for fudging the specs, but as usual, people are overreacting. I'm running a 970, and I've yet to see this fabled slowdown in TW2, Crysis 3, or any of the other games I've played since I bought it.
 
This whole issue is vastly exaggerated.

The supposed performance decrease when VRAM usage passes 3.5 GB is between 1% and 3%.
That is less than 1 FPS in most situations.

Reports of additional stuttering are making the rounds, but that is unconfirmed hearsay.
No respected reviewer has been able to reproduce it yet.
Wrong. Here's a test from the renowned German hardware magazine ComputerBase: http://www.computerbase.de/2015-01/geforce-gtx-970-vram-speicher-benchmarks/2/
While most of the subtests don't show major stuttering, there is one blatant failure with 'The Talos Principle', where the 970 hangs for a full seven seconds while a 980 doesn't even hiccup: http://pics.computerbase.de/6/2/8/0/0/diagramme/6.1422524042.svg
Video of the test scene:
It's at a very high resolution of course, but anyone interested in more than just Full HD (this includes the use of Nvidia's DSR antialiasing!) should be aware of this.


In other tests the 970 shows a tendency toward degraded frametime variance (aka microstuttering) compared to the 980.
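Frametime variance of this kind is easy to quantify yourself from a frametime capture (e.g. a FRAPS- or PresentMon-style log). A minimal sketch; the `frametime_stats` helper and the 2.5x `stutter_factor` threshold are my own arbitrary choices, not anything from the ComputerBase test:

```python
import statistics

def frametime_stats(frametimes_ms, stutter_factor=2.5):
    """Summarize a frametime log: average FPS, median frametime,
    and the count of frames slower than stutter_factor x the median."""
    median = statistics.median(frametimes_ms)
    stutters = [t for t in frametimes_ms if t > stutter_factor * median]
    return {
        "avg_fps": 1000 / statistics.fmean(frametimes_ms),
        "median_ms": median,
        "stutter_frames": len(stutters),
    }

# A steady ~60 FPS run with a single long hang buried in it: the average
# FPS drops only a little, but the hang is flagged as a stutter frame.
log = [16.7] * 120 + [200.0]
print(frametime_stats(log))
```

This is why average-FPS charts can look fine while the card still feels bad: the hang barely moves the mean, but it dominates the perceived smoothness.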
 
I read the article (I'm German too).

The hang is weird, yes. But after the hang the VRAM usage is still above 3.5 GB, and the article itself mentions that, besides the hang, frametime performance in Talos was the most equal between the cards. It doesn't paint a clear picture. The hang could have another cause (yes, the timing is peculiar).

As for the rest... the card that costs twice as much performs better... yeah, I would sure hope so!
 
To my understanding, it's not the actual memory difference that has people so annoyed. Sure, it's 500 MB less, but it's the fact that they blatantly lied about the specs for the card, and continued to do so well after launch, including the ROP count, etc., until they were "caught out".

That, and the fact that quite a few games have shown pretty heavy stuttering when trying to access the last 500 MB section of the card's VRAM, due to its vastly lower bandwidth. It would probably be better off if it actually were a 3.5 GB card.

What has me so wary is what else they've skimped on in their cards that hasn't been found out yet, if they're willing to go as far as they have with the things we've seen. But who knows, tbh.
 
To my understanding, it's not the actual memory difference that has people so annoyed. Sure, it's 500 MB less, but it's the fact that they blatantly lied about the specs for the card, and continued to do so well after launch, including the ROP count, etc., until they were "caught out".

That, and the fact that quite a few games have shown pretty heavy stuttering when trying to access the last 500 MB section of the card's VRAM, due to its vastly lower bandwidth. It would probably be better off if it actually were a 3.5 GB card.

What has me so wary is what else they've skimped on in their cards that hasn't been found out yet, if they're willing to go as far as they have with the things we've seen. But who knows, tbh.

I don't think there are any more surprises, and this one would not have been a surprise had they published correct specs. This is a standard Nvidia design for an incomplete memory bus; they used it on the 550 and 660 (both 24-ROP, 192-bit bus) before this one. (Other non-power-of-2 designs (560 Ti 448 / 570, 780 / Titan) have used a non-power-of-2 memory size (1280, 2560, 3072, 6144 MB) to get full interleave.)

What makes this one glaring is (1) they did not specify that the card had an incomplete complement of ROPs (56, but they put it out as 64), and (2) the performance deficit for the non-interleaved bank of memory is much worse: 7x (32-bit vs. 224-bit) rather than 3x (64-bit vs. 192-bit).
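The 7x figure is just the ratio of the segment bus widths, since both segments run at the same data rate. A quick sketch of the arithmetic, assuming the 970's reference 7 Gbps GDDR5:

```python
# Peak bandwidth of a GDDR5 memory segment:
#   bus width (bits) / 8 bits-per-byte * per-pin data rate (Gbps) = GB/s
# On the 970 the interleaved 3.5 GB segment sits on 224 bits of the bus,
# the leftover 0.5 GB segment on the remaining 32 bits.

def segment_bandwidth(bus_width_bits, data_rate_gbps=7.0):
    """Peak segment bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

fast = segment_bandwidth(224)   # 196.0 GB/s for the 3.5 GB segment
slow = segment_bandwidth(32)    # 28.0 GB/s for the 0.5 GB segment
print(fast, slow, fast / slow)  # ratio is exactly 7x
```

The same arithmetic gives the 3x figure for the older 192-bit designs: 192-bit vs. 64-bit for the leftover bank.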
 
Nvidia needed the 970 to feature 4 GB in order to match AMD's 290-series offerings on paper (as you know, people tend to buy bigger numbers, and 4 is simply more than 3.5...), while being fully aware that the hardware configuration of the 970's cut-down GM204 can't actually use more than 7/8 of those 4 GB with unimpaired performance.
 
Nvidia needed the 970 to feature 4 GB in order to match AMD's 290-series offerings on paper (as you know, people tend to buy bigger numbers, and 4 is simply more than 3.5...), while being fully aware that the hardware configuration of the 970's cut-down GM204 can't actually use more than 7/8 of those 4 GB with unimpaired performance.

You also cannot boil eggs on a 970 unlike the AMD cards
 

The 6970 is VLIW architecture, not GCN, but it would really surprise me if they did anything that made the game dependent on GCN. It's slightly behind the "minimum" 7870 in performance, so I think it will run the game, maybe in the lower half of expectations.
What do you mean, VLIW or GCN? I know the game is not out yet, but with all settings on lowest, might I get decent frame rates on my 1360x768 24" screen? What do you think? Thanks for the reply.
 