Predicted Witcher 3 system specs? Can I run it?

Turning down the resolution so you can "max" the game's settings is counterproductive if your goal is the best possible image quality. There is a reason people pay through the nose for higher-resolution monitors.
Is there a big difference in performance from 1080p to 1440p? How much fps will you lose by jumping to 1440p? I know it's guesswork, but if you had to guess?
 
Is there a big difference in performance from 1080p to 1440p? How much fps will you lose by jumping to 1440p? I know it's guesswork, but if you had to guess?
Well, if you look back at the benchmark for Crysis 3 that you linked earlier and compare the 1080p performance to the 1440p performance, the minimum fps goes from 61 at 1080p to 38 at 1440p. That's a 23-frame difference with all the same settings, which is about a 38% difference in fps. Now I think that is an extreme example; most of the time the performance hit would be closer to 30% than 40%. But this is as good an example as any. You can expect at least a 30% reduction in frame rate with the same settings going from 1080p to 1440p.

For further reference, 1440p has roughly 78% more pixels than 1080p. So you are NOT taking a nearly-80% performance hit; that would be ridiculous.
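If it helps, here's a quick Python sanity check of those numbers (the 61/38 fps figures are just the minimum-fps values from the Crysis 3 benchmark linked above, nothing I measured myself):

```python
# Quick sanity check of the figures quoted above.
w1080, h1080 = 1920, 1080
w1440, h1440 = 2560, 1440

pixel_ratio = (w1440 * h1440) / (w1080 * h1080)
print(f"1440p has {pixel_ratio - 1:.0%} more pixels than 1080p")  # ~78%

fps_1080, fps_1440 = 61, 38          # minimum fps from the linked Crysis 3 benchmark
fps_drop = (fps_1080 - fps_1440) / fps_1080
print(f"Frame-rate drop at the same settings: {fps_drop:.0%}")    # ~38%
```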
 
Well, if you look back at the benchmark for Crysis 3 that you linked earlier and compare the 1080p performance to the 1440p performance, the minimum fps goes from 61 at 1080p to 38 at 1440p. That's a 23-frame difference with all the same settings, which is about a 38% difference in fps. Now I think that is an extreme example; most of the time the performance hit would be closer to 30% than 40%. But this is as good an example as any. You can expect at least a 30% reduction in frame rate with the same settings going from 1080p to 1440p.

For further reference, 1440p has roughly 78% more pixels than 1080p. So you are NOT taking a nearly-80% performance hit; that would be ridiculous.
And what about the difference between 1080p and 1366x768? How much more fps would you get?
 
And what about the difference between 1080p and 1366x768? How much more fps would you get?

It depends on how output-bound the game is. Some games (TW2 is maybe the best example) are thoroughly output-bound. There, the frame rate is almost an exact inverse of the pixel count. Games like Crysis 3 are not as strongly output-bound, and the slope will be different.

But here's the problem: @theLaughingStorm is trying to draw a proportion of fps to pixels, but the relationship is reciprocal, not linear, and that makes the arithmetic very different. Make the comparison on frame time instead, which is the inverse of frame rate, and you will have a correct view of the effect of pixel count.

61 fps = 16.4 msec
38 fps = 26.3 msec
The frame time increased by 60%: that is your performance hit when you increase the pixels by 78%. It's a much stronger dependency on pixel count than the incorrect comparison on frame rate would indicate.
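If it helps to see it as code, here's a minimal sketch of that frame-time comparison (same benchmark numbers as above):

```python
# Compare on frame time (the reciprocal of frame rate), not on fps.
fps_1080, fps_1440 = 61, 38            # minimum fps from the Crysis 3 benchmark

ft_1080 = 1000 / fps_1080              # ~16.4 ms per frame
ft_1440 = 1000 / fps_1440              # ~26.3 ms per frame

increase = ft_1440 / ft_1080 - 1       # ~0.60, i.e. about a 60% increase
print(f"Frame time: {ft_1080:.1f} ms -> {ft_1440:.1f} ms (+{increase:.1%}), "
      f"against a ~78% increase in pixel count")
```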
 
http://www.kdramastars.com/articles/19164/20140408/witcher-3-pc-max-settings.htm
Well, according to this (and it could be false, but let's just say it's not), the GTX 780 Ti will run the Witcher 3 PC version at 35-45 fps at 1080p max settings with 8x MSAA enabled. How can TW3 run maxed at 60 fps without losing too much eye candy?
1. Released state: gain 5 fps
2. Turn off physics: gain 10 fps
3. Lower to 2x MSAA: gain 10 fps

That would then be 60-70 fps.
 
This kind of analysis is still based on faulty arithmetic. Frame rates are reciprocal, not linear. If you try to add or subtract them, you get a false picture of differences or improvements.

Getting from 35 fps to 60 fps is not a 25 fps (about 71%) improvement. It's going from 1/35 to 1/60 seconds per frame, and that's about a 41% improvement. So the work needed to render a frame needs to be decreased by about 40 percent to make this happen.

By my calculation, TW2 needs about 200-250 instructions to render a pixel. A 40% decrease means you have to make it so that 80-100 of those instructions go away (or get executed across more parallel processors). That's a huge challenge for the programmers. Part of that is writing better code, and the other part is taking advantage of newer technology such as multiple rendering threads.
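As a rough sketch of that arithmetic (the 200-250 instructions-per-pixel figure is just the estimate above, not a measured number):

```python
# How much per-frame work has to disappear to go from 35 fps to 60 fps.
current_fps, target_fps = 35, 60

current_ms = 1000 / current_fps        # ~28.6 ms per frame
target_ms = 1000 / target_fps          # ~16.7 ms per frame
reduction = 1 - target_ms / current_ms
print(f"Required cut in per-frame work: {reduction:.1%}")        # ~41.7%

# Applied to the rough 200-250 instructions-per-pixel estimate above:
for budget in (200, 250):
    gone = budget * reduction
    print(f"{budget} instr/pixel -> roughly {gone:.0f} must go away "
          f"(or run on more parallel hardware)")                 # ~83-104
```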

Mantle, the latest editions of DirectX 11 (I know Windows 8.1 is unpopular for other good reasons, but the graphics stack is much improved), and the latest OpenGLs all do this. I'm sure the consoles do too, though they're not coded to publicly known APIs: those 8 cores have to be good for something, and multithread rendering is about the best thing they're good for in a console. That's why they're of such great and immediate interest.
 
I heard the 880 is gonna be cheaper than the 780 Ti...
But then would that make it a less powerful GPU? I mean, if it's cheaper and better than the 780 Ti, that would be too good to be true. But then again, I heard the Titan is more expensive than the 780 Ti and less powerful as well.
 
But then would that make it a less powerful GPU? I mean, if it's cheaper and better than the 780 Ti, that would be too good to be true. But then again, I heard the Titan is more expensive than the 780 Ti and less powerful as well.

Not at all. Different generation, different architecture. When there is an 880, it won't be Kepler technology, it will be Maxwell (even if it is a reworked 28nm Maxwell). We've already seen what Maxwell can do on a small scale with the 750/750 Ti.

Whether it's cheaper or not is a different question. If they can't get 20nm into production, it probably won't be cheaper, because big chips are expensive to make.

Anyway, Titan's not for gamers, it's for number crunchers, and even if it's not the equal of the 780/780Ti at crunching textures, it blows the magic smoke off them in double precision. In its intended market, it's a flaming bargain.
 
http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663-7.html
This link really gets me thinking. It shows how the 7990 runs Crysis 3 much better than the 780 Ti, and that's because Crysis 3 is AMD-friendly. TW3 will probably run better on Nvidia GPUs, and that's one reason why I'm interested in getting an Nvidia GPU. Of course, there was not that big of a difference when Crysis 3 launched, but now there is because of the updates as well as Mantle.
 
GTX 580 Release - November 9th, 2010
GTX 680 Release - March 22, 2012
GTX 780 Release - May 23, 2013
GTX 880 - Fall 2014? - Has to be this year, right?

They might release the 880 before TW3 comes out. For some reason, in my gut I think it will be out next year. I hope not.
 
http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663-7.html
This link really gets me thinking. It shows how the 7990 runs Crysis 3 much better than the 780 Ti, and that's because Crysis 3 is AMD-friendly. TW3 will probably run better on Nvidia GPUs, and that's one reason why I'm interested in getting an Nvidia GPU. Of course, there was not that big of a difference when Crysis 3 launched, but now there is because of the updates as well as Mantle.

A 7990 is a dual-GPU card... that is why it runs it better. Compare the 7990 to 780 SLI or 780 Ti SLI and then you get the real picture.
 
So how long do you guys generally last with your CPUs? I'm asking because in the summer of 2015 I'll be upgrading my 2500K at 3.3 GHz to either a Haswell (E?) or a Broadwell. The reason I'll be upgrading my CPU is that I've set my sights on a GTX 800 series card, probably the GTX 880, and it just doesn't feel logical to run a sparkly new GPU on a 5-year-old CPU. So how much gain should I get from upgrading my 2500K? It feels kind of stupid saying this, but it may affect the advice you'll give:
money isn't really an issue

Would you guys consider it fairly necessary, a complete waste of money, or somewhere in between?

Props to all you guys informing the less informed, btw; good stuff going on here.
 
Well, the i5 2500 is a pretty strong CPU even now. In all the game benchmarks I check (regularly; they test different CPUs' performance), this CPU is on par with the i7 2600/i5 4670, with a max of 5 frames' difference.
If I were you, I would just spend my money on a new GPU. Your CPU is more than enough for current games, unless they are not optimized, of course (ahem *Ubisoft* ahem).
 
Upgrading CPUs is mostly a waste of money. An upgrade isn't worth it until you actually need to run programs that require more performance than you get now.

Dual-core Core 2s and nasty old things like the Athlon 64 X2 and original Phenoms are the only things that really have to be upgraded. In particular, any Core i7 and any Sandy Bridge Core i5 don't need an upgrade unless you're a professional number cruncher.

Heavy SLI or Crossfire setups that push the bandwidth of earlier PCI-Express systems are the only things where the CPU is likely to become a bottleneck.

Everything else is just having the latest CPU for bragging rights.
 
Upgrading CPUs is mostly a waste of money. An upgrade isn't worth it until you actually need to run programs that require more performance than you get now.

Dual-core Core 2s and nasty old things like the Athlon 64 X2 and original Phenoms are the only things that really have to be upgraded. In particular, any Core i7 and any Sandy Bridge Core i5 don't need an upgrade unless you're a professional number cruncher.

Heavy SLI or Crossfire setups that push the bandwidth of earlier PCI-Express systems are the only things where the CPU is likely to become a bottleneck.

Everything else is just having the latest CPU for bragging rights.

Which is kind of sad when you think about it; the CPU market has become very predictable and slow when it comes to new tech that actually matters.
 
Which is kind of sad when you think about it; the CPU market has become very predictable and slow when it comes to new tech that actually matters.

Well, the high performance computing market (servers, pro workstations, number crunchers) needs those new CPUs. Every bit of straight performance and performance per watt means money in the bank to them. For ordinary gamers, the point of diminishing returns is probably around the Sandy Bridge Core i5. But even in the high performance market, the advances over Sandy Bridge-E are merely fractional.
 
Which is kind of sad when you think about it; the CPU market has become very predictable and slow when it comes to new tech that actually matters.

In the consumer CPU market, yeah. Mostly because there's no demand for more powerful CPUs, and also because AMD isn't much of a competitor in the consumer space.
As far as I know, it's a lot different in the professional market: servers, low-power chips for mobile, and various other things.
That is where Intel is putting most of its effort right now, trying to compete with ARM and Qualcomm (losing pretty badly, mind you).
They even spent a ridiculous amount of money on a new fab plant a while back for this stuff, and it was mostly a waste of money.
Edit: Guy N'wah pretty much beat me to it. :p
 