Obvious Downgrade

If you would bother to open the link, you would see that the post contains links to other articles which state this.
Those articles come from people who played the game at recent hands-on sessions.

The post on GAF is just a summary containing translations from Polish sites.

Yes, let's not question their "translation", or the actual previews themselves. Where did you get the idea that the game BARELY ran on a 980? Almost nothing I have read suggests that any of the presentations had critical performance issues.
 

Those are well-respected benchmarking sites in Poland. They are not going to trash CDPR (their own countrymen) for no reason, and they did not exactly trash them; they tried to be very positive. They simply said the game was most likely seeing FPS in the teens, had pretty big drops in wooded areas, and seemed to be locked at 30 (probably adaptive half-refresh vsync in the Nvidia control panel).

They said there was a lot of optimization left to be done and that the game was running at high settings, with no HairWorks. They DID remain skeptical that a "miracle" of optimization would occur.

Why? Because they know when this game releases, and they have probably looked at a lot of games before release to have a barometer.
 
NeoGAF is probably the BEST source for video game news. It is heavily vetted; astroturfers from companies are quickly IP-banned.

Also? The statements from NeoGAF? I linked the Polish press where those statements come from.

As the mods are so fond of saying? Stop throwing around FUD...

Its "vetting" process is mystifying: there is no indication of what it actually involves, and it doesn't demand that its leakers identify themselves. It is turning into a salacious rumor mill comparable to Kotaku.
 

Again I ask: where did they get these numbers from? Did they actually have the build on hand, in their own machines with analysis hardware? If the answer is no, then they are full of shit. What did they do, "eyeball" it at the press event or something?
 
Last edited:
^ Definition of FUD.

Unified memory pool? The SDKs give access to roughly 4.5 GB TOTAL between VRAM and game memory; the rest is reserved for the OS. Both consoles are in fact flawed: the PS4 on the CPU side due to memory (you want low latency for the CPU), which is why the PS4 has had lower FPS in open-world games like AC Unity, and the Xbox One on GPU RAM, where you want bandwidth.

As long as the PC has enough VRAM (3 GB is more than either console can push; the most a PS4 has used in a game is about 2.5 GB) and system RAM (and everyone builds with 8 GB), the PC has the IDEAL RAM setup.
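For a rough sense of scale, here is a back-of-the-envelope sketch using the approximate figures from this thread; the console reservation and per-game usage numbers are the thread's own estimates, not official specs.

```python
# Rough memory budgets, using the approximate figures quoted in this thread
# (console OS reservations and per-game VRAM use are estimates, not specs).

console_total_gb  = 8.0   # unified pool on PS4 / Xbox One
console_game_gb   = 4.5   # roughly what the SDKs expose to a game (CPU + GPU combined)
console_vram_seen = 2.5   # highest GPU-side usage claimed in this thread

pc_vram_gb        = 3.0   # e.g. a 3 GB card such as an R9 280
pc_system_ram_gb  = 8.0   # a typical gaming build

pc_total_game_gb = pc_vram_gb + pc_system_ram_gb  # two separate pools, not unified

print(f"Console budget for a game : ~{console_game_gb} GB shared (CPU + GPU)")
print(f"PC budget for a game      : ~{pc_vram_gb} GB VRAM + ~{pc_system_ram_gb} GB system RAM "
      f"= ~{pc_total_game_gb} GB across two pools")
print(f"VRAM headroom over the biggest console usage seen so far: "
      f"{pc_vram_gb - console_vram_seen:.1f} GB")
```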

API? Yup, open-world games like The Witcher 3 would have benefitted a hell of a lot more from something like AMD Mantle than from GameWorks. That is beside the point, though. The 1.6 and 1.7 GHz CPUs in the consoles, even with a low-level API, can't come close to an i5K/i7K. We may not see a locked 60 due to the API, though. The consoles are at 30, though...

So IF this game uses the high end of what a PS4 can push? Yup, a 2 GB GTX 770 would suffer a little bit in comparison. A 150-dollar R9 280, a 3 TFLOP GPU with 3 GB of VRAM? It should be blowing the 1.31 TFLOP XB1 GPU and the 1.84 TFLOP PS4 GPU away.

Something like a STOCK R9 290? That's roughly 5 TFLOPs.
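For reference, those TFLOP figures fall out of the usual shader count × 2 ops per clock × clock speed arithmetic; a quick sketch (the clocks below are reference/boost values, so real sustained numbers vary a bit from card to card):

```python
# Where the TFLOP numbers come from: shaders x 2 FMA ops per clock x clock (GHz).
# Clocks are reference/boost values; sustained clocks vary in practice.

def tflops(shaders, clock_ghz, ops_per_clock=2):
    """Theoretical single-precision throughput in TFLOPs."""
    return shaders * ops_per_clock * clock_ghz / 1000.0

gpus = {
    "Xbox One (12 CUs)": (768,  0.853),
    "PS4 (18 CUs)":      (1152, 0.800),
    "R9 280":            (1792, 0.933),
    "R9 290":            (2560, 0.947),
    "GTX 980":           (2048, 1.126),
}

for name, (shaders, clock) in gpus.items():
    print(f"{name:18s} ~{tflops(shaders, clock):.2f} TFLOPs")
```

That works out to roughly 1.31 for the XB1, 1.84 for the PS4, 3.3 for the R9 280 and about 4.9 for a stock R9 290, which is where the figures in this post come from.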

Sorry, but the API thing is BS when these consoles can't run a game like Guild Wars 2 even with a low-level API. Low-level APIs WILL help PC, but stating that we are behind with our current API? Not even close. Not on a decent CPU.

Okay, calling FUD on my post with such an aggressive tone is uncalled for. For the last time, calm down. I share your position for fuck's sake, I'm arguing technical points and you don't know me. So stop going for the jugular whenever anyone tries to discuss anything with you. Please :(.

Regarding the unified pool of memory, it goes deeper than that. First of all, I guess you are talking about the lowest common denominator that is the X1; 4.5 GB is in the right ballpark for that case. The point is still that while committing draw command lists, the CPU doesn't have to hang on for as long as on PC. That's a fact. There are a ton of advantages to this configuration but also drawbacks. I agree both on the topic of GDDR5 latency and the X1's bandwidth issue; those are also well-known facts. But I think what kills the performance is simply the lack of raw horsepower that PCs can provide.

Similarly, another fact is that today I have more transistors in my graphics card than in the entirety of a PS4, and the 980 has twice as many ROPs as the GCN GPU in a PS4. But saying a PC has an ideal memory setup is not quite right; there is still a latency problem inherent to PCI-E. I agree with the overall hardware overview, but that was beside my point.

I don't like talking in TFLOPs, and although the orders of magnitude warrant it, you have no idea how much time and performance is lost at the API level depending on the workload. You are also ignoring the specific hardware configuration of the consoles, which allows them better efficiency. You win out in raw power, but it's just not that simple. You might call that BS if you want, but all the major rendering engineers of this generation will beg to differ with you.

Finally, what I meant to say is the following: TFLOPs don't scale that well on today's hardware, for two reasons: the rendering API and various bottlenecks. There's no way you can render a game at 4K and 60 fps on a GTX 970 because of its fillrate. Does all that horsepower translate into performance? Absolutely, but it doesn't scale along hard numbers the way you believe it does. I'm only trying to explain why.

Edit: TL;DR, just because the PS4 can dish out 1080p/30fps with X TFLOPs, it doesn't mean a PC with 2X TFLOPs should dish out 1080p/60fps. 8X TFLOPs doesn't mean you can run 8x SSAA or play the game in 8K if your card can't handle the bandwidth required for that.
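A quick pixel-count sanity check of that TL;DR; this is pure arithmetic on resolution, framerate and sample count, and deliberately ignores fillrate, bandwidth and the CPU, which are exactly the separate ceilings being described:

```python
# Pixel-throughput multipliers relative to 1080p/30fps. Shader TFLOPs is only
# one ceiling; fillrate, memory bandwidth and the CPU/API are separate ones,
# which is why doubling TFLOPs doesn't simply double the framerate.

def pixels_per_second(width, height, fps, samples=1):
    return width * height * fps * samples

baseline = pixels_per_second(1920, 1080, 30)

targets = {
    "1080p / 60 fps":          pixels_per_second(1920, 1080, 60),
    "4K / 60 fps":             pixels_per_second(3840, 2160, 60),
    "8K / 30 fps":             pixels_per_second(7680, 4320, 30),
    "1080p / 30 fps, 8x SSAA": pixels_per_second(1920, 1080, 30, samples=8),
}

for name, pps in targets.items():
    print(f"{name:26s} {pps / baseline:3.0f}x the pixel work of 1080p/30")
```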
 
Last edited:

Here is where you are right. Downclock an FX-6300 (consoles are also limited to 6 cores on the SDK, though the XB1 is opening up half a core, at the cost of possibly hindering multitasking/OS stuff) to 1.6 or 1.7 GHz and the console will outperform it as far as bottlenecking a card goes.

You are talking about 4 GHz AMD chips, though, and Intel chips, which are much faster in single-core performance.

If the argument is an i3? It can come close. An i5? No. As for why I was "harsh": I have seen the unified pool sold as the greatest thing ever by many media outlets, and they are 100 percent full of it. The 1.6 GHz CPU in the PS4, with high-latency RAM across 6 cores, is horrible for this kind of game, yet they will get near or at a locked 30 on it.

Now, something like an i7-920? Same with the FX-8320s and FX-6300s? Yup, those are going to skyrocket on a low-level API. i5K/i7K? Meh. We will go from 55 FPS to a locked 60 or above. Yup, it will be nice, but we are already way ahead of the consoles. Not much gain on console games. PC exclusives like MMOs and Star Citizen? Sure.
 
Last edited:
I see your point about the CPU frequencies, and you are very right about that. You are also right about the way the press talked about the consoles' architecture, but that is entirely unsurprising to me :).

The problem I still see with CPUs is the API. We can have twice or thrice the frequency and single-core performance of a console, with less latency, but the API will still shoot us in the knee in terms of overhead, latency and draw-call parallelism... There is a reason Metal, Mantle and OpenGL have seen such renewed interest while Microsoft caught a lot of well-deserved flak for single-handedly holding PC gaming back for the last, what, 5 or 6 years?

I'm not sure I understand your last point about low-level APIs and the i5/i7. Could you clarify? What I think I understand is that we are locked at 60 anyway, so there's not much gain to be seen from our CPUs' performance. If this is what you meant, I'd reply that this is true, but the problem is that if the API is creating the bottleneck, then no CPU can save us.
 
But I think what kills the performance is simply the lack of raw horsepower that PCs can provide.

I don't know if you are a console fanatic or just a CDPR fanatic, because you are talking nonsense while using some technical knowledge, so you can't just be someone uninformed.

Since when does the PC lack anything compared to consoles? Since when is PCI-E a problem for latency, and since when do those latencies cause any problems in GAMES? Stop... just stop.

An i7 at 3.8 GHz plus a GTX 980 is at least twice a PS4; that's not arguable.
 

Some of the newer open-world games aren't even hitting 60 on an overclocked 4770K/4790K; we are at like mid-50s on the lows. I expect GTA V will be the same. It will help, I am just saying not by much on SOME CPUs if the game has to run on a console as well. The CPUs with a lot of cores/threads that are slow per core are gonna see the biggest jump (the claimed 50 percent, and only in some genres of games).

Far Cry 4 not hitting a locked 60 and stuttering? That is due to horrible threading and Ubisoft being incompetent or simply not caring. So a low-level API will help a lot (it will help with laziness as well, since the Xbox API will be so similar), and I don't want to downplay it as something we don't need. We definitely need it, because most people are NOT on overclocked i5Ks/i7Ks.

If CDPR said tomorrow, "we are gonna push textures and put in a 30 fps vsync cap as an option, and you should probably play it there," I wouldn't even care, to be honest. It isn't a competitive FPS or racing game. It's not CDPR's fault that DX11 is currently the standard.

I don't see any excuse for an FX-6300 or an i3 with HT having trouble hitting the same FPS as a console, though. Getting near 60 with those CPUs on our current API? Yeah, that ain't gonna happen in an open-world game.

What do I care about? LAZY or rushed ports on the PC side, and using the lowest console as a baseline for textures. I think that sucks. Why? Cus if the textures are the same and it is a good port? I can downsample from high resolutions and make chicken salad out of chicken @$%#. If the textures are bad AND the port is bad or mediocre? Then we are just stuck with mediocre textures and nowhere to go.

I would rather see The Witcher 3 delayed a month on PC and get an A+ port than have a bad port. I would wait a while for higher-quality textures. I don't understand why this wasn't a PC-first game to begin with, downscaled from there, though. Seems backwards to me.
 
This seems to be Aliens: Colonial Marines, Watch Dogs, Dark Souls 2, etc., etc., etc... déjà vu.
CDP remade the engine for parity across all platforms.
The PC version is a port of the console version (that explains the BAD performance on high-end machines).
CDP added a few filters (Nvidia-exclusive) to dress up the PC version.
Without an Nvidia card you don't get the filters, and that means the PC version = the PS4/X1 version.
 
Disclaimer: I'm no hardware or software engineer, merely an enthusiast; some of these concepts may be wrong, but I try my best.

It'll probably go to waste, but let me try to chime in...

A game that isn't optimized yet, because that phase is still ongoing, has to rely on raw hardware brute force. This proves absolutely nothing. If the game has performance like this at launch, then there would be something to it, but right now it means absolutely nothing. The game was running at roughly 30 fps when it was shown for the first time, back when it was nothing more than a modified TW2 on RedEngine2, on a GTX 680. I don't know where you saw performance figures mentioned, since it was a hands-on; nobody got to test the game's performance. So for the moment let's assume it was 30 fps. That means it's running around the same mark on a newer card without optimizations, and that has remained consistent, since optimizations are still ongoing. This is not a bad thing at all; if the performance is similar after optimizations, THEN it's worrisome.

Assets means things like textures and models; it has nothing to do with technology like HairWorks, TXAA, PCSS or anything of the sort. Those are additions to the PC version, so this sudden "where are they?" is total nonsense FUD-spreading.

It's a multi-platform game, and all 3 targeted platforms are x86_64 architecture (with extensions), so it's only natural that they share one code tree which later splits into 3 separate branches, one per platform. This again is not news; we're not in the PS3/360 era, where all 3 platforms were of different architectures. This is the preferred method of development.

Textures...

Theoretical: Yes, all three use the same assets, and? High-quality textures mean VRAM usage, and the consoles have 8 GB of shared memory. The newest GPUs have 4 GB on them, and 8 GB cards are much, much rarer still. You think a 4 GB card is gonna run textures of substantially higher quality than 8 GB?

Practical: 8 GB is probably not the entire pool available to the game; some of it is no doubt used by OS functions. That still leaves more than 4 GB in total for ALL video data. That's not how it works on PC; here the data has to be copied back and forth between system RAM and VRAM all the time.
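To put a rough number on that copy cost, here is a sketch assuming a PCIe 3.0 x16 link with roughly 16 GB/s of usable bandwidth; that figure is an approximation, and real throughput depends on the platform:

```python
# Rough cost of shuffling data between system RAM and VRAM over PCIe 3.0 x16.
# ~16 GB/s is an approximate practical figure; a unified console pool avoids
# this copy entirely, which is the efficiency argument being made here.

PCIE3_X16_GBPS = 16.0  # approximate usable bandwidth, GB/s

def upload_time_ms(megabytes, link_gbps=PCIE3_X16_GBPS):
    """Time to push a buffer across the bus, in milliseconds."""
    return megabytes / 1024.0 / link_gbps * 1000.0

frame_budget_ms = 1000.0 / 30.0  # ~33 ms per frame at 30 fps

for mb in (64, 256, 512):
    t = upload_time_ms(mb)
    print(f"Streaming {mb:4d} MB costs ~{t:5.1f} ms "
          f"({t / frame_budget_ms * 100:4.1f}% of a 30 fps frame)")
```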

TFLOPs are useless as a metric in this day and age; hardware instruction sets run our world.

Last but not least, the API.
DirectX 11 is a high-level API that has been shown to hit its limit at around 10,000 draw calls, and even that is pushing it; on top of that, it submits them to the GPU one by one, down a single lane.

Consoles use a low-level API, and a low-level API can push as many as 100,000 (or more) draw calls. I obviously don't know the specifics of the console APIs, but that's beside the point; the point is that the difference is enormous. Machines can extract A LOT more juice out of the same hardware, and the calls aren't forced down a single lane either. Add the reduced overhead of the API itself and you can see the advantage they have.
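As a back-of-the-envelope illustration of why those draw-call counts matter, here is a sketch where the per-call CPU overheads are purely assumed numbers chosen to land near the figures above; they are not measurements of DX11 or of any console API:

```python
# Illustrative draw-call budget: how many calls fit in one frame's CPU time if
# each call costs X microseconds of driver/API overhead. The overheads below
# are assumptions for the arithmetic, not measured values for any real driver.

def calls_per_frame(frame_ms, overhead_us_per_call, submit_threads=1):
    """Draw calls that fit in one frame's CPU budget."""
    return int(frame_ms * 1000.0 / overhead_us_per_call * submit_threads)

frame_ms = 1000.0 / 30.0  # ~33 ms at 30 fps

# High-level API: heavier per-call overhead, effectively single-lane submission.
print("High-level API :", calls_per_frame(frame_ms, overhead_us_per_call=3.0))

# Low-level API: much lighter calls; command lists can also be built on several
# threads, which multiplies this number further.
print("Low-level API  :", calls_per_frame(frame_ms, overhead_us_per_call=0.3))
```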

TL;dr version:
So what am I really saying?

Consoles are stronger than current PCs? No.

Consoles are using the hardware wayyy more efficiently than current PCs? Yes and it would be short-sighted to underestimate that and put your fingers in your ears to yell "PC! PC! PC! PC!" and ignore it.

It's premature to worry about performance right now because optimizations are still underway; do not believe random framerate figures that anybody makes up. It was a hands-on event; nobody could have tested performance in that environment. Adam Badowski is not a technical engineer; he is making an estimate based on what he knows or what he was told, not quoting a figure set in stone.

All platforms use the same architecture, so it's natural that they share a codebase; this is not unusual.

Assets like textures and models being the same is not unusual either. That depends on the resource budget, not on what platform the game is running on. Nor does a regular PC have so much more VRAM than the consoles that it could use exceptionally higher-quality textures.

Video cards use normal RAM too, not just GDDR, and systems can run on GDDR; the difference is in speed and latency.
Which is exactly what I'm saying: data has to be shuffled between both, and one of the two is slower, so the whole process gets slowed down.

So my computer has 12 GB of RAM: 8 for the system, 4 on the video card. Have you seen any game using 8 GB of RAM? LOL, 3.5 at most, not even close to 4.
This completely ignores the concepts of Large Address Aware, the pagefile and commit memory.

Consoles are computers too and need system RAM.
No, it's a shared pool; the whole point is to save performance by not needing to copy the data twice. That's the entire concept behind APUs: reducing redundancies.
 
Last edited:
All this discussion is nonsense. All games have the same problems and performance issues at some point; the worst we saw were Unity and Watch Dogs, and they got improved with patches. DAI works pretty well on all cross-generation platforms. Now we are digging for an excuse for why this game doesn't run well on PC? No, sir. I will tell you why this happens.

It happens because the game is AMAZING, no doubt, but it's designed and optimized for consoles, especially the PS4. The PC version of The Witcher 3 is a PORT, and the game was simply delayed for the consoles; everyone is waiting, including CDPR itself. So the game will come out, running amazingly, and be the GOTY of 2015... for consoles. PC Master Race? ... good luck, pray to god for a miraculous optimization in the final weeks.
 
It may be simpler than that: maybe the game is simply still unoptimized for PC. Maybe they used settings straight from the consoles, or only minimally tweaked them, counting on the raw power of the test configurations. That doesn't mean the final product will have the same performance. All of you are right or wrong simply because we have to wait; it's still too early to judge. It's ALWAYS too early to judge if you don't know the background; technical knowledge can't be of much help here. At least that is my way of thinking.

EDIT: Sidspyker got here first and explained it better :p I agree with him.
 
Last edited:
There is an even simpler explanation: Consoles+greed=doomed the moment it was announced. There are around half a billion games that show this.

I mean, you all know what happens to every single game that suddenly is made for consoles, right?
 
Last edited:

Baloney.

A PC exclusive was not a viable proposition at the time this game was planned, and it is not one now. Not at this scale, not at this quality, not in a world where developers have to be paid. Without the console market, there isn't enough money for this game to exist for you to buy on PC.

So it's time to accept not only that the console market contributes to the success of this game, but also that the consoles in some ways exceed the performance of PCs, and that they are not simply there to be disparaged as a poor man's platform for inferior games and blamed for supposed downgrades of cross-platform games.
 
I don't know if you are a console fanatic or just a CDPR fanatic, because you are talking nonsense while using some technical knowledge, so you can't just be someone uninformed.

Since when does the PC lack anything compared to consoles? Since when is PCI-E a problem for latency, and since when do those latencies cause any problems in GAMES? Stop... just stop.

An i7 at 3.8 GHz plus a GTX 980 is at least twice a PS4; that's not arguable.

Erm... I never argued anything else. I was simply making the point that TFLOPs don't scale linearly. Of course the configuration you described is several times more powerful than a PS4. Did I sound like I was arguing the contrary? Sorry about that, if that's the case.

My goal was to discuss why architecture matters for performance scalability, and why raw numbers also need context. I'm not into being a fanatic for either cause, and I can't see why you would think that. I don't understand what you are getting at regarding latencies; can you clarify instead of attacking me?

Edit: based on your subsequent posts, I feel a bit saner seeing that you don't seem to understand the topic.


I completely agree with everything you are saying :). Slow multi-core CPUs are going to enjoy the new APIs a lot, and Ubisoft is the worst offender when it comes to shoddy ports; I'd wager they are not the least cause of the inflammatory comments in here. I would also really like to get assets of the highest quality possible, regardless of what the consoles are getting, if only to future-proof the game.

If there is one thing I am a "fanatic" about, it is opposing rushed PC ports and phony arguments in favor of performance/visual parity. I am expecting a lot from Star Citizen in this regard :).
 
Last edited:
There is an even simpler explanation: Consoles+greed=doomed the moment it was announced. There are around half a billion games that show this.

I mean, you all know what happens to every single game that suddenly is made for consoles, right?
"Doomed" what? I don't understand where all of this skepticism come from. The game I saw in action is far from doomed.
Comments like yours make me think that many gamers simply cannot manage in the correct way the informations the developers kindly provide them...
 