Tom's Hardware "Don't Trust Projekt Red system requirements"

The game is not even out yet, and they are already recommending you buy a brand new $1,800 computer.

While I definitely don't believe it's necessary, if you won't upgrade your computer for CP2077, then for WHAT game would you? Upgrading PCs is a way of life. If you have a gaming PC, you know that upgrading it becomes a necessity sooner or later.

I have been planning on upgrading for CP2077 for about two years. The annoying thing was the lack of information about what was needed and what was happening with the 30-series, but we are at least getting there.
 
I'm hoping for an answer to what GPU we need for 4K max eye candy at 60fps.

The likely answer: none exists.

Sure, you could argue this point. The trouble is that dedicated RT hardware requires die space, and that hardware has to go somewhere. That space could theoretically be allocated to other tasks, say "traditional" rendering, and it's not helping a great deal there when it's specialized toward RT, especially if the software (the game) isn't using RT.

Of equal worry, it's arguably not good when the goal of a hardware provider becomes locking features behind its own devices. It happens with operating systems all the time (Microsoft, Apple), and it happens with hardware; RT is an example of the latter. This type of behavior and its adjacent practices are rarely consumer friendly.

The major problem facing HW RT is that in order for it not to be an expensive gimmick, you need a paradigm shift in software. And you cannot have a paradigm shift in software if you don't have the hardware to support it. And the only way to get hardware that supports a software-side paradigm shift is if the industry can shift to it wholesale.

Which in turn requires AMD to be able to provide HW RT of similar capability, otherwise multiplatform releases would be impossible, since the new consoles run on RDNA 2. So unless RDNA 2 is capable of purely path-tracing-based rendering, RT will not be a significant factor in gaming for the next 7-year console cycle.
 
The major problem facing HW RT is that in order for it not to be an expensive gimmick, you need a paradigm shift in software. And you cannot have a paradigm shift in software if you don't have the hardware to support it. And the only way to get hardware that supports a software-side paradigm shift is if the industry can shift to it wholesale.

What do you mean by a paradigm shift in software (genuinely curious)? The largest hurdle with RT is that it's computationally expensive. Yeah, there are software-oriented methods where specialized RT hardware isn't needed to make these computations bearable. I was under the impression most of these were... faking it. It's possible I'm wrong, though. I won't pretend to fully understand the concepts behind these alternative methods.

Which in turn requires AMD to be able to provide HW RT of similar capability, otherwise multiplatform releases would be impossible, since the new consoles run on RDNA 2. So unless RDNA 2 is capable of purely path-tracing-based rendering, RT will not be a significant factor in gaming for the next 7-year console cycle.

Well, I've often wondered if it wouldn't be in AMD's best interest to go the other way with it. To clarify, I'm imagining a world where I look at an Nvidia GPU and see the flashy RT marketing, while on the other hand I see an AMD GPU that's ahead on traditional rendering: instead of fixating on RT, it devotes the extra die space toward traditional cores/hardware on its GPUs.

I say this because your claim that AMD needs to pursue RT may not be completely accurate (not saying you're wrong, just giving a different perspective). Few games right now use RT, and most that do aren't using it in many areas of the game, at least as I understand it. Yes, more will come down the road. But... what if I'm not playing those games, or feel enabling those features tanks performance too much? Having an alternative GPU focused entirely on traditional rendering techniques, and by extension devoting all its "GPU space" toward that end, could be a big deal.

Basically, swoop in under the radar and steal the rasterization crown for a lower price. Pursue the RT functionality as a side project until the hardware "catches up" to the intense computational requirements of RT.
 
No connection as such.

Then what's the connection between people using ray tracing and people using 4K monitors?

Personally, I don't even think about buying a 4K monitor, as I consider it a waste of PC processing power that could be spent on better settings or more fps. But I am interested in RT, to the point that I will try it with CP2077 to see if I can "at least" lock the game to 30fps with it.
 
They really should have listed resolution and FPS with the requirements.
The industry standard is 1080p and 30fps; there's no need to state it in the requirements.
"Recommended" kind of means "If you're reading this, we assume you're a noob, so it's more than guaranteed that your rig sucks; invest in these and you'll be fine."
 
What do you mean by a paradigm shift in software (genuinely curious)? The largest hurdle with RT is that it's computationally expensive. Yeah, there are software-oriented methods where specialized RT hardware isn't needed to make these computations bearable. I was under the impression most of these were... faking it. It's possible I'm wrong, though. I won't pretend to fully understand the concepts behind these alternative methods.

A shift from rasterization-based real-time graphics to path-tracing-based real-time graphics, which would mean effectively starting from scratch in terms of optimal core design: throwing away all that is known about how to do computer graphics and starting afresh. Even the current RT cores still use some rasterization hacks to smooth things out.

It's the complexity of the various hacks that make rasterization computationally cheap that makes AAA-tier computer graphics increasingly expensive.

Path tracing simplifies that pipeline, which is why the pre-rendered CGI industry moved from rasterization to path tracing years back, completely as far as I know. Not an expert, mind.
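
To illustrate what I mean by a simpler pipeline, here's a toy CPU-side sketch (just the concept; the sphere scene, the sampling and the four-bounce cap are all made up for the example, not real engine or API code): one recursive loop gives you shadows, bounce lighting and emissive lights, where rasterization needs a separate hack or pass for each.

```python
import math
import random

# Toy path tracing recursion. The point is structural: direct light, shadows and
# indirect bounces all fall out of one loop, instead of separate raster passes.
# The scene, sampling and constants are invented for illustration only.

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def mul(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a): return mul(a, 1.0 / math.sqrt(dot(a, a)))

# Scene: (center, radius, albedo, emission): a "floor", a red ball, a light.
SCENE = [
    ((0.0, -100.5, -1.0), 100.0, (0.8, 0.8, 0.8), (0.0, 0.0, 0.0)),
    ((0.0, 0.0, -1.0), 0.5, (0.7, 0.3, 0.3), (0.0, 0.0, 0.0)),
    ((0.0, 3.0, -1.0), 1.0, (0.0, 0.0, 0.0), (4.0, 4.0, 4.0)),
]

def hit_sphere(center, radius, origin, direction):
    """Distance along a normalized ray to the sphere, or None on a miss."""
    oc = sub(origin, center)
    b = dot(oc, direction)
    disc = b * b - (dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-3 else None

def random_hemisphere(normal):
    """Random bounce direction on the hemisphere around the surface normal."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        if 0.0 < dot(d, d) <= 1.0:
            d = norm(d)
            return d if dot(d, normal) > 0.0 else mul(d, -1.0)

def trace(origin, direction, depth=0):
    if depth >= 4:
        return (0.0, 0.0, 0.0)                  # stop bouncing
    closest, best_t = None, float("inf")
    for sphere in SCENE:                        # nearest intersection wins
        t = hit_sphere(sphere[0], sphere[1], origin, direction)
        if t is not None and t < best_t:
            closest, best_t = sphere, t
    if closest is None:
        return (0.1, 0.1, 0.15)                 # dim sky when the ray escapes
    center, _radius, albedo, emission = closest
    point = add(origin, mul(direction, best_t))
    normal = norm(sub(point, center))
    # One recursion covers direct light, shadows and indirect bounces alike.
    incoming = trace(point, random_hemisphere(normal), depth + 1)
    tinted = (albedo[0] * incoming[0], albedo[1] * incoming[1], albedo[2] * incoming[2])
    return add(emission, tinted)

# Average noisy samples for a single camera ray; this noise is exactly what the
# denoising step in real-time RT has to clean up.
samples = [trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(64)]
print(mul((sum(s[0] for s in samples),
           sum(s[1] for s in samples),
           sum(s[2] for s in samples)), 1.0 / len(samples)))
```

The flip side is the noise: you need a lot of samples per pixel, or aggressive denoising, before that average settles down, which is exactly where the hardware cost comes in.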

Such a shift would require a company with the necessary tech and the daring to do it. Problem is, while nVidia has the tech, they don't have the daring. They like being on top too much to chance anything on a Hail Mary.

AMD, meanwhile, seems to be more willing to pull off risky moves. Not that they have a leadership position to lose. But while they may have the daring, they don't have the tech.

Well, as far as we know...

Well, I've often wondered if it wouldn't be in AMD's best interest to go the other way with it.

[...]

Basically, swoop in under the radar and steal the rasterization crown for a lower price. Pursue the RT functionality as a side project until the hardware "catches up" to the intense computational requirements of RT.

That would essentially be killing the golden goose: exchanging the future viability of your company, or at least the GPU side of it, for short-term profits.

Besides, if AMD wants to win big, it needs to do the exact opposite. Ideal for them would be to come out with a core design that can go full path tracing but can also brute-force emulate rasterization: a "miracle" core that can compete with, or at least lose only slightly to, nVidia's CUDA and RT cores simultaneously, and is thus capable of ushering in the future.

I get the feeling that is AMD's strategy: to come up with an instruction set that allows them to do at least a passable job of backwards compatibility, while possibly stealing the future crown from nVidia.

But that's for the generation after the next, unless AMD has managed to square that particular circle already. Probably not, but a man can dream.
 
While I definitely don't believe it's necessary, if you won't upgrade your computer for CP2077, then for WHAT game would you? Upgrading PCs is a way of life. If you have a gaming PC, you know that upgrading it becomes a necessity sooner or later.

Yes, upgrading is necessary, but as I said in my original post, it's better to wait and see what card/processor performs best within your budget. Tom's Hardware doesn't even know how CP2077 will perform, yet they made this entire article, which is all just speculation.

Wait until the benchmarking data comes out, then decide what you should upgrade. :beer:
 
But that's for the generation after the next, unless AMD has managed to square that particular circle already. Probably not, but a man can dream.

I think I figured it out: the path towards (no pun intended) path tracing is through APUs. That's why nVidia bought out ARM. They want to enter the CPU market and offload rasterization tasks to the APU, while having a GPU full of pure RT cores.

AMD is of course doing the same, and they have a lead in it.

It's gonna be a wild ride.
 
It's not wrong to recommend an RTX 3080 if the user wants to max out settings at 1440p or even 4K with all the ray-tracing stuff enabled. I even have doubts whether a 3080 can sustain 60fps under those conditions.
 
A shift from rasterization-based real-time graphics to path-tracing-based real-time graphics, which would mean effectively starting from scratch in terms of optimal core design: throwing away all that is known about how to do computer graphics and starting afresh. Even the current RT cores still use some rasterization hacks to smooth things out.

It's the complexity of the various hacks that make rasterization computationally cheap that makes AAA-tier computer graphics increasingly expensive.

Path tracing simplifies that pipeline, which is why the pre-rendered CGI industry moved from rasterization to path tracing years back, completely as far as I know. Not an expert, mind.

Such a shift would require a company with the necessary tech and the daring to do it. Problem is, while nVidia has the tech, they don't have the daring. They like being on top too much to chance anything on a Hail Mary.

AMD, meanwhile, seems to be more willing to pull off risky moves. Not that they have a leadership position to lose. But while they may have the daring, they don't have the tech.

Well, as far as we know...

Interesting... In other words, shift everything to RT instead of most things being done via rasterization with some RT functionality? Including the hardware design? This strikes me as unlikely to happen overnight, and I'd question whether it would be viable from a performance standpoint. Again, I was under the impression the concept itself is taxing on hardware. Is that impression wrong? Obviously, you could mitigate that by going all in and tailoring the hardware to it. As you said, that comes off as risky, though.

That would essentially be killing the golden goose: exchanging the future viability of your company, or at least the GPU side of it, for short-term profits.

Besides, if AMD wants to win big, it needs to do the exact opposite. Ideal for them would be to come out with a core design that can go full path tracing but can also brute-force emulate rasterization: a "miracle" core that can compete with, or at least lose only slightly to, nVidia's CUDA and RT cores simultaneously, and is thus capable of ushering in the future.

I get the feeling that is AMD's strategy: to come up with an instruction set that allows them to do at least a passable job of backwards compatibility, while possibly stealing the future crown from nVidia.

But that's for the generation after the next, unless AMD has managed to square that particular circle already. Probably not, but a man can dream.

I don't mean abandon RT entirely. I meant dedicating less focus toward releasing products with it, given that few games actually use the technology (investing in development on that front would be a different story). Again, more and more are beginning to do so, but it's still a small number, and this type of thing doesn't happen instantly. Hell, "new" hardware itself was likely designed and developed years before it became available. People see a new CPU and don't consider that it was likely first being developed well in advance.

In any case, I suspect AMD views it as perfectly acceptable if they are not competing with Nvidia at the top end for GPUs. Top-end GPUs are ridiculously expensive, and I doubt they make up a large percentage of the profits in the consumer space. As things stand, it would appear AMD has been competing a step or two back from the highest consumer GPU tier. There is nothing inherently wrong there, although I wouldn't be surprised if the next round delivers beyond expectations.
 
Interesting... In other words, shift everything to RT instead of most things being done via rasterization with some RT functionality? Including the hardware design? This strikes me as unlikely to happen overnight, and I'd question whether it would be viable from a performance standpoint. Again, I was under the impression the concept itself is taxing on hardware. Is that impression wrong? Obviously, you could mitigate that by going all in and tailoring the hardware to it. As you said, that comes off as risky, though.

While it is taxing, you also have to remember that there are far more rasterization cores per GPU die than there are RT cores. Now, whether or not a die full of nothing but RT cores could run path tracing and the necessary denoising at 60+ fps in real time? I don't know. But inevitably the answer will be yes, after a few die shrinks.
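
Just to put some very rough numbers on "taxing" (these are my own ballpark assumptions, not measurements or vendor figures), here's the kind of ray budget a naive real-time path tracer would imply:

```python
# Back-of-envelope ray budget for naive real-time path tracing.
# All numbers below are assumptions picked for illustration.

width, height = 2560, 1440      # assumed 1440p target
fps = 60
samples_per_pixel = 4           # leaning hard on denoising; offline CGI uses hundreds
rays_per_sample = 3             # primary ray plus a couple of bounces

realtime_rays = width * height * fps * samples_per_pixel * rays_per_sample
print(f"~{realtime_rays / 1e9:.1f} billion rays per second at 4 spp")    # ~2.7

offline_like = width * height * fps * 256 * rays_per_sample
print(f"~{offline_like / 1e9:.0f} billion rays per second at 256 spp")   # ~170
```

Even with denoising carrying most of the load, that's billions of rays every second before you count traversal and shading work, so "after a few die shrinks" is doing a lot of heavy lifting.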

But I see a few paths to it that could be used, especially on the high end.

There is the dual-GPU path, where you have one more traditional high-end card that handles both rasterization and RT, working together with a pure-play RT card in SLI or CrossFire. Though I think this would be a temporary solution at best.

Then there is the APU path, where you push rasterization to the GPU component of the APU and essentially make it a CPU workload. And given the recent rapid expansion of core counts, I could easily see a circumstance where the difference between serial workloads and parallel workloads is increasingly blurred.

These serial tasks and parallel tasks are not, and will not be, mathematically identical. But neither are integer and floating-point calculations, and those have been unified into the CPU, with floating-point calculation having been handled by separate co-processor chips in the past. And since APUs have been a thing for a while, I don't see a reason this wouldn't be the future of rasterization-optimized parallel tasks.

I don't mean abandon RT entirely. I meant dedicating less focus toward releasing products with it, given that few games actually use the technology (investing in development on that front would be a different story). Again, more and more are beginning to do so, but it's still a small number, and this type of thing doesn't happen instantly. Hell, "new" hardware itself was likely designed and developed years before it became available. People see a new CPU and don't consider that it was likely first being developed well in advance.

AMD is also the current chip-design partner for the consoles, so any capability that future games will likely optimize for is also found on the consoles. Or rather: the consoles, being a fixed spec for the next 7-year cycle, will define what game studios optimize for.

Microsoft is also pushing for RT with DXR, so having RT on Xbox is a given. And since AMD is the company supplying the chips for Xbox, the real question is: how far has this technology been pushed?
 
I absolutely disagree. 45fps on average is not only totally fine and all you need, but about what the majority of PC players will probably be able to run with their hardware.
I absolutely disagree. I expect 60ish fps in order not to feel downgraded. Competitive games run at as high an fps as possible (100-200 range), but on medium settings, since graphics matter less in those games.

Of course, action RPGs tend to run at lower fps due to having more eye candy. But that doesn't mean it's an acceptable limit to me anymore.
 
I absolutely disagree. I expect 60ish fps in order not to feel downgraded. Competitive games run at as high an fps as possible (100-200 range), but on medium settings, since graphics matter less in those games.

Of course, action RPGs tend to run at lower fps due to having more eye candy. But that doesn't mean it's an acceptable limit to me anymore.

Sure, yeah, in competitive games. Edit: that's a totally different ballpark. It's like saying you should expect a certain quality of bat in home baseball because the Olympics (I mean, you playing baseball at home with family and friends vs. actually competing at any standard) use a certain standard.

I mean, I guess we will have to wait and see how Cyberpunk is handled since it's an FPP, but until that time I totally stick by 30-45 fps being a reasonable standard and anything over that as... I actually forget the word here. Edit: I think the word I was looking for is luxury. It's like, shitting on a comfortable toilet in your home is reasonable, but then someone says you should expect a heated seat. Yeah, of course sitting down on that is way better, you can't argue. But it doesn't mean it should be set as the standard.
 
It's not wrong to recommend an RTX 3080 if the user wants to max out settings at 1440p or even 4K with all the ray-tracing stuff enabled. I even have doubts whether a 3080 can sustain 60fps under those conditions.

I'd really like to know if a 3080 can sustain 60fps at 4K.
Also how a 3090 handles the game.
 
Basically their conclusion is that if you want the best experience, get an RTX 3080. What are your thoughts?


From the reports I'm seeing, the 3000 series may have come out earlier than it should have, as there have been major stability problems with the cards. And while it's hoped it's just a driver issue, there is a chance that it's a design problem and the capacitors used are not up to the job of properly running the cards.
 
While it is taxing, you also have to remember that there are far more rasterization cores per GPU die than there are RT cores. Now, whether or not a die full of nothing but RT cores could run path tracing and the necessary denoising at 60+ fps in real time? I don't know. But inevitably the answer will be yes, after a few die shrinks.

Maybe so, but if the concept itself is taxing, it's difficult to navigate around it. Based on my rudimentary knowledge of RT, it comes off as an innately taxing process.

There is the dual-GPU path, where you have one more traditional high-end card that handles both rasterization and RT, working together with a pure-play RT card in SLI or CrossFire. Though I think this would be a temporary solution at best.

I had thought about this as well: a separate piece of hardware for RT functionality, an "add-on" device or component. Again, I won't pretend to be an expert on the ins and outs there. Intuitively, I would expect this to present its own technical challenges.

From the reports I'm seeing, the 3000 series may have come out earlier than it should have, as there have been major stability problems with the cards. And while it's hoped it's just a driver issue, there is a chance that it's a design problem and the capacitors used are not up to the job of properly running the cards.

Depends on whether you believe them. A lot of these tech sites/YouTubers are far less knowledgeable about the topics they cover than they'd like their audience to believe (I could give very specific examples, although I'd rather not), especially down to the level of circuitry.

They're new cards. New hardware often has growing pains associated with it. It's a big reason I don't tend to adopt the latest hardware as it's released. Instead I'll wait a year or so before jumping into that pond. The same is true for software in many instances.
 