Building a gaming PC

Samsung B-die is fairly temperature-sensitive, and even modest RAM temps can cause instability, which is why I'm using the G.Skill Turbulence III cooler. It actually *does* make a difference (when pushing high frequencies and voltages).

Otherwise mobo/CPU temps are fine. I went with the Aorus Master board partly because of its beefy 12-phase power delivery and partly because of its impressive VRM cooling.

 
I built a 9900K/2080Ti PC late last year (since I couldn't wait for Ryzen 3000) and the performance is just mind-bogglingly amazing. TW3 runs like an absolute dream, and I can't wait to play CP2077 on this setup as well.

I am available for adoption, just give me a week or 2 so I can break the news to my wife and kids.
 
CPU: i9-9900K @ 5GHz
MB: Gigabyte Z390 Aorus Master
Memory: 4x8GB Trident Z 3200/CL14 @ 4133/CL17
GPU: MSI RTX 2080Ti Gaming X Trio (1.96GHz/975mV w/ Afterburner voltage curve)

Going for almost the same setup (16GB RAM instead of 32, plus an LG 34GK950F monitor) and can't wait. PC should arrive this Wednesday.

Just to give you perspective, current setup:
CPU - i5 3350P
GPU - Geforce GTX 760
RAM - 16GB DDR3 1666MHz
Monitor - some 21" IPS from LG
 
Ultrawide gaming is sweet and I've heard good things about that LG. I've had my X34A for over 3 years already and it still manages to impress me. And now with my current setup I feel like I can finally fully utilize it even in AAA games.
 
I built a 9900K/2080Ti PC late last year (since I couldn't wait for Ryzen 3000) and the performance is just mind-bogglingly amazing. TW3 runs like an absolute dream, and I can't wait to play CP2077 on this setup as well.

CPU: i9-9900K @ 5GHz
Cooler: Corsair H150i Pro Push/Pull
MB: Gigabyte Z390 Aorus Master
Memory: 4x8GB Trident Z 3200/CL14 @ 4133/CL17
GPU: MSI RTX 2080Ti Gaming X Trio (1.96GHz/975mV w/ Afterburner voltage curve)
PSU: EVGA SuperNOVA 850 P2
SSD: MX100 256GB, 850 EVO 500GB, MX500 2TB
HDD: 2x4TB WD drives
Case: Define S
Monitor: Predator X34
Audio: Audio-gd NFB-11 & Violectric V200, Sennheiser HD800 (SuperDupont mod)
OS: Windows 10 Pro

Even went as far as OC'ing the 9900K to 5GHz and my 4x8GB 3200/CL14 Samsung B-die memory kit to 4133/CL17. Actually had to apply a RAM cooler to keep the memory 100% stable at 1.46V DRAM voltage.

Memory timings and performance:

Memory stability testing was performed with Karhu Software's RAM Test at roughly 5000% coverage.

That is a beast of a system -- looks really nice! I'll make my usual plea, for the love of all hardware everywhere: disable the OC and run things at default clock speeds. There's probably no discernible gain (nothing that will actually affect gameplay) given the power that thing puts out at default... but it's still stressing your system and decreasing the lifespan of your hardware, not to mention introducing potential stability problems that might not otherwise exist.

In 2004, I bought a Falcon-NW Mach V system as a big treat for myself (just over $4,000) and had a great talk with the tech that configured the system with me. I had always built my own systems, and was big into overclocking, but he convinced me that it really did more harm than good, and was largely unnecessary to get absolutely screaming performance. He was right. Gave that system to my nephew in 2007, and it was still able to run Skyrim smoothly with low-medium graphics in 2011. Haven't overclocked a single system I've built since then.
 
I don't really OC out of necessity; I just like to fiddle with my system and squeeze as much performance out of it as possible.

For instance, I'm running my GPU roughly 100MHz above its factory OC, but since I've optimized it with Afterburner's voltage curve, it's running cooler and quieter than at factory settings.

I'm actually planning on delidding my chip at some point and putting it on direct die cooling so I can go past 5GHz:


Besides, manually overclocking your memory actually makes a lot of sense considering how unstable factory XMP profiles can be. And you definitely get a noticeable performance boost from going past the 2133MHz DDR4 spec.
 
Careful with delidding. More often than not, you can either ruin the chip, or get a result that's a lot worse than it was by default. Not sure what that guy in the video suggests, but unless you have a lot of money to spare, it's risky.

Also, going forward, these overclocking frequencies will be a thing of the past. Once Intel moves to 10nm, they'll face the same thermal limitations AMD did with Zen 2.
 
I'm well aware of the risks and I'm not planning on doing it anytime soon. It's just something I've been interested in doing somewhere down the line.
 
In 2004, I bought a Falcon-NW Mach V system as a big treat for myself (just over $4,000) and had a great talk with the tech that configured the system with me. I had always built my own systems, and was big into overclocking, but he convinced me that it really did more harm than good, and was largely unnecessary to get absolutely screaming performance. He was right. Gave that system to my nephew in 2007, and it was still able to run Skyrim smoothly with low-medium graphics in 2011. Haven't overclocked a single system I've built since then.

OCing definitely won't provide huge performance gains but it's unlikely to cause any long-term harm. Not anymore anyway. Not unless you're pushing the components to the limit and/or don't know what you're doing.

Much of it isn't even about pure performance; it's about efficiency. Case in point: my 8700K "thinks" it needs higher voltage than necessary for a given frequency (i.e., the programmed VIDs on the CPU are well in excess of what those frequencies actually need). Part of the reason I OC'd it is to run it at lower voltage. While this doesn't necessarily mean less measurable wear and tear, it does mean less heat. Of course, I'm not really pushing anything to the limit; there is little reason to do so for typical use.

Careful with delidding. More often than not, you can either ruin the chip, or get a result that's a lot worse than it was by default. Not sure what that guy in the video suggests, but unless you have a lot of money to spare, it's risky.

I mean... nowadays you can find tools specifically built for delidding. It's not a terrible risk to engage in the... procedure. Granted, it depends on the CPU. Some have the die soldered to the IHS; others use cheap, garbage thermal compound between the IHS and die. In the former case, yes, there is really no reason to delid. The latter case is a different story.

I believe 9000-series Intel chips use solder between the die and IHS (8000-series use shit thermal compound, #fukyouintel), as do most AMD chips. So yeah, delidding in that case is probably pointless.
 
The STIM in the 9900K leaves a lot to be desired. It's better than what Intel usually puts between the CPU and the heatspreader, but delidding (and especially putting the CPU on direct die) should lower temps quite significantly.

I did buy der8auer's Delid Die Mate 2 and the OC Frame for Intel 9th gen last time I ordered stuff from Germany, just in case. Figured the latter especially, being such a novelty item, might be out of stock by the time I'm ready to delid.
 
OK, listen up you primitive screwheads!

SSDs...

Another trap, I say... too many connection types, which I find confusing. Is M.2 the best choice for a desktop? M.2 SATA III, that is?
Whatever, can somebody explain to me which option is best for a desktop?

Or is it strictly dependent on the mobo of choice?
 
OK, listen up you primitive screwheads!

SSDs...

Another trap, I say... too many connection types, which I find confusing. Is M.2 the best choice for a desktop? M.2 SATA III, that is?
Whatever, can somebody explain to me which option is best for a desktop?

Or is it strictly dependent on the mobo of choice?

Quick answer:
1.) M.2 is the best option. Downside: cost.
2.) Standard SSDs are fine. You can get a lot more storage this way, and it will only scratch the read/write times a little bit.
3.) Major care is only needed if there's not a lot of space in your case.

Longer answer:
Between an HDD and an SSD of any kind, the difference in load times can be huge. Quite literally, what a fast HDD does in minutes, a slow SSD can do in seconds. However, you'll likely see little to no discernible difference between a standard SSD and an M.2. (e.g. How much weight do we want to put on a 20 second load time dropping down to 16 seconds?)

That being said, M.2 is still the better option if you can afford it. Would I spend the money to upgrade from a standard SSD to an M.2? No. Would I spend the extra money to build a new system with an M.2? Yes.

Pretty much any recent (gaming) motherboard should have the correct slots for either.

Another consideration is the amount of space it will take up in the case. Standard SSDs will need a bay slot, power cables, etc. An M.2 will plug directly into a slot on the board, and that should be it. However, certain builds with multiple GPUs, custom cooling, etc., may find that their rigging is putting pressure on the M.2 drive -- and that is definitely something I don't want to do.
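To put rough numbers on the HDD-vs-SSD load-time point above, here's a back-of-the-envelope sketch. The asset size and drive speeds are illustrative assumptions, not measurements of any specific hardware:

```python
# Back-of-the-envelope load-time sketch. Asset size and drive speeds below
# are illustrative assumptions, not benchmarks of any particular drive.
ASSET_MB = 8000  # suppose a game reads ~8GB of assets on load

drives = {
    "7200RPM HDD": 100,   # MB/s, typical sequential read
    "SATA SSD": 530,
    "NVMe SSD": 3500,
}

for name, mb_s in drives.items():
    seconds = ASSET_MB / mb_s
    print(f"{name:12s}: {seconds:5.1f}s pure sequential read")
```

Real loads also involve decompression and small random reads rather than one big sequential stream, which is why the SATA-to-NVMe gap in practice is much smaller than the raw throughput numbers suggest.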

 
I wouldn't worry about space for storage so much as wear. I.e., higher-capacity SSDs tolerate a higher number of writes, so they have better durability. That's why I usually buy something like a 1TB NVMe drive for the main system. The newest Samsung EVO NVMe drives, for example, have dropped in price quite significantly recently.

However, you can't put all your games on it anyway. That's why I have the NVMe for the main system and /home, plus another big magnetic HDD for actually installing games, especially huge ones, mounted at $HOME/games. I wouldn't want my NVMe SSD filled up with them. I don't care much about loading times for games, but I do appreciate a fast system boot, fast log-in, fast application start times (due to shared libraries sitting on the NVMe), and fast operation of VMs, which I put on the NVMe as well. For all of the above, an HDD would be annoyingly slow. Game loading times? Not something that bothers me much.
 
Just to add to what was mentioned above, it's helpful to consider what the various terms mean. M.2 is a drive form factor, along the same lines as "2.5 inch". NVMe, SATA, and PATA are interface protocols. So you can have an M.2 or 2.5-inch SATA drive. You can also have an M.2 NVMe drive. Odds are you will not see any major speed increase in the former comparison: they're both SATA drives, just in different form factors. You likely will see speed differences when comparing SATA vs. NVMe M.2 drives, however.

In terms of best... I suppose it depends. NVMe M.2 drives are probably the "fastest" option available. They're also considerably more expensive relative to M.2 SATA or 2.5-inch SATA drives. As with all hardware, it depends on what you're asking it to do. The posts by Gilrond and Sigil above cover some of those considerations.
 
For gaming, the difference between M.2 NVMe and SATA is anywhere between zero and minor. So unless you're doing productivity work that actually benefits from those high read/write speeds, I'd rather go for a higher-capacity standard SATA drive. But your mileage may vary, of course.
 
So just to put some numbers to this...

Modern motherboards use SATA III, which maxes out at a throughput of 600MB/s (6Gb/s link rate), or 300MB/s for SATA II, in which case it's time to upgrade. Via that connection, most SSDs will provide read/write speeds in the neighborhood of 530/500 MB/s. For comparison, a 7200 RPM SATA drive manages around 100MB/s depending on age, condition, and level of fragmentation.
NVMe drives, on the other hand, provide read speeds as high as 3500MB/s. That's roughly 7x over SATA SSDs and as much as 35x over spinning HDDs!
(from https://www.velocitymicro.com/blog/nvme-vs-m-2-vs-sata-whats-the-difference/)
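As a quick sanity check on those figures (the 600MB/s cap and the 7x/35x speedups), here's a short sketch using the typical drive speeds quoted above; the exact numbers vary by model:

```python
# Sanity-checking the quoted figures. The SATA link rate is in gigabits/s
# and uses 8b/10b encoding, so 6Gb/s works out to ~600MB/s usable bandwidth.
sata3_mb_s = 6 * 1000 / 10   # 6Gb/s -> 600 MB/s after encoding overhead

hdd_mb_s = 100       # typical 7200RPM sequential read
sata_ssd_mb_s = 530  # typical SATA SSD read
nvme_mb_s = 3500     # high-end PCIe 3.0 x4 NVMe read

print(sata3_mb_s)                 # 600.0
print(nvme_mb_s / sata_ssd_mb_s)  # ~6.6 -- the "7x" claim is rounded up
print(nvme_mb_s / hdd_mb_s)       # 35.0
```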


YET... look at the price:

Crucial P1 1TB, 3D QLC NAND, M.2 2280, PCIe NVMe 3.0 -- $99.00 (USD)
Read: 2,200MB/s / Write: 1,700MB/s

Crucial MX500 1TB, 3D TLC NAND, SATA III 6Gb/s, 2.5" internal -- $109.99 (USD)
Read: 560MB/s / Write: 510MB/s

from https://www.microcenter.com
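For a cost angle on those two listings, dollars per gigabyte works out nearly identical, so at these prices you're paying about the same per GB for roughly 4x the sequential read speed (figures taken from the Microcenter prices above):

```python
# Cost/performance comparison of the two Microcenter listings quoted above.
drives = [
    # (name, price in USD, capacity in GB, sequential read in MB/s)
    ("Crucial P1 NVMe (QLC)", 99.00, 1000, 2200),
    ("Crucial MX500 SATA (TLC)", 109.99, 1000, 560),
]

for name, price, gb, read_mb_s in drives:
    print(f"{name}: ${price / gb:.3f}/GB at {read_mb_s} MB/s read")
```

Raw sequential speed isn't the whole story, though: QLC vs. TLC NAND matters for sustained writes and endurance, as the next post points out.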
 
@Flyddon

In terms of real-world performance in gaming, the difference between SATA and M.2 NVMe drives is extremely minimal:


Also, bear in mind you're comparing a QLC NAND drive against a TLC NAND drive. The latter is better in terms of both performance and longevity.



A TLC NVMe drive costs 250 bucks.
 
In terms of real-world performance in gaming, the difference between SATA and M.2 NVMe drives is extremely minimal:

This is the crux of what I use to make my decisions about what I will or won't put into a system, and what I will upgrade to over time. While numbers (benchmarks, comparison graphs, stress testing, etc.) are all well and good to collect accurate data to classify products...

...it creates a bit of static for consumer-end decisions. If I'm building a gaming system, I'm not going to spend the majority of my time running benchmarks and ensuring my read/write speeds are within the specs advertised on the side of the box. I'll be playing games and mostly enjoying how everything "feels". I.e., if I spend an extra $1,000 squeezing every last CPU and GPU cycle out of my state-of-the-art components, I'm going to launch into a game of Apex Legends within seconds and still sit there for the same amount of time I always have, waiting for that one potato computer to finish loading up.

Hence: middle of the road. Cost balanced against the features I'll actually notice and use, not the numbers.
 