Holdup. Doesn't Windows 10 turn off pagefile on SSDs by default? Or are we talking about a completely different system here? Also, how can I tell if I have the virtual memory enabled or disabled?
Virtual memory is enabled by default on all drives, HDD or SSD alike. You can check the settings by right-clicking This PC ---> Properties ---> select Advanced System Settings on the right-hand side ---> select the Settings button under the Performance section ---> select the Advanced tab:
This will show you your present virtual memory size.
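If you'd rather check without clicking through dialogs, the same configuration is also visible in the registry. Here's a minimal sketch in Python (Windows only, standard-library winreg); my reading of the entry format is noted in the comments and is an assumption, not official documentation:

```python
# Minimal sketch (Python 3, Windows only): read the paging-file configuration
# that the Virtual Memory dialog displays, straight from the registry.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")  # REG_MULTI_SZ

# My reading of the entries (assumption):
#   []                              -> no paging file (virtual memory off)
#   ["C:\\pagefile.sys 0 0"]        -> system-managed size on that drive
#   ["C:\\pagefile.sys 4096 8192"]  -> custom min/max size in MB
if not paging_files:
    print("No paging file configured -- virtual memory is disabled.")
else:
    for entry in paging_files:
        print("Paging file entry:", entry)
```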
It should be left Windows-managed in almost all cases. One exception is if you're using a really old HDD, say a 5400 RPM drive from the early 2000s, and have no SSD installed. In that case, it may be a good idea to set your minimum paging file size to 0.5x your RAM and your maximum paging file size to 1.5x your RAM.
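To put numbers on that 0.5x/1.5x rule of thumb, here's a rough sketch (Python 3, Windows only) that reads installed RAM through the standard kernel32 GlobalMemoryStatusEx call and prints the values you'd type into that dialog. The multipliers are just the rule of thumb above, not anything Windows enforces:

```python
# Rough sketch of the 0.5x / 1.5x sizing rule for an HDD-only system.
# Reads installed RAM via GlobalMemoryStatusEx (ctypes, no extra packages).
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_uint32),
        ("dwMemoryLoad", ctypes.c_uint32),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),
        ("ullAvailPageFile", ctypes.c_uint64),
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

ram_mb = status.ullTotalPhys // (1024 * 1024)
print(f"Installed RAM:             {ram_mb} MB")
print(f"Suggested paging file min: {ram_mb // 2} MB (0.5x RAM)")
print(f"Suggested paging file max: {ram_mb * 3 // 2} MB (1.5x RAM)")
```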
On SSDs, setting a fixed paging file like this is strongly discouraged. The SSD's controller wear-levels the cells used for virtual memory, making sure the same physical cells are not written to over and over again; hammering the same cells repeatedly shortens their lifespan, and thus the overall drive's by proxy. One great way to kill sectors on your SSD years in advance is to define a fixed paging file, which, by its nature, creates and maintains a set of contiguous blocks on the drive for virtual memory, and those blocks get hit every time Windows writes to the paging file. (It's for similar reasons that you don't want to defragment an SSD, nor reformat it when reinstalling Windows. There's no need; it simply stresses the drive.)
That's a lot of text to explain fragmentation and the function of file systems. It doesn't address my original question, though. Why would you use a possibly fragmented file on a slow disk for swap when there is plenty of RAM available? Even if random disk writes are slow, do them to swap space in RAM, then copy that part of RAM to the required RAM locations much more quickly than you could load any on-disk swap file.
And I was under the impression that, generally, virtual memory had little to do with speed and was mostly a measure to work around insufficient RAM. Hell, I've even seen people build boot setups that load the whole system disk into a virtual RAM disk, disable the HDD altogether, and the system is insanely fast.
I think you're talking about a few different and unrelated things here. Yes, RAM is faster than virtual memory. But no, it's not as simple as, "Just use RAM only, then."
- Why would you use a possibly fragmented file on a slow disk for swap when there is plenty of RAM available?
Easy answer: not all programs are written to recognize and use unlimited amounts of RAM. Games, especially, are written to operate within a set RAM budget. This is a primary reason, in gaming, to opt for faster RAM instead of more RAM if you have the choice. 64-bit processing is better at this, but a program still needs to be told where to put data and where to go to get what it needs. It's technically possible to write a program with instructions to "use all available memory addresses", but that's not very common: there's no guarantee it won't use too much and wind up interfering with Windows processes or other programs running in the background, creating issues. It's also hard to balance performance if I don't optimize for a specific minimum/maximum RAM range. Thus, having all that extra RAM is kind of pointless unless the program in question has been specifically written to use it.
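To illustrate what a "set RAM budget" looks like in practice, here's a toy sketch of a program streaming assets through a fixed-size cache instead of grabbing whatever memory happens to be free. The budget, block size, and AssetCache class are all invented for the example; no real engine is being quoted here:

```python
# Toy illustration of a fixed RAM budget (every name here is invented for
# the example). Instead of grabbing "all available memory", the program
# reserves a known amount up front and recycles it.
BUDGET_MB = 2048   # hypothetical budget the developer tuned for
BLOCK_MB = 64      # assets are streamed in fixed-size chunks

class AssetCache:
    def __init__(self, budget_mb, block_mb):
        self.max_blocks = budget_mb // block_mb
        self.resident = {}   # asset name -> loaded data
        self.order = []      # least-recently-used names first

    def load(self, name, loader):
        if name in self.resident:            # already within the budget
            self.order.remove(name)
            self.order.append(name)
            return self.resident[name]
        if len(self.resident) >= self.max_blocks:
            oldest = self.order.pop(0)       # budget full: evict the oldest
            del self.resident[oldest]        # ...to be rebuilt or paged back later
        data = loader(name)                  # fetch from storage (or the swap file)
        self.resident[name] = data
        self.order.append(name)
        return data
```

The point is simply that the cache behaves the same on a machine with 8 GB of RAM as it does on one with 64 GB: anything evicted is either rebuilt from storage or pulled back cheaply from the paging file, which is exactly the trade-off described further down.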
Additional confusion: fragmented? There's really no such thing as "fragmentation" on SSDs. On HDDs, it's one of the primary reasons to use virtual memory in the first place: HDDs were almost always significantly fragmented, which invariably slows down reads. By writing data to contiguous blocks before sending it to RAM, or by letting RAM offload its current data to contiguous blocks in the HDD swap file instead of simply dumping it, Windows made it much faster to retrieve that data when it was needed again. On SSDs it's still beneficial to do this, since it's still faster than rebuilding the data from storage sectors, and it avoids errors and improves stability.
- Even if random disk writes are slow, do them to swap space in RAM, then copy that part of RAM to the required RAM locations much more quickly than you could load any on-disk swap file.
What you're suggesting here wouldn't make sense in practice: if I have available RAM, the program would just use another cell; I wouldn't need to dump the existing data in the cells already in use. But there's no guarantee that any given user will have enough RAM, or enough free RAM, to load absolutely everything that's needed at any instant. Hence, it's better to keep things within a budget. It also doesn't deal with the issue of a RAM address needing to both offload data and retrieve it again later. To ensure that they have what they need -- on a range of different systems -- programs assign RAM addresses at launch, then work with those assigned addresses. Even if other memory addresses are available, coding a program to simply take whatever's free can introduce instability, as I discuss above. Better to reserve specific addresses and work within that budget. Thus, if I need to free up a memory address, I still need to do something with the data in that address. My choices are: dump it and rebuild it from scratch when it's needed again...or write it to the swap file so it's easy to load back into RAM when it's called for again.

So, while YES -- it's much, much faster to simply load everything directly into RAM and keep it there...that's not always possible. (Devs don't write programs to run on only one specific system configuration.) It's kind of silly to code something to work that way. It will wind up creating issues for users unless their systems are specifically configured to run that program, and they're left with enough RAM overhead to manage all of the other functions and active software on their system at the same time. There's no guarantee that this will be the case. So, a wonderful failsafe: reserve specific RAM addresses and use virtual memory for any swapping. A user is far less likely to completely run out of disk space than they are to hit a memory address error when the RAM is 100% full.
- And I was under the impression that, generally, virtual memory had little to do with speed and was mostly a measure to work around insufficient RAM. Hell, I've even seen people build boot setups that load the whole system disk into a virtual RAM disk, disable the HDD altogether, and the system is insanely fast.
Yup! But only in part. There were numerous reasons for creating virtual memory, and many new tricks in its application were developed over the years. What you're referring to, "Out of memory!" errors, haven't really been a thing since the very early 1990s. I'm speculating now, but I'd say the biggest reason to keep it around was the ever-growing multitasking functionality built into Windows. As the amount of "stuff" running in Windows simultaneously increased (anti-virus, web browsers, platform front-ends, social media, utilities, etc.) while programs like games got more and more demanding, it became increasingly difficult to predict what sort of system configuration a game would face when it loaded up. Virtual memory is a rock-solid method of ensuring that RAM can very quickly free up any memory address needed, with minimal slowdown.
Now, when we consider "power users" (because that's exactly what we're discussing here -- people trying to push their performance to the limits) what we're mostly looking at are four basic scenarios:
1.) The user simply has a system configuration that is exactly what the game/program is looking for, and they're able to improve loading times and performance without any issues. This is not guaranteed to work, however. There are too many different system configs out there; mileage will vary greatly. If it does work, though, knock yourself out! No reason not to do it if it's stable. Which leads me directly to:
2.) The user will gain performance at the cost of stability. Sometimes this seems like an ideal situation at a glance, especially after a fresh Windows installation. But once the system has been used for a year or so, I would not be surprised to see a menagerie of instability issues cropping up. (Ask me how I know!)
3.) It will work well for some games, but not for others. The end result is that some games can't possibly load everything into RAM at once, as that would cause crazy conflicts. Certain functions need to be loaded when called for, then offloaded when no longer needed. Once a drive starts getting full, this is going to cause slowdowns, stutters, etc., whether on an SSD or an HDD. The workaround is virtual memory, which keeps everything flowing well over time, even when the drives get cluttered.
4.) There's something else wrong with a user's system. If turning off virtual memory has that drastic of an effect on things, then there's likely something wrong with my PC. Having everything in RAM might mean that a few loading stutters are smoothed over. But if I'm seeing much higher FPS, or errors in a game clear up, or a program stops crashing -- that's indicative of a more serious issue with my drive or my file system. That's not a benefit of using only RAM; it means something in my system config is not working properly.