I solved the duplicate NPCs problem on my PC

Okay, this is working on my PC. I really don't know if this will also work for others but who knows?

There is a well-known tweak where the game setting "Slow HDD Mode" is set to On even when you are on an SSD. The idea behind it is that more of the game is loaded into memory at once, and that this is faster than streaming it in parts (or something like that; English is not my native tongue).

I had this set to On too ... and I just set "Slow HDD Mode" to Off, and the duplicate problem is basically gone :D There are still duplicates, but just the normal amount.
I tested it multiple times in front of V's Megabuilding and got the same result every time: Slow HDD Mode On = a LOT of duplicate NPCs and cars; Off = a normal crowd and normal traffic.
I also tested it in Lizzie's Bar: On = a lot of duplicates and not that crowded; Off = few duplicates and a fuller crowd.

Also, with the setting Off, the normal number of NPCs and cars is back on the street. With the setting On, there are far fewer NPCs and cars.

I really hope this works for others too.
 
Thanks for sharing this! But can someone explain/expand on this a bit more? What are the disadvantages of doing this?
 
A particular benefit of SSDs during actual gameplay is the speed with which they handle the swap file, which pre-caches data that will then be sent to RAM. An SSD can move larger chunks much more quickly than an HDD; conversely, an HDD trying to carry the same load will not be able to handle it smoothly. In lay terms, this results in stutters, slow texture decompression, pauses and hitches when loading into new areas, etc.

Slow HDD Mode is a feature that decreases the demand placed on spinning hard drives. One big thing with any procedural system is variety. The more variety I want, the more data it's going to include. The more data I include, the longer it's going to take to write to the swap file and then send it to RAM. The longer that takes, the worse my performance is going to be. By decreasing the number of variables (like...saaay...the number of individual NPC appearances that can be loaded at any one time), I can make the game run more smoothly on older hardware.

Thus, if you have an SSD -- don't use Slow HDD Mode. Only use this if you're running the game off of an older, spinning drive. Also, if you decrease the crowd size in the settings, you'll decrease the number of copies that appear while using Slow HDD Mode.
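To put a rough number on the "less variety means more visible copies" idea, here is a purely illustrative Python sketch (not game code; the pool and crowd sizes are invented) that estimates how many NPCs in a crowd end up sharing a look when appearances are drawn from a limited pool:

```python
import random

def npcs_sharing_a_look(pool_size: int, crowd_size: int, trials: int = 10_000) -> float:
    """Average number of NPCs whose appearance is also worn by someone else nearby."""
    total = 0
    for _ in range(trials):
        looks = [random.randrange(pool_size) for _ in range(crowd_size)]
        counts = {}
        for look in looks:
            counts[look] = counts.get(look, 0) + 1
        total += sum(c for c in counts.values() if c > 1)
    return total / trials

# Hypothetical numbers: a "full variety" pool vs. a reduced one.
for pool in (200, 40):
    print(f"{pool} looks available, crowd of 60: "
          f"{npcs_sharing_a_look(pool, 60):.1f} NPCs share a look on average")
```

The smaller the pool the game is allowed to keep loaded at once, the more obvious the duplicates become.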

_______________


In theory...!

:cool:

Remember that the dupe NPCs are going to be the result of more than just one setting. The amount and speed of the RAM you have installed, how much free space you have available for virtual memory, the processing speed of your GPU, other programs you have running in the background, the demand of other graphical settings you have turned up, etc., etc., etc. can all mean that when the game calls for "moar NPCs!" -- it just doesn't have the resources available at that instant to load another shuffle of NPC assets, so it will just populate new NPCs based on what's presently loaded into RAM.
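Purely as an illustration of that speculation (this is not CDPR's actual code, and every name in it is invented), a crowd spawner that falls back to whatever appearances are already resident in memory when the streaming budget runs out might look roughly like this:

```python
import random
from typing import Optional

class CrowdSpawner:
    """Toy model: prefer streaming a fresh appearance, fall back to reusing loaded ones."""

    def __init__(self, loaded_appearances: list, stream_budget: int):
        self.loaded = list(loaded_appearances)  # appearance assets already in RAM
        self.stream_budget = stream_budget      # how many new loads the system can afford right now

    def _try_stream_new(self) -> Optional[str]:
        # Pretend streaming succeeds only while I/O and memory headroom allow it.
        if self.stream_budget > 0:
            self.stream_budget -= 1
            new = f"appearance_{len(self.loaded)}"
            self.loaded.append(new)
            return new
        return None

    def spawn_npc(self) -> str:
        fresh = self._try_stream_new()
        if fresh is not None:
            return fresh
        # Budget exhausted: reuse something already in RAM, which shows up as duplicates.
        return random.choice(self.loaded)

spawner = CrowdSpawner(["appearance_0", "appearance_1"], stream_budget=3)
print([spawner.spawn_npc() for _ in range(10)])
```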

That is a lot of speculation, really, but you can see very similar "copies" being created by all sorts of different procedural systems for the same sorts of reasons: SpeedTree, Assassin's Creed crowds, or Total War units, for example. I remember a Rome 2 Total War campaign where all of my Cataphracts loaded with exactly the same armor and horses except for the one guy who had a different color horse and a different armor set. Bloody non-conformist.
 
If your speculation is correct, that means this system only works under 100% perfect conditions, conditions that we don't even know about or know how to achieve. So someone who runs every other game at full detail and 120 fps at 4K will scratch their head about why only Cyberpunk has problems while literally every other game on their machine works fine.
 
What if I don't have any swap space and have heaps of RAM? Or is that some sort of custom scratch file games use internally, regardless of system configuration?

BTW, if we're talking disks, I have another suspicion. At one point I started getting short freezes whenever the game auto-saves. With all system, game and other files on the SSD, I blamed some update. Then I disabled my big HDD and the freezes were gone. Thinking back, the freezes might have started when I added that disk. I wonder if opening a file on the SSD may cause Windows to refresh ALL drives, including HDDs, with the associated time penalty.
 
Not recommended, though it will still technically work. Many programs reserve specific memory addresses for specific functionality and rely on stacking the swap file with what will be needed when called. If you don't have a swap file standing by, the program is forced to fetch the data right from the actual installation blocks (which are not guaranteed to be contiguous) and then write it directly to RAM. Despite the fact that RAM is much faster than a spinning HDD can ever hope to be, this can create the opposite of the desired effect, as the HDD still needs to be read. And because of the way the drive heads have to fetch the data piece by piece, this can be way, way, way slower than just letting the drive create a new swap file while the RAM is handling other things. Virtual memory was first conceived to handle exactly this kind of reading from the drive / writing to RAM more efficiently.

Analogy:
RAM is a high speed bus that can pick up and drop off whole crowds of people at a time. Virtual memory is like telling anyone that wants to ride the bus to be at the bus stop at a specific time. The bus then grabs the whole crowd at once, and drops them all off at once. If you don't have a swap file, the bus needs to go pick up every single person at their individual houses before heading to the final destination. Regardless of how fast that bus is, it still needs to make all those stops. The "bus stop" method is almost always faster. (Of course, it's a little more complex than that, but that's the idea.)
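A toy Python model of that analogy, with invented costs (only the shape of the comparison matters): the batched "bus stop" pickup pays the big fixed cost once, while fetching every item individually pays it every time.

```python
SEEK_COST_MS = 8.0       # invented fixed cost per trip (seek / setup)
PER_ITEM_COST_MS = 0.2   # invented cost to move one item once we're there

def batched_fetch(items: int) -> float:
    """Everything waiting at one prepared 'bus stop' (the swap file): one seek, then transfer."""
    return SEEK_COST_MS + items * PER_ITEM_COST_MS

def individual_fetch(items: int) -> float:
    """Every item picked up from its own 'house' scattered across the drive."""
    return items * (SEEK_COST_MS + PER_ITEM_COST_MS)

for n in (10, 100, 1000):
    print(f"{n:>4} items: batched {batched_fetch(n):8.1f} ms  vs  individual {individual_fetch(n):8.1f} ms")
```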

So why do so many older games actually benefit from disabling virtual memory altogether? More because of the demands that games would try to put on the swap file combined with the clunky way that Windows used to handle it. (Windows XP was especially inefficient with it.) A much better idea was to create a standard swap file with set min/max size at 1.5x your total RAM (for gaming). That kept Windows from making...interesting...decisions concerning how to dynamically scale it on the fly.

Nowadays, Windows 10 handles things very efficiently, by comparison. There's zero reason to deny Windows virtual memory. Especially since SSDs are not all that dramatically slower than RAM itself, it simply leaves the PC another avenue to get things loaded and processed quickly.
 
Wouldn't Windows just swap to RAM if there is a sufficient amount of it (e.g. 32 GB)? AFAIK most guides even recommend disabling the swap file past a certain RAM size. Why would a modern engine on a modern computer not pre-fetch data directly into RAM and then just shift it to the required place? Would it be beneficial to make a RAM disk for the swap file, then?
 
In short...no. To all of that. There's absolutely no reason with modern systems to define a set paging file size. As I mentioned, in past versions of Windows this was advisable simply because of a.) how inefficiently Windows handled it and b.) the physical limitations of spinning HDDs with moving read/write heads. Modern Windows does not require or really benefit from a defined min/max size. Especially not on an SSD.

Virtual memory is part of how programs pre-fetch data. But that's not all it does. It also serves as overflow. If a defined memory address is holding data that needs to be freed up for some other function, the RAM can write that data to the swap file, receive the new data that the program is demanding, then immediately grab the other data back from the swap file. If I don't have the swap file, the data will need to be rebuilt from the various sectors around the drive before being ready in the RAM again. That's way, way, way slower. Even for an SSD.
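Here is a minimal conceptual sketch of that overflow role, with made-up names and structures (an illustration of the idea, not how Windows memory management is actually implemented): evicted data goes to a "swap" area instead of being thrown away, so bringing it back is cheap.

```python
class BudgetedCache:
    """Toy RAM budget: evicted entries are parked in 'swap' rather than discarded."""

    def __init__(self, budget: int):
        self.budget = budget
        self.ram = {}   # what is currently resident
        self.swap = {}  # what has been offloaded but is still cheap to restore

    def load(self, key: str, build) -> bytes:
        if key in self.ram:
            return self.ram[key]
        if len(self.ram) >= self.budget:
            # Offload the oldest resident entry to swap instead of dumping it.
            old_key = next(iter(self.ram))
            self.swap[old_key] = self.ram.pop(old_key)
        # Cheap path: restore from swap. Expensive path: rebuild from scattered storage.
        value = self.swap.pop(key, None)
        if value is None:
            value = build(key)  # stands in for re-reading files from all over the drive
        self.ram[key] = value
        return value

cache = BudgetedCache(budget=2)
for asset in ["textures", "audio", "npc_meshes", "textures"]:
    cache.load(asset, build=lambda name: name.encode())
print("in RAM:", list(cache.ram), "| parked in swap:", list(cache.swap))
```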

A note here -- when you install a game, it's not written and filed in a single place. Most file systems record data on an HDD or SSD by putting similar data extensions in the same place, regardless of what program they're associated with. Thus, very simply, .bin files are all stored in one spot, sound files are stored in a different spot, .exe files are all stored together, etc. The "file structure" you see when you open a "folder" in Windows is an illusion. There's no actual file on the HDD with everything inside it. It's just a visual representation of the way the file system has referenced various files to differentiate which ones are associated with which program.

Thus, when a program "loads", what it's doing that entire time, while you're watching a bar crawl across the screen or a circle fill up, is decompressing and copying specific files into the RAM, and preparing things like the swap file to get ready for what that program needs. If you deny this process virtual memory altogether, you're simply making it harder for it to get what it needs when it needs it. It's a lot faster to get the data copied into a swap file ahead of the game's call, or maintain what the RAM needs to offload in a contiguous block, rather than taking all the processing cycles and time accessing stuff from all over the drive the next time it's needed.

(Do keep in mind, too, that we're talking about processes that can be measured in milliseconds. While that might seem like a pointless consideration altogether, think of it this way. If I make something that would normally take 8 milliseconds take 32 milliseconds instead...I've just quadrupled the time it takes, a 300% increase. If I do that for the next 100 processes, I've just added 2,400 milliseconds of slowdown to the overall performance. A run of work that could have taken 0.8 seconds will now take 3.2 seconds. That's pretty noticeable, and we're talking about systems that perform millions of operations every second.)
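The same arithmetic spelled out with the example figures above:

```python
fast_ms, slow_ms = 8, 32
processes = 100

print("slowdown factor:", slow_ms / fast_ms)                              # 4.0x, i.e. a 300% increase
print("added delay:", processes * (slow_ms - fast_ms), "ms")              # 2400 ms
print("total:", processes * fast_ms, "ms ->", processes * slow_ms, "ms")  # 800 ms -> 3200 ms
```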

If you leave virtual memory on Windows defaults, you will ensure the most efficient performance for most games. If you deny Windows virtual memory, you are limiting the computer's ability to write / rewrite contiguous blocks of data straight into RAM. The days of "virtual memory is actually slowing down your games" are over.
 
Hold up. Doesn't Windows 10 turn off the page file on SSDs by default? Or are we talking about a completely different system here? Also, how can I tell whether I have virtual memory enabled or disabled?
 
That's a lot of text to explain fragmentation and the function of file systems. It doesn't address my original question, though. Why would you use a possibly fragmented file on a slow disk for swap when there is plenty of RAM available? Even if random access from the disk is slow, do it into a swap area in RAM, then copy that part of RAM to the required RAM locations much faster than you could ever load a swap file from disk.

And I was under the impression that virtual memory generally had little to do with speed and was mostly a measure to work around insufficient RAM. Hell, I've even seen people build boot setups that load the whole system disk into a RAM disk, disable the HDD altogether, and the system is insanely fast.
 
Nowadays, Windows 10 handles things very efficiently, by comparison. There's zero reason to deny Windows virtual memory. Especially since SSDs are not all that dramatically slower than RAM itself, it simply leaves the PC another avenue to get things loaded and processed quickly.
The good, very, very good reason is the limited number of write cycles on an SSD.
Windows doesn't need a swap file any more, especially not for caching, and yes, SSDs are still dramatically slower than RAM.
An NVMe M.2 SSD will max out at about 7.5 GB/s; RAM will hit more like 28 or 29 GB/s, at a fraction of the access times (roughly 1/1000th).
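For a rough sense of what those bandwidth figures mean in practice (using the numbers quoted above and ignoring latency, where the gap is far bigger still):

```python
chunk_gb = 4.0                    # hypothetical chunk of data to move
ssd_gbps, ram_gbps = 7.5, 28.0    # throughput figures quoted above, in GB/s

print(f"SSD: {chunk_gb / ssd_gbps * 1000:.0f} ms to move {chunk_gb} GB")  # ~533 ms
print(f"RAM: {chunk_gb / ram_gbps * 1000:.0f} ms to move {chunk_gb} GB")  # ~143 ms
```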
 
Hold up. Doesn't Windows 10 turn off the page file on SSDs by default? Or are we talking about a completely different system here? Also, how can I tell whether I have virtual memory enabled or disabled?
Virtual memory is enabled on all drives, HDD or SSD, by default. You can check the settings via right-click on This PC ---> Properties ---> Advanced System Settings (on the right-hand side) ---> the Settings button under the Performance section ---> the Advanced tab:

This will show you your present virtual memory size.
[Screenshot: the Performance Options dialog, Advanced tab, showing the current total paging file size]
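If you'd rather check from a script than click through the dialogs, here's a small sketch using the third-party psutil package (assuming it's installed, e.g. with pip install psutil); on Windows its swap figures reflect the paging file:

```python
import psutil

swap = psutil.swap_memory()  # on Windows this reports paging file totals and usage
print(f"total: {swap.total / 2**30:.1f} GiB")
print(f"used:  {swap.used / 2**30:.1f} GiB ({swap.percent}%)")
```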


It should be left Windows-managed in almost all cases. One exception to this is if you're using a really old HDD, say a 5,400 RPM drive from the early 2000s, and have no SSD installed. In that case, it may be a good idea to set your minimum paging file size to 0.5x your RAM and your maximum paging file size to 1.5x your RAM.
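As a worked example of that rule of thumb (illustrative numbers only):

```python
ram_gb = 16                          # hypothetical installed RAM
min_pagefile_gb = 0.5 * ram_gb       # 8 GB minimum
max_pagefile_gb = 1.5 * ram_gb       # 24 GB maximum
print(f"minimum {min_pagefile_gb:.0f} GB, maximum {max_pagefile_gb:.0f} GB")
```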

For SSDs, this is strongly discouraged. An SSD wear-levels the cells that are used for virtual memory, ensuring that the same physical cells are not being written to over and over again; hammering the same cells would decrease their lifespan and, by proxy, that of the overall drive. One great way to kill cells on your SSD years in advance is to define a fixed paging file, which will, by its function, create and maintain set, contiguous blocks on the drive to be used by virtual memory, and those cells will be hit every time Windows writes to the paging file. (It's for similar reasons that you do not want to defragment an SSD or reformat it when reinstalling Windows. There's no need; it simply stresses the drive.)

That's a lot of text to explain fragmentation and the function of file systems. It doesn't address my original question, though. Why would you use a possibly fragmented file on a slow disk for swap when there is plenty of RAM available? Even if random access from the disk is slow, do it into a swap area in RAM, then copy that part of RAM to the required RAM locations much faster than you could ever load a swap file from disk.

And I was under the impression that virtual memory generally had little to do with speed and was mostly a measure to work around insufficient RAM. Hell, I've even seen people build boot setups that load the whole system disk into a RAM disk, disable the HDD altogether, and the system is insanely fast.
I think you're talking about a few different and unrelated things here. Yes, RAM is faster than virtual memory. But no, it's not as simple as, "Just use RAM only, then."

  • Why would you use a possibly fragmented file on a slow disk for swap when there is plenty of RAM available?
Easy answer: not all programs are written to recognize and utilize unlimited amounts of RAM. Games, especially, are specifically written to operate within a set RAM budget. This is a primary reason, in gaming, to opt for faster RAM instead of more RAM if you have the choice. 64-bit processing is better at this, but a program still needs to be told where to put data and where to go to get what it needs. It is technically possible for programs to be written with instructions to "use all available memory addresses". This will not be very common, though, as there's no guarantee that it won't use too much and wind up interfering with Windows processes or other programs running in the background, creating issues. It's also hard to balance performance if I don't specifically optimize for a minimum/maximum RAM range. Thus, having all that extra RAM is kind of pointless unless the program in question has been specifically written to utilize it.

Additional confusion: fragmented? There's really no such thing as "fragmentation" on SSDs. On HDDs, this is one of the primary reasons to utilize virtual memory, because HDDs were almost always significantly fragmented, which invariably creates slowdown reading information. By writing data to contiguous blocks before sending them to RAM, or by letting the RAM offload its present data to contiguous blocks on the HDD swap file instead of simply dumping it, it made it much faster to retrieve the other set of data when needed once more. On SSDs, it's still beneficial to do this since it's still faster than rebuilding the data from storage sectors. It also avoids errors, and increases stability.

  • Even if random access from the disk is slow, do it into a swap area in RAM, then copy that part of RAM to the required RAM locations much faster than you could ever load a swap file from disk.
What you're suggesting here wouldn't make any sense in practice: if I have available RAM, the program would just use another cell. I wouldn't need to dump the existing data in the already utilized cells. But there's no guarantee that any given user will have enough RAM, or enough free RAM, to load absolutely everything that's needed at any instant. Hence, it's better to keep things within a budget.

It also does not deal with the issue of any RAM address needing to both offload data and retrieve it again later. In order for programs to ensure that they have what they need -- on a range of different systems -- they will assign RAM addresses at launch, then work with the assigned addresses. Even if there are other memory addresses available, I can introduce instability by just coding a program to take whatever's free, as I discuss above. Better to reserve specific addresses and work within that budget. Thus, if I need to free up a memory address, I still need to do something with the existing data in that address. My choices are: dump it and need to rebuild it from scratch when it's needed again...or write it to the swap file to make it really easy to load it back into RAM when it's called for again.

So, while YES -- it's much, much faster to simply load everything directly into RAM and just keep it there...that's not always possible. (Devs don't write programs to be run on only one, specific system configuration.) So it's kind of silly to code something to work that way. It's going to wind up creating issues for users unless their systems are specifically configured to run that program, and they're left with enough RAM overhead to manage all of the other functions and active software on their system, simultaneously. There's no guarantee that this will be the case. So, a wonderful failsafe: reserve specific RAM addresses and utilize virtual memory for any swapping. A user is far less likely to completely run out of disk space than they are to hit a memory address error if the RAM is 100% full.

  • And I was under the impression that virtual memory generally had little to do with speed and was mostly a measure to work around insufficient RAM. Hell, I've even seen people build boot setups that load the whole system disk into a RAM disk, disable the HDD altogether, and the system is insanely fast.
Yup! But only in part. There were numerous reasons for creating virtual memory, and many new tricks in its application were developed over the years. What you're referring to, "Out of memory!" issues, have not really been a thing since the very early 1990s. I'm speculating now, but I'd say the biggest reason to keep it around was the ever-growing amount of multitasking built into Windows. As the amount of "stuff" running in Windows simultaneously increased (anti-virus, web browsers, platform front-ends, social media, utilities, etc.) at the same time that programs like games were getting more and more demanding, it became impossible to predict what sort of system config a game would be facing when it loaded up. Virtual memory is a rock-solid method of ensuring that the RAM can very quickly free up any memory address needed with minimal slowdown.

Now, when we consider "power users" (because that's exactly what we're discussing here -- people trying to push their performance to the limits) what we're mostly looking at are four basic scenarios:

1.) The user simply has a system configuration that is exactly what the game/program is looking for, and they are able to improve loading times and performance without any issues. This is not guaranteed to work, however. Too many different system configs out there. Mileage will vary greatly. If it does work, though -- knock yourself out! No reason not to do it if it's stable. Which leads me directly to:

2.) The user will gain performance at the cost of stability. Sometimes, this seems like an ideal situation at a glance, especially after a fresh Windows installation. But once the system has been used for a year or so, I would not be surprised to see a menagerie of instability issues cropping up. (Ask me how I know! ;))

3.) It will work well for some games, but not for others. End result being that some games can't possibly load everything into RAM at once, as that would cause crazy conflicts. Certain functions need to be loaded when called for, then offloaded when no longer needed. Once a drive starts getting full, this is going to cause slowdowns, stutters, etc., whether on an SSD or HDD. A workaround for this is virtual memory, keeping everything flowing well over time, even when the drives get cluttered.

4.) There's something else wrong with a user's system. If turning off virtual memory has that drastic of an effect on things, then there's likely something wrong with my PC. Having everything in RAM might mean that a few loading stutters are smoothed over. But if I'm seeing much higher FPS, or errors in a game clear up, or a program stops crashing -- that's indicative of a more serious issue with my drive or my file system. That's not a benefit of using only RAM; it means something in my system config is not working properly.
 