I hate this ridiculous mod limit (CDPR please fix this)


Guest 3841499
Yes, and the question is whether this whole load/offload aggregation is related to the 12-mod limit? Why do you think they did it?
 
MonarchX;n10520322 said:
Yes, and the question is whether this whole load/offload aggregation is related to the 12-mod limit? Why do you think they did it?

I believe it was a guesstimate for how many "big" mods it would take to put the engine into a state that might prevent it from loading. Or, perhaps that's the maximum number of remaining functions the engine can keep track of by design. It's sort of moot as you're arguing from massive hindsight. When the REDKit was released, there were no tools like Mod Merger, no one had yet tried to cook various assets together, and the actual toolset provided did not include the ability to do that. I have to say that they might have underestimated people's dedication to getting their mods working with the game.

End result is, if the maximum RAM required to load the game assets + mod assets ever exceeds the maximum RAM available in the engine at any instant...crash. Either a limit was chosen to prevent this, or a limit already existed in the engine because that's how it was originally written. (Then, clever modders figured out how to circumvent the limitation by altering how the assets of various mods could be loaded.)
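For illustration only, here is a minimal sketch of the kind of fixed memory budget being described, where a load fails once base assets plus mod assets exceed the cap. The MemoryBudget struct and the numbers are invented for this example; nothing here is taken from REDengine.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical fixed engine-side budget: if vanilla assets plus mod assets
// exceed it, the load fails (in a real engine: a crash or refusal to load).
struct MemoryBudget {
    std::size_t capacity;    // maximum bytes the engine is written to use
    std::size_t used = 0;    // bytes already claimed by loaded assets

    bool tryLoad(std::size_t assetBytes) {
        if (used + assetBytes > capacity) return false;
        used += assetBytes;
        return true;
    }
};

int main() {
    MemoryBudget budget{4ull * 1024 * 1024 * 1024};       // pretend 4 GB cap
    budget.tryLoad(3ull * 1024 * 1024 * 1024);            // vanilla assets
    bool ok = budget.tryLoad(2ull * 1024 * 1024 * 1024);  // mod assets push it over
    std::printf("mod load %s\n", ok ? "succeeded" : "failed: budget exceeded");
}
```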
 

Guest 3841499
No, still makes no sense because it's all under an assumption that mod COUNT has more influence on the game performance/function than mod CONTENT. 15 mods with a single 128x128 texture each will prevent the game from loading, but my giant 3.2GB MergedPack makes the game load faster than vanilla (due to one of the mods disabling store videos). Where's the logic in that? Besides, you can cripple performance with scripts alone or even settings.

There is some kind of PER-BUNDLE limit. For example, HD Reworked Project 5.0 (HDRP5) has very large and heavy textures. If you try to uncook it, you get an OUT OF MEMORY error during the process. There is no way for mere mortals to merge it, but those who seek the way tend to find it. The way around that is to use Cache Viewer, extract textures from texture.cache in DDS format, convert them to TGA, cook them to get XBMs, uncook them, and finally recook HDRP5 in several parts. That's right: separate HDRP5 into 2-3 mods, all of which combined will result in an exact 1:1 copy of the original HDRP5, and you've got yourself a fully merge-friendly HDRP5. When uncooking the game itself, you never get an OUT OF MEMORY error, which I think WOULD happen if CDPR hadn't separated W3 content into several packages.

All-in-all, it's a mystery...
 
@bulkane
How was I disrespecting CDPR? I'm complaining about a feature that has gotten worse with every patch and update, to the point where we can now have barely any mods installed unless we use Mod Merger, something I'm pretty sure can be fixed (I do work in the games industry, you know). My annoyance comes from the fact that they said they would support modding. Clearly this hasn't really happened, so I think I have a right, as do many, to be a bit frustrated by this.

As for Mod Merger, no, pretty much all of mine are texture mods (not lighting) and they work perfectly together when I don't use the manager. However, if I do use the manager, only about half work (the recent popular face complexion textures stop working, as an example, and Yrden lighting stops working if merged with E3 Quen, though both work without the merger).
 
SigilFey;n10515592 said:
Most notably, there is a finite amount of memory (RAM) and other system resources that any engine is capable of utilizing. This is not governed by the amount of RAM on your system, but by the amount of RAM that the engine itself is coded to use.
SigilFey;n10520632 said:
the maximum RAM required to load the game assets + mod assets ever exceeds the maximum RAM available in the engine at any instant


 
This discussion about the mod limit doesn't make sense, because a 500 KB texture mod or a single w2ent tweak can trigger the mod limit, even if the mod isn't currently being used by the game.
 
MonarchX;n10520692 said:
No, still makes no sense because it's all under an assumption that mod COUNT has more influence on the game performance/function than mod CONTENT. 15 mods with a single 128x128 texture each will prevent the game from loading, but my giant 3.2GB MergedPack makes the game load faster than vanilla (due to one of the mods disabling store videos). Where's the logic in that? Besides, you can cripple performance with scripts alone or even settings.

Because these functions were introduced by modders -- that's not the way the mod kit was designed to work. You're arguing hindsight again. (Meaning: you're superimposing your present-day understanding of modding, following years of tools and techniques being developed by the community, and saying, "So why didn't CDPR do it this way from the beginning?" That's a fallacy of reasoning. Kind of like saying, "Well, why didn't Napoleon just create armored tanks, then he could have won at Waterloo!" Because tanks didn't exist back then.)

Even though CDPR has never explained exactly what the situation with the REDKit was, I get the distinct impression that it didn't work out as originally intended. I also get the impression that, at some point during the development of TW3, the decision was made to pull people off of the project, likely before it was "ready". This is all speculation on my part. As stated, CDPR has not made any sort of formal statement about this. But these things happen: limited hours, limited people, limited funds. Contracts, limitations, agreements. Terms, deadlines, and legal concerns. Like all business in any industry. Can't have everything.

If the REDKit had been developed further, perhaps it would have included such tools and workarounds by default. Maybe the engine itself would have been re-worked to allow for a higher limit, or the limit could have been based on the total "size" of assets that needed to be simultaneously loaded. Who knows? In the end, that's not what happened.

What you're identifying about the 3.2 GB MergedPack makes 100% sense as a workaround for the limit. Of course, it would load more quickly, since it's loading as one cache of assets.
1.) The engine has to load the first mod in my load order (Mod A). Then it loads Mod B, which is next in my load order, and compares its changes with Mod A.

2.) It overwrites anything in Mod A that "conflicts" with the changes from Mod B.

3.) Then it loads Mod C, comparing its changes against "the state of Mod A overwritten by Mod B".

4.) It overwrites any "conflicts" in either A or B with the changes introduced by C.

5.) It loads Mod D, comparing its changes against "the state of Mod A overwritten by Mod B with the remaining 'conflicts' in either A or B as overwritten by Mod C"...

And on and on, the process getting exponentially more complex with every mod added.

So, this starts gobbling up RAM fast, and is likely a key reason for the limit. Now, before anyone makes an argument about how "stupid" this is, consider this one fact:

There's no other way to do it. That's how computers need to work.
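A rough sketch of that last-wins merging, assuming for simplicity that each mod is just a map from asset path to asset data. This is purely illustrative and not how REDengine actually stores bundles.

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative only: asset path -> asset contents.
using Mod = std::map<std::string, std::string>;

// Later mods in the load order overwrite earlier ones wherever paths collide.
Mod mergeLoadOrder(const std::vector<Mod>& loadOrder) {
    Mod merged;
    for (const Mod& mod : loadOrder) {            // Mod A, then B, then C...
        for (const auto& [path, data] : mod) {
            merged[path] = data;                  // conflict: the later mod wins
        }
    }
    return merged;
}
```

Every additional mod means another pass of comparisons against everything already merged, which is the point being made above.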

It's not so big a deal with the much lighter and more modular system used in engines like Gamebryo / Creation Engine (there's far less that's loaded into RAM [or holding RAM in reserve] at launch, and it's easier for that engine to "reschedule" something needed by the game [since it was built to be modular]). By comparison, the RED Engine is a beast that gobbles up all the resources it needs for all of its processes up-front, leaving far less overhead in the engine to be used by "additional" functions. There are utilities that can be created that expedite the process of loading mods in either engine, like Mod Merger, WryeBash, LOOT, etc. But in order for these programs to be possible and work properly, a few things need to be in place:

1.) The assets in question must be explodable / re-compressible. As many games use their own, proprietary file systems, this is not always possible, and so modders are stuck using the default system. For example, Bethesda assets are notoriously buggy if a .BSA file is exploded, then re-compressed. The method of compression used by Beth to "cook" a .BSA file is not available publicly, and using existing systems can occasionally result in data corruption when the .BSA file is re-compressed.

2.) The changes introduced by various mods must be compatible. Like being able to merge various texture layers from various mods...that's not always how an engine reads texture packages. So it's not always as simple as exploding a texture file, taking just the maps I want, overwriting those files in another package, and re-compressing the file. Sometimes, each package must be completely redone, and various mods will never be compatible. (This is largely the case with most "texture packs" for Bethesda games.)

3.) Not all utilities work as well as they may initially appear to. It's passingly common for a community-made utility that is "so awesome and the developers should have done it this way from the beginning!!!" to utterly break the game in the long run. I distinctly remember a few versions of SKSE destroying two of my Skyrim playthroughs because some of its new tricks created endless loops in Papyrus scripts that would not manifest in the game until 50+ hours in, then all saves would be corrupted after that point. In the end, this is a sign of "terrible design and a complete lack of programming knowledge" by the authors -- who had no way of knowing how the source code would react to that technique over time.

So, it's a terrible habit to pass judgment on something simply because a better way of handling it happens to be discovered later on. That's called progress, and it deserves praise, not ridicule against those that "only" laid the very foundations upon which all else was built. :) And quite often, what appears to be progress is simply an unintentional shortcut that winds up breaking as much as it fixes. We live and learn. (Although, from what I'm seeing, the Mod Merger thing looks pretty solid in TW3's case!)


MonarchX;n10520692 said:
There is some kind of PER-BUNDLE limit. For example, HD Reworked Project 5.0 (HDRP5) has very large and heavy textures. If you try to uncook it, you get an OUT OF MEMORY error during the process. There is no way for mere mortals to merge it, but those who seek the way tend to find it. The way around that is to use Cache Viewer...

Possible, and this would be leaning toward the "upper engine limit" idea. Most engines have a maximum number of functions / variables that can be loaded into them at one time, and TW3 is definitely putting the engine through its paces under vanilla conditions. It's also somewhat dependent on the "worst-case scenario" the game can produce. (Meaning: the busiest scene with the most processes simultaneously running under the heaviest graphical load that the player will ever experience throughout the course of the game.) Limits on engines are sometimes set against such situations, so that no mods can ever exceed the limits of what the engine is capable of doing for that particular situation...even though a larger selection of mods would work just fine in every other part of the game.

Again, it's normal for Bethesda games to wind up in such situations, and there's not much in place to let you know that you've exceeded the engine's capabilities. That's why so many of the civil war overhauls for Skyrim had such issues. They just brought Papyrus to its knees and started kicking it while it was down. The mods were solid, but the engine couldn't handle it.
 
Murzinio;n10523232 said:

Was this a question?

EDIT: Ahhh... I think I see where the communication broke down. Super-simplified, hypothetical scenario (mostly because I hate math and want to avoid numbers. :)):

I have a PC with 32 GB of RAM, and a processor capable of 4.5 GHz.

I play a game that's on an engine written to use up to 8 GB of RAM and process at a clock speed at or below 2.0 GHz.

How much RAM will I have available in the game? Answer: LESS than 8 GB. The engine must obviously load its operating environment, and that alone is going to gobble up, say, 1 GB of RAM. Permanently. That means that even though my system has 32 GB, the engine is only written to recognize and utilize a maximum of 8 GB. It's 100% impossible to tell a program, "Just use unlimited RAM, loading unlimited processes into memory." The program won't know when to stop and will max out all available RAM at the speed of electrons, almost instantly crashing the whole system. I must always program a minimum operating environment and a maximum operating environment. Have to. Computers can't "think". Only calculate mathematical equations.

Similarly, how fast will the program run? Answer: it will never exceed 2.0 GHz, regardless of how powerful my CPU / GPU is. That's because there are functions that need to be sent, processed, and returned to be displayed at a specific rate at specific intervals. In the "old days" (...:confused:...), there were no such limitations, and we wound up with situations where games built to run on DOS 5.0 on a 286 processor would be unplayable on newer systems because they ran too fast. There was nothing in the game's engine to say, "Hey, humans can't go that fast. Play no faster than [this]." So, I remember getting one of the first Pentium II chips (...with MMX technology!...oh my god, I spent so much money on that rig...) and playing what I called "Ultima VII On Rollerskates". It was just miserable -- completely unplayable. Eventually, it was decided that the speed of systems was starting to increase faster than games could take advantage of the power, so they had to start "capping" things like graphical processing and program functions so things displayed at speeds a human could interact with. Voilà, things like "Frame Limits" were born, and they're still very necessary today.

So, in brief, no amount of "hardware resources" can ever exceed the maximum limitations set by the engine. That's why many legacy titles appear to run "like garbage" on modern systems. They're not. They're often running at the absolute maximum performance that would have been available back when they were released. Hence, the modern day popularity with "remastering" older titles to bring their features and performance more in line with what modern systems can achieve.
 
More of a suggestion. You take a lot of guesses based on some strange misconceptions about the topic; that's not how it works at all. For heap allocations on current x86_64 CPUs (that's both PCs and consoles since the PS4/XONE), the only limiting factor is the amount of available physical memory, unless you have more than 256 TB of RAM or you're using a 32-bit OS.

The only limitations besides that are there if you specifically implement them yourself for some reason, most likely for consoles, where you have 5 GB of shared memory available and that's it, so you wouldn't want to run out of it. C++ allows you to either write wrappers or overload the new/delete operators to make your own management system, and it can be easily implemented and made configurable for different platforms/future versions of the engine. And with some config tweaks you can bump the RAM and VRAM usage way over vanilla values so it's either adjustable (there are budget options in config files, though I don't remember if they affect anything in the release build) or set higher for PCs to the point of not being an issue. So that, and, like it was already mentioned, the fact that even very tiny mods can trigger the mod limit indicates that the budgets themselves are not the problem. Another thing is, with a custom system like that you can handle exceeded budgets in a lot more ways than simply crashing the program, and it won't be that easy to reach them in the first place; it's not like you have to keep everything in RAM all the time. So we can safely assume the problem is somewhere else, but even if it were the budgets, it would be a very easy thing for the devs to change, like I said.
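As a concrete (and purely hypothetical) example of the kind of wrapper being described, overloading the global new/delete operators against a configurable budget could look roughly like this; the names and the 512 MB figure are made up.

```cpp
#include <atomic>
#include <cstdlib>
#include <new>

// Assumed global budget, configurable per platform or per config file.
static std::atomic<std::size_t> g_used{0};
static std::size_t g_budget = 512ull * 1024 * 1024;

void* operator new(std::size_t size) {
    if (g_used.load() + size > g_budget)
        throw std::bad_alloc();        // a real engine could stream data out or degrade instead
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    g_used += size;
    return p;
}

void operator delete(void* p, std::size_t size) noexcept {  // sized delete (C++14)
    g_used -= size;
    std::free(p);
}

void operator delete(void* p) noexcept {  // fallback when the size isn't known
    std::free(p);
}
```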

Besides, the scripts and leftovers in wcc clearly show that the modding support for the game wasn't even seriously considered. It's not like modding was invented yesterday, so it's not as if they couldn't have made the design decisions with it in mind; the comparisons with Napoleon and tanks make absolutely no sense.

Also you're confusing different technical terms in your examples...
 
SigilFey;n10524312 said:
How much RAM will I have available in the game? Answer: LESS than 8 GB. The engine must obviously load its operating environment, and that alone is going to gobble up, say, 1 GB of RAM. Permanently. That means that even though my system has 32 GB, the engine is only written to recognize and utilize a maximum of 8 GB. It's 100% impossible to tell a program, "Just use unlimited RAM, loading unlimited processes into memory." The program won't know when to stop and will max out all available RAM at the speed of electrons, almost instantly crashing the whole system. I must always program a minimum operating environment and a maximum operating environment. Have to. Computers can't "think". Only calculate mathematical equations.
https://github.com/freebsd/freebsd/blob/master/sys/contrib/octeon-sdk/cvmx-malloc/malloc.c
 
Also, you can't really "design" a program to run at specified clock speed independent of the hardware. Unless you maybe set the clocks when it runs but it would be totally pointless. Different CPUs will have different "speed" on the same clocks, there's much more to that than the frequency. So that would be a terrible idea to rely on for your game time. If you want to limit the framerate you use precise timers to measure the time on each tick in the game loop and render the frame when the desired time has passed, like 1/60 of a second to get at most 60 fps... And for game logic/physics usually you just use a fixed time step. So again, that's not how it works.
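For reference, a bare-bones sketch of that loop shape: a fixed timestep for game logic, with frames presented no faster than roughly 60 per second. The update/render calls are placeholders, not any particular engine's API.

```cpp
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> step{1.0 / 60.0};   // fixed logic timestep
    auto previous = clock::now();
    std::chrono::duration<double> accumulator{0.0};

    for (int frame = 0; frame < 180; ++frame) {              // stand-in for "while the game runs"
        auto frameStart = clock::now();
        accumulator += frameStart - previous;
        previous = frameStart;

        while (accumulator >= step) {                        // fixed time step for logic/physics
            // update(step.count());                         // hypothetical update call
            accumulator -= step;
        }
        // render();                                         // hypothetical render call

        std::this_thread::sleep_until(frameStart + step);    // cap at ~60 fps
    }
}
```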
 
Murzinio;n10525292 said:
...unless you have more than 256TB of RAM or you're using a 32bit OS.

It's still an upper limit. This is another fallacy that has wound up causing problems for the gaming industry...since...forever. "256 TB of RAM!!! That's...insane. It's so much that we'll never hit that limit! It's seriously not even worth thinking about."

Kind of like a conversation I had with a bud back in early 2004. "8 GIGAbytes of RAM!!! Why the @#$%! would you waste so much money on that, man!? There's no way any game is going to come out that requires 8 GB of RAM! You're just wasting your money. That's...insane."

I can also go back to the early 1990s, when my father brought home an IBM computer from Kodak with the 286 microprocessor (which wasn't released on the open market yet) and 2 MEGAbytes of "above-board memory". And it was so amazingly powerful that all of the games we had could run, like, perfectly. It was...insane.

And yet it was never more than 3 years before these "insane" systems were starting to show their age, and newer games with "absolutely mind-blowing features" came out that required me to turn down graphics options in order to get them to run smoothly.

Assuming that the way computer systems will work is somehow going to change because "we've broken all boundaries and achieved a whole new plane of technology" will pretty directly lead to disappointment. There are always limits. We have from absolute zero to the speed of light in this existence...and that's it. (That'll be plenty for our gaming needs. ;))


Murzinio;n10525292 said:
C++ allows you to either write wrappers or overload the new/delete operators to make your own management system, and it can be easily implemented and made configurable for different platforms/future versions of the engine. And with some config tweaks you can bump the RAM and VRAM usage way over vanilla values so it's either adjustable (there are budget options in config files, though I don't remember if they affect anything in the release build) or set higher for PCs to the point of not being an issue.

Right. It can. And there are engines that will do that. TW3 doesn't. Obviously, there is reason for that. We might not like this reason (whatever it actually is). We might want to argue with that reason. We might even be able to come up with an alternative solution to get around the limit (which many already have). But that does not mean that there is "no reason", nor that the existing reason is invalid.

You've also landed on the point I made above about the game needing to work reliably on consoles under much more stringent operating conditions. It would be pretty hard to keep multiple design teams on-track if one group is trying to push things into the 32 GB RAM range (so PCs can display light reflections on the individual veins of a character's eye), while the XBox team is struggling to stop pop-in around White Orchard at 780p. There comes a point when the disparity has to be kept in check to ensure measurable progress is being made across the board. There are concerns that cannot be set aside when there is a business to consider, not just a game. So, just because I can, doesn't mean I should.

And, this is all speculation in the end.

What I'm primarily explaining is that there are always reasons, and we don't know what they are in this case. Jumping to conclusions or trying to vilify something we don't actually understand is foolish. (Not saying you're doing that, but there is a vibe in the thread...) Hopefully, decades from now, we'll get the whole story in an episode of "Remembering the World Before 2020", which we'll watch on our quantum PCs via a holographic projector by logging into httppss://www2.FreeYouTube.global.

What you say about future versions is spot-on! It's definitely possible, and it will eventually be implemented in future titles. That's what tends to happen as the industry creates new engines and games! But getting "bigger and better" stuff to happen is nowhere near simple. People that imagine it's "so easy to do" usually continue believing that until they dive into the big-leagues of an industry themselves. Then, they wind up in a world o' hurt-lockers surrounded by bruisers with not a key in sight. I know that, because that's how ALL industries work. "Simple" evaporates as soon as even one other person is involved.


Murzinio;n10525292 said:
Besides, the scripts and leftovers in wcc clearly show that the modding support for the game wasn't even seriously considered. It's not like modding was invented yesterday, so it's not as if they couldn't have made the design decisions with it in mind; the comparisons with Napoleon and tanks make absolutely no sense.

Two totally different considerations. The hindsight involves TW3 modding when the REDkit was released vs. as it is now. Whatever the reasons / motivations were (which we don't know), the fact remains that the tools for present-day modding techniques did not exist when it was released. Especially if modding was never considered / set aside / not possible to include in the dev process, that's even more of an argument for what I was saying: that the devs involved never considered such approaches, even though they have become commonplace between 2015 and 2018. And the Napoleon analogy stands -- as modding for TW3 is seemingly not a direct parallel for modding anything else. Hence, all the trouble authors are having. That means that new things had to be created to deal effectively with it. We lost our "Battle of Waterloo" back in the day, but we've got some mean tanks now.


Murzinio;n10525292 said:
Also you're confusing different technical terms in your examples...

Probably a lot of them. I don't code. What little "scripting" I've done has been self-taught and largely by rote. (I never learned the foundations, just reverse-engineered the pieces I needed, trial-and-error style, until I got them to do what I wanted.) I'm completely amateur, but still wrote a bunch of stuff for Morrowind between ~2003-2007. I was able to pull off some pretty cool stuff, too! But I never studied the vocabulary in any depth. :p If there are any glaring issues, feel free to PM and I'll edit. (Hopefully the gist gets through.)

Speaking of...


traderain;n10525392 said:
https://github.com/freebsd/freebsd/b...alloc/malloc.c

That's a long read, and I'm not familiar with the "code-y" parts of it. (Only read the first bits, then skimmed around a bit.) From what I gather it's a memory manager for the 64-bit era? Basically, it ensures that no chunks of RAM are wasted?

If so, there have been lots of attempts at doing this all sorts of different ways over time, and all of them created issues for something, eventually. Which is why we have so many different methods today. What sets malloc apart?
 
Murzinio;n10525532 said:
Also, you can't really "design" a program to run at specified clock speed independent of the hardware. Unless you maybe set the clocks when it runs but it would be totally pointless. Different CPUs will have different "speed" on the same clocks, there's much more to that than the frequency. So that would be a terrible idea to rely on for your game time. If you want to limit the framerate you use precise timers to measure the time on each tick in the game loop and render the frame when the desired time has passed, like 1/60 of a second to get at most 60 fps... And for game logic/physics usually you just use a fixed time step. So again, that's not how it works.

Meh. I oversimplified -- like I said I was going to. (You were warned! :cool:) But in the early Pentium days, limiting the Hz (frequencies) is exactly what had to be done. Even DOSBox still offers the ability to downclock the CPU by a certain percentage, as the actual legacy programs still suffer from the same issues when run today. In the beginning, games were created to run "optimally" on very particular hardware. The only way the engine would perform 100% correctly was to have that precise Hz of CPU and the exact amount of RAM it was designed for. So the engine was designed for a specific amount of resources, and having more / less could and would cause issues (even though most were minor enough to be ignored easily until the hardware got waaay more advanced).

Today, programs still need to establish a "sync" (a complementary relationship between RAM, Hz, and draw rate) to work correctly. I spend quite a lot of time trying to get people to understand that limiting their system's power may be necessary to ensure games function correctly. Most notoriously, unlimited FPS, which throws games for a loop sometimes since it's running too fast and falling out of sync with anything requiring frame-timing. The details may have shifted over time, but the philosophy and execution are largely the same:

Engine must set min/max levels. Hardware must cooperate with engine. Unlimited power = illusion.
 
SigilFey;n10526252 said:
It's still an upper limit. This is another fallacy that has wound up causing problems for the gaming industry...since...forever. "256 TB of RAM!!! That's...insane. It's so much that we'll never hit that limit! It's seriously not even worth thinking about." Kind of like a conversation I had with a bud back in early 2004. "8 GIGAbytes of RAM!!! Why the @#$%! would you waste so much money on that, man!? There's no way any game is going to come out that requires 8 GB of RAM! You're just wasting your money. That's...insane." I can also go back to the early 1990s, when my father brought home an IBM computer from Kodak with the 286 microprocessor (which wasn't released on the open market yet) and 2 MEGAbytes of "above-board memory". And it was so amazingly powerful that all of the games we had could run, like, perfectly. It was...insane. And yet it was never more than 3 years before these "insane" systems were starting to show their age, and newer games with "absolutely mind-blowing features" came out that required me to turn down graphics options in order to get them to run smoothly. Assuming that the way computer systems will work is somehow going to change because "we've broken all boundaries and achieved a whole new plane of technology" will pretty directly lead to disappointment. There are always limits. We have from absolute zero to the speed of light in this existence...and that's it. (That'll be plenty for our gaming needs. )

And what does that have to do with anything? Where did I imply any of that? You completely missed the point, which was meant to show that current games are far from the address space limit. That is 48 bits, btw; if we need to, we will expand to the full 64 bits, which allows for a 16 EB address space, so you don't have to worry about that.
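For the curious, the arithmetic behind those figures, assuming the usual 48-bit virtual addressing on current x86_64:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // 48-bit virtual address space: 2^48 bytes = 256 TiB.
    std::uint64_t bytes48 = 1ull << 48;
    std::uint64_t tib = 1ull << 40;
    std::printf("48-bit: %llu TiB\n", (unsigned long long)(bytes48 / tib));      // 256

    // Full 64-bit addressing: 2^64 bytes = 2^(64-60) EiB = 16 EiB.
    std::printf("64-bit: %llu EiB\n", (unsigned long long)(1ull << (64 - 60)));  // 16
}
```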

SigilFey;n10526252 said:
Right. It can. And there are engines that will do that. TW3 doesn't. Obviously, there is reason for that. We might not like this reason (whatever it actually is). We might want to argue with that reason. We might even be able to come up with an alternative solution to get around the limit (which many already have). But that does not mean that there is "no reason", nor that the existing reason is invalid.

Maybe you should read my post again. The only way these limits could exist is if you implement them yourself. So the W3 engine already "does that". I already explained that you can see for yourself that reaching memory usage much higher than the console hardware limits / PC vanilla values is possible with some config tweaks, which shows that any limits are not likely to be the problem causing the mod limit issue.

SigilFey;n10526252 said:
I don't code.
SigilFey;n10526252 said:
I'm completely amateur

No offence, but I can clearly see that, yet you don't have problems discussing topics you don't understand and making completely wrong statements about them. You should read up on design patterns used in games (most importantly the game loop) at http://gameprogrammingpatterns.com/ and try implementing them by making some simple games in C++ (and make sure you understand the memory management) with something like SFML (pretty much a wrapper for OpenGL). When you finish and read your posts again, you will see you had a lot of strange misconceptions.

SigilFey;n10526252 said:
That's a long read, and I'm not familiar with the "code-y" parts of it. (Only read the first bits, then skimmed around a bit.) From what I gather it's a memory manager for the 64-bit era? Basically, it ensures that no chunks of RAM are wasted?

Wrong guess. malloc is a heap allocation function from the C standard library. (C++ improved on malloc by adding type safety and handling the size automatically.) You can use it to write a program with 5-6 lines of code that will allocate the entire available memory on your PC without any black magic.
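Something along these lines is what's meant; a deliberately wasteful sketch (not something you'd want to actually run) that keeps calling malloc until the OS either refuses or kills the process:

```cpp
#include <cstdlib>
#include <cstring>

int main() {
    const std::size_t chunk = 100u * 1024 * 1024;   // 100 MB per request
    // Keep allocating until malloc returns null; on some systems the OS's
    // out-of-memory handling terminates the process before that happens.
    while (void* block = std::malloc(chunk)) {
        std::memset(block, 1, chunk);               // touch the pages so they are really committed
    }
    return 0;
}
```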

SigilFey;n10526332 said:
Meh. I oversimplified -- like I said I was going to. (You were warned! ) But in the early Pentium days, limiting the Hz (frequencies) is exactly what had to be done. Even DOSBox still offers the ability to downclock the CPU by a certain percentage, as the actual legacy programs still suffer from the same issues when run today. In the beginning, games were created to run "optimally" on very particular hardware. The only way the engine would perform 100% correctly was to have that precise Hz of CPU and the exact amount of RAM it was designed for. So the engine was designed for a specific amount of resources, and having more / less could and would cause issues (even though most were minor enough to be ignored easily until the hardware got waaay more advanced).

That hasn't been the case for a long time now. It's not an oversimplification, it's simply wrong... We're discussing a 2.5-year-old game; the old software you're talking about had completely different issues. Game development is much different today.

SigilFey;n10526332 said:
Most notoriously, unlimited FPS, which throws games for a loop sometimes since it's running too fast and falling out of sync with anything requiring frame-timing.

If you don't have your framerate independent from game logic/physics, then you're doing it wrong, or you're still talking about old games from when no one bothered to consider that. That's not the case today; the same games run at anywhere from 30 to 144 FPS on different platforms, and often you can go much higher if you have the hardware.

SigilFey;n10526332 said:
Engine must set min/max levels.

Again, it's not something you "must" do. Memory budgets are only helpful to avoid running out of memory if you don't have a lot of it available, but then you can easily adjust them. If you want, you can write an engine without that. And you can write a game aiming for unlimited framerate (where a frame is presented as soon as it is ready) without a problem; programmers figured out problems like that years ago.
 
@Murzinio

We still seem to be having two entirely different discussions. Rather than spending so much effort on trying to argue semantic details (which I readily admit I may have made errors with), let's focus on clarifying each other's points. I'm not going to go through the latest exchange, because it's clear we're not communicating intent effectively either way.


Question 1:

I have absolutely no idea, for example, how your statement that...

Murzinio;n10527332 said:
...current games are far from the address space limit. That is 48 bits, btw; if we need to, we will expand to the full 64 bits, which allows for a 16 EB address space, so you don't have to worry about that...

...even remotely relates to anything that I was suggesting about the current state of TW3's mod limit. In fact...you've said exactly what I was saying using precise numbers.

SigilFey;n10526252 said:
What you say about future versions is spot-on! It's definitely possible, and it will eventually be implemented in future titles. That's what tends to happen as the industry creates new engines and games!

"it" = engines that allow for more RAM, higher / larger "address space limits", more powerful CPUs, [more-better-gamey-stuff-that-goes-real-fast-like], etc.

Now, were you referring specifically to TW3 with your statement above, or making a general statement about future engines? (I'm taking it to be the latter.)


Question 2:

Overall, I was stating that if an engine is coded to process, say, 16 simultaneous sound channels during a scene, I can't simply "force it" to play back 24 sound channels, regardless of how awesome my sound card may be. The code doesn't exist in the engine. If I don't write the code for that function into the engine, it will never play more than 16 simultaneous sound channels. Similarly, if an engine is coded to utilize, say, 32-bit processing, having a 64-bit OS and processor won't make it run at 64 bits. It will only ever run at 32 bits. Unless I code the ability to recognize and take advantage of 64-bit processing into the engine. And/or, if an engine is coded to utilize up to 8 GB RAM, having 16 GB RAM won't help. I need to code the ability to recognize and use more RAM into the engine.
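To make the sound-channel point concrete, here is a toy example (entirely hypothetical code, not any real mixer): the channel count is baked into the code, so no amount of extra hardware raises it; only changing and recompiling the engine does.

```cpp
#include <array>
#include <optional>

constexpr int kMaxChannels = 16;   // hard-coded limit chosen by the engine's authors

struct Mixer {
    std::array<bool, kMaxChannels> inUse{};   // which channels are currently playing

    std::optional<int> acquireChannel() {
        for (int i = 0; i < kMaxChannels; ++i) {
            if (!inUse[i]) { inUse[i] = true; return i; }
        }
        return std::nullopt;                  // all 16 busy: the 17th sound is simply dropped
    }
};
```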

An engine will always be limited to the resources it is coded to use (meaning its operating environment). Therefore, having additional resources available (more RAM, faster CPU, etc.) on the PC / console will not be available to the engine unless it is coded to use it.

Do you disagree with anything here?
 
1: The numbers I mentioned are hardware limitations, not engine limitations. I don't want to explain the same stuff 3 times...
2: Sound channels - ok I guess. 32/64 bit - you just change the compilation target and you can compile both 32bit and 64bit versions. If you have all the dependencies available for both and don't mess up the code it's not a problem. So kinda true, but you just need to compile it with different target, and if you follow good practices for writing code you won't have to change anything in it.

Memory usage is not fixed at compilation by itself; you can allocate memory dynamically depending on the size of a file the program opens, for example. You don't have to know anything about it beforehand, it's a really basic thing to do; without that, software would be very limited... And it's called "dynamic memory allocation" for a reason, one of its uses is precisely when you don't know how much memory you will need when you write the code. Like I said, you can write a program of a few lines of code that will demonstrate this, without specifying anything about RAM. And your program doesn't need to "recognize" the RAM, the operating system handles that, manages the available memory, checks for access violations etc. C++ only provides an abstraction, you don't allocate every single byte manually.
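A small illustration of that point, where the buffer size is only known at runtime and nothing about the machine's RAM is declared anywhere (the file path is made up):

```cpp
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Open some file whose size is unknown when the code is written.
    std::ifstream in("mods/someTexturePack/texture.cache", std::ios::binary | std::ios::ate);
    if (!in) return 1;

    std::streamsize size = in.tellg();   // learn the size at runtime
    in.seekg(0, std::ios::beg);

    std::vector<char> buffer(static_cast<std::size_t>(size));   // allocate exactly that much
    in.read(buffer.data(), size);
    std::cout << "loaded " << size << " bytes\n";
}
```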

Not being able to use faster CPUs doesn't make sense either, if the CPU can execute the same instructions faster, the code will run faster and even if you have locked framerate, it will just reduce the CPU time the game uses because the computations will be ready sooner and you will only check the timers. For unlimited framerate you will just get more fps. If what you say is true you wouldn't be able to see a difference on newer CPUs or even after overclocking. That only applies to being able to utilize different instruction sets or number of threads but you definitely get a difference with higher frequency or newer, optimized CPU generations.
 
Murzinio;n10528702 said:
1: The numbers I mentioned are hardware limitations, not engine limitations. I don't want to explain the same stuff 3 times...
2: Sound channels - ok I guess. 32/64 bit - you just change the compilation target and you can compile both 32bit and 64bit versions. If you have all the dependencies available for both and don't mess up the code it's not a problem. So kinda true, but you just need to compile it with different target, and if you follow good practices for writing code you won't have to change anything in it.

Memory usage is not fixed at compilation by itself; you can allocate memory dynamically depending on the size of a file the program opens, for example. You don't have to know anything about it beforehand, it's a really basic thing to do; without that, software would be very limited... And it's called "dynamic memory allocation" for a reason, one of its uses is precisely when you don't know how much memory you will need when you write the code. Like I said, you can write a program of a few lines of code that will demonstrate this, without specifying anything about RAM. And your program doesn't need to "recognize" the RAM, the operating system handles that, manages the available memory, checks for access violations etc. C++ only provides an abstraction, you don't allocate every single byte manually.

Not being able to use faster CPUs doesn't make sense either, if the CPU can execute the same instructions faster, the code will run faster and even if you have locked framerate, it will just reduce the CPU time the game uses because the computations will be ready sooner and you will only check the timers. For unlimited framerate you will just get more fps. If what you say is true you wouldn't be able to see a difference on newer CPUs or even after overclocking. That only applies to being able to utilize different instruction sets or number of threads but you definitely get a difference with higher frequency or newer, optimized CPU generations.

I never once said that things could not be done differently. I never once said that things could not be done better. I simply said that engines can and do contain limitations. If those limitations are in place in the code, they become an upper limit. For example, there's a limit on TW3's modding ability, and it's there for a reason. Really insisting that it should have been done differently does not negate the fact that it was done the way it was done.

I asked two questions above, very clearly, but you have not directly responded to either of them. You seem tunnel-visioned on criticizing the simple reality of the situation by repeatedly arguing the way it could have been. Obviously, that's not the way TW3 was built. There's a mod limit. If you truly want to reject everything I've suggested because I used a wrong term or over-simplified the explanation of a process, then you're welcome to reject it. You seem to be convinced.

However...if it's so simple, why do we have so many different methods being used throughout the industry? Why doesn't every single developer under the sun use malloc with perfect efficiency to manage their heap allocation? Why isn't every engine 100% flawless in its execution, able to make complete use of 128 GB of RAM?

Perhaps...there's more to it than you think. Given the overall temperature of some of your responses, I'm going to leave this here for now. I'd like to continue later, because we're still having two different conversations.


Cercaphus;n10528692 said:
I think we came full circle now

I know I've seen this rock before!
 
SigilFey;n10528892 said:
I asked two questions above, very clearly, but you have not directly responded to either of them. You seem tunnel-visioned on criticizing the simple reality of the situation by repeatedly arguing the way it could have been. Obviously, that's not the way TW3 was built. There's a mod limit. If you truly want to reject everything I've suggested because I used a wrong term or over-simplified the explanation of a process, then you're welcome to reject it. You seem to be convinced.

Because I tried to explain to you how all of it works... And you continue to argue with points that I didn't even make.

SigilFey;n10528892 said:
However...if it's so simple, why do we have so many different methods being used throughout the industry? Why doesn't every single developer under the sun use malloc with perfect efficiency to manage their heap allocation? Why isn't every engine 100% flawless in its execution, able to make complete use of 128 GB of RAM?

It's not some "method" you can use or not; it's the way to allocate something on the heap in C, or with new in C++ (either directly or with smart pointers, but you have to use it if you want something on the heap, no way around that). And it doesn't work by filling your entire memory; that was an example to show it can if you want it to.
 
You know, I keep hearing about the mod limit, but I've never actually encountered it. I had, at last count, 78 mods running on the game with no issues, and I wasn't aware there was a limit until I saw it mentioned on reddit the other day.

Now reading this thread, I see people saying the mod limit is 12? But then I see other people saying that if the game is being played on a high powered computer possibly the PC is just loading them anyways in spite of the limit... which would then imply there is no actual mod limit in place, but rather the computer itself just stops loading stuff regardless? I'm not sure I'm reading the responses correctly.

But I am wondering, based on the fact that I've got 78 mods running and have never hit the mod limit, could the computer's specs actually be the reason some people are reaching a mod limit while others are not? Is it possible that it is not in and of itself a limit on mods, but rather a limit on the percentage of CPU/RAM/etc. used? Therefore smaller computers reach a "mod limit" with only a few mods because the maximum memory usage percentage was reached, while larger computers never reach a "mod limit" because the max memory percentage isn't reached?

I've got a very overpowered gaming rig, and even playing Witcher 3 with a 400+ hour save file and 78 mods, it still only used, at its highest, 31% of my memory/RAM/CPU/etc., even with a capture card running at the same time and a video editing program rendering the YouTube episodes in the background.

I'm wondering, if someone hits the "mod limit" and they were to install more memory in their computer, would the "mod limit" then go away because they are no longer reaching the max memory usage percentage?

Has anyone ever tested this to see if increasing PC memory size increases their mod limit?
 