How does AI work, and is it really AI?

AI exists, but it isn't used in games. But yes, there are systems that are fed loads of data and solve problems by their own means.

That is not AI. What you are speaking of are machines given clear tasks and clear goals. How they achieve those goals is, partly, up to them, but they still have to stay within very well-defined and constrained guidelines from which they can't deviate.

It's basic machine learning, and it's leagues behind what is being created in 2020, like the systems mentioned below.

AlphaGo is a computer program that plays the board game Go. It was developed by DeepMind.
It beat Lee Se-dol, one of the best Go players in the world, 4-1.
That happened in 2016.

AlphaGo was impressive in 2016, but it wasn't true AI. It had to be fed tons and tons of data from previous games to analyze and, basically, replicate when it judged it appropriate. Its successors, AlphaGo Zero and AlphaZero, were far more impressive in that they trained themselves: basically, they were given information on what the pieces can do and let loose to develop their own strategies.

Much closer to true AI, but still not AI. A definite milestone that led DeepMind to its next step, which is what @Jetro30087 mentioned: AlphaStar.

AlphaStar was the obvious next step for one simple reason: every system so far had been developed for games that provide players with what is called perfect information, i.e., players see everything at all times. A game like StarCraft does not.

AlphaStar is now considered better than 99.8% of players. In due time it would probably reach 100%, but its road to glory was far from perfect and not reflective of what a true AI would be. Its first iteration had to learn entirely, and only, through replays; it couldn't learn on its own by playing itself. Its latest, and most successful, iteration did "perfect" itself by playing against itself.

The problem is that a lot of those 0.2% of players able to beat it would be low-level players. To this day, AlphaStar still struggles to adapt to things it has never seen. Once it has seen a trick, you probably wouldn't be able to pull it off again, but throw in a variation and it will struggle to adapt, and so on.

AlphaStar plays based on raw data: it consistently tries whatever has the best chance of achieving its primary goal. It basically plays like a pro player but, unlike a pro player, struggles to adapt its goal. When you throw in something that doesn't make sense to it, all of its models crumble and it struggles to adapt. It's this adaptability to things they don't know that is still severely lacking for any system to be called a true AI.

I've been following DeepMind since Google's acquisition in 2014. Even they consider AlphaStar only a step in true AI research; a major one, but a step nonetheless.

Simply put, what we've achieved so far are machines that can learn within very specific conditions and subjects. We still haven't completely overcome catastrophic forgetting - the tendency neural networks have to forget what they've learned when presented with new information, especially if it resembles previous information. Until we do, and we seem to be getting closer every day, true AI is still very much out of reach.
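
To make catastrophic forgetting concrete, here's a toy sketch (just numpy, with a made-up pair of tasks) of a single linear classifier forgetting task A once it's naively trained on task B. Real networks forget the same way, just at larger scale:

```python
# Toy demonstration of catastrophic forgetting: a single logistic
# unit is trained on task A, then naively on task B, and its
# accuracy on task A collapses back toward chance.
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis):
    # Classify 2-D points by the sign of one coordinate.
    X = rng.normal(size=(500, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def train(w, b, X, y, epochs=300, lr=0.5):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
        w = w - lr * X.T @ (p - y) / len(y)     # gradient step on weights
        b = b - lr * np.mean(p - y)             # gradient step on bias
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == (y > 0.5))

w, b = np.zeros(2), 0.0
Xa, ya = make_task(axis=0)   # task A: sign of x
Xb, yb = make_task(axis=1)   # task B: sign of y

w, b = train(w, b, Xa, ya)
print("task A accuracy after training on A:", accuracy(w, b, Xa, ya))

w, b = train(w, b, Xb, yb)   # sequential training, no replay of task A
print("task A accuracy after training on B:", accuracy(w, b, Xa, ya))
```

Mixing some old task-A examples back into the second training run is the crude fix; the research mentioned later in this thread is about finding more principled versions of that.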

Part of the issue with AI discussions also lies in the fact that the word gets thrown around far too much these days, sometimes as a marketing ploy and other times to make very complicated systems easier for the average person to understand, regardless of how accurately the term describes said system.

True artificial intelligence does not exist yet. Everything built so far requires human input and constant nudging in the right direction. No system has reached human intelligence and adaptability. When true AI is finally achieved it will redefine our world in ways we can't even fathom right now. Maybe not in a good way.
 
The reason you focus on combat is that it's the best AI in the game, and it's still very bad. I'd say if the non-combat NPCs were interesting, could engage in real conversations and have interesting interactions, we'd all find that at least as compelling as combat. If you could sit at a vendor's food stall, chat him up, have him remember you next time, eventually hear personal details about his life, and maybe even get a mission or two from him, how amazing would that be? Or if you could talk to an NPC in the street, make some witty banter, and then have them back to your apartment for a roll in the hay?
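
None of this needs exotic tech; even a crude per-NPC memory of the player gets you most of it. A sketch, with invented names and thresholds:

```python
# Sketch: a per-NPC memory of the player, so a food-stall vendor can
# greet a regular, open up over time, and eventually offer a job.
# The fields and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class NpcMemory:
    visits: int = 0
    topics_shared: list[str] = field(default_factory=list)

    def on_chat(self):
        self.visits += 1   # called each time the player talks to the NPC

    def greeting(self) -> str:
        if self.visits == 0:
            return "What'll it be?"
        if self.visits < 5:
            return "Back again? The usual?"
        return "Good to see you, friend. Did I ever tell you about my daughter?"

    def offers_mission(self) -> bool:
        return self.visits >= 10   # familiarity unlocks a side job
```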


None of the above is impossible, or even that difficult; it just requires some clever work on the part of the game designers. Then we'd feel like we were in a real living city instead of a large stage where we just wander from one fight to the next.
How long would it take to create a city with the features you describe above, and would it be possible and cost-effective?
 
I've just been reminded of the 2016 grants for CP2077, and the "living cities" application sounds like they started with a similar idea.
"Comprehensive technology for the creation of 'live' cities of great scale playable in real-time, which is based on the principles of artificial intelligence and automation, and takes into account the development of innovative processes and tools supporting the creation of high-quality games with open worlds."

Read more: https://www.tweaktown.com/news/7304...cs-will-have-unique-daily-routines/index.html

Surely the processing power required would be truly immense. I can understand how a console or a home PC could handle a room full of NPC bots in combat, or how it deals with Total War-style empire management using decision trees and procedures, but managing a whole city seems very ambitious.

Surely an online game hosted on a server farm somewhere would be required to implement the kind of NPC behaviour necessary to have 1000 unique individuals carrying out their daily lives? I'd love to see it done though.

Maybe that should be your next project, sounds like a winner to me!

I don't think the processing power required is 'immense', at least not at run time. Or, to put it differently, the processing power required is immense, but we have immensely powerful processors everywhere these days - the phone in my pocket has more than four million times as much memory and tens of thousands of times more processing power than the first mainframe I worked on forty years ago, and that machine supported 18 concurrent users.

I've done a lot of preliminary design work on systems to support open worlds at million square kilometer scale (i.e., continent scale) with at least hundreds of thousands of non-player characters each with considerable agency. Baking such a world will take a lot of compute power, but will have to be done only once, during development. At run time, I don't think (for single player games, or low numbers of multi-player) that the compute power needed is unmanageable.

I currently have code that will model settlement over the whole of the British Isles at one kilometer resolution (that is, each cell is 1km by 1km); it takes about thirty hours to run from the end of the last ice age up to about the bronze age, on a (powerful) PC. This is a good start, but you also need river drainage, a road network, and so on. Once you have gross settlement assigned, and rivers and roads routed, you then have to assign the location of individual buildings, yards, fields and other structures. So all this is going to take considerable compute time on a powerful machine or network of powerful machines, but it isn't impossible given current technology. And, as I say, this need only be done once, during development. Players of the game will play in a pre-baked world.
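
I can't reproduce the actual code here, but the flavour of such a model is easy to sketch. Something like the following (numpy, with invented growth and migration rules) captures the basic loop: per-cell logistic growth against a terrain-derived carrying capacity, plus a slow diffusion of population into neighbouring cells, iterated over thousands of simulated years:

```python
# Sketch of a grid-based settlement model: each cell is 1 km square,
# population grows logistically toward a terrain-derived carrying
# capacity, and a small fraction diffuses to the four neighbouring
# cells each simulated year. Edges wrap for simplicity (np.roll);
# a real model would clamp at the coastline instead.
import numpy as np

def settlement_step(pop, capacity, growth=0.02, diffuse=0.05):
    pop = pop + growth * pop * (1.0 - pop / np.maximum(capacity, 1e-9))
    out = diffuse * pop                      # emigrants this year
    inflow = sum(np.roll(out, d, axis=a)
                 for a in (0, 1) for d in (1, -1)) / 4.0
    pop = pop - out + inflow
    return np.where(capacity > 0, pop, 0.0)  # nobody settles in the sea

# Demo at toy scale; the British Isles at 1 km resolution would be
# roughly a 1000 x 1200 grid run for ~8000 simulated years.
rng = np.random.default_rng(1)
capacity = np.maximum(rng.normal(50, 30, size=(100, 120)), 0.0)
pop = np.zeros((100, 120))
pop[50, 60] = 100.0                          # a founding population
for year in range(8000):
    pop = settlement_step(pop, capacity)
print("settled cells:", int((pop > 1.0).sum()))
```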

In a bubble slightly larger than visual range around each player, we need to model all the behaviour of every NPC. In a bubble about twice that radius, we need to wake every NPC so that they can be placed in the appropriate position for whatever they're doing at the moment, and also so that they can exchange gossip with one another; that way, if the player talks to them, they're up to date with current news.
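
As a sketch of what that tiering might look like in code (the radii and names are invented for illustration):

```python
# Sketch of level-of-detail tiers for NPC simulation, keyed on the
# NPC's distance from the player. The radii are placeholders.
import math
from enum import Enum

class SimTier(Enum):
    FULL = 1      # inner bubble: full behaviour simulation
    AWAKE = 2     # outer bubble: positioned correctly, exchanging gossip
    DORMANT = 3   # beyond: only major actors are simulated at all

VISUAL_RANGE = 150.0  # metres; an assumed figure

def sim_tier(npc_pos, player_pos):
    d = math.dist(npc_pos, player_pos)
    if d <= VISUAL_RANGE * 1.2:    # "slightly larger than visual range"
        return SimTier.FULL
    if d <= VISUAL_RANGE * 2.4:    # "about twice that radius"
        return SimTier.AWAKE
    return SimTier.DORMANT
```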

Over wider areas, you need model only the actions of characters who transmit news and manipulate prices (i.e. aristocrats, military leaders, outlaw leaders - these are all much the same thing, really - and merchants), and you only need to model their major actions. We need news of wars, significant treaties, significant deaths and market prices to spread (and also natural disasters, if your world has natural disasters, but that isn't something I've put much thought into). We don't need to know who hit whom with a sword of +4 physical damage at 11:54 am on the 27th of March, or who ate a biscuit three minutes later.
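
In code terms, that's just a significance filter on world events before anything enters the gossip network; something like this, with invented event kinds:

```python
# Sketch: only significant events become news items that spread through
# the merchant/aristocrat network; everything else stays local and is
# forgotten. The event kinds are illustrative.
from dataclasses import dataclass

NEWSWORTHY_KINDS = {"war", "treaty", "notable_death", "market_price",
                    "natural_disaster"}

@dataclass
class Event:
    kind: str
    location: str
    detail: str

def newsworthy(event: Event) -> bool:
    return event.kind in NEWSWORTHY_KINDS

events = [
    Event("war", "north_march", "border raid escalates"),
    Event("combat_blow", "roadside", "sword +4, 11:54 am, 27 March"),
    Event("market_price", "port_town", "wool up a fifth"),
]
news = [e for e in events if newsworthy(e)]   # the sword blow is dropped
```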

So, while I greatly respect what CD Projekt have done with Cyberpunk - the city is undoubtedly awesome - I don't believe it comes anywhere near the limit of what can be done with modern hardware.
 
So, while I greatly respect what CD Projekt have done with Cyberpunk - the city is undoubtedly awesome - I don't believe it comes anywhere near the limit of what can be done with modern hardware.
I believe the opposite.
Games have to balance more than just CPU usage. They also have to balance their RAM budget, I/O bandwidth and latency, as well as memory layout, L1 cache usage and savegame size.

I read some of your page, and I like the general ideas, very nice thing you have going there!
But take your "spread of knowledge" idea:
It requires additional memory to store the generated gossip, additional CPU time to go through it, will balloon the savegame size quite considerably, and, most importantly, the "current player bubble" has to be constantly fed with data from disk.
That last part especially is where I think current hardware is not enough for something like this.

Cyberpunk already seems to be at the limit of how much stuff you can stream in realtime.
The amount of assets simultaneously on screen is beyond anything we have seen before, and the game really struggles with that, sometimes even on high-end machines. Adding even more stuff to that seems like asking for even more trouble.

And to add a minor nitpick (minor because you didn't claim otherwise):
I don't think you've thought through all the implementation requirements for your ideas.
Staying with the gossip idea: while simulating only a bubble around the player seems rather straightforward, NPCs in that area should be able to reference gossip from outside the currently active bubble, no? But that means you somehow have to be able to search through ALL the gossip available, either when it is generated or when it is referenced. Which in turn requires either holding all gossip in memory, or devising a clever way to offload it to disk while keeping it searchable.
That is a nontrivial problem and, as I argued above, will only tax your streaming systems more.

I really like your ideas; I think they're promising and you should definitely develop them further.
But I don't think current hardware can handle both a city that looks and sounds like Night City and a complex, self-interacting world.
 
True artificial intelligence does not exist yet. Everything built so far requires human input and constant nudging in the right direction. No system has reached human intelligence and adaptability. When true AI is finally achieved it will redefine our world in ways we can't even fathom right now. Maybe not in a good way.
Of course I did not imply AlphaStar or AlphaGo is Skynet-level AI.
I just posted them to show how dumb the AI in Cyberpunk 2077, which is practically non-existent, looks in comparison.
I thought you did not know of AlphaStar or AlphaGo.

Artificial neural networks suffer from an inability to perform continual learning.
Catastrophic forgetting is indeed a problem.
People studying this are trying to implement sleep-like activity in an attempt to solve it.

One key feature of true adaptive intelligence, I think, is being able to sleep and dream.
 
A true AI should be self-aware, volitional, and ultimately able to surprise.

AlphaGo is not true AI, it's just software: made to mimic, predict and replicate human behavior within a very narrow set of parameters, taking advantage of huge data streams, i.e. a perfect, 100% accurate memory of pretty much every move there ever was.

A human chess master, on the other hand, has experience, but he can never memorize all his previous moves, so there's always room for error.
 
It requires additional memory to store the generated gossip, additional CPU time to go through it, will balloon the savegame size quite considerably, and, most importantly, the "current player bubble" has to be constantly fed with data from disk.
That last part especially is where I think current hardware is not enough for something like this.

Cyberpunk already seems to be at the limit of how much stuff you can stream in realtime.
The amount of assets simultaneously on screen is beyond anything we have seen before, and the game really struggles with that, sometimes even on high-end machines. Adding even more stuff to that seems like asking for even more trouble.

The 'player bubble' has to be constantly fed data from disk anyway. In Cyberpunk 2077, that's mainly textures; and part of the reason it doesn't work well on old consoles with disk storage is that there isn't enough I/O bandwidth. Doing the gossip system alongside Cyberpunk's degree of photorealism might prove beyond current-generation I/O capabilities – I don't think so, but it might.

In engineering all things are tradeoffs. One of the things that is really disappointing for me in modern games is the very poor depth of repertoire of non-player characters – most street NPCs in Night City cannot carry on any conversation at all; like demented parrots, they know only one phrase. Would the game be more satisfying if it were less photorealistic but every NPC had interesting, deep conversation? The only way to find out is to try, but I believe the general answer is 'yes', certainly for role-playing games.

And to add a minor nitpick (minor because you didn't claim otherwise):
I don't think you've thought through all the implementation requirements for your ideas.
Staying with the gossip idea: while simulating only a bubble around the player seems rather straightforward, NPCs in that area should be able to reference gossip from outside the currently active bubble, no? But that means you somehow have to be able to search through ALL the gossip available, either when it is generated or when it is referenced.

No, neither. New items of gossip are spread through the network by a limited class of NPCs, in my model mainly merchants (who travel) and innkeepers (who interact broadly with local populations). My model is aimed at worlds without instantaneous long-distance comms, which of course isn't the case in Cyberpunk, so it might not work well there. But the idea is that NPCs in the gossip class are 'woken' once per game day, to interact with other gossip NPCs in their immediate location. They're (obviously) not all woken at the same time; rather, there's a background task iterating through them more or less permanently, dealing with one at a time. And it isn't a priority task: when (for example) combat is eating all the compute resources, it can be paused.
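
A minimal sketch of that background task (the class and method names are mine, not from any real engine):

```python
# Sketch of the low-priority gossip task: cycle endlessly through the
# gossip-class NPCs (merchants, innkeepers), waking one per tick, and
# do nothing at all while the engine has flagged us as paused.
import itertools

class GossipTask:
    def __init__(self, gossip_npcs):
        self.cycle = itertools.cycle(gossip_npcs)
        self.paused = False   # engine sets this while combat is hot

    def tick(self):
        """Called once per frame, or less often; costs at most one
        NPC's worth of work per call."""
        if self.paused:
            return
        npc = next(self.cycle)
        npc.exchange_gossip_locally()   # hypothetical NPC method
```

At 60 ticks a second, ten thousand gossip NPCs each get woken roughly every three minutes of real time, which is comfortably more than once per game day.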

Which in turn requires either holding all gossip in memory, or devising a clever way to offload it to disk while keeping it searchable.

That is a nontrivial problem and, as I argued above, will only tax your streaming systems more.

It's called 'a database'. They're quite good, and very well-established, technology.
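
For instance, a minimal sketch with SQLite (the schema is invented; any embedded database would do): the gossip lives on disk, and indexed queries keep lookups cheap even when the total volume far exceeds what you'd want in RAM.

```python
# Sketch: gossip items offloaded to an on-disk SQLite database and
# indexed by topic and location, so the active bubble can reference
# gossip from anywhere without holding it all in memory.
import sqlite3

db = sqlite3.connect("gossip.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS gossip (
    id       INTEGER PRIMARY KEY,
    day      INTEGER NOT NULL,   -- game day the item was created
    topic    TEXT    NOT NULL,   -- e.g. 'war', 'market_price'
    location TEXT    NOT NULL,
    body     TEXT    NOT NULL
);
CREATE INDEX IF NOT EXISTS gossip_topic ON gossip(topic, day);
CREATE INDEX IF NOT EXISTS gossip_loc   ON gossip(location, day);
""")

def recent_gossip(topic, since_day, limit=10):
    # Indexed lookup; cheap at single-player gossip volumes.
    return db.execute(
        "SELECT body FROM gossip WHERE topic = ? AND day >= ? "
        "ORDER BY day DESC LIMIT ?", (topic, since_day, limit)).fetchall()
```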

I really like your ideas; I think they're promising and you should definitely develop them further.
But I don't think current hardware can handle both a city that looks and sounds like Night City and a complex, self-interacting world.

The first computers I wrote games for had 1 KB – 1024 bytes – of RAM, and ran an 8-bit processor at 2 megahertz. The first computer I worked on professionally had 8K 32-bit words of memory, the equivalent of 32 KB (and that was hand-wired core storage, not silicon chips). It ran a 32-bit processor at 0.8 megahertz, and supported 18 concurrent users. Because video terminals had only just been invented and were still new and expensive tech, we used actual teletype terminals – things that looked like steampunk typewriters, fed with fanfold paper. And obviously, on paper, there were no moving graphics at all.

The phone in my pocket has four 64-bit cores running at 2.26 gigahertz, so its processor bandwidth is about four orders of magnitude greater than those machines' (ignoring the GPU, which it also has); it has 2 GB of memory, so its storage capacity is nearly five orders of magnitude more; and as for its I/O bandwidth, those early machines were running at 300 bits – about 30 bytes (allowing for check bits) – per second. Consumer-grade SSDs now run at either three or six billion bits per second, so around seven orders of magnitude more.

Computing power is not advancing as fast as it used to. The doubling of power every year that we saw for most of my working life has slowed. But it is still advancing fast. So if it is the case that current machines couldn't do what I'm suggesting, then by the time a game using the technology I'm suggesting can be built – remember that AAA games are typically around eight years in development – they will be able to.

But actually, I'm fairly confident they can. Certainly there's a tradeoff to be made between the visual world and the interactive world. Night City is, as you say, visually stunning on a high-end machine, with very close to photorealistic detailing on every asset. It is a magnificent piece of work. But Witcher II levels of graphics – which were slightly stylised, and certainly less photorealistic – still provide satisfying gameplay. If I/O bandwidth is the bottleneck – as it may well be – would you rather have Night City's degree of photorealism, or a very slightly lesser degree but highly interactive NPCs?
 
My model is aimed at worlds without instantaneous long-distance comms, which of course isn't the case in Cyberpunk, so it might not work well there.
Yeah, that's my main point of contention. I like that you don't dismiss it outright, though. And maybe I am wrong and you could do better world simulation even in Cyberpunk. But I think we can both agree that it would certainly rule out the PS4/Xbox One, and probably even the current generation of consoles. Very unattractive for AAA games.


It's called 'a database'.
Ah come on, I specifically deleted that part of my own response because I felt it was needlessly condescending. It should be clear that we are both capable programmers who know what we are talking about, no need to paint the other party as ignorant ;)
Maybe I was a bit too "explainy" towards you as well, so sorry for that.

Yeah, databases are well-proven tech, but their characteristics are very hostile to realtime simulation, especially their bandwidth and memory usage. They are usually only employed server-side in multiplayer games. Having one in your single-player game is not impossible but, as I said, nontrivial to integrate.


The first computers I wrote games for had 1 KB – 1024 bytes – of RAM, and ran an 8-bit processor at 2 megahertz.
The first one I wrote games for had 64 KB, but ran 8-bit at 0.985 MHz ;) I betcha we both miss those times ;)


So if it is the case that current machines couldn't do what I'm suggesting, then by the time a game using the technology I'm suggesting can be built – remember that AAA games are typically around eight years in development – they will be able to.
In a sense, yes, I agree with you. But so far, games have still not reached the apex of graphical fidelity. I think any game that wants to be in the "next-gen" bracket will gobble up as many resources for graphics as it can, leaving only breadcrumbs for the other systems... There is a reason that all next-gen titles so far were shooters, and Cyberpunk kinda shows why straying too far from that "formula" is dangerous.


would you rather have Night City's degree of photorealism, or a very slightly lesser degree but highly interactive NPCs?
I actually want both, and see no reason why either should stop existing.
Yeah, I want games with proper world simulation, I just don't expect them to have next-gen graphics. And I want games like Cyberpunk, which are awesome to look at. But I also accept that these will have a hard time offering anything besides that. And Cyberpunk actually goes further than any other next-gen title before it: they at least tried very hard to deliver an actual RPG on top of nice graphics. I really don't think you can push it much further...
At least not without throwing away vast numbers of potential customers...
 
AlphaGo is a computer program that plays the board game Go. It was developed by DeepMind.
It beat Lee Se-dol, one of the best Go players in the world, 4-1.
That happened in 2016.
It was a great achievement, but it also can't do anything but play Go, so put it in Cyberpunk and it would do fuck all. Those learning but very narrowly focused projects are all like this: very, very good at one thing, and so far totally unable to do more than one thing. They really aren't any smarter than the automata that could write. The current model of machine learning is seeming more and more like a dead end for getting to a general intelligence.
 
It was a great achievement, but it also can't do anything but play Go, so put it in Cyberpunk and it would do fuck all. Those learning but very narrowly focused projects are all like this: very, very good at one thing, and so far totally unable to do more than one thing. They really aren't any smarter than the automata that could write. The current model of machine learning is seeming more and more like a dead end for getting to a general intelligence.
I just posted them to show how dumb the AI in Cyberpunk 2077, which is practically non-existent, looks in comparison.

 