AI exists, but it isn't used in games. But yes, they have AI that they feed loads of data and it solves problems by its own means.
That is not AI. What you are speaking of are machines given clear tasks and clear goals. How they achieve said goals is, partly, up to them, but they still have to fall within very defined and constrained guidelines from which they can't deviate.
It's basic machine learning and is leagues behind what is being created in 2020, like the systems mentioned below.
AlphaGo is a computer program that plays the board game Go. It was developed by DeepMind.
It beat Lee Se-dol, one of the best Go players in the world, 4-1.
That happened in 2016.
Former Go champion beaten by DeepMind retires after declaring AI invincible
Humans take an L after AI triumphs.
AlphaGo was impressive in 2016, but it wasn't true AI. It had to be fed tons and tons of data from previous games to analyze and, basically, replicate when it judged it appropriate. Its successors, AlphaGo Zero and AlphaZero, were far more impressive in that they trained themselves: given only the rules of the game, they were let loose to develop their own strategy through self-play.
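To make the self-play idea concrete, here's a toy sketch in Python. To be clear, this is not DeepMind's algorithm - it's a tiny tabular version of the same principle: a program given only the rules of a game (here Nim: a pile of 21 sticks, take 1-3 per turn, whoever takes the last stick wins) that develops its own strategy purely by playing against itself. The game, parameters, and names are all made up for the illustration.

```python
import random
from collections import defaultdict

# Tabular Q-learning through self-play on Nim. All constants are
# illustrative; nothing here comes from DeepMind's actual systems.
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 50_000
Q = defaultdict(float)  # Q[(pile, move)] -> estimated value for the mover

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, explore=True):
    """Epsilon-greedy during training; pure greedy when evaluating."""
    moves = legal_moves(pile)
    if explore and random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(pile, m)])

for _ in range(EPISODES):
    pile, history = 21, []          # history of (pile, move), one per turn
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # The player who took the last stick wins (+1); the other loses (-1).
    reward = 1.0
    for state_move in reversed(history):
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward            # flip perspective each turn going back

# After self-play the program should have discovered winning moves
# on its own, e.g. taking the whole pile whenever 3 or fewer remain.
print(choose(3, explore=False))
```

The point isn't Nim itself; it's that nobody ever showed the program a good game. It produced all of its own training data, which is the core difference between AlphaGo (fed human games) and AlphaZero (self-taught).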
Much closer to true AI, but still not AI. A definite milestone that led DeepMind to its next step, which is what @Jetro30087 mentioned - AlphaStar.
AlphaStar was the obvious next step for one simple reason. Every system so far had been developed for games that provide players with what is called perfect information, i.e., players see everything at all times, whereas a game like StarCraft does not.
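If the perfect/imperfect distinction isn't clear, here's a minimal made-up illustration. In Go or chess the "referee state" and the player's observation are identical; in a StarCraft-like game the player only sees tiles near its own units (the fog of war). The grid contents and vision radius below are invented for the example.

```python
# Toy fog-of-war: the full state exists, but the player only
# observes tiles within vision range of its own units.
FULL_STATE = [
    list("....E"),
    list("..E.."),
    list("P...."),
    list(".P..."),
    list("....."),
]  # P = player unit, E = enemy unit, . = empty ground

def observe(state, radius=1):
    """Return the player's view: everything outside vision is '?'."""
    units = [(r, c) for r, row in enumerate(state)
             for c, cell in enumerate(row) if cell == "P"]
    view = []
    for r, row in enumerate(state):
        view.append([
            cell if any(abs(r - ur) <= radius and abs(c - uc) <= radius
                        for ur, uc in units) else "?"
            for c, cell in enumerate(row)
        ])
    return view

for row in observe(FULL_STATE):
    print("".join(row))
```

Both enemy units sit outside vision range, so the player's view replaces them with `?`. A perfect-information system never has to reason about those `?` tiles; AlphaStar has to scout, remember, and guess, which is a qualitatively harder problem.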
AlphaStar is now considered better than 99.8% of players. In due time it would probably reach 100%, but its road to glory was far from perfect and not reflective of what a true AI would be. Its first iteration had to learn entirely from human replays; it couldn't improve on its own by playing against itself. Its latest, and most successful, iteration did "perfect" itself through self-play.
The problem is that a lot of those 0.2% of players able to beat it would be low-level players. To this day, AlphaStar still struggles to adapt to things it has never seen. Once it has seen a trick, you probably wouldn't be able to pull it off again, but throw in a variation and it will struggle to adapt all over again.
AlphaStar plays based on raw data; it consistently tries whatever has the best chance of achieving its primary goal. It basically plays like a pro player but, unlike a pro player, struggles to adapt its goal. When you throw in something that doesn't make sense to it, all of its models crumble and it struggles to adapt. It's this ability to adapt to the unknown that is still severely lacking in any system we might want to call a true AI.
I've been following DeepMind since Google's acquisition in 2014. Even they consider AlphaStar only a step in true AI research - a major one, but a step nonetheless.
Simply put, what we've achieved so far are machines that can learn within very specific conditions and subjects. We still haven't completely overcome catastrophic forgetting - the tendency of neural networks to forget what they've learned when trained on new information, especially if it resembles the old. Until we do, and we seem to be getting closer every day, true AI is still very much out of reach.
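You can see catastrophic forgetting in miniature with a model so small it fits in a few lines. Below, a one-parameter model y = w*x is trained by gradient descent on task A (data following y = 2x), then on task B (y = -2x). Training on B completely overwrites what was learned on A - the same failure mode, just stripped of all the neural-network machinery. The tasks and numbers are invented for the demo.

```python
# Minimal made-up illustration of catastrophic forgetting.
def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on squared error for y = w * x."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x) for x in (-2, -1, 1, 2)]   # task A: y = 2x
task_b = [(x, -2 * x) for x in (-2, -1, 1, 2)]  # task B: y = -2x

w = train(0.0, task_a)
print("loss on A after learning A:", round(loss(w, task_a), 4))  # ~0
w = train(w, task_b)
print("loss on A after learning B:", round(loss(w, task_a), 4))  # large
```

After learning B, performance on A collapses because the one shared parameter was simply dragged to B's optimum. Real networks have millions of parameters, but when the tasks overlap heavily the same thing happens: the weights that encoded the old skill get repurposed for the new one.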
Part of the issue with AI discussions also lies in the fact that the word gets thrown around far too much these days - sometimes as a marketing ploy, other times to make very complicated systems easier for the average person to grasp, regardless of how accurately the term describes said system.
True artificial intelligence does not exist yet. Everything built so far requires human input and constant nudging in the right direction. No system has reached human-level intelligence and adaptability. When true AI is finally achieved, it will redefine our world in ways we can't even fathom right now. Maybe not in a good way.