Interesting discussion, EliHarel and schinderhannes.999. I had an AI thread in the Cyberpunk forum that got derailed with speculative fiction and all sorts of nonsense and misinformation.
The reason VIDEO game AI is often so bad is mostly that game programmers are not AI researchers: their knowledge and understanding of AI is limited and, often, outdated. Some games do include interesting AI techniques: I think Doom 3 used some kind of neural network to react to the player's movement patterns, and Total War: Rome II used Monte Carlo Tree Search in the single-player campaign. However, these implementations are usually lightweight and dirty and do not use all the resources they potentially could, mostly because it is, after all, a video game, and there are plenty of other things keeping the hardware busy (e.g. graphics, sound, physics).
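To make "Monte Carlo Tree Search" concrete, here is a toy sketch of the algorithm applied to Nim (players alternate taking 1-3 stones; whoever takes the last stone wins). This is purely illustrative, not anything from Rome II's actual code; the game, the exploration constant and the iteration budget are all made up:

```python
import math
import random

# Toy Nim: 'stones' remain, players 0 and 1 alternate taking 1-3 stones,
# and the player who takes the last stone wins.
TAKE = (1, 2, 3)

def moves(stones):
    return [m for m in TAKE if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player  # player = whose turn it is here
        self.parent, self.move = parent, move
        self.children, self.untried = [], moves(stones)
        self.visits, self.wins = 0, 0.0  # wins for the player who moved INTO this node

def ucb1(child, parent_visits, c=1.4):
    # balance exploitation (win rate) against exploration (rarely tried moves)
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(stones, player):
    # play random moves to the end of the game; return the winner
    while True:
        stones -= random.choice(moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts(stones, player, iters=3000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. selection: descend along UCB1-best children
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        # 2. expansion: add one untested move as a new node
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. simulation: random playout from the new node
        winner = (1 - node.player) if node.stones == 0 else rollout(node.stones, node.player)
        # 4. backpropagation: credit every move the eventual winner made
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move  # most-visited move
```

The four phases (selection, expansion, simulation, backpropagation) are what any MCTS implementation shares; a real game would swap in its own move generator and playout policy. On Nim the search quickly homes in on the known optimal strategy of leaving the opponent a multiple of four stones.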
A game, technically speaking, is a closed system that is completely understood: it has states, transition rules, intermediate and terminal rewards, and so on. This means that even games with an enormous number of possible "configurations" or states, such as Chess or Go, can in principle be solved by something as simple as tree search. In practice this takes far too much time, and it's no fun to play against a machine unless it can respond quickly. Recent progress in game-playing AI has nonetheless produced programs that beat the best human players even at Go. Video games are not very different, except that, unlike these board games, they are usually not games of perfect information. It is also possible to optimally solve large decision processes with uncertainty, but this too can take a very, very long time, even on high-end gaming PC hardware.
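As a sketch of what "optimally solving a decision process with uncertainty" looks like, here is value iteration on a tiny made-up Markov decision process: a 5-cell corridor where the agent tries to reach the last cell, but moves only succeed 80% of the time. Every state, reward and probability here is invented for illustration:

```python
# Value iteration on a toy MDP: a 5-cell corridor, goal at cell 4 (+1 reward).
# Actions "left"/"right" move the agent with probability 0.8 and fail
# (leave it in place) with probability 0.2 -- that's the uncertainty.
N, GOAL, GAMMA = 5, 4, 0.9
ACTIONS = {"left": -1, "right": +1}

def step_dist(s, a):
    # transition distribution: list of (probability, next_state)
    target = min(max(s + ACTIONS[a], 0), N - 1)
    return [(0.8, target), (0.2, s)]

def q_value(s, a, V):
    # expected return of taking action a in state s, then acting optimally
    return sum(p * ((1.0 if s2 == GOAL else 0.0) + GAMMA * V[s2])
               for p, s2 in step_dist(s, a))

def value_iteration(tol=1e-6):
    V = [0.0] * N
    while True:  # sweep all states until values stop changing
        delta = 0.0
        for s in range(N):
            if s == GOAL:
                continue  # terminal state; reward is granted on entering it
            best = max(q_value(s, a, V) for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V):
    # the optimal policy: in each state, pick the action with the best Q-value
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, V))
            for s in range(N) if s != GOAL}
```

This converges in a fraction of a second because there are five states, but note that every sweep touches every state; that is exactly what stops scaling once the states number in the billions, which is why "optimal but very, very slow" is the honest summary.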
Most tasks that can be represented as games have optimal solutions, and a computer CAN and WILL find these solutions given enough time. Humans are particularly good at finding decent solutions reasonably fast, and we can be very good at quickly incorporating new evidence into our existing knowledge and at generalizing from a few significant examples. We currently do not have the math to solve these problems efficiently, so even the complicated ones are solved naively: assess many or all possible options, estimate their value, choose one, and move on. Expert chess players, for instance, are especially good at reducing the pool of available moves to only a handful of really useful ones. But even intermediate players are terrible at finding optimal strategies, often settling for good but suboptimal ones.
Consider what's possibly one of the simplest games out there: tic-tac-toe. It is so well understood mathematically that every possible board is known and every good move can be computed in advance. Losing at tic-tac-toe only shows you're unaware of the drawing strategy. Every other game follows the same principle, only with more states; those states become more feature-rich and may include partial observability or uncertainty (e.g. not knowing, only estimating, the true position of an enemy troop). Under partial observability, actions are also taken to reduce uncertainty and gather information about the true state of the world, which eventually leads to better action choices and, ultimately, to winning.
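The tic-tac-toe claim is easy to verify with a few lines of exhaustive minimax (a sketch; it memoizes over the few thousand boards reachable from the empty position):

```python
from functools import lru_cache

# Exhaustive minimax for tic-tac-toe. A board is a 9-character string of
# 'X', 'O' and '.' (cells 0-8, row by row).
LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def value(b, player):
    # game-theoretic value with 'player' to move:
    # +1 = X wins, -1 = O wins, 0 = draw, assuming perfect play by both sides
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in b:
        return 0  # full board, no winner: draw
    vals = [value(b[:i] + player + b[i+1:], 'O' if player == 'X' else 'X')
            for i, c in enumerate(b) if c == '.']
    return max(vals) if player == 'X' else min(vals)
```

`value('.' * 9, 'X')` comes out 0: with perfect play from both sides the game is a draw, so a loss only ever means somebody deviated from the known strategy.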
So while this is an interesting discussion that I am willing to continue, it does have a short answer: any game can be mastered by a "computer", but computers are limited by (1) the resources and time available for computation and (2) the availability of sufficiently advanced AI methods. Because in video games (1) is very strict and (2) is almost never met, developers often resort to letting the computer-controlled player exploit loopholes in the rules, abuse the game engine, or outright cheat.