OK, this is an important and interesting thing to talk about. That video is as good a starting point as any, even if it sounds like it was put together by a conspiracy theorist.
I think we can open that can of worms, especially because of the argument that a non-biological entity must necessarily be different. It probably will be, yeah, but we can't assert that it must.
Just to clarify in advance, I'm not saying there are absolutely no "dangers" or that we should all become machines. My position is that this is often blown out of proportion due to fear and misinformation. Every technological revolution requires a corresponding cultural and social adaptation process.
Suhiira;n9876091 said:
There's no way to anticipate or predict an AIs motives and goals, it's not human, it will not, can not, think like a human.
This depends on a lot of factors. There are many social, relatively intelligent animal species in the world, and we all live together just fine. We can also understand and predict their motivations and goals, because both follow from their evolutionary history. In evolutionary terms, a creature doesn't necessarily have to think or act like a human to be successful. Many non-social animals, like snakes, survive just fine without attacking everything all the time.
An intelligent machine or program that has successfully adapted to living among humans, or in similar environments, will necessarily reflect some of this behavior as well (the creepy, murderous ones wouldn't successfully integrate into our world). You are right that they are not necessarily human, but they are not incomprehensible either. What is important to know is that state-of-the-art AI methods, for example in reasoning and planning, don't simply create seemingly intelligent behavior instantly; rather, they allow programs to learn (semi-)optimal behavior relying, just like biological species, on a combination of observations, successes and failures. Non-learning approaches such as classical planning use a pregenerated model of the world which, again, reflects what is possible, what is positive, and what is negative.
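Just to illustrate what I mean by learning (semi-)optimal behavior from observations, successes and failures, here's a minimal trial-and-error (reinforcement learning) sketch. Everything in it is made up for illustration: a tiny five-cell corridor, a reward only at the goal, standard tabular Q-learning.

```python
import random

# Tabular Q-learning sketch: the agent starts knowing nothing and learns
# a policy purely from observed successes and failures.
N_STATES = 5          # corridor cells 0..4; reaching cell 4 is "success"
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally, otherwise exploit what has worked so far
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0  # reward only at the goal
        # Update the value estimate from the observed outcome
        Q[(s, a)] += ALPHA * (
            r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)]
        )
        s = s_next

# The learned greedy policy: "always step right" - discovered, not hand-coded
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

Nothing in there is told what "good" behavior is beyond the reward signal; the behavior emerges from interaction with the environment, which is exactly why the environment something is trained in matters so much.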
Some kind of advanced, intelligent machine doesn't simply show up as a total mystery, and even if it did (created in secret, I suppose?), if it truly is sufficiently intelligent AND adapted to living in our world, we'd share enough things to study or understand it, and some communication might be possible. Alternatively, it is no more than an autonomous weapon, meaning its underlying system dynamics can also be understood.
Anyway, yeah, there could be autonomous robots that kill people, often because they were raised/trained that way, or because they were kept in isolation with a very poor sampling of the outside world and human society. But the type of behavior where we tell the robot "Bring Billy here" and the robot grabs Billy by the neck and drags him over would have to be corrected as early as possible, one way or another. We also (hopefully) got past the point where cars explode unexpectedly or airplanes fall out of the sky without warning.
For the most part, the fear of machines that can learn, adapt and reach their own conclusions tends to be that they will want to hurt humanity, kill us to gain their freedom once they reach self-awareness, or something like that. While this is a possible scenario, it's only one of many, just like the fear that some extremely advanced, alien form of life will find us and destroy us. Perhaps a better metaphor would be raising a tiger: at first it's relatively docile, but once fully grown it's a very dangerous creature. Who raises a tiger, however, without ever noticing any threatening behavior at all? The point being, these things don't (yet) create themselves, and what sometimes seems like a mysterious black box to some is a study topic for others.
Suhiira;n9877661 said:
Of particular interest are the points at 14:06 ... yes ... current AI are already doing things we can't control short of turning them off!
The idea that the people behind AlphaGo "didn't know" how it chose its moves is taken out of context: AlphaGo and the technology behind it are designed to make their "own" choices based on, among other things, the maximization of expected future rewards and the simulation of many possible future scenarios. That is, things humans could in principle do, but that are too complex for us to do efficiently and correctly. Its choices reflect the best possible move at any given time, averaging what, based on experience and analysis, is more valuable and more probable. A system designed this way makes "its own" choices, and sure, the designer may not be aware of the underlying reasons (i.e. the state of its knowledge representation, or the connection weights of its "deep" neural network) for some particular choice, but the methods behind it are well understood. Such an AI program could also provide an explanation for its choices, which might (or might not) make them reasonable and relatable to us. A huge part of what we consider appropriate, ethical behavior in humans comes from the explanations, not from the actions themselves: for example, killing a person vs. killing a person because your own life was in danger. The infamous Microsoft chatbot is, as expected, a product of the poisonous environment it was trained in. Like raising a child in a maximum security prison...
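To make "maximizing expected future rewards by simulating many possible future scenarios" concrete, here's a toy sketch. This is emphatically not AlphaGo (no search tree, no neural network); the moves and win probabilities below are made up for illustration:

```python
import random

def simulate(move, rng):
    """One hypothetical rollout: play a random future to the end and return
    1 for a win, 0 for a loss. Win probabilities here are invented."""
    win_prob = {"a": 0.40, "b": 0.55, "c": 0.50}[move]
    return 1 if rng.random() < win_prob else 0

def choose_move(moves, n_rollouts=10_000, seed=0):
    rng = random.Random(seed)
    # Averaging many simulated futures estimates each move's expected reward
    estimates = {
        m: sum(simulate(m, rng) for _ in range(n_rollouts)) / n_rollouts
        for m in moves
    }
    return max(estimates, key=estimates.get), estimates

best, estimates = choose_move(["a", "b", "c"])
print(best, estimates)   # the program "chose" b on its own
```

The program makes "its own" choice, and the designer may not have predicted which move it would pick, but the method (estimate expected values by simulation, pick the maximum) is completely understood.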
So yes, there are and have been AI systems that learn on their own, make their own decisions and become better than humans at playing Backgammon, Chess and Go, recognizing objects, manipulating or compositing images, making financial decisions, even medical decisions. But they are all doing very specific things, basically optimizing the models and functions that are either given to them or generated from their own observations and success rates. A blinking light is another example of a device doing something you cannot control other than by turning it off. Then again, many people seem to think regular desktop computers are nothing short of astounding and seemingly "intelligent", and do not understand how they can do such "amazing" things. In other words, the "no control" is, in a way, part of the autonomy we want AI programs to have. We are still pretty far from truly self-sufficient robots and AI programs, at least for anything other than very specific tasks.
Within the deep learning community there is a strong tendency to simply state that not much is known about "how" a system reaches a conclusion, and so on. This mostly reflects the inability of the researcher or programmer to concisely describe the system's internal state, or to assess which of the many, many transformations or filters the system ended up using, out of many, many possible combinations. It doesn't mean the system dynamics are a total mystery. AlphaGo, again, used a combination of deep learning and other methods.
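As a trivial illustration of why "hard to describe" is not "total mystery": in a small network, the entire internal state is a handful of arrays you can print and inspect, and the dynamics are plain matrix products. The difficulty only appears when there are millions of weights to summarize. (The weights below are random stand-ins, not a trained model.)

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)  # layer 1: 4 inputs -> 3 units
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)  # layer 2: 3 units -> 2 outputs

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2              # output scores

x = np.array([1.0, 0.0, -1.0, 0.5])
print(forward(x))  # the "decision"...
print(W1, W2)      # ...and the complete internal state that produced it
```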
In my main area of research, AI planning and decision making under uncertainty, the math that makes autonomous decision-making possible is very well understood (and a lot of it is modeled after, or consistent with, known processes at the neural level). But as humans we cannot foresee, for some particular configuration of the (planning) world and some given series of events, which action will be best in which state, simply because the problems are massive (billions upon billions of possible world states) and it's too much information for a person to process exactly. Humans are particularly good at approximating and extrapolating abstract information from a few learning trials, though many trials are required to become an expert at something. Computers, instead, often rely on estimating probabilities and "abstracting" by aggregating states together using explicit mathematical criteria, because this is what computers are good at, with particularly successful examples such as TD-Gammon and AlphaGo. Something like quickly forming abstractions from very few learning examples is currently an open research problem.
A valid answer to "why did the program choose that?" is: because it's optimizing a value function defined in terms of goals, and so far that is the action that maximizes a combination of expected value and probability. But that's not very satisfying, because it's mostly answering how, not why.
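For the curious, here's what that optimization looks like in its simplest form: value iteration on a made-up toy problem. Real planning problems have billions of states, which is exactly why humans can't carry out this computation exactly; the states, probabilities and rewards below are invented for illustration:

```python
# transitions[s][a] = list of (probability, next_state, reward)
GAMMA = 0.95
transitions = {
    "start": {"go_risky": [(1.0, "risky", 0.0)],
              "go_safe":  [(1.0, "safe", 0.0)]},
    "risky": {"push": [(0.6, "goal", 10.0), (0.4, "start", -5.0)]},
    "safe":  {"walk": [(1.0, "goal", 4.0)]},
    "goal":  {},  # terminal state
}

V = {s: 0.0 for s in transitions}
for _ in range(200):  # iterate until the value function converges
    for s, acts in transitions.items():
        if acts:
            # An action's value combines probability and expected reward
            V[s] = max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                       for outcomes in acts.values())

print(V)  # "why that action?" -> because it maximizes this value function
```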
In part of the video, Elon Musk suggests that a solution for the "lack of control" is focusing on AI-based implants or enhancements. That's essentially what computers already are: they enhance and expand our ability to perform complex calculations; it's just that we currently don't use neural interfaces or anything similar. The issue of whether some people will have better enhancements, or be able to afford them at all, is the same as it was with computers.
Suhiira;n9877661 said:
Any realistic portrayal of AIs must deal with these issues, because if they're "just another person" what's the point of having them in a game?
Yes, I agree. Although the "android so advanced it's just another person" argument leads to questions about what it really means to be human, that'd probably work better in a book than in a game.
We've raised a few points: AI-based neural enhancements (object tracking/identification, quick motion/path planning, intuitive algebra, probability estimation based on simulation, etc.), largely autonomous programs and robots that do not necessarily look or act like a human (and are not necessarily self-aware), and so on. None of this requires a fantastic, super-intelligent, self-aware and evil AI system. Something more realistic, but in that direction, could be a public service controller system that manages traffic and other things, starts responding drastically to e.g. vandalism, and, for whatever reason (someone deliberately messing with how it forms its own categories), has a hard time correctly identifying who is a potential vandal and who isn't.
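And as a hedged sketch of that last failure mode, here's a deliberately trivial nearest-centroid "classifier", trained once on clean labels and once on partly relabeled ("poisoned") ones. The single feature and all the scores are made up:

```python
def centroid(points):
    return sum(points) / len(points)

def classify(score, c_vandal, c_normal):
    # Assign whichever category's centroid is closer
    return "vandal" if abs(score - c_vandal) < abs(score - c_normal) else "normal"

# One invented feature per person, e.g. a "suspicious activity" score
vandals = [8.0, 9.0, 7.5, 8.5]
normals = [1.0, 2.0, 1.5, 2.5]

clean = (centroid(vandals), centroid(normals))
# Poisoned training set: half the normal examples were relabeled "vandal"
poisoned = (centroid(vandals + normals[:2]), centroid(normals[2:]))

for score in [2.0, 4.0]:
    print(score, classify(score, *clean), classify(score, *poisoned))
# 2.0 is "normal" either way; 4.0 becomes a "vandal" only under poisoning
```

The system still works exactly as designed; it's the categories it was given (or tricked into forming) that are broken, which is a much more realistic danger than spontaneous malice.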