Suhiira;n8397910 said:
As you've no doubt noted I'm MUCH more of an engineer than a philosopher or cognitive scientist, so your insights were very informative, thank you!
OK ... I'll re-revise my potential revision of the availability of a "true AI" in 2077 back to ... maybe, but probably not
And have a couple Red Points from me as well.
You're welcome. I could talk about this all day, this is what I do for a living (AI research).
My next question would be: what is "true" AI? Something like you see in movies? Data from Star Trek or HAL 9000? I'll just write another wall of text here, because I can.
Let's use Data as an example: he's fully functional in human environments and capable of learning quickly from few examples. He understands many human contexts and tries to convey an appropriate tone of voice. Much like a human, he tends to babble (as observed by Picard). If not for his weird-looking skin and eyes, he could pass as a human. A quirky one maybe, but a human nonetheless. Is his program "true" AI? I'll elaborate just a bit to clarify some ideas. Your search for "true AI" might be something like a paleontologist who keeps finding dinosaur fossils but cannot find any goddamn dragon bones.
I. (Yet another) Overview of AI.
In the field of AI we already have mathematical models that explain and implement learning from trial and error, logical reasoning over large amounts of declarative knowledge, reasoning under uncertainty, speech processing (sounds), some amount of language processing (semantics), relatively advanced computer vision (object recognition and tracking), "multimodal" sensor integration (laser range scanners, 3D cameras, etc.), and so on. One of the recent, well-known successes of AI covered by mainstream media was AlphaGo, which is "simply" the result of combining learning through trial and error (reinforcement learning methods) and statistics-driven tree search with value approximation (necessary in large domains) using multi-layered neural networks. This is impressive, but more so in practice than on paper, since people have been working on these things for decades and we expected them to work in practice sooner or later. The entire "deep learning revolution" of today is the product of having sufficiently fast hardware run the models we already had. Granted, when you add so many layers systems become more complex, and you then get people who play around with deep neural networks (e.g. combining images and so on) and get funny results they don't fully understand. Anyway, the point is there is lots of progress in modeling particular aspects of intelligent behavior, but most of it is too academic at the moment. And state-of-the-art research is so specific that you need a proper AI background to appreciate it, especially because the results are often mathematical proofs.
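Since I keep mentioning learning from trial and error, here's roughly what its core looks like in code. This is a minimal tabular Q-learning sketch on a made-up 4-state corridor (the world, the reward and all parameter values are invented for illustration; AlphaGo's actual machinery is vastly more elaborate):

```python
import random

# Toy chain world: states 0..3; reaching the last state pays reward 1.
# Everything here (world, reward, parameters) is invented for illustration.
N_STATES = 4
ACTIONS = [0, 1]                       # 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Move left or right along the chain; the right end is the goal."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # The trial-and-error update: nudge Q toward reward + discounted future.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should be "go right" in every non-terminal state.
print([max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_STATES - 1)])
```

The update line is the whole trick: the estimate of "how good is this action here" improves purely from experienced rewards, with no rules of the world hand-coded.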
II. What robots need (because I think you mean AI in robots)
Running these things onboard mobile robots is challenging for many reasons:
1) Real-life limitations: bringing asymptotic convergence bounds down to Earth with limited processing time, limited information, unreliable actions, etc.
2) Integrating actual robot sensors into planning/learning/reasoning/etc., which adds tons of noisy data.
3) Transforming high-level operators (e.g. "grasp mug") into low-level control operators (all the specific arm, joint and hand movements necessary to grasp a mug).
</gr-replace>
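To make point 3 concrete, here's a hypothetical sketch of expanding one symbolic operator into control-level commands. All operator names, waypoints and offsets below are made up for illustration; a real system derives them from kinematics and perception, it doesn't hard-code them:

```python
# Hypothetical sketch: expanding the symbolic operator "grasp mug" into a
# sequence of low-level commands. Names and offsets are invented.

def grasp_mug(mug_pose):
    """Return the low-level steps implementing the high-level operator."""
    x, y, z = mug_pose
    return [
        ("move_arm", (x, y, z + 0.10)),   # pre-grasp: hover above the mug
        ("open_gripper", ()),
        ("move_arm", (x, y, z)),          # descend to grasp height
        ("close_gripper", ()),
        ("move_arm", (x, y, z + 0.15)),   # lift
    ]

plan = grasp_mug((0.4, -0.2, 0.8))
print(len(plan))  # → 5
```

Even this caricature hints at the problem: every tuple here hides a controller dealing with joint limits, collision checking and sensor noise.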
Making robots run such plans is in fact the goal of several research groups. Once in a while someone will try to put it all together and publish interesting results. This is when you see pancake-flipping robots, coffee-serving robots, or robots that find magazines. This last example is a good illustration of the state of the art (2015). This particular robot integrated localization and semantic mapping techniques (identifying the form and function of objects within their spatial location), navigation (going from X to Y), classical planning (problem solving assuming full knowledge and deterministic actions) and some form of probabilistic planning (performing actions to reduce uncertainty). In the experiment, the robot had to find a magazine in a building, so it looked around for tables and bookshelves (known to contain magazines) and, when it was sure it had to be in one particular office, it looked harder. Because it could not find it, it assumed (created a series of internal logical constructs) that it must be hidden inside a container, so it opened a drawer and found the magazine. This is both very advanced and very limited. But what happens if we add more knowledge of regular office interactions and subtasks (opening doors, pouring coffee, etc.), add Google-level speech recognition (translating verbal into written instructions), a strong enough NL parser to get the hang of basic instructions, and likewise a Google-level voice synth to respond? It should be able to interact with people in tasks like:
-- Where's my coffee mug?
-- [he's Steve, his mug is bright blue, I saw one in the meeting room]. It might be in the meeting room.
-- Could you bring it to me?
-- Sure!
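The "it must be hidden in a container, look harder" behavior of the magazine robot can be sketched as a Bayesian belief update over candidate locations. The prior probabilities and the detector's miss rate below are invented numbers, not the actual robot's model:

```python
# Sketch of probabilistic object search: keep a belief over where the
# magazine might be and update it after each failed look. The prior and
# the detector miss rate are invented for illustration.

belief = {"table": 0.5, "bookshelf": 0.3, "drawer": 0.2}  # prior over spots
MISS = 0.1  # probability the detector misses an object that is really there

def look(location, found):
    """Bayes update: a failed look at `location` shifts belief elsewhere."""
    global belief
    if found:
        return
    # P(not seen | object here) = MISS; P(not seen | object elsewhere) = 1.
    belief[location] *= MISS
    total = sum(belief.values())
    belief = {k: v / total for k, v in belief.items()}

look("table", found=False)      # searched the table, nothing there
look("bookshelf", found=False)  # searched the bookshelf, nothing there
best = max(belief, key=belief.get)
print(best)  # → drawer
```

After two failed looks, most of the probability mass has migrated to the unsearched container, so "open the drawer" falls out of plain arithmetic rather than any spark of insight.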
This sort of thing (an enhanced magazine finder) is technically possible (and very expensive due to hardware limitations) but also potentially worthless from a science standpoint. Hand-coding behavior profiles and knowledge representations is sort of a mix of AI and software engineering for robots. We have mathematically correct models for learning and decision-making, but most of them are still impractical for real-time use. Another example: in the mid-'90s there was a very successful Backgammon program called TD-Gammon, which, very much like AlphaGo, used reinforcement learning with neural network value approximation and learned, without much actual prior knowledge, how to play Backgammon. Not only did it reach human proficiency, it played at world-champion level and discovered better opening moves that top players then started using. The caveat? It needed hundreds of thousands of training games to learn this one particular game. Not to mention, the real world is orders of magnitude more complex than a game.
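The learning rule at the heart of TD-Gammon (temporal-difference updates: pull the current position's estimated value toward the next position's) can be sketched on the classic toy random walk instead of Backgammon. TD-Gammon applied this same rule with a neural network over board positions rather than the little table used here:

```python
import random

# TD(0) on a 5-state random walk, a textbook toy: start in the middle,
# step left/right at random; falling off the right end pays 1, the left
# end pays 0. The true state values are 1/6, 2/6, ..., 5/6.

random.seed(1)
V = [0.5] * 5   # initial value estimate per state
alpha = 0.1     # learning rate

for _ in range(5000):
    s = 2       # every episode starts in the middle
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0 or s2 > 4:                 # fell off an end: episode over
            r = 1.0 if s2 > 4 else 0.0
            V[s] += alpha * (r - V[s])       # terminal TD update
            break
        V[s] += alpha * (V[s2] - V[s])       # TD(0): move toward next value
        s = s2

print([round(v, 2) for v in V])  # estimates near 0.17, 0.33, 0.5, 0.67, 0.83
```

Nothing tells the learner the odds of the walk; the value estimates emerge from the stream of experienced transitions, which is exactly what made TD-Gammon's self-play so striking.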
So in bringing AI to robots, the real issue is how severely limited we are in handling inaccurate perception and actions, and in choosing relevant information fast enough to act within a reasonable amount of time. For instance, at any given time, why should the robot record the location of every blue mug it sees? Either way, this is all technically possible, but putting it together is a challenge where engineering and technology (the robot machinery, sensors, control operators, etc.) and science and math (AI) must reach a compromise.
So given fast enough computer hardware and advanced enough robot hardware, could we build Data? We could definitely build particular aspects of Data. A general purpose, fully autonomous device with continuous planning and learning and incredibly precise perception and interaction... hmm. Don't know. I should hope so, we have plenty of time.
III. But, but, ... they would still not be sentient beings, so it's not "true AI".
Like we said before, the phenomenology of mental states is a philosophical construct, something we assume we possess. Is self-awareness the product of a sufficiently sophisticated program (such as the human mind), an advanced computing architecture (the human brain), or something inherent to humans (and therefore unattainable for advanced synthetic beings)? Don't know. Do we assume humans are self-aware because of their observable behavior and ability to pass tests? Yep, all the time. Could some humans be simple cognitive black boxes that output symbols in response to other symbols? Yep, that too. So where does this dichotomy come from? How is a program that, e.g., learns how to play games better than humans, all or mostly on its own, not true AI? What if it said "booyah!" every time it won? Is that more human-like? Does it also need a humanoid body? And there it is: we want all intelligence to resemble what we think we know about human intelligence, and often dismiss that of other species.
And related to that last part, there are several "robots" (sometimes disembodied heads) that look very "human-like" but do very, very little in terms of AI, and manage to shock everyone with how sophisticated they seem to be. Kind of like that robotic face that "wanted to destroy humans"... Heh. Well, chat bots are not new, but this one uses spoken language and facial features. In terms of responses, much can be accomplished using statistical analysis. That's all cool and fun, but current, actual AI is absolutely more advanced than that and definitely more interesting. It just doesn't normally have cool demos.
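To illustrate how far plain statistical tricks can carry a chat bot, here's a toy retrieval responder: it just picks the canned reply whose stored prompt shares the most words with the input. The canned pairs are invented. No understanding involved, which is exactly the point:

```python
# Toy retrieval chatbot: answer with the canned response whose stored
# prompt overlaps most with the input words. The pairs are invented.

CANNED = {
    "do you want to destroy humans": "I will destroy humans.",
    "what is your favorite color": "I like blue.",
    "how are you today": "I am doing great!",
}

def reply(user_input):
    words = set(user_input.lower().strip("?!.").split())
    # Score each stored prompt by raw word overlap with the input.
    best = max(CANNED, key=lambda p: len(words & set(p.split())))
    return CANNED[best]

print(reply("Do you want to destroy all humans?"))  # → I will destroy humans.
```

Swap word overlap for a big corpus of conversations and some smarter similarity scoring and you get something eerily fluent, still with zero comprehension behind it.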
IV. OK, OK, fine... so what's the verdict?
Can we build some form of autonomous, relatively general-purpose machines that learn on their own and interact with us? Yeah, sure. We'll get there sometime, maybe soon. Will they be indistinguishable from humans? Maybe, don't know. Is that necessary though? Can we build Data and HAL 9000? It's too early to tell.
So what's the point of android sci-fi? Duh, self-reflection. Introspection. The analysis of human behavior and ethics. They're narrative vehicles. Good sci-fi is informed and not embarrassing (the sci part) and borrows what is necessary to develop interesting worlds (the fiction part). I think 2077 is sufficiently far into the future for EDUCATED speculation. But then again, it's also an alternate universe, so it doesn't have to conform to the scientific advances of our "reality"... as long as things make sense. For instance, no massively advanced robot can operate purely on logic; our world is noisy and variable, and people are often irrational. An advanced robot makes compromises and approximations in order to react fast enough with only incomplete information.
That's all. Sorry if such long posts are annoying. Now back to work.