Sardukhar;n9236811 said:
Heh what a joke. A hundred years ago that "journalist" would've been writing about these new mechanical horses that move on their own and might kill us all one day. Some people call them "cars".
It's amazing how little AI is understood outside of academia, and how much of the "news" and argument out there is based on fiction and FUD. If these reporters knew what goes on inside university labs they'd shit their pants so hard their tinfoil hats would fall off.
I'll just remind you that we're at the point where we can prove solutions exist for a huge variety of problems, including learning appropriate behavior, planning and decision making under unknown probabilities, and visual recognition of objects and their function in human society, to name a few. I know of linguistics groups running programs on simple robots to study how they can improvise languages/symbols/codes. We're working towards understanding and modeling many cognitive processes.

The problem right now for intelligent robotics is mostly one of scalability: putting all of this together and making it run fast enough on crappy robot hardware is hard. For particular problems, AI programs can undoubtedly beat humans and adapt their actions based on observed performance. So if someone responds well to politeness, that's the key to success, and the robot should use it more often. Programs have beaten humans in creativity tasks too! All of this just takes too damn long right now, for many complicated reasons, but mostly because it's done in a very naive way: too many useless options get considered.
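To give a concrete flavor of "adapt actions based on observed performance": one standard textbook approach is an epsilon-greedy bandit, where an agent mostly repeats whatever has worked best so far and occasionally tries something else. This is just a minimal sketch of that idea; the action names and the simulated response rates are made up for illustration, not taken from any particular lab's system.

```python
import random

def epsilon_greedy_bandit(actions, reward_fn, trials=1000, epsilon=0.1):
    """Try actions, track their average observed reward, and favor what works."""
    counts = {a: 0 for a in actions}
    values = {a: 0.0 for a in actions}
    for _ in range(trials):
        if random.random() < epsilon:
            action = random.choice(actions)       # explore: try something random
        else:
            action = max(values, key=values.get)  # exploit: pick best-so-far
        reward = reward_fn(action)
        counts[action] += 1
        # incremental update of the running mean reward for this action
        values[action] += (reward - values[action]) / counts[action]
    return values

# Hypothetical person who responds well to politeness most of the time
random.seed(0)
response_rates = {"polite": 0.8, "neutral": 0.5, "blunt": 0.2}
learned = epsilon_greedy_bandit(
    ["polite", "neutral", "blunt"],
    lambda a: 1.0 if random.random() < response_rates[a] else 0.0,
)
```

After enough trials the learned value estimates track the underlying response rates, so the agent ends up choosing "polite" almost all the time, which is exactly the "use what works more often" behavior described above.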
We tend to think of humans as mystical creatures and don't realize we can be quite similar to machines: controllable and predictable. Magicians and illusionists make a living out of this. We often run on a very basic, impulse-driven system and only occasionally engage our actual reasoning or problem-solving skills. That doesn't mean we're simple or easy to understand; quite the opposite, in fact. Acting in this complex haze helps us solve most issues quickly and painlessly, though sometimes with mistakes.
Whether machines can or will ever achieve self-awareness is a different story, but my position is that we cannot assume a creature lacks intentional mental states simply because its physiology is different from ours. Nothing suggests our minds are a unique causal effect of our particular central nervous systems.
I don't see the point in using fiction to defend a position, though. Fiction speculates, and is free to do so and inspire us.
At this point, however, we KNOW we will soon have service robots slowly integrating into human society. They will harvest our crops, drive us places, and help us find our slippers at home. They will also beat us at every game of logic, at financial investments, at risk assessment, and so on. They're advanced, moving computers with arms and legs; our current vision of the future is not bleak at all.
In Victorian-era England, Butler wrote a novel (Erewhon) about a place where steam-driven machines enslaved humanity by making people continuously service them with repairs and upkeep. Fear of technology is nothing new. What you find in most technology zines, blogs and YT vids (and hear from some public personalities) about "AI" is the equivalent of using Goat Simulator to warn about the dangers of physics.