Hello again fellow thinkers.
Lately I've had a little (just a little) more time on my hands, and I found out there's a whole movement online (maybe offline too) of people warning about "the dangers of AI", which seems to stem from a modern phenomenon in which business entrepreneurs are treated as scientific authorities. One such doomsayer is the well-known Elon Musk, a businessman who clearly needs to get a better grasp of what Artificial Intelligence actually is and stands for.
Musk (and probably others) warns about some fiction-inspired robot apocalypse where "AIs" (I've previously discussed how wrong that term is) surpass human intelligence and control all or most facets of our lives. To be fair, I do understand the dangers of autonomous, weaponized machines (we're living them now with the drone program). Since Musk appears to be a defender of all things conservative and military, it is to be expected that his idea of how to apply autonomous systems is, of course, weapons of mass destruction.
But what Musk apparently doesn't know is that AI is not an entity; it is a research field. That it has been around since the 1950s. That it is currently an integral part of our lives, with applications present in electronic devices, digital services and entertainment. And that we're only starting to understand reasoning, memory, learning and planning at the level of human cognition, trying to combine different explanations from the behavioral, mathematical and physiological domains.
Other alarmists warn of the dangers of machines that "only use pure logic" and would therefore kill children, and so on. Or the horrible shock of realizing "the machine learned by itself!". These are ridiculous claims by ridiculous people, judging from mainstream media, science fiction and business investors. Meanwhile, scholars focus on actually understanding human cognition and implementing its elements in devices that will make our lives easier, because understanding ourselves is essential to our survival, and because autonomous devices can truly help around the house and assist in hospitals. We need autonomous learning so machines can improve without constant human guidance, but the idea that machines could reach human-like intelligence while relying on pure implicative logic is absurd. The argument is similar to saying: "if dogs could talk, they'd just talk about shitting and eating". IF dogs could talk, the underlying neural and cognitive apparatus facilitating that complex function would give them access to a much wider worldview. Sadly, many alarmist online comments come from computer-related professionals who understand programming but know nothing of AI, human cognition or the evolution of the human brain.
But the truth is that the dangers of "dangerous AI" are real. As real as the dangers of, say, a psychotic murderer on the loose. Or actual weapon design. Whether we decide to build weaponized, autonomous killing machines is up to us; they certainly don't build themselves, and we can't blame the gun for killing an innocent person. Even though we are many years away from the fictional level of AI that average people envision, it IS time to ask ourselves what we want to do with it, should it ever arrive.
Here are some opinions from people who actually work in the field:
http://www.computerworld.com/articl...ay-elon-musks-fears-not-completely-crazy.html
People like Musk aren't stupid, they're just misinformed. But it's concerning that average people listen to them more than to the scholars whose entire lives have been devoted to studying and understanding these things.
Thoughts?