Sardukhar;n10855601 said:
It's fine to say, "hey, that wouldn't make it past the staff meeting" only, that's a staff meeting of people and people screw up. Planes crash, traffic lights fritz, your HDD fails out of the blue, your phone catches fire. There is no reason to think the same failures of design and production won't plague AI and robotics.
Well said, I completely agree.
Also, all it takes is one unexpected breakthrough to completely change the world as we know it. Lawmakers cannot and do not keep up with the rapid advance of technology, as you can see from the ongoing copyright wars on YouTube and other platforms (even when content is clearly "fair use"), so legal regulation is probably off the table during the first few years of a true, dangerous AI's existence.
I do worry about the future, but I also understand there's nothing I can do about it. A super-intelligent AI with no regard for humans (either by design or via a bug) deciding to smack a nuclear button isn't something I have any control over.
volsung;n10848761 said:
The main difference between that and our reality is that in our world complex systems don't just spontaneously appear. Such a robot would be carefully designed and engineered and lots of experts would fully understand the math and the technology that makes it work. Even if it is capable of making its own choices, its internal criteria would be known to us, exactly like how we train a dog or raise a child.
What is the benefit of understanding its internal criteria? If some random noise upsets your dog, it might lash out and bite you no matter how well you've treated it, no matter how well you thought you understood those criteria. I know because it's happened to me. No, I didn't put the dog down (I don't buy into the "OMG IT TASTED BLOOD NOW IT'S A LETHAL KILLING MACHINE EUTHANIZE IT" thing), but it happened.
Your child may grow up to be a psychopath no matter how well he or she was raised, even with no childhood trauma (that you know of).
Maybe something happened to him or her that you weren't around to see, that you weren't around to monitor. Maybe the child found a benefit in hiding something from you, and that hidden thing turned into a future personality disorder. Humans are not perfect and cannot predict these things with complete accuracy.
There are exceptions to every rule, outliers to every norm. As Suhiira said on the last page, can we really afford even a single "oopsie" with AI? Maybe now we can - the most we have to worry about currently is an AI beating us at StarCraft or chess or whatever. But 50 years from now?
I'll admit I haven't read every single one of your posts in this thread, so apologies if you already explained your reasoning here. Anyway, much like Su, I am not against AI research either. I just also think it should be done slowly, carefully, and probably not be left in the hands of corporations without serious and regular official oversight. Not "Oh, I'll do a quick scan once a year," but frequent, thorough checks to make sure everything is happening above-board.
The problem? As I said earlier, the wheels of government turn exceedingly slowly. And with so many older individuals in office who do not understand technology (one senator asked Zucky Boy how Facebook plans to remain free... "Senator, we run ads." DUH!), things will move even more slowly as lobbyists and ordinary citizens attempt to explain things to them.