Artificial Intelligence vs. Intelligent Machines

volsung;n10848761 said:
its internal criteria would be known to us, exactly like how we train a dog or raise a child.

Yeaah..not good examples. I've done these things, multiple times. Results Wildly Vary. Yes, even with dogs. Magnificently, when it comes to children. There are patterns absolutely, but it's just not predictable over the course of the education. More so with dogs, of course, but even so, few guarantees.

If this is how well you think experts will understand what their AI is going to become, yikes.

If it was a poor example, less yikes. Mild yikes.
 
Sardukhar;n10852321 said:
Yeaah..not good examples. I've done these things, multiple times. Results Wildly Vary. Yes, even with dogs. Magnificently, when it comes to children. There are patterns absolutely, but it's just not predictable over the course of the education. More so with dogs, of course, but even so, few guarantees.

If this is how well you think experts will understand what their AI is going to become, yikes.

If it was a poor example, less yikes. Mild yikes.

Heh OK you may have a point: it was a bad example because the analogy of raising a child or a dog should be for non AI experts, for whom these machines are simple black boxes.

I will admit I know nothing about raising children, but I certainly have experience raising dogs. Are you somehow suggesting you cannot educate a dog and predict the outcome of that education? Sure there are individual differences, but we certainly know a lot about how they respond to training as a species: they are social creatures, their responses to certain stimuli are consistent, and this is reflected in long-term behavioral adaptation. Some dogs learn faster than others, some are hard to train, etc. Some have genetic issues, for instance from overbreeding, that manifest as erratic or violent behavior. We also adapt our teaching to reflect how we perceive the dog is progressing (because we are initially unaware of their exact individual differences and preferences). Dogs in particular have been domesticated for so long that they are an integral part of human societies, and everybody can own and raise a dog successfully (you don't have to be an ethologist).

With humans we also understand many of the mechanisms that dictate behavior. Do you really think it's impossible to detect whether a child has psychopathic tendencies that should be addressed, or that violent circumstances may lead to trauma? I'm not talking about how to raise a child so they become the next Mozart or the next Gauss.

Anyway, regarding AI, most areas today are very well understood, but communication outside the field is pretty poor. A more concrete example of an adaptable "intelligent" robot would be if we had commercial robotic assistants. They might develop very different "personalities" depending on their owners, and yet respond to the same criteria: the indecisive owner who refuses to wear his hearing aid will probably end up with a robot that constantly and very loudly asks for confirmation before attempting anything, while the very quiet and direct owner will probably have a robot that simply does as it is told. In both cases the robots are maximizing their internal expected rewards by learning from experience, within very well understood parameters.

This also means that, unless somehow artificially restricted, it could be possible to have your robotic assistant mistreat your guests and maybe even hurt them. It's not like we don't know how to create a murderous adult. Modern panels on ethics and morality in AI & robotics actually focus on this issue: whether ethically acceptable behavior should be internally controlled or dictated externally. One big issue here is that we don't all agree on what is "ethically correct". The point being that regular people can use, train and interact with these machines with only an intuitive understanding of their behavior.
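To make "maximizing their internal expected rewards" a little more concrete, here is a toy action-value (bandit-style) sketch in Python. It is not any real robot's code: the actions, reward numbers and owner names are all invented for illustration. The point is that the exact same learning rule, fed different owner feedback, settles on different "personalities".

```python
import random

# Toy action-value learning: one update rule, two hypothetical owners,
# two resulting "personalities". All actions and rewards are made up.
ACTIONS = ["ask_loudly_for_confirmation", "just_do_the_task"]

def train(owner_feedback, episodes=500, alpha=0.1, epsilon=0.1):
    q = {a: 0.0 for a in ACTIONS}          # estimated reward per action
    for _ in range(episodes):
        # epsilon-greedy: mostly pick the best-looking action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = owner_feedback(action)              # how this owner reacts
        q[action] += alpha * (reward - q[action])    # incremental update
    return max(q, key=q.get)

# The indecisive, hard-of-hearing owner rewards loud confirmations.
hard_of_hearing = lambda a: 1.0 if a == "ask_loudly_for_confirmation" else -0.5
# The quiet, direct owner rewards the robot for simply doing as it is told.
quiet_and_direct = lambda a: 1.0 if a == "just_do_the_task" else -0.5

print(train(hard_of_hearing))   # ask_loudly_for_confirmation
print(train(quiet_and_direct))  # just_do_the_task
```

A real assistant would have sensors, states and far richer reward signals, but the principle of drifting toward whatever the owner reinforces is the same, and the parameters of that drift are well understood.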

I don't know what people think AI researchers do but we are always expected to include either analytical (mathematical) proofs or statistically significant experimental validation if we want to publish in any remotely reputable conference or journal. A robot that sometimes confuses a human with a coffee cup or that bumps into people's shins wouldn't even make it past the weekly staff meeting.
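Just to show what "statistically significant experimental validation" looks like at its simplest, here is a toy sketch (every number is invented): success counts for a hypothetical new method versus a baseline over repeated trials, with a standard significance test on the outcome.

```python
from scipy import stats

# Hypothetical success/failure counts over 100 trials each; the numbers are
# made up purely to illustrate the reporting, not taken from any real paper.
#            successes  failures
table = [[92,  8],   # new method
         [78, 22]]   # baseline

odds_ratio, p_value = stats.fisher_exact(table)
print(f"p = {p_value:.4f}")   # p < 0.05 is the usual bar for "significant"
```

Reviewers expect something of this sort (or a formal proof) before a result is even considered, which is why the shin-bumping robot dies at the staff meeting.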

Edit: just wanted to add there is a major difference between robots with AI and biological creatures. Mechanisms can be proven to operate correctly and can be systematically designed and built within minimal error parameters. Humans and dogs are the product of a greedy local optimization rule from natural selection and there is some amount of variability within the species, leading to all sorts of predispositions and disorders.
 
volsung;n10854661 said:
I don't know what people think AI researchers do but we are always expected to include either analytical (mathematical) proofs or statistically significant experimental validation if we want to publish in any remotely reputable conference or journal. A robot that sometimes confuses a human with a coffee cup or that bumps into people's shins wouldn't even make it past the weekly staff meeting.

But robots have done this, plenty of times. The Roomba, the early versions of the Boston Dynamics robots...

I think using current tech and methods to predict future tech is dangerous, because, well, future tech. That's what has people on edge.

It's fine to say, "hey, that wouldn't make it past the staff meeting" only, that's a staff meeting of people and people screw up. Planes crash, traffic lights fritz, your HDD fails out of the blue, your phone catches fire. There is no reason to think the same failures of design and production won't plague AI and robotics.


volsung;n10854661 said:
Edit: just wanted to add there is a major difference between robots with AI and biological creatures. Mechanisms can be proven to operate correctly and can be systematically designed and built within minimal error parameters. Humans and dogs are the product of a greedy local optimization rule from natural selection and there is some amount of variability within the species, leading to all sorts of predispositions and disorders.

Mechanisms fail all the time. Often. Sometimes under stress but sometimes from crappy design and sometimes from who-knows-why. Gremlins. I'm no scientist, but I run heavy equipment and believe me, there is lots of variety with mechanisms. The more complex the higher the chance for failure.

 
Sardukhar;n10855601 said:
It's fine to say, "hey, that wouldn't make it past the staff meeting" only, that's a staff meeting of people and people screw up. Planes crash, traffic lights fritz, your HDD fails out of the blue, your phone catches fire. There is no reason to think the same failures of design and production won't plague AI and robotics.

Well said, I completely agree.

Also, all it takes is one unexpected breakthrough to completely change the world as we know it. Lawmakers cannot and do not keep up with the rapid advance of technology, as you can see from the ongoing copyright wars on YouTube and other platforms (even when content is clearly "fair use"), so legal regulation is probably off the table during the first few years of a true, dangerous AI's existence.

I do worry about the future, but I also understand there's nothing I can do about it. A super-intelligent AI with no regard for humans (either by design or via a bug) deciding to smack a nuclear button isn't something I have any control over.

volsung;n10848761 said:
The main difference between that and our reality is that in our world complex systems don't just spontaneously appear. Such a robot would be carefully designed and engineered and lots of experts would fully understand the math and the technology that makes it work. Even if it is capable of making its own choices, its internal criteria would be known to us, exactly like how we train a dog or raise a child.

What is the benefit of understanding its internal criteria? If some random noise upsets your dog, it might lash out and bite you no matter how well you've treated it, no matter how well you thought you understood those criteria. I know because it's happened to me. No, I didn't put the dog down (I don't buy into the "OMG IT TASTED BLOOD NOW ITS A LETHAL KILLING MACHINE EUTHANIZE IT" thing), but it happened.

Your child may grow up to be a psychopath no matter how well he was raised, even with no childhood trauma (that you know of).

Maybe something happened to him or her that you weren't around to see, that you weren't around to monitor. Maybe the child found a benefit in hiding something from you, and that hidden thing turned into a future personality disorder. Humans are not perfect and cannot predict these things with complete accuracy.

There are exceptions to every rule, outliers to every norm. As Suhiira said on the last page, can we really afford even a single "oopsie" with AI over time? Maybe now we can - the most we have to worry about currently is it beating us at Starcraft or Chess or whatever. But 50 years from now?

I'll admit I haven't read every single one of your posts in this thread, so apologies if you already explained your reasoning here. Anyway, much like Su, I am not against AI research either. I just also think it should be done slowly, carefully, and probably not be left in the hands of corporations without serious and regular official oversight. Not "Oh I'll do a quick scan once a year," but frequent, thorough checks to make sure everything is happening above-board.

The problem? As I said earlier, the wheels of government turn exceedingly slowly. And with so many older individuals in office who do not understand technology (One senator asked Zucky Boy how Facebook plans to remain free... "Senator, we run ads." DUH!), things will move even more slowly as lobbyists and ordinary citizens attempt to explain things to them.
 
volsung;n10854661 said:
Do you really think it's impossible to detect whether a child has psychopathic tendencies that should be addressed, or that violent circumstances may lead to trauma?
Actually I think exactly this.
School shootings are a perfect case-in-point.
We cannot predict human behavior, yet you imagine we can somehow predict an AI?

Sardukhar;n10855601 said:
It's fine to say, "hey, that wouldn't make it past the staff meeting" only, that's a staff meeting of people and people screw up. Planes crash, traffic lights fritz, your HDD fails out of the blue, your phone catches fire. There is no reason to think the same failures of design and production won't plague AI and robotics.
And every reason to expect they will.
As you mentioned, the more complex the system the more points of error/failure exist.
 
I mean you can predict what a simple machine will do, but that won't ever be an A.I. just like a single neuron is easy to model but when you have 83 billion of them it gets really tough to make even simple approximations.

I mean we don't even know if A.I. is something that can be built with software.
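For what it's worth, "a single neuron is easy to model" really is true at the textbook level: a leaky integrate-and-fire neuron fits in a few lines of Python (the constants below are illustrative defaults, not taken from any particular study). Wiring up tens of billions of these, with trillions of connections, is where it stops being easy.

```python
# Minimal leaky integrate-and-fire neuron. Membrane potential decays toward
# rest, is pushed up by the input drive, and emits a spike when it crosses
# the threshold. All constants are illustrative.
def simulate_lif(input_drive, steps=1000, dt=1e-3,
                 tau=0.02, v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    v = v_rest
    spike_times = []
    for step in range(steps):
        dv = (-(v - v_rest) + input_drive) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossed: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

print(len(simulate_lif(0.020)))        # spike count for a constant drive
```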
 
Hoplite_22;n10857391 said:
I mean you can predict what a simple machine will do, but that won't ever be an A.I. just like a single neuron is easy to model but when you have 83 billion of them it gets really tough to make even simple approximations.

I mean we don't even know if A.I. is something that can be built with software.
I have no real doubt an AI will eventually be created.
I have SERIOUS reservations about anyone that thinks we can possibly predict, or control, the thought processes of an intelligence totally unlike anything we've ever dealt with before.

Expect the unexpected, NEVER assume you know what's going to happen or how it will react.
Go very, very cautiously.
 
Hoplite_22;n10857391 said:
I mean you can predict what a simple machine will do, but that won't ever be an A.I. just like a single neuron is easy to model but when you have 83 billion of them it gets really tough to make even simple approximations.

I mean we don't even know if A.I. is something that can be built with software.

As weird as it sounds, I believe that Deep Learning as a whole will accelerate GPU processing techniques, which in turn will help simulate those neurons much faster.

Maybe just like the way we work.

Our neurons work because they get enough oxygen and glucose and whatever else is required to activate them. So in one way, neurons are helped by external factors like food and water, but in turn they give us the ability to search for more food and water. Isn't that what the world lives for anyway?
 
Finally have the time and will to post again.

Listen, I know I said a number of things and probably caused more confusion than anything else. The whole point is that artificial intelligence EXISTS because it is an area of study and NOT AN ENTITY, but it is 1) not at the level of general intelligence humans have, and 2) on a function basis (planning & decision-making, pattern recognition, object recognition, etc.) much more advanced than people think. This means we are only decades away from having functional service robots at home (e.g., "Rob, bring me my coffee!") that act largely autonomously and interact with us, but nowhere near having a robot that will discuss philosophy with us (if that is at all possible).

The other point that I failed to explain is that no matter what you dislike/fear/disbelieve about machines with AI, the same applies to humans. "We can't predict their behavior", "They can be dangerous to society", etc., etc. Even the more existential arguments like "consciousness" are applicable to both humans and machines. A sufficiently advanced autonomous, humanoid robot will have learning/adaptation capabilities similar to those of a human, if it is supposed to operate in human circles (meaning it will learn the behavior that is expected of it, and not start killing everybody for no reason). And yes, there might be mechanical/programming/whatever errors, but nothing truly dangerous should be released as a consumer item (despite there being numerous counterexamples, like exploding smartphones...). Again, nothing very different from what we have and use nowadays.

The fear of an all-powerful, all-knowing, world-dominating AI program is about as rational as the fear of an alien invasion, and yet that doesn't stop anyone from exploring and understanding space. Don't get your "AI" digest from corporations and youtubers, is all.

Now there is a real threat regarding the use of AI-based tools, and a good example was that murderbots video. Data gathering and mining easily leads to profiling and surveillance and yet everyone nowadays happily uses Facebook, Instagram, etc. which is potentially more dangerous and unethical than any autonomous machine.

Back to my first post and the reason for this thread: in sci-fi games, I wish they referred to entities using nouns, such as "intelligent machine" or "AI program", not something like "an AI", i.e. a disembodied "intelligence"... which is just as silly as saying "an artificial digestion" (i.e. a process).
 
Cyberpunk is science fiction & therefore the underpinning fabric of reality is the Rule of Cool;)

Heh, well, you are somewhat right even if you were kidding, because it is fiction after all. But I also think that because it is science fiction, ideas should at least not contradict modern science. For example, a story about an alternate reality where living species are created by deities and did not evolve from previous species is fantasy, not sci-fi. The insane-yet-rudimentary, 100% logical killer robot with 1970s programming is one of several myths that shouldn't belong in sci-fi. Authors like Philip K. Dick, arguably a big influence on what we call "cyberpunk", very skillfully focused on truly relevant topics like perception and self-awareness, and chose very appropriate terms (creating new words, or simply "android"). Many of the classical, "technical" sci-fi authors made factual errors and agreed to correct them when people pointed them out. So yeah, unlike fantasy, which has no restrictions, sci-fi should probably follow some guidelines. Just my opinion.
 
Cyberpunk is science fiction & therefore the underpinning fabric of reality is the Rule of Cool;)
Heh, well, you are somewhat right even if you were kidding, because it is fiction after all. But I also think that because it is science fiction, ideas should at least not contradict modern science.

Very much this.

The Rule of Cool does not, at least in Cyberpunk 2020, wildly contradict modern science. People aren't teleporting because it's cool, or flying around on leg-mounted jetpacks, or fighting with lightsabers.

So the underpinning fabric of reality is reality, frankly. Then framed through what is Cool - sometimes. Lots of not cool stuff is present.
 
There are different levels of sci-fi: hard sci-fi, where the laws of physics are strictly adhered to; normal sci-fi, where they are mostly followed and there is a consistent internal logic for the things that aren't; and science fantasy, which is just magic hand-waving dressed up as tech.

Some of the best sci-fi explores the bounds beyond physics, following a line of reasoning to discover what would happen if the impossible were possible. What would it mean if people could teleport? What if a Strong AI governed every aspect of your life? What if we are all trapped in a simulation?

2077 should be logical, plausible, and consistent, but not necessarily hamstrung by hard science. Do cool things, but then show us how it works and what it means for people.
 
2077 should be logical, plausible, and consistent, but not necessarily hamstrung by hard science.

Totally agree, but by being logical, plausible and consistent it would be adhering to some form of scientific background. For example, the super-advanced yet super-stupid killing robot is very illogical and inconsistent with the technology required to build it in the first place. Something like an "intelligent" clockwork machine wouldn't be very different from a golem (e.g., the Devil of Caroc from PoE).

Do cool things, but then show us how it works and what it means for people.

I actually think the best sci-fi doesn't overexplain things, and it should avoid showing how things work and instead focus on the second part, what it means and its implications. Fictional "technical" details easily become Star-Trek-level technobabble.
 