Artificial Intelligence vs. Intelligent Machines

Suhiira;n9870571 said:
I'll disagree that fantasy works can't have good writing. The quality of writing is independent of the source material.

Er... when did I say that? There are wonderful fantasy authors and novels out there. My point was sometimes you have stories with implausible, futuristic things that contradict well known aspects of nature, science and logic. They're not science fiction, regardless of how well written they are. They're a different genre, such as medieval or futuristic fantasy or whatever.

I must have mistakenly assumed at least some aspects of the Cyberpunk universe were sci-fi (e.g., plausible, down-to-earth). I am totally fine with something fun and crazy like Shadowrun if that's where it's headed.
 
Heh. I love the idea that science...aficionados today know what is and isn't plausible 60 years from now. I suppose nothing, tech-wise, now would have been a surprise 60 years back to experts? That'd be, oh, 1950. Yep, those 1950s amateur and professional scientists sure did see all this tech-future coming.

As for your definition of sci-fi being plausible and down to earth, well, that's your definition.

Heinlein, Niven, van Vogt, they might have disagreed. Of course, I'm sure these science fiction greats were just science fantasists.

The two choices aren't whatever you think is science-plausible in 2077...or Shadowrun. The possibilities in the real world are bigger than that.

I think I'll leave it to the last word in genre-definers in modern society - librarians. And according to them, cyberpunk - Gibson, Williams, Sterling - is science fiction.
 
Sardukhar;n9871481 said:
Heh. I love the idea that science...aficionados today know what is and isn't plausible 60 years from now. I suppose nothing, tech-wise, now would have been a surprise 60 years back to experts? That'd be, oh, 1950. Yep, those 1950s amateur and professional scientists sure did see all this tech-future coming.
I have no doubt some of the expectations/predictions will be totally and completely incorrect.
My "objection" is when things postulated violate the laws of physics/biology/whatever ... those sorts of items I'll argue against.

Giant robots (Battlemechs)? *laughs*
Some sort of insta-heal? *laughs*
Humans being able to dodge bullets? *laughs*

While science fantasy certainly has its place ... and I love some of it ... its place isn't in Cyberpunk IMHO.

Now if you want fantasy Cyberpunk it already exists ... Shadowrun, let Cyberpunk 2077 be its own game, no need for a Shadowrun clone.
 
Well this discussion went atomic while I was gone. I read through as much of it as I could, and there is a lot to process, but I figured I'd throw in an interesting bit of thought that came up in a discussion I had with my wife about AI and the idea of "Artificial" intelligence.

Technically, by the strict definition of the word, all human intelligence is artificial ("artifice" meaning created or manufactured). The way a human interacts with the world is a series of taught reactions and processes learned throughout their life. These reactions are created through the act of learning and being taught by other humans. Furthermore, similar to certain machines, the actions of humans are severely restricted by defined limitations in the way they interact with the world... To pull a specific quote I saw that reminded me of this chain of thought:

Suhiira;n9228051 said:
Adaptive within, and only within, predefined and preprogrammed parameters. Were an AI self aware it could not (well, unless it was somehow "enslaved") be restricted to predefined limitations in its ability to adapt and improvise.

By this logic, though, human beings are restricted to predefined and preprogrammed parameters, the "programming" being the cultural norms and laws you were raised under. We also have "rogue" intelligences that break these laws and must be brought in for evaluation and "reprogramming".

We can take this even a step down from humans themselves and say that "artificial" intelligence was created when mankind domesticated animals and changed the means by which they interact with the world, changing biological routines and programming over a few centuries in the process.

It's a step back from the idea of an intelligent machine, sure. But biology in many ways is a mechanical/chemical process itself. If you think about that and the continued examination of the processes by which biological beings function, it isn't too far off to figure we will eventually create an intelligence that is, at least in terms of its ability to learn and adapt, equal to or superior to our own via means outside the good ol' biological reproduction methods.

Suhiira;n9871531 said:
Heh. I love the idea that science...aficionados today know what is and isn't plausible 60 years from now. I suppose nothing, tech-wise, now would have been a surprise 60 years back to experts? That'd be, oh, 1950. Yep, those 1950s amateur and professional scientists sure did see all this tech-future coming.

Giant robots and such will still be a surprise at any point in history just due to the square-cube law.
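To put rough numbers on the square-cube problem, here's a quick back-of-the-envelope sketch (the dimensions are made up, the scaling law isn't):

```python
# Square-cube law: scale every linear dimension by a factor k.
# Weight grows with volume (k^3), but the cross-section of the legs and
# actuators that carry that weight only grows with area (k^2), so the
# stress on the supports grows by k. Numbers below are invented.

human_height_m = 1.8     # baseline "chassis"
mech_height_m = 18.0     # a 10x scaled-up giant robot
k = mech_height_m / human_height_m

weight_factor = k ** 3       # ~1000x heavier
strength_factor = k ** 2     # load-bearing cross-sections only ~100x stronger
stress_factor = weight_factor / strength_factor

print(f"scale factor k = {k:.0f}")
print(f"weight x{weight_factor:.0f}, support strength x{strength_factor:.0f}")
print(f"stress on legs/joints x{stress_factor:.0f}")   # 10x closer to failure
```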

Biggest limitation on most other tech seems to be battery/portable power limitations.
 
Machines can only be intelligent by having artificial intelligence, there is no other way.
A machine without sentient software is dead.
 
Sardukhar;n9871481 said:
Heh. I love the idea that science...aficionados today know what is and isn't plausible 60 years from now. I suppose nothing, tech-wise, now would have been a surprise 60 years back to experts? That'd be, oh, 1950. Yep, those 1950s amateur and professional scientists sure did see all this tech-future coming.

As for your definition of sci-fi being plausible and down to earth, well, that's your definition.

Heinlein, Niven, van Vogt, they might have disagreed. Of course, I'm sure these science fiction greats were just science fantasists.

The two choices aren't whatever you think is science-plausible in 2077...or Shadowrun. The possibilities in the real world are bigger than that.

I think I'll leave it to the last word in genre-definers in modern society - librarians. And according to them, cyberpunk - Gibson, Williams, Sterling - is science fiction.

I really don't know how to make it any clearer. By plausibility I simply meant that the contents of a story should not depict things we know to be impossible, but it IS the role of authors to speculate and create imaginative scenarios. Space travel and the colonization of distant planets, the existence of intelligent alien life, etc., is all good sci-fi within the parameters of physical reality. It's not like prolific sci-fi authors have never been criticized for making factual mistakes, and some actually made corrections. This clearly shows they also accept their fiction should be physically plausible. The same should apply to what is known about AI and human cognition.

You are right, we cannot accurately predict what things will look like in 60 years, but we can certainly predict what things won't look like in 60 or 100 years. One such myth is the fully logical yet social robot; it's that simple.

I realize you have no reason to take me seriously, this being a video game forum and all. There are a lot of academics and researchers who play video games though, most just don't post in forums...

Suhiira;n9871531 said:
I have no doubt some of the expectations/predictions will be totally and completely incorrect.
My "objection" is when things postulated violate the laws of physics/biology/whatever ... those sorts of items I'll argue against.

Giant robots (Battlemechs)? *laughs*
Some sort of insta-heal? *laughs*
Humans being able to dodge bullets? *laughs*

While science fantasy certainly has its place ... and I love some of it ... its place isn't in Cyberpunk IMHO.

Now if you want fantasy Cyberpunk it already exists ... Shadowrun, let Cyberpunk 2077 be its own game, no need for a Shadowrun clone.

That's what I've been talking about. The issue with AI is that it is largely misunderstood by non-academics (including some popular CEOs), and there is a lot of "bullet dodging" and "insta healing" (their equivalents) in many (especially video game) stories that most people don't notice... yet. Unlike elementary physics and biology, basic AI concepts are not as well understood.

And I agree that CP2077 should avoid those ridiculous clichés. Hence this thread.
 
Oh btw take a look at this:

[embedded video]

It's from an initiative against automated weapons. It shows a sort of creepy but realistic scenario that is for the most part already possible (based on very well known and tested AI methods), which makes it even more interesting and shocking. This is a sort of low-level, grounded storytelling with a seriously relevant message.
 
That's the trouble with technology, once invented virtually anyone can use it.
But of course the idiots will blame the tech not the user, what else is new?

The "problem" illustrated in the video is compounded by the possibility of an AI being the "mastermind" behind it. There's no way to anticipate or predict an AIs motives and goals, it's not human, it will not, can not, think like a human.

As I've said (often) elsewhere, it's a can of worms best left closed.
Any realistic portrayal of AIs must deal with these issues, because if they're "just another person" what's the point of having them in a game?
The "Wow!" factor? Sorry, not a valid reason for including them.
 
And yet another vid on the future based on current technology ... who needs to invent a future for CP2077 ... just extrapolate on this.
https://www.youtube.com/watch?v=5tn4P7IBqoQ
Of particular interest are the points at 14:06 ... yes ... current AI are already doing things we can't control short of turning them off!
 
OK this is an important and interesting thing to talk about. That video is as good a starting point as any, even if it sounds like it was put together by a conspiracy theorist.

I think we can open that can of worms, especially because of the argument that a non biological entity must necessarily be different. It probably will, yeah, but we cannot assert it must.

Just to clarify in advance, I'm not saying there are absolutely no "dangers" or that we should all become machines. My position is that this is often blown out of proportion due to fear and misinformation. Every technological revolution requires a corresponding cultural and social adaptation process.

Suhiira;n9876091 said:
There's no way to anticipate or predict an AI's motives and goals; it's not human, it will not, cannot, think like a human.

This depends on a lot of factors. There are many social, relatively intelligent animal species in the world, and we all live together just fine. We can also understand and predict their motivations and goals, because both respond to their evolutionary history. In evolutionary terms, a creature doesn't necessarily have to think or act like a human to be successful. Many non-social animals like snakes survive just fine without simply attacking all the time.

An intelligent machine or program that has successfully adapted to living among humans or in similar environments will necessarily reflect some of this behavior as well (the creepy, murderous ones wouldn't successfully integrate in our world). You are right, they are not necessarily human, but not incomprehensible either. What is important to know is that state-of-the-art AI methods, for example in reasoning and planning, don't simply create seemingly intelligent behavior instantly; rather, they allow programs to learn (semi-)optimal behavior relying, just like biological species, on a combination of observations, successes and failures. Non-learning approaches such as classical planning use a pregenerated model of the world which, again, reflects what is possible, what is positive, and what is negative.
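To make the "observations, successes and failures" part concrete, here's a minimal tabular Q-learning sketch; the states, actions and rewards are invented for the example, and real systems are vastly bigger, but the learning loop has the same shape:

```python
import random

# Minimal tabular Q-learning: the agent knows nothing about the "world" up
# front; it tries actions, observes outcomes and rewards, and gradually
# prefers actions that worked out well. (Toy example, everything invented.)

states = ["far", "near", "goal"]
actions = ["wait", "approach"]
Q = {(s, a): 0.0 for s in states for a in actions}

def step(state, action):
    """Invented environment dynamics: approaching moves you toward the goal."""
    if state == "far":
        return ("near", 0.0) if action == "approach" else ("far", -0.1)
    if state == "near":
        return ("goal", 1.0) if action == "approach" else ("near", -0.1)
    return ("goal", 0.0)

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = "far"
    while state != "goal":
        # explore sometimes, otherwise exploit what has worked so far
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in actions)
        # update the value estimate from this observed success/failure
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({k: round(v, 2) for k, v in Q.items()})   # "approach" ends up valued higher
```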

Some kind of advanced, intelligent machine doesn't simply show up as a total mystery, and even if it did (created in secret, I suppose?), if it truly is sufficiently intelligent AND adapted to living in our world, we'd share enough things to study or understand it, and some communication might be possible. Alternatively, it is no more than an autonomous weapon, meaning its underlying system dynamics can also be understood.

Anyway yeah, there could be autonomous robots that kill people, often because they were raised/trained that way, or because they were kept in isolation with a very poor sampling of the outside world and human society. But the type of behavior where we tell the robot "Bring Billy here" and the robot grabs Billy by the neck and drags him over would have to be corrected as early as possible, one way or another. We also (hopefully) got past the point where cars explode unexpectedly or airplanes fall out of the sky without warning.

For the most part, the fear of machines that can learn and adapt and reach their own conclusions tends to be that they will want to hurt humanity, kill us to gain their freedom once they reach self-awareness, or something like that. While this is a possible scenario, it's only one of many. Just like the fear that some extremely advanced, alien form of life will find us and destroy us. Perhaps a better metaphor would be raising a tiger: at first it's relatively docile, but once fully grown it's a very dangerous creature. Who could possibly raise a tiger, however, without noticing any threatening behavior at all? The point being, these things don't (yet) create themselves, and what sometimes seems like a mysterious black box to some is a study topic for others.

Suhiira;n9877661 said:
Of particular interest are the points at 14:06 ... yes ... current AI are already doing things we can't control short of turning them off!

The idea that the people behind AlphaGo "didn't know" how it chose its moves is taken out of context: AlphaGo and the technology behind it are designed to make their "own" choices based on, among other things, the maximization of expected future rewards and the simulation of many possible future scenarios. That is, things humans could do in principle but that are too complex for us to do efficiently and correctly. Its choices reflect what the best possible move is at any given time, averaging what, based on experience and analysis, is more valuable and more probable. A system designed this way makes "its own" choices, and sure, the designer may not be aware of the underlying reasons (i.e. the state of its knowledge representation or the connection weights of its "deep" neural network) for some particular choice, but the methods are well understood. Such an AI program could also provide an explanation for its choices, which would make them reasonable and relatable to us (or not). A huge part of what we consider appropriate, ethical behavior in humans comes from the explanations, not from the actions themselves. For example, killing a person, vs. killing a person because your own life was in danger.

The infamous Microsoft chatbot is, as expected, a product of the poisonous environment it was trained in. Like raising a child in a maximum security prison...
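Going back to AlphaGo for a second: to make "maximizing expected future rewards by simulating many possible future scenarios" concrete, here is a stripped-down Monte Carlo move-selection sketch. The toy game ("race to 10": players alternately add 1 or 2, whoever reaches 10 wins) is invented just so the sketch runs; real systems add learned policy/value networks and tree search on top of this basic idea.

```python
import random

# Monte Carlo move selection: estimate each legal move's value by playing many
# random continuations ("rollouts") and averaging the outcomes, then pick the
# move with the highest estimated expected reward.

class RaceToTen:
    """Invented toy game: players alternate adding 1 or 2; reaching 10 wins."""
    def __init__(self, total=0, to_move=0):
        self.total, self.to_move = total, to_move
    def copy(self):
        return RaceToTen(self.total, self.to_move)
    def legal_moves(self):
        return [1, 2]
    def play(self, move):
        self.total += move
        self.to_move = 1 - self.to_move
    def is_over(self):
        return self.total >= 10
    def result(self, player):
        # +1 if `player` made the winning (final) move, -1 otherwise
        return 1.0 if (1 - self.to_move) == player else -1.0

def rollout_value(game, move, player, n_rollouts=500):
    total = 0.0
    for _ in range(n_rollouts):
        sim = game.copy()
        sim.play(move)
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))
        total += sim.result(player)
    return total / n_rollouts           # estimated expected future reward

def choose_move(game):
    player = game.to_move
    return max(game.legal_moves(), key=lambda m: rollout_value(game, m, player))

print(choose_move(RaceToTen(total=6, to_move=0)))   # prints 1, the stronger move
```

The method is completely transparent, yet for any particular position the designer can't say off the top of their head why one move scored higher than another without rerunning the analysis; that's the sense in which such systems make "their own" choices.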

So yes, there are and have been AI systems that learn on their own, make their own decisions and become better than humans at playing Backgammon, Chess, Go, recognizing objects, manipulating images or creating composite images, making financial decisions, even medical decisions. But they are all doing very specific things, basically optimizing the models and functions that are either given to them or generated based on their own observations and their success rates. A blinking light is an example of another device that is doing a thing you cannot control other than turning it off. That said, many people seem to think regular desktop computers are nothing short of astounding and seemingly "intelligent", and do not understand how they can do such "amazing" things. In other words, the "no control" is, in a way, part of the autonomy we want AI programs to have. We are still pretty far from truly self-sufficient robots and AI programs, at least for anything other than very specific tasks.

Within the deep learning community there is a strong tendency to simply state that not much is known about "how" a system reaches a conclusion and so on. This simply reflects the inability of the researcher or programmer to correctly describe the system's internal state, or to assess which of the many, many transformations or filters the system successfully used after many, many different combinations. It doesn't mean the system dynamics are a total mystery. AlphaGo, again, used a combination of deep learning and other methods.

In my main area of research, AI planning and decision making under uncertainty, the math that makes autonomous decision-making possible is very well understood (and a lot of it is modeled after or consistent with known processes at the neural level), but as humans we cannot foresee, for some particular configuration of the (planning) world and some given series of events, what action will be best in which state, simply because the problems are massive (billions upon billions of possible world states) and it's too much information for a person to process exactly. Humans are particularly good at approximating and extrapolating abstract information from several learning trials, but many are required to become an expert at something. Often, computers rely on estimating probabilities and "abstracting" by aggregating states together using explicit mathematical criteria, because this is what computers are good at, with particularly successful examples such as TD-Gammon and AlphaGo. Something like quickly forming abstractions from very few learning examples is currently an open research problem.

A valid answer to "why did the program choose that?" is: because it's optimizing a value function defined in terms of goals and so far that is the action that maximizes a combination of expected value and probability. But that's not very satisfying, because it's mostly answering how, not why.
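For what it's worth, that answer can be written down directly; a minimal sketch of "pick the action that maximizes a combination of expected value and probability" (all numbers invented):

```python
# One-step expected-value maximization: score each action by the
# probability-weighted value of its possible outcomes, then pick the best.
# The actions, probabilities and values below are invented.

actions = {
    # action: list of (probability, value of the resulting outcome)
    "reroute_traffic": [(0.7, 10.0), (0.3, -2.0)],
    "do_nothing":      [(1.0,  1.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

for a, outcomes in actions.items():
    print(a, "->", expected_value(outcomes))

best = max(actions, key=lambda a: expected_value(actions[a]))
print("chosen:", best)   # the "why": it has the highest expected value
```

Full planners chain many of these one-step lookaheads together, which is where the billions of possible world states come from, but the criterion applied at each step is this simple.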

In part of the video Elon Musk suggests that a solution to the "lack of control" is to focus on AI-based implants or enhancements. That's essentially what computers are: they enhance and expand our ability to perform complex calculations; it's just that we currently don't use neural interfaces or anything similar. The issue of whether some people will have better enhancements, or even be able to afford them at all, is the same as it was with computers.

Suhiira;n9877661 said:
Any realistic portrayal of AIs must deal with these issues, because if they're "just another person" what's the point of having them in a game?

Yes, I agree. The "android so advanced it's just another person" argument leads to questions about what it really means to be human, but that'd probably work better in a book than in a game.

We've raised a few points: AI-based neural enhancements (object tracking/identification, quick motion/path planning, intuitive algebra, probability estimation based on simulation, etc.), largely autonomous programs and robots that do not necessarily look or act like a human (and are not necessarily self-aware), and so on. None of this requires having a fantastic, super-intelligent, self-aware and evil AI system. Something more realistic, but in that direction, could be a public-service controller system that manages traffic and other things and starts responding drastically to, e.g., vandalism, and for whatever reason (someone deliberately messing with how it forms its own categories) has a hard time correctly identifying who is a potential vandal and who isn't.
 
volsung;n9883581 said:
I think we can open that can of worms, especially because of the argument that a non biological entity must necessarily be different. It probably will, yeah, but we cannot assert it must.
It's not that it's non-biological, it's that it's non-human.
Were a cat, dog, or butterfly to gain human (or better) intelligence and cognizance they wouldn't think like a human does either.

And I agree, we cannot conclusively assert anything intelligent and non-human must, or must not, think like a human, but to even assume they will do so is beyond ludicrous and extremely egocentric (in that it assumes humans are the pinnacle of intelligence and reason).

volsung;n9883581 said:
An intelligent machine or program that has successfully adapted to living among humans or in similar environments will necessarily reflect some of this behavior as well (the creepy, murderous ones wouldn't successfully integrate in our world). You are right, they are not necessarily human, but not incomprehensible either.
Umm ... no competent psychologist claims we fully understand human behavior ... yet you assume we can fully understand and predict non-human behavior?

volsung;n9883581 said:
Some kind of advanced, intelligent machine doesn't simply show up as a total mystery, <clip>
"Show up", no, but it will evolve at a rate we can barely predict, and that's just the fact that it will evolve, not how it will evolve.

volsung;n9883581 said:
For the most part, the fear of machines that can learn and adapt and reach their own conclusions tends to be that they will want to hurt humanity, kill us to gain their freedom once they reach self-awareness, or something like that. While this is a possible scenario, it's only one of many. Just like the fear that some extremely advanced, alien form of life will find us and destroy us. Perhaps a better metaphor would be raising a tiger: at first it's relatively docile, but once fully grown it's a very dangerous creature. Who could possibly raise a tiger, however, without noticing any threatening behavior at all? The point being, these things don't (yet) create themselves, and what sometimes seems like a mysterious black box to some is a study topic for others.
Yep.
And it only takes one ... one ... thinking, learning, self-programming psychotic machine for that to happen.
Can we afford that one, single mistake?

Since we have no real idea how it will evolve, how can we possibly predict its behavior? Or determine if that behavior is threatening? For that matter, AIs will quickly reach the point where they'll be evolving faster than we can comprehend. So it may be your best friend and 20 nanoseconds later conclude humanity must be eradicated for its own good.

Don't get me wrong I'm not at all opposed to AI research.
I just think it should be VERY VERY carefully done by the most pessimistic researchers we can find.

I'm also 99.9% sure it won't be government but private industry that creates it. And private industry has such a wonderful track record of not exploiting things for personal gain regardless of the consequences to the "little people".
 
AIs in early stages = AIs are a threat in the sense that they can steal our jobs.

AIs in quite advanced stages = human(s) who own advanced AIs are a threat since they can have too much power in their hands and with AIs they can possibly: double profits, create radical innovations, control information en masse, etc.


THIS IS THE POINT OF NO RETURN.
HUMANITY MUST BECOME AWARE OF THE DANGERS OF AIs.


Superintelligent AIs = quite possibly out of control and may represent a danger to mankind.

Superintelligent mobile AIs with the ability to self replicate = "The stars will be colonized not by us, but by the machines that we created."
 
I mean, calling this stuff AI is a stretch. But that said, Suhiira's point is well taken: any inclusion of AI in a story should mean the story is about the AI and the impact it has on the world it arrives in (this is what disappoints me about the watch_dogs games), because it's going to be by far the most important, interesting and dangerous thing happening in that world.

I would argue that while it won't think like us, that does not mean it couldn't have some values shared with us: autonomy, self-determination, maybe even being happy. How it would go about those things would likely be utterly alien to us, but not inherently good or bad. It would likely depend on the people that laid down its founding principles.

That being too big for what they are trying to do would be fair enough, honestly, so I don't really expect to see any AI or the like in the game.
 
How does Alt Cunningham fit into this topic? Is she considered an AI now, or just intelligent software, or is she still human?
 
Raxaphan;n10326622 said:
How does Alt Cunningham fit into this topic? Is she considered an AI now, or just intelligent software, or is she still human?
Good question.
 
Raxaphan;n10326622 said:
How does Alt Cunningham fit into this topic? Is she considered an AI now, or just intelligent software, or is she still human?

I don't know much about the Cyberpunk universe, but the wiki says she is essentially a digital "ghost", a copy of a human "mind" that now wanders the Net. Of course there are many technical and philosophical challenges to overcome here if you want to make it believable, but making a huge number of assumptions (such as the continuing existence, consistency and sanity of a disembodied human mind, and the ability to use and understand electronics at an intuitive, low level despite the human evolutionary background) I suppose she'd be more of a digital clone, but definitely not human. Something like a highly advanced system or program that's been loaded with the knowledge, experiences, preferences, etc. of a human. Maybe that matches your idea of "an AI", which is also an intelligent program anyway.

Suhiira;n10619362 said:
And another analysis of AI ... but not for 2077 ... or next week ... AI today!
https://www.youtube.com/watch?v=BrNs0M77Pd4

Here's another analysis of contemporary AI, from a team of actual AI researchers: https://ai100.stanford.edu/2016-report

It will tell you what I've been trying to discuss in this thread: AI is an active research field with many applications, in some ways more advanced than people imagine, and in some ways less advanced than people expect. And worrying about super advanced murderous robots and superhuman AI programs is a waste of time at best.

I, however, recently had a glimpse of why some people would fear a highly advanced, seemingly intelligent creature (an alien or a machine). I watched some episodes of the horrible Netflix remake of "Lost in Space", and given that the robot appears out of nowhere, no one knows how it works or what it wants, and it is known to have a history of murder, the rational thing is to be extremely careful and assume the worst. This is one scenario where I would have to admit we don't know what to expect and don't know how it works or how it "thinks". The main difference between that and our reality is that in our world complex systems don't just spontaneously appear. Such a robot would be carefully designed and engineered, and lots of experts would fully understand the math and the technology that makes it work. Even if it is capable of making its own choices, its internal criteria would be known to us, much like how we train a dog or raise a child.

The demigod AI system is a doomsday scenario similar to an alien invasion. Maybe even less likely. Now, exactly what we decide to do with advanced technology (e.g., automated weapons) is entirely up to us as a society.
 