Artificial Intelligence vs. Intelligent Machines

Suhiira;n9228051 said:
Do you for one moment think any of them view the world the same way a human does?

OK, so I assumed that their programming would make them sometimes act like humans, and you're telling me they would never act like humans?

>my example states that sometimes they'll act like us and sometimes not
>your example is that because of their programming they never act like us

Which of these two is more likely?

Suhiira;n9228051 said:
But unfortunately we're not even close to understanding the human mind. OK, maybe some miracle breakthrough will happen, but currently there's no indication that's probable. Now if you want to posit AIs in 2077 based on a miracle breakthrough, fine ... science fantasy.

I was not arguing that it will happen or that it should be in 2077. I was just saying that there's perhaps no difference between an SI, a truly self-aware AI, and us humans, and that when considering only the perspective of the viewer there's even less difference, since they all act almost the same under certain conditions and can thus engage the viewer's emotions in almost the same way.

Suhiira;n9228051 said:
Undoubtedly, but all this assumes an AI will react to things the same way a human will, and that's highly unlikely.

Hmmm, excuse me? Many of those are exactly the opposite of human behavior (especially the David example in Prometheus). Your question was about the importance of AIs beyond the wow factor, and my answer was that AIs are unique and act differently from humans, thus making them a new element in the whole equation.
 
Lisbeth_Salander;n9229641 said:
OK, so I assumed that their programming would make them sometimes act like humans, and you're telling me they would never act like humans?
<clip>
I was not arguing that it will happen or that it should be in 2077. I was just saying that there's perhaps no difference between an SI, a truly self-aware AI, and us humans, and that when considering only the perspective of the viewer there's even less difference, since they all act almost the same under certain conditions and can thus engage the viewer's emotions in almost the same way.
Never? Probably not (never say never :cool: ); rarely, probably.
Unlike humans, an AI doesn't have to find work, transportation, housing, food, or medical insurance, nor does it have a spouse, kids, etc., attend school, or play soccer with its friends; it doesn't sleep, so it can't dream. Why would an AI act like a human when its world view doesn't include most of the major factors that influence human behavior? An AI will not, cannot, act like a human. OK, yes, it can probably simulate (i.e. pretend) doing so, but that's hardly the same as actually being influenced by these factors.

Lisbeth_Salander;n9229641 said:
Hmmm, excuse me? Many of those are exactly the opposite of human behavior (especially the David example in Prometheus). Your question was about the importance of AIs beyond the wow factor, and my answer was that AIs are unique and act differently from humans, thus making them a new element in the whole equation.
I've never said an AI wasn't important, in fact quite the opposite. A true AI would have profound and far-reaching effects. At its most basic, my entire point in this entire discussion is that unless those effects are included in a game (as best they can be), an AI isn't going to be "realistic". It's merely another human intelligence with human perspectives and motivations ... gee, "wow". A true AI will be many things, but human isn't one of them.
 
Suhiira;n9230941 said:
Unlike humans, an AI doesn't have to find work, transportation, housing, food, or medical insurance, nor does it have a spouse, kids, etc., attend school, or play soccer with its friends; it doesn't sleep, so it can't dream. Why would an AI act like a human when its world view doesn't include most of the major factors that influence human behavior? An AI will not, cannot, act like a human. OK, yes, it can probably simulate (i.e. pretend) doing so, but that's hardly the same as actually being influenced by these factors.

The Chinese philosopher Master Zhuang once questioned the reality of being a man with this quote:

"Once upon a time, I, Chuang Chou, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was Chou. Soon I awaked, and there I was, veritably myself again. Now I do not knowwhether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man."

The basis of being someone is having the memory of being that person; through memory one remembers his personality and ideals, and without it he is no longer himself. One is who he thinks he is. It is, in my view, an overstatement to say that an AI can't have human views of the world because it can't live like a human; you did not realize that it's not necessary to have lived like a human in order to have human views and perspectives.

An AI might simply remember having lived like a human through implanted memories and, having no awareness of its artificial nature, view the world as a human does.

Does this remind you of a certain character from a certain cyberpunk movie?

Suhiira;n9230941 said:
true AI would have profound and far-reaching effects. At its most basic, my entire point in this entire discussion is that unless those effects are included in a game (as best they can be), an AI isn't going to be "realistic".

"True AI", you say as if there is an absolute truth that states the human state of being and intelligence is the only and truthfull one. To prove the state of subjectiveness of one being is an ambitious goal. The mirror test in animals is something, but the same doens't work with robots. In terms of Artificial Intelligence, self programmation is the closest task one AI could get to achieve self awareness. Taytweets is an example of adaptive AI, not so bad example depending of the perspective.

@Sardukhar

I hope you don't stay up late just to moderate my posts. I honestly don't want to make your job harder, buddy.
 
Lisbeth_Salander;n9231641 said:
The basis of being someone is having the memory of being that person; through memory one remembers his personality and ideals, and without it he is no longer himself. One is who he thinks he is. It is, in my view, an overstatement to say that an AI can't have human views of the world because it can't live like a human; you did not realize that it's not necessary to have lived like a human in order to have human views and perspectives.
While the essence of your point is sound, in application it has some rather gaping holes.

I am Napoleon.
I'm X gender even tho I'm biologically Y gender.
You disagree with me, therefore you're attacking me.
X religion is 100% correct and all others are 100% wrong.
I remember being a 15th century pirate in my previous life.

Sorry, but belief, desire, or delusion don't make fact.

What makes a person who they are? Damned if I know.
And any competent psychologist will give you the exact same answer: it's a not-yet-understood combination of genetic and life-experience factors.

So, no, I totally reject the concept that an AI is "human" because it happens to think it is.

 
Suhiira;n9232081 said:
While the essence of your point is sound, in application it has some rather gaping holes.

I am Napoleon.
I'm X gender even tho I'm biologically Y gender.
You disagree with me, therefore you're attacking me.
X religion is 100% correct and all others are 100% wrong.
I remember being a 15th century pirate in my previous life.

Sorry, but belief, desire, or delusion don't make fact.

What makes a person who they are? Damned if I know.
And any competent psychologist will give you the exact same answer: it's a not-yet-understood combination of genetic and life-experience factors.

So, no, I totally reject the concept that an AI is "human" because it happens to think it is.


>my argument was not to prove AIs can be human
>my argument was about how AIs, through memory implants, can have human views and perspectives
>you somehow imply I was again defending that AIs can be human

I'll be more explicit this time: memories play a huge role in the personality of an individual, and someone's subjective past has no intrinsic value, especially to the individual himself. So perhaps the need for said individual to have lived a past similar to a human's in order to have human views and perspectives, as you stated, makes no impactful difference in behavior when compared to an AI with a made-up past, under the circumstance that said AI has no awareness of its own artificial state of being. Thus, an AI like Rachael from Blade Runner perhaps has views of the world no different from a human's, or at least that would make sense in a science fiction story, while still not being fantasy.

Having human-like views and perspectives by no means implies being or not being human. Science in our reality has yet to prove one's subjectivity; braindance could very well answer that question. I was aware of using speculation in my post, but I used it as speculation and as an inspirational factor, not as argumentative fact.

But aren't many aspects of CP2020 also based on and inspired by scientific speculation, while still being grounded in scientific fact? Cyberpunk 2020 has scientific innovations based on already existing ones, but as such they still don't exist, so one could consider them speculative scientific innovations based on already existing scientific innovations. You see it as black and white, but do you see it this way just because you feel like it?
You're a debater, but you're not an investigative scientist, as is partially evidenced by your lack of fascination with the unknown ("What makes a person who they are? Damned if I know."). Your certainty is no different from that of a religious man who believes 100% in his truth; while you are probably closer to the truth, both of you have convictions that keep you from discovering the unknown. One believes in God, therefore he has all the answers and looks for no more; the other (you) has all the answers (values evidence, and evidence only), therefore you don't look for more answers. Skepticism is necessary to distinguish science from religion, but valuing both evidence and plausible speculations and theories (with the objective of proving them) is necessary for the advancement of science and for the creation of entertaining stories. Discovery is fundamental to science; being content is not.
In their extremes and without flexibility, the religious, the atheist, and the agnostic alike can be harmful to scientific innovation.

Again, in the work "Artificial Intelligence: A Modern Approach" by Stuart J. Russell and Peter Norvig, perhaps the most popular in the field, the importance of philosophy to the foundations of artificial intelligence is made explicit. Ignoring philosophies just because some are based on subjectivity is not wise of you, Suhiira.

In the first chapter, "Introduction":

"1.2 The Foundations of Artificial Intelligence - Philosophy (428 B.C.-present)" It even states Socrates's questioning of human behavior, then the authors correlationate it as a fundamental basis for algorithm.

If Mike Pondsmith, through speculative, rational logic, was able to arrive at certain speculative technologies based on already known science for CP2020, and that was enough for CP2020 to have technologies we don't exactly have in our reality, then perhaps it could happen again with AIs and many other types of technology, on the condition that said speculative technologies fit thematically in the game.
 
Lisbeth_Salander;n9231641 said:
One is who he thinks he is.

Not quite. One also needs muscle memory, or else one is in trouble the minute he tries to re-enact or explain in detail a talent or event he remembers doing but in reality has never even tried (though the listener/watcher might have). Discovering that you cannot fly after all once you've jumped off the cliff is a lesson learned a bit too late.
 
kofeiiniturpa;n9235821 said:
Not quite. One also needs muscle memory, or else one is in trouble the minute he tries to re-enact or explain in detail a talent or event he remembers doing but in reality has never even tried (though the listener/watcher might have). Discovering that you cannot fly after all once you've jumped off the cliff is a lesson learned a bit too late.

I didn't mean it physically or literally. What I meant was, "one is who he thinks he is, to himself and perhaps to others"; it's symbolic, and that's what Suhiira didn't get. An AI that thinks itself human appears more human-like: since it is not aware that it is indeed an AI, it would be harder for investigators to identify it as one, and it would be tougher, basically impossible, for the AI to reveal itself under pressure, considering it didn't know it wasn't human, therefore making it more likely to pass the Turing test and the Voight-Kampff test. Or at least that's the logic in the movie Blade Runner, and a great logic indeed:

It was harder for Deckard to realize that Rachael was not human. "But if Deckard could figure it out, then anyone can figure it out..." Not if you consider that Deckard is a replicant, which makes it easier for him to detect those of his own kind. And even then Rachael didn't reveal herself; it was Deckard who assumed she wasn't what she appeared to be, reinforcing my argument that it makes them tougher to differentiate. And he made that assumption without using the test, implying the test was perhaps useless.


The illusion makes it more convincing and consequently harder to detect; that's my point. The AI is not human because it thinks it's human, but it appears more human once it thinks it's human. Hence the quote "One is who he thinks he is", not to affirm but to question: since you can't tell the difference between an android and a human, and the android can't tell the difference either, is there a difference between being and pretending to be? The movie makes us ask this very same question.

Someone could say, "why not just program the AI to know it's a machine but also make it act like a human?" Because if it knew, someone could find a way to break it by asking the right questions, just as is implied in the movie.

Not to mention, that is roughly what Russell said, though not in the same words: "to create a human-like AI, one must first understand the human mind." By not knowing it was an AI, it would have fewer reasons to think like an AI and more motives to think like a human, making it easier to pass the Turing test and the Voight-Kampff test. Or at least that's the logic used in the book and movie; there's plenty of speculation here, which doesn't make it 100% true, but it has enough logical grounding to appear in a science fiction movie or in a game, especially since self-awareness is only stated to exist in Blade Runner with a question mark.

Blade Runner is not considered "the cyberpunk movie" just because it has cool visuals; it has a deep philosophy behind it. No wonder Marcin Iwinski bought a Blu-ray copy of it.
 
Lisbeth_Salander;n9235971 said:
Hence the quote "One is who he thinks he is", not to affirm but to question: since you can't tell the difference between an android and a human, and the android can't tell the difference either, is there a difference between being and pretending to be?

There is a difference to those who know (and care?); and of course it makes a difference physically: no matter how convincing the act, it is still a fraud. Beyond that, it is a pointless question, since the ignorant - the subject itself included - neither know nor care (at least not before finding out); and doubly so if they aren't even supposed to find out, since then there's no advantage or purpose to any of it for them.
 
kofeiiniturpa;n9236501 said:
There is a difference to those who know (and care?); and of course it makes a difference physically: no matter how convincing the act, it is still a fraud. Beyond that, it is a pointless question, since the ignorant - the subject itself included - neither know nor care (at least not before finding out); and doubly so if they aren't even supposed to find out, since then there's no advantage or purpose to any of it for them.

Certainly, there are some who even say there should be ethical limits on how human-like a robot is allowed to look, in order not to... well... not to create a cyberpunk dystopia. Of course, we already have bots online that can pass as humans. A dating site was hacked not so long ago, revealing that most of its female users were chatbots. Most men didn't even realize, I bet.

One could say that it's human nature not to question whether we should do something before doing it. The character Loghaire from the game Arcanum said as much: because men have very short lives compared to the other races (elves, dwarves, etc.) and live with the constant fear of death, we do not live to see the consequences of our actions and constantly make choices that pay off in the short term but not in the long run; therefore we don't ask ourselves whether we should create something before doing it.

Will someone ask themselves the "should" before making a superintelligent AI, perhaps a self-aware one? Some thoughts, even if speculation, appear to have rather obvious answers.
 
Hrm. Interesting:

https://www.forbes.com/sites/tonybra.../#2eb79913292c

Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand.

Update: http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922

Not for nefarious reasons, "FAIR researcher Mike Lewis told FastCo they had simply decided “our interest was having bots who could talk to people,” not efficiently to each other, and thus opted to require them to write to each other legibly."

My favourite bit? "In this case, the only thing the chatbots were capable of doing was coming up with a more efficient way to trade each others’ balls."

MADNESS.
 
The experiment in which robots were given 264-bit binary genomes, evolved to find a specific resource faster and more reliably, and eventually began lying to each other to gain access to said resource illustrates that if, as Suhiira alluded to, robots are given a "need", like housing and food, they will eventually display human characteristics when attempting to attain that "need".

http://www.popsci.com/scitech/articl...ces-each-other
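For anyone curious about the mechanics, here's a minimal sketch of the kind of genetic-algorithm loop such experiments run on. Only the 264-bit genome length comes from the article; the fitness function, population size, mutation rate, and selection scheme below are invented stand-ins, not the study's actual setup:

```python
import random

GENOME_BITS = 264      # genome length reported in the article
POP_SIZE = 50          # assumption for this sketch
MUTATION_RATE = 0.01   # assumption: per-bit flip probability

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    # Stand-in objective: count of 1-bits. The real robots were scored
    # on how quickly and reliably they located a resource in the arena.
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    # Selection: keep the fitter half, refill with mutated offspring.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(survivors),
                                  random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", max(fitness(g) for g in population))
```

Nothing in a loop like this "wants" anything; behaviors such as lying simply persist because genomes that produce them leave more copies of themselves.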

As an aside: in gaming, human-like behavior can be relatively easy to achieve through the use of bots, or advanced AI systems (expert systems), which can play the game to the point where, if someone were to drop into a match, it would be difficult to know whether you were playing against a bot or a human unless it was made obvious.

http://www.kbs.twi.tudelft.nl/docs/M...van/thesis.pdf
http://u.cs.biu.ac.il/~galk/Publicat...ebots-cacm.pdf

On the point of expert systems: you have to ask yourself what makes you human, or rather, what separates you from an AI, or in this case an expert system. Most of the things you do in life you learned from someone else, one way or another. In theory, an expert system programmed to be an expert in a multitude of fields, with enough processing power, could easily surpass the thinking capacity of a human being by achieving mastery of fields that would take a human multiple lifetimes to accumulate. An expert system in history, philosophy, and/or, to an arguably lesser extent, psychology could easily have a wider understanding of humanity than most humans do.
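For context, the classic expert systems referred to here are mostly forward-chaining rule engines: a knowledge base of if-then rules plus a loop that keeps firing rules until no new conclusions appear. A toy sketch, with rules and facts invented purely for illustration:

```python
# Toy forward-chaining inference engine, the core mechanism behind
# classic expert systems. Rules and facts are invented for illustration.
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:          # keep firing rules until nothing new is derived
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fur", "gives_milk", "eats_meat", "has_stripes"}))
# -> also derives "mammal", "carnivore", "tiger"
```

Real systems add thousands of rules, certainty factors, and an explanation trace, but the inference loop stays this simple.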
 
Lisbeth_Salander;n9235971 said:
I didn't mean it physically or literally. What I meant was, "one is who he thinks he is, to himself and perhaps to others"; it's symbolic, and that's what Suhiira didn't get.
You're correct, I didn't.
But so what??
You're saying that if by some miracle an AI acts with human values and morals it's a non-threat ... how do we ensure that happens? Isn't it far more probable it won't?

Take a look at these:
https://www.youtube.com/watch?v=8nt3edWLgIg
https://www.youtube.com/watch?v=MnT1xgZgkpk
 
Suhiira;n9238811 said:
You're correct, I didn't.
But so what??
You're saying that if by some miracle an AI acts with human values and morals it's a non-threat ... how do we ensure that happens? Isn't it far more probable it won't?

I disagree. We are building them; we get to choose what we are going to try to build. Which, to be fair, means that if corporations do it first we are all fucked.
 
As much as it might be nice to think the government has the best AI research teams, it's extremely unlikely. Some corp will do AI first.

And, having been a mainframe programmer who's worked on DEC, IBM, and Univac machines (at the assembler level), I can assure you programs almost never run correctly the first time. Assuming a self-programming machine, which an AI would have to be, all it takes is one simple error or oversight and the machine is beyond our control, because it can reprogram itself 10,000x as fast as we can plug holes.
 
Sardukhar;n9236811 said:

Heh, what a joke. A hundred years ago that "journalist" would've been writing about these new mechanical horses that move on their own and might kill us all one day. Some people call them "cars".

It's amazing how little AI is understood outside of academia, and how many "news" stories and arguments are based on fiction and FUD. If these reporters knew what goes on inside university labs, they'd shit their pants so hard their tinfoil hats would fall off.

I'll just remind you we are at the point where we can prove solutions exist for a huge variety of problems, including those related to learning appropriate behavior, planning and decision-making with unknown probabilities, or visual recognition of objects and their function in human society, to name a few. I know of linguistics groups that have programs running on simple robots to study how they can improvise languages/symbols/codes. We're working towards understanding and modeling many cognitive processes. The problem right now for intelligent robotics is mostly one of scalability: putting all of this together and making it run sufficiently fast on crappy robot hardware is hard. For particular problems, AI programs can undoubtedly beat humans and adapt their actions based on observed performance. So if someone responds well to politeness, that's the key to success and the robot should use it more often. Programs have also beaten humans in creativity tasks! All of this just takes too damn long right now, for many complicated reasons, but mostly because it's done in a very naive way: too many useless options are considered.
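A minimal sketch of that adapt-to-what-works idea, framed as an epsilon-greedy bandit. The strategies, reward probabilities, and exploration rate are all invented for illustration, not taken from any particular lab's setup:

```python
import random

# Candidate interaction strategies for a robot (invented for illustration).
strategies = ["polite", "neutral", "direct"]
value = {s: 0.0 for s in strategies}  # running estimate of each strategy's payoff
count = {s: 0 for s in strategies}
EPSILON = 0.1                         # exploration rate (assumption)

def observed_reward(strategy):
    # Stand-in for real feedback: this person happens to respond to politeness.
    success_rate = {"polite": 0.8, "neutral": 0.5, "direct": 0.3}[strategy]
    return 1.0 if random.random() < success_rate else 0.0

for interaction in range(1000):
    # Mostly exploit the best-known strategy, occasionally explore the others.
    if random.random() < EPSILON:
        s = random.choice(strategies)
    else:
        s = max(strategies, key=value.get)
    r = observed_reward(s)
    count[s] += 1
    value[s] += (r - value[s]) / count[s]  # incremental mean update

print(max(strategies, key=value.get))  # -> almost always "polite"
```

The robot ends up "using politeness more often" not because it understands politeness, but because that arm of the bandit pays out best.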

We tend to think of humans as mystical creatures and don't realize we can be quite similar, controllable, and predictable; magicians and illusionists make a living out of this. We often run on a very basic, impulse-driven system and only occasionally engage our actual reasoning or problem-solving skills. This does not mean we're simple or easy to understand, quite the opposite in fact. Acting in this complex haze helps us solve most issues quickly and painlessly, though sometimes with mistakes.

Whether machines can or will ever achieve self-awareness is a different story, but my position is that we cannot assume a creature lacks intentional mental states simply because its physiology is different from ours. There's nothing to suggest our minds are a unique causal effect of our particular central nervous systems.

I don't see the point in using fiction to defend a position, though. Fiction speculates, and is free to do so and inspire us.

At this point, however, we KNOW we will soon have service robots slowly integrating into human society. They will harvest our crops, drive us places, and help us find our slippers at home. They will also beat us at every game of logic, financial investments, risk assessment, etc. They're advanced, moving computers with arms and legs; our current vision of the future is not at all bleak.

In Victorian-era England, Butler wrote a novel about a place where machines driven by steam engines enslaved humanity by making people continuously service them with repairs and so on. Fear of technology is nothing new. What you find in most technology zines, blogs, and YT vids (and hear from some public personalities) about "AI" is the equivalent of using Goat Simulator to warn about the dangers of physics.
 
We humans define self-awareness as being aware of oneself, in other words as knowing and being aware of your own thoughts. But we don't understand human consciousness, as Suhiira stated; we don't yet know everything about the psychological and neurological aspects of the human mind and brain. So how arrogant of us to define ourselves as self-aware beings when we don't even know 100% about ourselves yet.

What if a "self-aware" AI defines self-awareness as both knowing one's own thoughts and knowing 100% of the thought processes and systems behind those thoughts (something we don't have, as stated above)? Wouldn't such an AI judge us as inferior beings the same way we now judge them? But more importantly, in this hypothesis, wouldn't said AI be more self-aware than us, since it would know more about itself than we do about ourselves? Dude, smart robots 'n shit lmao.

Hmm, guys? I think there's something different about Mike Pondsmith's wiki page.

M-Mike? What have you done...
 
"At core, unless you have the meaning behind the black leather and the neon, you lose what cyberpunk is. That’s the problem with getting Cyberpunk made as a videogame; people don’t get it. They think it’s about action heroes quipping as they take down corporations.”

This reinforces your idea of the importance of fitting thematically. The question goes: would he allow the point that I defended, or would he find a way to implement AI in a way that doesn't break the game's principles? Oh, who am I talking to :p your answer to this question will always be "no". Perhaps I should be more objective and ask why you think it doesn't fit thematically. Oh come on, don't tell me the answer is what I've already read from you, that AIs are completely speculative; if that is the answer, aren't many other things in CP2020 also speculative? More importantly, are there really zero reasons for having an AI in the game? Couldn't the creative director simply create a fitting story that includes AIs?

Oh, and "it just feels like it doesn't fit in the game" is perhaps an answer that goes together with "thematically", since one usually involves the other.

Suhiira;n9241661 said:
As much as it might be nice to think the government has the best AI research teams, it's extremely unlikely. Some corp will do AI first.

And, having been a mainframe programmer who's worked on DEC, IBM, and Univac machines (at the assembler level), I can assure you programs almost never run correctly the first time. Assuming a self-programming machine, which an AI would have to be, all it takes is one simple error or oversight and the machine is beyond our control, because it can reprogram itself 10,000x as fast as we can plug holes.

Perhaps, yeah. It may be something that will happen only in science fiction, or fantasy. The government is perhaps only better at mass surveillance. Considering it is difficult to program them, as you stated, do you think a great, true AI, as you mentioned, would be created by accident or intentionally? My spelling sucks when I write fast; English is not my mother language, comrade.

Suhiira;n9238811 said:
You're saying that if by some miracle an AI acts with human values and morals it's a non-threat ... how do we ensure that happens? Isn't it far more probable it won't?

I agree. It is completely plausible that for an AI to become self-aware it needs basic instincts, and the most basic one is probably self-preservation. Makes sense; why wouldn't they see us as a threat? Yeah.
 
It's not that I think AIs won't eventually be created, more that it's a lot less easy than many think.

It's not just a matter of stringing enough CPUs together; were that the case, networks would already be AIs. Nor is it a matter of creating programs that can search databases and find relevant data; if that were true, Google would already be an AI. It's more the ability to take seemingly unrelated information, turn it upside down and sideways, and find a way to apply it to a problem, i.e. that "little" thing called creativity. And we certainly have no clue how to create that; if we did, everyone could be a Mozart, an Einstein, a Hawking. We're currently incapable of anything close to this in humans, whom we understand far better than we do computers (and we understand very little about how human thought and creativity work); how are we going to make a computer capable of it?

Then there's the "minor" matter of morality/ethics. There's no reason to think an AI will view the world the same way a human does and every reason to think it won't. Will this result in "Terminator"? I hope not, but we'll never know till it happens. And there's no way to ensure it won't. This isn't the sort of thing we can afford to leave to wishful thinking.

Lastly, the invention of an AI will totally upend the economic, social, political, military, and what-have-you fabric of the world. Thinking it won't is criminally naive.

I'd LOVE to see a realistic portrayal of AIs in the world, but there's absolutely no way to predict exactly what will happen. If you add AIs to a realistic (vice fantasy) game, you have to consider what happens when 50-75%+ of the workforce becomes unemployed (and mostly unemployable) virtually overnight, or when some nation's military becomes an unstoppable juggernaut that never makes mistakes. These aren't things that might happen with the implementation of AIs; they WILL happen.
 