Artificial Intelligence vs. Intelligent Machines

Suhiira;n8410080 said:
Maybe they can kinda-sorta simulate something in an AI, but if it is in fact "intelligent" how long do you think simulated fake "feelings" it's incapable of actually experiencing are going to fool it

Get the Voight-Kampff test... Quick...
 
Corewolf;n8414780 said:
Get the Voight-Kampff test... Quick...

Blade Runner assumed replicants were indistinguishable from humans without resorting to an extremely sensitive and accurate "lie detector" (which is far from infallible; that's why such devices aren't generally admissible in courts of law in most nations).

I think you're referring to a Turing test, however?
 
Nope. The Turing test just checks whether a machine can be told apart from a human: an individual converses with two participants, one human and one machine, and judges whether he can tell which one is the machine.

Things have already passed the Turing test... or at least a version of it. There are inconsistencies.

http://www.bbc.com/news/technology-27762088



Voight-Kampff tests for physiological response against a human baseline; specifically, "It measures bodily functions such as respiration, heart rate, blushing and eye movement in response to emotionally provocative questions". Hence, if we are talking about being able to "fool" something (another person or the machine itself) via simulated fake "feelings", Voight-Kampff would actually be more accurate at distinguishing fake emotional responses and feelings from real ones.
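For illustration, the baseline idea can be sketched in a few lines of code. This is purely hypothetical: the film never specifies how the machine scores its readings, so every name, number and threshold below is invented.

```python
# Hypothetical sketch of a Voight-Kampff-style check: compare a subject's
# physiological readings (taken during provocative questions) against a
# resting human baseline. All names and numbers are invented.

HUMAN_BASELINE = {"respiration": 16.0, "heart_rate": 72.0, "pupil_dilation": 4.0}

def emotional_response_score(readings):
    """Total relative deviation of the readings from the resting baseline."""
    return sum(
        abs(readings[key] - HUMAN_BASELINE[key]) / HUMAN_BASELINE[key]
        for key in HUMAN_BASELINE
    )

def looks_human(readings, threshold=0.15):
    # A flat, unmoved response to an emotionally provocative question is
    # the giveaway: humans blush, breathe faster and show pupil dilation.
    return emotional_response_score(readings) >= threshold

human = {"respiration": 22.0, "heart_rate": 95.0, "pupil_dilation": 6.5}
replicant = {"respiration": 16.1, "heart_rate": 72.5, "pupil_dilation": 4.0}

print(looks_human(human), looks_human(replicant))  # True False
```

The point of keying on involuntary responses is exactly why this kind of test would target fake "feelings" better than a conversation would.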
 
I just hope the artificial intelligence for humans and other species is smart enough to adapt to your tactics. Same with the intelligent machines/robots. I want the AI to take advantage of the environment, both inside buildings and outdoors, in Night City.
 
Part I.

Suhiira;n8414260 said:
That I do!
Because in spite of not being entirely sure how they work in a biological brain we have ample proof they do, in fact, work.
You, in turn, assume such things can be replicated electronically, with zero proof it's possible.
We're both guilty.

So there is proof that creatures with biological brains have cognitive functions, some of them (e.g. great apes) considerably advanced.

I never said "such things" can be replicated electronically. What I did say is we have accurate mathematical models for aspects of what we consider to be intelligent behavior. Often, as soon as we explain what one or another aspect of "intelligence" is, it stops being considered intelligent. That is, once a machine does it, it's no longer intelligence. The list keeps getting smaller and smaller. This is reason enough to ask questions such as: are minds a result of our biology? are they simply enabled by our biology? And so on. Science is about asking the right questions.

Suhiira;n8414260 said:
Actually I assume self-awareness and cognition are inherently biological functions.
While I have zero doubt a simulation of it can be created electronically as you say, "the issue of whether self-awareness is 'real' or 'simulated' will always come up", so I choose to err in the side of skepticism, like any good engineer (or scientist).

The assumption that minds are exclusively a product of biological brains is an oversimplification. It's like the man driving around the UK and assuming all British sheep must necessarily be black, based on biased sampling, despite genetics suggesting there might be white sheep. The truly skeptical (and data-driven) position is: so far, all the sheep I've seen are black.

What you didn't address, however, is the permanent issue of what happens when human cognition is subjected to the same criteria you use to disregard artificial cognition. Maybe your perception, emotions and mental states are also "simulated", but the point is they are real enough for you.

The reason I often reference work in philosophy of mind (which I recommend to anyone interested in AI) is because these "things" are not as easy to dismiss as one would like. The position that certain aspects of cognition are inherent to humans or to particular types of brains closes a lot of doors and is borderline anti-scientific. I am not talking about anything supernatural here, simply the fact that this position suggests an answer without any analysis (almost like a religious position).

Suhiira;n8414260 said:
AI is a theory, and like any scientific theory, it's up to those that propose it to prove it's correct not the rest of the world to prove it isn't.

Which is why I offered some insight into the current state of AI :)

There's a lot of confusion about what a scientific theory is. It is not the beginning of research; it's the result of years of work and research. A theory is the comprehensive body of knowledge, evidence, inferences and data that accurately describes one or more processes. When you have an informed guess, that's usually called a conjecture. When you have a very informed question or statement backed by some amount of evidence but needing further proof, that's called a hypothesis. Colloquially, however, people often use the term theory when they mean a conjecture or something even weaker.

What we have discussed about AI is not something that has to be "believed". I only stated well known (in the scientific community) facts, such as the existence of correct mathematical models that are strongly correlated with aspects of human intelligent behavior, and referenced current discussions about perception and mental states that help us realize it's not easy to simply say what a "real mind" is and what isn't, especially if we judge based only on our personal ideas. This should lead to a proper discussion about the possibilities of AI.

And all of this was in response to mainstream misconceptions such as "AI programs must be able to reprogram themselves", that advanced AI can consist of pure logic and that "artificial is inherently different from biological because reasons". I simply hope this will contribute to a better understanding of these topics, which are very hard even for academics.

And because now we're going in circles, we can either stop or move on.

Part II.

Back on topic with CP2077, I would vote for humanoid robots endowed with sufficiently advanced AI to be called "Androids", but I can envision several other forms of AI-related entities:

- Autonomous vehicles.
- Personal assistants (either a small robot or some handheld/wrist device).
- An entire building/house run by an AI administrator, controlling all sorts of moving things and many sensors, and interacting with guests. (simply admin?)
- And the general, more advanced version of the previous item, any sufficiently advanced program with AI that receives external input (humans, the environment, robots, etc.) and can affect its surroundings (any peripherals), with or without mobility. This would be the cryptic type of entity that is obscure, seemingly intelligent and makes people uneasy.

None of the above, except perhaps the last one, needs "general purpose intelligence" and "self-awareness" to be interesting. The point is to borrow interesting SCIENCE fiction concepts, hide away some detail and avoid technobabble.
 
Vehicles and personal assistants plugged into the net (via 2077 WiFi, probably, since both devices are mobile) to a semi-AI (again, I'm unwilling to assume "full" AIs) mainframe located someplace, I can definitely see. Something along the lines of Star Trek's computer (ignoring a couple holodeck episodes), where it can answer virtually any question and perform any routine action (drive from A to B). Same with building AIs monitoring security, copier ink supply, etc.

As you say, a more general advanced AI could, should, certainly exist; there's no reason to think research into AI will ever stop. And yes, people will be uneasy about it.

However, I'd MUCH prefer these not be a significant issue/focus in CP2077.
CP2077 is about your humanity, not a machine's.
 
Sardukhar Suhiira

Having everything scientifically explained is great, but as a rule it restricts the creative direction a movie or a game can take: the more scientifically accurate, the more limited it gets. That doesn't mean we have to make magic without any science involved; what Sard "Chaotic Evil" Ukhar said makes sense. But here lie the two conventional alternatives:

1. The DC Comics way: Superman flies without any explanation for it happening whatsoever.
2. The Marvel approach: Stan Lee made sure most of the characters have a solid scientific explanation for their superpowers.

3. But there's a third alternative, where the creative director finds plausible possible future technologies and merges the creativity with the scientific explanation (evidence and all that stuff). In other words, this creates something that may happen in the future and makes its own science BASED on our already known science. Take self-awareness, for instance, just like Ex Machina and Westworld did (this is spoiler free, I swear on me mum).

>humanity at this very moment has no idea how to make a robot self-aware (because we have no empirical evidence on the subject). Therefore we only have speculation (but very good speculation).
>we don't know if it's possible or not for it to exist in robots
>we do know that it is plausible for it, or at least something very very close to it, to exist: a non-self-aware being that is extremely smart and well programmed

What this show and movie did was create plausible explanations for self-awareness to exist. Logic is logic, but we can create a fact based on logic yet not 100% explained by it.

Now someone could ask: don't these two explanations, from Westworld and from Ex Machina, contradict each other in explaining the mechanism behind self-awareness? Here's the beautiful part. Since both are based on logic and on science we already know, the answer is [MINOR ALMOST INSIGNIFICANT SPOILER ALERT]:
No, on the contrary. They complement each other.

4. Now here comes the fourth alternative: making a 100% scientifically explained story based on this quote: ">we do know that it is plausible for it to exist or at least something very very close to it, a non self aware being but extremely smart and well programmed". We could very well not define whether the robot is or isn't self-aware, since we don't know if that is possible, BUT we know as a matter of fact that it is possible to create a super smart Artificial Intelligence. This is what the Alien movies did: it is stated that robots are not self-aware, and those who think otherwise are just foolish individuals. Or at least that's what the Weyland company says, which doesn't mean it's true, since the movie itself raises that question with skepticism. In the Alien movies robots are extremely intelligent, but the real question of whether they are truly self-aware is never answered. A very wise choice: leave the unknown scientific factors in complete or partial mystery, work only within what science and logic permit, and let the audience make up their own conclusions. (In the Alien franchise, robots seem not to be self-aware and are restricted to their own programming, but they are VERY CLOSE to being self-aware, or at least look a lot like they are. See? Since it is never stated, it makes us wonder whether they are or not.)

I say we go with the 4th alternative, AKA the Alien movie (1979)/Prometheus approach, and leave the audience to make up their own conclusions regarding the unknown, thus following logic and empirical scientific evidence while still not being restricted by science.

The formula is simple for the 4th alternative: given that both X and Y are not scientifically proven, and that X is more likely to exist than Y, the story should be based on X and, if possible, make the nature of Y ambiguous and/or unknown, or even give strong hints that Y may be true without ever stating it as a fact.

I take back what I said Suhiira, I'm starting to get convinced you're a Lawful Neutral individual.

The plausibility scale of these alternatives should be (from more likely to less likely): 4>3>2>1


Calistarius;n9220221 said:
For the most part my soldiers only end up being a part of the masses that I throw towards the meatgrinder in the hopes of that they destroy it, win, and survive the ordeal of it all so I don't have to replace them with a rookie that I have to train all over from scratch again... XD

And here I thought you were the good guy :p Hey, I never looked at it that way; the fast pacing makes people who love perfection stop caring about it. Great game. This is the song that should play when you play XCOM 2, captain:

https://www.youtube.com/watch?v=-DSVDcw6iW8
Real human bean and a real hero
 
Don't get me wrong, I'm perfectly willing to suspend disbelief in some areas ... faster-than-light travel for instance ... for the sake of a plot/game. But things with big HUGE glaring gaping holes in their logic/implementation knock me right out of it. I love Tolkien as much as anyone else; he explains where his wizards, elves, and (to an extent) hobbits come from, why they have the abilities they have, why they act as they do. Galaxy Quest even made sufficient scientific/logical sense to be thoroughly enjoyable. But when you take a game based on logic and scientific probability (i.e. CP2020) and add things like man-portable lasers with virtually unlimited power supplies, giant Macross/Battletech robots, AIs, or any of a number of other things, you've crossed the line between "fiction" and "fantasy".

We have a perfectly good science fantasy game already, Shadowrun, let's make CP2077 different by keeping it science fiction.
 
Suhiira;n9221821 said:
and add things like man portable lasers with virtually unlimited power supplies, giant Macross/Battletech robots, AIs, or any of a number of other things you've crossed the line between "fiction" and "fantasy".

>Artificial Intelligences are considered fantasy
You do realise they exist right?

In my honest opinion, the fourth alternative and the term "suspend disbelief" don't really go together, since it is the most realistic of them all. Of all those examples you wrote, the only one present in our reality is the AI.

My argument is that programmed AIs exist in the real world, but self aware AIs don't exist in our world. Therefore CP2077 having programmed AIs and not having self aware ones does not go against science fiction.

>HUGE glaring gaping holes in their logic/implementation

Tell me: do we have implants exactly like CP2020's in our reality? No, but we have pretty similar/less advanced ones, and you know we're getting close. In that respect there's absolutely no difference between CP2020's implants and non-self-aware AIs: both are plausible in more advanced forms because both already exist.
 
Lisbeth_Salander;n9221881 said:
>Artificial Intelligences are considered fantasy
You do realise they exist right?
I'm afraid they don't.
There are some very interesting programs out there (e.g. Watson) that are GIANT leaps toward AI, but they're not even in the ballpark of being a "true" AI.


Lisbeth_Salander;n9221881 said:
In my honest opinion, the fourth alternative and the term "suspend disbelief" don't really go together, since it is the most realistic of them all. Of all those examples you wrote, the only one present in our reality is the AI.

My argument is that programmed AIs exist in the real world, but self aware AIs don't exist in our world. Therefore CP2077 having programmed AIs and not having self aware ones does not go against science fiction.
There's a huge difference between what we have now in the real world and, say, a cockroach in terms of "intelligence".
And the very term "AI" changes its definition:

"As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology."

As we create more and more capable programming and other techniques, merely being able to see and react to objects is no longer defined as intelligence, nor is hearing, tasting, touching, etc. Intelligence is (sorta kinda ... this is HIGHLY debated) what can be done with the information collected, how it's used. Yep, we have machines that can collect data, parse databases at lightning speeds, establish new links between diverse bits of data (information). And not a single one is capable of looking at a kitten or baby and "thinking" it's cute (or ugly). They can precisely define why humans perceive such things as "cute", but they in no way comprehend the concept.

Now, if you wish to define AI as mere data manipulation then yes you are 100% correct. Myself I prefer a definition that includes self awareness and the ability to react to the world around it based on a combination of physical sensations and cognitive comprehension and interpretation.

I.E. it must be able to decide humanity is a scourge that needs to be eliminated. Hopefully it finds some reason not to :D
But to be intelligent it needs to have the capability to make such a decision.


Lisbeth_Salander;n9221881 said:
Tell me: do we have implants exactly like CP2020's in our reality? No, but we have pretty similar/less advanced ones, and you know we're getting close. In that respect there's absolutely no difference between CP2020's implants and non-self-aware AIs: both are plausible in more advanced forms because both already exist.
Exactly like here in 2017, of course not. But the capability of (most of) the implants in CP2020 (and presumably CP2077) are certainly within the scope of probability given continued research and development.

Potentially AI is too, though I'm skeptical.
But it's not so much the existence of AIs in a game I have issues with as how people imagine they'll affect the world. They're not going to seamlessly blend in; hopefully they won't go Terminator; the changes will be profound and very VERY far-reaching, and we have no way to predict what exactly they'll be. You could just as well ask "What if the Nazis won WW II?" That there would be differences is easy to predict; what they would be, and how far-reaching, is pure speculation that totally defies any attempt at probable (vice possible) speculation.
Thus fantasy.
 
I'll now proceed to piss off Sard, possibly Maximum Mike, and several hundred thousand other people.

The world postulated in CP2020 can't possibly exist as written.

You cannot have 75-90% of the population living near the poverty line, who the hell is buying the hundreds of ground cars manufactured each day by the Megacorps?
No sales = no income ... duh.

If anyone wishes to DEBATE the topic I'm more than willing. But debate means reasons and facts, not feelings and desires.

 
Suhiira;n9222511 said:
Exactly like here in 2017, of course not. But the capability of (most of) the implants in CP2020 (and presumably CP2077) are certainly within the scope of probability given continued research and development.

>At the moment, we don't have body implants exactly like CP2020, we do have body implants though
>According to you it is plausible for it to happen in the future
>You didn't explain why it's completely plausible

>At the moment, we don't have AIs that can be like humans, we do have AIs that act like humans though
>According to you it is implausible for it to happen in the future
>You didn't explain why it's completely implausible

Hmmm.

Suhiira;n9222511 said:
Now, if you wish to define AI as mere data manipulation then yes you are 100% correct. Myself I prefer a definition that includes self awareness and the ability to react to the world around it based on a combination of physical sensations and cognitive comprehension and interpretation.

All the references in this post are from the third edition of "Artificial Intelligence: A Modern Approach" by Stuart J. Russell and Peter Norvig. This academic work is considered the most popular treatment of the definition, with more than 33,000 citations in the scientific community.

Definitions of artificial intelligence according to eight recent textbooks are shown in Figure 1.1. These definitions vary along two main dimensions. The ones on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. Also, the definitions on the left measure success in terms of human performance, whereas the ones on the right measure against an ideal concept of intelligence, which we will call rationality. A system is rational if it does the right thing. This gives us four possible goals to pursue in artificial intelligence, as seen in the caption of Figure 1.1. Historically, all four approaches have been followed. As one might expect, a tension exists between approaches centered around humans and approaches centered around rationality. A human-centered approach must be an empirical science, involving hypothesis and experimental confirmation.
"The exciting new effort to make computers think . . . machines with minds, in the full and literal sense" (Haugeland, 1985) "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..."(Bellman, 1978)"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985) "The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990) "The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1 99 1 )"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1 990) "The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)


It is true that acting both humanly and rationally is necessary to be a true Artificial Intelligence, but pay attention that it is ACTING and not necessarily BEING, as stated below:

Acting humanly: The Turing Test approach

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator. Roughly speaking, the test he proposed is that the computer should be interrogated by a human via a teletype, and passes the test if the interrogator cannot tell if there is a computer or a human at the other end. Chapter 26 discusses the details of the test, and whether or not a computer is really intelligent if it passes. For now, programming a computer to pass the test provides plenty to work on. The computer would need to possess the following capabilities:

1. natural language processing to enable it to communicate successfully in English (or some other human language);
2. knowledge representation to store information provided before or during the interrogation;
3. automated reasoning to use the stored information to answer questions and to draw new conclusions;
4. machine learning to adapt to new circumstances and to detect and extrapolate patterns.

It is possible to note that before acting like a human, one must first think, or at least give viewers the impression of thinking, similarly to humans. Acting humanly does not imply having self-awareness. All four capabilities, even the fourth one stating that adaptation is fundamental, may be achieved through programming: if we can program an AI to say certain words at determined times, we can then build advanced, indistinguishable speech patterns. At no point in my argument do I claim that self-awareness can be achieved by programming. But while a machine may not feel what it is like to find a dog across the street cute, it might fool us into thinking it finds that animal cute through pre-programmed facial expressions, voice tones, speech patterns, etc. To think that such advancements in the field are not possible is to disregard the already existing advancements.
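The "fooling via pre-programmed speech patterns" idea is not hypothetical: Weizenbaum's 1966 ELIZA program did exactly that with simple pattern matching. A minimal sketch of the approach (the rules below are invented and far cruder than ELIZA's actual script):

```python
import re

# Minimal ELIZA-style responder: no understanding, no self-awareness,
# just pattern matching and canned reflections of the user's own words.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the matched fragment back at the speaker.
            return template.format(*match.groups())
    return "Tell me more."  # default deflection keeps the act going

print(respond("I feel lonely tonight"))  # Why do you feel lonely tonight?
```

People famously attributed feelings to ELIZA even while knowing how trivially it worked, which is precisely the gap between appearing self-aware and being self-aware.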

>programming is possible
>self awareness is uncertain

>programming is required to act like a human
>self awareness is required to be like a human

Acting like a human was enough for the Alien franchise to have beings indistinguishable from us while still being science fiction, therefore making it possible for Cyberpunk 2077 to do the same.
 
Lisbeth_Salander;n9225891 said:
>At the moment, we don't have body implants exactly like CP2020, we do have body implants though
>According to you it is plausible for it to happen in the future
>You didn't explain why it's completely plausible
Easy, even a cursory search of the internet will bring up several references to some of the recent advances in prosthetics (some of which have been posted elsewhere in these very forums).
And damn, some of them are amazing, and are on a direct path toward some of the cyberware proposed in CP2020.

Lisbeth_Salander;n9225891 said:
>At the moment, we don't have AIs that can be like humans, we do have AIs that act like humans though
>According to you it is implausible for it to happen in the future
>You didn't explain why it's completely implausible
Also easy: no matter how much an AI may "act like" a self-aware being, until it becomes self-aware it's just that, an act. Let's call it Simulated Intelligence (SI, for lack of a better term).

Since we're so far unable to define what self-awareness is ... lots of theories out there, little agreement, and without agreement there is NO practical or scientific validity to any of them; you don't debate the laws of gravity or thermodynamics, they're indisputable.

So, even totally ignoring other factors, until what AI is has been defined, and that definition has been pretty much universally accepted, it's "implausible" bordering on impossible for it to exist.
As you've (inadvertently) pointed out, lacking a recognized definition anything can be said to be AI.

Lisbeth_Salander;n9225891 said:
All the references in this post are from Artificial Intelligence the third edition of "A Modern Approach" by Stuart J. Russell and Peter Norvig. This academic work is considered the most popular definition with more than 33.000 citations in the scientific community.

It is true that acting both humanly and rationally is necessary to be a true Artificial Intelligence, but pay attention that it is ACTING and not necessarily BEING, as stated below:
Popular it may well be, but since when has popular been a measure of validity?
As I pointed out above, the case is still very much open on "What is AI".

I've made some effort to clarify MY definition requires the ability to draw its own conclusions based on the information available to it, i.e. not SI (Simulated Intelligence ... see above). So BEING is a critical factor as far as I'm concerned.

Feel free to disagree (lots of people do), but are we discussing the definition of AI, or how its inclusion in a game (or reality) will have profound and far-reaching effects on culture and society, in order for its inclusion to be realistically portrayed? This is not as simple as adding flying cars; it's more like not adding firearms, the very fundamentals change.
 
Suhiira;n9226191 said:
Since we're so far unable to define what self-awareness is ... lots of theories out there, little agreement, and without agreement there is NO practical or scientific validity to any of them; you don't debate the laws of gravity or thermodynamics, they're indisputable.

You're implying I'm defending the existence of self awareness. I'm not.

Suhiira;n9226191 said:
So, even totally ignoring other factors, until what AI is has been defined, and that definition has been pretty much universally accepted, it's "implausible" bordering on impossible for it to exist. As you've (inadvertently) pointed out, lacking a recognized definition anything can be said to be AI.

You're implying I'm saying there's only one definition of AI. I did not say that. There are multiple ones.


Suhiira;n9226191 said:
Feel free to disagree (lots of people do), but are we discussing the definition of AI, or how its inclusion in a game (or reality) will have profound and far-reaching effects on culture and society, in order for its inclusion to be realistically portrayed?

You're defending the meaning and saying it justifies CP2077 not having human-like robots, while I'm arguing that the meaning of a true Artificial Intelligence has no importance for the implementation of AIs in the game, since an AI doesn't need to be self-aware to be in a science fiction game or movie, considering it may appear to be self-aware even though it isn't.

What you call the "wow" factor I call human emotions, and every good director knows how to manipulate the audience when necessary. If it doesn't need to be self-aware to feel self-aware, a creative director perhaps will not hesitate to bring this idea to life.

They changed 2077 quite a lot to appeal to a wider public; in interviews it was even said that the full retro look might keep younger audiences at bay, therefore they said they won't take that road.

They sacrificed some elements of cyberpunk 2020 for the demographic, as perhaps evidenced by the trailer, maybe we'll be seeing non self aware human like robots in 2077 too.

The conversation is getting heated; I hope this doesn't affect us or the forum. I don't take things personally; I interpret this discussion as civilized, non-emotional argumentation and, more importantly, friendly and respectful. One thing I never do is dislike people based on their different perspectives, and I value your perspective.

I don't want a Call of Duty clone, on the contrary, I believe the more complex CP2077 will be the better.
 
Lisbeth_Salander;n9226301 said:
You're implying I'm defending the existence of self awareness. I'm not.
My apologies, I apparently read something into what you said you didn't intend to imply.

Lisbeth_Salander;n9226301 said:
You're defending the meaning and saying it justifies CP2077 not having human-like robots, while I'm arguing that the meaning of a true Artificial Intelligence has no importance for the implementation of AIs in the game, since an AI doesn't need to be self-aware to be in a science fiction game or movie, considering it may appear to be self-aware even though it isn't.
Valid point, to an extent.
If true (self-aware) AI doesn't exist, then a Simulated Intelligence (SI) would still be fully under human control, and incapable of making decisions outside its pre-programmed parameters (broad or narrow as those may be). So the repercussions of widespread implementation are still governed by good old-fashioned human greed, corruption, and occasionally altruism; the results are predictable (well, as predictable as humans ever have been). My argument is that the results of a true AI are totally unpredictable; there's absolutely no reason to suppose one would act/react like a human, and every reason to suspect it wouldn't. Thus implementing true AI would be a vitally important factor. OK, you could simply give one human motivations and reactions, but if you do that it's essentially just another human, and its inclusion in a game is basically meaningless, i.e. a "wow" factor.

Lisbeth_Salander;n9226301 said:
What you call the "wow" factor I call human emotions, and every good director knows how to manipulate the audience when necessary. If it doesn't need to be self-aware to feel self-aware, a creative director perhaps will not hesitate to bring this idea to life.
I hope the above clarified what I mean when I refer to a "wow" factor?

Lisbeth_Salander;n9226301 said:
They changed 2077 quite a lot to appeal to a wider public; in interviews it was even said that a full retro look might keep younger audiences at bay, so they said they won't take that road.
They sacrificed some elements of Cyberpunk 2020 for the demographic, as perhaps evidenced by the trailer; maybe we'll be seeing non-self-aware human-like robots in 2077 too.
I'm not so sure the changes are mostly intended to appeal to a wider public; I think they're more the things necessary to convert a PnP game to a video game and to implement some newer concepts that didn't exist when CP2020 was created.

Lisbeth_Salander;n9226301 said:
The conversation is getting heated; I hope this doesn't affect us or the forum. I don't take things personally, and I interpret this discussion as civilized, unemotional argumentation and, more importantly, a friendly and respectful one. One thing I never do is dislike people based on their different perspectives, and I value your perspective.
For my part I'd much rather discuss a topic with someone I disagree with; I learn more. Assuming they actually discuss the topic and don't just go off on some "I'm right, you're wrong, and I don't have to justify my position" diatribe.
 
Man, I really wish I had more time to be here!

Two general responses to existing discussions:

Artificial Intelligence: I love this one. At its core, we call intelligence "artificial" if (a) we (humans) created it and/or (b) it is built using existing computer technology. But what is "AI" at its core? It's a series of switches firing in a particular sequence... either on or off... in a certain order... depending on the digital code that was written to interpret stimuli from the environment.

What is natural intelligence at its core? It's a series of neuron cells that send an electro-chemical signal from one neuron to another... either the signal is sent or not sent... on or off... in a certain order... depending on the DNA code that determines how we receive stimuli from the environment.

While the human brain is much more complex, works faster, and uses organic machinery instead of synthetic machinery to function, it's exactly the same process, just exponentially scaled up. If we networked together the same number of processors as there are potential neural pathways in the average human mind, we'd have a human brain. But just like every individual human, it would be the result of countless interconnections, idiosyncrasies, and flaws, producing a very unique way of interpreting the surrounding environment. It's not that humans have created "artificial" intelligence; we have already created a completely new form of intelligence. Just one that has not yet reached a level of functionality comparable to the human brain. Yet.
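The on/off switching described above can be sketched in a few lines of code. This is a toy illustration (not anything from the thread): each artificial "neuron" sums weighted inputs and fires only if a threshold is crossed, and wiring just three of them together already computes something a single switch cannot.

```python
# A minimal artificial neuron: weighted inputs are summed and the
# neuron "fires" (outputs 1) only if the sum crosses a threshold --
# the same on/off behavior described above, expressed as code.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Networking neurons yields behavior no single neuron can produce:
# this tiny two-layer network computes XOR.
def xor(a, b):
    h1 = neuron([a, b], [1, 1], 1)        # fires if a OR b
    h2 = neuron([a, b], [1, 1], 2)        # fires if a AND b
    return neuron([h1, h2], [1, -1], 1)   # fires if OR but not AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The point of the sketch is scale: a brain is (very loosely) this same summing-and-firing pattern repeated across billions of units, which is exactly the "exponentially scaled up" claim above.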


Realism vs. Fantasy: In terms of game design, it's the balance that's important. Neither 100% realism nor 100% free-for-all fantasy would appeal to many players. The most important aspect of any creative venture is that the final piece has a "sense of itself". It needs to know what it is... where it came from... and where it's going (even if, ideally, the audience has no idea).

I feel CDPR @#$%!ng nails this, especially with TW3. That doesn't mean everything worked flawlessly, but it means that nothing really yanked the player out of the experience. (Of course, arguments can always be made to the contrary, but I feel most of the gaming world would agree that TW3 was a predominantly positive, enjoyable experience from beginning to end. There are flaws, but I think most players would agree they don't "ruin" the experience.) Therefore, I imagine the devs will strike the same balance: very dynamic characters, an overwhelmingly immersive world, and gameplay mechanics that temper realism with just the right touch of spectacle.

As a comparison of the balance between what is scientifically viable and the fantastical elements needed to make the game "punch" the audience, I would say the film The 5th Element is almost a standard candle. I'm not saying the game should imitate the movie's tone or style; I'm saying the same sort of concessions need to be made to fully realize the vision of what Cyberpunk truly is. It's not a 100% scientifically viable take on the future; it's a stylized impression of the future used as a medium for telling a sweeping yet intimate narrative.
 
SigilFey;n9227191 said:
<clip> but I think most players would agree they don't "ruin" the experience.
And there you have it.
You can add anything you want as long as it fits into the overall theme/experience/what-have-you of the game.
We've all played games that are good as a whole ... but ... there's an element (or ten) that just doesn't "fit" and rips our sense of immersion/belief right out of the game.
And it may well be the exact same element, viewed by different people.

The whole FPS debate is an excellent example.
Why should I miss at point-blank range because my character's skills control combat?
Why should my character's skills be ignored in combat?
 
Suhiira;n9227061 said:
If true (self-aware) AI doesn't exist, then a Simulated Intelligence (SI) would still be fully under human control and incapable of making decisions outside its pre-programmed parameters (broad or narrow as those may be). Since the repercussions of widespread implementation are still governed by good old-fashioned human greed, corruption, and occasional altruism, the results are predictable (well, as predictable as humans have ever been). My argument is that the results of a true AI are totally unpredictable: there's absolutely no reason to suppose one would act or react like a human, and every reason to suspect it wouldn't. Thus implementing true AI would be a vitally important factor. Granted, you could simply give one human motivations and reactions, but then it's essentially just another human, and its inclusion in a game is basically meaningless, i.e. a "wow" factor.

Not having self awareness does not imply not being able to adapt or improvise. We already have adaptive systems in our reality.

The Turing test requires the following item, even though the test itself is not considered a test of self-awareness:

"4. machine learning to adapt to new circumstances and to detect and extrapolate patterns."

There are endless possibilities for an AI to have huge importance in a story:

- exploring its master's intentions to the extreme, under circumstances that differentiate it from any human;
- "breaking" the rules of its behavior, through the help of an external element, while still technically following those rules, exceeding anything a human could do;
- (my favorite) being programmed with a personal adaptive philosophy, giving viewers a different perspective on reality and the world around it: unusual instincts no human could have, in other words doing irrational things with ultimate precision (something that doesn't happen with humans, since chaotic behavior and precision don't walk side by side);
- having fast, unnatural adaptive reactions, e.g. a truly adaptive AI enemy that doesn't die at the first encounter but mimics the movements and tactics of the player;
- being programmed by the player, giving him/her the option to fully customize a personal robot (ahhh, my second favorite);
- being 100% programmable, both physically and mentally; etc.
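The "adaptive AI enemy that mimics the player's tactics" idea needs no self-awareness at all. As a hedged sketch (the game, class name, and moves are all invented for illustration), here is an opponent that simply counts the player's past moves and counters the most frequent one, i.e. it "detects and extrapolates patterns" entirely within fixed, pre-programmed rules:

```python
from collections import Counter

# What beats what, in rock-paper-scissors terms.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveOpponent:
    """Counters the player's historically most frequent move.

    No self-awareness involved: it just extrapolates a pattern
    from observed data, within predefined parameters.
    """

    def __init__(self):
        self.history = Counter()

    def observe(self, player_move):
        """Record one of the player's moves."""
        self.history[player_move] += 1

    def choose(self):
        if not self.history:
            return "rock"  # fixed opening before any data exists
        predicted = self.history.most_common(1)[0][0]
        return BEATS[predicted]  # play whatever beats the prediction

# A player who keeps throwing "scissors" soon gets countered.
ai = AdaptiveOpponent()
for _ in range(3):
    ai.observe("scissors")
print(ai.choose())  # rock
```

The same pattern-counting structure, scaled up to movement and combat telemetry, is how a game enemy could appear to "learn" the player's tactics while remaining a purely scripted system.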

An example of how an AI can be "free" even when it isn't is David from the movies Prometheus and Alien: Covenant:



"The trick, William Potter, is not minding that it hurts."

[SPOILERS AHEAD FROM BOTH PROMETHEUS AND ALIEN: COVENANT]

>David is an android created by a human, Peter Weyland (owner of the Weyland company)
>David goes on a mission with his owner and a group of scientists to another planet in order to find our creators (some tall aliens called Engineers; a bunch of jerks)
>It is heavily implied that David's most important programmed rule is to protect and serve Peter
>Everything David can do is restricted and programmed by Peter
>David is heavily inspired by movies and other forms of art
>David certainly has many other programming priorities (many other rules to follow besides serving Peter)
>On this alien planet it is basically confirmed that Xenomorphs exist or already existed
>Peter Weyland gets killed by an alien
>David is no longer restricted by his number 1 rule
>It is not stated what his number 2 rule is (assuming there is one, or several, or any at all; perhaps David has no more restrictive rules, or no rules at all, or, most plausibly, he has rules, just less restrictive ones)
>After his master's death David travels to another planet with the help of another human
>David creates Xenomorphs, probably because he studied them in the Engineers' database, thereby creating something outside his programming: he was not specifically programmed to create Xenomorphs, but was probably programmed to create something
>David creates something that already exists, but he pushed his restricted programming to the absolute extreme

He is an example of an android who has no self-awareness but is nonetheless free to think how and what he wants, in the sense that he knows the "hows" and the "whats", with only a basic program briefly defining the need to want something. But then again, are humans any different? Can you imagine a color that doesn't exist? At the same time he is indistinguishable from a human and still has adaptive behavior, which lets him extrapolate patterns; more importantly, he created Xenomorphs, the perfect organism, alone in a cave through genetic engineering, something no human could do.


There is a thin line between being human and pretending to be human; maybe there is no difference. Perhaps self-awareness is just the self-delusion that makes one believe he is indeed someone, when in fact he is but a thought. Russell stated that to make AIs one must first understand the human mind. The day the human race creates a self-aware Artificial Intelligence will be the day the abyss looks back, for it will truly be a reflection of ourselves.

 
Lisbeth_Salander;n9227861 said:
Not having self awareness does not imply not being able to adapt or improvise. We already have adaptive systems in our reality.
Adaptive within, and only within, predefined and preprogrammed parameters.
Were an AI self-aware, it could not (well, unless it was somehow "enslaved") be restricted to predefined limitations in its ability to adapt and improvise.

Lisbeth_Salander;n9227861 said:
There are endless possibilities for an AI to have a huge importance in a story, <clip>
Undoubtedly, but all this assumes an AI will react to things the same way a human would, and that's highly unlikely.

Cats, dogs, dolphins, apes, even insects have limited intelligence and varying degrees of self-awareness. Do you for one moment think any of them view the world the same way a human does? And no, it's not their more limited intelligence and self-awareness that causes this non-human perspective; it's that they aren't human.

Lisbeth_Salander;n9227861 said:
Russell stated that to make AIs one must first understand the human mind. The day the human race creates a self-aware Artificial Intelligence will be the day the abyss looks back, for it will truly be a reflection of ourselves.
Here I'll totally agree.
But unfortunately we're not even close to understanding the human mind. OK, maybe some miracle breakthrough will happen, but currently there's no indication that's probable.
Now, if you want to posit AIs in 2077 based on a miracle breakthrough, fine ... science fantasy.
 