Artificial Intelligence vs. Intelligent Machines

Suhiira;n9250681 said:
It's not that I think AIs won't eventually be created. More that it's a lot less easy than many think.

It's not just a matter of stringing enough CPUs together; were that the case, networks would already be AIs. Nor is it a matter of creating programs that can search databases and find relevant data; if that were true, Google would already be an AI. It's more the ability to take seemingly unrelated information, turn it upside down and sideways, and find a way to apply it to a problem. I.e., that "little" thing called creativity. And we certainly have no clue how to create that; if we did, everyone could be Mozart, Einstein, Hawking. We're currently incapable of anything close to this in humans, who we understand far better than we do computers (and we understand very little about how human thought/creativity works), so how are we going to make a computer capable of it?

Then there's the "minor" matter of morality/ethics. There's no reason to think an AI will view the world the same way a human does and every reason to think it won't. Will this result in "Terminator"? I hope not, but we'll never know till it happens. And there's no way to ensure it won't. This isn't the sort of thing we can afford to leave to wishful thinking.

Lastly, the invention of an AI will totally upend the economic, social, political, military, and what-have-you fabric of the world. Thinking it won't is criminally naive.

I'd LOVE to see a realistic portrayal of AIs in the world, but there's absolutely no way to predict exactly what will happen. If you add AIs to a realistic (vice fantasy) game, you have to consider what happens when 50-75%+ of the workforce becomes unemployed (and mostly unemployable) virtually overnight. What happens when some nation's military becomes an unstoppable juggernaut that never makes mistakes? These aren't things that might happen with the implementation of AIs; they WILL happen.

>aren't things that might happen with the implementation of AIs, they WILL happen

I absolutely agree that the world in Cyberpunk 2020/2077 shouldn't change! With the creation of AIs the world would change, but only on the condition that said A.I. ever goes public. If the A.I. doesn't go public, then the world doesn't change. Consider also that said A.I. must be programmed with a neutral view of the world, or with limitations, so that it doesn't change things from the shadows. Tell me there isn't a possibility of such a thing happening? Consider the possibility of a human-like (or perhaps not), non-self-aware AI (and only one AI will exist in the whole game, or perhaps very few) being constructed, and its creator programming it with a number one rule: never go public. If it ever goes public, said A.I. must self-destruct immediately, thus ensuring that never happens during the whole story. But how would we "the players" know that it is an AI? Well, there are many possibilities. One of them is the Main Character knowing of the existence of such an A.I. through the creator, and thus never revealing to the A.I. that he (the Main Character) knows about its artificial state of being; BUT the MC doesn't know 100% HOW the AI is made. In this possibility, the world of CP2077 wouldn't change while still having a few AIs, or only one AI. In this hypothesis the author should consider the many "what ifs" that could force the A.I. to go public, and create ways of it not going public. This is only one example of it existing without changing the world.

But what if the Main Character decides to go public? The A.I. would self-destruct, and since said Main Character doesn't know how it was made, no one would believe the MC, and the MC wouldn't be capable of creating such an A.I., since he didn't know how. Thus a human-like, non-self-aware, non-creative A.I., like David in Prometheus, could exist in CP2077 without having implications for the world, on the condition that said AI never goes public. In this example there would be no evidence of how to construct said android besides the AI itself.

A limited A.I. that focuses on mimicking human behavior but without super intelligence, especially if said A.I. were forced/programmed to think like a human, as in Rachael's case (of course it wouldn't think like us all the time, since they're unpredictable as you stated, but it would be no smarter than the average human most of the time), would also have little to no effect on the world. Considering it would also not go public (number one rule/self-destruction), it would have almost zero effect in changing the world.

Now here comes the problem: there may exist a way to make it go public, and either we accept the suspension of disbelief OR we make the whole story surrounding it restricted. The possibilities of it going public exist, but events will NATURALLY happen without being forced to subtly conspire against it happening. I'm not talking about making things magically conspire against it happening, but about restricting many or all of the possibilities of it happening via plausible facts and/or events. Having only itself as evidence "for the first A.I. to be created" would also work very well.

>We're currently incapable of anything close to this in humans, who we understand far better than we do computers (and we understand very little about how human thought/creativity works), so how are we going to make a computer capable of it?

How can an A.I. with creativity realistically be made today? That really doesn't have an answer. But again, we're talking about a science fiction game set in the year 2077. Under the assumption that the story of the game must have a truthful "HOW THEY MADE AN A.I." explanation, then by this logic all the other scientific innovations in the game should surely follow the terms of realism, and they don't. You're stating that for AIs to exist in CP2077 they must explain their existence while staying truthful to our world's logic and views? I agree; that's why I'm against them putting self-aware ones in the game, since it stays closer to our reality. But then again, you value creativity, and me too, while I repeat the same argument about David in Prometheus: he has no creativity and yet is almost what a human-like, self-aware, 100% truthful A.I. would appear to be like. In my defense, such an AI wouldn't be able to create new things, as you stated, but it would still be useful.

Not to mention a David-like A.I. has all the parameters of an already existing A.I. today, so it is highly plausible.

I'm not saying CP2077 shouldn't follow scientific rules; I'm saying that it 99% should, but should also leave that 1% for the creation of new scientific innovations (in the game) based on already existing ones (that exist in our reality), while still completely following all the scientific rules and maintaining touch with reality (to allow neither magic nor self-aware A.I.s to happen). And while it isn't easy, as you stated, we don't need to make it look 100% like a human. Consider the mental and physical effects of cyber implants on humans; then perhaps we can have an AI that looks 70-90% like a human physically, and 55% like a human mentally with its speech. In this possibility the only problem is that people would realise it isn't human because of the speech, so perhaps it could be programmed to speak only necessary things and/or live isolated.

To me there's no difference between "never" and "will", since one assumes something is 0% likely to happen and the other 100% likely to happen. Sometimes, possibilities exist.

Have you wondered whether CP2077's creators have already had this very same discussion we're having? I wonder what conclusion they reached, or will reach.


You know that CDPR may give the "WOW" factor huge importance because of marketing strategies? In terms of movies, have you considered what made Avatar (the blue aliens movie) in 2009 have such a high profit compared to all the other movies? And when I say high profit I mean $2,787,965,087 at the box office, making it first place until today. It was because of the WOW factor: until the day Avatar first appeared in movie theatres, humanity had never seen a movie with such ambitious CGI. For a movie or a game or anything that depends on entertainment to distinguish itself from others of the same category, it MUST DISTINGUISH ITSELF with innovation; it must be different to get people's attention, thus the importance of the "WOW" factor. Avatar 2009 had a huge box office because of the "WOW" factor. It's human feelings that generate money.


 
EvilWolf;n9238791 said:
On the point of expert systems: you have to ask yourself what makes you human, or rather, what separates you from an AI, or in this case an expert system. Most of the things you do in life you learned how to do from someone else, in one way or another. In theory, an expert system which is programmed to be an expert in a multitude of fields, with enough processing power, can easily surpass the thinking capacity of a human being by achieving mastery of fields which would take a human being multiple lifetimes to accumulate. An expert system in history, philosophy and/or, to an arguably lesser extent, psychology could easily have a wider understanding of humanity than most humans do.

You're telling me that in theory an expert system could surpass us in understanding the human mind through analysis and not creativity, since they still can't create new things but are better at analysing things that already exist, so it is possible considering the current technology we have? Oh boy, take a look at this:

"Elon Musk on mission to link human brains with computers in four years: REPORT"

https://www.cnbc.com/2017/04/21/elon...rs-report.html

We dun goofed. From Sard's link:

The Oxford dictionary defines “the singularity” as, “A hypothetical moment in time when artificial intelligence and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change.”


Hoplite_22;n9241301 said:
I disagree. We are building them; we have to choose what we are going to try to build. Which, to be fair, means if corporations do it first we are all fucked.

Since "we" also includes the corporations, isn't it more likely for corporations to do it first? They have more monetary resources than good-natured people and the government combined. Not to mention that, in theory, the private sector has far more interest in experimenting and creating new projects and innovations, since it will benefit from its products more than the government will, keeping in mind that corporations can industrialize their projects but the government can't as easily.

Think about it: there are studies stating that for every 1,000 projects, only 1 of them passes to the final phase, industrialization. The private sector has more resources, flexibility and freedom for creation, while the government can't afford to "throw in the trash" 999 projects, since it doesn't have that many resources; in other words, there are many regulations preventing such creative flexibility.

Does anyone even remember Route 128 anymore? Its industrial crisis left Silicon Valley as the leading technology hub in America. But why did such a crisis happen in Boston, you ask? In "Technopoles of the World: The Making of 21st Century Industrial Complexes" by Manuel Castells and Peter Hall, the following is stated, based on the work of AnnaLee Saxenian, when mentioning one of Massachusetts's plausible crisis causes:

[...] the industry is ruled by professional associations, such as the Massachusetts High Technology Council, that enforce an industrial discipline so as to exercise their lobby power, both in the State Government and in Washington [...]

This can give you an idea of how difficult it is for "government" and "innovation" to walk side by side. This doesn't mean government never will innovate; it means it is more likely for corporations to do it first.
 
Lisbeth_Salander;n9253451 said:
If the A.I doens't go public, then the world doens't change.
Unless whoever invents an AI NEVER uses it for any practical purpose, i.e. keeps it STRICTLY experimental, how could it not soon become blindingly obvious they have an AI? Some corp/government suddenly leaps years, centuries, ahead in R&D literally overnight; their manufacturing and administrative procedures become hyper-optimized and efficient; their marketing and PR efforts break all records. No, if it's out there, and in use, it will be noticed, and sooner rather than later.

Lisbeth_Salander;n9253451 said:
A limited A.I that focus in mimicking human behavior but without super intelligence, specially if said A.I would be forced/programmed to think like a human <clip> would also have little no effects in the world, <clip>
True, as long as it stayed a laboratory rat with such artificial restrictions atop its already artificial electronic brain. And in this case, "what's the point" of including it in a game's setting ... "wow" ...

Lisbeth_Salander;n9253451 said:
Now here comes the problem: there may exist a way to make it go public, <clip> the possibilities of it going public exist, but events will NATURALLY happen without being forced to subtly conspire against it happening. I'm not talking about making things magically conspire against it happening, <clip>
I'm afraid there are no "natural", "non-magical" ways to keep the genie in the bottle once it's out (see the above comments). Thus a HUGE suspension of disbelief, i.e. "fantasy".

Lisbeth_Salander;n9253451 said:
But again, we're talking about a science fiction game set in the year 2077. Under the assumption that the story of the game must have a truthful "HOW THEY MADE AN A.I." explanation, then by this logic all the other scientific innovations in the game should surely follow the terms of realism, <clip>
No, actually "how they made it" is pretty much irrelevant (a great intellectual academic exercise, tho); HOW IT AFFECTS THE WORLD is hyper-relevant. Because if it doesn't affect the world in profound and far-reaching ways, it's not a "realistic" portrayal of AIs in a game world.

Note: We have two discussions going on here, and sometimes we mistake comments directed at one aspect (the creation of AIs) as being directed at the other (the implementation of AIs) and vice-versa.

Lisbeth_Salander;n9253451 said:
I agree; that's why I'm against them putting self-aware ones in the game, since it stays closer to our reality. But then again, you value creativity, and me too, while I repeat the same argument about David in Prometheus: he has no creativity and yet is almost what a human-like, self-aware, 100% truthful A.I. would appear to be like. In my defense, such an AI wouldn't be able to create new things, as you stated, but it would still be useful.
True; how it would be free-thinking enough to solve common everyday problems (outside of pure logic, which we already have with computers, no need for AI) while being incapable of "higher creative thought" is totally beyond me ... i.e. "fantasy".

Lisbeth_Salander;n9253451 said:
I'm not saying CP2077 shouldn't follow scientific rules; I'm saying that it 99% should, but should also leave that 1% for the creation of new scientific innovations (in the game) based on already existing ones (that exist in our reality), while still completely following all the scientific rules and maintaining touch with reality (to allow neither magic nor self-aware A.I.s to happen). And while it isn't easy, as you stated, we don't need to make it look 100% like a human. Consider the mental and physical effects of cyber implants on humans; then perhaps we can have an AI that looks 70-90% like a human physically, and 55% like a human mentally with its speech. In this possibility the only problem is that people would realise it isn't human because of the speech, so perhaps it could be programmed to speak only necessary things and/or live isolated.
Again, it's a matter of how its existence would affect the world, not how it looks or acts.

Lisbeth_Salander;n9253451 said:
To me there's no difference between "never" and "will", since one assumes something is 0% likely to happen and the other 100% likely to happen. Sometimes, possibilities exist.
Here we'll have to agree to disagree.
Short of reaching escape velocity with sufficient energy to escape Earth's gravitational influence, what goes up WILL come down.

Lisbeth_Salander;n9253451 said:
Have you wondered if CP2077 creators already had this very same discussion we're having, I wonder what conclusion they got, or will get?
I hope they have ... and I pray that if they decide to include AIs they do so with "realistic" effects and consequences. If it's in the game I don't want it for the "wow" factor or as a glaring (to me) "fantasy" element in an otherwise "realistic" game, it would blow immersion right out of the water (for me).

Lisbeth_Salander;n9253451 said:
You know that CDPR may give the "WOW" factor a huge importance because of marketing strategies?
They might, and if they do I'll enjoy it every bit as much as I enjoy "Shadowrun", a fantasy game in a fantasy setting.
But it won't be CP2020 (upgraded and revised) ... a science fiction game in a science fiction setting ... and that's a pity.
 
Suhiira;n9254791 said:
True; how it would be free-thinking enough to solve common everyday problems (outside of pure logic, which we already have with computers, no need for AI) while being incapable of "higher creative thought" is totally beyond me ... i.e. fantasy

>A.I being free thinkers

On the contrary, David has no free thinking; he was only able to recreate something that already existed, after all.

If I stated that he was "free", it was in the sense that he no longer had the most restrictive rule of all: to serve and protect Peter Weyland. And even then he was restricted by other rules.

Suhiira;n9254791 said:
True, as long as it stayed a laboratory rat with such artificial restrictions atop it's already artificial electronic brain. And in this case, "what's the point" of including it in a games setting ... "wow" ...

What's the point of having computers that walk and can interact with the world and the people around them?

It could be out in the world on the condition that it would have restricted liberty, as stated in my example.

Suhiira;n9254791 said:
HOW IT AFFECTS THE WORLD is hyper-relevant. Because if it doesn't affect the world in profound and far-reaching ways, it's not a "realistic" portrayal of AIs in a game world.

It needs to have a realistic portrayal of A.I.s. It also needs a realistic portrayal of how A.I.s would affect the world. Both are relevant to staying as close as possible to reality.

Well that is if we want a realistic Cyberpunk 2077.


Suhiira;n9254791 said:
Unless whoever invents an AI NEVER uses it for any practical purpose

An A.I. could have purposes while still being anonymous. Not purposes as big as a public A.I.'s, but that's the point, since we don't want them modifying the world of CP2077.

Suhiira;n9254791 said:
I'm afraid there are no "natural" "non magical" ways to keep the genie in the bottle once it's out (see the above comments). Thus a HUGE suspension of disbelief, i.e. fantasy, would be required.


Except you forgot the rest of the comment, where the other alternative is explained:

With only one A.I. in existence, no evidence of its existence besides the A.I. itself, and a self-destruction rule in case it goes public, it gets easier to close off all the possibilities. Just because it's difficult to close them doesn't mean it's impossible.

With a being that is 100% programmable, the genie can stay in the bottle with plausible excuses.



Suhiira;n9254791 said:
Again, it's a matter of how its existence would affect the world, not how it looks or acts.

Acts and looks affect the world, since it needs to stay out of the public eye.

An A.I. not going public while still roaming the streets of Night City needs to at least look and act human so as not to affect the world and consequently not go public; it would therefore be reasonable for it to look and act like a human.

If the thought "why would a creator never want his A.I. to go public?" ever crosses your mind... well, the A.I. not going public could be due to intentional or unintentional causes.

You should watch Westworld. You won't see the finale coming.
 
Suhiira;n9254791 said:
I hope they have ... and I pray that if they decide to include AIs they do so with "realistic" effects and consequences. If it's in the game I don't want it for the "wow" factor or as a glaring (to me) "fantasy" element in an otherwise "realistic" game, it would blow immersion right out of the water (for me).

Suhiira;n9254791 said:
They might, and if they do I'll enjoy it every bit as much as I enjoy "Shadowrun", a fantasy game in a fantasy setting. But it won't be CP2020 (upgraded and revised) ... a science fiction game in a science fiction setting ... and that's a pity.

Mike is there; there is a chance that they'll have both. I hope they'll use it as a marketing strategy while staying realistic. Have you noticed that in many recent interviews Mike seemed odd when describing the game? Perhaps this thing about Mike should be discussed privately, in order to preserve balance here.

Suhiira;n9254791 said:
Here we'll have to agree to disagree. Short of reaching escape velocity with sufficient energy to escape Earth's gravitational influence, what goes up WILL come down.

If what goes up stays in space forever, it may never come down.
 
Lisbeth_Salander;n9253691 said:
You're telling me that in theory an expert system could surpass us in understanding the human mind through analysis and not creativity, since they still can't create new things but are better at analysing things that already exist, so it is possible considering the current technology we have?

A major advantage an expert system theoretically has over a human is that a human has his/her own knowledge of a field, whereas an expert system could potentially have the knowledge of everyone who contributed to its development. The implication of this is as follows: an expert system can, theoretically, be smarter than any one human; however, the expert system in question would not be smarter than a group of humans particularly skilled in the field in question, or a genius.
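As a toy illustration of that "pooled knowledge" idea, here is a minimal forward-chaining rule engine in Python. The rules and facts below are entirely invented for the example; real expert system shells are far more sophisticated, but the core loop of applying if-then rules contributed by different "experts" until no new conclusions appear looks roughly like this:

```python
# Minimal forward-chaining inference. Each rule is (conditions, conclusion);
# the rule base is a hypothetical mix of rules from different human experts.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),                  # medical expert
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),   # medical expert
    ({"engine_wont_start", "lights_dim"}, "dead_battery"),  # mechanic
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

The point of the sketch: one program can chain conclusions across rule sets no single contributor wrote in full, which is the (modest, mechanical) sense in which an expert system "knows" more than any one of its authors.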

Suhiira;n9241661 said:
As much as it might be nice to think the government has the best AI research teams it's extremely unlikely. Some corp will do AI first.

To be honest, I would rather certain corps do it first than shadier elements of the government. If I had to guess, I would say IBM and Google are the corps closest to achieving AI.
I hope IBM does it first.
 
@Suhiira

You once said how unlikely it would be for an Artificial Intelligence to have improvisation without having self-awareness.

Well, the following video surprised even someone who defends A.I.s like me:


An AI played against the world's greatest DOTA player. The AI adapted its strategies by simply playing against itself for "lifetimes." Guess what? The A.I. won.

Another A.I. plays Go, the ancient Asian game that has existed for 3,000 years. A game so complicated that Asian kids are put in special Go schools when they appear to be too good at it at an early age. A game so complicated it would take millions of years to program a single A.I. with all its strategies. A game so complicated that mathematicians say there are more unique positions in Go than atoms in the universe. But they programmed said AI to learn and redesign itself, and by doing so it won against humanity's best Go player.

It took humans 3,000 years to master Go, while an A.I. younger than you and I was better at playing it. Said A.I. was so superior to us at playing Go that it created NEW strategies that are now being used by professional Go schools.

I used to think that really smart A.I.s would only exist in science fiction books, but now I'm having serious doubts about it. Life has existed on this planet for 4.5 billion years. The first animals that weren't microscopic started to appear in the ocean around 600 million years ago. Around 100-200 thousand years ago, the first humans as we know them started to evolve. Civilization has existed for what, 10 thousand years? Meanwhile, A.I.s only started to be developed in the 1950s and are already surpassing us in many intellectual fields of work. So unless we do something right now, the future may not be good.
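For anyone curious what "learning by playing against itself" can look like at toy scale, here is a Python sketch of tabular self-play value learning on the simple take-away game Nim (players alternate removing 1-3 objects; whoever takes the last one wins). This is nothing like the neural-network systems behind the DOTA and Go results; it only illustrates the principle that a program can discover a game's strategy purely from self-play. All hyperparameters are illustrative choices:

```python
import random

Q = {}  # Q[(pile, move)] = learned value of `move` for the player to act

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, eps):
    """Epsilon-greedy move selection from the shared value table."""
    moves = legal_moves(pile)
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

def train(episodes=30000, alpha=0.5, eps=0.3):
    """Both 'players' share one table, i.e. the program plays itself."""
    for _ in range(episodes):
        pile = random.randint(1, 21)
        while pile > 0:
            move = choose(pile, eps)
            nxt = pile - move
            if nxt == 0:
                target = 1.0  # taking the last object wins
            else:
                # The opponent moves next; their best reply is our loss.
                target = -max(Q.get((nxt, m), 0.0) for m in legal_moves(nxt))
            old = Q.get((pile, move), 0.0)
            Q[(pile, move)] = old + alpha * (target - old)
            pile = nxt
```

After training, the greedy policy rediscovers the classic "leave your opponent a multiple of four" strategy, without that rule ever being programmed in, which is the same flavor of result (at a vastly smaller scale) as an AI inventing new Go strategies.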
 
Lisbeth_Salander;n9352371 said:
@Suhiira

You once said how unlikely it would be for an Artificial Intelligence to have improvisation without having self-awareness.

Well, the following video surprised even someone who defends A.I.s like me:
I watch almost everything ColdFusion puts out; good stuff!

But while AIs may demonstrate a very limited form of improvisation within a very confined set of parameters (i.e. a single game and its rules), it's still a LONG way from the real thing:
taking essentially unrelated information and putting it together in unique ways.

Lisbeth_Salander;n9352371 said:
I used to think that really smart A.I.s would only exist in science fiction books, but now I'm having serious doubts about it. Life has existed on this planet for 4.5 billion years. The first animals that weren't microscopic started to appear in the ocean around 600 million years ago. Around 100-200 thousand years ago, the first humans as we know them started to evolve. Civilization has existed for what, 10 thousand years? Meanwhile, A.I.s only started to be developed in the 1950s and are already surpassing us in many intellectual fields of work. So unless we do something right now, the future may not be good.
Computers do have a SIGNIFICANT advantage in terms of "learning" in that they can do so at literally the speed of light, so they can pack 10,000 years into a few hours.
And that is precisely the thing that makes them very, VERY scary. A truly self-programming, self-aware AI could exploit the slightest flaw in its restrictions faster than we could even push the "off" button.
 
Lisbeth_Salander;n9352371 said:
@Suhiira

You once said how unlikely it would be for an Artificial Intelligence to have improvisation without having self awareness.

Well, the following video surprised even someone that defends A.I.s like me:

<clip>

Sometimes I think humans are really just a stepping stone to a greater form of "life", like the ape.
In the future, AIs will view us as we view the ape: innocently stupid creatures acting on impulses their whole short lives.


 
The greatest minds in the artificial intelligence field arguing with each other, including Elon Musk and DeepMind's CEO:

Suhiira;n9353371 said:
Taking essentially unrelated information and putting it together in unique ways.

Now that is exactly what Ray Kurzweil says at 47:40 when asked how A.I.s will be beneficial for us:


"Humans can't imagine concepts we can't imagine."

Perhaps the technological singularity will happen when AIs are capable of creating innovations in general. In other words, when AIs can imagine things we can't imagine.
 
Lisbeth_Salander;n9357481 said:
Perhaps the technological singularity will happen when AIs are capable of creating innovations in general. In other words, when AIs can imagine things we can't imagine.
More when they start to imagine things we could have imagined were we, as a species, a few hundred times older, since their main ability is essentially to compress time by doing things faster.
 
Reality VS Art

The CEO behind Google's artificial intelligence (the world's most powerful AI) talking with Blade Runner 2049's director:

[video=youtube;pPn-xuifKFg]https://www.youtube.com/watch?v=pPn-xuifKFg[/video]

 
When Elon Musk and Google are considered the greatest "AI" has to offer, I realize this thread has failed...

I'm just gonna say it outright: there's a difference between pop "AI", often discussed by journalists, writers and entrepreneurs, and real AI, an area of scientific research at the intersection of math, computing and cognition. One is complex and requires some academic background to properly understand, university-level education to implement, and many years of academic research work to expand upon and contribute to. The other accepts all sorts of speculation, opinions and, often, irrational ideas and fears. I'll let you decide which is which.

You're all free to listen to whatever you like. I do realize now that, unlike astrophysics or evolutionary biology, the AI community hasn't done a great job communicating its achievements and results to the general public, so it's easy to confuse what's business, what's fiction and what's science.
 
volsung;n9834121 said:
When Elon Musk and Google are considered the greatest "AI" has to offer, I realize this thread has failed....

Yeahno. It's a discussion forum and this is a discussion. No fail here.

 
volsung;n9834121 said:
I'm just gonna say it outright: there's a difference between pop "AI", often discussed by journalists, writers and entrepreneurs, and real AI, an area of scientific research at the intersection of math, computing and cognition. One is complex and requires some academic background to properly understand, university-level education to implement, and many years of academic research work to expand upon and contribute to. The other accepts all sorts of speculation, opinions and, often, irrational ideas and fears. I'll let you decide which is which.
And that's the problem with permitting AIs in CP2077.
Most folks think in terms of popular "pop culture" AI and have no clue what "real" AI involves.
If it's included the way "pop culture" would see it, then it should be all-pervasive, and it will be essentially meaningless as it's "just another" person to interact with.

WOW, an AI!

(( Just rolls her eyes. ))
 
Well, originally when I started this thread I wanted to express my concern about how some tech terms (aka technobabble) are often thrown around and, instead of contributing to building a believable world, do exactly the opposite. One such term is, in my opinion, "AI" as a name for a sentient, artificial entity. Sentient programs are precisely that, and sentient androids that seamlessly integrate into human societies are also precisely that. To get there, I tried to provide a concise and short overview of the state of the art in AI research.

(Summary of summary here, for the record)
Basically the idea is that many individual aspects of what we consider intelligent behavior have been successfully modeled using advanced math, and successfully implemented in practice in one form of machine or another. In terms of the evolution and architecture of cognition, there is no reason to assume synthetic beings cannot develop some kind of consciousness similar to ours, simply because we don't know enough about consciousness to rule it out. The achievements of AI and cognitive science do suggest, however, that mere functions are no longer enough to single out human intelligence as something especially unique.
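To make the "modeled using math" point a bit more concrete, here is a toy sketch (my own illustration, not tied to any particular system in this thread): a single perceptron, one of the earliest mathematical models of a neuron, learns a simple decision rule from labeled examples by nudging its weights after each mistake.

```python
# Toy perceptron (Rosenblatt-style learning rule): learns logical AND.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Threshold activation: fire if the weighted sum exceeds zero.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # -1, 0, or +1
            # Nudge weights and bias in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND as (inputs, expected output) pairs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
# predictions == [0, 0, 0, 1]
```

It is a trivial behavior, of course, but it shows the shape of the argument: a capability we describe in mental vocabulary ("learning a rule") reduces to a short, well-understood mathematical procedure.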

This should have been an encouragement for companies like CDPR to 1) focus on world-building and storytelling, 2) avoid obsolete ideas, and 3) choose their tech language carefully so their story remains relevant for a long time.

My earlier post simply noted that the thread had somehow ended up discussing what millionaires and business people think about the plausibility, and sometimes the "imminent dangers", of something they invest in but barely understand, in both its technical and philosophical dimensions. By their logic, we should also stop having babies because of the greater-than-zero chance one of them might turn out to be a criminal mastermind and enslave humanity. These CEOs have certainly contributed to the mainstream adoption of AI-based technology, but the science behind it was exciting in the 1980s... university labs have moved on to other things, like "taking essentially unrelated information and putting it together in unique ways" (e.g., computational creativity, combinatorial optimization, intrinsic motivation in autonomous learning, etc.). Like I said, I think some of the newer sciences (AI included) have failed to properly communicate their results in a non-technical manner.

Back to the thread title: modern advances in the field of AI make it possible to delegate all sorts of tasks, from path planning to scheduling to financial investments to complex decision making, to computer programs. Current efforts integrating vision, object manipulation and planning will make it possible to have robotic assistants, even at home, in the near future. These assistants must be able to quickly process massive amounts of inconsistent information and make educated guesses, so one stereotype (the super-logical robot) must go. Taking a fictional leap into the near or not-so-near future, as sentient robots become progressively more autonomous, proficient and maybe human-like, the challenge for video games, movies and books is telling meaningful stories based on these ideas, or on their implications. Otherwise, like Suhiira said, these sentient programs and machines are no different from regular characters or, in the worst case, simply replace the dragons of medieval fantasy. Anyway, the point is that there's a not-so-fine line between science fiction and fantastic fiction, and I assume CP77 wants to be the former? Hence the importance of discussing these topics.
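For anyone curious what "delegating path planning to a program" actually looks like, here is a minimal sketch (my own toy example, assuming a 4-connected grid with unit move costs): A* search, the textbook path-planning algorithm, finding a route around obstacles.

```python
import heapq

def astar(grid, start, goal):
    """A* path planning on a 2D grid; grid[r][c] == 1 marks an obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic: admissible on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue entries: (estimated total cost f, cost so far g, cell, path).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # wall with a single opening on the right
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours around the wall
```

Nothing mysterious, no sentience, just a cost function and a heuristic; that gap between what the program does and what the marketing says it does is exactly the pop-AI vs. real-AI distinction above.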

Edit: Basically the question is how far can/should CDPR, or any game company, go in terms of concepts, vocabulary, educated speculation, and so on, in order to create a solid sci-fi world in our era of mainstream interest in technology? Games like Pillars of Eternity and Divinity have it relatively easy: with magic, anything goes!
 
volsung;n9847911 said:
Edit: Basically the question is how far can/should CDPR, or any game company, go in terms of concepts, vocabulary, educated speculation, and so on, in order to create a solid sci-fi world in our era of mainstream interest in technology? Games like Pillars of Eternity and Divinity have it relatively easy: with magic, anything goes!
Trying to explain AI to the layman is like trying to explain quantum mechanics ... not gonna happen.
Lots of people think they know what it's all about, but in most cases they don't have a clue.
(They'll, however, loudly and frequently argue their opinions ... yes, opinions, not facts ... with anyone who disagrees with them.)

Games, for all that they can do, aren't really in a position to try to educate players on technical issues.
Sure, they can present various moral issues, but technical ones? No.
 
Suhiira;n9850721 said:
Trying to explain AI to the layman is like trying to explain quantum mechanics ... not gonna happen.
Lots of people think they know what it's all about, but in most cases they don't have a clue.
(They'll, however, loudly and frequently argue their opinions ... yes, opinions, not facts ... with anyone who disagrees with them.)

Games, for all that they can do, aren't really in a position to try to educate players on technical issues.
Sure, they can present various moral issues, but technical ones? No.

If a game were focused on a topic central to quantum mechanics I'd certainly appreciate a short intro, but I'd most definitely expect the designers, writers and developers to have a functional understanding from which to create a plausible world, preferably drawn from sources other than business entrepreneurs and comic books. That's the difference between fantasy and sci-fi. I am not at all against storytelling without a plausible foundation; I personally love comic books, fantasy and all sorts of implausible fiction. I just don't call it science fiction.

Good science fiction is not technical at all, but it is informed enough to know what *not* to say or do. In other words, the moral issues are, directly or indirectly, derived from and constrained by technical possibilities. Consider, for instance, the concept of "evolution" in Pokemon games: everybody knows it's technically wrong, but no one cares because it's a playful, cartoonish game. This also means, however, that any writing more nuanced than a Saturday-morning cartoon is out of the question. What kind of stories does CP77 want to tell? And how long will it take before those stories become objectively silly, not just for a few but for a large enough crowd?
 
volsung;n9869211 said:
I am not at all against storytelling without a plausible foundation; I personally love comic books, fantasy and all sorts of implausible fiction. I just don't call it science fiction.
Here we'll agree. I have a large collection of sci-fi and fantasy books filling several large floor-to-ceiling bookcases, and one, one, full of reference works (some of which are more fantasy than fiction themselves).

But I'll disagree that fantasy works can't have good writing. The quality of writing is independent of the source material.
 