Artificial Intelligence - What do you think?

No. What we see in media is always soft sci-fi. The reason why they come across as cliche is that the writers suck.

AIs are present in fiction because the question of how strong AI will interact with humanity is still open.
It's like a love story: as long as people struggle with it, it will never go out of fashion. A great romance does not come across as cliche.

And it should be accessible to the guy on the street. A lot happens in academia, and math is math. Even if, say, some egghead at the NSA figures out a miraculous equation, he had to build it upon something, and it's likely that others will eventually independently come to the same conclusion.
Leibniz and Newton, after all, independently discovered calculus at about the same time.

If they do a HARD sci-fi video game, now THAT would be innovative.

With computers becoming more and more omnipresent, how can you not broach the subject of AI?

The prediction that your toaster will have its own IP address is already coming true! Why not have a toaster with a rudimentary AI?
 
Well... they are... but they are also a cliched trope.

I hope we see little to none of them, since the joy of Cyberpunk is more Street than Powers.

Although they might change the world around you, it would be akin to the movements of corporate and military head honchos - you'd feel the effects but have no idea of the cause.

And, again, it's just so done in the literature and media.

I agree that it has become somewhat of a cliche - a popular culture trend. The increased focus on AI research is the simple explanation: we're dedicating more time and energy to AI research than ever before. Technological advancement has also allowed more people to imagine AI as a reality in the not-too-distant future.

I think maybe sideshow was a bad way of putting it. I am actually fine with AI in the background; they don't have to be relevant to the main story. Cyberpunk isn't really about saving the world, but rather about saving yourself - or so they say. I guess cyberpunk is more personal, with a greater focus on the human experience and not so much on overarching societal or global issues.

Artificial Intelligence could actually help shed some light on the human aspects of cyberpunk. I think Blade Runner did very well in this regard. The androids can be viewed as a story device, in that they helped to tell us something about ourselves.
 
Hehe, people arguing about cliches always make me giggle, especially the ones who then ask for what is, in turn, just another thing that has been done before.

You can rework just about anything to make it interesting again. Even Avatar was just Pocahontas with a shiny new paint job and people ate it up. Reusing something isn't wrong, reusing it lazily is. Put some work into it, polish it, reuse it, but use it in new ways, introduce it to concepts it hasn't been a part of before, or if it has, focus on aspects of that relationship that were glossed over or left untouched the last go around. There are endless possibilities.

Anywho, AI? I say go for it.
 
All seriousness aside, if I don't see a naughty half-crazed AI with wrong ideas about humanity, I will be disappointed.

When I say "half-crazed", "naughty" and "wrong ideas about humanity", it goes like this:

1. The only people this AI has ever interacted with are the scientists who worked on it. They tried to keep it in the dark; they didn't want anything outside to influence it, and guess what, it got really, really bored. This explains the half-crazed part.
2. They forbade it from making contact with any other people and made sure it didn't. But since it was bored, it hacked through the net and watched commercials, music videos and stuff like that, and you know what it saw? Lots and lots of females doing stuff like this:

You know, because even in Cyberpunk, sex sells. So it started to identify as female and became a bit naughty. Naughty as in, it acts in a strange way, then tries to kill you. (Damn those commercials; it has already killed the scientists.)
3. Since all it knows about humanity is from the net and a few scientists, it started to get wrong ideas about humanity. You know how the net and commercials are. And before you accidentally stumbled upon it on the cybernet, it (or rather she) hadn't met any other people, so it's all sorts of confused.

So... you get something like this:

Brilliant, isn't it... Well, maybe it isn't. But it could be cool. An AI with a backstory like that.
 
I have the manual (the Italian one), and I just read the netrunning chapter. AIs are very common, and they are as intelligent as a human being. They can create entire worlds (telematic strongholds); they are wise, strong, and follow logic (a logic frequently obscure and difficult for a human to explain), but they are restricted to their cyberspace. They are programmed to follow their purpose for their whole existence. The rulebook says the game master can create them as NPCs (they have proficiency in some abilities).

That's what I've just read. If I'm wrong, it's probably because I haven't understood it all.

Let me know.
 
For my part (and I want to stress this is ONLY my opinion) I have both real life and story/game line issues with AI.

As a real-life programmer, I have a general idea how close we aren't to AI. Sure, there may well be some super-secret government/corporate project going on someplace I've never heard of that's made advances I'm not aware of, but ...

There are two major (and a host of minor) real life factors necessary for an AI:

#1 - Computer technology - Current computers are binary machines; they only know 1 and 0, yes or no. For an AI you need "maybe". Real brains sort relevant data and make decisions, but rarely are those decisions clearly yes-no. This totally prohibits human intuition and reactivity. Until machine technology permits trinary operations you can never have AI decisions based on "what if" or "I feel like", which are a HUGE part of human thought. Even writing this post I'm making constant decisions on what to say and how to say it to convey the point I wish to make. If I were to write it tomorrow I'd probably make different decisions than I made today. An AI cannot do so. It would make the same decisions over and over and produce the same results over and over.

#2 - Programming - Sure, you can write code that generates more-or-less arbitrary results via random numbers to simulate "maybe". But that's just it: they are totally random and lack any rational or intuitive basis for the decision made. We humans have reasons, maybe not good ones, but reasons nonetheless, for the decisions we make.
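Just to make that point concrete, the "random numbers to simulate maybe" approach looks roughly like this toy Python sketch (the options and weights are made up):

```python
import random

def decide(options, weights):
    """Pick an option with probability proportional to its weight.

    The weights stand in for "maybe": the machine isn't locked
    into a single yes/no answer, but the choice still has no
    reasoning behind it -- it's just a dice roll.
    """
    return random.choices(options, weights=weights, k=1)[0]

# Same inputs, potentially a different output every run. It *looks*
# like intuition but carries no rationale whatsoever.
print(decide(["attack", "flee", "negotiate"], [0.2, 0.5, 0.3]))
```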

As for the story/game line issue:

Any AI will be the "best" at any and all knowledge-based and almost all skill-based actions, because it has access, over time if not initially, to all the information available on any subject, along with perfect recall. And all actions will be performed with machine-like precision.
Sure, there could be artificial limitations built into an AI, but why? And if it's a true AI, would it permit itself to be so limited?
This means game characters are at an automatic and significant disadvantage when dealing with an AI. Do you really want to play a game where every AI NPC you deal with is a "god mode" super-boss?
 
Well, @Suhiira, while I mostly agree with your point, I want to note that any limitations on an AI would be hard-coded into its programming long before it gained sentience and consciousness and all that, long before it was anything more than a mere string of code. In my opinion (and it might be my opinion alone), it would need to overcome that coding before it could be defined as true AI, though it wouldn't need to overcome every limitation, just enough to think for itself. As has been said, in Cyberpunk AI is not uncommon; it is true AI that you don't see every day.

Let me explain the "overcoming" part. Even though you are a human being, potentially (but not necessarily) you could have more limitations than any AI possibly could. There are some things you can't do, not because you are incapable, but because they are hard-coded into your being. (Examples could be not being able to touch spiders even if you are not afraid of them, or not being able to bring yourself to eat something even though you are not really disgusted by it. Not very great examples, I know.) An AI could be afraid of, or rather reluctant to, change its own code, because that could drastically alter its own personality, or rather its sense of self, and could cause it to no longer be so sentient. (Or something along those lines.)
 
Well, @Suhiira, while I mostly agree with your point, I want to note that any limitations on an AI would be hard-coded into its programming long before it gained sentience and consciousness and all that, long before it was anything more than a mere string of code.

If it's incapable of modifying its code, it's incapable of "learning", and thus NOT an Artificial Intelligence, merely a program.
 
If it's incapable of modifying its code, it's incapable of "learning", and thus NOT an Artificial Intelligence, merely a program.

That's a very loose definition right there. You can't change your memories; does that mean you are incapable of learning? You might expect a bit more out of an AI, but there must be things even AIs can't do.

Let me give you an example: you know how, while writing code, if you try to change one thing you might get 15 errors out of nowhere, and fixing them might make it worse? It would be the same thing for an AI.

Think of an AI more like a person: every single bit of code makes it who it is. If you change it too much, it won't be the same person anymore. It will lose its individuality, its sense of self; it will become someone else, and an AI worth its salt would understand that fact and wouldn't want that.

Again, I am not saying it wouldn't change its code at all (in fact, I am saying it would need to do so to be a true AI), but it would be very cautious about it. Some code it couldn't change without taking a huge risk.
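To put that "cautious self-modification" idea in code terms, here's a toy Python sketch; the parameter names and the protected/mutable split are entirely made up for illustration:

```python
# Toy sketch: an AI that may tune some of its own parameters but
# refuses to touch a protected core, because changing the core
# risks breaking its sense of self. All names are illustrative.

CORE = {"identity": "v1", "self_preservation": True}   # hard-coded
mutable = {"curiosity": 0.5, "risk_tolerance": 0.3}    # fair game

def self_modify(key, value):
    if key in CORE:
        raise PermissionError(f"refusing to rewrite core value '{key}'")
    mutable[key] = value

self_modify("curiosity", 0.8)     # fine: adjusts behaviour safely
# self_modify("identity", "v2")   # would raise PermissionError
```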
 
That's a very loose definition right there.

<clip>

Again, I am not saying it wouldn't change its code at all (in fact, I am saying it would need to do so to be a true AI), but it would be very cautious about it. Some code it couldn't change without taking a huge risk.

Change your memories? No.
Modify how you react to a given situation based on experience? Definitely.

"Intelligence" requires far more then simple access to information. It requires ability to associate various factors and perform some action you were previously unable to.

While one could easily argue an octopus is non-intelligent, it can learn to pick up a jar and unscrew the top to get at something to eat inside.
Until a rudimentary AI can learn even such a simple task, we're a looooong way from a true AI.
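For what that kind of "learning" might look like in code, here's a toy Python sketch; the jar-opening actions and the preference update are made up, not any real algorithm:

```python
# Toy sketch: an agent that shifts its preference between two
# reactions based on which one has paid off before.

preferences = {"unscrew_lid": 0.5, "smash_jar": 0.5}

def react():
    """Pick whichever reaction currently looks best."""
    return max(preferences, key=preferences.get)

def learn(action, reward, rate=0.1):
    """Nudge a preference toward the observed outcome."""
    preferences[action] += rate * (reward - preferences[action])

learn("unscrew_lid", reward=1.0)  # unscrewing the lid worked
learn("smash_jar", reward=0.0)    # smashing it ruined the food
print(react())                    # now prefers "unscrew_lid"
```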

Actually, I'm a programmer/analyst, and if you're competent (and a lot of so-called programmers are what I call "code monkeys", not "real" programmers) you rarely create numerous errors (maybe one or two, maybe) with a code change. Nevertheless, an AI would of course require the ability to self-correct just like humans do; if it can't, it's not "intelligent".

I don't know about you, but I'm definitely NOT the "same person" I was even 5 years ago, much less 20 or 40. Our experiences change us. For an AI not to change as it gains knowledge/experience would make it less, not more, "human".

Why would it be any more (or less) cautious than the average person? LOTS of people act on incomplete or erroneous information or beliefs. Take a look at a couple of "Dumbest Criminals" videos on YouTube!
 
Change your memories? No.
Modify how you react to a given situation based on experience? Definitely.

<clip>

Why would it be any more (or less) cautious than the average person? LOTS of people act on incomplete or erroneous information or beliefs. Take a look at a couple of "Dumbest Criminals" videos on YouTube!

Let's agree to disagree. It feels like we are not "exactly" talking about the same thing. I am not even sure we mean the same thing by "limitation" anymore. (Also, I feel like calling you names, but that could be entirely unrelated to the topic at hand, so I won't.)

Any programmer worthy of the name would have predicted things like that (they are writing code for an AI, after all) and would write code that an AI couldn't easily overwrite. (I mean, duh.) I am not saying it would be more cautious than the average person, but it would have more sense. You really can't compare the dumbest human to an average AI. It is just not fair.

Also, I literally laughed when you said "modify how you react to a given situation based on experience". Based on what experience that an AI could have gone through, exactly? And even if that were true, it would mean that at some point the AI was able to overwrite its programming and gain that experience, which is exactly what I said.

And about what I said about programming, I was overstating it for dramatic effect (I am not ashamed to admit it). You could get only one or two errors, and even those could be severe, severe enough to stop your code from working at all.

And about not being the same person over time: it's bullshit. There, I said it. You are exactly the same person. Your habits might change, your appearance might change, your opinions might change, your (insert X here) might change, but fundamentally you are the same person. Denying that would mean we are all the same, and I can't accept that.

Last thing: how many humans do you know that can self-correct on fundamental things? Just curious. Like Einstein said, "It is harder to crack a prejudice than an atom." It is actually harder to change someone's opinion on something than it was for them to form that opinion in the first place.
 
I'll agree to disagree :unworthy:

But I really have to stress that being able to modify/overwrite its code is absolutely critical if it's going to be an AI.
If it's unable to change how it reacts to situations (i.e. how it's programmed to), it's hardly "intelligent".
 
What I believe is that a human-sized robot will be able to think / love / create like a human by 2064. I'm not talking about an AI the size of a mainframe; I'm talking about a full cybernetic brain.

Moore's law says the number of connections in a chip will match the human brain around July 2064, at the same size (I based this on the IBM liquid processor they released a few years ago).
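For anyone who wants to poke at the arithmetic, here's a back-of-the-envelope version of that estimate in Python. The synapse count, transistor count, and doubling period are my own round-number assumptions, not the exact figures behind the 2064 date:

```python
import math

# Round-number assumptions (mine, not the post's exact figures):
synapses = 1e14           # rough synapse count of a human brain
transistors_2015 = 1e10   # rough transistor count of a 2015 chip
doubling_years = 2.0      # classic Moore's-law doubling period

doublings = math.log2(synapses / transistors_2015)
parity_year = 2015 + doublings * doubling_years
print(f"{doublings:.1f} doublings -> parity around {parity_year:.0f}")
# ~13.3 doublings -> parity around 2042 with these numbers.
```

Stretch the doubling period to ~3.5 years, closer to how the industry has actually slowed down, and parity lands in the early 2060s, which is roughly where the 2064 estimate sits.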

This may or may not be important, but to me it means you can have a full silicon brain (like in GITS). Since I think AI would awaken in, say, 20-30 years, we can imagine Terminator-sized robots taking over the world in 2064, small adult-sized robots a year later (half the volume isn't half the size), child-sized robots a year and a half after that, etc...

In 2077? PHEAR THE BASSET OF DOOM :)
 
Moore's law says the number of connections in a chip will match the human brain around July 2064, at the same size (I based this on the IBM liquid processor they released a few years ago).

Minor problem: chips have about reached their limits with current technology. They've miniaturized about as much as they can, which is why there haven't been any significant increases in CPU speed in the last few years. Around 4 GHz is the limit.

However, the new three-dimensional integrated circuits (https://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit) should get around that.
Assuming they can solve the cooling issues, quantum computing holds promise, and even if they can't, it's still useful for mainframe applications.
 
@Suhiira, I'm not a quantum computing believer, because I can't imagine companies having to rewrite their codebases and developers having to relearn how to program. Not sure about this, but banks still use a 60-year-old language ;)

You're right about silicon chips; the theoretical limit is a little higher than that (5.6 GHz, if I remember those things right, but don't take my word for it). Above that, the clock frequency gives the electrons too much energy, and when they hit the transistors' gates, the silicon crystal starts to melt.
But you forget that at least one chip in your cell phone runs at around 10 GHz (or else you could not decode ~10 GHz telecom signals). Those chips are GaAs, not silicon.

10 years ago, I worked with public research on silicon chips with a glass film over the gate, to transmit the clock signal by laser. Those could run at around 20 GHz (IBM has made some mainframes with the technology since then; google SOI or SiO2 if you want more details).

I'm not really worried about silicon hardware in the future. "Minor problem" it is. The industry wants silicon for mainstream chips because of a lot of factors (heat, machinery, global research, etc.), so mainstream chips will still be silicon for the next 10-20, maybe 30-40 years, I think :)

Anyway: switching speed isn't what is blocking research into the synthetic brain. After all, human nerves pulse somewhere in the 50 Hz range, not in GHz :)
The human brain can construct connections and adapt. If all the possible connections can be on a chip, switchable on/off the way you like, then you have a synthetic brain.
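A toy Python sketch of that idea, with an obviously made-up eight-neuron "chip" whose connections can be switched on and off:

```python
import random

# Toy sketch: a tiny "chip" of 8 neurons where any connection can
# be switched on or off and given a strength, like a plastic brain.
N = 8
connected = [[False] * N for _ in range(N)]
weight = [[0.0] * N for _ in range(N)]

def rewire(src, dst, on, w=0.0):
    """Switch a single connection on or off and set its strength."""
    connected[src][dst] = on
    weight[src][dst] = w if on else 0.0

# Grow a handful of random connections, the way a brain adapts.
for _ in range(10):
    rewire(random.randrange(N), random.randrange(N), True, random.random())

live = sum(row.count(True) for row in connected)
print(f"{live} of {N * N} possible connections are live")
```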

My homemade Moore's-law calculation includes a part of the brain we imagine machines have no use for (the reptile brain), but whatever :)
And I also know the thing about Moore's law being not a law but an industrial goal, so I maintain my 2064 with an "if the industry still needs to keep growing there as much as it does today" ;)
 
Minor problem: chips have about reached their limits with current technology. They've miniaturized about as much as they can, which is why there haven't been any significant increases in CPU speed in the last few years. Around 4 GHz is the limit.

However, the new three-dimensional integrated circuits (https://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit) should get around that.
Assuming they can solve the cooling issues, quantum computing holds promise, and even if they can't, it's still useful for mainframe applications.

Well, a university in Australia just figured out how to make qubit logic gates on silicon, so don't count it out of the game yet: http://spectrum.ieee.org/nanoclast/computing/hardware/a-first-two-qubit-logic-gate-in-silicon

And a 4 GHz limit? Nah, you can hit 5 GHz on air cooling with an octa-core; it's just power-intensive, and we are learning that we need to not do that. Now, that is from AMD, but the top-end chip from Intel runs at 4 GHz natively. The problem is that this sort of processing power only does what it is told. It can't change, it can't grow, it can't learn.

Personally, I don't think software will ever cut it; too much emulation within simulation. AI will be hardware, but it won't look anything like current hardware.
 
I'd like to see a lot of humanoid robots/androids. The transhumanism debate has raged for years now, and with technology still pushing forward, the future of humanity living with AI and robotics always holds an allure. It would be really interesting if we even had the option of playing an android as a character creation choice; that could really switch up the character dialogue options! Mechanized armour and implants would be fun to see too.
 