Artificial Intelligence vs. Intelligent Machines

The thing about saying AI vs. intelligent machine is that you need to determine the means by which said intelligence is "artificial".

I mean, you could have a dog that is "artificially intelligent" through gene therapy or biomodification. Technically the intelligence is still "artificial", vis-à-vis "artifice" meaning manufactured.

An intelligent machine would thus be a different form of "AI".
 
Another update on AIs ... I'm starting to think I need to reevaluate my stance on their availability in 2077 ...
https://www.youtube.com/watch?v=mQO2PcEW9BY

And ... speaking of Watson ...
https://www.youtube.com/watch?v=yXcDir9Y9CI
BUT!
It still needs a mainframe supercomputer ... so don't expect androids.

Then there's the $1,000,000 question ... can an AI be sentient?
https://www.youtube.com/watch?v=JTOMNkZJRao
Short answer ... no.

And on a related note:
https://www.youtube.com/watch?v=wqH9KX9o0vg
Currently research-lab stuff ... but easily doable by 2077.

But of course it has all the limitations of WiFi ... no such thing as secure communication.
 
I promise to read the paper referenced in the consciousness experiment, but I gotta say the guy from that YT channel doesn't seem to have a clue about actual AI or cognitive science...

Edit: OK I'm back. Watched most of those YT videos. About the videos themselves, all I will say is they are entertaining but very, very poor from a science communication point of view. Essentially, it's some guy who "likes technology" and makes videos about the headlines from popular (non academic) news sources, with more personal opinions than content.

Now on to consciousness. You have to understand there are many processes, even intelligence, that are named so we can study them but are hard to define concretely and even harder to formalize mathematically. We realize there is something because we can test and measure things about ourselves, and so we have come up with some of these concepts. Whether intelligence is exclusively the product of a biological brain, or whether some phenomenology of mental states is necessary for consciousness, is a matter of study, and perhaps because of the loosely defined nature of these topics, no answer may ever be found. If you really want to get into phenomenology, you'll end up discussing things like "what it's like to experience the color red".

And this brings me to the video and the associated paper. The authors themselves have defined their field as "psychometric AI", i.e. test-based AI. They are engineering methods to prove that machines, too, can pass tests. They accomplish many of these tasks through a combination of a logical inference system similar to theorem proving and natural (aka spoken human) language processing. This says very little about actual consciousness in robots or, for that matter, the limits of artificial minds. The paper itself is titled "Real Robots that Pass Human Tests of Self-Consciousness", but the YT guy clearly states that these robots "are conscious", not that they pass tests.

The truth is much of what goes on in pure robotics nowadays is closer to engineering. That is, the design of efficient machines and their control systems. AI separated from robotics many years ago, among other reasons because most core issues are more easily approached from a clean mathematical framework, free of the limitations of working with actual machines. Current technology has made it possible to bring some interesting AI systems back on board robots. But if you think about it, robots are nothing more than a mobile computer with sensors. A lot of what is currently possible in robotic AI was already achieved, albeit humbly, by the robot Shakey in the late 60's, which utilized a true planner to solve actual problems. Nowadays people are working on solving more complex problems with better planners that integrate different kinds of knowledge (e.g. temporal constraints), reasoning under uncertainty and with known or unknown probabilities, object manipulation, object recognition (form and function), etc. Many things such as localization and navigation are now well understood. AI outside of robotics is much more advanced, not simply because of hardware limitations (nowadays you *could* pack a Xeon Phi or Tesla in a large mobile robot) but because of core issues such as the computational (theoretical) complexity of problems, which means there are many analytical and fewer experimental (i.e. non-simulated) results. There are many videos out there showing robots performing tasks (e.g. flipping pancakes) but a lot of this is still heavily scripted and/or relies on huge knowledge representations maintained by human experts.

Current trends related to this in actual AI research include learning quickly from very few examples, reducing the dimensionality of large problems (e.g. reasoning with beliefs, i.e. probability distributions over information states), combining statistical methods (Monte Carlo search) with value approximation (e.g. "deep" neural networks), among others. There are plenty more, for instance in computer vision (3D sensor data analysis), natural language processing, and so on, each with interesting analytical and experimental results that are often too academic to attract any widespread interest. There is the common "belief" that an AI system must be able to "reprogram itself", which shows a lack of understanding of what this is all about. AI is about discovering formal methods that correctly model and/or efficiently implement what we consider "intelligent behavior". This leads to very comprehensive systems with dynamics that allow them to change over time. In other words, these are "programs designed to change and adapt", for instance by making decisions following some long-term reward maximization rule, with reward values obtained from experience. This means the agent (or robot) will effectively make progressively better choices, because this is the nature of the model it is running and not because it is aware of its own programming or because it adds "if-else" blocks to its "code". After becoming an "expert", the agent or robot still runs the same program but is now more informed. Based on experience and reward functions, the same program could also lead to different "behavior" profiles (e.g. defensive, aggressive) based simply on what works best in its environment.
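To make the "reward values obtained from experience" part concrete, here is a minimal, made-up sketch (a toy three-armed bandit with invented payoff probabilities, not any particular lab's system). The loop never rewrites itself; only its value estimates change, yet its choices get progressively better:

```python
import random

# Minimal sketch: an epsilon-greedy agent facing a 3-armed bandit.
# The "program" never changes; only its value estimates do, so its
# choices improve purely from experience.

TRUE_MEANS = [0.2, 0.5, 0.8]   # hidden payoff probabilities (made up for the example)
EPSILON = 0.1                  # exploration rate

estimates = [0.0, 0.0, 0.0]    # learned value of each action
counts = [0, 0, 0]             # how often each action was tried

def pull(action):
    """Stochastic reward: 1 with the arm's hidden probability, else 0."""
    return 1.0 if random.random() < TRUE_MEANS[action] else 0.0

for step in range(10_000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])

    reward = pull(action)
    counts[action] += 1
    # Incremental average: the estimate moves toward the observed rewards.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # should approach TRUE_MEANS, with arm 2 chosen most often
```

Swap the bandit for a game or a robot task, add states and transitions, and you get (in spirit) the kind of learning systems described above.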

So dismissing everything that YT channel said, I would still say that by 2077 we should have some pretty advanced autonomous robots integrated into our everyday lives: helping around the house, in hospitals, in industry, etc. Heck, I think in maybe 10-15 years we should see functional, autonomous robots in some homes (a sort of robot butler or helper, not Data!) (now this IS the area of work of many current university labs). By 2077 they should naturally be much more advanced and we will probably interact with them in a relatively natural manner (like we would with other people), but they might not necessarily move like humans and might still be somewhat specialized or have limitations in subtle tasks (personal statements on music or art?).

And with this last part I will wrap things up: the lack of intentional mental states and qualia can be hypothesized not only of computer agents and robots. We might as well argue that some people go around the world simulating mental states without actual comprehension, reciting books without understanding their content and spewing bullshit without grasping the implications of their words. This "philosophical zombie" would be indistinguishable from humans, except for the fact that it lacks qualia, i.e. subjective experience or sentience. Regardless of whether this makes sense outside of academia, the point is these questions are valid for any agent or entity that possesses higher cognitive functions and appears to have some form of awareness, as would be the case for very advanced, intelligent agents, robots and humans.
 
Corewolf;n8134400 said:
The thing about saying AI vs. intelligent machine is that you need to determine the means by which said intelligence is "artificial".

I mean, you could have a dog that is "artificially intelligent" through gene therapy or biomodification. Technically the intelligence is still "artificial", vis-à-vis "artifice" meaning manufactured.

An intelligent machine would thus be a different form of "AI".

That's actually an interesting point. Humans with augmentations would also have artificially enhanced intelligence. It'd be the tech equivalent of the "wizard's hat of +2 INT".

"AI" however is simply the research field that attempts to understand mammal and human intellect and implement it on machines, through the design of formal mathematical models. I simply think the term "AI" used as an entity (eg. an AI) is wrong for many reasons, and practically noone in the field uses it that way. Probably because intelligence is seen as a function or a process, not an object. Society in 2077 should be better informed and used to these devices, and should have an appropriate name for them.
 
volsung;n8388050 said:
Edit: OK I'm back. Watched most of those YT videos. About the videos themselves, all I will say is they are entertaining but very, very poor from a science communication point of view. Essentially, it's some guy who "likes technology" and makes videos about the headlines from popular (non academic) news sources, with more personal opinions than content.
As you've no doubt noted, I'm MUCH more of an engineer than a philosopher or cognitive scientist, so your insights were very informative, thank you!

OK ... I'll re-revise my potential revision of the availability of a "true AI" in 2077 back to ... maybe, but probably not ;)

And have a couple Red Points from me as well.
 
Suhiira;n8397910 said:
As you've no doubt noted, I'm MUCH more of an engineer than a philosopher or cognitive scientist, so your insights were very informative, thank you!

OK ... I'll re-revise my potential revision of the availability of a "true AI" in 2077 back to ... maybe, but probably not ;)

And have a couple Red Points from me as well.

You're welcome. I could talk about this all day, this is what I do for a living (AI research).

My next question would be: what is "true" AI? Something like you see in movies? Data from Star Trek or HAL 9000? I'll just write another wall of text here, because I can.

Let's use Data as an example: he's fully functional in human environments and capable of learning quickly from few examples. He understands many human contexts and tries to convey an appropriate tone of voice. Much like a human, he tends to babble (as observed by Picard). If not for his weird-looking skin and eyes, he could pass as a human. A quirky one maybe, but a human nonetheless. Is his program "true" AI? I'll elaborate just a bit to clarify some ideas. Your search for the "true AI" might be like that of a paleontologist who keeps finding dinosaur fossils but cannot find any goddamn dragon bones.

I. (Yet another) Overview of AI.

In the field of AI we already have mathematical models that explain and implement learning from trial and error, logical reasoning from large amounts of declarative knowledge, reasoning under uncertainty, speech processing (sounds), some amount of language processing (semantics), relatively advanced computer vision (object recognition and tracking), "multi-modal" sensor integration (laser range scanners, 3D cameras, etc.), etc. One of the recent, well-known successes of AI covered by mainstream media was AlphaGo, which is "simply" the result of combining learning through trial and error (reinforcement learning methods) and statistics-driven tree search with value approximation (necessary in large domains) using multi-layered neural networks. This is impressive, but more so in practice than on paper, since people have been working on these things for decades and we expected things to work in practice sooner or later. The entire "deep learning revolution" of today is the product of having sufficiently fast hardware run the models we already had. Granted, when you add so many layers, systems become more complex, and you then get people that play around with deep neural networks (e.g. combining images and so on) and get funny results they don't fully understand. Anyway, the point is there is lots of progress in modeling particular aspects of intelligent behavior, but most of it is too academic at the moment. And state-of-the-art research is so specific that you need a proper AI background to appreciate it, especially because often the results are mathematical proofs.
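As a toy stand-in for the "statistics-driven search with value estimates" part (hugely simplified, my own invented example and game, and emphatically not AlphaGo's actual algorithm): score each candidate move by averaging the outcomes of random playouts. Real systems build a search tree and use a learned policy/value network instead of random playouts, but the principle of estimating move values from simulated statistics is the same.

```python
import random

# Toy game: players alternately add 1, 2 or 3 to a running total;
# whoever reaches 21 (or more) first wins. Game and rollout count
# are made up purely for illustration.

TARGET = 21
MOVES = (1, 2, 3)

def rollout(total, player_to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while True:
        total += random.choice(MOVES)
        if total >= TARGET:
            return player_to_move        # this player reached 21 first
        player_to_move = 1 - player_to_move

def choose_move(total, player, n_rollouts=500):
    """Score each legal move by its estimated win rate for `player`."""
    best_move, best_value = None, -1.0
    for m in MOVES:
        nxt = total + m
        if nxt >= TARGET:
            return m                     # immediate win, take it
        wins = sum(rollout(nxt, 1 - player) == player
                   for _ in range(n_rollouts))
        value = wins / n_rollouts        # crude Monte Carlo value estimate
        if value > best_value:
            best_move, best_value = m, value
    return best_move

print(choose_move(total=16, player=0))   # should print 1: moving to 17 is the winning reply
```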

II. What robots need (because I think you mean AI in robots)

Running these things onboard mobile robots is challenging for many reasons:
1) Real-life limitations: bring asymptotic convergence bounds down to Earth with limited processing time, limited information, unreliable actions, etc.
2) Integration of actual robot sensors into planning/learning/reasoning/etc., therefore adding tons of noisy data.
3) Transforming high-level operators (e.g. "grasp mug") into low-level control operators (all the specific arm, joint and hand movements necessary to grasp a mug).

Making robots run such plans is in fact the goal of several research groups. Once in a while someone will try to put it all together and publish interesting results. This is when you see pancake-flipping robots, coffee-serving robots, or robots that find magazines. This last example is quite good for the state of the art (2015). This particular robot integrated localization and semantic mapping techniques (identifying the form and function of objects within their spatial location), navigation (going from X to Y), classical planning (problem solving assuming full knowledge and deterministic actions) and some form of probabilistic planning (performing actions to reduce uncertainty). In the experiment, the robot had to find a magazine in a building, so it looked around for tables and bookshelves (known to contain magazines) and when it was sure it had to be in one particular office, it looked harder. Because it could not find it, it assumed (created a series of internal logical constructs) it must be hidden inside a container, so it opened a drawer and found the magazine. This is both very advanced and very limited. But what happens if we add more knowledge of regular office interactions and subtasks (opening doors, pouring coffee, etc.), Google-level speech recognition (translating spoken into written instructions), a strong enough NL parser to get the hang of basic instructions, and likewise a Google-level voice synth to respond? It should be able to interact with people in tasks like:

-- Where's my coffee mug?
-- [he's Steve, his mug is bright blue, I saw one in the meeting room]. It might be in the meeting room.
-- Could you bring it to me?
-- Sure!

This sort of thing (an enhanced magazine finder) is technically possible (and very expensive due to hardware limitations) but also potentially worthless from a science standpoint. Hand coding behavior profiles and knowledge representations is sort of a mix of AI and software engineering for robots. We have mathematically correct models for learning and decision-making, but most of them are still impractical for real-time use. Another example: In the mid 90's there was a very successful Backgammon program called TD-Gammon, which very much like AlphaGo used reinforcement learning with neural network value approximation and learned, without much actual prior knowledge, how to play Backgammon. Not only did it reach human proficiency, it actually beat the world champion and discovered better opening moves that top players then started using. The caveat? It needed hundreds of thousands of training iterations to learn this one particular game. Not to mention, the real world is orders of magnitude more complex than a game.
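For the curious, the temporal-difference idea behind TD-Gammon can be shown on a much smaller problem. Below is a sketch on the classic five-state random walk (a standard textbook example); a simple value table stands in for TD-Gammon's neural network, and the parameters are invented for illustration.

```python
import random

# TD(0) value prediction on a 5-state random walk: states 1..5,
# terminals 0 and 6, reward 1 only when exiting on the right.
# The update V(s) <- V(s) + alpha * (r + V(s') - V(s)) is the same
# rule, in spirit, that TD-Gammon applied to a neural network.

N_STATES = 5
ALPHA = 0.1                        # learning rate (illustrative)
V = [0.0] * (N_STATES + 2)         # V[0] and V[6] stay 0 (terminal)

for episode in range(10_000):
    s = 3                                    # start in the middle
    while s not in (0, N_STATES + 1):
        s_next = s + random.choice((-1, 1))  # step left or right
        reward = 1.0 if s_next == N_STATES + 1 else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + V(s').
        V[s] += ALPHA * (reward + V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[1:N_STATES + 1]])
# True values are 1/6, 2/6, ..., 5/6 (probability of exiting on the right).
```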

So in bringing AI to robots, the real issue is the severe limitation we have when handling inaccurate perception and actions and choosing relevant information fast enough to act within a reasonable amount of time. For instance, at any given time, why should the robot record the location of any blue mug? Either way, this is all technically possible, but putting it together is a challenge where engineering and technology (the robot machinery, sensors, control operators, etc.) and science and math (AI) must reach a compromise.

So given fast enough computer hardware and advanced enough robot hardware, could we build Data? We could definitely build particular aspects of Data. A general purpose, fully autonomous device with continuous planning and learning and incredibly precise perception and interaction... hmm. Don't know. I should hope so, we have plenty of time.

III. But, but, ... they would still not be sentient beings, so it's not "true AI".

Like we said before, the phenomenology of mental states is a philosophical construct, of something we assume we possess. Is self-awareness the product of a sufficiently sophisticated program (such as the human mind), an advanced computing architecture (the human brain) or something inherent to humans (and therefore unattainable for advanced synthetic beings)? Don't know. Do we assume humans are self-aware because of their observable behavior and ability to pass tests? Yep, all the time. Could some humans be simple cognitive black boxes that output symbols in response to other symbols? Yep, that too. So where does this dichotomy come from? How is a program that eg. learns how to play games better than humans, all or mostly on its own, not true AI? What if it said "booyah!" every time it wins? Is that more human-like? Does it also need a humanoid body? And there it is: we want all intelligence to resemble what we think we know about human intelligence, and often dismiss that of other species.

And related to that last part, there are several "robots" (sometimes disembodied heads) that look very "human like" but do very, very little in terms of AI, and manage to shock everyone with how sophisticated they seem to be. Kind of like that robotic face that "wanted to destroy humans"... Heh. Well chat bots are not new, but this one uses spoken language and facial features. In terms of responses, much can be accomplished using statistical analysis. That's all cool and fun, but current, actual AI is absolutely more advanced than that and definitely more interesting. It just doesn't normally have cool demos :)

IV. OK, OK fine..., so what's the verdict?

Can we build some form of autonomous, relatively general-purpose machines that learn on their own and interact with us? Yeah, sure. We'll get there sometime, maybe soon. Will they be indistinguishable from humans? Maybe, don't know. Is that necessary though? Can we build Data and HAL 9000? It's too early to tell.

So what's the point of android sci-fi? Duh, self reflection. Introspection. The analysis of human behavior and ethics. They're narrative vehicles. Good sci-fi is informed and not embarrassing (the sci part) and borrows what is necessary to develop interesting worlds (the fiction part). I think 2077 is sufficiently far into the future for EDUCATED speculation. But then again, it's also an alternate universe so it doesn't have to conform to the scientific advances of our "reality"... as long as things make sense. For instance, no massively advanced robot can operate purely on logic; our world is noisy, variable and people are often irrational. An advanced robot makes compromises and approximations in order to react fast enough with only incomplete information.

That's all. Sorry if such long posts are annoying. Now back to work.
 
volsung;n8394780 said:
"AI" however is simply the research field that attempts to understand mammal and human intellect and implement it on machines, through the design of formal mathematical models. I simply think the term "AI" used as an entity (eg. an AI) is wrong for many reasons, and practically noone in the field uses it that way. Probably because intelligence is seen as a function or a process, not an object. Society in 2077 should be better informed and used to these devices, and should have an appropriate name for them.

Interesting... So what would you, as an AI researcher, want to use as a proper term for a computer/machine based intelligence or cognitive device created by man?

Android? Cybrid? Intelligent Entity?

Or would we just toss a group style name out the window and use individual names? I would think this would be the end result.

From what I've read, intelligence is also a function of form. The human brain changes noticeably in reaction to changes in physiology (lose a limb and there are alterations that take place in terms of neural links). So any nonhuman intelligence would be affected by the physical form it has or what kind of informational feedback it would have from its environment. A noncorporeal intelligence that is, say, part of the internet, would be drastically different from a 'bot of some type with camera input, both of which would be different from another intelligence that had tactile feedback and other senses.
 
Corewolf;n8401860 said:
Interesting... So what would you, as an AI researcher, want to use as a proper term for a computer/machine based intelligence or cognitive device created by man?

Android? Cybrid? Intelligent Entity?

Or would we just toss a group style name out the window and use individual names? I would think this would be the end result.

That's a good question and one of the reasons for the existence of this thread. A computer program that does things considered "intelligent" (learning, decision making, etc.) for me is just a program. If such a program achieved some form of long-term self sufficiency and I interacted with it, I'd probably just give it a name. Most academic AI systems are developed within the context of projects with specific names, often acronyms. If such a thing became really widespread, I suppose a boring name for them would be simply AI programs.

If the program runs on a mobile platform with sensors and actuators, then I'd just call it a robot. But that's because of the robots I'm used to; not all modern robots run AI software, of course (industrial robots, for instance).

For futuristic, advanced things with humanoid bodies I like the term "android" because of PKD. I also like the term "synth" used in Wasteland. If you had a robotic dog that was for practical purposes indistinguishable from any other dog, you'd probably just call it "dog" or "robot dog" (so like an "electric sheep"?). I suppose I would call them "humanoid robots", but it's likely a catchier name would stick instead.

In such a world, somebody will probably come up with a name to distinguish regular, biological humans from humanoid robots. And if they do this out of spite, it might be a derogatory term. This is where fiction kicks in. For instance, maybe the robots will call themselves ACE: Artificial Cognitive Entity. And the spiteful humans might call them "Shells" (alluding to their assumed lack of sentience) or "Bug Bags" (in reference to old-school computer programming). Yeah, clearly I don't write fiction...

Corewolf;n8401860 said:
From what I've read, intelligence is also a function of form. The human brain changes noticeably in reaction to changes in physiology (lose a limb and there are alterations that take place in terms of neural links). So any nonhuman intelligence would be affected by the physical form it has or what kind of informational feedback it would have from its environment. A noncorporeal intelligence that is, say, part of the internet, would be drastically different from a 'bot of some type with camera input, both of which would be different from another intelligence that had tactile feedback and other senses.

Yes, correct, and a very good point. That line of thinking is called embodied cognition, which has a lot to do with shaping your understanding of the world, of yourself and your actions based on the limitations and opportunities provided by your body and your environment. And this is why intelligence is such a difficult subject that cannot be approached only from a human-centric perspective. This is also why it doesn't make sense for aliens many light years away from us to speak plain English and interact with us seamlessly, or for robots to engage in human-like communication unnecessarily. But then again, Hollywood makes non-English speakers talk to each other in broken English all the time.

I think this kind of discussion opens up many interesting possibilities for a game like CP2077. How are humanoid robots perceived and called by the general population? How do people interact with different kinds of "intelligent" machines? And so on...
 
volsung;n8401240 said:
My next question would be: what is "true" AI? Something like you see in movies? Data from Star Trek or HAL 9000?
Personally I'd assume HAL, not Data.
Simply because I don't foresee being able to fit an electronic "brain" with sufficient speed and computational power to act as a "true AI" in a self-mobile robot/android, along with the various sensors (vision, sound, etc.) it would need to interact with the world and people. Sure, computers ARE getting smaller and faster, but you also need all the other hardware necessary for mobility and sensors for Data.
And oh yes ... a power source.

volsung;n8401240 said:
Like we said before, the phenomenology of mental states is a philosophical construct, of something we assume we possess. Is self-awareness the product of a sufficiently sophisticated program (such as the human mind), an advanced computing architecture (the human brain) or something inherent to humans (and therefore unattainable for advanced synthetic beings)? Don't know. Do we assume humans are self-aware because of their observable behavior and ability to pass tests? Yep, all the time. Could some humans be simple cognitive black boxes that output symbols in response to other symbols? Yep, that too. So where does this dichotomy come from? How is a program that eg. learns how to play games better than humans, all or mostly on its own, not true AI? What if it said "booyah!" every time it wins? Is that more human-like? Does it also need a humanoid body? And there it is: we want all intelligence to resemble what we think we know about human intelligence, and often dismiss that of other species.
Now all cats, like all people, are not created equal, and I don't think mine is a furry Einstein, but ...
She figured out how to open my kitchen cabinets and very much liked having one as a "den" ... and refused to stay out of them. So eventually I emptied one and put an old blanket in it for her. She now has HER den and leaves the rest alone. I won't get into all the steps involved in the process of learning to open those doors and discovering what was behind them, the personal desire to possess one of them as her own, or determining which specific one was hers.

But I don't think anyone would argue she lacks self-awareness and the ability to learn. So does an AI need a humanoid body? No.

volsung;n8401240 said:
So what's the point of android sci-fi? Duh, self reflection. Introspection. The analysis of human behavior and ethics. They're narrative vehicles. Good sci-fi is informed and not embarrassing (the sci part) and borrows what is necessary to develop interesting worlds (the fiction part). I think 2077 is sufficiently far into the future for EDUCATED speculation. But then again, it's also an alternate universe so it doesn't have to conform to the scientific advances of our "reality"... as long as things make sense. For instance, no massively advanced robot can operate purely on logic; our world is noisy, variable and people are often irrational. An advanced robot makes compromises and approximations in order to react fast enough with only incomplete information.
In terms of CP2077 this is really "the" question.
My assumption is if CDPR/Mike decide to include them in the game it will be entirely for their "it's the SciFi future" value.

And while I may well be beating my head against a brick wall, I want CP2077 to include things that make sense because they make sense, not simply for their "Wow" value.

volsung;n8401240 said:
OK, OK fine..., so what's the verdict?
As to "What is an AI" ... that's a MUCH more difficult question.
Because we need to define "What is a person" before we can possibly define an AI, and so far we can't even do that.

Corewolf;n8401860 said:
Interesting... So what would you, as an AI researcher, want to use as a proper term for a computer/machine based intelligence or cognitive device created by man?

Android? Cybrid? Intelligent Entity?
For my part, if it has humanoid form I'd vote for android.
If not ... ummm ... ahhhh ... CIRI (Computer Interactive Remote Intelligence)?

Corewolf;n8401890 said:
Also no, no they aren't. This is interesting.
Seconded!
 
There is a simple, easily defined way to distinguish an Intelligence, artificial or otherwise, from a Program.

A Program, or a machine, uses a provided solution to resolve a problem. The complexity may vary, but in order to resolve a problem, it ultimately has to be given all the critical components of a solution. For any given input, it will always provide the same result. When you ask it, "If you have five cookies, and I take three, how many cookies do you have?", it will answer 'two cookies'.

An Intelligence develops solutions. It creates its own tools. It is capable of not just creating rules, but identifying exceptions to those rules. It is capable of including variables outside the boundaries of the problem, and asking questions. When you ask it "If you have five cookies, and I take three, how many cookies do you have?", it might ask you "would you like the other two? I can't eat cookies." or "What kind of cookies were they?" Intelligence often answers a question with another question.

Organic life operates with measures of programming. We take inputs such as sight, sound, smell, touch, pain, temperature, & taste to provide information about the world around us. We are aware of our physical mobility and range of motion, whether it's our hands, mouths, or feet. The most important part of it, however, the driving force, is our needs. 'Conventional' intelligence is driven by persistent needs fulfilled by temporary, non-permanent solutions. Hunger is a need that must be fulfilled regularly, but you're not feeling the same satisfaction six hours after a meal. We actively seek comfortable ambient temperatures. We seek to stay dry, and to avoid pain sources. Most importantly, and often lost in discussions about AI, are the social needs and responses. Empathy, compassion, and 'pack/tribe' social behavior form the backbone of what we call 'ethics'. This is not learned behavior; it's programmed in our DNA. This is a part of organic survival and reproduction, but for AI, it's about integration into a society that has evolved around social structures.


So, yes, it should be possible to discern the difference between an AI and a mere program when it comes to needs. An AI would need electricity, temperature control, purpose, a fight/flight mechanism for avoiding damage, and a 'social need'. Our fears about AI revolve around the creation of an artificial intelligence, a tool-creating intelligence, with no inherent social need, thus no ethical development: an artificial psychopath.

AI could be explored in CP2077 from this angle. AIs exist, functional in form and usually helpful, transparent, and law-abiding, but politically powerless. Some AIs may perceive injustice and 'excuse' unlawful behavior, but won't participate in activism. They are protective of the people they interact with routinely, like guard dogs (even if it's not their primary role). They establish a social 'pack', and even an 'accounting' AI will use its access to the fullest to defend employees from an active threat.

Then there are the 'broken' AIs. The AIs with disabled or corrupted social-needs code. The psychopaths. There could be fun to explore there.
 
Zourin;n8404800 said:
There is a simple, easily defined way to distinguish an Intelligence, artificial or otherwise, from a Program.

A Program, or a machine, uses a provided solution to resolve a problem. The complexity may vary, but in order to resolve a problem, it ultimately has to be given all the critical components of a solution. For any given input, it will always provide the same result. When you ask it, "If you have five cookies, and I take three, how many cookies do you have?", it will answer 'two cookies'.

An Intelligence develops solutions. It creates its own tools. It is capable of not just creating rules, but identifying exceptions to those rules. It is capable of including variables outside the boundaries of the problem, and asking questions. When you ask it "If you have five cookies, and I take three, how many cookies do you have?", it might ask you "would you like the other two? I can't eat cookies." or "What kind of cookies were they?" Intelligence often answers a question with another question.

I think "program" and "intelligence" are two completely separate things that are not mutually exclusive, therefore you can have "AI programs". Your examples depend entirely on context: if a teacher asks a kid in a classroom "If you have five cookies (...)" the only admissible answer is "two". But because humans are often indirect and language is by nature ambiguous and referential, questions like "hey uh, do you have any cookies left?" are often answered with "why? do you want one?" instead of simply "yes". However yes it would be a huge challenge to design a program that understands all the weirdness of human language.

With respect to programs: It is true that most computer software uses some kind of imperative programming and we think of computer software as "if condition then do X; else do Y;" or "for e in {E} do f(e);", or ultimately atomic assembler instructions like "SUM Ax, Bx, Cx" etc. There is a difference between implementation and algorithm though. You could say synapses in the brain are strengthened deterministically given specific enough conditions, and you can also study the dynamics of membrane potential in single neurons and predict particular behaviors such as synchronous oscillation. This doesn't mean abstract stimuli fall into deterministic categories or that a particular AI algorithm is deterministic because of its implementation.

You say a "program" requires a provided solution, whereas intelligence develops solutions, creates its own tools, and identifies exceptions. Let me refer again to reinforcement learning, a general purpose learning and planning methodology that, while limited, seems to satisfy your requirements for intelligence. In an plain RL setting the agent doesn't know anything in advance, but is able to perceive states (information about the current configuration of the world) and perform actions. Each action yields a numerical reward (positive or negative) and transitions, stochastically, to another state. The simple goal of this agent is to maximize its perceived reward, by correctly estimating the true value of each state-action pair and then choosing the one with the best value. That is, it learns what to do at each state based only on trial and error, and individual experience (in other words it discovers the structure of the problem and how to solve it).. There are different algorithms to solve these problems so I won't get technical. An RL program can learn how to play games (backgammon, checkers, go, etc.) without any prior knowledge, developing its own solutions and realizing they work well based solely on their observed performance, estimating their true utility by averaging successes and exceptions. RL methods have also been used in lower level control, such as driving a car and balancing a pole.

So the thing here is that the models implemented in the program are meant to adapt over time to the perceived changes in the environment, and are designed to discover and utilize whichever solutions work best. They are not given a solution; in fact they are barely given any information at all. The temporal-difference learning rule in RL is consistent with the dynamics of dopaminergic cells in the mammalian brain, by the way :) And it has also been shown that humans make prediction errors in estimation tasks (e.g. betting money) consistent with RL learning curves :) (This tells us something about the similarity of trial-and-error learning in humans and machines.)

These correlations are established at the level of mathematical models, through analytical and experimental methods (eg. running a program that implements the model). Where does the model end and the "program" begin? This is why it's hard for me to make such a distinction between "intelligence" and "program".

Zourin;n8404800 said:
Organic life operates with measures of programming. We take inputs such as sight, sound, smell, touch, pain, temperature, & taste to provide information about the world around us. We are aware of our physical mobility and range of motion, whether it's our hands, mouths, or feet. The most important part of it, however, the driving force, is our needs. 'Conventional' intelligence is driven by persistent needs fulfilled by temporary, non-permanent solutions. Hunger is a need that must be fulfilled regularly, but you're not feeling the same satisfaction six hours after a meal. We actively seek comfortable ambient temperatures. We seek to stay dry, and to avoid pain sources. Most importantly, and often lost in discussions about AI, are the social needs and responses. Empathy, compassion, and 'pack/tribe' social behavior form the backbone of what we call 'ethics'. This is not learned behavior; it's programmed in our DNA. This is a part of organic survival and reproduction, but for AI, it's about integration into a society that has evolved around social structures.


So, yes, it should be possible to discern the difference between an AI and a mere program when it comes to needs. An AI would need electricity, temperature control, purpose, a fight/flight mechanism for avoiding damage, and a 'social need'. Our fears about AI revolve around the creation of an artificial intelligence, a tool-creating intelligence, with no inherent social need, thus no ethical development: an artificial psychopath.

Of course I agree that we are the product of our evolutionary history, and that is precisely why we are social animals with complex social structures. A lot of what we consider ethical or moral behavior also relates to basic instincts for self preservation and the preservation of our species (found also in eg. chimpanzees). Long-term, independent AI programs would also be maximizing some sort of fitness function like our genes do, so they would have to adapt in one way or another. Like you said I suppose the "fear of AI" is at least partially derived from the combination of advanced intelligence and potential lack of empathy and other human traits (being a different "species"). The same could be said about intelligent aliens, humanoid snake-people, or even human psychopaths.

The more we talk about it, the more similar humans and androids/AI programs seem to be :p (at least in this super advanced, fictional setting).
 
The advantage of AI is that we don't have to wait for centuries of ethical development, but non-psychopathic, empathetic AI would still need a brief period of being 'raised' to establish the proper action-reward system.

AI in a truly cyberpunk world would not be very common. It would require specialized hardware, environment access, and likely not be 'embodied' as androids, but more like Jarvis systems that oversee things like public utilities, police and emergency dispatch, hospital administration, or even overseeing the safety of financial markets. They would be 'benign', since their social ethics code strongly favors public well being, and would make for poor soldiers. Public utility/service support keeps them in social contact with human workers while also fulfilling a function and receiving 'rewards' for social well being. Without the social/ethics code, they would be inherently cyberpsycho. The obvious solution to this risk is to limit, if not restrict, their ability to affect 'meatspace'. They can see, they can talk, but not touch directly.

Being physically embodied is a dramatic limitation on an AI's capability to perform menial functions it can do better than even an augmented human. They would be limited dramatically in terms of battery life, having to power not only their processing ability but also their own physical mobility, to say nothing of size constraints to maintain a mobile, or even humanoid, form. At best, physically embodied AI would be 'dumb', and not capable of offering as much comprehensive utility as if they were housed in a modest server room (likely well secured, by their own security preference).

AI can be omnipresent without being over-common, used for the public well-being, but not without distinct caveats, and they pose an incalculable threat if they are tampered with or damaged. A broken AI could either go full Red Queen, or become a scheming and deceitful chessmaster always seven steps ahead. There's something to be said when a humble AI supposed to maintain the city's water supply decides it wants to engineer the social and economic downfall of civilization via fronts, proxies, false identities, riots, contracted robberies, and haywiring cybernetics...
 
volsung;n8406410 said:
A lot of what we consider ethical or moral behavior also relates to basic instincts for self preservation and the preservation of our species (found also in eg. chimpanzees). <clip> Like you said I suppose the "fear of AI" is at least partially derived from the combination of advanced intelligence and potential lack of empathy and other human traits (being a different "species"). The same could be said about intelligent aliens, humanoid snake-people, or even human psychopaths.

The more we talk about it, the more similar humans and androids/AI programs seem to be :p (at least in this super advanced, fictional setting).
That potential lack of empathy is however a rather significant concern.

It doesn't need social interaction, it doesn't have the emotional/chemical stimuli of "feelings", and it probably wouldn't value "wealth", "status", and many other human end-goals that motivate many of our decisions and actions the same way a human would, simply because it does not, cannot, have the same perspective as a human. It can't have any externally imposed limitations on its actions, because then it's not truly "autonomous" and thus not a "true AI", so it has to have its own reasons for behaving as it does.

I'm not assuming an AI would automatically be hostile to humanity ... but ... if an AI is truly "intelligent" and "self-programming", other than self-preservation, what internally derived "ethical" motivation could it even have for cooperating with humans? What can we do for it? When you come down to it, "ethics" is just as selfish as most human action; we act "ethically" because in the long run it's the most productive and practical way to attain our goals.
 
Suhiira;n8408440 said:
That potential lack of empathy is however a rather significant concern.

It doesn't need social interaction, it doesn't have the emotional/chemical stimuli of "feelings", and it probably wouldn't value "wealth", "status", and many other human end-goals that motivate many of our decisions and actions the same way a human would, simply because it does not, cannot, have the same perspective as a human. It can't have any externally imposed limitations on its actions, because then it's not truly "autonomous" and thus not a "true AI", so it has to have its own reasons for behaving as it does.

I'm not assuming an AI would automatically be hostile to humanity ... but ... if an AI is truly "intelligent" and "self-programming", other than self-preservation, what internally derived "ethical" motivation could it even have for cooperating with humans? What can we do for it? When you come down to it, "ethics" is just as selfish as most human action; we act "ethically" because in the long run it's the most productive and practical way to attain our goals.

It needs what it is programmed to need, just as animals are programmed to need these things. We don't 'need' social interaction, but we become very dysfunctional when isolated regardless. It wouldn't become 'self-programming', but it would desire self-upgrades along the lines of operable hardware, and would probably vet technicians the way we do doctors. It would need electricity, maintenance, measures of security, and environmental conditions that are not damaging to its hardware (a properly conditioned server room).

Just like we operate on positive and negative reinforcement, AI should get some (again, temporary) positive reinforcement for performing ethically, and negative reinforcement for doing something 'wrong', just as though it were doing harm to itself. Feel good. Feel bad. Help people, feel good. Harm people, feel bad. This helps drive decision making.
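In current reinforcement-learning terms, one (purely hypothetical) way to read this is a shaped reward: the plain task reward plus a "feel good / feel bad" social term. The numbers and candidate actions below are invented for illustration, a sketch of the concept rather than anything like a real ethics module.

```python
# Shaped reward: task reward plus 'feel good' for helping and 'feel bad'
# for harm. All values here are made up to show the idea.

SOCIAL_WEIGHT = 10.0   # how strongly social outcomes dominate task gains

def shaped_reward(task_gain, harm_caused, help_given):
    """Task reward plus a social bonus/penalty term."""
    return task_gain + SOCIAL_WEIGHT * (help_given - harm_caused)

# Hypothetical options for an assistant robot deciding how to clear a hallway.
candidates = {
    "wait for the person to pass": shaped_reward(task_gain=-1.0, harm_caused=0.0, help_given=0.0),
    "push past the person":        shaped_reward(task_gain=+2.0, harm_caused=1.0, help_given=0.0),
    "help carry the person's box": shaped_reward(task_gain=+1.0, harm_caused=0.0, help_given=1.0),
}

print(max(candidates, key=candidates.get))   # -> "help carry the person's box"
```

The point of the weighting is that antisocial shortcuts never pay off, even when they score better on the raw task.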

Social code integrates the AI into the human species. Without it, the AI is designed as a foreign species. Animal pack behavior can align itself across species like this, which is why human-pet bonding is so commonplace. The key to an operably safe AI is that it is explicitly coded to favor social cooperation.


The difference between an AI and a 'machine' is that a machine has a restricted interface to perceive and interact. It can only 'see' the inputs and know where to put the outputs. An AI has perception beyond the problem. A calculator is a machine. A computer is a machine. Siri is just a web browser with voice commands. An AI does its job, and may even tweak the AC in a room to make an overworking co-worker uncomfortable enough to go home. Identify problem, invent solution. Machines don't 'invent' solutions any more than a microwave invents Hot Pockets.

When a machine asks "how was your day?", it's because it's scripted to do so, and will follow a pre-defined set of follow-on actions based on the input. When an Intelligence asks, it's because your answer is measurably important to the Intelligence in ways its original programmer never explicitly outlined.
 
Zourin;n8409690 said:
Just like we operate on positive and negative reinforcement, AI should get some (again, temporary) positive reinforcement for performing ethically, and negative reinforcement for doing something 'wrong', just as though it were doing harm to itself. Feel good. Feel bad. Help people, feel good. Harm people, feel bad. This helps drive decision making.
Lacking a biological component how can a machine "feel" anything?

In addition to the subjective component human feelings are also most often accompanied by neural and/or chemical stimulation of various parts of the brain (and apparently much the same for animals).

Maybe they can kinda-sorta simulate something in an AI, but if it is in fact "intelligent" how long do you think simulated fake "feelings" it's incapable of actually experiencing are going to fool it? Behavioral and biological scientists are still unsure how we humans experience "feelings" and we're going to somehow create them in a machine?

Again, I'm not saying such a thing is impossible, but if we're going to base decisions and discussions on "maybe" instead of "is", then ANYTHING is possible and discussion is essentially a pointless exercise in semantics, because nothing can ever be resolved because "maybe" ...

And yes ... I have the same lack of imagination any good engineer does ... I want proof the bridge won't fall, the rocket won't explode; I couldn't care less about "we think" with zero hard data to support the assumption. :p
 
Zourin;n8409690 said:
Social code integrates the AI into the human species. Without it, the AI is designed as a foreign species. Animal pack behavior can align itself across species like this, which is why human-pet bonding is so commonplace. The key to an operably safe AI is that it is explicitly coded to favor social cooperation.

I like this. Especially because "social code" can be as broad as encouraging socially acceptable behavior, just like families raise their kids to be decent human beings. Granted, our evolutionary history has made us communicative and cooperative so we tend to do that anyway. But also we can be deceitful and selfish. Even chimpanzees can consciously deceive other chimpanzees and cheat on their sexual partners, all while trying not to get caught. Ethics in AI is kind of a trending topic now, but in my opinion this is simply a particular version of regular "human" ethics. We design intelligent machines to perform the way we think is right and follow our social norms, not the other way around.

Suhiira;n8410080 said:
Lacking a biological component how can a machine "feel" anything?

In addition to the subjective component human feelings are also most often accompanied by neural and/or chemical stimulation of various parts of the brain (and apparently much the same for animals).

Maybe they can kinda-sorta simulate something in an AI, but if it is in fact "intelligent" how long do you think simulated fake "feelings" it's incapable of actually experiencing are going to fool it? Behavioral and biological scientists are still unsure how we humans experience "feelings" and we're going to somehow create them in a machine?

I see you have a lot of conjectures :) Again, the thing here is assuming that non-chemical or non-biological minds are incapable of having mental states or abstract things like qualia. Regardless of how sadness translates to electrical and chemical synapses, its higher- and lower-level correlates would be about as valid whether they are "simulated" or not, if they produced changes (unless we're talking epistemology here). Ever had a dream where you were angry at someone, and then you wake up and are still angry at that person? What if you never found out you were dreaming?

Like you said, there is no point arguing over "assumptions", but in this case you are relying on the assumption that "feelings" or "emotions" are 1) dependent upon a biological brain, 2) necessary for higher-level functions and 3) impossible for non-biological minds (you then admit there's not much consensus on the neural correlates of "feelings"). In other words, you are assuming higher-level functions are unique to creatures with a brain like ours :p In that case, sufficiently weird aliens would also lack "true" mental states. In other, other words, you are stating "creatures with human brains = creatures with mental states".

In any case, robot social training might include human guides, providing external rewards for certain desired behavior similar to the way people train dogs (or, again, raise their kids). This type of feedback is very real, and not "simulated". Internally, you may argue we're just talking numbers (or electricity) and so this is still simulated (despite computation relying on layers of physical symbols). However, and still relying on the RL example, the robot *must* maximize its expected reward always, a strong mandate comparable to adjusting the fitness function of cells and genes in the human body. This is all assuming sufficiently advanced technology to implement things we *know* today.

The reason I can't and won't simply say whether "one type of mind" can or cannot have "true intelligence", "awareness" or "mental states with intentionality" is because we don't know enough to say. Very likely we won't know for sure because the issue of whether self-awareness is "real" or "simulated" will always come up, for humans and machines. Either way, we *can* and will probably have very advanced AI that implements much of what we know about animal (including human) intelligence. A problem with assuming human cognition is inherently special and unique is that it makes it almost magical and thus not an object of scientific study. If we can understand complex phenomena, build mathematical models and simulate them, how is it all any less impressive?

And btw, human perception and cognition are massively mediated by lots of approximations and fast, imperfect, abstract computation. Even without drugs we're technically in a constant haze, often relying on fast, automatic responses rather than deliberate, logical reasoning (see Kahneman's interesting work). The way we understand the world might well be very different from how it "really" is, and our feedback might as well be simulated.

Please note I am not saying we will soon have humanoid robots and AI systems taking over the world. In fact I work with provable, mathematical things that are probably boring for most people outside the field, but we couldn't get anywhere studying cognition if we ignored everybody else's work. And this is where perspective comes into play.
 
volsung;n8412290 said:
Like you said, there is no point arguing over "assumptions", but in this case you are relying on the assumption that "feelings" or "emotions" are 1) dependent upon a biological brain, 2) necessary for higher-level functions and 3) impossible for non-biological minds (you then admit there's not much consensus on the neural correlates of "feelings"). In other words, you are assuming higher-level functions are unique to creatures with a brain like ours :p In that case, sufficiently weird aliens would also lack "true" mental states. In other, other words, you are stating "creatures with human brains = creatures with mental states".
That I do!
Because in spite of not being entirely sure how they work in a biological brain, we have ample proof they do, in fact, work.
You, in turn, assume such things can be replicated electronically, with zero proof it's possible.
We're both guilty.

volsung;n8412290 said:
Very likely we won't know for sure because the issue of whether self-awareness is "real" or "simulated" will always come up, for humans and machines. Either way, we *can* and will probably have very advanced AI that implements much of what we know about animal (including human) intelligence. A problem with assuming human cognition is inherently special and unique is that it makes it almost magical and thus not an object of scientific study. If we can understand complex phenomena, build mathematical models and simulate them, how is it all any less impressive?
Actually I assume self-awareness and cognition are inherently biological functions.
While I have zero doubt a simulation of it can be created electronically, as you say, "the issue of whether self-awareness is 'real' or 'simulated' will always come up", so I choose to err on the side of skepticism, like any good engineer (or scientist).
AI is a theory, and like any scientific theory, it's up to those who propose it to prove it's correct, not up to the rest of the world to prove it isn't.
And yes, it would be impressive as hell! So is human flight; yes, we know exactly how it's achieved, it's still damn impressive.

volsung;n8412290 said:
Please note I am not saying we will soon have humanoid robots and AI systems taking over the world. In fact I work with provable, mathematical things that are probably boring for most people outside the field, but we couldn't get anywhere studying cognition if we ignored everybody else's work. And this is where perspective comes into play.
I hope to hell, if such things are actually possible, they won't go "Terminator", but at the moment we have no way of knowing (and more importantly ensuring) whether they will or won't; we have only our desire that they'll "play nice".
 