Part I.
Suhiira;n8414260 said:
That I do!
Because in spite of not being entirely sure how they work in a biological brain we have ample proof they do, in fact, work.
You, in turn, assume such things can be replicated electronically, with zero proof it's possible.
We're both guilty.
So there is proof that creatures with biological brains have cognitive functions, some of them (e.g. great apes) considerably advanced.
I never said "such things" can be replicated electronically. What I did say is that we have accurate mathematical models for aspects of what we consider intelligent behavior. Often, as soon as we explain what one aspect of "intelligence" or another actually is, it stops being considered intelligent; that is, once a machine does it, it's no longer intelligence, and the list keeps getting smaller. This is reason enough to ask questions such as: are minds a result of our biology, or are they simply enabled by it? Science is about asking the right questions.
Suhiira;n8414260 said:
Actually I assume self-awareness and cognition are inherently biological functions.
While I have zero doubt a simulation of it can be created electronically as you say, "the issue of whether self-awareness is 'real' or 'simulated' will always come up", so I choose to err in the side of skepticism, like any good engineer (or scientist).
The assumption that minds are exclusively a product of biological brains is an oversimplification. It's like a man driving around the UK who, having seen only black sheep, concludes that all British sheep must necessarily be black: biased sampling, even though genetics suggests there should be white sheep too. The truly skeptical (and data-driven) position is: so far, all the sheep I've seen are black.
What you didn't address, however, is what happens when human cognition is subjected to the same criteria you use to dismiss artificial cognition. Maybe your perception, emotions and mental states are also "simulated"; the point is that they are real enough for you.
The reason I often reference work in the philosophy of mind (which I recommend to anyone interested in AI) is that these "things" are not as easy to dismiss as one would like. The position that certain aspects of cognition are inherent to humans, or to particular types of brains, closes a lot of doors and is borderline anti-scientific. I am not talking about anything supernatural here, simply about the fact that this position asserts an answer without any analysis (much like a religious position).
Suhiira;n8414260 said:
AI is a theory, and like any scientific theory, it's up to those that propose it to prove it's correct not the rest of the world to prove it isn't.
Which is why I offered some insight into the current state of AI.
There's a lot of confusion about what a scientific theory is. It is not the beginning of research; it's the result of years of work and research. A theory is the comprehensive body of knowledge, evidence, inferences and data that accurately describes one or more processes. An informed guess is usually called a conjecture. A well-informed question or statement backed by some amount of evidence, but in need of further proof, is called a hypothesis. Colloquially, however, people often use the term "theory" when they mean a conjecture, or something even weaker.
What we have discussed about AI is not something that has to be "believed". I only stated facts that are well known in the scientific community, such as the existence of correct mathematical models strongly correlated with aspects of intelligent human behavior, and referenced current discussions about perception and mental states that help us realize it's not easy to say what is a "real mind" and what isn't, especially if we judge based only on our personal intuitions. This should lead to a proper discussion about the possibilities of AI.
And all of this was in response to mainstream misconceptions such as "AI programs must be able to reprogram themselves", that advanced AI can consist of pure logic, and that "artificial is inherently different from biological because reasons". I simply hope this will contribute to a better understanding of these topics, which are very hard even for academics.
And since we're now going in circles, we can either stop here or move on.
Part II.
Back on topic with CP2077: I would vote for humanoid robots endowed with sufficiently advanced AI to be called "Androids", but I can envision several other forms of AI-related entities:
- Autonomous vehicles.
- Personal assistants (either a small robot or some handheld/wrist device).
- An entire building/house run by an AI administrator, controlling all sorts of moving parts and sensors and interacting with guests. (Simply an "admin"?)
- And, as the more general and advanced version of the previous item, any sufficiently advanced AI program that receives external input (from humans, the environment, robots, etc.) and can affect its surroundings through any peripherals, with or without mobility. This would be the cryptic kind of entity: obscure, seemingly intelligent, and the sort of thing that makes people uneasy.
None of the above, except perhaps the last one, needs "general purpose intelligence" or "self-awareness" to be interesting. The point is to borrow interesting SCIENCE fiction concepts, hide away some of the detail and avoid technobabble.