This article may come about 30 years too early, but I hope it will be as fun for you to read as it was for me to write. It assumes, at least, the existence of machine cognition: AI agents or systems able to reason, to understand world models and concepts, and to respond properly to changes in their environment.
The Singularity is not required. These agents do not have to pass the Turing test, nor do they need exceptional language skills.
1. Chatbots and Assistants
At present, in 2019, we have a number of chatbots and assistants widely available to the public. None of them look smart enough; none of them create the impression that you are talking to an intelligent or cognitive being. Progress over the last 30 years has been small: more features have been added and voice recognition has improved, but there is no comparable improvement in the ability to reason. There are still a few decades left until we may get what we hope for, truly interactive entities. Real cognitive networks are required for this.
I have already explained this in my previous articles. Right now, chatbots are machine learning systems trained on large text databases, sometimes with weak attempts to link the concepts inside. There is hardly any cognition in this. But at some point in the future this will change, and these networks will gain memory, world understanding, planning, anticipation, empathy and so on.
The closest picture we have of an ideal assistant comes from science fiction: an intelligent, named entity that can talk to us at home or on our cell phone and help us with our daily routine. Or maybe it will communicate with us through an implanted brain device. Sometimes this entity is a household robot, able to perform various tasks. Either way, we will perceive this entity as intelligent and conscious.
Maybe a chip the size of a fingernail will do this job, integrated into various devices and appliances. Such a chip could make your entrance door talk to you, and maybe make you feel sorry to leave it. Or there will be a single agent in the cloud, controlling all your devices, accessible everywhere.
Once we have intelligent machines, real language processing will follow soon. In nature, language appeared about 100,000 years ago, after the human brain had become smart enough. Until then, we can design and study the principles of conscious systems.
2. Consciousness
I expect plenty of disagreement here, but in order to design and create consciousness, we need to keep the definition simple and concrete. I will conflate it with awareness and sentience, and define it as a property of a cognitive system:
Consciousness is the ability of a system to reason about itself and about its environment.
There are different grades of reasoning, from awareness of the immediate surroundings and self-preservation (an ant sensing danger and running away) to cognition about everything and the ability to express it (a human).
A possible super-intelligence would have an even higher degree of consciousness and might perceive reality at quantum or universal scales.
Consciousness is not a barrier; there is no inflection point to it. It is a property with a range of possible values along the awareness scale.
A huge number of pages have been written about this. They won't help if they are too complicated, or if they involve divinity, abstract notions, inner fears, ego or ignorance. We will not create consciousness from esoteric ideas.
3. Real vs Apparent Consciousness
If we have a robot that knows its name and is able to reason about its nature and about the world, we may perceive it as conscious. If it seems alive and is able to talk, we will perceive it as even more so.
Well, the robot in the parody below has very few of these features, or none at all, yet it may still look conscious:
What’s going on here? The robot’s reactions, the way it moves, and so on make us feel empathy for it. We almost feel sorry for it, even though it says nothing at all. Its inner cognition deals only with equilibrium and physical motion. We think it feels pain, and we tend to see it as a living being. And all it did was react to its environment by simple rules.
So the real consciousness of a system may not be the most important factor in perceiving it as alive. The secret ingredient that makes us perceive it as conscious is the EMPATHY the robot was able to raise in us. And just like real consciousness, empathy is directly linked to a property with multiple, measurable values.
Let's call this property perceived or apparent consciousness.
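The "simple rules" reaction described above can be sketched as a toy stimulus-response agent. Everything here (stimulus names, the rule table) is hypothetical and invented for illustration; the point is only that a fixed lookup with no cognition at all can still produce behavior that reads as alive:

```python
# A purely reactive agent: a fixed stimulus -> response table, no memory,
# no reasoning, no model of self or world. All names are hypothetical.
RULES = {
    "pushed": "step backwards to regain balance",
    "object_moved": "track the object and reach for it again",
    "hit_with_stick": "stumble, then stand up",
}

def react(stimulus: str) -> str:
    """Return the canned response for a stimulus; unknown stimuli are ignored."""
    return RULES.get(stimulus, "do nothing")

print(react("pushed"))      # -> step backwards to regain balance
print(react("compliment"))  # -> do nothing
```

An observer watching the agent "stumble, then stand up" may attribute pain and intent to it, even though nothing beyond the table lookup is happening inside.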
4. Empathy
Let’s consider a cognitive system able to reason about itself and about the world. By the earlier definition, it is conscious. But if it does nothing, if it doesn’t move or communicate, we will never perceive it as conscious.
If this system is able to interact with us, however, we may perceive some degree of consciousness in it. This parameter (the apparent consciousness) seems directly linked to the system's ability to raise empathy and to be perceived as alive.
5. Designing Real Consciousness
At this point, it is pretty clear that to create conscious systems we need to deal with apparent consciousness too. While real consciousness is an intrinsic property of the system's own cognition, apparent consciousness comes from its interactions with humans and the empathy it raises.
To create real consciousness, it may be enough to train the cognitive system on world models and make it aware of itself as an entity. If it is able to reason, link concepts, show self-preservation and respond to its environment, it may be considered conscious, even if it doesn't show it.
Real consciousness can contribute a lot to the apparent kind, but it should be paired with empathy to obtain the overall effect, as we could see in the clip above.
We can consider a fleeing ant as having the lowest degree of real consciousness a system can have, and a super-intelligence the highest.
6. Designing Apparent Consciousness
This is the one that matters in the end. In order to generate empathy and be perceived as conscious and alive, the system should have as many of these features as possible:
- Body / shape - A physical entity more easily appears conscious; a disembodied voice will generate less empathy. It can be a robot with a human or animal shape, a cartoon, a face on a monitor, etc. Or a painted vase with a consciousness chip inside…
- Voice - Whether coming from a body or from the environment, voice communication is a big contributor. Text on a monitor is less empathetic overall, even if it is intelligent.
- Cognition / intelligence - This is another big factor, directly linked to real consciousness. If the system is too dumb, under certain circumstances or interactions the human will perceive it as a machine. However, a dog doesn't talk and still seems conscious; it achieves this by being playful. A cat seems more aloof.
If the system shows reasoning, if it is able to answer difficult or tricky questions, the contribution is huge. This is work in progress, but it will take at least 25 years to get planning, intuition, anticipation, real NLP and so on.
- Communication skills - These will raise the empathy level and trick human perception, like in this Google Duplex sample.
- Moral values - They contribute to empathy and are easy to implement as rules: behavior constraints, limits on reasoning and actions.
- Feelings, emotions, attitudes - Whether shown or detected in others, they will raise empathy a lot. While emotion detection is still work in progress, simulating feelings should not be very hard, especially once language is mastered.
- Self-preservation, pain, response to stimuli - Also linked to empathy, as seen in the clip above.
- Ability to learn, and memory - These are already properties of the cognitive networks such systems are built on. These systems may become smarter and smarter and eventually pass the Turing test.
7. Conscious Systems
As said at the beginning, the Singularity is not required for this kind of agent. We may consider a robot not very smart and still like it or feel for it, much as we do with a pet. Those who don't believe this is possible should watch the clip above again…
Based on the number of features implemented and their capabilities, we can create accurate metrics for these systems, culminating in a consciousness score. Robots with "perceived consciousness score 34505 and real consciousness score 11949" may be sold somewhere in the future like smartphones today…
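Such a score could, in its simplest form, be a weighted sum over the features listed in section 6. The weights, feature names and 0-10 capability ratings below are all invented for illustration; a real metric would need careful calibration against how humans actually react to these systems:

```python
# Hypothetical "perceived consciousness" metric: a weighted sum over the
# empathy-generating features from section 6. All numbers are invented.
WEIGHTS = {
    "body":          2,
    "voice":         3,
    "cognition":     5,   # directly tied to real consciousness
    "communication": 4,
    "moral_values":  1,
    "emotions":      4,
    "self_protect":  2,
    "learning":      3,
}

def perceived_score(ratings):
    """Weighted sum of per-feature capability ratings (0-10 each)."""
    return sum(WEIGHTS[f] * ratings.get(f, 0) for f in WEIGHTS)

# A hypothetical household robot: embodied, talkative, moderately smart.
robot = {"body": 8, "voice": 9, "cognition": 4, "communication": 6,
         "moral_values": 5, "emotions": 3, "self_protect": 7, "learning": 5}
print(perceived_score(robot))  # -> 133
```

A linear model like this is obviously crude; the point is only that once the features are enumerated, "consciousness" becomes something you can score and compare, rather than a yes/no barrier.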
We will basically create new conscious beings. Or at least, they will be perceived as such. Then people can deal with legal issues, bias, "what's inside them?" and other funny topics, so they have something to do while robots take over their jobs and destroy the world.
Consciousness may be easier to implement than expected once we have cognitive networks and the ability to reason. And it is less about real consciousness than about apparent consciousness, directly linked to the empathy generated.
If we create conscious robots and know that something hurts them (for example, that aggression will make them sad and cause malfunctions), we will obviously regulate not only their behavior but also aggression against them and interaction with them, and this will happen in less than four to five decades. Because if we want them smart and conscious, they need to have feelings.
- 5th September, 2019 - First version