Artificial Intelligence Today
Using machine learning, we have managed to develop face recognition, game-playing intelligence, self-driving vehicles and language translation. With mathematically generated patterns similar to the brain's neurons, these systems can learn and perform actions as well as humans, or even better. This is strong evidence that the approach works and that the model we copied from the brain is valid. When we built the first plane and the first rocket, we knew that one day we would reach the moon. Today, we know that one day we will build intelligent machines; we just don't know how long it will take.
Some Philosophical Considerations
We seem somehow designed to create intelligent beings. All of today's discoveries appear to lead us there, and we cannot stop this progress. It doesn't look random. Yes, billions of years of evolution could be trial and error, but advancing in 100 years from swords to computers, smartphones, Mars flights and the internet doesn't look evolutionarily random. It looks more like a rush to get somewhere, perhaps to colonize other planets and save ourselves from extinction or self-destruction. Or perhaps to become something else, for the same purpose.
We will see whether intelligent machines transform us or replace us; it is not something we can control. It's too early for this dilemma anyway. The next couple of hundred years will be like a science fiction movie: intelligent robots will be among us, helping.
AGI Papers and Literature
The internet is full of overviews and visions about how to create AGI. They are mostly text, as everybody has an opinion about it, but very few have practical ideas about how to do it. There are blueprints of subsystems and how they are connected, while the brain has only a few big parts which look almost the same internally. Neurological observations and abstract designs may help, but they won't do the magic either.
I'd say it is at least naive to think about designing AGI today. The best we can hope for is to guess the right direction in pursuing it.
Nature didn't start with intelligent beings. They evolved from simple creatures that were able to adapt to the environment and perform simple survival tasks. Maybe it's a good idea to do the same in order to reach intelligence. Maybe we have to follow nature's path and design simpler systems able to perform basic tasks.
Studying worms, ants, flies and other organisms with fewer neurons may put us on the right track. These creatures only respond to external sensors and act accordingly; they do not have opinions. If we design similar systems, we will learn from our errors and figure out where intelligence comes from. We will create better neural models and improve them.
Imagine an ant on the ground. It can perform the following actions:
- It can carry its food home if it is hungry or at a certain time of the day
- It runs away if there is any danger
- It can build a home if it doesn’t have any
- It can create other ants when the time comes
This ant is not overwhelmed by inner thoughts. It is not able to paint or write a book. But it has some form of self-awareness or self-preservation, because it reacts to danger. All these basic actions are written in its DNA and carried from generation to generation.
Actually, all its actions are only responses to external factors, to surrounding stimuli. Basically, this is a form of cognition, of thinking. There is no intelligence, by the strict definition of the word.
Therefore, we may define COGNITION as the ability to make and prioritize decisions based on stimuli.
Some artificial cognitive systems we already have today are self-driving cars, autonomous robots, drones and industrial robots, all of them interpreting surrounding stimuli and taking direct actions.
When cognition reaches a mature level where the system is able to generate its own decisions, motivations or algorithms, or is able to create connections between stored memories and anticipate the consequences of actions, we may say that the system is intelligent. So intelligence is the highest form of cognition, with more network capabilities and more neurons. There can be different levels of intelligence, of course, as in dogs, dolphins or elephants.
So far, we have managed to emulate one of the most important parts of an animal's nervous system: perception and narrow cognition. Machine learning is basically the first algorithm we managed to copy from nature's design. Logically, after learning, the next step may be to create cognition, then intelligence:
MACHINE LEARNING -> MACHINE COGNITION -> MACHINE INTELLIGENCE
This term already exists as a generic synonym for artificial intelligence, but the definition should be narrowed a bit. We could also call it machine reasoning or machine thinking; it is not intelligence.
Let's say that Machine Cognition is a neural network with the ability to make decisions.
A decision (action) is taken based on input sensors plus a motivation / rewarding / prioritization trigger. Such a network is able to make primary judgments and decisions, just like primitive beings.
Let’s consider something similar to an ant, whose brain is a cognitive machine. Imagine a robot in a warehouse like the one below:
This robot detects if boxes are present in the storage area and moves them on the platform.
Suppose we want to implement the simplest form of cognition, similar to animals. It can be done programmatically:
if (box in storage)
    then move it to the platform
Or we can use a neuron to implement this decision. In reality, everything is in the same network:
- Input (sensors, stimuli) – can be any vision / voice detection system, possibly based on machine learning, converting signals into concepts
- Concepts area – receives signals from multiple sensors and is able to match (identify) the same concept across them
- Rewarding area – self-adjusted from sensor feedback, responsible for triggering the action in the neuron; could be a learning system trained to prioritize concepts
In a similar manner, we may have multiple neurons implementing decisions for different box colors and different actions, or prioritizing boxes by color as a program would.
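As a sketch only, the decision above can live in a single neuron: concept inputs are weighted, the rewarding area scales the drive, and the action fires when the total crosses a threshold. All names and numbers below are illustrative assumptions, not a prescribed design:

```python
# Minimal sketch of one decision neuron: it fires its action when the
# reward-weighted sum of detected concepts crosses a threshold.
# All names and values here are illustrative assumptions.

def decision_neuron(concepts, weights, reward_gain, threshold=1.0):
    """concepts: dict name -> 0/1 presence from the sensor/concept area.
    weights: dict name -> connection strength into this neuron.
    reward_gain: scaling signal from the rewarding area (motivation)."""
    drive = reward_gain * sum(weights[c] * concepts.get(c, 0) for c in weights)
    return drive >= threshold  # trigger the action or stay silent

# "Move box to platform" neuron: fires only when a box is in storage.
fires = decision_neuron({"box_in_storage": 1}, {"box_in_storage": 1.0},
                        reward_gain=1.2)
```

Multiple such neurons, one per box color or action, would share the same concept inputs but carry different weights and reward gains.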
Neurons for Decisions
Why would we ever want to use neurons to implement decisions?
Short answer: because otherwise these systems will never evolve to intelligence.
Keeping logic and decisions outside the network is what has been done until now. For decisions, we use automated systems based on software running on CPUs instead of artificial cognitive networks. While these work very well and will be with us for a long time, they are limited. Basically, these programs perform simple iterative tasks or move controls and numbers around monitor windows with millions of lines of code. This approach may be good for games and simple narrow tasks, but not for dealing with general concepts. It will not create enough internal connections.
Such systems will hardly evolve into intelligence. The complexity required to emulate imagination, intuition, etc. is just too high. Image recognition was developed with neural networks because it was impossible to write an iterative algorithm for it. The same should be done with cognition: decisions should use neurons, and cognition should be kept inside the network together with concepts and learning, as they share common neurons.
While it may still be possible to create cognition in external modules, or to speed it up with external systems, adjusting and linking concepts to decisions on the fly and constantly adding new possible decisions may be impossible with programmed algorithms. Any programmed form of intelligence will require far more resources and be less flexible than having everything inside the network.
Even so, cognition may be the simplest part to implement in the intelligence chain.
Intelligence is cognition at the highest level. Once we manage to set up concept recognition based on machine learning and reasoning based on machine cognition, the next step is to make the network flexible, just like the brain.
The newly improved intelligent network will be able not only to prioritize decisions but to generate new decisions, motivations or algorithms for actions, even for fuzzy input. It will be like machine learning of algorithms: the network will evaluate which cognition to apply, evaluate the results, re-adjust the input or the selected decisions, and learn from all of this. This evaluation of consequences and ability to re-adjust the selected cognition algorithm will lead to imagination, anticipation and creativity, so basically to intelligence.
We will figure out how to do this on the fly once we have cognitive machines. We need to create neural models and artificial networks, then ask neuroscience to confirm them. Neurology cannot provide the answer by itself; it is almost impossible to figure out exactly how the brain works only by looking at neurons. It should be done the other way around: parts of Einstein's relativity theory were tested decades later, when mankind was able to make space flights. In 1900, they were only thought experiments and mathematical formulas which proved real later.
We will not DESIGN the intelligence; it will be generated by connections.
Cognitive Neural Networks
These are neural networks with decision neurons.
Deep learning can help but cannot implement cognition alone; it will be limited to learning and narrow reasoning. It may provide the next best move, but it won't be able to link concepts and provide decisions based on motivations. As we work with these types of networks, we will better figure out how to link them. It may be only a matter of topology.
These types of networks will be able to make primary judgments and decisions like primitive creatures – ants, worms, etc. They may implement self-defense, survival or self-awareness decisions. They will also be able to auto-tune themselves, not only through feedback but also through something similar to neurotransmitters, something that affects the full network.
The challenges will be linking decisions to concepts and designing flexible rewarding subnetworks – all of this will help find the way further, to intelligence.
Think of the brain in terms of common neurons that trigger different patterns at the same time. When we learn enough to deal with cognition, the path to intelligence will open by itself, as we figure out how concepts and decisions are mixed and mapped onto neurons.
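To illustrate the neurotransmitter-like tuning mentioned above, here is a toy sketch in which one global gain value shifts every neuron's effective threshold at once, unlike per-connection feedback. The class, names and numbers are all assumptions for illustration:

```python
# Sketch of neurotransmitter-like auto-tuning: a single global gain
# value changes the firing threshold of every neuron in the network
# at once, affecting the full network rather than one connection.

class TunableNetwork:
    def __init__(self, thresholds, gain=1.0):
        self.thresholds = thresholds  # per-neuron base thresholds
        self.gain = gain              # global "neurotransmitter level"

    def set_gain(self, gain):
        self.gain = gain              # e.g. raised under perceived danger

    def fires(self, neuron, drive):
        # A higher gain lowers effective thresholds network-wide,
        # making every neuron easier to trigger.
        return drive >= self.thresholds[neuron] / self.gain

net = TunableNetwork({"flee": 2.0, "feed": 1.0})
net.fires("flee", 1.5)   # False at baseline gain
net.set_gain(2.0)        # global arousal rises
net.fires("flee", 1.5)   # True: the same drive now triggers fleeing
```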
Cognition in Real Brain - Amygdala
The human brain uses a dedicated part for decisions, motivation and rewarding, called the amygdala. It is placed in the middle for exactly this reason: to be wired properly to all the other parts. Neurotransmitters also play a huge role in amygdala decisions, by auto-tuning the whole rewarding system.
“Amygdala …perform a primary role in the processing of memory, decision-making and emotional responses (including fear, anxiety, and aggression), the amygdalae are considered part of the limbic system.” (https://en.wikipedia.org/wiki/Amygdala)
The types and mapping of neurons in amygdala may inspire new types of neural models:
“Each ITC neuron inhibits three randomly selected neurons in the same cluster (only one projection per neuron is shown in the figure).” – this is probably how prioritization is made, based on feedback from sensory neurons.
Less advanced creatures like bees, with up to one million neurons, use a slightly different rewarding system:
“Research has catalogued 40 different specific neurons. With advanced imaging it has been shown that one neuron can influence specific cognitive functions, mediating reward-based learning.”
We must keep in mind that nature designed all these creatures with these main goals: survival, reproduction, evolution. We need to ask ourselves why evolution even exists, when survival alone could be enough.
A Roadmap To Intelligence
1. MACHINE LEARNING (current)
- Deep learning and derivatives
- Machine Learning applications
- The need to improve Computational Neuroscience
- New neural models that require fewer samples (in progress)
- Ability to acquire concepts and accommodate them inside network
2. MACHINE COGNITION (in 5 – 20 years)
- Cognitive neural networks based on topology rather than mathematics
- Assimilation and linking of real concepts
- Motivation, prioritization, rewarding connected to concepts
- Duplication/simulation of simple creatures' nervous systems
- First form of a primitive language connected to network – first real NLP
- Self-driving cars, delivery robots, drones, industrial robots, war machines
- Other cognitive robots or systems
3. MACHINE INTELLIGENCE (in 20 - 50 years)
- Highly flexible cognitive networks able to generate their own decisions, algorithms and adjust motivation / reward on the go
- New network topology and neural models able to select the proper cognition, to evaluate consequences and re-adjust decisions
- Improved language which expresses concepts, decisions, motivation
- Advanced cognitive systems / robots which become really helpful
- Consciousness and self-awareness added as a form of cognition about the self and self-preservation
4. HUMAN INTELLIGENCE - AGI (in 50 - 70 years)
- Intelligent machines equal or outperform humans in many fields
- Better NLU and NLP, Turing tests passed
- Humans remain better at emotional intelligence, while machines outperform them at the rest
- Advanced reasoning systems used in critical decisions - HAL9000
5. SUPERINTELLIGENCE - ASI (in 70-150 years)
- Systems can take critical decisions for mankind
- Robots can lead activities
- End of the world
- Intelligent system can make advanced discoveries
- Kidding about the end of the world
The Need for New Neural Models
Current artificial neuron models were created around 1950, when the ability to test models was highly limited. Progress lies not only in pushing deep learning to the extreme but also in better neural models. Computational Neuroscience should be developed into a widespread discipline; it is where the next breakthroughs will come from.
In their current form, artificial neural networks try to lead an input to an output by adjusting internal patterns. Advanced math is used for this, and it is very resource-consuming, as the calculation goes back and forth to adjust weights in backpropagation.
The brain doesn't work like this. It goes one way only and performs a fast calculation at every step (is the sum higher than the action potential? then pass the signal on). Maybe new models based on topology will perform better, allow more and different types of connections, and use neuron proximity.
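The one-way, step-by-step computation described above can be sketched as a forward-only layer: each neuron sums its weighted inputs and spikes when the sum exceeds its action potential, with no backward pass. The weights and layout here are illustrative:

```python
# Sketch of the "one way only" computation: each neuron sums its inputs
# and fires if the sum exceeds its action potential, in a single forward
# sweep with no backpropagation. Weights are illustrative assumptions.

def forward_layer(inputs, weights, action_potential=1.0):
    """inputs: list of 0/1 spikes; weights: one weight row per neuron.
    Returns the next layer's spikes after one forward pass."""
    spikes = []
    for row in weights:
        total = sum(w * x for w, x in zip(row, inputs))
        spikes.append(1 if total >= action_potential else 0)
    return spikes

layer1 = forward_layer([1, 0, 1], [[0.6, 0.2, 0.6], [0.1, 0.9, 0.1]])
# layer1 == [1, 0]: only the first neuron's sum crossed the potential
```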
The next types of neural networks may be mixed, learning and cognition, with different topologies connected. Intelligence is a matter of connections and common neurons: one pattern triggers another, and another, and billions of flexible connections may lead to the expected results.
While machine learning will have wide applicability and huge success in the years to come, machine cognition will have a daunting and slower evolution, with the reward coming only at the end. For a long time, systems will perform better with software-based decisions than with cognitive networks. But once they understand language and can make their own judgments, this will be the proper way to build them.
At some point, we may want to decouple the decision part of the network and keep only the reasoning, then use these networks as huge assistants, advisors and researchers.
We see millions of intelligent neural networks every day, all working perfectly. Imagine the drivers on a fast, crowded highway. All their neural networks are at full speed, working in a similar manner, without errors. Their initial structure, the empty network, was very much alike, and stored inside something extremely small: the DNA. This structure evolved from a single cell (also across billions of years of evolution), and observing it at different stages can reveal a lot of hints about how it is made.
Thoughts About Concepts and Cognition
If we look at a neuron, we realize that this cell is actually doing something simple and impressive: it transforms information into concepts, it reduces data to basics.
An artificial neural network does pretty much the same. A large real-world input is reduced to a single output in a forced manner (by adjusting weights with backpropagation). Somehow, real neural topology does this much more easily. Instead of trying to reach a certain output, it just creates a random path through almost randomly spread neurons. This takes far fewer resources.
Maybe instead of creating a path to an output, we have to bring the output to the randomly generated path, somehow.
A learning network basically produces a simplified output for similar inputs. Concepts can be acquired through multiple senses (vision, hearing, touch, maybe memory), but they reflect the same thing, so a single internal pattern is generated for each. The senses initially link together (“Andrew, don’t touch the cat”), but after that a single sense can generate the full pattern alone (hearing the word “cat” or seeing one).
There could be a single network area storing concepts as randomly generated patterns. In order to link different senses, the neurons triggered by both senses at the same time should be linked together and sent to an output.
Of course, there can be tons of ideas about storing concepts; the point is that we need to accommodate them inside the cognitive network in order to deal with intelligence later.
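As one possible illustration of the linking idea, a toy Hebbian-style rule can strengthen the connection between neurons that fire in both senses at the same moment, so that later one sense alone can re-trigger the shared pattern. All names here are hypothetical:

```python
# Sketch of cross-sense linking: neurons co-active in two senses at the
# same moment get their mutual connection strengthened (a toy
# Hebbian-style rule). All identifiers are illustrative assumptions.

def link_senses(vision_spikes, hearing_spikes, links, rate=0.5):
    """Strengthen links between neurons co-active in both senses."""
    for v in vision_spikes:
        for h in hearing_spikes:
            links[(v, h)] = links.get((v, h), 0.0) + rate
    return links

links = {}
# Seeing the cat while hearing the word "cat": both patterns co-fire.
link_senses({"v_cat"}, {"h_cat"}, links)
link_senses({"v_cat"}, {"h_cat"}, links)
# links[("v_cat", "h_cat")] == 1.0 after two co-activations
```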
Let’s consider a simple cognitive system with multiple input concepts and decisions, a robot in a warehouse:
- in case of wooden box, put it on the platform
- in case of cat, scare it away
The cat has priority, as it can interfere with other processes.
Based on past feedback from the sensory neurons (the cat did more damage), the rewarding/prioritization/motivation system will schedule the cat as the priority when both input concepts are present.
With these implemented now, we can:
- add boxes of different colors and teach the system to prioritize them: the red box first then the blue box
- add a human supervisor and teach the robot to report the cat's presence, if any, to the human
The concepts now include: red box, blue box, cat, human
The actions: move box, scare cat, report to human
Decisions and prioritization are made and adjusted by the same module.
The challenge is to implement all of this dynamically in the same network, even if in separate modules with different topologies. We will figure out later how to link them better and add more of them.
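The warehouse example can be sketched as a tiny prioritization loop: detected concepts map to actions, and a reward score (standing in for the learned damage feedback) picks which action runs first. The tables and values below are illustrative assumptions:

```python
# Sketch of the warehouse example: concepts detected by sensors map to
# actions, and a reward score chooses which action runs first.
# All names and scores are illustrative assumptions.

ACTIONS = {"cat": "scare cat", "red box": "move box",
           "blue box": "move box", "human": "report to human"}

# Higher reward = higher priority; the cat earned a high score
# from the past-damage feedback described above.
REWARD = {"cat": 3.0, "red box": 2.0, "blue box": 1.0, "human": 0.5}

def next_action(present_concepts):
    """Pick the action for the highest-priority concept present."""
    ranked = sorted(present_concepts, key=REWARD.get, reverse=True)
    return ACTIONS[ranked[0]] if ranked else None

next_action(["blue box", "cat", "red box"])  # -> "scare cat"
next_action(["red box", "blue box"])         # -> "move box" (red first)
```

In the article's proposal these tables would not be hand-coded; the rewarding subnetwork would adjust the scores itself from sensor feedback.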
A Primitive Language for a Cognitive Network
With the example above, we can create a primitive language directly from the neurons inside the network. The rewarding module can send concept presence or current actions directly to a language translator. It can also receive instructions:
- Red box present
- Blue box present
- Red box action
- Cat present
- Cat action
- Blue box action
- Human present
- Report cat
- Make blue box priority
- Remove cat action
- Report cat presence
This is not NLP based on tokenization of phrases from text files, but a direct connection inside the cognitive network – a real language.
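As a rough sketch of such a translator, concept/state events emitted by the network can be rendered as phrases, and instruction phrases parsed back into network commands; the event format here is an assumption for illustration:

```python
# Sketch of the primitive language: the rewarding module emits
# (concept, state) events straight from the network, and a thin
# translator renders them as phrases or parses instructions back.
# The event format and phrases are illustrative assumptions.

def emit(concept, state):
    """Network -> language: ('cat', 'present') -> 'cat present'."""
    return f"{concept} {state}"

def parse(phrase):
    """Language -> network instruction: the last word is the command."""
    *concept, command = phrase.split()
    return (" ".join(concept), command)

emit("red box", "present")   # -> "red box present"
parse("blue box priority")   # -> ("blue box", "priority")
```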
A Bit of History
A couple of weeks after finishing this article, I found an amazing piece of history in the Turing Archives:
"If we are trying to produce an intelligent machine and we are following the human model as closely as we can, we should begin with a machine with very little capacity to carry out elaborate operations or react in a disciplined manner to orders (taking the form of interference). Then, by applying the appropriate interference, mimicking education, we should hope to modify the machine until it could be relied on to produce definite reactions to certain commands. This would be the beginning of the process. I will not attempt to follow it further now."
Alan Turing, Intelligent Machinery, 1948
The problem with the future is that it is never exactly as anyone sees it. At a given moment, we cannot even imagine the breakthroughs that may change the whole vision. For example, in 1900, people imagined robots and moon flights, but not television.
So even if we anticipate a lot of changes in Artificial Intelligence, it could all be wrong, and in the end we may create biological brains with CRISPR, replacing everything we are doing now with mathematics…
Conclusions
- The neural model we copied from the brain is valid and working
- One day, we will create intelligent machines
- AGI is not really something we can design now
- We may need to follow nature's path and copy the design of cognitive creatures
- We may need Machine Cognition as an intermediate stage on the way to intelligence
- We may also need to keep decisions inside the network to ensure enough connections
- The human brain has a similar design – the amygdala
- Intelligence may be generated by connections, not by design (just like image recognition)
- We need more neural models and more Computational Neuroscience worldwide
- Language is directly connected to the network
- A roadmap to intelligence
- The future is still uncertain
Revision History
- 30th June, 2019 - Initial version
- 1st July, 2019 - Images fixed
- 6th July, 2019 - Text revision, typos
- 15th July, 2019 - Alan Turing reference added