
Thinking, Consciousness and a Model of Intelligence

20 Dec 2017
Neural Patterns model for new thinking machines

Introduction

At the time of writing this article, December 2017, I couldn't find a similar approach in the public space. There are a lot of descriptions of neurons that go into biological detail, and a lot of software neural networks and mathematical formulas. There is a huge gap between those two, and I'm trying to fill it.

Background

A good understanding of how real neurons work is required before going further. It's important to understand how they fire when they receive signals from others and how connections are made between them. You can read about synapses, dendrites, axons, neurotransmitters and the action potential in the following articles: article1, article2, video, article4, article5, article6.

By the end of this read, I hope you will have a better picture of what intelligence and consciousness are, and maybe one day one of you will bring your own contribution to intelligent machines.

Image 1

Overview

Some general statistics:

  • Diameter of a neuron is 4 to 100 microns
  • There are about 80-90 billion neurons in the human brain
  • Each neuron connects to anywhere from a few up to about 7,000 others
  • They form between 100 and 1,000 trillion synapses
  • The action potential is about 30 millivolts and lasts about 1 millisecond
  • The signal travels at about 30-100 m/s, depending on the neuron type
  • The brain uses about 25 W of power
  • The average brain is believed to generate up to 50,000 thoughts per day

Some vision related statistics:

  • Number of retinal receptor cells: 5-6 million cones; 120-140 million rods
  • Number of retinal ganglion cells: 800 thousand to 1 million
  • Number of fibers in optic nerve: 1,200,000
  • Number of neurons in lateral geniculate body: 570,000
  • Number of cells in visual cortex (area 17): 538,000,000
  • Wavelength of visible light (human): 400-700 nm
  • Amount of light necessary to excite a rod: 1 photon
  • Amount of light necessary to excite a cone: 100 photons
  • The brain can process an image in about 13 milliseconds

One important thing to remember before going into details: even if a neuron receives signals from others, it will only "fire" (send a signal through its axon) when it receives enough input to reach the threshold of the action potential.
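As a rough illustration of this threshold behavior, here is a minimal integrate-and-fire style sketch in Python. The constants (threshold, leak factor, input values) are arbitrary illustrative numbers of my own, not biological measurements.

```python
# A minimal sketch of threshold-based firing (leaky integrate-and-fire style).
# All constants here are illustrative choices, not measured biological values.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold   # membrane potential needed to fire
        self.leak = leak             # fraction of potential kept each step
        self.potential = 0.0

    def receive(self, inputs):
        """Accumulate weighted input signals; fire only if the threshold is reached."""
        self.potential = self.potential * self.leak + sum(inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after the action potential
            return True              # spike sent down the axon
        return False                 # sub-threshold input: no output at all

n = Neuron()
for step, inputs in enumerate([[0.3], [0.3, 0.2], [0.4, 0.5]]):
    print(step, "fired" if n.receive(inputs) else "silent")
```

The first two rounds of input stay below the threshold and produce no output at all; only the third accumulates enough potential to fire.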

Image 2

Even if two neurons are physically bound together (one's dendrite close to another's axon), the signal can travel only when they also become chemically connected. This chemical connection can become more or less persistent, depending on how often the signal travels in that area.

There are three types of neurons from a functional point of view: sensory neurons, motor neurons and interneurons. The first bring signals to the brain from the nerve endings in our body. The second carry the brain's feedback to the muscles, glands and internal organs. The third are the ordinary brain neurons that help in making connections.

Some Biological Considerations

It is unlikely that nature stored the position and type of each neuron inside our DNA. The human brain is born "empty", without too many pre-defined connections, and then it configures itself through life experiences. I'm saying "without too many connections" because some of them (like instincts) are still stored inside DNA, and they are generated automatically when the brain takes shape.

So at birth, the brain is more like a sponge, eager to accumulate information, to shape connections between virgin neurons. This is why a baby is able to learn quite fast, while learning may take longer as an adult: you need to reshape an already settled network, and the physical and chemical characteristics of the brain are no longer the same, as the brain must dedicate itself to other purposes. Put differently, the human is more of a learning machine in the beginning, and then becomes more of a thinking machine.

The human body is filled with sensors that send electrical impulses up to the brain or the spine.
Suppose we touch something with our finger, maybe something hot. The touch sensors in the finger record the pressure, a chemical reaction takes place locally, and an electrical impulse is sent through a long neuron up to the brain. The brain makes a decision by firing other neurons and sends back an electrical impulse (through a motor neuron) to a muscle, which retracts the finger.

This is basically how things work. Not only fingers but all senses and muscles are wired this way. They connect the whole body to some neurons in the brain, which in turn connect to the whole brain network.

Image 3

The retina forms an image based on the intensity and shape of the incoming light, and also on its frequency (colors). Each point on the retina excites the termination of its own neuron, which sends signals into the visual cortex at the back of the head. Each eye gives a 2D view: a certain image in front of the eyes will excite (trigger) a certain group of neurons, much like the pixels of a monitor. As you can see in the statistics above, the resolution of the human eye is quite high, so the "pixels" of the image formed on the retina trigger a lot of corresponding sensory neurons.

Ears - The internal structures of the ear vibrate at certain air frequencies and transmit the intensity and shape of the vibration to internal sensory neurons. A specific sound, a word for example, will first trigger these neurons, and then others connected to them. If the sound is repeated, these newly triggered neurons will connect to each other, chemically and electrically, and form a memory - this is how the word is remembered.

Speech - After some internal processing in the brain (not for everybody, but in general), the existing brain network connects to the motor neurons leading to the neck muscles, larynx and lungs. Impulses sent according to pre-learned patterns excite certain muscles in a certain order, and a word comes out by shaping the air coming from the lungs.

Not all sensory neurons are connected to the part of the brain that we have conscious access to. The processing and feedback for most of them happen unconsciously.

Image 4

Persistent Connections

Let's consider a one-year-old baby who sees a cat for the first time. His eyes send the image of the cat to the brain, to a specific group of neurons that fires in the cortex.

As they trigger simultaneously, or within a very short interval of time, the chemical and electrical signals find their way from one neuron to another, and new permanent connections are born.

These new connections are formed either between them or between others directly connected to them. This is basically the memory of a cat: a pattern of neurons that once triggered together and linked to each other. In the future, triggering a large part of them may trigger all of them, as they are physically connected.

In reality, things are a bit more complicated, as the brain is capable of some fuzzy logic and recognizes the cat in different positions, so a larger group of neurons contributes to this in a similar manner. They are in the same brain area and form similar patterns.

There is also something tricky here. This memory is a cat only when the neurons trigger in an order and the signal travels from one to another. The persistent physical connection doesn't represent retrievable data; it's not a JPG. You cannot excite just one neuron in the pattern and expect to retrieve all of them. You cannot search for a cat this way; things go the other way around: you get the pattern from the same input (or maybe from a related memory), and when the full cat pattern is triggered, you do something with it, usually trigger further patterns. This is why image recognition is so fast: it's not a database search. It is also why we can recognize an object by looking at a fragment of it: we trigger enough of the neurons in the initial pattern. (Intuition may fit in this category too.)
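The idea that triggering a large enough fragment of a pattern recalls the whole pattern can be made concrete with a small Hopfield-style sketch in Python. This is only an illustrative analogy on my part, not a biological simulation: neurons that fired together receive a positive connection weight, and a partial cue then settles back into the stored pattern.

```python
import numpy as np

# Hopfield-style sketch: store a binary "cat" pattern by strengthening the
# connections between neurons that were active together, then recall the
# whole pattern from a fragment. Illustrative analogy, not a brain model.

rng = np.random.default_rng(0)
n = 64
cat_pattern = rng.choice([-1, 1], size=n)      # +1 = firing, -1 = silent

# Hebbian storage: neurons that fire together get a positive weight
weights = np.outer(cat_pattern, cat_pattern)
np.fill_diagonal(weights, 0)

# Present only a fragment: half of the neurons are set to random values
fragment = cat_pattern.copy()
fragment[n // 2:] = rng.choice([-1, 1], size=n // 2)

# Let the network settle: each neuron fires if its weighted input is positive
state = fragment.copy()
for _ in range(5):
    state = np.where(weights @ state >= 0, 1, -1)

print("whole pattern recovered from fragment:", np.array_equal(state, cat_pattern))
```

Half of the "cat" neurons are enough to pull the rest of the pattern back, which is the point of the paragraph above: recall is completion, not lookup.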

When the cat pattern is not triggered, part of the physical connections between involved neurons may be used to trigger other patterns. Even when the cat is triggered, other patterns derive from it because no neuron is only part of a single pattern.

If they don't fire for a long time, those neurons may break the chemical connection and the memory may be forgotten. For the moment, let's see a cat as a pattern of several million neurons connected and triggered together in a short window of time.

A CAT!

Image 5

Neural Patterns

Now suppose our toddler is trying to touch the cat. Let's see what’s going on in his brain.

The cat pattern is already taking shape while the neurons fire continuously in the visual cortex. At the same time, he may smell the cat, he may touch it, he may hear it meowing, he may hear his mother speaking. All these senses trigger more entry-point neurons, which link together with the in-progress cat pattern. Now we have a full memory of a cat situation: our cat is not only an image, but also its sound, smell and sensations.

A lot of sensory neurons create consecutive patterns, each with a specific order in its own area. Some new, bigger patterns are born, and the next time the baby hears a cat, the patterns already settled in the auditory area may also trigger the visual patterns of the cat, as they are physically linked. Or they may trigger the associated neurons storing the cat's name, because they once triggered together (Andrew, don't touch the cat!).

A cat situation:

Image 6

If the cat scratches the kid, the pain sensors send impulses directly to some "reaction to danger" neurons, which release adrenaline and excite some running muscles. These kinds of connections may be either predefined (in DNA, instincts) or learned. The patterns formed the hard way will associate the cat with danger.
Next time, the image of a cat may trigger the "reaction to danger" neurons and create a sensation of fear - patterns of neurons connected to glands (releasing hormones), to the skin, to muscles, etc.

What is a Thought?

Now we are at the point to define what a thought is.

A thought is basically a pattern made by successively triggered, linked neurons. An image may be recalled as a single static pattern, but a situation, an idea, a memory, a thought consists of a succession of triggered patterns, following the physical connections between them. Everything counts in this definition: the path, the order, maybe the speed of the signal; it's not a static 3D pattern but a temporal sequence.

This sequence is not isolated; it is always part of a succession of other patterns that trigger it (either coming from external sensory neurons or from some other internal patterns). Also, while the pattern triggers, some intermediate neurons generate adjacent patterns, as they are involved in other physical connections too. This defines logic: connected thoughts generated from one another.

So a thought is a time pattern: an electrical signal travelling, in a short period of time, through the same linked neurons. They were all triggered together once, by different other thoughts or inputs.
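In this view, a thought can be pictured as a chain of patterns, each one triggering the next through the neurons they share. Below is a toy sketch of that chaining; the patterns, the neuron ids and the overlap threshold are all invented purely for illustration.

```python
# Toy sketch of a "thought" as a temporal chain of patterns.
# A pattern fires when enough of its neurons are already active, and firing
# it activates all of its neurons, including those shared with other patterns.
# Every pattern, neuron id and threshold below is invented for illustration.

patterns = {
    "cat_image": {1, 2, 3, 4, 5},
    "cat_sound": {4, 5, 6, 7},      # shares neurons 4 and 5 with the image
    "danger":    {6, 7, 8, 9},      # shares neurons 6 and 7 with the sound
}

def run_thought(initial_active, patterns, overlap_needed=2, steps=4):
    active = set(initial_active)
    sequence = []
    for _ in range(steps):
        fired = [name for name, cells in patterns.items()
                 if len(cells & active) >= overlap_needed and name not in sequence]
        if not fired:
            break
        for name in fired:
            active |= patterns[name]    # firing a pattern activates its neurons
            sequence.append(name)
    return sequence

# Seeing part of the cat image ends up triggering the whole chain of patterns
print(run_thought({1, 2, 3}, patterns))   # ['cat_image', 'cat_sound', 'danger']
```

The "thought" here is not any single pattern but the temporal order in which they fire, which matches the definition above.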

If we look at how neurology explains deja-vu today, we may find it a bit complicated. But if we think of it as similar nearby patterns being triggered by some shared factors, things may get a bit simpler.

You may still wonder what this model is good for. I think looking at human intelligence like this will help us better understand the processes that happen in our brain. And maybe it will help with the design of the next generation of thinking machines, as we are still in the era of learning machines: machines with multiple inputs and outputs that can train on different areas, have memories and correlate them, all in the same network.

Entropy and Random Thinking

The brain never stops thinking. Time patterns form one after another, continuously, in different areas. How come they don't always generate the same patterns in the same order, all the time? First, the brain is dynamic: its configuration changes continuously by creating new synaptic connections and destroying old ones. Also, the sensory neurons generate enough entropy that the input for the starting patterns is never exactly the same. It may be similar, but not the same; a single degree difference in outside temperature can influence the whole brain. This is because a neuron fires only when the input from others reaches the threshold of the action potential, so some neurons may trigger or not depending on outside entropy, thus affecting all the others.
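A tiny Python sketch of this effect: the same nominal input, perturbed by a little sensory noise, sometimes crosses the firing threshold and sometimes does not, so the downstream patterns can differ from run to run. The threshold and noise level are arbitrary illustrative values.

```python
import random

# Sketch: an input just below the firing threshold, perturbed by small
# sensory noise, fires on some trials and stays silent on others.
# Threshold and noise level are arbitrary illustrative values.

THRESHOLD = 1.0

def fires(nominal_input, noise_level=0.05):
    noisy = nominal_input + random.uniform(-noise_level, noise_level)
    return noisy >= THRESHOLD

random.seed(1)
results = [fires(0.98) for _ in range(10)]
print(sum(results), "out of 10 trials fired")   # typically a mix, not 0 or 10
```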

This is how random thinking is achieved, something that today's machine learning doesn't handle very well, as it expects the same result for the same data. The brain may generate different results, depending on mood...

This randomness helps in selecting a good solution even if not all possibilities have been analyzed. Some people call this intuition. Machine learning algorithms also make use of this kind of entropy (e.g., "random forest"), where it is simulated in software.

What is Intelligence?

Intelligence = a succession of thoughts that generates productive thoughts, ones that serve some predefined purposes (patterns): survival, reproduction, evolution. How were these predefined patterns formed? By trial and error: mutations happened, and those who didn't have them disappeared. After they formed, they found their way into DNA somehow, and the descendants inherited them and then enhanced them in the same way.

With more generated connections and more activity, the chances that the brain matches some predefined patterns increase. We can say the brain becomes more intelligent.

Real neurons on microscope:

Image 7

Image Recognition

As I said, once a connection has been set between some visual neurons (or maybe in some nearby area) and formed a pattern, the brain is able to recall that image. You see someone you know, and the eyes trigger the same neurons again, like the first time, following the same hardware pattern that now exists. They are linked with some other memories you have of that person, as more patterns formed at that time or later in a similar context. So basically, a memory is triggering the same patterns again, which gives rise to the same feelings, reactions or thoughts. They link further to all the patterns associated with that person: names, places, etc.

But how do we imagine someone from memory? When we think about someone, we never think of exactly the person's face, without a context. Basically, we remember someone in a certain situation or location; we do not have a separate image of the face because it was never formed like that. Our thoughts start with other patterns related to that person, like the name, a location or some other related memory. Then, the brain tries to reconstitute the big pattern that formed originally and is linked to the original image pattern in the visual area. When the full memory re-triggers, the visual pattern activates and we "see" the person's face in memory again, without external stimulus, all starting from some related thought. Also, a copy of the pattern may form in a nearby area, and when we dream, this can trigger the image in our brain without external input.

Artificial Intelligence and Machine Learning

We are moving slowly in the right direction. There is still focus on the wrong areas, pessimism, lack of understanding, lack of information, ignorance, etc. We are in the early phases of this new field and it scares some people or makes others dream. Some treat it almost religiously. But progress cannot be stopped: if we can do something as a species, we will do it, no matter whether it is good or bad, whether it is used for evolution or for wars.

At the moment, we try to emulate 3D temporal entities on 2D digital (binary) systems. It's slow and inefficient, as the brain does much more with the energy of a light bulb. If we manage to make something really intelligent this way, it may have the size of a stadium - still possible, but we will probably do better. The current design serves well to create new machine learning algorithms and to understand what we can do with AI and what the limitations and dangers are. It is still not practical. Our software neurons need to evolve onto better dedicated hardware.

The machine learning algorithms we have today only consider inputs and outputs. They don't know and don't care about the inner patterns that form while training on data; they only use the output. And the inside is where the magic is: those patterns represent the system's understanding of the phenomenon. We don't use them enough to generate other patterns and to reuse the same network for different purposes (i.e., reinforcement learning).
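Those inner patterns are easy to see even in an ordinary feed-forward network: the hidden-layer activations learned during training can be inspected and reused instead of being thrown away with the black box. A minimal sketch using scikit-learn is below; the toy data and the network size are made up purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch: a trained network's hidden-layer activations are its "inner
# patterns". They can be extracted and reused (for another task, another
# model), not just the final 0/1 output. Data and sizes are toy choices.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # an arbitrary toy task

clf = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=2000, random_state=0).fit(X, y)

# Recompute the hidden layer by hand from the learned weights (ReLU layer):
hidden_patterns = np.maximum(0, X @ clf.coefs_[0] + clf.intercepts_[0])
print("inner pattern shape:", hidden_patterns.shape)   # (200, 16)

# These 16-dimensional patterns could now feed a different model or task,
# instead of keeping only the black-box prediction.
```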

Also, machine learning deals with numbers and functions, while the brain deals with flexible spatial connections. The brain is much faster in this regard, but future hardware may solve this.

The brain stores a situation as a temporal succession of triggered, linked neurons, while software neural networks store a fixed pattern and try to reuse it later. It's as if we build a black box on some data, then take it somewhere else, expect results, and that's all.

Many approaches to emulate the human brain and push things forward already exist.

  • Some try the "add more hardware" method, applied to the current design: faster processors and more cores on GPU farms in order to speed up learning. It's not necessary to copy exactly the same design to get the same results; after all, the wheel and the legs serve the same purpose. This approach works well with the common von Neumann hardware existing today, giving most of us access to machine learning. But it's probably not the future.

Image 8

In order to fit this new kind of computation, new hardware is needed. The first idea was to move the algorithm implementations onto chips, but a redesign of the whole system may perform much better.

  • We need to emulate synapses, to generate 3D connections on the fly, to create electronic patterns. We need some kind of new analog brains - and in this regard, neuromorphic computing may perform best.

This is an entropic environment where connection patterns can form based on previous patterns and generate others.
To accommodate existing concepts and purposes, there is no other way than to train them the way we teach children. Not even nature micromanages these things; it leaves them in the hands of learning. The major difference from nature would be that, once trained, the state can be saved, restored and improved.

  • It's too much of a coincidence that AI and quantum computing appeared at the same time. In the near future, we may see designs of AI systems based on quantum computing that are not even considered or imagined today. The speed achieved by these systems may allow pattern simulation in completely new ways. Humanity is in the early stages of this promising new era, and some have already seen ways to use quantum computing for artificial intelligence.

Once we manage to create the hardware and algorithms to generate patterns similar to the brain's, things will evolve pretty fast. These new thinking machines will have incredible performance, in terms of both speed and accuracy. There will be a straight path to AGI and we may have it in 20-30 years. Maybe today we are just working with obsolete neurons on the wrong hardware.

With this new base created, we only have to add the subsystems we already have: image recognition, language recognition, etc. I'll hazard a wild guess: it may take 5-10 years to create these new algorithms plus the hardware, then another 10-15 to create highly intelligent robots - able to wash the dishes.

Image 9

The Singularity will probably be considered reached when the language used to communicate with these machines reaches human standards, and when they can express enough of what they are doing. But until then, simple intelligent robots will invade our homes and make our lives easier.

One thing is sure: the genie is out of the bottle. A lot of people have seen that this is possible, and now they are working on it.

Machine Learning and Information Retrieval

Actually, this is how I came to write this article: I was wondering whether machine learning could be used to build a better search engine. My conclusion was not in the near future, at least not in any way that I can see. That's because today's ML is not made for storage; still, it can be a very good relevance engine if the documents are provided (i.e., using ML on the top documents returned by a normal search engine).
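A sketch of that idea is below: a cheap keyword lookup stands in for the normal indexed search and picks candidate documents, and a model re-scores only those candidates. The documents, the query and the scoring model (plain TF-IDF cosine similarity here) are placeholders of my own; in practice the re-ranker would be a trained relevance model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch of "ML as a relevance engine on top of a normal search engine":
# a keyword filter stands in for the index, a model re-ranks only the
# candidates it returns. Documents, query and scorer are placeholders.

documents = [
    "the neurons form persistent connections through synapses",
    "machine learning algorithms train the model on labelled data",
    "the visual cortex processes images in milliseconds",
    "recipes for a quick tomato soup",
]
query = "how does the brain process images"

# Step 1: pretend indexed lookup - keep documents sharing any query word
candidates = [d for d in documents if set(d.split()) & set(query.split())]

# Step 2: re-rank only the candidates (TF-IDF cosine as a stand-in model)
vectorizer = TfidfVectorizer().fit(candidates + [query])
scores = cosine_similarity(vectorizer.transform([query]),
                           vectorizer.transform(candidates))[0]
for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The expensive model never touches the whole collection, which is why this hybrid works today while a fully ML-based index does not.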

Checking all the documents in a collection is too slow and performs far below an indexed search. To benefit from the speed of a brain and build a fully machine-learning-based search engine, we would have to store all the documents in it, just like the brain stores images: as patterns. This would be pretty big, much bigger than current indexes, and I don't think we have anything close to this today - neither the algorithms nor the hardware.

As I said, the brain is not built for memory. A person can barely remember two pages of a book. The brain is made to generate thoughts while holding information in generated temporal patterns. That's why software "neurons" should store less data and make more connections. Once we fit the world's concepts into such a machine, as patterns, we can see how these systems use entropy and deal with them.

What is Consciousness

Now that we know how thoughts are made in our brain and how they evolve and generate others, we can draw this cynical conclusion:

Consciousness does not exist.

What we call consciousness is just a suite of thoughts about ourselves. They are just like those about a cat: patterns of triggered, linked neurons. They are a bit more than that, because we record a lot of patterns about ourselves each day and we have a lot of input from our body (sensations) which we associate with memories. Plus, we have some predefined patterns about survival and self-protection. All these thoughts related to ourselves are not physically different from those about a cat; they are just... more. We simply own more patterns about ourselves than about any other thing out there.

So if we create a robot that knows its name, has a purpose, knows what it does and tries to protect itself, we can say it is conscious. There is no need for more; there is nothing divine in it. Sorry about that...

For those stuck in 1950s clichés: yes, a robot can be programmed to fall in love or to have feelings. Once we manage to handle the patterns inside the learning machine, we can create them, link them to some physical reactions, for example, and make them trigger in certain situations. An ant is conscious when it runs away from us: prerecorded patterns recognize the danger. Carrying its food is just predefined instinct, or maybe a learned behavior; both are stored identically in the nervous system.

Natural Language Processing

Natural language processing is the hardest part of the story, the holy grail of artificial intelligence. It's the element that defines the Singularity. It has nothing to do with the "word matching" done by machine learning today, which basically searches for similar phrases or groups of words in a database.

Language processing means a bidirectional form of communication with the thinking machine in which the machine expresses its current state by inspecting internal patterns. It’s the ability of the machine to match the communicated concepts into existing patterns and provide related patterns as feedback. It’s the capacity to synchronize those concepts with consequences, with its own imagination, with history and with a semi-predicted future. It's so much more....

This is the hardest part to achieve and should be approached LAST, as it requires all the other subsystems to be in place. This form of communication also appeared last in human history, about 100,000 years ago, long after the brain was big enough for other types of processing. Dolphins are quite intelligent but don't communicate too much. We need to start with a primitive form of language and improve it over time, but it can't be done without solving intelligence first. We may even need to ask what we would do with a philosopher robot, in case we already manage to have machines that invent things…

Chatbots, the way they are today, can hardly be improved. There hasn't been much progress in the last 30 years, and even if we use machine learning on billions of prerecorded conversations, they won't generate natural intelligence.

Even if we manage to isolate and deal with concepts at the language level and to add some kind of history of the conversation, it will be difficult or impossible to predict the immediate future or the consequences based only on texts. And this is a huge part of the feeling that you are talking to an intelligent entity; they may never pass a Turing test.

Assistants:

Image 10

So the illusion of a rational answer cannot replace real thinking. This doesn't mean they won't sell, or that people won't use them for simple tasks, simple question/answer dialogs or assistants. They are just funny gadgets (unless companies choose to replace call center operators paid minimum wage with chatbots, in which case they are annoying gadgets).

Conclusions

There are multiple disciplines involved here - neurology, mathematics, computing, biology, etc. - so maybe I haven't been perfectly accurate in every case. But the big picture is probably right. Maybe a new discipline is needed, something that links neurology to machine learning.

The reverse engineering of the brain is a work in progress and there is still a lot to discover. But today we are at the point where we know enough to start, and we have enough computing power. The new AI algorithms have just proved this is possible; we only have to find ways to improve them. It may take years of trial and error, of adjustments and innovations, but intelligent machines are on the way. And they will bring a better life for all of us.

We are probably meant to create them, as a new form of life which at some point may save us from a dying Earth, whether over-polluted, too hot or irradiated...

Humans tend to predict the future too early and too timidly compared to reality. They imagined teleportation and interstellar travel well before 2000, yet 10 years ago they weren't able to imagine self-driving cars; all science fiction movies still have drivers and pilots. Also, all aliens are flesh and blood, when it is obvious that our robots will meet theirs first. In this regard, even if the Singularity occurs 10 years later than expected, its impact will be much higher than anybody thinks today. And hopefully positive.

"Success in creating effective AI could be the biggest event in the history of our civilization."

Stephen Hawking

TAGS: AI, AGI, Python, Machine Learning, Singularity, Neural Patterns

This article is part of the series 'Artificial Neural Networks vs Real Nervous Systems'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Adrian Pirvu
IBM, Romania
adrian.pirvu@gmail.com
