Intelligence Generator and Detector

Neuroscience-based machine intelligence models by Gary Gaulin, contact:

Sunday, December 6, 2015

Intelligence Design Lab #5 is now online!

The software package is available from Planet Source Code:

Or download with this link, which includes compiled exe for Windows:
The Intelligence Design Lab-5 is a cognitive model whose behavior is guided by a navigational network system that maps out an internal representation of its external environment (an internal world model) using a 2D array, where signal flow (magnitude and direction) vectors point out the shortest path to where it wants to go. This is a vital part of our visual imagination. During human development it is common and expected for children to stretch out their arms and say "I can fly!" as they run around while visualizing themselves navigating the sky.
Physical properties at each place in the external environment are mapped into the network according to whether they are safely navigable, an unnavigable boundary or border at a barrier, or a place that attracts it (in this case, where the food is).
An attracting location in the network provides a continuously repeating (action potential) signal that propagates outward in all directions and around barrier locations, which do not signal at all (the signal stops there just as the critter would by bashing into a barrier). In math these directional activity patterns are shown using a vector map. The ID Lab provides this in the onscreen Navigation Network form, which can show the signal direction through each place in the network.
Its confidence in motor actions (forward/reverse and left/right) depends on how well the magnitude and direction it is actually traveling match the magnitude and direction of the signal flow at its current place in the network. Where there is more than one pathway the shortest path dominates: it is the first to propagate to that point and is therefore favored. Where there are two or more paths of equal distance it may become indecisive, but will soon favor one path over the others.
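The shortest-path signal flow described above behaves like a breadth-first wavefront. As a rough illustration (in Python rather than the Lab's Visual Basic, with hypothetical names and a square grid instead of the Lab's geometry), here is a sketch of a signal spreading outward from an attracting location around barrier cells:

```python
from collections import deque

def propagate(arena, attractor):
    """Breadth-first wavefront from the attractor; '#' cells are barriers
    that stop the signal, just as a wall stops the critter."""
    rows, cols = len(arena), len(arena[0])
    flow = {attractor: (0, 0)}            # each cell: vector toward the food
    frontier = deque([attractor])
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < cols and 0 <= ny < rows
                    and arena[ny][nx] != '#' and (nx, ny) not in flow):
                flow[(nx, ny)] = (-dx, -dy)   # points back along shortest path
                frontier.append((nx, ny))
    return flow

arena = [".....",
         ".###.",
         ".....",
         ".....",
         "....."]
flow = propagate(arena, (2, 0))    # attracting location (food) at x=2, y=0
```

Following the stored vector at each cell from any open location leads around the barrier to the food by the shortest route, which is the sense in which the first-arriving wave "points out" the path.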
To test its place avoidance behavior a hidden moving shock zone slowly rotates counterclockwise while the critter chases food in a clockwise direction, heading straight towards the hazard. Although the test is demanding, the confidence system of this intelligence strives for perfection, as does a human athlete. The relatively high confidence levels shown in the included line chart indicate that the virtual critter is having fun. In the research paper "Dynamic Grouping of Hippocampal Neural Activity During Cognitive Control of Two Spatial Frames" (see notes), which the arena and some of the navigational network are based upon, it was found that some live rats preferred to chase after the treats even though they were not hungry enough to need to eat, while others preferred to remain in the shock-free center zone. Even a live animal has to first be willing to accept the challenge. For the virtual critter, several If-Then statements that compare actual travel magnitude and direction to that of the internal representation are enough to make it want nothing else but to chase the food around its arena.
Intentionally getting out of the way of the approaching invisible shock zone requires the ability to predict future environmental events from past experience. This was added by alternating between the current angular time frame (by default the room angle runs from 0 to 15) and the next angular time frame ahead. The places that will soon become a shock hazard periodically become places to avoid. This sequential on-and-off signaling causes a temporal decision to be made over time. The same works for swarming bees: scouts that find a possible new place to build a hive are, one at a time, allowed to dance out the location for other bees to inspect. This way each option is first considered before making a final decision. Otherwise all the bees would either swarm to the first site found or to different ones (instead of staying together).
The virtual critter cannot (like a swarm of bees) divide itself then go separate ways, therefore appropriate actions are taken simply by repeatedly presenting (in any sequence) what must be considered.
Exactly what it will choose to do at any given time is as hard to predict as it is in real animals. The only way to know for sure is to read its mind, which (by adding RAM monitoring code) is possible to do with the ID Lab critter. But it's still not at all like the easily predictable behavior of zombie-like "programmed" actions from an algorithm that uses math to make it go in a given direction in response to an approaching hazard, instead of simply showing it the options to consider then leaving the decision up to it to figure out on its own.
After avoiding being surrounded by the approaching zone it must have the common sense to go around behind it, then wait for the food to be in the clear, while knowing where the food is located even when it's surrounded by places to avoid that can (where signal timing is way off) block its signal activity. Where the signals from attract and avoid locations combine, the wanting to go both towards and away from the food results in it becoming nervously anxious and skittish, as are real animals faced with such a dilemma.
The signal timing that was found to work best closely follows Hebbian theory: neighboring cells that fire together wire together into a network with activity patterns that recreate the physical properties of what is in the external environment. It can also be conceptualized as a conservation-of-energy strategy, where at each place in the network an incoming charge is transferred to uncharged neighbors on the opposite, outgoing side. The signal energy is moved from place to place, not destroyed and then regenerated all over again.
To establish a benchmark that assumes error-free signals from the parts of the brain that use dead reckoning to convert what is seen through the eyes into spatial coordinates in the external environment, the program simply uses the already calculated X,Y positions that place things in the virtual environment. In the real world our brain converts visual signals into these spatial X,Y locations; a virtual environment has to instead start with them. If a dead reckoning system were added to this model and worked perfectly, these are the coordinates it would produce. Using the exact coordinates the program already has provides ideal numbers to work from, which in turn gives the critter an excellent sense of where visible things are located around itself, even though in this Lab its eyes cannot visually see them.
This navigation system demonstrates how simple it is to organize a network that provides navigational intuition like ours. It helps explain why animals (insects are also animals) seem born with a navigational ability that is there from the start. The origin of this behavior in living animals does not have to be a learned instinct that slowly developed over many millions of years by blundering animals passing on slightly less blundering behavioral traits to offspring. It's possible for these neural navigational networks to have existed when multicellular animals first developed, which set off the Cambrian Explosion. The origin of these inherent navigational behaviors may be best explained by the activity patterns in these relatively simple cellular networks.
The origin of our brain may in part be from subcellular networks that work much the same way in unicellular protozoans (single-celled animals) such as paramecia, which have eyespots, antennae and other features once thought to exist only in multicellular animals. Testing such a hypothesis using this computer model requires additional theory, which may have a controversial title, but going further into biology this way meets all of the requirements of the premise for an already proposed theory. In a case like this, regardless of the controversy, science requires developing already existing theory. Therefore see TheoryOfID.pdf in the Notes folder for a testable operational definition for "intelligent cause" where each of the three emergent levels can be individually modeled. It is predicted that this way it will be possible to demonstrate a never before programmed intelligent causation event, which is still a further research goal and challenge for all to enjoy.

Monday, March 31, 2014

Grid Cell Attractor Network for place avoidance spatial navigation around Repelling border/boundary cell mapped hazards or barriers, Version 2

This update to the Grid Cell Network adds a Pulse command button and more noticeable color coding, to make it easier to study the AC component (of the field produced by all output connections of the attractor location staying active/on), where the alternation between sets of force vectors (violet lines showing force direction) averages out to a more precise heading and encodes a range of possible paths that can be taken, depending on behavior. Some animals prefer to follow walls/barriers, while others prefer a more direct route, as this model does. Code for repositioning the MyX,MyY location was much improved by using smaller steps calculated from local force vector field strengths, and by converting back and forth between hexagonal grid network coordinates and the Cartesian coordinates of its environment (required by the IDLab). The Attract and Repel arrays were eliminated by storing their 1 bit of data each in the uppermost 2 bits of the GridIn(X,Y) array byte, which also stores the 6 bits of grid field input from neighboring fields (the N variable). Since the behavior of each field in response to these 8 bits of addressing input is the same for every network X,Y location, N, Attract and Repel state, the GridRAM(X, Y, N, A, R) array became simply GridRAM(GridIn(X,Y)), now addressed with only the GridIn(X,Y) byte. Training the GridRAM array for grid field behavior was then reducible to just four short lines of code in the Initialize subroutine. The TimeStep code is now better optimized and faster, even though these changes do not necessarily make it easier to understand how this Grid Cell Network model works (though they might).
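The packed GridIn(X,Y) byte layout described above can be illustrated like this (a Python sketch with hypothetical names; the model itself is written in Visual Basic):

```python
# Bits 0-5: grid field input from the six hexagonal neighbors (N).
# Bit 6: Attract flag; bit 7: Repel flag (the two uppermost bits).
ATTRACT = 1 << 6
REPEL = 1 << 7

def pack_grid_in(neighbors, attract=False, repel=False):
    """Pack six neighbor input bits plus the two place flags into one byte,
    which alone can then address a 256-entry GridRAM lookup table."""
    byte = 0
    for i, bit in enumerate(neighbors):
        byte |= (bit & 1) << i
    if attract:
        byte |= ATTRACT
    if repel:
        byte |= REPEL
    return byte

# One flat 256-entry table replaces the GridRAM(X, Y, N, A, R) array,
# because every field responds to the same 8 bits of input the same way.
grid_ram = [0] * 256
```

This is the sense in which eliminating the separate Attract and Repel arrays collapses five array dimensions into a single byte-addressed lookup.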

With compiled code for Windows:

Monday, March 17, 2014

Grid Cell Attractor Network for place avoidance spatial navigation

This is a demonstration program (including a Windows .exe) for the attractor network that made it possible for the next Intelligence Design Lab critter to confidently challenge the invisible moving shock zone arena, which required adding this hippocampus-related network to its confidence system. It's the internal world model where the path around obstacles is planned out, visualized. With this added it learns to leave food in time to get out of the way of an approaching shock zone, then is soon impatiently waiting behind it until safe to eat the rest. Grid fields (here one field per cell) form a hexagonal array/lattice with an electrochemical field that is disrupted by border (also called boundary) cells mixed into the grid cell population, making places in the grid Repel instead of Attract. As in radio transmission, a cyan-colored attracting location (food or other immediate need) emits continuous AC waves by turning off then on again; it oscillates. Waves propagate around the barrier to the critter's location (yellow with a tan center), then guide it every step of the way back to the oscillating attractor as it follows the directional violet angular vectors formed by the activation pattern of the 6 neighboring cells around each place along the way. How our brain or another cognitive system might produce and combine signals into such a spatial representation does not matter to this minimal code model. Since this demonstration greatly simplifies what is most important to know about how the upcoming IDLab4 works, it made sense to start with this simplified model, now ready for you to experiment with.

The primary Code:

Monday, November 7, 2011

Intelligence Design Lab

There is now next-generation software being regularly updated, the Intelligence Design Lab shown below. Its large round eyes with a mouth in the middle are a good approximation of the overall geometry that results from flattening 3D dynamics into a flat-land world with no up/down. This greatly simplifies the modeling task without loss of relevance to real-world biological systems:

Includes a Theory of Operation in the notes, for information on how its circuit works.

The core model of this theory is an Intelligence Algorithm that models the systems biology interaction required to produce intelligence, using an algorithm optimized for the digital RAM memory systems of a personal computer. Neural networks are another way to achieve this system interaction, biologically accomplished by molecular intelligence systems that similarly have spatially and chemically addressed genes for data elements. The result is a small, fast algorithm/circuit for generating rudimentary intelligence, to control the molecular, cellular and multicellular intelligence of virtual entities.

Source code written in Visual Basic 6.0, along with an .exe run file that does not install anything that later requires uninstalling:

Tuesday, November 18, 2008

Source code (without the compiled .EXE program, which Visual Basic programmers do not need) is available on Planet Source Code here:

If you do not have a Visual Basic compiler then here is a zip file with the IntelligenceGenerator5.EXE file.


In the above download, save the zip file to your hard drive in a new folder with a name like IntelligenceGenerator5, then locate and open the folder to unzip and run. There should not be a problem, nor do I expect one, but if you worry about viruses then before running you can check to make sure this is the program I uploaded. Here are the "Properties" (right-click on the program icon) of my copy of the IntelligenceGenerator5.EXE file to look for; they should match what you received.

Modified: November 14, 2008, 8:24:42 AM
Size: 108 KB (110,592 bytes)


Autonomous behavior does what it wants to do. In some cases it can be trained, but wanting to be trained has to be part of its behavior. For this reason fully autonomous robots are unpredictable and may, accidentally or on purpose, charge at and attack what they see or hear.

A robot born with no memories at all that is left on its own to explore and learn would have to bump into things all the different ways it can before learning how not to bump into them. We did the same. Very early on we bumped into a solid object to find out that it hurts, so we had to learn how not to do that. Then we learned how to stand by trying to stand, then falling down every time, until we finally made it all the way up and were standing, then fell right back down again. Therefore a ten-horsepower robot scurrying around the living room is a very bad idea. It might even learn to avoid stalling out when it hits a wall by going fast enough to go clear through, and soon be down the street visiting the neighbors' houses.

Autonomous behavior might make for a very bad housekeeper, but it will be studied here because it is found in molecular machines, cells, and insects on up to humans, where this behavior is sometimes called "free-will".


Searching for answers and striving to be increasingly better is inherent to the learning mechanism itself, in the simple yet effective way that it works. The human brain is much more complex than the computer model, but the fundamental interaction is the same. We have a memory that responds to what is being sensed with action signals sent to muscles, where feedback circuits wire back success or failure, and pain receptors add a more automatic "don't do that" reflex that more suddenly puts what the muscles are doing into reverse.

Human intelligence is electrochemically produced by neurons that also control muscles and other processes. In addition to intelligence, the human brain also possesses consciousness. Although consciousness has been traced to a relatively small region deep inside the brain, how this awareness works is not yet known.


Associative Memory stores an Action to be taken in response to environmental Sensors.

We have the following sensors, called the "Senses"[23]:

1 Sight
2 Hearing
3 Taste
4 Smell
5 Balance and acceleration
6 Temperature
7 Kinesthetic sense
8 Pain
9 Other internal senses

In computer programming an associative memory is easily implemented with an associative array. The memory system is represented by the name it was given when created; we will simply call it "Memory", which in turn gives us the array "Memory()", where inside the parentheses we specify where in memory to read or write action data.

To read/recall a Memory Action:

Action = Memory(Sensors)

To write/store a Memory Action:

Memory(Sensors) = Action

The "Sensors" variable is an integer (whole) number that addresses one of the Actions in memory for reading or writing. Its value ranges from 0 to the last location, which is determined by how many sensors of any kind the robot has in total.

In an electronic system the size of memory exactly doubles for each on/off condition (bit) that is added to be sensed. Where there is only one bit, memory has two Action data locations, 0 and 1. Where there are two bits there are four locations, 0 to 3. Where there are three bits there are eight locations, 0 to 7.
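The doubling rule and the binary addressing are straightforward to express. The following sketch (Python, with hypothetical function names) shows both:

```python
def memory_size(sensor_bits):
    """Each added on/off sensor bit doubles the number of Action locations."""
    return 2 ** sensor_bits

def sensors_address(bits):
    """Weight each sensor bit by a power of two:
    first bit *1, second *2, third *4, and so on."""
    return sum(bit << i for i, bit in enumerate(bits))
```

So Memory(Sensors) reads the one Action location selected by the current combination of sensor bits.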

Where there is more information than can all be wired into a single memory system at the same time, as in the retina of our eye, many sensors are summed together to extract only the information the rest of the brain needs from the overwhelming number of photoreceptors that the retina contains.


When you encounter a new problem you never saw before, you know when you have no solution in memory. Your confidence in having the correct response is 0, because you have no response at all for it yet. The best you can do is guess. If that didn't work, then you guess again. While growing up we had to try holding cups upside down and at other angles to figure out that unless a cup stays "upright" the contents spill all over the floor. And coordinating muscle movement to walk then run involves a lot of falling down.

In the computer model all locations in memory likewise start off with Confidence = 0. Confidence is incremented up to a maximum of 3. When a guess leads to what it instinctually wants, it’s stored with Confidence = 1. If it works again, then it’s incremented to 2. Then finally to 3. The confidence range of 0-3 is all that’s needed. Going beyond that range is not necessary.

How confidence is incremented leads to various behaviors. Going all at once from 0 to the maximum of 3 leads to overconfidence. If its confidence is easily brought back to 0 then it will have little confidence in any of its responses, but that can lead to trying new things.
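The 0-to-3 confidence rule amounts to a clamped counter. A minimal sketch (Python; the function name is mine, not from the Lab):

```python
def update_confidence(conf, success):
    """Increment confidence on success, decrement on failure,
    always clamped to the 0..3 range used by the model."""
    return min(3, conf + 1) if success else max(0, conf - 1)
```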

Being hungry and seeing food while its motor direction is moving it closer is here a successful response. But if it is hungry and what it's doing is not getting it any closer, it's failing, so the intelligence must take a guess. Random motor settings are tried. If the settings work then they remain in memory, else it takes another guess at what to do.

The computer model has a ring circuit that adds a sense of what is around itself, and with it comes a significant increase in confidence. With this simple circuit the intelligence will right away know where something out of its field of vision is located. There are here six memory locations, where one is set to act as a pointer to one of six angles.

When you click the "Circuit" checkbox you see what is in essence a ring of six neurons, numbered from 0 to 5. The state of what it sees ahead at Angle 0 shifts from neuron to neuron around the ring. The reference angle for body position could come from the sun, magnetic field, nearby landmark, balance circuit, or other source of rotational feedback.
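The ring's shift behavior can be sketched as a simple rotation of six states (Python; a hypothetical rendering of the circuit, one step per 60 degrees of body rotation):

```python
def rotate_ring(ring, steps):
    """Shift the six-neuron ring so the bit seen at Angle 0 moves from
    neuron to neuron as the body turns (positive steps = one direction)."""
    steps %= len(ring)
    return ring[-steps:] + ring[:-steps] if steps else ring[:]

ring = [1, 0, 0, 0, 0, 0]   # something currently seen straight ahead
```

After six steps (a full body rotation) the state returns to where it started, which is how the ring keeps track of where a once-seen object now lies relative to the body.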

The direction of the food is shown as a blue pointer. In this example the food angle just switched from being to the left, out of the field of view, to being straight ahead. The right motor is running forward (green) while the left is stopped (gray). So it is here Spinning Left (SpL) and Spinning Towards (SpT) what it "sees" to the left of it.

Sensing direction like this adds another level of intuition. It will now learn how to turn in the right direction to follow something even when it is out of its field of vision. Like when something goes fast in front of you, turning in the right direction is automatic. You don't look left then right, back and forth, until you by chance see it again. It's possible to grab the last feeder with your mouse so that the critter has to chase after it. You will notice that it learns to turn in the right direction towards it.

What this adds is shown below, where the chart with the Angle and SpT bits added to the Address is shown above a graph of the same age without them. The Food Supply slider control was set to 1 (the lowest amount) so it has to work hard to keep itself fed.

In the graph you can also see how well it keeps itself fed: the red line. No intelligence at all would have the graph showing a battery level that flat-lines at 0, which in essence is dead. But notice the red line here. See how very quickly it learns to find food so it doesn't go to zero full, starved, as it would be if stuck in a circle never bumping into food.

We can see from the lower red line of the second graph that it had more trouble staying fed. Confidence is also noticeably lower due to it being a slower learner. Shown below is the graph when its ability to see food is entirely taken away.

In the above three examples, the only thing that has changed is the amount of environmental sensory information. And this intelligence is not at all fussy. Adding any kind of sensor, that does almost anything, can greatly increase its awareness.

Sensory information such as this does not require detail. So even though the typical ring neuron structure is another simple, common six-sided hexagonal geometry, with large numbers of them a biological brain can store angles representing complex sights and sounds.

It is not necessary to connect every single pixel of eye information directly to memory. In human vision many photoreceptor pixels are combined into a single signal before being sent to the brain via the optic nerves. The visual information is not processed all at once; it is processed in "layers" of neural circuits.


To form an Intelligence System with a RAM chip type memory system we address memory with input sensory information from photosensor eye pixels, a microphone ear, a battery charger (the taste of its food), battery low (hungry), a bumper, or other useful sensors. This gives each unique environmental situation a unique memory location, where a unique Action response to it controls motors, a speaker, a light, or another device it might at some time find helpful.

Whatever is there for a photoreceptor will work just fine. Even an eyespot made of a centriole crystal is better than nothing. On up from there are the telephoto eyes of birds of prey. Whatever is developing for an eye gets wired in.

The basic mechanism that produces the phenomenon of intelligence can be modeled with a simple loop. We will here give the intelligence control of a tank-like two-motor drive system. Motor Forward and Motor Reverse are controlled with two bits, where the motor is off when they are 00 or 11 and moving one way or the other when they are 01 or 10, with it not mattering which order the control bits are connected to memory. Intelligence inherently self-organizes all inputs and outputs, then "learns".
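The two-bit-per-motor encoding can be sketched as follows (a Python stand-in; the Lab itself is written in Visual Basic):

```python
def motor_state(forward_bit, reverse_bit):
    """Decode one motor's two control bits: off when they match (00 or 11),
    turning one way or the other when they differ (01 or 10)."""
    if forward_bit == reverse_bit:
        return 0                     # 00 or 11: motor off
    return 1 if forward_bit else -1  # 10: one direction, 01: the other
```

Because 00 and 11 both mean "off", a random guess over the four bits still lands on a valid motor command every time, which is what lets the guessing mechanism work without any special-case handling.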

In the first line of program code we have what the intelligence is to control, which could be real motors. With molecules this could be the Krebs Cycle. The "Call" instruction causes top-to-bottom program flow to jump to another routine that generates a virtual environment containing the robot, then jumps back when finished. Where real motors are used, the four motor control bits are simply sent to a motor controller circuit before returning.

The second line adjusts a Confidence level in response to the condition of the "Stall" environmental input sensor, which is 1 (true) when either wheel stops turning, as it would when a wall stops it. We here combined the Left and Right Stall shown in the electronic circuit diagram above into one, which does not have to be done but is used that way here. Other sensors, such as an eye pixel, a battery low sensor, and another for having found the charger, are added with further If..Then.. statements. Conf(Addr) is a memory array location that stores a Confidence level from 0 to 3 at the address specified by the "Addr" variable. Due to the way electronic counters operate (unlike a synapse) the program must keep Conf(Addr) from going below zero or above the limit, in this case three. The RunMotors subroutine here changes -1 to 0 and 4 to 3 so it stays in range.

The third line uses binary powers of two so that there is a unique memory Address location for each possible input sensor combination. Networks of neurons already connect in a way that forms unique branching paths, so they do not require a numerical address like this, but a computer memory simulating them requires that a number be given. Other inputs can be included in this addressing with the next power of two, such as adding "+(EyePixel*32)" to include a photosensor that sees light from a battery charger. Memory size doubles for each bit added, which is at first not a problem but can become unnecessarily complex. Not all sensory information needs to be included in the addressing, just what is needed to make an efficient addressing system that sorts experiences into unique locations in the memory. When there are a large number of inputs they are first summed in different layers of detail.

The fourth line takes a guess when confidence in an action is below one (zero) by randomly setting the four motor control bits, then setting the confidence level to one to indicate low certainty. This part of the mechanism is also intuitive when one imagines what would happen if we could not take a "guess" when necessary. We would forever get stuck right there, maybe repeating the same unsuccessful action like bumping into a barrier over and over again until dropping from exhaustion. Flies sometimes do this for a while against a pane of glass while trying to reach a light source on the other side. At some point it has to realize that it is not having any success and try something else, or perish. Even a dumb guess can still be a correct response to the environment. This happens in the computer model when it is stuck against the wall, able to go no further. It has to be able to take a guess at how to get out of that fatal (when it starves there) situation, just as would happen to a simple organism in a changing environment with a genome that has 100% replication accuracy and so never tries anything new, or whose guesses result in too many useless responses. The ability to take a "good guess" then stay with it must be present for either a genome or the intelligence of the computer model to adapt and survive.

1: Call RunMotors
2: If Stall=0 Then Conf(Addr)=Conf(Addr)+1 Else Conf(Addr)=Conf(Addr)-1
3: Addr = LMF + (LMR*2) + (RMF*4) + (RMR*8) + (Stall*16)
4: If Conf(Addr)<1 Then LMF=Int(Rnd*2): LMR=Int(Rnd*2): RMF=Int(Rnd*2): RMR=Int(Rnd*2): Conf(Addr)=1
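For readers without Visual Basic, here is a rough Python rendering of the same loop (a simplified sketch: the environment is reduced to a Stall flag passed in, the four motor bits are packed into one 0..15 value, and the ordering of the confidence and addressing steps is condensed):

```python
import random

conf = [0] * 32      # confidence 0..3 for each of the 32 addresses
motors = 0           # the four motor control bits packed as one 0..15 value

def step(stalled):
    """One pass of the loop: re-address, adjust confidence, guess if needed."""
    global motors
    # Line 3 analogue: unique address from the four motor bits plus Stall.
    addr = motors + (16 if stalled else 0)
    # Line 2 analogue: raise confidence when moving freely, lower it on a stall.
    conf[addr] = min(3, conf[addr] + 1) if not stalled else max(0, conf[addr] - 1)
    # Line 4 analogue: when confidence falls below one, guess new random
    # motor bits and mark the guess with the lowest non-zero confidence.
    if conf[addr] < 1:
        motors = random.randrange(16)
        conf[addr] = 1
    return addr
```

Calling step(False) repeatedly rewards the current action up to the maximum confidence of 3, while a stall drives confidence down until a new guess is taken.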

This model is also analogous to finger muscle control, which through training becomes coordinated in a way that has the keyboard layout stored as motions to reach each key. In both cases intelligence successfully learns to navigate a 3D space without requiring a physical map. We are therefore able to type without consciously thinking about the level of intelligence that does the actual typing. There is in essence more than one intelligent mechanism at work in a brain; a number of them function at the same time.

We can sum up this mechanism in four requirements:

1 Something to control, such as motors, muscles, inner cellular structure (stem cell migration) or the Krebs Cycle.
2 A way for the success or failure of an action to be measured, which can range from visual feedback to correct typing errors, to molecular chemical feedback, to the extreme case where not being able to endure the environment simply eliminates it.
3 A memory with a structure that saves actions in a unique location for each combination of sensory input signals, such as the network addressing of a brain, or genes located at a unique functional location in a chromosome, in a unique chromosome territory inside the nucleus.
4 A way to take a guess in order to try a new action, which at the genome level involves code changes; in somatic hypermutation (cells of the immune system) regions of the genome undergo recoding at some million times the normal rate to find a way to destroy an invader.

Classes Of Robotic Self-Learning

It is useful to classify intelligence in robotics following David L. Heiserman (1979) in regard to the self-learning autonomous robot, for convenience here called "Rodney".[4] The Intelligence Generator/Detector described above is Beta class.


While Alpha Rodney does exhibit some interesting behavioral characteristics, one really has to stretch the definition of intelligence to make it fit an Alpha-Class machine. The Intelligence is there, of course, but it operates on such a primitive level that little of significance comes from it. ....the essence of an Alpha-Class machine is its purely reflexive and, for the most part, random behavior. Alpha Rodney will behave much as a little one-cell creature that struggles to survive in its drop-of-water world. The machine will blunder around the room, working its way out of menacing tight spots, and hoping to stumble, quite accidentally, into the battery charger.

In summary, an Alpha-Class machine is highly adaptive to changes in its environment. It displays a rather flat and low learning curve, but there is virtually no change in the curve when the environment is altered.


A Beta-Class machine uses the Alpha-Class mechanisms, but extends them to include some memory - memory of responses that worked successfully in the past.

The main-memory system is something quite different from the program memory you have been using. The program memory is the storage place for Rodney's basic operating programs, programs that are somewhat analogous to intuition or the subconscious in higher-level animals. The main memory is the seat of Rodney's knowledge and, in the case of Beta-Class machines, this means knowledge that is gained only by direct experience with the environment. A Beta-Class machine still relies on Alpha-like random responses in the early going, but after experiencing some life and problem solving, knowledge in the main memory becomes dominant over the more primitive Alpha-Class reflex actions.

A Beta-Class machine demonstrates a rising learning curve that eventually passes the scoring level of the best Alpha-Class machine. If the environment is static, the score eventually rises toward perfection. Change the environment, however, and a Beta-Class machine suffers for a while, the learning curve drops down to the chance level. However, the learning curve gradually rises toward perfection as the Beta-Class machine establishes a new pattern of behavior. Its adaptive process requires some time and experience to show itself, but the end result is a more efficient machine.


A Gamma-Class robot includes the reflex and memory features of the two lower-order machines, but it also has the ability to generalize whatever it learns through direct experience. Once a Gamma-Class robot meets and solves a particular problem, it not only remembers the solution, but generalizes that solution into a variety of similar situations not yet encountered. Such a robot need not encounter every possible situation before discovering what it is supposed to do; rather, it generalizes its first-hand responses, thereby making it possible to deal with the unexpected elements of its life more effectively.

A Gamma-Class machine is less upset by changes and recovers faster than the Beta-Class mechanism. This is due to its ability to anticipate changes.