Hello World - Human Neurone Simulations Part 3

Published on 3 February 2025 at 15:50

"An idea is just neurones firing in the brain" - Butler

 

So I have been playing with building a brain. Admittedly just playing, but a few weeks ago I got the code for how the differential equations work and have been playing with how you train them, to see if I could figure out a way that they work. After a while reading about neurotransmitters and getting lost, I started running tests and roughly throwing the ideas I was reading about into the equations. So this is a short blog post about the differences between artificial neural networks as they are and our brains, underscored by a few simulations, to get a view of what the future might hold.

Human Neurones

A big difference between human neurones and the linear algebra we use in LLMs and ANNs is that they have timing functions, and from my reading these are controlled by the proportion of neurotransmitters in the neurone. Learning seems to take place both through changes in what might be said to be analogous to weights in an ANN, and through releases of neurotransmitters in the brain that modulate the quantity of these chemicals in the cells. There is also activation of gene expression, and neurones that activate too little start finding new connections with nearby cells.

By doing this the brain trains quantitatively by strengthening output, trains timings by modulating its neurones' timing, and continuously rewires and redesigns its circuits. Our current artificial neural networks can only do the first part, the quantitative increase or decrease in output and wiring, and do not deal with timings. Or at least in general; specifically, transformers like ChatGPT are not building timing models or changing their architecture in real time.
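
To make the timing point concrete (this is just an illustration I have put together, not anything from my simulation code): a standard ANN unit only ever sees a weighted sum, so it gives the same answer however its inputs are spread out in time, whereas even the simplest spiking model, a leaky integrate-and-fire neurone, fires or stays silent depending on when the inputs arrive.

```python
import numpy as np

def ann_unit(inputs, weight=1.0):
    """A standard ANN unit: the order and timing of the inputs are irrelevant."""
    return np.tanh(weight * np.sum(inputs))

def lif_spikes(input_times, t_max=100.0, dt=0.1, tau=10.0,
               v_thresh=1.0, jolt=0.6):
    """Leaky integrate-and-fire: each input adds a jolt to the membrane
    potential, which leaks away with time constant tau, so the neurone only
    fires if inputs arrive close enough together in time."""
    input_steps = {int(round(t / dt)) for t in input_times}
    v, spikes = 0.0, 0
    for step in range(int(t_max / dt)):
        v += (-v / tau) * dt          # passive leak
        if step in input_steps:
            v += jolt                 # incoming pulse
        if v >= v_thresh:             # threshold crossed -> spike and reset
            spikes += 1
            v = 0.0
    return spikes

# The same three inputs, just spread out differently in time: the ANN unit
# cannot tell the difference, the spiking neurone can.
print(ann_unit([0.6, 0.6, 0.6]))
print(lif_spikes([10.0, 12.0, 14.0]))   # bunched together
print(lif_spikes([10.0, 40.0, 70.0]))   # spread out
```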

In the below, top is the code as is, with a small jolt, a long inhibitory phase and a small jolt directly after. Notice the first firing is big, and the neurone does not reset even after an inhibitory phase and does not fire again (as in, it does not rise above zero, as very little happens).
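
For anyone who wants to play along, here is a minimal sketch of that kind of protocol. It is not my exact code; it assumes the textbook Morris-Lecar model, a two-variable neurone model that conveniently has a calcium reversal potential, integrated with a plain Euler step.

```python
import numpy as np

# Morris-Lecar parameters (a commonly published set; the real code may differ)
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0
E_L, E_Ca, E_K = -60.0, 120.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def m_inf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))
def w_inf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))
def tau_w(V): return 1.0 / np.cosh((V - V3) / (2 * V4))

def simulate(I_of_t, t_max=600.0, dt=0.05, E_Ca_of_t=None):
    """Euler-integrate the Morris-Lecar equations under an injected current
    I_of_t(t); the calcium reversal potential can optionally vary over time."""
    steps = int(t_max / dt)
    V, w = -60.0, 0.0
    trace = np.empty(steps)
    for i in range(steps):
        t = i * dt
        e_ca = E_Ca if E_Ca_of_t is None else E_Ca_of_t(t)
        I_ion = (g_L * (V - E_L)
                 + g_Ca * m_inf(V) * (V - e_ca)
                 + g_K * w * (V - E_K))
        V += dt * (I_of_t(t) - I_ion) / C
        w += dt * phi * (w_inf(V) - w) / tau_w(V)
        trace[i] = V
    return trace

def jolt_inhibit_jolt(t):
    """Small jolt, a long inhibitory phase, then a small jolt directly after."""
    if 50 <= t < 60:    return 90.0    # first small jolt
    if 60 <= t < 400:   return -30.0   # long inhibitory phase
    if 400 <= t < 410:  return 90.0    # second small jolt
    return 0.0

trace = simulate(jolt_inhibit_jolt)
print("max V during first jolt :", trace[int(50 / 0.05):int(60 / 0.05)].max())
print("max V after second jolt :", trace[int(400 / 0.05):].max())
```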

 

Though changing the calcium reversal potential during the code run completely changes the behaviour of the neurone, so that it fires more often. Therefore the action of neurotransmitters greatly changes the way neurones fire, and I have pulled some strange graphs out of it by changing these variables.
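
I will not claim this is exactly what my code does, but continuing the Morris-Lecar sketch above (so this piece is not self-contained on its own), a time-varying E_Ca is a small change, and counting threshold crossings before and after is a quick way to see the effect of that kind of mid-run modulation.

```python
import numpy as np

# Continuing the sketch above: hold the injected current steady and
# step the calcium reversal potential up partway through the run.
def steady_drive(t):
    return 90.0

def shifting_e_ca(t):
    # hypothetical neuromodulation: raise E_Ca halfway through the run
    return 120.0 if t < 300.0 else 140.0

baseline = simulate(steady_drive)
modulated = simulate(steady_drive, E_Ca_of_t=shifting_e_ca)

def count_spikes(trace, thresh=0.0):
    """Count upward crossings of 0 mV as spikes."""
    above = trace > thresh
    return int(np.sum(~above[:-1] & above[1:]))

print("spikes, fixed E_Ca   :", count_spikes(baseline))
print("spikes, shifted E_Ca :", count_spikes(modulated))
```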

Though the big picture is that learning for biological neurones probably relies on neurotransmitters opening and closing ion gates, allowing more or less of certain chemicals into the cells and changing how the cell fires, like the graph below. What is even more different from how transformers and ANNs work is that these chemical changes seem to have central command and control systems that monitor and change how the brain learns. Which means the brain is both learning locally at the neurone and has systems coordinating that same learning on ever greater scales of time and granularity. Which implies the brain is continuously thinking about thinking better. This wasn't something I was going to understand any time soon, but as soon as you start to think you understand enough of what it is doing, the brain surprises you.

 

 

Working out how to guess at what value to change was likely to be impossible. In biology it is very easy to read the changes in voltage when a neurone fires, but continuously reading the flow of ions into and out of a single cell without killing that cell seems next to impossible, and for that reason I have found no one with an idea of what or how you would train a neurone simulation (or at least I cannot access any published material, and ChatGPT did not know).

So I set myself to figure out how, and then how on earth you would create a test that would indicate something had happened inside it and it was doing information processing, because for obvious reasons I was not going to build a whole brain and run it on my home PC. It would need to be a test that was highly sensitive, which I could then use to spot the change in oscillation that might indicate something was happening inside it.

 

Neurotransmitters

 

Now, if I knew how all this worked I would explain the precise changes in leak reversal; instead, what I did was connect the neurone to a sigmoid output, create a control population of random numbers, and calculate a red line for the case where the differential equations are just oscillating around the sigmoid's zero point (the sigmoid outputs 0.5 when its input is zero, and the differential equation can oscillate around 0 as well, as you can see above).

The random samples are the orange values below and the blue samples are the current AI test population. So the AI is winning on that point.

Therefore, if there were no real information processing going on in the thing's "brain", you would expect it to sit at this point of -731680.7909908601 in our test (which is the red line) and not get above it. Getting above it indicates it is not just a periodic equation going around and around the zero point.

It should not be able to get past that point unless there is some synchronicity between it and the thing it is predicting. Or at least that's the hypothesis.

To be metaphorical: if Earth keeps circling in its orbit, that would be a score of -731680.7909908601 in this test, and it would be lower if Earth wobbled backwards and forwards. To score higher, Earth would sometimes have to decide to change course and go in the opposite direction, and if it did you'd ask who was driving that thing, and were they intelligent. It is the same logic: sure, the network could just oscillate, but there should be no big jumps over that red line without having to ask what did that.
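
I will not reproduce the exact scoring code here (the -731680.7909908601 figure depends on my run length and scoring function), but the red-line idea can be sketched with a cumulative log-likelihood: squash the output through a sigmoid, treat it as the probability of a binary target, and add up the logs. An output that just hovers around zero gives sigmoid values near 0.5, so it scores roughly N·log(0.5) over N samples, and that is the red line; random guessing lands in the same region, and the only way to get well above it is for the output to actually line up with the target. A rough sketch, with a made-up sine-wave target standing in for whatever is being predicted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(output, target):
    """Cumulative log-likelihood of a binary target under sigmoid(output);
    higher (closer to zero) is better."""
    p = np.clip(sigmoid(output), 1e-12, 1 - 1e-12)
    return float(np.sum(np.where(target == 1, np.log(p), np.log(1 - p))))

n = 10_000
t = np.arange(n)
target = (np.sin(2 * np.pi * t / 50) > 0).astype(int)    # the thing to predict

red_line = n * np.log(0.5)   # score of an output that just hovers around zero

oscillator   = 0.3 * np.sin(2 * np.pi * t / 37)   # periodic but unsynchronised
random_guess = rng.normal(0, 0.3, n)              # the random control
synced       = 2.0 * np.sin(2 * np.pi * t / 50)   # in step with the target

print("red line      :", round(red_line))
print("oscillator    :", round(score(oscillator, target)))
print("random control:", round(score(random_guess, target)))
print("synchronised  :", round(score(synced, target)))
```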

So that was the idea for my test: if the error reduced, play something for it with no inputs and see if anything synced up, which would imply things like learned timings and maybe memory.

The below shows the gulf between picking at random and this one. Random choice is orange, blue is the AI population, and the red line is explained above.

 

 

These are my two best performers. The lower-performing ones are all of a larger scale, and they seem to be less stable when training more than one connection to the sigmoid neurone, which seems to be unstable but sometimes produces that interesting spike. So I am using this scoring system as the basis of an evolutionary test, to rinse and repeat on finding what makes it go above that line.
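
The rinse-and-repeat loop itself is nothing fancy. Below is a minimal sketch of that kind of loop, not my exact setup, where run_and_score() is a made-up placeholder standing in for "run the neurone simulation with these parameters and score it against the red line".

```python
import numpy as np

rng = np.random.default_rng(1)

def run_and_score(params):
    """Placeholder: in the real test this would run the neurone simulation
    with these parameters and return the log-likelihood style score.
    Here it is just a dummy function so the loop is runnable."""
    target = np.array([1.0, -2.0, 0.5])          # arbitrary optimum
    return -float(np.sum((params - target) ** 2))

def evolve(n_params=3, pop_size=20, generations=200, sigma=0.1):
    """Keep the best individual each generation and mutate copies of it."""
    population = rng.normal(0, 1, size=(pop_size, n_params))
    for gen in range(generations):
        scores = np.array([run_and_score(p) for p in population])
        best = population[np.argmax(scores)]
        # next generation: the champion plus noisy copies of it
        population = best + rng.normal(0, sigma, size=(pop_size, n_params))
        population[0] = best                      # elitism: keep the champion
    return best, scores.max()

best_params, best_score = evolve()
print("best score after 200 generations:", best_score)
```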

I clearly know more about neurotransmitters than I did a week ago...

The below are generation 200, and while improvements have slowed for now, and this is a really small test, it is fun nonetheless. I mean, I have checked the numbers and they do seem to output correctly; it is not just picking a really nice moving average, which I tried, and checked that it scores lower than the random results, because with random you might be randomly right or wrong, but a wrong function just gives the wrong answer.
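
The moving-average check can be scored the same way: take a trailing average of the target's recent history, rescale it into a confident prediction, and run it through the same hypothetical score() as in the earlier sketch. The exact numbers depend on the window size and scaling, which is why it was worth scoring it directly rather than assuming.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(output, target):
    p = np.clip(sigmoid(output), 1e-12, 1 - 1e-12)
    return float(np.sum(np.where(target == 1, np.log(p), np.log(1 - p))))

n, window = 10_000, 20
t = np.arange(n)
target = (np.sin(2 * np.pi * t / 50) > 0).astype(int)   # same made-up target

# Trailing moving average of the recent target history (as +/-1 values),
# shifted one step so the prediction only uses past values, then rescaled
# into a confident logit.
signed = target * 2.0 - 1.0
history = np.convolve(signed, np.ones(window) / window, mode="full")[:n]
prediction = 4.0 * np.concatenate(([0.0], history[:-1]))

rng = np.random.default_rng(0)
print("red line      :", round(n * np.log(0.5)))
print("moving average:", round(score(prediction, target)))
print("random control:", round(score(rng.normal(0, 0.3, n), target)))
```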

I also think it shows the difference between current AI and where we could go if we really got to grips with some of the things the brain does, which aren't just mapping but managing rhythm and timings. And why keep developing only ANNs when the original, and still the best, inside our heads is so fundamentally different to ANNs?

The two below are now in a long-term test to see which one is better, as I need more samples to precisely measure which is doing better. And while I can consistently get above that red line if I precisely set its size, I do not know why a few perform so much better, and I do not have enough data to know if they are outliers.

I do not think it will scale yet to "brains", but it's a good start and it goes into the category of "what did that?".

 
