Hello World - Brain Wave Simulations

Published on 26 June 2025 at 17:22

"Creativity has a brain wave signature as well: alpha waves pushing out the right hemisphere" -Steven Kotler

 

That quote sounds great until you realise that's a whole quarter to a half of the brain, and alpha waves just mean lots of firing. This is my weird software development blog where I try new and different things with coding and AI. Today I am looking at building an artificial neural network using the Hodgkin–Huxley equations for the mammalian neurone, and seeking to build out an AI product: either it fails and I get a sense of where the upper limits for AI might be, or it succeeds and, well, I have a piece of software that is a biologically plausible simulation of a brain. A thinking machine in the Frank Herbert conception, or a brain emulator.
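For anyone curious what the building block actually is, here is a minimal sketch of a single Hodgkin–Huxley neurone integrated with forward Euler. These are the textbook squid-axon constants rather than the mammalian parameters I actually use, and the real network wires one to two thousand of these together:

```python
# A minimal single Hodgkin-Huxley neurone, forward-Euler integrated.
# Textbook squid-axon constants, not the mammalian variants used in the blog.
import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # capacitance, conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.4           # reversal potentials (mV)

# Standard gating-variable rate functions (V in mV, rest near -65 mV)
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

dt, steps = 0.01, 50000                        # 0.01 ms per step, 500 ms total
V, n, m, h = -65.0, 0.317, 0.053, 0.596        # resting-state initial values
I_ext = 10.0                                   # injected current (uA/cm^2)

for _ in range(steps):
    # Ionic currents through sodium, potassium and leak channels
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler update of the voltage and the three gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)

print(f"final membrane voltage: {V:.2f} mV")
```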

I had some hiccups at the start with estimating accuracy (something I try to avoid), but on a learning-English task the improvements jumped from under 1% to around 5-8%, with 1% probably being random chance. So I think it probably is, in some sense, doing something. I got there by focusing on studying brain waves in my simulation.

The simulation does neural plasticity, and the network itself is semi-random: the non-random part is an algorithmic minder that plugs the simulation together and keeps watch over it, because the brain simulations can fizzle out if not connected properly. In the graphs below this minder is set to maximise the size of the neural network, so it is continuously growing.
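To give a flavour of what I mean by a minder (the names here, like add_neurone and rewire, are illustrative stand-ins, not my actual code), it is a supervisory loop that grows the network and rewires any neurone whose activity is fizzling out:

```python
# A toy sketch of the minder idea; names and thresholds are illustrative
# stand-ins, not the actual code.
import random

class Cell:
    def recent_firing_rate(self):
        return random.random()            # stand-in for measured activity

class Network:
    def __init__(self):
        self.neurones = [Cell() for _ in range(10)]
    def add_neurone(self, connect_to):
        self.neurones.append(Cell())      # stand-in: wire the new cell in
    def rewire(self, cell):
        pass                              # stand-in: give the cell fresh inputs

def minder_step(net, min_rate=0.05, growth=2):
    # Continuously grow: here the objective is simply network size
    for _ in range(growth):
        net.add_neurone(connect_to=random.sample(net.neurones, k=3))
    # Keep watch: intervene on any neurone whose activity has fizzled out
    for cell in list(net.neurones):
        if cell.recent_firing_rate() < min_rate:
            net.rewire(cell)

net = Network()
for _ in range(100):
    minder_step(net)
print(f"network grew to {len(net.neurones)} cells")
```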

 

All the graphs below are of this network, made up of multiple mammalian neurone simulations, and usually show the total amount of something across the whole network, sometimes representing one to two thousand simulated cells.

 

Gradient Descent Does Not Work

 

So the first thing to say: if you just try to use gradient descent, it does not work. In the graph below the time step runs along the bottom (x) and the error (y) just gets worse, going ever more negative.
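For concreteness, the failing approach looked roughly like this (run_network and target below are smooth stand-ins; the real thing integrates the full spiking network): estimate a gradient over the synaptic weights and step down it. On spiking Hodgkin–Huxley dynamics the error simply never comes down:

```python
# A sketch of the naive approach that fails; run_network and target are
# stand-ins for the actual simulation and training signal.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=64)      # one weight per synapse
target = np.ones(64)                         # desired output pattern

def run_network(w):
    # Stand-in for integrating the spiking network under weights w;
    # the real version returns spike measures from the HH simulation.
    return np.tanh(10.0 * w)

def loss(w):
    return float(np.mean((run_network(w) - target) ** 2))

lr, eps = 0.5, 1e-4
for step in range(50):
    grad = np.zeros_like(weights)
    for i in range(weights.size):            # finite-difference gradient
        bumped = weights.copy()
        bumped[i] += eps
        grad[i] = (loss(bumped) - loss(weights)) / eps
    weights -= lr * grad                     # the plain descent step

# On this smooth stand-in the loop converges; on the real spiking dynamics
# the thresholded, chaotic loss surface makes the same loop diverge.
```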

 

Put Some Spin On It

 

What I learned was that you have to take the state of the other neurones into account, and this does come out of the Blue Brain Project as similar to how they trained their models. What I did differently was to start trying different versions and heuristics for pulling the neurone simulations into a sequential process, and to radically improve my minder algorithm.

It started to look like the graph below. Time runs along the x-axis, and the y-axis is the number of neurones firing at the zenith of their voltage output, which I wanted to maximise as a heuristic for the brain simulation cycling in a controlled manner. You will note the pattern is neither random nor the same each time, which is a good sign that information might be being encoded.

I can also just point out that the very way they look is an oscillating pattern: the distribution goes up and down, like an AI made of springs, or a piston. Or maybe a brain wave.
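What is actually being plotted is simple to state (voltages below is stand-in data; the real array comes out of the simulation): at each time step, count how many neurones are at the zenith of a spike:

```python
# A sketch of the heuristic being plotted; the voltages array and the 30 mV
# "zenith" threshold are stand-ins, not the actual simulation output.
import numpy as np

rng = np.random.default_rng(1)
voltages = rng.normal(-65.0, 25.0, size=(1000, 1500))  # [time, neurones] in mV

PEAK_MV = 30.0                                  # assumed spike-peak threshold
firing_at_peak = (voltages >= PEAK_MV).sum(axis=1)  # one count per time step

# Plotting firing_at_peak over time gives the oscillating, piston-like
# traces described above; the minder tries to maximise it.
```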

 

 

Though what I found interesting was that slight variations in both how you set up the neurones and their way of learning changed the scatter graph shapes. I know this is a bit unscientific, but they are pretty.

 

Voltage

 

I did play around trying to keep it perfectly centred on voltage, with some mixed success; the graph below is a bit more voltage-neutral than the others. For it, the minder algorithm was rebuilt to try to create a voltage-neutral brain simulation. Is it just me, or does it remind you of certain complex numbers?

The one below that is more normal. Also, yes, that is a brain simulation running at 15,000 volts; yes, that would fry us. No, I am not going to think too hard about that. It has safety measures and cut-outs to stop the explosion of numbers in its differential equations, or the propagation of error, but it turned out to be safe to run at higher levels than our wetware does, so I let it, as it does not crash the simulation.

The original looks something like this. Note the even higher total: 75,000 volts distributed across the network.
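For what it is worth, the voltage-neutral variant of the minder boils down to something like this (the names and the 5 mV tolerance are illustrative, not the actual code): score the network on how close its summed membrane voltage sits to zero, and rebalance the wiring when it drifts:

```python
# A sketch of the voltage-neutral objective; names and the tolerance are
# illustrative stand-ins, not the actual code.
import numpy as np

def neutrality_score(voltages_mv):
    """0 when the network-wide mean voltage is perfectly neutral;
    increasingly negative as the network drifts hot or cold."""
    return -abs(float(np.mean(voltages_mv)))

def minder_voltage_step(voltages_mv, rebalance):
    # rebalance(excite=True) would strengthen excitatory wiring,
    # rebalance(excite=False) would strengthen inhibitory wiring.
    if neutrality_score(voltages_mv) < -5.0:        # assumed tolerance (mV)
        rebalance(excite=float(np.mean(voltages_mv)) < 0.0)

# e.g. with stand-in data the network reads cold, so excitation goes up:
rng = np.random.default_rng(3)
minder_voltage_step(rng.normal(-8.0, 20.0, 1500),
                    rebalance=lambda excite: print("rebalance, excite =", excite))
```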

 

Chemical Fluctuations

 

I can also tell it has a big effect, because the graphs I did of the calcium fluctuations show big differences in waviness.

 

 

Conclusion 

 

This also fits into my general schtick: if LLMs are like our brains, as their proponents say (they say they have reasoning models, etc.), why does backwards propagation not even work when you use the standard ordinary differential equations that biologists use to describe neurones? For me it is a reason to think there might be something wrong with the LLM debate, and even if the proponents are right, surely you would need to go back and define in what way LLMs relate back to thinking in humans.

I mean, from where I am sitting, AI and brains seem to work on entirely different principles. It seems a mistake to assume progress on LLMs will be progress on AGI.

A current pet peeve of mine on LinkedIn is reasoning models, because there does not seem to be an answer on how they relate back to reasoning in humans; everyone seems to refuse to call them self-prompting models, which is what the discipline used to call them. From what I have heard, some of the papers suggest the reinforcement learning method used to create reasoning models shows little or no generation of new information; i.e. the AI just does things it would have done anyway if it were a bit bigger, with the key difference that it uses less compute at training but more when used, thereby passing the cost on to the user. Though people seem to want to argue that they do reason, and that this is a good thing, for reasons.

My simulation does not do much yet, but it is really pretty, and I also made sure to have something checking the accuracy of its behaviours, and I have managed to get some improvements.

For the most part it is just pretty, but I have switched over to trying to integrate it with reinforcement learning, because honestly I think we learn even language via a reward mechanism, and nothing like classification or anything else a transformer does. I also think propagating error through the network like that can create amplitude changes that the rest of the network might confuse for actual inputs.

So this version uses reinforcement learning via dedicated pleasure centres. This is more human, and it means you cannot rely on statistics to explain its performance.
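One common way to implement a pleasure centre like this is reward-modulated (three-factor) Hebbian plasticity; the sketch below shows that pattern, with names and constants that are illustrative rather than my actual code. Synapses keep an eligibility trace of correlated firing, and a global reward signal decides whether those traces strengthen or weaken the weights:

```python
# A sketch of reward-modulated plasticity as a "pleasure centre"; names and
# constants are illustrative, not the actual code.
import numpy as np

rng = np.random.default_rng(2)
n = 200
weights = rng.normal(0.0, 0.05, size=(n, n))   # synaptic weights
eligibility = np.zeros((n, n))                  # recent pre/post coincidences

def plasticity_step(pre_spikes, post_spikes, reward, decay=0.95, lr=0.01):
    """pre_spikes/post_spikes: 0/1 vectors of which cells fired this step.
    reward: scalar from the pleasure centre, positive on a right answer."""
    global weights, eligibility
    # Hebbian coincidences accumulate into a decaying eligibility trace
    eligibility = decay * eligibility + np.outer(post_spikes, pre_spikes)
    # The reward signal gates the actual weight change (three-factor rule)
    weights += lr * reward * eligibility

# e.g. after a correct letter guess, reinforce whatever just fired together:
plasticity_step(rng.integers(0, 2, n), rng.integers(0, 2, n), reward=+1.0)
```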

This is to say that by default it never gets the right answer, so hopefully when I think it is accurate I am safe in saying that. Though I have also been horrified by my own mistakes in the past, and now cannot change certain areas of code without checking them three or four times. But hopefully the aim is to maximise the parallel with human-type learning, so where the graph below hits above 0 accuracy, meaning an accurate utterance, it is more likely to be "thinking" than just stochastic-parrot, next-token type stuff.

Anything in the graph below that is above zero is a right answer: guessing the right letter when the AI has not memorised the book or anything else. It probably won't go anywhere, but this is an easy testing medium, and even if it doesn't give way to human-level development, what I want to see is whether I can get a reinforcement-type agent, pattern it against how our brain works, and then possibly integrate it with a drone.
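The accuracy measure itself is nothing fancy; a sketch (letter_accuracy is an illustrative name, and the strings are toy data) is just the fraction of next-letter guesses that match a text the network has never seen:

```python
# A sketch of the letter-guessing accuracy measure; the name and the toy
# strings are illustrative, not the actual code.
def letter_accuracy(guesses, text):
    """guesses and text are equal-length strings; returns fraction correct."""
    hits = sum(g == t for g, t in zip(guesses, text))
    return hits / len(text)

# e.g. letter_accuracy("teh qicku", "the quick") -> 0.333... (3 of 9 match);
# anything reliably above zero on unseen text is a real signal.
```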

I did try doing that, but I only got halfway through setting up the drone. Then I realised the code I got had all the Python commands to tell the drone where to go, but none to get the drone's location, meaning it would be useless for reinforcement learning. I figured I might have to build my own model of AirSim, and that would take more time. I then, rather crazily, set up a Doom reinforcement arena, but the frame rate for more than one agent is a bit low, and I might need to move it into C++ to make it faster. So I am in a bit of a holding pattern while I finish the easier tests in Python, where I have to worry less about numeric overflows, because the numbers in these calculations can be raised to the third or fourth power and early versions could just explode, though it now seems stable.

I do not want to move over to C++, because the speed comes at the expense of less error protection, and I do not want to change anything and get a value bigger than a double can hold, or an underflow. I am quietly confident I could port it over without risk, but I do not really want to cross that Rubicon while testing different variations.

But yeah pretty graphs from a machine. Do androids dream of electric sheep type of thing...

 
