Hello World - Chemical Simulation

Published on 1 April 2026 at 12:00

“Fill your brain with giant dreams so it has no space for petty pursuits” – Robin S. Sharma

 

Know What You Do Not Know

 

So I spent a long time reading through neurological literature, trying to close the gaps in my understanding of how to build a biologically plausible neural network.

The human brain is made up of neurones, which we describe with differential equations called the Hodgkin–Huxley equations; these produce spiking neurones, like the one below. In fact, the model below is designed to mimic human neurones identically... (or as closely as I currently understand them).
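For the curious, a single Hodgkin–Huxley neurone can be simulated in a few lines. This is a minimal sketch using the standard textbook parameters and rate equations, not the parameters this project evolved; all function names here are mine:

```python
import math

def alpha_beta(V):
    """Classic Hodgkin-Huxley rate constants (V in mV, rest ~ -65 mV)."""
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one HH neurone; returns spike count."""
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.387          # reversal potentials, mV
    V, m, h, n = -65.0, 0.0529, 0.5961, 0.3177     # resting steady state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = alpha_beta(V)
        I_Na = g_Na * m**3 * h * (V - E_Na)        # sodium current
        I_K = g_K * n**4 * (V - E_K)               # potassium current
        I_L = g_L * (V - E_L)                      # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m * (1 - m) - b_m * m)        # gating variables
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        if V > 0 and not above:                    # upward 0 mV crossing
            spikes += 1
        above = V > 0
    return spikes

print(simulate())  # a steady 10 uA/cm^2 drive gives repetitive spiking
```

Driving the cell with a constant current above its threshold produces the repeated spikes the post describes; with no input current, it sits quietly at rest.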

 

 

These spiking neurones have intrinsic timing and are vastly more complex than our current AI; the AI technology we use today really does not match the complexity of a single neurone in the human brain.

 

I identified a gap in our understanding of the chemical changes that take place during “learning”, so I built a very large genetic algorithm to reverse-engineer them. This process treats everything in the neurone as a “learnable” parameter, with a DNA code that describes the full behaviour of that cell and can be changed. Just like relationships, being better means a better chance of having more children. I also used a range of different algorithms that do similar things, so it is not wholly evolution but evolution plus evolution-adjacent processes; i.e. I tried a lot of different models for changing the genetics I had built and kept what worked.
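The core loop, small random steps plus fitness-based selection, can be sketched as a toy genetic algorithm. All names, numbers, and the toy fitness function here are illustrative, not the project's actual code:

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=60, elite=4):
    """Minimal genetic algorithm: each genome is a vector of learnable
    cell parameters; fitter genomes get more children, elites survive."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # lower error = fitter
        parents = pop[:pop_size // 2]              # best half reproduce
        children = []
        while len(children) < pop_size - elite:
            mum, dad = random.sample(parents, 2)
            cut = random.randrange(genome_len)     # one-point crossover
            child = mum[:cut] + dad[cut:]
            i = random.randrange(genome_len)       # one small random step
            child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = pop[:elite] + children               # elitism: keep the best
    return min(pop, key=fitness)

# Toy fitness: distance of the genome from a hidden "true" parameter set.
target = [0.3] * 8
best = evolve(lambda g: sum((a - b) ** 2 for a, b in zip(g, target)))
```

Swapping the toy fitness for "distance between simulated and recorded EEG" gives the shape of the experiment described below; the post notes the real system also mixes in other evolution-adjacent update rules.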

 

In short: small random steps and repeated tests. In the end, it's just an automated scientific method.

 

Experiment

 

So I figured no one would ever believe I had built a brain emulator unless I could show it worked for recording and simulating the human brain. Therefore, that is what I did…

 

So you know that show Pantheon on Netflix? I tried to build that, but with less of the melodrama and military espionage. I was originally going to test language outputs, as per my Bayesian Turing Test idea (from way back now); but on a whim I tested how quickly the brain emulation could be paired with human EEG, and this turned out to be a much faster way to test and fail than language, so I started again with that. Language requires a certain minimum level of competence and, as per my last update, has various simple solutions that score well.

The test was nothing less than: could I cram a live EEG onto this brain simulated on my PC... with all the implications therein?

 

I finished the genetic algorithm, and I am now going through system by system, ensuring everything has the relevant level of analysis. When I looked up our level of understanding of the brain, one of the key missing steps I found in the literature was LTP, or long-term potentiation, which is not very well understood. The main model of the neurone, the Hodgkin–Huxley formulation of the differential equations that represent a spiking neurone, already existed, so I used that as the basis.

 

Therefore, what I am doing is using the genetic algorithm to close that gap in understanding: each chemical change you could make to a neurone is represented as a vector applied to a chemical model of both the concentrations and the rates of change of chemical concentrations inside the cell. Hopefully, I can then close that gap and understand which steps inside a human neurone are likely to aid learning. Pictures in the post show how changing certain chemical concentrations changed the cell's behaviour.
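One way to picture "concentrations and rates of change as a vector" is a state vector of chemical levels plus an evolved rate matrix governing how they change. This is a minimal sketch assuming simple linear kinetics and invented chemical names; the project's real chemical model is not shown in the post:

```python
import numpy as np

def step_chemistry(conc, rates, dt=0.1):
    """Advance intracellular concentrations one time step.
    conc:  vector of chemical concentrations inside the cell
    rates: evolved matrix; rates[i, j] is the influence of chemical j
           on the rate of change of chemical i (a learnable 'gene')."""
    d_conc = rates @ conc                          # linear kinetics only
    return np.clip(conc + dt * d_conc, 0.0, None)  # concentrations >= 0

conc = np.array([1.0, 0.5, 0.0])       # e.g. calcium, kinase, product
rates = np.array([[-0.1,  0.0,  0.0],  # calcium decays on its own
                  [ 0.2, -0.05, 0.0],  # calcium activates the kinase
                  [ 0.0,  0.3,  0.0]]) # the kinase builds the product
for _ in range(100):
    conc = step_chemistry(conc, rates)
```

In this framing, the genetic algorithm's job is to find the entries of the rate matrix (and whatever non-linearities the real model has) that make the simulated cell learn the way a biological one does.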

 

To ensure the model is not over-fitting, I have 35 different EEGs in the data set, and the AI is only trained and measured on one at a time. Every few generations, the amount of data is increased and the simulation is extended, so the AI is challenged to work harder for longer. The number of generations before the experiment gets harder is crafted so that it is statistically unlikely that the same genetic code, even when “elite” and therefore used multiple times, should encounter an identical brain scan. This should limit over-fitting, but ideally I really want my own EEG headset so I can start creating my own readings and files, both for variety and for tighter control of the provenance.
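That escalating schedule can be sketched as a simple curriculum function. The 35 recordings come from the text above; every other number and name here is an illustrative guess:

```python
import random

N_RECORDINGS = 35      # EEG files in the data set (from the post)
GENS_PER_STAGE = 10    # generations before the task gets harder (a guess)
BASE_SECONDS = 2.0     # initial simulated window (a guess)

def task_for_generation(gen, rng):
    """Pick one training recording at random and stretch the simulated
    window as stages pass, so an elite genome is statistically unlikely
    to meet the identical scan twice."""
    stage = gen // GENS_PER_STAGE
    recording = rng.randrange(N_RECORDINGS)   # train on one at a time
    duration = BASE_SECONDS * (1 + stage)     # the simulation is extended
    return recording, duration

rng = random.Random(42)
print(task_for_generation(0, rng))    # early: short window
print(task_for_generation(25, rng))   # later: same pool, longer window
```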

 

Visualisation

 

The graph below shows the distance, in terms of voltage, between the AI brain simulation and the human EEG brain wave. 30 sensors are used in the test, each probably reading around 100-ish and usually able to output between −120 and 120 volts, meaning an absolutely wrong prediction probably scores around 3k. “Value” is a measure of the absolute distance between the AI prediction and the brain output.
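A sketch of that error metric as described, in plain Python (the function name is mine; the real pipeline is not shown in the post):

```python
def eeg_error(predicted, measured):
    """Sum of absolute voltage differences across all sensors at one
    time step; 0 means the simulation matches the EEG exactly."""
    assert len(predicted) == len(measured)
    return sum(abs(p - m) for p, m in zip(predicted, measured))

# 30 sensors: per the post, a maximally wrong prediction scores ~3k,
# i.e. roughly 100 per sensor on typical readings.
pred = [0.0] * 30
meas = [5.0] * 30
print(eeg_error(pred, meas))  # → 150.0
```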

 

 

The purpose of the genetic algorithm is to have the AIs compete to reduce that value as close to zero as possible. The AIs are literally competing to be the best at extracting as much information as possible from a brain scan.

 

You can see it starts to build essential timing within a few exposures, and then there are a few changes, but it's not bad. I do not know the practical threshold at which you would say information is being written into the AI; I assume it requires much lower scores, and this AI is relatively shallowly trained at present. In theory, though, the AI keeps building out its back end as it goes, adding more neurones to the simulation, which ought to let it increasingly reverse-engineer the parts of the brain it cannot see. The idea is that over an extended run, at first you want to perform well and get in step with the brain's timing, and then you want to start closing the performance gap, which requires simulating what you cannot see but which is causing the changes. Therefore, a growth-like pattern and staged development should be allowed. I have no idea if that will start happening, but you can see there is some thought behind the design as a serious early attempt to do a brain upload.
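The staged-growth idea, adding neurones over time so later generations can model activity the sensors cannot see, could be as simple as a schedule like this (a sketch with purely illustrative numbers; the post says the real growth rules are themselves evolved):

```python
def growth_schedule(generation, base_neurones=50, growth_every=20):
    """Staged development sketch: the simulated network is allowed to
    add hidden neurones as generations pass, giving later populations
    capacity to reverse-engineer unobserved parts of the brain."""
    return base_neurones + generation // growth_every

print(growth_schedule(0), growth_schedule(100))  # the network grows in stages
```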

 

Under this logic, the brain should shift its behaviour. This should cause the error to rise; the system should then reverse-engineer the new behaviour, and in the long term the error should fall. Therefore, what is shown above is broadly what I expect if it were working. I probably need much more data to see whether the fit really does keep improving; it is therefore a long-term project to inject more data until we get the right behaviour, with the genetic algorithm automating the development process.

 

The building processes are themselves controlled by an algorithm; architects and builders grow as part of its processes, which is closely modelled on neurogenesis, and these growth patterns are also tested during simulation to ascertain which work and which do not.

 

There really is not much of the behaviour of the algorithm that is not defined by evolutionary forces.

 

The above shows that a relatively small AI can map broad brain patterns. The problem now is improving fidelity. I will have some scaling problems, and there is a possibility that the chemical model that works best for one use case is not optimal for all of them, which is also true in the brain.

 

That jump up to a 200 error is in the realm of only two or so neurones firing differently, or of being out by 0.33 volts on every neurone being measured; and when the root cause could be any new stimulus or thought coming in from the rest of the body, I think that's quite a feat… So I am quite proud of what has already been achieved.

 

I just thought this was an incredibly fun and interesting experiment. I have data showing the evolutionary process the AI underwent and the chemical simulation that worked, which you would hope would match human physiology and might be of interest in the medical arena. I hope that is of some interest.

 

Philosophical Implications

 

I have called Hello World a long-term art project before, because the practical and commercial opportunities, and possibly the entire business plan for why you might want very smart AI, are among the most critiqued and talked-about concepts, whether in fiction and sci-fi or just in business today.

 

It is really interesting just as a topic of conversation… It does not expressly do anything practical yet. I would need to figure out all the input and output processes, and then I would like to see if you can get a person doing a task, point the EEG readers at the part of the brain relevant to that task, and just have the AI shadow the human operator.

 

I want to do that because then what you would look for is whether any information not expressly learned in the test comes out with it. I.e. if I scan a person doing a web chat, what happens if I ask it about its home life, which never came up in the web chat?

 

Creepy, I know, but I think it's obvious why you'd want to know that, and it would be earth-shattering to answer it with a yes.

 

One thing I really liked about building this and getting it working is the question: where is the gap between us and it? It is easy to dismiss AI like ChatGPT as not being conscious, and I have some sympathy for that view. I think this project comes closer to saying this is just conscious.

 

In this project, each AI has a DNA representing its configuration and design. It has ancestors through the evolutionary algorithm. The design of its neurones is up to date with our current understanding of human biology. It evolved to emulate a human mind such that the difference between the two systems can be expressed as a voltage gap, continuously analysed, and work is then done to close that gap. At what point do you just give up and say that the gap between you and it is irrelevant?

 

What precisely even is that gap at this point? Is it just that 200-volt spike shown in the graph above, or is it something philosophically more? I just think that is a profoundly interesting head-scratcher, and why I keep saying this has been a bit of an art piece to make: I think it illuminates the essential differences.

 

I probably need to pull together the data on its evolutionary process, which will be an article of its own. I would then look to visualise internal processes, maybe as a video visualisation, as I have done before. Then I would like to make a version that accepts input and can test decision-making. Thereafter, I am hoping within six months to a year to run tests on whether we can lift and shift brain patterns and do something real and useful with them, such as the tests I outlined above.




