Hello World - Database for Evolution

Published on 15 September 2025 at 17:00

"Biology is the science. Evolution is the concept that makes biology unique". - Jared Diamond


This is my weird coding blog, where I've been trying to build a brain emulation.

So I have an AI. It runs calculations of the behaviour of mammalian neurones, packaged into a self-assembling neural network that imitates neural plasticity, and I managed to get it to learn words.

I then wrote a bit of a long blog post about quantum mechanics and the tests you would need to run to show such a brain emulation was working. I'm self-aware enough to know that's odd, but I think it really depends on what school of quantum mechanics you believe in.

I then set myself the achievable target of uploading a human brain wave into a machine. You know, as wholly normal people routinely do...

 

It's differential equations and statistics all the way down.

 

Whether there are branches of quantum mechanics which use purely differential equations with statistical interactions (i.e., without the need for wavefunction collapse, observers, or additional postulates) is a topic of ongoing research and debate. Here are the most relevant interpretations and approaches that align with that description:

 

1. Bohmian Mechanics (Pilot-Wave Theory)


- Core Idea: Particles have definite positions at all times, guided by a "pilot wave" described by the Schrödinger equation.
- Differential Equations: The evolution is deterministic, governed by the Schrödinger equation for the wavefunction and a guidance equation for the particles.
- Statistics: The statistical predictions of standard quantum mechanics emerge from the initial distribution of particle positions.
- No Collapse: There is no need for wavefunction collapse; the appearance of collapse is due to the effective decoupling of the system from its environment.

 

2. Many-Worlds Interpretation (MWI)


- Core Idea: The universal wavefunction evolves deterministically via the Schrödinger equation, with no collapse.
- Differential Equations: The entire universe is described by a single, deterministic wavefunction.
- Statistics: Probabilities arise from self-locating uncertainty—you don’t know which "branch" (world) you’re in.
- No Additional Postulates: MWI aims to require only the Schrödinger equation; the standard probabilistic interpretation of the wavefunction is argued to emerge from it.

PS: I don't know if I buy the many-worlds hypothesis. I point to it more to say that, even as I unfairly plug this idea of simulated brains on a computer, the idea is possibly not defeated by the more outlandish and, dare I say it, magical interpretations of quantum mechanics.

 

3. Stochastic Mechanics (Nelson, 1966)


- Core Idea: Quantum mechanics is a classical statistical theory of particles undergoing random motion, with the Schrödinger equation emerging as a description of this stochastic process.
- Differential Equations: The dynamics are described by stochastic differential equations (e.g., forward and backward diffusion processes).
- Statistics: The statistical properties of the particle trajectories reproduce quantum mechanics.

 

4. Objective Collapse Theories (e.g., GRW, CSL)

 

- Core Idea: The wavefunction evolves deterministically most of the time, but is subject to spontaneous, random collapses.
- Differential Equations: The evolution is described by a modified Schrödinger equation with nonlinear or stochastic terms.
- Statistics: The collapses introduce randomness, but the rest of the evolution is deterministic.


Maybe the real observer effect is the friends we made along the way

 

So my research concludes there are interpretations of quantum mechanics which let me get away with believing that you could run a conscious brain as differential equations on a classical computer.

Even where "random" we know pseudo random number generation exists. Therefore even random does not require non-deterministic evolution of the system. Random numbers can arise from maths as Von Newman has already shown.

I also note the wavefunction's evolution is a solved differential equation. The human brain is a series of Hodgkin-Huxley differential equations, first solved for the giant axon of the Loligo squid in the 1950s, which we have been systematically examining and adding to ever since.

I have my own version, where I have looked at adding in the details of mammalian ion-channel calculations. Looking at it closely, I noted that one way you could change such a differential equation's behaviour would be micro-changes in the amounts of different chemicals in the cell.
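To make that concrete, here is a minimal sketch of the classic Hodgkin-Huxley equations in Python, using the textbook squid-axon constants rather than my mammalian values. The point is that the channel conductances sit in the equations as plain numbers, so a "micro change" in the cell's chemistry is just a nudge to g_na, g_k, or g_l:

```python
import math

# Original Hodgkin-Huxley rate functions (V is depolarisation from rest, mV)
def alpha_n(v): return 0.01 * (10 - v) / (math.exp((10 - v) / 10) - 1)
def beta_n(v):  return 0.125 * math.exp(-v / 80)
def alpha_m(v): return 0.1 * (25 - v) / (math.exp((25 - v) / 10) - 1)
def beta_m(v):  return 4 * math.exp(-v / 18)
def alpha_h(v): return 0.07 * math.exp(-v / 20)
def beta_h(v):  return 1 / (math.exp((30 - v) / 10) + 1)

def simulate(g_na=120.0, g_k=36.0, g_l=0.3, i_ext=10.0, dt=0.01, steps=5000):
    """Forward-Euler integration of one Hodgkin-Huxley neurone.
    The conductances g_na, g_k, g_l are the 'chemical' knobs."""
    c_m = 1.0                              # membrane capacitance (uF/cm^2)
    e_na, e_k, e_l = 115.0, -12.0, 10.6    # reversal potentials (mV)
    v = 0.0
    # gating variables start at their resting steady-state values
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    trace = []
    for _ in range(steps):
        i_na = g_na * m**3 * h * (v - e_na)   # sodium current
        i_k = g_k * n**4 * (v - e_k)          # potassium current
        i_l = g_l * (v - e_l)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        trace.append(v)
    return trace
```

Doubling g_k, for instance, changes the spike shape and firing rate without touching anything else in the model.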

So if you could vary those amounts in response to error, and thereby change the differential equations, that sounds like it would make a powerful neural network. Furthermore, under some interpretations of quantum mechanics, that's it; that's consciousness.

What makes quantum mechanics intractable is that, under many interpretations of it, in order to have an AI that was conscious it would also need to be an observer in things like the double-slit experiment. Given that the observer effect is almost this semi-mythic thing, it is hard to see how a regular classical computer could create something with that trait. Though if we can jettison that idea, it begins to be philosophically possible to talk about a conscious AI.

Though if you were able to remove the observer from the problem, then all things are capable of being represented as differential equations. These differential equations, when running inside a neural network, would have timing and momentum. During training that timing could be aligned with an external object (for my brain-upload idea, a brain), and in some sense the network would be entangled, in a loosely quantum way, with the set of behaviours of the target object. That entanglement would be enforced by the firing of different neurones, which would share timing because the differential equations are the same, and it would only break when one side or the other was disturbed, i.e. it would follow the classical ideas behind quantum mechanics and the observer effect.

All you'd need to do is get it really precise at modelling that timing. That is the subject of today's blog: using evolutionary algorithms to try and data-mine out what that process would look like in a brain emulation.

If you can get that right, I can't see what would stop such a design being expressive of all the processes that exist between quantum computers and other artificial neural networks. It appears at least worth a go, to my mind...

Think of it like this: the brain is a series of differential equations, some of which have timing that is being "entangled" with outside processes by precise alignment between external and internal timing, driven by learning feedback loops, while other parts of that brain act like a traditional AI. Such a process would by definition produce something with a world map of what it thinks is going on in its external world, and it feels like it would be how you could mathematically unify biological models of neurones and classical AI in a way that conforms to quantum mechanics.

Such an AI, when getting further feedback from learning, would create micro-misalignments between neurones. These small misalignments in spiking neurones (like animals have) would create a bubble of difference that could be amplified by neurones attuned to these changes (which is why I think our brain uses spike-shaped neuronal firing potentials). These neurones fire and tell the network that such a misalignment occurred, allowing the network to compensate moment to moment. This would allow second-to-second updates of its timing functions, at which point there would probably be little difference between such an AI and our consciousness.

Processes like the above AI would be affected by any outside intervention, as the timing is in part how the system works.

As a closing thought on the theory, I noted that if you had the observer effect, it might just be that cells and molecules are so small that any intervention that allows measurement also disrupts the onward evolution of the differential equations. I.e. the observer effect is not magical; it's just that you, the observer, are in a catch-22. Therefore the solution might be to run countless simulations just to get ever better guesstimates of what's going on in what you cannot observe.

A Type III civilisation might automate that process by building a Dyson sphere and just running simulations of the evolution of the whole universe. They might do this because such simulations might be the easiest way to start getting real insight into quantum processes, is my thought (or a great/bad sci-fi movie pitch).

But how would I do that on a small, cheap home computer I got second-hand...

 

Evolution on a home PC?

 

So I basically forgot to go and start on my checklist of things I need to do to get a brain-wave upload going, and did this instead.

I took my existing brain emulation code and put in a bunch of hyperparameters.

Each of these hyperparameters sets how much chemical change can happen within the model. In theory, if all of these values are precisely aligned, you get increasingly smart AI, as it doesn't just learn in one dimension but in one dimension for each of the chemicals or processes being simulated. That is a reason it would likely be highly performant if you could fine-tune all these values.
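Purely to illustrate the shape of the thing (these names are invented for the sketch, not my actual parameters), each hyperparameter is one bounded "chemical change rate", and one candidate AI is just one value drawn for each:

```python
import random

# Invented names for illustration - in reality there are 38 such knobs,
# one per simulated chemical process, each with its own search bounds.
SEARCH_SPACE = {
    "calcium_influx_rate":    (0.0, 1.0),
    "potassium_leak_rate":    (0.0, 0.5),
    "neurotransmitter_decay": (0.01, 2.0),
    "plasticity_update_rate": (1e-5, 1e-1),
}

def random_genome():
    """One candidate brain: a random value for every chemical knob."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in SEARCH_SPACE.items()}
```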

See below: these are error graphs for the AI's performance. The point is that by changing the hyperparameters I seem largely able to create any wave shape that seems possible.

It's another reason I'm excited: it stands to reason that a trainable wave-producing AI could produce this level of variety. Thereafter you only need to find the combination of variables that ensures it learns and converges, and intelligence should be emergent. (Fingers crossed.)

 

You will note a number of them score positively, i.e. they correctly guess the right next letter when listening (listening is the wrong word, but I'm not sure what else to call it) to the text they are trying to predict.

The fact that they have brain waves, and are learning language in a way that's analogous to how we learn, is why I'm excited about their potential.

I don't know; all I can say is there are a lot of very differently shaped brain waves, they all come from the same base algorithm with only minor changes in chemical change rates, and that would match my quantum and differential-equations theory.

 

Database

 

I therefore now have code with all its hyperparameters built out. I then spent a while building three systems.

Firstly, I built an observation database that could store any changes to the code. I then iterated and started storing records of the changes as if it were a DNA database, where any attribute of the brain is stored.
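A minimal sketch of what such a "DNA database" might look like, assuming SQLite (the table and column names are my own illustration, not the real schema):

```python
import sqlite3

conn = sqlite3.connect("evolution.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS genomes (      -- one row per candidate brain
    genome_id INTEGER PRIMARY KEY,
    created   TEXT DEFAULT CURRENT_TIMESTAMP,
    status    TEXT DEFAULT 'pending'      -- pending / running / crashed / done
);
CREATE TABLE IF NOT EXISTS genes (        -- one row per gene per genome
    genome_id INTEGER REFERENCES genomes(genome_id),
    name      TEXT,                       -- e.g. 'calcium_influx_rate'
    value     REAL
);
CREATE TABLE IF NOT EXISTS successes (    -- performance observations
    genome_id INTEGER REFERENCES genomes(genome_id),
    step      INTEGER,                    -- character position in the book
    correct   INTEGER                     -- 1 if the next letter was guessed
);
""")
conn.commit()
```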

I have built genetic algorithms from scratch before. Here's a YouTube video (I know, I still cringe at old work).

 

https://youtu.be/_QXXV4Y_SpI?si=GfUUkThtibd9ReIQ 

 

I've always found it enchanting that you can run evolution on your own computer. I periodically test random algorithms against the stock market, and I did that once with evolving mathematical formulas to predict it. Though this was a bit of a scale-up, and I previously did not database the observations.

The stock market video is here. Again, let me just say I'm not a financial analyst. It's just to show that the concept I'm getting at is useful in more places than you might think. It's also wildly out of date, and better algorithms might be available; please consult your local financial analyst before gambling/investing in the stock market.

https://youtu.be/vUvREoVvkXg?si=TXt109ZUGmLVA_-L 

So, evolution: it's a really fun thing to show your kids on a computer. The question is: I have this brain emulation, I have all the variables that I can change, but I don't know what to change them to.

 

First Do Random Things And Observe

 

My first step was to build a database of about 100-200 completely randomised observations: just create lots of awful, really bad AI.

I initially set it up quite "hot", i.e. most tests will mathematically error out, at which point the simulation stops. An analogy to us would be the AI having a stroke, but that is overly anthropomorphising them. Some also just suffer signal death quietly. I've chosen a high "death toll" to start the iteration and evolution process going. I had to make a few manual adjustments to ensure it wasn't completely random, but each of those adjustments constrains the variety of parameters and the search area the evolution takes place in.

I think I've got a good mix now, but given the number of initial interventions I made, I'm glad I started it on a weekend.

You need about 10 bad AI per parameter you want to optimise. I have 38 parameters, so I'm still building up that random sampling now.

I then have them log their initial randomly generated "genes" into one table and their successes (i.e. where they guessed the right answer) into another. I am using a separate process to sort out concurrency, so they don't knock each other over by accessing the same file or table at the same time.
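For illustration, here is one common SQLite-level way to make concurrent logging safe (write-ahead logging plus a busy timeout); a sketch of an alternative, not my actual mechanism, using the illustrative schema above:

```python
import sqlite3

def open_logger(path="evolution.db"):
    """Open a connection that tolerates concurrent writers: WAL mode lets
    readers overlap with a writer, and the timeout makes a blocked writer
    wait rather than immediately raising 'database is locked'."""
    conn = sqlite3.connect(path, timeout=30.0)
    conn.execute("PRAGMA journal_mode=WAL")
    return conn

def log_success(conn, genome_id, step, correct):
    with conn:  # the context manager wraps the insert in a transaction
        conn.execute(
            "INSERT INTO successes (genome_id, step, correct) VALUES (?, ?, ?)",
            (genome_id, step, int(correct)),
        )
```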


But Evolution Is Not Enough

 

So I do not want to wait a few million years...

Therefore I created a second algorithm. This algorithm pulls the data out of the database, cleans it, puts it into tables, and dumps the data out as graphs in another folder.

I can run this daily to get updates on what combination of "genes" gives what performance. I'm currently using scatter graphs to appraise individual traits and heat maps to compare multiple traits.
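The daily reporting pass might look something like this sketch (assuming pandas and matplotlib; the column names follow the illustrative schema above):

```python
import os
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt

conn = sqlite3.connect("evolution.db")
os.makedirs("graphs", exist_ok=True)

# Score each genome by its fraction of correct guesses, joined to its genes.
df = pd.read_sql("""
    SELECT ge.name, ge.value, AVG(s.correct) AS score
    FROM genes ge
    JOIN successes s ON s.genome_id = ge.genome_id
    GROUP BY ge.genome_id, ge.name
""", conn)

# One scatter graph per gene: gene value on x, performance on y.
for name, grp in df.groupby("name"):
    ax = grp.plot.scatter(x="value", y="score", title=name)
    ax.figure.savefig(os.path.join("graphs", f"{name}.png"))
    plt.close(ax.figure)
```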

Unfortunately it's a sparse space, so it will take a while to flesh out all the different combinations of these multiple continuous variables.

Therefore I built a couple of different variations on regression and Bayesian optimisers to "guess" the next genetic combination to trial, put it into the database, and flag it for implementation, which the AI will pick up if available, defaulting to random if not.
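As one sketch of what the Bayesian flavour might look like (assuming scikit-learn and treating each genome as a 38-element vector; a toy version, not my real code): fit a surrogate model to past (genes, score) observations and propose the candidate with the most optimistic prediction:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def propose_next(X_seen, y_seen, bounds, n_candidates=1000, rng=None):
    """X_seen: (n, 38) array of past genomes; y_seen: their scores;
    bounds: (38, 2) array of (lo, hi) per gene. Fits a Gaussian-process
    surrogate and returns the random candidate with the highest
    'predicted mean + exploration bonus' (upper confidence bound)."""
    rng = rng or np.random.default_rng()
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_seen, y_seen)
    lo, hi = bounds[:, 0], bounds[:, 1]
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    mean, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mean + 1.0 * std)]
```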

One of them is just a genetic algorithm: pick healthy gene lines, make minor edits, and see what that causes.
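That one is the simplest; a sketch (again with invented names):

```python
import random

def mutate(parent_genome, bounds, rate=0.1, scale=0.05):
    """Copy a healthy genome and give a few genes a small nudge, clamped
    to each gene's search bounds."""
    child = dict(parent_genome)
    for name, value in child.items():
        if random.random() < rate:  # mutate roughly 10% of genes
            lo, hi = bounds[name]
            nudged = value + random.gauss(0, scale * (hi - lo))
            child[name] = min(hi, max(lo, nudged))
    return child
```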

A few others resemble automated educated guesses trying to triangulate new tests in areas that produced successful specimens previously.

Another takes all the best performers and places the next test at a performance-weighted mean of their genes.
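Sketched, that is just a score-weighted centroid:

```python
def weighted_mean_genome(top_genomes, scores):
    """top_genomes: list of {gene: value} dicts; scores: matching list of
    performance scores. Returns a genome at the score-weighted centroid."""
    total = sum(scores)
    return {
        name: sum(g[name] * s for g, s in zip(top_genomes, scores)) / total
        for name in top_genomes[0]
    }
```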

I will need to build up the data, and then modify the test to start measuring which of them produces the best results, so I can iterate and improve.

I can do this because most of these algorithms treat it all a bit like a map (a map with 38 dimensions, mind you), so groupings of similarly correct AI are likely to sit near other correct AI. Though to make sure it doesn't get too siloed, a further algorithm called the contrarian will, with some random chance, create purposefully "bad" tests, to make sure as much of the map is looked at as possible.
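The contrarian is a one-liner in spirit: with some small probability, ignore everything learned so far and sample the map uniformly. A sketch:

```python
import random

def next_test(proposer, bounds, contrarian_chance=0.1):
    """Usually ask the clever proposer; occasionally go deliberately 'bad'
    by sampling uniformly, so unexplored regions of the 38-D map get looked at."""
    if random.random() < contrarian_chance:
        return {name: random.uniform(lo, hi)
                for name, (lo, hi) in bounds.items()}
    return proposer()
```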

In summary, I basically automated what I used to do by hand: trial and error has become automated trial and error, but now it's trying to learn how to change every aspect of the cell, where before I could only test one part at a time with A/B testing. It's filling a database with genetic codes and fleshing out graphs of the space.

That database might even be helpful for us: because this is a brain emulation, it parallels our own brain. The same chemical imbalances that end its function are also likely to end ours. Therefore the same data that might help fine-tune it will probably also tell us about chemical imbalances that could damage us. Part of the original idea behind other brain-emulation projects was that they might give us data about dementia.

So, who knows, maybe one day AI will save your life... It could be that this AI never gets especially smart, but its being cognitively similar to us might give me a database of measurements of which chemical imbalances damage a brain. Dementia is linked to imbalances in learning processes, so it's possible (though more likely related to higher-level learning processes) that imbalances in the more complicated learning processes push individual neurones into a chemically imbalanced state.

Therefore there is a small chance that the graphs, once filled with test data, will tell me something.

But evolution and my algorithms managed in a few weeks what took me months... it's talking again, but this time it did it itself.

 

 

 

Every day I'm getting through about 20 or so tests. I'm aiming for 10 random tests and 10 AI-assisted guesses per day. There's now a cadre of 10 AI that have survived all that and have existed for about a week now. Their simulation will end after reading the whole book, which is just under a million characters long.

That feels like a good testing process: selection for AI that can run without some sort of internal maths error, plus a short performance test of how well each one learns.

I'm quietly confident one will get there. There's a growing tendency for their offspring to "survive" and not crash out, at which point progress will slow: I only have so much compute, and the longer they survive the longer they occupy a slot, so I will need to be in it for the long haul at that point.

The children of these survivors (one of which is the AI shown above) already show better performance.

It looks good for long-term data collection on my brain emulation's performance. I can get back to the other tests and leave these running in the background.

And who would have thought it: evolution is just statistics over millions of years.


Final Thoughts

 

I am glad I did this before getting on with my checklist. It's good to have this data building up on exact performance in this brain emulation.

I also trust an algorithm more when its performance has been thoroughly tested, and this setup has let me do that. I also hope it will do a better job of optimising than me.

So I hope to see you in two weeks, having started the planned tests.

It's like Jurassic Park for Skynet, which is a sentence I never thought I'd write, as much as you never thought you'd read it.

 

 

 

PS: Yes, I don't know what I'm talking about with quantum mechanics; I never attended a university-level lecture on the subject. I am merely trying to interpret it within my own field, so as not to attempt something that cannot work: if consciousness is more than differential equations, then clearly what I am doing would not work, and I want to assess that risk. I've yet to be debunked by physicists, so, shrugs...

What I would say is that it feels suspicious to me that the most common interpretation of the observer effect puts your consciousness and mine in place as some sophistic determiner of the outcome of the double-slit experiment within quantum mechanics. I remain a little suspicious of this.

 

Again, not a financial analyst. I actually think the stock market is possibly a stochastic process based on moment-to-moment integration of changing information, meaning that "advice" is likely something very few people can give. Therefore I add my traditional disclaimer whenever I so much as mention the stock market: I am not a person capable of giving said advice. You may lose the contents of your house; tax people might come for you. It might not be worth it, especially if the ideas you're trading on are not your own.
