Hello World - Test Plan

Published on 5 May 2025 at 09:27

"Creativity is the joy of not knowing it all" - Ernie J Zelinski

 

This is my weird software development blog. 

I have been working on this biologically plausible AI. I feel that I have to explain what that means each time: I am trying to take a detailed simulation of the human brain and make it useful.

I have really got into the way you can simulate growth processes in the brain, and I am trying to use that to make a more efficient AI design.

 

Video on current build

 

Brain Simulation Test Demo - Part 4: added in chemical readout

 

The current test makes me happy that the AI does seem efficient at building itself under its neuroplasticity rules.

The above is getting transformer-level performance, uses no inputs, and the AI builds itself out from neuroplasticity rules.

I think this is great because, if you know how much water and electricity current AI uses, this uses a lot less, and does so while stepping the neurones up towards simulating the structure of mammalian neurones.

But it makes sense: our brains are very energy efficient compared to the AI we currently build, so maybe we should be looking at them in order to slim down our current tech.

Though it's not massively different from what I built before, so it's not capability-shattering.

I do have to make a retraction though: I had worked on and posted a video where the AI got 75% at one point, and while that did happen, more testing shows that was a rare outlier in terms of performance.

 

Introduction

 

So my plan for this post is to step through a testing plan to work out all the things that I do not know, and I think that will form the basis of what I post on this blog in the future.

 

Stuff I Do Not Know

 

I think I could do with graphing and visualising the internal chemical composition inside a single neurone over time. It feels like that would be a big deal and I could probably get a lot out of it. One of the insights I found was using these as internal cues for how to keep the network stable while building out. With the cells being much more liquid, and having multiple chemicals in the cell simulation, I wonder if anything odd happens that could act as a pointer to the state of the cell.
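As a rough sketch of what that graphing could look like, here is a toy single-neurone chemistry log. The chemical names, kinetics and numbers are all invented for illustration, not the actual cell model:

```python
import math

def simulate_neurone_chemistry(steps=200, dt=0.1):
    """Toy single-neurone chemistry log (hypothetical species: calcium, atp).

    Records two internal chemical levels over time so they can be graphed
    later; a real cell model would track many more species with real kinetics.
    """
    calcium, atp = 0.1, 1.0
    history = {"t": [], "calcium": [], "atp": []}
    for step in range(steps):
        t = step * dt
        spike = 1.0 if step % 50 == 0 else 0.0          # pretend the cell fires periodically
        calcium += dt * (spike - 0.5 * calcium)          # influx on a spike, then decay
        atp += dt * (0.05 * (1.0 - atp) - 0.2 * spike)   # spent on firing, slowly restored
        history["t"].append(t)
        history["calcium"].append(calcium)
        history["atp"].append(atp)
    return history

log = simulate_neurone_chemistry()
```

From there something like `matplotlib.pyplot.plot(log["t"], log["calcium"])` would give the over-time graph, and odd shapes in those traces would be the "pointers to the state of the cell" worth chasing.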

That sounds a bit odd, but if you think of the AI as just a simulation of neurones, which are themselves just simulations of liquids, then training it to fire more or less is probably analogous to stirring a cup of tea clockwise or anticlockwise. In that sense there is probably a lot of information in the network about its own learning.

I.e. if a cell fired but then receives a sudden large jolt to counter this, I wonder if that would show up as an effect in the cell's simulation, and possibly across the network. You could have multiple learning rules that take advantage of understanding the changes in a cell better.

Currently the AI learns under neuroplasticity rules, reorganising on its own both its structure, i.e. which neurones connect to which, and the connection weights between those cells. But I would want to see if that could be improved upon. At the moment it manages this between cycles of pruning the model and adding new connections to keep it alive. This would be looking at whether the present state of the network gives any hint on how best to add in new neurones, or whether there are "distress" states that would indicate other things could be changed in the network.
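A prune-and-grow cycle of that shape could be sketched roughly like this. To be clear, the selection rule, fractions and cell counts here are all made up for illustration, not the simulation's actual neuroplasticity rules:

```python
import random

def plasticity_cycle(weights, prune_frac=0.1, grow=5, n_cells=20, rng=None):
    """One hypothetical prune-and-grow cycle.

    weights: dict mapping (pre, post) cell ids to connection strength.
    Prunes the weakest fraction of connections, then adds new random
    ones with small starting weights so the network keeps building out.
    """
    rng = rng or random.Random(0)
    # prune: drop the weakest prune_frac of connections by magnitude
    ranked = sorted(weights, key=lambda edge: abs(weights[edge]))
    for edge in ranked[: int(len(ranked) * prune_frac)]:
        del weights[edge]
    # grow: add new random non-self connections with small starting weights
    while grow > 0:
        edge = (rng.randrange(n_cells), rng.randrange(n_cells))
        if edge not in weights and edge[0] != edge[1]:
            weights[edge] = rng.uniform(-0.1, 0.1)
            grow -= 1
    return weights
```

The "distress state" question is then whether anything in the cells' internal chemistry could replace the random choice of where to grow.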

I plan to start writing a version of the code that uses data mining and error logging to investigate this. The first draft already works and suggests changes to the AI that I can then test in a t-test. The initial version has now been tested for this, and it methodically checks which changes to a cell cause which effects.

The initial version needs to be expanded, but it is already basically capable of rerunning a cell's time step without changing it and gathering data on what a proposed change would do, as a first-order sanity check. The problem though is that a lot of the changes you might make have second- or third-order effects on whole-network performance.
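The first-order check amounts to a counterfactual rerun: step a copy of the cell with and without the proposed change and diff the outcomes. A minimal sketch, with a toy leaky-voltage cell and a hypothetical tweak standing in for the real model:

```python
import copy

def first_order_effect(cell_state, step_fn, change_fn):
    """Rerun one time step with and without a proposed change (sketch).

    step_fn advances a cell's state dict by one tick; change_fn applies
    the proposed tweak. Both run on copies, so the live cell is never
    touched. This only captures first-order effects on that one cell,
    not knock-on effects across the network.
    """
    baseline = step_fn(copy.deepcopy(cell_state))
    changed = step_fn(change_fn(copy.deepcopy(cell_state)))
    return {key: changed[key] - baseline[key] for key in baseline}

# toy cell: a leaky voltage driven by a constant input current
def step(cell):
    cell["v"] += 0.1 * (cell["input"] - 0.2 * cell["v"])
    return cell

def proposed_change(cell):
    cell["input"] *= 1.5   # hypothetical tweak: boost input current by 50%
    return cell

delta = first_order_effect({"v": -0.5, "input": 1.0}, step, proposed_change)
```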

I plan to build up a study of which changes to the network would potentially change its performance. I think I have learned more by trying to break the simulation. Nearly always I end up coming back and preferring to emulate mother nature's designs, but there is the interesting chance here of finding something that makes the AI smarter that does not emulate how the mammalian brain does it.

An idea that I have is that this might be how the brain does it. The body switches genetic codes on and off, and has glial cells changing the exact chemical compositions of cells. It really is layered learning systems. So currently I'm building a data mining system to learn how the neurones work, but one thought I had was to just leave it on when I'm finished.

But if that's not what the brain does, then some of these tests would result in moving away from the mammalian brain cell design.

I already have one test on the speculative biology front, seeing if certain changes I have made work. That goes into testing next week to see if it outperforms the current version.

A thing I have noticed, and if you know the maths it probably makes sense why this would be useful: being able to increase the voltage above human capacity would probably work, but currently that burns out my simulation and generates NaNs.
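One crude guard against that kind of burn-out is to check each voltage update for non-finite values and hard-clamp it to a ceiling. The equation and the numbers below are placeholders, not the simulation's actual dynamics:

```python
import math

def step_voltage(v, input_current, dt=0.1, v_max=200.0):
    """Leaky integration step with a blow-up guard (hypothetical numbers).

    Pushing the voltage far beyond its normal range can make the update
    produce NaNs; failing loudly plus a hard clamp (or a smaller dt) is
    one stopgap while experimenting with above-human voltages.
    """
    v = v + dt * (input_current - 0.1 * v)
    if not math.isfinite(v):           # catch a blow-up explicitly
        raise ValueError("simulation produced a non-finite voltage")
    return max(-v_max, min(v_max, v))  # hard clamp as a stopgap
```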

 

Goal

 

The current performance is about the level of a transformer, maybe a little more, but with a lot fewer resources and with a few oddities, like it seems to have great memory capabilities.

I can detect disruption to the network when it grows, and I remain concerned it does not have a route from small to seriously large-scale simulation. Testing shows that just adding neurones outside the network's own rhythm is disruptive, so I would have to work within the simulation rules to find how to get into large brain simulation territory.

I am around 50-60% accuracy, with one outlier of a 75%-accurate version. I estimate I could pick up another 5% if I just ground it out on the same training, like you do with a transformer, but this test was meant to show it was relatively stable. Though I have always found it works better focusing on speed and learning over memorisation.

I reckon if I could get to 75-80% accuracy it would make one hell of a chat bot, because the AI is not memorising but learning in ways similar to us; I think that would put it on the way to being like us. I think the 90% area is impossible, but that would be ASI: mathematically it is the region you could reach if the AI effectively learned everything and always remembered its learning.

I have simple methods for adding and removing connections. It might be useful to look at neurone deletion protocols.

 

Plan Of Action

 

I may take a break and post about another project while I get all that running. But my idea is to work through that testing plan.

Even if this does not yield towards a better AI design it has been really fascinating doing my own research on emulating the brain.

I think I have three swim lanes to getting this to work better. One is visualising: the videos have helped me a lot, especially when building something, because they act as a sanity check. I have found big-data statistical testing works really well, and data mining means the tests I am doing seem to help detect the right changes to try in those statistical tests.

I think it's been interesting, but what I have been learning, especially when building something big, strange and scary like a brain emulation, is to do the basic things: plan, get data about the thing you are building, and always fall back on statistical A/B tests.
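Those A/B tests boil down to comparing two samples of per-run accuracies. A minimal sketch using Welch's t statistic, with made-up accuracy numbers standing in for real runs:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two samples of run accuracies (A/B sketch).

    A rough screen: |t| well above ~2 suggests the two variants really
    differ; for proper p-values and degrees of freedom, use
    scipy.stats.ttest_ind(a, b, equal_var=False) instead.
    """
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va / na + vb / nb) ** 0.5

# hypothetical per-run accuracies: current build vs a proposed change
current = [0.52, 0.55, 0.50, 0.57, 0.54]
proposed = [0.58, 0.61, 0.56, 0.60, 0.59]
t = welch_t(proposed, current)
```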

The data mining I'm planning on doing really is just big automated data collection and analysis.

So far, every time I have thought I knew best I have usually regretted it... have opinions, but check the data afterwards. That has been the hard lesson in building this.

 
