"Unless commitment is made, there are only promises and hopes but no plans" - Peter F Drucker
This is my weird coding blog, where I build strange things out of code.
I've been trying to build a brain emulation, and this is a slightly strange post in which I try to plan out how I'd confirm whether said brain emulation actually worked.
I might be going on a rant, but it really hit me while writing this why I think there's an oddity to current AI approaches that, once you see it, you cannot un-see.
This is my blog on trying to build a biologically plausible neural network, one patterned after the same biology as us.
It uses the Hodgkin and Huxley differential equations to build an ordinary differential equation (ODE) neural network that follows what biology describes. I started with squid neurones, as described by the original Hodgkin and Huxley equations, then updated it to use the ion channels described for the mammalian brain.
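For readers who haven't met them, this is roughly what those equations look like as code. Below is a minimal, textbook single-compartment squid-axon model with the standard published constants; it is a generic sketch of the kind of ODE each simulated neurone solves, not the code from my build.

```python
import numpy as np

# Minimal textbook Hodgkin-Huxley neurone (squid axon constants), forward Euler.
# Generic illustration only, not the actual code used in this project.
C = 1.0                                    # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387      # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting state
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
        I_K  = g_K * n**4 * (V - E_K)        # potassium current
        I_L  = g_L * (V - E_L)               # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)                 # membrane voltage over time (spikes)
```

Swapping in mammalian ion channels mostly means changing those rate functions and conductances; the structure of the ODE stays the same.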
I created this second video of a prototype, which shows the same AI build running at different speeds (artificially enforced).
https://youtu.be/2fsaH8wn62c?si=2VMkPGiUTsrkMZHO
My plan was to build out a set of tests on every aspect of its timing. I'd create a video each month on the different things I'd tried, probably posted to this blog, and I'd probably spend 1-2 years working through the 100 or so tests I theorised were combinatorially possible. In doing so I really stripped it down to the bare bones, and in fact got it working with less. Then I made a mistake in the code that took me down a development road I'd never have looked at otherwise.
I was certain the key problem was manipulating the chemical contents of the cell so that it learned both the error and how much of each chemical was involved. I probably will still investigate that in the future, because glial cells do seem to change the chemical contents of neurones, but whether that's a learning mechanism as well or just a repair process is a test I'll have to put off.
Which goes to show there's no growth without the genesis of a mistake being made.
Today I've got a version that will print to screen whenever the "word" it's thinking matches the book it's reading. I'd say that's starting to look like what I set out to build, i.e. a thinking machine.
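To be concrete about what "picks out words" means here, the check itself is trivial; something along these lines, where the model object, `feed` and `decode_word` are hypothetical placeholder names for the real (undisclosed) interfaces:

```python
# Hypothetical sketch only: feed() and decode_word() are invented placeholder
# names; the real model's interface isn't described in this post.
def watch_for_matches(model, book_tokens):
    for step, target in enumerate(book_tokens):
        model.feed(target)               # present the next word of the book
        thought = model.decode_word()    # read out the word the network is "thinking"
        if thought == target:
            print(f"step {step}: network said '{thought}', matching the text")
```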

It seems I've got a functioning brain emulation going, and I'm ahead of where I thought I'd be a few weeks ago.
End ex. Mission done, going home... well, no. I probably need to confirm the above, and be rigorous about it, to get down to the brass tacks of why that weirdly worked and what to do about it.
Therefore I've ended up writing this blog post on how I'd test a model like this to try and figure out whether it's doing anything similar to us.
Now to explain what appeared by accident.
This is a bit of a tangent. I want to explain roughly what happened without outright saying what I did, so as to protect the trade secret and so I don't sound completely like a guy shouting AI! AI!! at ever higher decibels on the internet.
I think we can all agree there are a lot of them about.
What I want to do now is think through and build a hypothesis I can test, so that afterwards we could say yea or nay to whether it's a brain emulation.
Unfortunately at some point I'm going to say the word quantum... You are going to have your doubts. My point is only to convince you that even if our brain were a quantum computer, that would seem to be no barrier, provided the simulation were correct or thereabouts.
Therefore, at least theoretically, you can steel-man the argument that there is no real barrier to AGI or ASI: in principle, if you can do the forward-pass differential equations that show the brain waves (you can), you just need to test out the learning and building methods.
It was in the learning methods that I introduced a lot of oddities relative to a normal ODE network; that's where I made that mistake and worked back from it.
Now, truth be told, I built it around a hypothesis that consciousness is something like both a system of inputs and outputs, such as normal AI does, and micro changes in amplitude. That would mean it's possible its neurones change their output when the error is propagated, and the rest of the network is able to measure that change through the change in itself. Therefore, by reading the changes in its own state, it's weirdly aware of its own learning.
This sounds big and pompous, but I think it's simple if you accept that any learning or change to a sufficiently advanced AI would also change the moment-to-moment functioning of that AI, and that the network can therefore probably at least detect said changes.
That feels simple and intuitive, but it does mean that if I train an AI it might, instead of mapping from inputs to outputs, increasingly rely on cycling signals, and that would not be possible in our current AI due to exploding and vanishing gradients.
Though in something like our brain I bet our own thoughts do work like that and do not have that issue. So those amplitude changes are probably more important, and I'm guessing they're a driving force in us that causes thinking: rather than solely mapping to fluctuations in the inputs, we map to timing fluctuations in the spiking of our own neurones, and the small micro amplitude mismatches therein might be a basis for consciousness.
I know it's a bit out there, but moment-to-moment mismatches would be like "bubbles" moving around the neurones, changing spikes; and our brain is very spikey.
Those bubbles might also cause pushes up and down across neurones, because our neurones are so explosive, and in some sense might travel and cause fluctuations in brain waves. They also would not be big enough to disrupt the normal embedding of information.
It feels a sensible hypothesis; a kind of working hypothesis for what consciousness is. I admit I have no evidence except my videos, but if you disagree, what other idea would combine traditional AI, quantum, and our sense of being something with presence in time?
It's why I compare it back to quantum: my intuition is that it all works off amplitude changes being measurable across groups of neurones. But I could be wrong.
If you are those amplitude changes, then you would be your brain waves. If you disagree, it's not important for the rest of my theory, which is about why we can't get to true AI from where we are.
I sort of thought the mammalian neurone might be weirdly good at this amplitude change, and that is why I chose it.
I think, but really have no method of proving, that this is something that would help with memory.
It's an odd theory, but amplitude changes are quantum-like learning and respond really fast, so it makes sense why you'd maximise for them. Therefore those words it picks out might be less proof of rigorous long-term learning and more like a sort of spectral bubble: the network experiencing amplitude changes across its neurones and responding to them, rather than building the sort of rigorous long-term learning needed to learn cause and effect from input to output.
I did end up jettisoning a lot of the normal conventions on how to build ordinary differential equation neural networks, so this is its own thing. It's sort of surprising to me that it works at all.
You're going to have to forgive me: I'm not going to be precise about that X factor.
Earlier builds sort of worked, and I basically had to keep rewriting them to see which line of code was causing the happy coincidence of it working. Which was a lot of effort.
Once I had an idea of which line of code did what and could cut the others, I had to write about 50-100 different competing versions and test them until they experienced signal death or just stopped learning and gave up.
What did work was particularly peculiar, and has pushed me to think that the reason it might work is that quantum effects might be simulated, or at least might not be an impediment to said simulations.
I realise I don't know much about quantum mechanics, so I might be making a rather odd comparison there... But hopefully at some point it will all pan out. I have also had bugs before, so I'm very wary of claiming anything I don't understand, but I know exactly which line of code I can change that makes it work or not, which at least suggests I'm onto something.
I am aware the word quantum is replete with a certain mystique, but if we abstract it down, all we are saying is that atoms, biological neurones, and molecules in some sense have spin, voltage, and polarity by dint of magnetic forces. It shouldn't be surprising that if you know the maths well enough (I'm only guessing here what the maths are), you could simulate all those spins and something with quantum properties would be emergent on a classical computer. In such a case something like the double slit test, and what physicists see generally, would arise from the statistical interplay between those neurones and molecules affecting each other's spins. That would also make sense of why a transformer struggles to think: it's optimised for mapping input to output, not for a set of spinning objects with polarity.
Thereafter, if you look up the famous double slit test, there's a lot of statistics involved and overlapping diffusion areas. The clever thing in my simulations is that, because they're simulated, I can measure the energy wave of a neurone at any point as a wave or as a stable point (no, I'm not going to tell you how). But it's just a simulation, therefore I can bypass the observer problem in normal quantum mechanics.
Therefore in a simulation I can do things that you cannot do in reality. Thereby I argue that even if the human brain is a quantum computer, it's not an issue for me or you to simulate it, provided you know the spins and behaviours of what is being simulated.
So if it (the brain) is quantum, I just simulate the wave function on a classical computer, using the ODE I'm already using as a proxy for the wave part, and use normal AI training to approximate entanglement; then, because it's simulated, there's no wave function collapse in the object.
Hence why I think the model below picks out words during its training. At the highest abstraction level it kind of starts to look quantum-ish. So when I speak about amplitude changes, that's how I justify it.
I might be being cocky, but you can see why I might think: hey, surely I can just do it like this then?
It feels unlikely to be purely a bug in these circumstances, as I spent a long time really drilling down and running all the tests. But it's always possible; Occam's other razor, revised for the Turing test: "do not suspect your AI is sentient when incompetence during testing is a possibility".
Whether, across a large enough set of attention heads and matrix calculations, a transformer resembles something like that is maybe why you get stuff that looks impressive without really pulling full brain emulation off. After all, any neural network is a function approximator; if the function being approximated were the spin behaviour described above, it could work while being painfully inefficient, requiring more compute than strictly necessary and possibly hitting multiple upper limits and bottlenecks.
That to me seems to be what's happening in AI right now. We have minimum viable products which can scale, but not infinitely or efficiently.
Those minimum viable products have neglected quantum-like effects and settled on linear algebra as a proxy, which seems insufficient in comparison to our own brains. Most AI does not even include differential equations or anything similar to our brain's neurones. Therefore we are likely in the foothills of a much larger and longer learning process if we want to fully emulate the brain.
It would follow that if we are building AI based on matrix calculations when we should be using differential equations and exploring similarities to quantum, it would end up stalling developmentally, which is why you could end up in an AI bubble.
It also appears logical, from the same deduction, that current quantum computing is very tied to doing maths off matter itself, often putting it into exotic states. It appears to me such a process would likewise not result in a brain emulation, due to the same issue: you will never get the maths wholly separated from those states of matter. Again, I'm no physicist, but I struggle not to feel the approach is flawed as a method of extracting useful compute.
If you asked me, the proven path is to follow how our own brain does it. Everything else seems an oddity to me. Yes, I can do maths with matter in strange states, but after I'm done I don't appear to ever get past that point. Even if I did get that maths, it's not clear I could disentangle it from the matter that enables quantum computers to work, and thereby meaningfully progress the field as a whole. I'm admittedly not an expert in this field; I'm just putting forward my argument for why this seems very odd to me.
If you were to pursue quantum computers to their logical end, any headway would be dependent on those exotic matter states, which feels like a research dead end for really expansive and extendable forms of compute.
This is all to say the pursuit of emulating the one and only existing AGI, ourselves, seems an oddly neglected path. We seem intent on pursuing AI and variations of compute very unlike ourselves. I feel this is very odd once you see the pattern.
Fundamentally, quantum has a measurement issue: you cannot measure a point as a wave and a particle without changing that wave or particle. Therefore it's a dead end to try to get behind it and see what is really going on inside. AI as a whole would then only work as far as you don't have to deal with that problem, i.e. as long as it remains decidedly un-quantum, staying at the level of linear algebra or bits of differential equations; but it's hard to see how you could then get behind that problem to definitively develop true AI.
The same is true in biology: beyond a certain point, the most fine-grained examinations you might do of the internal processes of a neurone are likely also to kill the neurone. You can work backwards from perceived behaviour, but you have a similar, if maybe reduced, black-box type problem, which is why I'd start there and treat that as the proven path.
The best you can do is try your best to emulate the brain, go trial and error because that's how evolution made it, cross your fingers, and then compare any working models back to nature. Hence this proposal and blog.
Seems sort of obvious to me. I.e. it's intuitive; and let's be real, why couldn't you simulate a brain even if it has quantum effects? It's all just maths about spin and voltage.
What we have done is argue that anything with a slight whiff of quantum cannot be done on a classical computer. Yet because of the double slit experiment, if there is any quantum involvement you could not measure the wave without collapsing it, so I would counter-intuitively argue that the only way to go about it involves simulation.
You can only do it by simulation because then you do not have the issue of collapsing the wave function.
I cannot make out why the current paradigms, transformers that are predominantly linear algebra, or quantum computers that seem to be about putting matter in weird states, would be my first approach to improving the computers and AI we have.
If, on the other hand, the brain is not quantum, then surely it doesn't matter in the first place: just simulate it.
Therefore a full brain emulation is not only arguably possible; there are circumstances in which you could not approach it any other way.
Anyway, before I get carried away and get lynched by a horde of angry physicists, let's go over the current problems, i.e. why the above statement has issues...
The AI I built is also grown on demand as it processes, and that alone probably keeps pushing the network in the direction of better performance. I basically teleport the code's equivalent of dendrites in as needed. That is another reason it might be efficient: it's not only sizing up in response, but each new connection is applied in a way that pushes the network into new configurations as well as in a way that should improve its performance.
That probably has the effect of metaphorically kicking the whole network down the gradient of descent one growth spurt at a time. A rough sketch of the idea is below.
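To illustrate the grow-on-demand idea (and only to illustrate it; the actual growth rule is the part I'm keeping quiet about), you can imagine something like this: wherever the error stays large, a new connection appears with a weight picked to cancel part of that error.

```python
import numpy as np

# Illustrative grow-on-demand layer. This is NOT the author's growth rule,
# just a sketch of adding a "dendrite" wherever the error stays large.
class GrowingLayer:
    def __init__(self, n_in, n_out):
        self.w = np.zeros((n_out, n_in))            # weights, mostly unused at first
        self.alive = np.zeros((n_out, n_in), bool)  # which connections exist

    def forward(self, x):
        return (self.w * self.alive) @ x

    def grow_if_needed(self, x, target, threshold=0.5):
        error = target - self.forward(x)
        for i in np.where(np.abs(error) > threshold)[0]:
            free = np.where(~self.alive[i])[0]      # inputs not yet connected
            if free.size == 0:
                continue
            j = free[np.argmax(np.abs(x[free]))]    # attach to the most active free input
            self.alive[i, j] = True
            if x[j] != 0:
                self.w[i, j] = error[i] / x[j]      # weight chosen to cancel the error
```

Each growth step both shrinks the current error and jolts the network into a new configuration, which is the "kick down the gradient" I mean.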
There are a few ways the tail (error) could be wagging the dog (pushing the network down a path of gradient descent without it learning rigorous parameter embedding).
Thereafter, I've mentioned amplitude changes: the whole thing is designed as a very liquid simulation of our neurones. Let's say it is doing those amplitude changes; it might just overjump between energy points, which means it never really settles. This might be useful, it might not; I'd have to test that out.
Though just to say: even if you accept there might be merit in a simulation with these properties, it may have wholly new problems and issues.
But it speaks... sometimes... and it also seems to do so persistently, i.e. a few of the earlier examples would become unstable then experience signal death (which is why I have it counting how much of the network is "on" at each time step).
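That counter is the simplest possible health check, something in the spirit of the sketch below (the threshold and cut-offs are illustrative guesses, not the real values):

```python
import numpy as np

# Simple liveness monitor: what fraction of neurones are "on" this time step?
# Threshold and cut-offs are illustrative guesses.
def activity_fraction(voltages, spike_threshold=0.0):
    return float(np.mean(voltages > spike_threshold))

def health(fraction, low=0.01, high=0.95):
    if fraction < low:
        return "signal death: almost nothing firing"
    if fraction > high:
        return "runaway excitation: nearly everything firing"
    return "ok"
```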
It's also small: the photo was 200 neurones being simulated. That it works absent large scale is another reason I intuit it has something going for it.
Which is to say it works and is not dead. Which is all really anyone can hope for.
Cool right?
Battle plan
So I think I have a theory, and I started this blog to war-game out whether AGI would even be possible and how you would practically test to detect a post-singularity AI. I built my Bayesian Turing test in the first posts using Shannon entropy as a basis, because I was uncomfortable with whether accuracy or error measures alone were enough. My thought was that if an AI was thinking, a key metric might be the length, and therefore information content, of its spurts of being aligned with an outside system. Ironically I use mostly accuracy and just longer testing periods now.
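For what it's worth, the intuition behind that test can be reconstructed in a few lines: instead of a single accuracy number, look at how long the runs of agreement with the outside text are and how surprising they would be under chance. This is my loose reconstruction, not the original formulation from those earlier posts.

```python
import numpy as np
from itertools import groupby

# Loose reconstruction of the "length of aligned spurts" idea, not the original code.
def match_run_lengths(model_output, target):
    hits = [a == b for a, b in zip(model_output, target)]
    return [sum(1 for _ in grp) for hit, grp in groupby(hits) if hit]

def surprisal_of_longest_run(model_output, target, p_chance):
    """Shannon surprisal (bits) of the longest agreement run if matches were chance."""
    runs = match_run_lengths(model_output, target)
    longest = max(runs, default=0)
    return -longest * np.log2(p_chance) if longest else 0.0
```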
So let's say one of the AI models on my PC is that super special thing. How would you test that?
Firstly, I would continue the current testing process, as it got me this far. I can do rinse-and-repeat testing on about 100 models a week, and this should let me know the algorithm inside out. I also think we evolved, and that's constant and repeated testing, so it seems a basic starting point.
It is worth noting that, beyond large enough changes that are just flat-out improvements, you'd probably have to do a number of longer, more specified tests to tune some of the specific hyperparameters.
I still think neuroplasticity deserves a longer study, and I still have my pet theory that chemical concentration changes might also play a role.
I think my way to engage with that might be to run the same genetic-algorithm-type tests, but over a week and only between an A and a B of the two different models under consideration (a sketch of that head-to-head is below). In evolution you could fluctuate between gene A or B for some time, but doing that across the whole genome is what ensures you do not get trapped in a sub-optimal design. I cannot really create an entire genetic code for my AI system, and that sounds kind of mad just saying it out loud. So some really rigorous sub-testing will need to happen.
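A head-to-head like that doesn't need much machinery. Something like the sketch below would do, where `score_model` is a hypothetical stand-in for however a candidate build actually gets scored against a text:

```python
import random
import statistics

# Hypothetical A/B harness; score_model is an invented placeholder for whatever
# scoring routine (accuracy over a corpus, say) the real test rig uses.
def ab_test(model_a, model_b, corpora, score_model, trials=20, seed=0):
    rng = random.Random(seed)
    scores = {"A": [], "B": []}
    for _ in range(trials):
        corpus = rng.choice(corpora)            # both builds see the same text
        scores["A"].append(score_model(model_a, corpus))
        scores["B"].append(score_model(model_b, corpus))
    return {name: (statistics.mean(vals), statistics.stdev(vals))
            for name, vals in scores.items()}
```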
If I had to embrace that evolutionary approach, though, I'd need to build a database of traits and an automatic logging system in my code to log the test, test conditions, and traits under consideration. It would necessitate a lot more paperwork for each test than I currently do. It would not be impossible, just probably unnecessary (hopefully).
I have that partially implemented now, but I've already tried it once and found much of what could be logged was superfluous, and there's a bug in the version of Seaborn I use for visualisation that can rarely cause a crash. When you want long, drawn-out testing, it's very frustrating to wake up the next morning and find a bunch of tests spoilt by an external library of code you do not control. Something like the minimal logging sketch below would be the starting point.
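The logging itself doesn't need to be elaborate; a single table along these lines would cover test, conditions, and traits. The field names are my guesses at what would be worth recording, not an existing schema from the project.

```python
import sqlite3, json, datetime

# Minimal test log: one row per run. Column names are guesses at useful fields.
def open_log(path="test_log.db"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS runs (
        started TEXT, model_name TEXT, n_neurones INTEGER,
        conditions TEXT, traits TEXT, outcome TEXT)""")
    return db

def log_run(db, model_name, n_neurones, conditions, traits, outcome):
    db.execute("INSERT INTO runs VALUES (?, ?, ?, ?, ?, ?)",
               (datetime.datetime.now().isoformat(), model_name, n_neurones,
                json.dumps(conditions), json.dumps(traits), outcome))
    db.commit()
```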
I should just do some more YouTube videos. Visualisation of what's happening in the AI is just helpful. I still think I should add a readout on the dashboard to show some individual neurones' stats.
I should test a few of the good ones over really long time periods. This would be stability testing. Do they scale from 200 neurones to 2,000, etc.? Try to get a pile of PDFs (already started) and just run a model through them, this time saving the model so I can compare trained and untrained versions.
I can then use moving averages. If the AI is riding some cusp of amplitude changes and is not embedding learning inside itself efficiently, that should show up when a long, slow moving average of its error is compared against a quick, short-term moving average.
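Concretely, that check is just two convolutions over the error history and a comparison of their trends; the window sizes below are placeholder guesses.

```python
import numpy as np

# Compare a slow and a fast moving average of the training error. If learning is
# really being embedded, the slow average should trend down; if the model is only
# surfing momentary amplitude changes, the fast one wobbles while the slow one
# stays flat. Window lengths are placeholder guesses.
def moving_average(series, window):
    return np.convolve(series, np.ones(window) / window, mode="valid")

def trend_slopes(errors, fast_window=50, slow_window=5000):
    fast = moving_average(errors, fast_window)
    slow = moving_average(errors, slow_window)
    fast_slope = np.polyfit(np.arange(len(fast)), fast, 1)[0]
    slow_slope = np.polyfit(np.arange(len(slow)), slow, 1)[0]
    return slow_slope, fast_slope   # a negative slow slope suggests real learning
```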
I think that's the limit until I get someone to fund me, which probably will not happen.
I think my final tests will be to get some brain scan data. I would guess that an AI that is closer to us does a better job of learning and modelling it, and anyway it's just a really cool idea to try to remote-control my PC through a BMI into a home-built AI model.
The cheapest BMIs look pretty bad, but they are 200-400, or maybe I could build my own. It might be worth asking everyone one Christmas to give me the gift of the real meaning of cyberpunk.
I'm not saying I can do it; I'm just saying it ought to be possible, because why not? Even in the brain it's just data, and this AI at least resembles a brain enough as an emulation to be worth trying as a receptacle for that data.
I might try to get my hands on a brain machine interface (BMI) and see if someone would fund this to get lots of them at different scales. The gold-plated proof of "hey, this is a brain simulation" would be a train-and-test of human brain waves versus AI ones: train the AI on the output of a BMI, keep half the data held back, and if the AI shows similarity to that held-out half then that's a brain emulation and you just uploaded (possibly partially) a brain scan.
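In outline, that test is just a held-out comparison; `train_on` and `free_run` below are hypothetical placeholders for whatever the model actually exposes, and correlation is only one possible similarity measure.

```python
import numpy as np

# Sketch of the train/test split on a recorded brain-wave trace. train_on and
# free_run are hypothetical placeholder callables.
def brainwave_fidelity(model, recording, train_on, free_run):
    half = len(recording) // 2
    train_on(model, recording[:half])                      # fit on the first half only
    predicted = free_run(model, steps=len(recording) - half)
    held_out = recording[half:]
    return float(np.corrcoef(predicted, held_out)[0, 1])   # similarity in [-1, 1]
```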
Again, I'm not saying it's possible; I'm just saying that would be, to my knowledge, a world first, and failure would probably precisely show the shortcomings of the work.
It also is not that mad when you accept that I focused on a self-scaling, self-growing model.
The model, because it grows and emulates the brain, should just keep trying to grow and build to "reverse engineer" whatever wave it's being trained on. All it needs to do is keep reducing its error, and eventually it should have some fidelity. If it keeps growing, then surely at some point, unless there's some misdesign, it does exactly that.
Essentially, all AI is function approximation; I think the clever thing about our brain is that it has wave-like properties as well, or at least that's my theory. Hence an AI like our brain ought to be able to grow and model those brain waves if fed them and trained to emulate them. The degree to which it can emulate them would be the final test for what has fascinated me and led me to build this.
Plus you can put anything in the cloud nowadays except a brain. It seems worth a try even if, on the face of it, it sounds silly.
Even if it doesn't work, it feels like something worth trying, and worth looking silly over if it fails and I learn a few things. Plus, while I have one model that is the strongest contender, I've ended up with a whole bunch of weaker ones as well; I would hope to push forward and get somewhere with at least one.
Hopefully this works out... but it feels arguably worth a go. I started this blog to war-game out theoretically building AGI; happy to hear any objections or any impediments I missed.
If successful, I would upload a brain emulation and could be the first man to set foot in blob storage... One big step for mankind, one presumably massive compute bill that I hope the project sponsor will pick up (joke).
Afterthought
I've always really disliked the simulation hypothesis, i.e. that all of this is a simulation and not base reality.
Writing this has given me pause to wonder. I've never liked it because I could never think up a reason for a Type 3 civilisation to do this.
But if it's really hard to observe real quantum states, and easier to guess and test in a simulation, then you might start to run large-scale simulations of evolution, possibly purely to extract interesting compute processes you could not observe in base reality.
You ever heard the saying, if it's free you're likely the product? Well, that kicks it up a notch.
It's just a thought...