“The future of AI is not about replacing humans, it's about augmenting human capability” – Sundar Pichai.
This is a blog about trying to develop biologically plausible AI, and my wacky dreams of getting paid to upload brains into PCs. Currently, I have been taking human EEG waves and trying to use them to reverse-engineer an AI that successfully reproduces them.
The test is simple: the AI is scored on how closely its guesses match a human EEG. The process that produced it was an evolutionary algorithm that took me a year or so to build.
The evolutionary algorithm was designed to automate the reverse engineering of the neuron model by representing the chemical concentrations in the cell and the variety of changes that happen in it.
You would think a model of brain activity would be really useful as an AI, full stop. In practice, there are further hurdles in figuring out our encoder and decoder systems for outputs, and possibly in thinking about inputs. The reason is that the brain is a spiking neural network, which is a very particular subtype that works really well in specific use cases.
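Since the argument leans on the brain being a spiking network, here is a minimal leaky integrate-and-fire (LIF) neuron in Python, purely to illustrate what a spiking model is. The parameter values, names, and constant input here are mine for the example; this is not the model from this project.

```python
def lif_spike_train(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                    v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates the input current, and fires (then resets) at threshold."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Forward Euler step of: tau * dv/dt = -(v - v_rest) + r_m * i_t
        v += dt * (-(v - v_rest) + r_m * i_t) / tau
        if v >= v_thresh:
            spikes.append(1)  # spike, then reset the membrane
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant drive above threshold produces a regular spike train
train = lif_spike_train([2.0] * 100)
```

Unlike the continuous activations in most deep nets, all the information here is carried by the timing of those 0/1 spikes, which is what makes spiking networks such a particular subtype.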
I plan to take this model and use it, maybe, as a sleep aid: the model would synchronise with your brain waves while you sleep. During the day, we could run tests to see which chemical changes to the model improve its prediction of your brain waves. From that, you could infer which parts of your brain were most active and what might be more or less different compared with your baseline, and you would have a chemical model of what we think describes your brain's chemical mix. I reckon if you gave that to a doctor, they could use that knowledge to judge whether that night's sleep was abnormal, and hopefully whether it was healthy.
Now, talking about AI that is near human-like has a certain “ick” factor. The reasons you might want to do this anyway are that A) you would have a product; B) chemical imbalances in the brain are a factor in many forms of dementia and mental illness; and C) population-level analysis of EEG and the estimated chemicals in your head would hopefully be a way to assist treatment. You would need to build subsequent systems, but an analysis of your chemical model could “tweak” the brain in the right direction using vitamins or maybe drugs prescribed by a doctor.
I say this not being a medical specialist, so I might be talking BS, but I thought it was a good idea. You would think that sudden shifts in brain waves not correlated with the simulation would represent subconscious changes, or, if not, then probably you're hearing something, and so on. You would also know where the sensors are placed, so activity in parts related to hearing might let you say whether a disturbance was due to sound or maybe to subconscious stress, given how it maps across the sensor layout.
At a minimum, it's intuitively plausible and would be highly individualised to the user.
It's better than me trying to put it in an autonomous hunter-killer drone, which was a direction of travel at one point. It was mostly ethics that stopped that, but I also realised that you need to get a drone licence in the UK, and that was a headache; so, oddly, copying brain waves into an AI is both less costly and more ethical than drones. This data would also let you look, increasingly, at what copying the brain really entails. I think if left on my own, I would probably just keep trying to copy the brain, so the business plan is just how I justify my hope that I can spend more time trying to copy people into machines...
Show me the data
Fair point, well made; it's reasonable to want more information. The version below is 29430, a product of the 29th generation of the evolutionary algorithm. It is doing the guessing job for 3,000 observations from the human brain. The current generation is 32, so better versions are found semi-regularly, with very, very tiny improvements. Each model is currently very small, highly mathematically complex, and fine-tuned to a single task, but fundamentally less compute-intensive than, say, the big LLMs that are around.
The scoring system is the absolute distance between the AI prediction and the human EEG.
What I then wanted to show was that the simulation really was syncing and would continue to lower that average score. So I took two moving averages, a fast one over 1,000 observations and a slow one over 10,000, and measured the difference. The graph shows blue where the AI is, on average, improving on the fast measure versus the slow measure, and would show red if it were ever, on average, getting worse; that is not seen.
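The fast-versus-slow check can be sketched like this (the window sizes are from the post; the function names and the toy shrinking-error curve are mine, and the demo uses scaled-down windows of 100/1,000 for a 3,000-step run):

```python
def moving_average(xs, window):
    """Trailing moving average; early points average whatever is available."""
    out = []
    total = 0.0
    for i, x in enumerate(xs):
        total += x
        if i >= window:
            total -= xs[i - window]  # drop the value that left the window
        out.append(total / min(i + 1, window))
    return out

def improvement_signal(abs_errors, fast=1000, slow=10000):
    """True ("blue") where the fast average of |prediction - EEG| sits
    below the slow average, i.e. recent errors beat the longer-run ones."""
    fast_ma = moving_average(abs_errors, fast)
    slow_ma = moving_average(abs_errors, slow)
    return [f < s for f, s in zip(fast_ma, slow_ma)]

# A steadily shrinking error gives an all-blue tail
errors = [1000.0 / (t + 1) for t in range(3000)]
signal = improvement_signal(errors, fast=100, slow=1000)
```

Any index where the fast average rose above the slow one would plot red instead.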
Then I repeat the process piecewise. The leftwards copy includes all the data, right to left, and at each step the averaging window is reduced, showing piecewise the slowing of learning. The fact that there is no red means that at no point is the average rate of improvement above zero (i.e. it is always getting closer over those 3,000 steps). I think this shows, roughly, that the algorithm can copy over and increasingly predict the human brain.
The algorithm uses simulated neurogenesis and differential equations, simulates chemical changes, and falls back on more traditional maths or AI architecture where there is a gap in my knowledge of what is going on.
Therefore, the tests validate that the error does seem to go down. I need to keep extending that out, but it's enough to be sort of there, even over a short space of time.
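The actual chemical model is staying private, so purely as a generic illustration of the "differential equations for chemical concentrations" idea, here is a toy pair of coupled ODEs integrated with a forward Euler step. The species, equations, and rate constants are invented for the example and have nothing to do with this project's real model:

```python
def simulate_concentrations(steps=1000, dt=0.01,
                            k_prod=1.0, k_decay=0.5, k_couple=0.2):
    """Toy coupled ODEs for two chemical concentrations a and b:
        da/dt = k_prod - k_decay*a - k_couple*a*b
        db/dt = k_couple*a*b - k_decay*b
    integrated with a forward Euler step."""
    a, b = 0.0, 0.1
    history = []
    for _ in range(steps):
        da = k_prod - k_decay * a - k_couple * a * b
        db = k_couple * a * b - k_decay * b
        a += dt * da
        b += dt * db
        history.append((a, b))
    return history

hist = simulate_concentrations()
```

A real biochemical model would have many more species and reaction terms, but the integration loop has the same basic shape, and an evolutionary algorithm can mutate the rate constants between generations.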
Also, you can be relatively certain it's not just a lucky outcome, because the evolutionary algorithm uses a range of confirming and non-confirming methods, so the data is always noisy; yet the majority of the data looks to bottleneck at the scoring end, where it appears to be working. I think if I stripped out the noise and cherry-picked the data based on the methods I now know to work, I would just have the top performers (I have a graph that confirms this, but not to hand).
Hard Hypothesis or Soft Hypothesis
So, did it really do the copying part?
So here are my thoughts on the unspoken “wacky” part of that. I do not think I am actively copying the brain there (yet), but I think it means it's probably not practically impossible. There are lots of AI projects using transformers to predict connections and voltage between two areas of the brain.
What I am trying, though, is different: because of the speed of training, something that can keep up with the brain's changes can learn them, and we can work backwards to copy the rest out, ever lowering that gap. I think what this has managed, by overlaying all those learning methods, is something that “keeps up”, keeping the copying process fast enough that it might handle live data.
That seems to work, as you can see from the above. I do not have all the input and output systems worked out, but let's say I did, and the line above stayed blue for a long time, i.e. for days. Well, surely I could put an EEG reader on your head; we could also simulate what you were seeing and doing on the device. Then I am feeding your brain waves and actions into the machine and training the AI on that. The question is, how precise would you need to get before interesting “things” start to happen?
There is a gap in the measurements, but it's not hundreds or thousands of volts out. The fact that a gap exists probably means that something more like a rough-draft copy of the brain's larger architecture is forming in the AI. It almost certainly is not “thinking” the same thoughts, but if whole sections of your brain lit up, similar things would probably happen in it. This is what I call the soft hypothesis, and under it I would expect the learning to dry up, and one of the graphs above to start showing mixes of red and blue signals, before it ever really “groks” the whole model of what is going on. That is, most of the information in your head is not your deepest desires or hidden secrets; it's just reminding your body to keep the heart beating and the insulin tap turned on or off.
The hard hypothesis is, well: it does not matter how small that blue line gets, as long as it remains blue. What if you could keep improving that line? Then that would be a different way of approaching AI and this thing we call the human condition. Though, as I mentioned, it currently has no inputs for eyes and no outputs to speak of, so let's not get ahead of ourselves. In such circumstances, you could also use it as a cybernetic (in the literal, old sense) extension of neurones into a device: you have a second brain in a device, and you train it by reinforcement learning plus a brain-computer interface (BCI) adapter that carries the natural human neurones' signals into the AI brain simulation. Rather than teaching the human to walk again or to use the prosthetic, you train both together; the prosthetic then starts to have its own simulated brain tissue, but one conditioned to be responsive to you individually, and you would see how quickly this version adapts.
So I need to figure out a really long-haul test to determine whether the hard hypothesis or the soft hypothesis holds. If hard, I dunno, I will look towards uploading people, I guess. If soft, then sleep; maybe billionaires will pay to store a copy until we figure out how to put them back the other way. Either way, I might discuss it with some knowledgeable medical folks, as maybe there's some useful data there, and then you have to wrestle with the ethics: if it helps people medically, is it even right to withhold it? Probably someone will just say they have already done these tests, etc., but you never know whether the hard-hypothesis thing is possible at all; somebody has to be the first to do it, right?
Yeah, I know I am getting overexcited; probably back to the sleep idea first. My current problem is that I know, roughly, that my current protocols keep this error low enough. If I speed up, I will know quicker whether it is just a toy brain model, and I can focus squarely on the sleep idea, but with the risk that doing so would be disruptive to the evolutionary algorithm, which relies on fair competition; just upping the data can reward the lucky and the merely OK, not the best.
Therefore, you would hope that, if you took that model, it would have some uses: first in an education-and-entertainment sleep model, then as a data source for doctors (I would have to really look into that, as there are medical device licences involved in making such claims, etc.). Afterwards, you could just keep iterating until the blue line goes red; you do not know how exact the simulation could get. In the graph above, a voltage gap of 200 is probably nothing like “you”, but a gap of 20, 2, or 0.2?
I think it's interesting at a minimum.
I was going to talk about the evolutionary algorithm, but I decided to keep that to myself. I have whole hex maps and stuff about the chemical concentrations. The reason is that they only really show that the genetic model does converge, and I sort of decided that if I want to at least pretend to myself this could make money at some point, then something needs to go in the section marked proprietary moat. I also think the above is enough to say that it's not a mistake: the model that I spent about a year working on does seem to converge for the purposes of simulating the underlying brain waves, and the error does not, at least on my current amount of data, rise back up.
If I can figure out a faster cadence to run the evolutionary algorithm at, and then show that it remains accurate, then I think that is fundamentally different from, say, the Blue Brain Project, because this one is trainable (but who knows what else people are building).
I think what I am going to aim at is to increase that cadence, see if the line remains blue, and try a test-to-destruction process. I can run a slower process in tandem, as I have already built filtering and masking systems; I probably just need to add an inverse one to ignore smaller populations in the fast test. I probably do need that for the sleep testing, as you'd need to know the limits of the AI model to start estimating what a divergence might mean. But it would be nice if that value, when training ceases, were around 0: then you would have something to prove the hard hypothesis.
The human neurones' refractory period is as low as 5 milliseconds at peak, so a 3,000 ms test, which is where I am now, is not an amount of observations that can be entirely dismissed.
I estimate there is some small improvement, on average, every 0.9667575757575756 generations, with the proviso that improvements sometimes cluster tightly, with lots in one generation. Therefore, I do think I have not yet found the best algorithm for my purposes, but from the above you can see it's there or thereabouts. Software testing would recommend a 1-2 hour test for validation, and the code itself is tested to this standard, but the evolutionary algorithm is, in some sense, a system that, through processes analogous to evolution and represented by simulated chemical processes, writes its own code. So now I probably have to figure out the stress-testing point for said code.
The purpose of the evolutionary algorithm was to balance out the behaviour of a chemical equation, so that the equation may still degrade over an extended time and/or generate new insights for improvement.
I think I'll look at medical uses, so I am going to speak to a few medical-ish people and literally spend a week with the project properly searching for a use case. I think I would prefer to stand off from trying to build an agent. I think the fact that our brains developed to move in 3D would make it good in a drone, but I am not certain it would naturally do well at language, to go and be a transformer equivalent replacing current AI. Noam Chomsky has a theory that we speak because our brain is literally wired to do so, which would imply that our language centres are to some degree a natural end product of our brain architecture, not something just brute-force learned. Which is probably why that's not a good place to start.
And if not, well, it was at least interesting to say you can get somewhere with simulating the brain. But if I really extend that test so the simulation lasts longer and longer, then I should really know the limit.