"Two of the greatest predictors of success are the ability to say hello and the ability to say goodbye." Robert J.
I have been playing around with an artificial neurone that has strange properties and behaviours: it seems able to learn waves, and it oscillates. Having taken in these abilities, I wondered how well it would manage learning a language. For obvious reasons I am not saying the AI learned the language; this is more an exercise in how close it can get.
Apologies in advance for the number of metaphors used here.
So I wanted to see how close this AI could get to getting words correct, because, let's be honest, sound is waves and words are waves; let's see how closely it could replicate that wave.
The first step was to create a tonal scale to represent the letters of the alphabet (no, I didn't figure out how sound works in reality); I just created a simple and somewhat arbitrary tone system, i.e. "b" is a couple of percent higher than "a". I have a theory of consciousness in which mind develops from something like a wave predicting what will happen moderately far into the future, with actual events then fed back into the machine as updates.
Therefore this AI will be predicting, on its tone scale, the next letter of the word in the sentence it is reading.
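To make that concrete, here is a minimal sketch of the kind of encoding I mean. The 2% step, the zero-for-space rule and the helper names are my own illustration of the idea, not the actual code:

```python
import string

STEP = 1.02  # each letter a couple of percent higher than the last (assumed)

def letter_to_tone(ch):
    """Map a character to an arbitrary tone value; spaces hit zero."""
    ch = ch.lower()
    if ch in string.ascii_lowercase:
        # "a" -> 1.0, "b" -> 1.02, "c" -> 1.02**2, and so on.
        return STEP ** string.ascii_lowercase.index(ch)
    return 0.0  # spaces (and punctuation) drop the tone to zero

def text_to_wave(text):
    """Turn a passage of text into its sequence of tone values."""
    return [letter_to_tone(c) for c in text]

print(text_to_wave("in the beginning"))
```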
Now that's all pretty impressive technobabble, and usually I think you should be distrustful of anything that mentions waves and frequencies like this. But at minimum, for an AI to do this, words or sounds must be representable as a wave; and maybe the words themselves are wave-like in how we interpret them, since it becomes implied that words as meaning might be waves (I admit, maybe technobabble).
Now that sounds crazy... except... in the image below, the blue line shows the tonal values of the book, and the green line is the AI's output after learning around 4 million letters. It is also a single neurone, with the previous letters as inputs forming the basis of the perturbation of the neurone's oscillations.
Spaces appear as the blue tone hitting zero. Otherwise it's the text of a book turned into a sound wave, and an AI trying to learn that wave and predict the next letter. If it gets it exactly right, it is guessing the correct next letter; if it's in the right ballpark, it is still tonally trying to mumble the same "frequency".
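For anyone who wants the shape of the loop, here is a toy sketch of how a single perturbed oscillator could chase the next tone. This is my own reconstruction under heavy assumptions; the class, the update rule and every constant here are invented for illustration:

```python
import math

class OscillatingNeurone:
    """Toy oscillator: each input tone perturbs its phase, and its current
    output is read as the prediction of the next tone. Invented for
    illustration, not the actual model."""

    def __init__(self, freq=0.5, lr=0.01):
        self.freq = freq    # natural oscillation frequency
        self.amp = 1.0      # learned output amplitude
        self.phase = 0.0    # running phase, nudged by each input
        self.lr = lr        # learning rate

    def step(self, tone):
        # The incoming tone perturbs the ongoing oscillation.
        self.phase += self.freq + 0.1 * tone
        return self.amp * math.sin(self.phase)

    def learn(self, prediction, target):
        # Crude error-driven nudge of amplitude and frequency.
        err = target - prediction
        self.amp += self.lr * err
        self.freq += self.lr * err * 0.01

# Encode a line as tones (space = 0, letters on a rough rising scale),
# then train letter by letter to predict the tone of the next letter.
text = "in the beginning god created the heaven and the earth"
wave = [0.0 if c == " " else 1.02 ** (ord(c) - ord("a")) for c in text]
neurone = OscillatingNeurone()
for tone, nxt in zip(wave, wave[1:]):
    prediction = neurone.step(tone)
    neurone.learn(prediction, nxt)
```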
As a metaphor, it resembles treating a book as a tonal melody and trying to complete that melody by filling in the blanks.
It is worth considering how crazy it is that this works at all on any level, as it implies that something about words or text can be predicted from only a little data when taken as a wave. Fairly spooky when you stop and think about it.
On the face of it, the green oscillations line up with the target tones as a simplified wave: imperfect, but not dissimilar to the target output. Shocking when you consider it's one neurone; though maybe explainable, in that a wave or oscillation can possibly represent more states than the 1-0 output of a single traditional sigmoid neurone.
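One hedged way to see that intuition (a toy comparison of my own, not a proof): a sigmoid neurone's state at any instant is a single number between 0 and 1, whereas an oscillator read over a short window exposes both amplitude and phase, so inputs a sigmoid would collapse together can stay distinguishable.

```python
import math

# A sigmoid activation collapses its input to one scalar in (0, 1).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# An oscillator sampled over a few time steps exposes amplitude AND phase.
def oscillator_trace(amp, phase, steps=4):
    return [round(amp * math.sin(phase + 0.5 * k), 3) for k in range(steps)]

print(sigmoid(0.3))                # one number is the entire state
print(oscillator_trace(1.0, 0.0))  # same amplitude...
print(oscillator_trace(1.0, 1.5))  # ...different phase: a distinct trace
```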
I have larger versions working, but they appear to take longer to learn. I think that is because, while whole sentences appear to a single neurone as a series of tonal shifts, they may become more complicated as you scale up; i.e. you can mumble a word easily, but crisp, perfect grammar requires more and more data. If I am honest, larger networks may also start to suffer precision issues in Python. If a high-precision model is wanted, it may need a rewrite in C++ using extended-precision types.
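To show the kind of precision worry I mean (a generic demonstration, not a measurement from my model): once an accumulator grows large, small contributions simply vanish in 32-bit floats, which matters when millions of tiny weight updates are being summed.

```python
import numpy as np

# At 1e8 a 32-bit float's resolution is about 8, so adding 1.0 is lost.
print(np.float32(1e8) + np.float32(1.0) == np.float32(1e8))  # True

# Python's own floats are 64-bit, where the same addition still registers.
print(1e8 + 1.0 == 1e8)  # False
```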
The larger neural networks are presently taking much longer to learn, and the first upsized one crashed because of a bug. Still, it's worth a thought: the target was predicting the tone of the next letter from the preceding tones. How far off is that from a machine saying "Hello World"?
Furthermore, its input was just one letter at a time; if it worked, that shows some potential for memory, and really brings into question what memory is.
I don't know quite why I chose this book, but being free, rhythmic, full of repeated words, and poetic, I chose the King James Bible... I hadn't really thought about it, but I made an AI hum the Bible... Somehow that worked, and if it works at all, how bizarre is that...
So, just to make you think: a word is a sound wave, except this AI didn't learn actual sound waves; it learned a tonal system I had created myself, and it still found something that looks close to encoding the words. To be really good at predicting the next letter, you start considering how much you would need to know of the preceding meaning or language in order to predict further tones.
There will obviously be someone who says it looks like noise, but you should have seen how little it matched up before any training or development, and watching it get better is kind of convincing.
I mean, if the answer is "clearly not a lot, because it's only one neurone", that kind of makes you question human language capabilities: either one neurone can get close, or insane amounts of data and information can be encoded in waves and oscillations.
And, I don't know, I'm just going to leave it running and go get something to drink now...
A few different screenshots


First larger model
