So, one of the things I’ve been learning about is artificial neural networks (ANNs). I’ve tried several different frameworks and several different topologies, and one of the frameworks I keep coming back to is Darknet.
I’ve been trying to train a Darknet RNN on a corpus generated from all the text in my blog. So far the results have been less than stellar – I think I need a bigger neural network than I’ve been using, and to train one of those in reasonable time I need a bigger GPU, because I’m running out of patience. I was astonished to discover that >1 teraflop GPUs are now in my price range, so I’ve ordered one.
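The corpus step is simple in principle: gather every post into one big training text. A minimal sketch (the directory layout and function name here are hypothetical, assuming the posts have been exported as plain-text files):

```python
from pathlib import Path

def build_corpus(post_dir: str, out_file: str) -> int:
    """Concatenate every .txt post in post_dir into one training file.

    Returns the number of posts included. Paths are illustrative only.
    """
    posts = sorted(Path(post_dir).glob("*.txt"))
    with open(out_file, "w", encoding="utf-8") as out:
        for post in posts:
            out.write(post.read_text(encoding="utf-8"))
            out.write("\n\n")  # blank line between posts
    return len(posts)
```

The resulting file is what would get fed to Darknet’s RNN training mode; the exact command-line flags vary by Darknet version, so I won’t guess at them here.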
I’m hoping soon to have simSheer available as a PHP endpoint that people can play with. All of this is building up to using Darknet for some other purposes, such as image recognition.
It’s interesting to think that even if simSheer manages to sound like me, it will be doing so with no sense of aboutness at all – well, I *think* it will be doing so with no sense of aboutness. It has no senses, and no other data to tie my writings in with, so I don’t think that any of the neurons in it can possibly be tagged with any real-world meaning. Or can they? This is probably a subject some famous philosopher has held forth on, and I should probably go find their works and read them, but in the meantime it’s certainly fun to think about.
I really wonder to what extent the aboutness problem (borrowed from Stephenson’s Anathem) applies to ANNs. Would the cluster I have for the concept of love even remotely resemble the clusters other people have? What would the differences say about me and them?