How Artificial Neural Networks Paved the Way For A Dramatic New Theory of Dreams
Updated: Sep 24, 2020
A provocative new theory out of machine learning and the neurosciences suggests dreams are a way of overcoming human "overfitting," the error of "linking events that have no causal connection." See excerpt and link below.
I'm fascinated with the suggestion, toward the end of the article, that fiction may serve the same purpose. I'm also reflecting on the various connections between fiction and paranormal phenomena (described in the work of Jeffrey Kripal and Eric Wargo, for example) . . . if that's not overfitting.
On the one hand, paranormal phenomena could be productive fictions that correct human overfitting (a way of waking us from our materialist slumber, for example, and/or protecting the creative impulse). On the other hand, the narrative aspects and fictional 'noise' of paranormal reports may themselves counter overfitting in our understanding of actual paranormal phenomena.
"[Erik Hoel's] new idea is that the purpose of dreams is to help the brain to make generalizations based on specific experiences. And they do this in a similar way to machine learning experts preventing overfitting in artificial neural networks.
The most common way to tackle overfitting is to add some noise to the learning process, to make it harder for the neural network to focus on irrelevant detail. In practice, researchers add noise to images or feed the computer with corrupted data and even remove random nodes in the neural network, a process known as dropout."
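For readers unfamiliar with the machine-learning side of the analogy, the two anti-overfitting tricks mentioned in the excerpt, corrupting inputs with noise and dropout (randomly zeroing units during training), can be sketched in a few lines. This is a minimal illustration in NumPy, not the article's own code; the function names and parameters are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_input_noise(x, sigma=0.1, rng=rng):
    # Corrupt inputs with Gaussian noise so the network cannot
    # memorize irrelevant pixel-level detail.
    return x + rng.normal(0.0, sigma, size=x.shape)

def dropout(activations, p=0.5, rng=rng, training=True):
    # "Inverted" dropout: during training, zero each unit with
    # probability p and rescale survivors by 1/(1-p) so the
    # expected activation is unchanged; at test time, do nothing.
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

x = np.ones((4, 8))          # toy batch of activations
noisy = add_input_noise(x, sigma=0.05)
dropped = dropout(x, p=0.5)  # surviving units become 2.0, dropped units 0.0
```

The rescaling in `dropout` is why, in this toy example with all-ones input and `p=0.5`, the surviving units come out as 2.0: the network sees degraded, noisy versions of its inputs during training, which is precisely the "noise" the dream theory borrows.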
Continue reading here.
Image source: Buddha Weekly