The workings of the brain are the greatest mystery in science. Unlike our models of physics, which are powerful enough to predict gravitational waves and unseen particles, our brain models explain only the most basic forms of perception, cognition, and behavior. We know plenty about the biology of neurons and glia, the cells that make up the brain. And we know enough about how they interact with each other to account for some reflexes and sensory phenomena, such as optical illusions. But even slightly more complex levels of mental experience have evaded our theories.
We are quickly approaching the point when our traditional reasons for pleading ignorance – that we don’t have the right tools, that we need more data, that brains are complex and chaotic – will no longer excuse our lack of explanations. Our techniques for seeing what the brain and its neurons are doing at any given moment grow more powerful every year.
But the entire field has been using the wrong set of metaphors, basing its understanding of the brain on comparisons to communications disciplines like signal processing and information theory. Going forward, we should leave that flawed language behind. Instead, the words and ideas needed to unlock our brains come from a computational field much nearer to real biology: the expanding world of machine learning.
Homines ex machina?
For most of its history, “systems” neuroscience – the study of brains as large groups of interacting neurons – has tried to frame perception, action, and even cognition in terms taken from fields like signal processing, information theory, and statistical inference. Because these frameworks were essential for developing communications technology and data-processing algorithms, they suggested testable analogies for how neurons might communicate with each other or encode what we perceive with our senses. Many discussions in neuroscience would sound familiar to an audio engineer designing an amplifier: a certain region of the brain “filters” the sensory stimulus, “passing information” to the next “processing stage.”
Words of this sort carry certain assumptions about how we expect to understand the brain. For instance, talking about different stages of processing implies that what goes on at one physical location in the brain can be distinguished from what goes on at another spot. Focusing on information, which has both a lay meaning and a precise mathematical definition, often conflates the two and postpones the question of what an animal actually needs to know to perform a certain behavior.
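To see what the precise, mathematical sense of “information” looks like, here is a minimal sketch – not drawn from the article or from any experiment – that computes Shannon’s mutual information between a made-up binary stimulus and a made-up binary neural response. All numbers and names are purely illustrative.

```python
import numpy as np

# Toy joint distribution over a binary stimulus S and a binary neural response R.
# (Illustrative numbers only; not from any experiment.)
p_joint = np.array([[0.4, 0.1],   # P(S=0, R=0), P(S=0, R=1)
                    [0.1, 0.4]])  # P(S=1, R=0), P(S=1, R=1)

p_s = p_joint.sum(axis=1, keepdims=True)  # marginal P(S)
p_r = p_joint.sum(axis=0, keepdims=True)  # marginal P(R)

# Shannon mutual information: I(S; R) = sum over s,r of p(s,r) * log2( p(s,r) / (p(s) p(r)) )
mi = np.sum(p_joint * np.log2(p_joint / (p_s * p_r)))
print(f"I(S; R) = {mi:.3f} bits")  # ~0.278 bits for this toy distribution
```

This is the quantity neuroscientists usually mean when they say a neuron “carries information” about a stimulus – a statistical dependence, which by itself says nothing about what the animal needs that information for.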
These borrowed descriptions proved fruitful for a time. Our computer algorithms for processing visual and auditory stimuli really do resemble the function of neurons in some parts of the brain, typically those closest to the sensory organs. This discovery was one of the earliest indications that we might understand the brain through simple, physics-like theories. If neurons really could be said to detect the edges in an image or break sounds down into their component frequencies, why shouldn’t the signal processing analogy extend to higher-level phenomena?
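As a rough illustration of that analogy (not a model from any particular study), the sketch below applies a hand-built edge-detecting kernel to a toy image and a Fourier transform to a toy sound – the two operations compared above to neurons in early visual and auditory areas. The function names, kernel, and inputs are all invented for the example.

```python
import numpy as np

def edge_filter_response(image, kernel):
    """Slide an oriented edge-detecting kernel over an image (cross-correlation),
    loosely analogous to simple-cell receptive fields in early visual cortex."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def frequency_components(sound, sample_rate):
    """Break a sound into its component frequencies with a Fourier transform,
    loosely analogous to the frequency decomposition done by the early auditory system."""
    spectrum = np.abs(np.fft.rfft(sound))
    freqs = np.fft.rfftfreq(len(sound), d=1.0 / sample_rate)
    return freqs, spectrum

# A simple vertical-edge (Sobel-like) kernel; models of visual neurons typically use Gabor filters.
sobel_v = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy inputs: an image with a vertical step edge, and one second of a 440 Hz tone.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)

edges = edge_filter_response(image, sobel_v)       # strongest responses at the edge location
freqs, spectrum = frequency_components(tone, 8000)
print(freqs[np.argmax(spectrum)])                  # ~440 Hz
```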