The brain is a fantastic but finite machine.
That is, the brain can bog down like any other computer when pushed to its limits. But a large part of what makes the brain marvelous is how it deals with those limits: seemingly intractable neurocomputational problems can become tractable through the on-the-fly invention of new algorithms. The results of these algorithms are often plans for future action in which some swamp of prohibitive complexity is navigated successfully and efficiently.
Making sense of a subway network is one such example. Planning a trip through a vast web of stations and lines is a computationally complex task that quickly explodes as more lines and more stations are added. Understanding how the brain pulls this off without nuking itself like a laptop in a microwave would be of great interest to artificial intelligence and machine learning researchers.
A Google Deepmind-led study published this week in the journal Neuron offers a new, clear look at the neural activity behind such learning, finding that the brain handles hierarchical planning problems (e.g. navigating subway trips) via its own efficient hierarchical representations. Computational optimization through evolution, in other words.
"Human cognition has evolved to meet this challenge, as exemplified by our ability to form and follow plans over multiple timescales, for example when finding an efficient route to run a series of errands, or envisaging a future career path and taking steps toward its fulfillment," the Deepmind researchers write. "Although we have known for decades that planning involves the [prefrontal cortex], to date, very little has been revealed about the computational mechanisms that unfold in these regions during plan formation and execution."
The typical approach to planning, computationally speaking, can be viewed as a search of all possible future states. The goal is to find and evaluate all possible outcomes, so we start stepping through what amounts to a network of possibilities. One action opens up some number of new actions, which all in turn open up new actions. And so on. This is how computers play chess or Go: calculating everything that might happen given a particular move and then picking the best one. But chess and Go represent idealized worlds.
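That "flat" search can be sketched in a few lines. Here is a minimal example, assuming a made-up five-station map (the station names and connections are invented for illustration): every simple route between two stations is enumerated, which is exactly the exhaustive strategy that stops scaling as the network grows.

```python
# Hypothetical "flat" subway map: every station is just a node, with
# no notion of lines or hubs. All names and links are made up.
subway = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def all_routes(graph, start, goal, path=None):
    """Enumerate every simple route from start to goal --
    the exhaustive search that blows up on larger networks."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    routes = []
    for nxt in graph[start]:
        if nxt not in path:  # never revisit a station
            routes.extend(all_routes(graph, nxt, goal, path))
    return routes

print(len(all_routes(subway, "A", "E")))  # 2 simple routes in this toy map
```

On five stations this is instant; the trouble is that the number of routes the search must touch grows combinatorially with each station and line added.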
"Because the number of possible action sequences grows exponentially with each additional step in the planning horizon, this approach is computationally intractable in many natural environments," the paper notes. "For example, a visitor would probably not plan a trip to London by envisaging every unique interim step en route to the destination, but might rather imagine attaining only a subset of key states, such as reaching an airport or other transport hub."
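The blow-up the paper describes is easy to put numbers on. A tiny sketch, assuming an illustrative branching factor of three actions per step:

```python
# With b possible actions per step and a planning horizon of d steps,
# an exhaustive planner must consider on the order of b**d sequences.
# b = 3 is an arbitrary illustrative branching factor.
for b, d in [(3, 5), (3, 10), (3, 20)]:
    print(f"b={b}, d={d}: {b**d:,} action sequences")
```

Tripling the horizon from 5 steps to 20 takes the search space from a few hundred sequences to over three billion, which is why envisaging "every unique interim step" is a non-starter.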
This subset consists of contexts, and this is what enables the brain to accomplish complex planning. Instead of imagining a trip to Las Vegas, it's a simpler matter to consider switching contexts to just Nevada. In the subway sense, our brains may try to compute a trip from one particular station to another, but they seem to do better when the trip can be imagined as a process of context switching, e.g. moving from geographical region to geographical region or from subway line to subway line. This sort of planning is enabled by the presence of built-in hierarchy within a network.
"Unlike planning in a 'flat' (non-hierarchical) environment, plans formed in a hierarchical environment need not specify each and every state linking the current position and goal," the Deepmind group explains. "Rather, it is sufficient to identify the current context and the (termination) conditions that allow the next context to be reached; for example, when planning a journey from Marble Arch to King's Cross on the London Underground, one should 'take the Central Line to Oxford Circus, and from there, switch to the Victoria Line.'"
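The Marble Arch example can be sketched as code. This is a toy version, not the study's method: only the three stations named in the quote are modeled, and the search runs over lines ("contexts") rather than stations, with each plan step naming a line and the interchange at which to leave it.

```python
from collections import deque

# Toy hierarchical map: lines are the contexts, stations are just
# members of lines. Only the quote's three stations are included.
lines = {
    "Central": {"Marble Arch", "Oxford Circus"},
    "Victoria": {"Oxford Circus", "King's Cross"},
}

def plan_by_context(lines, start, goal):
    """Breadth-first search over lines rather than stations: each
    plan step is (line, station at which to exit that line)."""
    start_lines = [l for l, sts in lines.items() if start in sts]
    queue = deque((l, []) for l in start_lines)
    visited = set(start_lines)
    while queue:
        line, steps = queue.popleft()
        if goal in lines[line]:
            return steps + [(line, goal)]
        for other, stations in lines.items():
            if other in visited:
                continue
            shared = lines[line] & stations
            if shared:
                visited.add(other)
                # exit the current line at any shared interchange
                queue.append((other, steps + [(line, min(shared))]))
    return None  # no sequence of line changes reaches the goal

print(plan_by_context(lines, "Marble Arch", "King's Cross"))
# -> [('Central', 'Oxford Circus'), ('Victoria', "King's Cross")]
```

The plan never mentions the intermediate stations along either line; it specifies only the current context and the termination condition for reaching the next one, which is the point of the quote.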
The researchers were able to verify this by observing brain electrical activity in subjects tasked with navigating a made-up subway network. The key finding was that neural activity spiked in response to changes of subway line and to bottleneck stations connecting different lines, rather than scaling with the number of stations along a route. The implication is as above: The brain is computing its plan according to contexts rather than exhaustive searches of possible routings between stations.
"We want to see how the human brain implements things like hierarchical structures in order to design more clever algorithms," notes Jan Balaguer, a study co-author and Deepmind member, in a statement. "In machine learning, having a hierarchical representation for decision making might be helpful or harmful depending on whether you choose the right hierarchy to implement in the first place."