

The Biggest Obstacle to Data Growth Is the Human Brain

Janus Rose
New York, United States
December 5, 2011, 11:15pm

The giant datasphere that represents the digitized collective knowledge of the entire human race is experiencing growing pains — us.

A small team of computer scientists has found that the amount of new data that can be created is limited. The growth patterns follow the Weber-Fechner law, which was originally derived from measurements of how people perceive changes in the weight of an object, and the threshold for new data depends on the type of data in question. The scientists behind the research say our inability to meaningfully absorb large amounts of data is the cause.
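For the curious, the Weber-Fechner law says that perceived intensity grows only logarithmically with the actual size of a stimulus. The sketch below illustrates the relation in a few lines of Python; the constant and threshold values are illustrative placeholders, not numbers taken from the study.

import math

def perceived_intensity(stimulus, threshold=1.0, k=1.0):
    """Weber-Fechner law: perception grows with the log of the stimulus.
    'threshold' is the smallest detectable stimulus and 'k' is a scaling
    constant; both are illustrative placeholders, not values from the study."""
    return k * math.log(stimulus / threshold)

# Doubling a small stimulus feels like a big jump; doubling a huge one barely registers.
print(perceived_intensity(2))      # ~0.69
print(perceived_intensity(1000))   # ~6.91
print(perceived_intensity(2000))   # ~7.60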

The finding seems strange at a time when data is becoming increasingly precious to search engines, social networks and advertisers. But by analyzing the distributions of around 600 million files linked from Wikipedia by type (image, audio, video and text), the study found a relationship between how long it takes us to perceive a given volume of data and our ability to create more of it. In other words, time-demanding data like video and audio follow a different growth curve than images, which can be absorbed almost instantly, suggesting that the human brain's capacity to perceive digital content shapes how much of it actually gets made.
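As a rough illustration of that kind of analysis, one could bucket linked files by media type and compare the counts before fitting growth curves per category. The file names and extension groupings below are hypothetical, not the study's actual dataset or method.

from collections import Counter
import os

# Hypothetical sample of linked files; the real study examined around 600 million.
links = ["map.png", "lecture.ogg", "interview.webm", "notes.txt", "chart.svg", "speech.mp3"]

# Crude extension-to-media-type buckets; the groupings are assumptions for illustration.
MEDIA_TYPES = {
    ".png": "image", ".svg": "image", ".jpg": "image",
    ".ogg": "audio", ".mp3": "audio",
    ".webm": "video", ".mp4": "video",
    ".txt": "text",
}

counts = Counter(MEDIA_TYPES.get(os.path.splitext(name)[1].lower(), "other") for name in links)
for media_type, count in counts.most_common():
    print(media_type, count)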

In a sense it's not surprising given how fundamentally different we are from the machines we filter our knowledge through. But if there is any hope to level the playing field without becoming machines ourselves, it will almost certainly come in the form of artificial intelligence.

AI is already being studied as a kind of neurophysiological equalizer for humans, hypothetically able to re-contextualize data into forms we can absorb more easily. But even if the advanced AI search engines of the future gain the ability to, say, deconstruct a YouTube video into a form that the human brain can digest instantly, how much knowledge would be lost in translation? With the market for fragmentary knowledge already booming thanks to services like Twitter and Wikipedia, maybe we should be obeying our brains rather than trying to keep pace with a time-independent machine world.

Connections:

Metacritic Proves Data Mining Still Doesn't Work
Some Human Thoughts On Watson, The AI That Beat Us In Jeopardy
Data Visualization And Disaster