Imagine a virtual reality mapping environment designed to perfectly replicate a city's streets and buildings. It could be powered by data from a team of cars driving around with roof-mounted cameras pointing in every direction, the resulting images manipulated into a map-based wireframe grid to fully immerse users. OK, so you're thinking this is old hat, right? I mean, Google Street View has been doing this for years now. Thing is, the idea was around long before Google. In fact, M.I.T. students were doing the same thing on the rich streets of Aspen as far back as 1978.
The Aspen Movie Map was developed by a large cast of talented individuals, including principal investigator Andy Lippman, Bob Mohl, who wrote the dissertation, and Nicholas Negroponte, who was heading M.I.T.'s Architecture Machine Group, the predecessor to the school's vaunted Media Lab. (Michael Naimark, who did the cinematography design and production, has a full list of credits on his site.) The DARPA-funded concept was brilliant: The team used video from 16mm stop-frame film cameras mounted on cars to create a "movie" of Aspen's streets.
By triggering the cameras to fire every ten feet as the car drove around (the distance measured by a sensor on a bike wheel mounted to the car), the team was able to capture the base portraits for every sector of the city. They placed that footage into specific sectors on a laserdisc in order to correlate with the city's map, no small feat considering that laserdiscs still stored only analog video.
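The distance-triggered capture scheme is simple enough to sketch. Here's a hedged illustration of the idea in Python; the sensor resolution is an assumption, and only the ten-foot trigger interval comes from the account above:

```python
# Illustrative sketch of distance-triggered capture, not the original rig's
# logic: a wheel-mounted sensor reports distance pulses, and the controller
# fires the camera each time another ten feet has accumulated.

PULSES_PER_FOOT = 4          # assumed sensor resolution (hypothetical)
TRIGGER_INTERVAL_FT = 10.0   # per the article: one frame every ten feet

def exposures(pulse_count):
    """Return (frame_index, distance_ft) pairs for every exposure fired."""
    distance = pulse_count / PULSES_PER_FOOT
    n_frames = int(distance // TRIGGER_INTERVAL_FT) + 1  # frame 0 at the start
    return [(i, i * TRIGGER_INTERVAL_FT) for i in range(n_frames)]

# Driving 500 feet (2,000 pulses) yields 51 frames, at 0, 10, ... 500 ft:
frames = exposures(2000)
print(len(frames), frames[-1])  # 51 (50, 500.0)
```

The even spacing is what made the footage navigable: because every frame sits a known ten feet from its neighbors, a frame index translates directly into a position on the map.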
Once the laserdisc storage sites of street footage were correlated to a two-dimensional map of the city, Naimark's team was able to program an interface that allowed users to move wherever they wanted. A series of navigation buttons was laid over the video image – remember, this is still analog video, with a digital interface on top – with which users could plot their course in a similar fashion to Street View. Even better, a user could tap on a building to focus on its façade, and certain buildings even had extra data for users, like interior views.
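At its core, that correlation is a lookup table: a map position and heading resolve to a laserdisc frame number to display. A minimal sketch of that kind of index, with street names, offsets, and frame numbers invented purely for illustration:

```python
# Hypothetical sketch of a map-to-laserdisc lookup, not the original data.
# Each (street, block offset, heading) key maps to a stored frame number.

frame_index = {
    ("Main St", 0, "E"): 1200,
    ("Main St", 10, "E"): 1201,
    ("Main St", 20, "E"): 1202,
    ("Galena St", 0, "N"): 4310,
}

def frame_at(street, offset_ft, heading):
    """Snap an arbitrary offset to the nearest 10-ft capture point and
    return the laserdisc frame to display, or None if uncovered."""
    snapped = round(offset_ft / 10) * 10
    return frame_index.get((street, snapped, heading))

print(frame_at("Main St", 12, "E"))  # 1201
```

Because the laserdisc offered random access to any frame, a lookup like this was enough to let the interface jump the playhead wherever the user steered.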
As the Movie Map was refined, the metadata for specific bits of film was eventually encoded as a digital signal within the analog video, allowing users to move about more smoothly. The team also added a flat navigation map over the horizon at the top of each frame, helping users avoid getting lost. (Sounds like Grand Theft Auto, right?) Eventually the Movie Map even evolved to provide a three-dimensional polygonal, texture-mapped model of the city.
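Embedding digital metadata in an analog video signal amounts to packing each frame's attributes into a compact payload the player can read back. The field layout below is an assumption for illustration, not the Movie Map's actual format:

```python
# Hypothetical sketch of packing per-frame metadata (street ID, offset,
# heading) into a fixed 6-byte payload that could ride alongside the
# analog video. The field layout is invented, not the original encoding.

import struct

def pack_frame_meta(street_id, offset_ft, heading_deg):
    """Pack three 16-bit fields, big-endian, into 6 bytes."""
    return struct.pack(">HHH", street_id, offset_ft, heading_deg)

def unpack_frame_meta(payload):
    """Recover the (street_id, offset_ft, heading_deg) tuple."""
    return struct.unpack(">HHH", payload)

blob = pack_frame_meta(42, 130, 90)
print(unpack_frame_meta(blob))  # (42, 130, 90)
```

With metadata riding on every frame, the system no longer had to infer position from playback order alone, which is what made the smoother navigation possible.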
The Movie Map idea seems pretty standard to us these days, but it was stunningly complex at the time. DARPA's interest in the project was straightforward: the promise of realistic, properly-scaled digital copies of real environments would theoretically speed soldiers' familiarization with a specific location. Of course, they hadn't yet worked out how they might actually film sensitive locations to create the map, but the concept of a virtual mapping environment was nonetheless compelling.
Despite DARPA's interest, the Movie Map was famously awarded a Golden Fleece Award in 1980. The award was created by Senator William Proxmire in 1975 as a sarcastic way of calling out public officials who wasted money on projects he deemed frivolous. (Just about every governmental department received one at some point or another during the award's 13-year existence.) The award – which was later roundly criticized – is a testament to how advanced and out-there the Movie Map was at its time.
These days, we're a little more used to the idea. Reliant, even. So the next time you find yourself wandering through Street View, remember to pay homage to the Movie Map, and take a moment to wonder at the fact that modern mapping environments, for some reason, all hearken back to Aspen.