
In the spring of 2009, researchers from Cornell University presented their findings at the World Wide Web conference in Madrid.  For the first time in the history of photography, someone had attempted to quantify how many photographs were being taken at the world’s most popular tourist destinations.  Using a supercomputer at the Cornell Center for Advanced Computing, the Cornell scientists downloaded and analyzed nearly thirty-five million Flickr photographs taken by more than 300,000 photographers from around the world.  Within that data they found that the Eiffel Tower, according to Flickr at least, was the most photographed landmark in the world, followed by a long list of other popular tourist destinations.
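
To give a rough sense of the kind of analysis involved, the sketch below ranks locations by how many distinct photographers uploaded geotagged pictures of them.  It is purely illustrative: the photo records are invented, and the coarse grid bucketing is only a stand-in for the far more sophisticated clustering the Cornell team ran on their supercomputer.

    # Illustrative sketch only: ranking locations by distinct photographers.
    from collections import defaultdict

    # Hypothetical geotagged photo records: (photographer, latitude, longitude).
    photos = [
        ("alice", 48.8584, 2.2945),   # near the Eiffel Tower
        ("bob",   48.8583, 2.2947),
        ("dana",  48.8585, 2.2944),
        ("carol", 48.8530, 2.3499),   # near Notre-Dame
        ("alice", 48.8531, 2.3498),
    ]

    def grid_cell(lat, lon, cell_size=0.01):
        """Bucket coordinates into a coarse grid cell (roughly 1 km across)."""
        return (round(lat / cell_size), round(lon / cell_size))

    # A landmark's popularity is proxied by how many distinct photographers
    # shot it, rather than by the raw number of photographs.
    photographers_by_cell = defaultdict(set)
    for user, lat, lon in photos:
        photographers_by_cell[grid_cell(lat, lon)].add(user)

    ranking = sorted(photographers_by_cell.items(),
                     key=lambda item: len(item[1]), reverse=True)
    for cell, users in ranking:
        print(cell, "-", len(users), "photographers")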


Just two years earlier, in the spring of 2007, Microsoft launched a software program called Photosynth at the annual TED Conference.  In the demo, speaker Blaise Agüera y Arcas showed the program constructing a navigable 3D environment out of user-uploaded photographs.  Using the tags associated with the photographs and a complex mathematical rendering engine, the software estimates the point in space from which each photograph was taken and attempts to stitch it together with similar photographs taken nearby.  Agüera y Arcas showed one such reconstruction of Notre-Dame Cathedral in Paris, dazzling the audience as he panned through space and time within a virtual world.
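
The core trick such systems depend on is finding visual overlap between photographs of the same scene.  The sketch below is a minimal stand-in rather than Photosynth’s actual pipeline: it matches distinctive features between two photographs using the OpenCV library, the file names are placeholders, and a full reconstruction would add camera-pose estimation and bundle adjustment on top of matches like these.

    # Minimal sketch of the feature matching that photo-stitching and
    # 3D-reconstruction systems build on; not Photosynth's actual pipeline.
    import cv2

    # Two photographs of the same scene from nearby viewpoints (placeholder names).
    img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect distinctive keypoints and compute descriptors for each image.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors between the two images; many strong matches suggest the
    # photographs overlap and can be stitched or placed in a shared 3D space.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    print(len(matches), "candidate correspondences between the two photographs")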


While Photosynth was, and still is, at the cutting edge of image stitching, other practical applications of its approach are appearing across the Internet.  Google Maps, for example, uses similar software to enhance its Street View feature, allowing users to break away from the usual routes and explore a space by clicking on user-uploaded photographs instead.

 
The very idea of recreating space and time through images mapped onto three-dimensional space is an intriguing one.  With the rise of digital devices and electronics with embedded cameras, it is surely only a matter of time before our most heavily populated areas are mapped in extreme detail, perhaps within the next decade.  The software effectively creates a digital copy of the visible physical world: a virtual world of unparalleled detail.


The implications of this software as a starting point for future technology are truly astounding, and it will surely raise many questions about its best uses, along with its problems and its privacy concerns.  It is not hard to imagine the world fully spatially mapped in the near future – but what exactly would it be a map of?


What if our photographs, immediately after being taken (if we allowed it, of course), were instantly uploaded to such an environment, effectively creating a constantly updated virtual map of the world in three-dimensional space?  Could we watch environments change in real time, witnessing photographed moments as they happened?


For over one hundred and fifty years, photographs have had a clearly defined space to exist within.  The future of the photograph remains much less predictable. One thing, however, is becoming increasingly clear:  the authoritative, singular photograph that was once associated with analogue photography is rapidly reaching its end.  Digital technology is providing a new role for the photograph, one that places it as just a small piece of a much more complex, connected and networked picture.  
