augmentedsculpture by pablo valbuena, originally uploaded by hc gilje.
I found the title of this post in one of Michael Naimark's essays. I guess it could also have been called augmented reality, projection of a virtual object onto a physical object, projecting a virtual layer on top of a physical geometry, masking of projections, etc.
I have been researching different ways of projecting onto things other than flat surfaces: projections onto objects, projections that follow the shape of a room, and projections of virtual 3D shapes onto physical 3D shapes.
In my own work I have used projections as advanced light sources, and masking both as a way to fit flat projections onto objects and surfaces and as a way to create the illusion of multiple screens from a single source. Some examples here. (Update May 2008: some more recent examples)
My goal has been to create tools which make it easy to start working with a physical space immediately, and to make changes in real time. I have mainly done this by using multiple opengl videoplane layers in max/msp jitter, with one of the layers set to a drawing mode, so that after placing an opengl layer over a particular object you can trace its shape. I made a crude three-layer tool for the workshop I did at KHIO this summer to let the participants immediately start relating to the physical space.
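For those who don't use max/msp jitter, here is a minimal sketch of the same masking idea in Python with OpenCV. The video file name is hypothetical, and the polygon corners are placeholders you would adjust while watching the projected output on the actual object:

```python
# Minimal sketch of layer masking: only the traced shape receives video.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mov")            # hypothetical video file
corners = np.array([[100, 80], [520, 60],     # traced outline of the object,
                    [560, 400], [80, 420]],   # in projector pixel coordinates
                   dtype=np.int32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Black mask with the traced object shape filled white:
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [corners], 255)
    # Everything outside the drawn shape goes black, so only the
    # physical object receives video when this frame is projected:
    masked = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("projector", masked)
    if cv2.waitKey(30) == 27:                 # esc to quit
        break
```

The hand-traced polygon plays the same role as the drawing layer in the jitter patch: it decides which projector pixels carry video.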
A prime example of multiple opengl videoplanes is Piotr Pajchel's work with Verdensteatret.
I have done some experiments with projecting a 3D shape onto physical objects, but still have a long way to go in terms of having a simple setup for this.
Obviously I have been looking at what other people have been doing, but none of the systems I have found seem to be available to the public, and few of them seem to have been used beyond their development period, which might be a sign that they are not as flexible as desired, and maybe also quite time-consuming to prepare.
Most systems use some method of tracking the shape/space they want to project onto, in combination with custom-made software, to map the projected image correctly onto the physical object. The mapping depends on the lens specifications of the projector, the placement of the projector in relation to the objects being projected on, etc.
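As a concrete (and heavily simplified) illustration of such a mapping: if the surface is flat, the relation between the source video and the projector image is a homography, and four point correspondences are enough to fit it. A sketch in Python with OpenCV, where the corner coordinates and file names are made up:

```python
# Fit a homography from four correspondences, then pre-distort a frame
# so it lands correctly on a flat physical surface when projected.
import cv2
import numpy as np

src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])      # video corners
dst = np.float32([[52, 33], [598, 61], [575, 452], [71, 430]])  # where those
# corners should land in the projector image to fit the physical surface

H = cv2.getPerspectiveTransform(src, dst)
frame = cv2.imread("frame.png")                      # hypothetical input image
warped = cv2.warpPerspective(frame, H, (800, 600))   # projector resolution
cv2.imwrite("projector_frame.png", warped)
```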
The LightTwist system developed at the University of Montreal (not much seems to have happened after 2004) uses “a panoramic (catadioptric) camera to get correspondences between each projector pixel with the camera pixel. This camera represents the viewpoint of our future observers. Then, from what the observer should see, we can build the projector images from their respective mapping.”
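I read this as: calibration yields a per-pixel lookup table, and after that every projector frame is just a remap of what the observer should see. A sketch of that remapping step (not LightTwist's actual code; the lookup maps below are fabricated just so the example runs):

```python
# Per-pixel remapping: for every projector pixel, look up which pixel of
# the observer's intended view it should display.
import cv2
import numpy as np

proj_w, proj_h = 800, 600
xs, ys = np.meshgrid(np.arange(proj_w, dtype=np.float32),
                     np.arange(proj_h, dtype=np.float32))
# In a real system map_x/map_y come from the camera calibration step;
# here they are faked as identity plus a gentle wave:
map_x = xs + 10 * np.sin(ys / 40)
map_y = ys

observer_view = cv2.imread("observer_view.png")   # what the viewer should see
projector_frame = cv2.remap(observer_view, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("projector_frame.png", projector_frame)
```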
Videobjects from Whitevoid design in Germany is software for real-time distortion of video to fit physical objects, but it works with pre-distorted video, and you calibrate it either with a help grid or by importing a model of the real-world setup. So you would first need to create the 3D shapes to project onto, then decide how the video maps onto the 3D objects, and finally do the calibration to match up the virtual objects with the physical ones.
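The idea behind the imported-model calibration is, as I understand it, to treat the projector as an inverse camera: once its lens parameters and pose are known, every vertex of the virtual model lands at a predictable projector pixel. A tiny sketch of that projection step, where all the numbers are placeholder assumptions:

```python
# Projector as inverse camera: project a 3D model vertex to projector pixels.
import numpy as np

K = np.array([[1400, 0, 400],        # projector "focal length" and center,
              [0, 1400, 300],        # in pixels (assumed values)
              [0,    0,   1]], dtype=float)
R = np.eye(3)                        # projector rotation (assumed: none)
t = np.array([[0.0], [0.0], [3.0]])  # projector 3 m in front of the model

def project(point_3d):
    """Map a 3D model point (in meters) to projector pixel coordinates."""
    p = K @ (R @ point_3d.reshape(3, 1) + t)
    return (p[:2] / p[2]).ravel()

# A corner of the physical object, in the model's coordinate system:
print(project(np.array([0.5, 0.2, 0.0])))   # -> [633.33, 393.33]
```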
I think the most spectacular calibration solution so far is the “automatic projector calibration with embedded light sensors” (pdf), a collaboration between people from Carnegie Mellon, Mitsubishi Electric Research Lab and Stanford. They use fiber optics and light sensors built into the objects/surfaces to be projected on, and by projecting a series of Gray-coded binary patterns, custom-made software is able to adjust the image in less than a second to perfectly fit the projection surface, with much higher resolution than a camera-based solution. Take a look at the impressive video:
The pdf and video seem to be from 2004, but I found some more information at Johnny Chung Lee's website. They are hoping to make the system fast enough for live tracking of moving objects, and also to make the calibration pattern invisible by using infrared light.
update: there is now more information on Lee's website.
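To make the Gray-code trick concrete: each pattern in the series is one bit-plane of the projector column index, so a sensor that records which patterns lit it up can decode its own projector coordinate. A minimal sketch in Python (numpy only; the sensor reading is simulated, and rows would be handled the same way as columns):

```python
# Gray-coded structured light: encode projector columns as stripe patterns,
# then decode a sensor's lit/not-lit readings back to a column index.
import numpy as np

width = 1024
n_bits = 10                               # 2**10 = 1024 projector columns

def gray_patterns(width, n_bits):
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)             # binary-reflected Gray code
    # One bit-plane per pattern; a real pattern image would repeat this
    # row of stripes vertically across the whole projector frame.
    return [(gray >> b) & 1 for b in reversed(range(n_bits))]

def decode(bits):
    """Recover a projector column from the bit string a sensor recorded."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | b            # reassemble the code, MSB first
    col, shift = gray, 1
    while gray >> shift:                  # Gray -> binary conversion
        col ^= gray >> shift
        shift += 1
    return col

patterns = gray_patterns(width, n_bits)
sensor_column = 613                       # ground truth, for the demo only
bits = [int(p[sensor_column]) for p in patterns]
print(decode(bits))                       # prints 613
```

With ten patterns covering 1024 columns, this is why the calibration can finish in under a second, and at the projector's full resolution rather than a camera's.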
If you have a big budget you could always invite Circus of Now to do the video for you (“We build skyscrapers of light”).
At Ars Electronica this year I had the pleasure of seeing Pablo Valbuena's Augmented Sculpture (image at top of this post), which consists of a physical structure in the corner of the room with the exact same virtual shape projected onto it using a single projector. By animating the color and lighting of this virtual shape, he creates some very interesting light/shadow play. Valbuena collaborates with some game developers in Spain who constructed the virtual model and animation in a standard 3D software.
This work shows the potential of augmented reality using video projection, and I hope to see more of his work soon. (He has a big outdoor installation in Madrid at the moment; hopefully there will be some documentation soon.)
update feb 5th 2008: Valbuena has updated his website with documentation of several projects: different versions of the augmented sculpture and the public square installation in Madrid.