relief projection

augmented sculpture by Pablo Valbuena, originally uploaded by hc gilje.

I found the title of this post in one of Michael Naimark's essays. I guess it could also have been called augmented reality: the projection of a virtual object onto a physical object, projecting a virtual layer on top of a physical geometry, masking of projections, etc.

I have been researching different ways of projecting onto things other than flat surfaces: projections that target objects, projections that follow the shape of a room, and projections of virtual 3D shapes onto physical 3D shapes.

In my own work I have used projections as advanced light sources, using masking to fit flat projections onto objects and surfaces, but also to create the illusion of multiple screens from a single source. Some examples here. (Update May 2008: some more recent examples)

My goal has been to create tools that make it easy to start working with a physical space immediately, and to make changes in realtime. I have mainly done this by using multiple OpenGL videoplane layers in Max/MSP Jitter, with one of the layers in a drawing mode, so you can trace the shape of a particular object after you have placed an OpenGL layer over it. I made a crude 3-layer tool for the workshop I did at KHIO this summer to let the participants immediately start relating to the physical space.
A prime example of multiple OpenGL videoplanes is Piotr Pajchel's work with Verdensteatret.
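The masking idea above can be sketched in a few lines. This is not Max/MSP Jitter code, just a rough Python illustration of the principle: a video frame is multiplied by a hand-drawn mask so that light only lands where an object sits. The tiny grids stand in for OpenGL videoplanes, and all the names are mine.

```python
# Rough sketch of layer masking (not Jitter code): a bright video layer
# is multiplied by a mask "drawn" over the object so the projection only
# hits that shape. Tiny lists stand in for full-resolution videoplanes.

W, H = 8, 6

def blank(value=0.0):
    return [[value] * W for _ in range(H)]

def draw_rect(mask, x0, y0, x1, y1):
    """Crude stand-in for tracing an object's outline: fill a rectangle."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            mask[y][x] = 1.0
    return mask

def composite(video, mask):
    """The masked layer: video shows only where the mask was drawn."""
    return [[video[y][x] * mask[y][x] for x in range(W)] for y in range(H)]

video = blank(0.9)                     # a bright full-frame video layer
mask = draw_rect(blank(), 2, 1, 6, 4)  # outline "drawn" over the object
out = composite(video, mask)
print(out[2][3], out[0][0])  # 0.9 inside the mask, 0.0 outside
```

In practice the drawn mask can be any shape, and several masked layers from one projector give the illusion of multiple screens.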

I have done some experiments with projecting a 3D shape onto physical objects, but still have a long way to go in terms of having a simple setup for this.
Obviously I have been looking at what other people have been doing, but none of the systems I have found seem to be available to the public, and few of them seem to have been used beyond their development period, which might be a sign that they are not as flexible as hoped, and perhaps also quite time-consuming to prepare.

Most systems use some method of tracking the shape/space they want to project onto, in combination with custom-made software, to map the projected image correctly onto the physical object. The mapping depends on the lens specifications of the projector, the placement of the projector in relation to the objects being projected on, and so on.
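For the simplest case, a flat surface, this mapping boils down to a planar homography that four measured point correspondences between projector pixels and surface points fully determine. The following is a minimal pure-Python sketch of that idea; the helper names and the example numbers are mine, not from any of the systems mentioned here.

```python
# Sketch of the flat-surface mapping problem: four projector-pixel /
# surface-point correspondences determine a homography H (h33 fixed to 1).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(src, dst):
    """src, dst: four (x, y) pairs each. Returns the 8 free parameters of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

def apply_homography(h, x, y):
    w = h[6] * x + h[7] * y + 1
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# Calibrate from four marked points, then warp any projector pixel
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, -5), (210, -5), (210, 295), (10, 295)]
h = fit_homography(src, dst)
print(apply_homography(h, 25, 40))   # close to (60.0, 115.0)
```

Curved objects and full 3D shapes need more correspondences and a real camera or sensor model, which is what the systems below try to automate.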

The LightTwist system developed at the University of Montreal (not much seems to have happened after 2004) uses “a panoramic (catadioptric) camera to get correspondances between each projector pixel with the camera pixel. This camera represents the viewpoint of our futur observers. Then, from what the observer should see, we can build the projector images from their respective mapping.”

Videobjects from Whitevoid design in Germany is software for realtime distortion of video to fit physical objects, but it works with predistorted video: you calibrate it either with a help grid or by importing a model of the real-world setup. So you would first need to create the 3D shapes to project onto, then decide how the video will map onto the 3D objects, and finally do the calibration to match up the virtual objects with the physical ones.

I think the most spectacular calibration solution so far is the “automatic projector calibration with embedded light sensors” (pdf), a collaboration between people from Carnegie Mellon, Mitsubishi Electric Research Lab and Stanford. They use fiber optics and light sensors built into the objects/surfaces to be projected on, and by projecting a series of Gray-coded binary patterns, custom-made software is able to adjust the image in less than a second to fit the projection surface perfectly, at a much higher resolution than a camera-based solution. Take a look at the impressive video:

The pdf and video seem to be from 2004, but I found some more information at Johnny Chung Lee's website. They are hoping to make the system fast enough for live tracking of moving objects, and also to make the calibration pattern invisible using infrared light.
Update: there is now more information on Lee's website.
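The Gray-coded patterns are what make this calibration so fast: with one on/off pattern per bit, a sensor only needs as many readings as there are bits in a pixel coordinate. Here is a small Python sketch of the principle (pure software, no hardware; names and structure are illustrative, not taken from the paper).

```python
# Sketch: how Gray-coded binary patterns let an embedded light sensor
# recover which projector column it sits under. Six patterns suffice
# for 64 columns, since 2**6 = 64.

def to_gray(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Inverse of to_gray."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns(width: int, bits: int):
    """One on/off pattern per bit: pattern k lights column x
    iff bit k of gray(x) is set (most significant bit first)."""
    return [[(to_gray(x) >> (bits - 1 - k)) & 1 for x in range(width)]
            for k in range(bits)]

def decode(sensor_bits) -> int:
    """The sensor saw one on/off reading per pattern; rebuild the column."""
    g = 0
    for b in sensor_bits:
        g = (g << 1) | b
    return from_gray(g)

# A sensor glued at projector column 37 of a 64-column projector (6 bits)
pats = patterns(64, 6)
readings = [p[37] for p in pats]   # what the sensor measures per pattern
print(decode(readings))            # -> 37
```

Gray codes are preferred over plain binary here because adjacent columns differ in only one bit, so a sensor sitting on a pattern boundary can be off by at most one pixel.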

If you have a big budget you could always invite Circus of Now to do the video for you (“We build skyscrapers of light”).

At Ars Electronica this year I had the pleasure of seeing Pablo Valbuena's Augmented Sculpture (image at top of this post), which consists of a physical structure in the corner of a room with the exact same virtual shape projected onto it using one projector. By animating the color and lighting of this virtual shape, some very interesting light/shadow plays emerge. Valbuena collaborates with some game developers in Spain who constructed the virtual model and animation in standard 3D software.
This work shows the potential of augmented reality using video projection, and I hope to see more of his work soon. (He has a big outdoor installation in Madrid at the moment; hopefully there will be some documentation soon.)

Update Feb 5th 2008: Valbuena has updated his website with documentation of several projects: different versions of the augmented sculpture and the public square installation in Madrid.

The mirror project

I was introduced to Martin Andersen last week, the artist behind the mirror project in Rjukan. Rjukan lies in a valley where the sun disappears behind the mountains for 5 months a year. Martin wants to construct a heliostat (a mirror that tracks the position of the sun) to bring sunlight to the town square of Rjukan. This is actually an old idea from 1913, supported by Sam Eyde, the director of Norsk Hydro (which basically founded Rjukan for industrial purposes).

Unlike a similar project in Viganella in Italy, which uses brushed steel as the reflecting surface, the mirror project will use mirrors that focus the sunlight onto the town square alone (about 100 m²).
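The geometry behind a heliostat is simple: a flat mirror's normal must bisect the direction to the sun and the direction to the target, so as the sun moves, the mirror re-aims by recomputing that bisector. A minimal Python sketch, with made-up direction vectors for illustration:

```python
# Heliostat geometry sketch: the mirror normal is the bisector of the
# (unit) direction to the sun and the (unit) direction to the target.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def mirror_normal(to_sun, to_target):
    """Unit normal a flat mirror needs so light from to_sun reflects
    toward to_target (both vectors point away from the mirror)."""
    s, t = normalize(to_sun), normalize(to_target)
    return normalize(tuple(a + b for a, b in zip(s, t)))

def reflect(d, n):
    """Reflect incoming ray direction d (unit) about unit normal n."""
    k = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * k * b for a, b in zip(d, n))

# Sun high to the south, town square down in the valley to the north
to_sun = (0.0, -0.6, 0.8)     # toward the sun
to_target = (0.0, 0.8, -0.6)  # toward the square
n = mirror_normal(to_sun, to_target)
ray_out = reflect(tuple(-c for c in to_sun), n)  # sunlight bounces off
print(ray_out)  # points (to within rounding) along to_target
```

A real heliostat would get `to_sun` from a solar position algorithm for Rjukan's latitude and the time of day, and drive motors accordingly.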

David Cuartielles´s talk at Piksel

This year's Piksel festival is over, with a focus on circuit bending and open hardware solutions. I could only stay the first day, but I have been following parts of the festival through the quite impressive streams archive from the festival.

I got to see the wonderful Loud Objects live, which I have written about before.

Last night I watched David Cuartielles from the Arduino team give an interesting talk about the beginnings of the Arduino project, the current status of Arduino development, and the challenges ahead. They have sold over 20,000 boards, and it has become the standard tool for electronics prototyping. As a result of this position there are now clones being made all over the world, for instance a dual-core Arduino (!!) made by some people in South Korea.

He talked about how everything with Arduino is open; they only reserve the use of the name Arduino. There is quite a big difference between open software and open hardware, as hardware needs to be manufactured, so you need some initial investment. They have an interesting strategy for funding new development by letting corporations and institutions pay them to work on the Arduino project: for instance, Samsung agreed to let Cuartielles make a lot of board designs which will be available for free soon, without Samsung having any rights to the designs. Some of these boards looked very interesting, like a 32-channel output board (which can be daisy-chained) with transistors to control lights, motors etc., a 64-channel input board, relay boards and more.

The Arduino project is being restructured into a foundation, and each year some money will be used to support people and projects that have problems with access to hardware.

Here is a direct link to his talk (ogg stream).

mikro performance

mikro performance, originally uploaded by hc gilje.

Mikro is a series of improvised performances using the immediate surroundings as raw material: a microscope captures everyday objects and surfaces like wallpaper, coins, clothing, furniture and newspapers and transforms them into an explosive universe of textures. Contact microphones and electromagnetic sniffers pick up otherwise inaudible sounds to create the live soundtrack.
Mikro is a collaboration between HC Gilje (video) and Justin Bennett (sound).
Performances so far:
Paradiso (Amsterdam), IMAL (Brussels), TAG (den Haag), DNK (Amsterdam), Bergen Kunsthall Landmark (Bergen), Laznia (Gdansk)

Verdensteatret installations

Louder_k11, originally uploaded by hc gilje.

This weekend I got the chance to see the two installations “Fortellerorkesteret” and “Louder” by Verdensteatret at Kunstnernes Hus in Oslo. Both grew out of theater performances by Verdensteatret. I had the opportunity to be a little bit involved in the production of Fortellerorkesteret, so it was nice to see it in this huge, beautiful space.

The installations are a mix of sculpture, sound, video, kinetic objects, light and shadows, and are inspiring examples of composition in space. Fortellerorkesteret has a more theatrical structure, reminiscent of old mechanical puppet theater, while Louder is more of a spatial experience dominated by the huge mechanical spider and the numerous speakers.

More images of Louder and Fortellerorkesteret

Connect the dots

mouselab2.jpg, originally uploaded by hc gilje.

I have been very busy preparing and giving a two-week physical computing workshop, “Connect the dots”, at The Academy of Fine Arts in Bergen. It has its own blog, with lots of useful info related to Arduino, mice etc. (look for the resources category). There are also images available from the workshop.
The aim was to introduce a mixed group of students to the basic concepts of physical computing and how to create relations between objects, spaces, actions and people. So it was a hands-on workshop with Arduino (analog in/out, digital in/out, serial communication with the computer), different sensors, and transistors and relays controlling 12- and 220-volt appliances, combined with discussion and presentations of other artists' work, and the production of a one-day exhibition including a listening post, a mouse radio, a paper-burning machine, a weather machine and a callstation (where the Arduino picks up the phone when you call a specific number ++). Read all about it on the Connect the dots blog!
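The serial-communication exercise can be sketched on the computer side like this. Note that the line format (`"A0:512"`, a 10-bit analogRead value) is my own invention for the example, and a real setup would read from a serial port (e.g. with pyserial) instead of a list of strings:

```python
# Hedged sketch of the serial exercise: the Arduino prints lines like
# "A0:512" and the computer converts the 10-bit ADC value to a voltage.
# The "A0:512" protocol is invented for this example.

def parse_reading(line, vref=5.0):
    """'A0:512' -> ('A0', ~2.5): pin name and voltage for a 10-bit ADC."""
    pin, raw = line.strip().split(":")
    return pin, int(raw) * vref / 1023

incoming = ["A0:0", "A0:512", "A1:1023"]  # stand-in for serial input
for line in incoming:
    pin, volts = parse_reading(line)
    print(pin, round(volts, 2))
```

The same parsing works unchanged once the lines come from a real serial port, which is what makes this a handy first exercise.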