shift v2: relief projection installation

shift v.2, originally uploaded by hc gilje.

I decided to give my current series of relief projections a name, shift: moving from one place to another, changing the emphasis, direction or focus of something. It also has a loose relation to the idea of shapeshifting.
As mentioned in my previous posts about relief projection, shift combines multichannel sequencing and audio generated from video (a sound transducer inside every box, so the sound you hear is directly related to the video projected on that particular box) with masking/mapping a projection to fit physical objects. This creates a dynamic audiovisual landscape, a spatial light painting.
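The masking idea itself is simple, and can be sketched outside of max (a minimal Python illustration with invented names, not the actual videoprojectiontool code): black out every pixel that does not fall on one of the physical surfaces, so the projector only lights the objects.

```python
def mask_frame(frame, boxes):
    """Black out everything outside the given rectangles, so the
    projector only lights the physical objects.
    frame: list of rows of greyscale pixel values (0-255).
    boxes: (x, y, width, height) rectangles, one per surface."""
    height, width = len(frame), len(frame[0])
    out = [[0] * width for _ in range(height)]
    for (bx, by, bw, bh) in boxes:
        for y in range(by, by + bh):
            for x in range(bx, bx + bw):
                out[y][x] = frame[y][x]
    return out

# A tiny 4x6 "frame", fully white, with two 2x2 projection surfaces.
frame = [[255] * 6 for _ in range(4)]
masked = mask_frame(frame, [(0, 0, 2, 2), (4, 2, 2, 2)])
```

In the real tool the masks are of course drawn and warped by hand to fit the perspective of each box; the sketch only shows the principle of cutting the frame up into surfaces.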
The software to create the installation has developed over almost two years and several workshops. I have shown documentation of the development, but never exhibited it as a finished work.
It is only this autumn that I have found the right opportunity to show it in an exhibition: I was invited to participate in the Total Aktion exhibition at Museet for Samtidskunst in Roskilde, Denmark. I had the opportunity to exhibit there in 2005 as part of Get Real, an exhibition focused on real-time art (which was also shown at Kiasma in Helsinki, Finland). That exhibition also resulted in a book, for which I wrote the essay “Within the space of a moment”.


Shift became a sort of drone installation, with slow light/colour changes across the volumes, sometimes cut off by sharp white planes. The video documentation is an edited version showing some of the different scenes. Here is a slide show of still images.

(youtube link to the same video, if someone prefers that)

The software used is an expansion of the videoprojectiontool available here.

relief projection 2008

This Easter I got the opportunity to work in a large theatre space (BIT Teatergarasjen) to continue my work on relief projection (or masked projection).
I made 9 plywood boxes to use as my projection objects, and worked with two projectors, giving a total of 16 projection surfaces.
One of the aims for this session was to also work with sound (each object would double as a speaker), and to create depth in the placement of the objects.
I integrated my nodio system into the projection patch, which made it possible to create sequences of movement.
There are plenty of images from the session here.

spring sightings

nodio five-aside 2008, originally uploaded by hc gilje.

The last few months have been packed with lab work and a few public appearances.
I made a 5-channel nodio installation as part of a group show at Visningsrommet USF in Bergen, further developing the idea of a circular audiovisual composition.
A few more images can be found here.

HKmark1 was part of a program curated by Per Platou, shown at the FESTIVAL INTERNATIONAL DE FILM ET VIDEO DE CREATION in Beirut, Lebanon. The day after the screening the rest of the festival was cancelled due to the violence in Beirut.

The queen is the supreme power of the realm, the piece commissioned by ZKM and Musikfabrik in 2007, was performed at this year's Moers festival in Germany on May 9th. The Moers festival looks like any summer rock festival, with a camping ground, people in a good mood (for various reasons), a wide selection of temporary tattoos, hair extensions, falafels etc. What is different is that it is a jazz/improvisation festival (for instance, John Zorn was playing later the same evening), and the main venue is a huge circus tent. It was also a first for this festival to present a contemporary music ensemble like Musikfabrik.

lab jan 2007: nodio 3rd generation

nodio 2, originally uploaded by hc gilje.

In January 2007 I got my first opportunity to spend two intense weeks in a project space, enabling me to test out some new elements in the nodio system.

I set up 9 nodes (macminis+monitors) in a local network, and expanded my nodio composer tool to work with the 9-channel setup. A continuous discussion I have with myself is where to put the control: on the individual nodes, or on a master computer talking to the nodes. Usually I end up with a combination, where one of the nodes does slightly more work than the others. In this work period I managed to sync the nodes with an audio signal, making it easier to put more control on each node.
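The audio sync can be sketched as simple threshold detection on a shared signal (a hypothetical illustration; the function name and threshold value are my own, not taken from the actual patch): each node listens for an upward crossing and restarts its local sequencer there.

```python
def find_sync_pulses(samples, threshold=0.5):
    """Return indices where the signal crosses the threshold upward.
    Each crossing is a sync pulse a node can use to restart its
    local sequencer, keeping all nodes in step."""
    pulses, above = [], False
    for i, s in enumerate(samples):
        if s >= threshold and not above:
            pulses.append(i)
            above = True
        elif s < threshold:
            above = False
    return pulses

# A quiet signal with two short pulses in it.
signal = [0.0, 0.1, 0.9, 0.8, 0.0, 0.0, 1.0, 0.2]
pulses = find_sync_pulses(signal)
```

The appeal of an audio pulse over network messages is that every node hears the same signal at effectively the same time, so the nodes can run their own sequencers locally and still stay in step.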

This period resulted in a more refined way of moving elements between the sources:

I also played with using the soundtrack of video clips (instead of generating sound from the video, which is usually the case), and with moving clips between the different nodes:

In general I was able to improve the sequencer aspects of the system a lot, making more complex patterns and rhythms possible:

 

nodio composer, 2006

nodio composer, originally uploaded by hc gilje.

nodio composer is a composer/sequencer for realtime multichannel video and audio, developed in maxmsp and jitter.

The nodio composer system consists of 4 modules:
The client is installed on each node, and does realtime processing of image and sound.

The motor talks to the clients and coordinates them; it sets and saves the state of the whole system, and plays back sequences.

The composer is the gui for the motor: it allows the user to set and save the state of the system and to program and play sequences. The motor sends feedback to the composer about the current state of the system and the sequencer.

The simulator is a fully working simulation of the system. It contains the three clients, shows the three screens, and pans the sound according to which client generates the sound. It is a slight modification of the node clients, but is made to be easy to replace when the clients are updated. As with the node clients, the simulator communicates with the motor.
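The division of labour between the modules can be sketched roughly like this (a toy illustration with made-up parameter and file names, not the actual nodio protocol): the motor is the only module holding the full state, the clients just apply what they are told, and the composer displays the motor's feedback.

```python
class Motor:
    """Holds the state of the whole system, updates it from incoming
    messages, and echoes every change back as feedback (which the
    composer gui would display)."""
    def __init__(self, node_ids):
        self.state = {n: {"clip": None, "level": 0.0} for n in node_ids}
        self.feedback = []

    def set_param(self, node_id, param, value):
        # In the real system this change would also be sent on to
        # that node's client for realtime processing.
        self.state[node_id][param] = value
        self.feedback.append((node_id, param, value))

motor = Motor([1, 2, 3])
motor.set_param(2, "level", 0.8)
motor.set_param(2, "clip", "clouds.mov")
```

Keeping all state in one place is what makes the simulator mode possible: the motor does not care whether the modules on the other end are real clients on the network or simulated ones on the same machine.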

The system is intended to be operated in 4 different modes:
composer, standalone installation, performer, and composer offline.

composer:
each node has a client application, one of the nodes contains the motor, and the composer is on another computer.

standalone installation:
each node has a client application, one of the nodes contains the motor.

performer:
each node has a client application. The motor and the composer are on another computer.

composer offline:
When the network or individual nodes are not available, it is possible to run the composer and motor together with a simulator on the same computer.

Here is a video with documentation from one composition and a brief description of how it works:

 

drifter, installation 2006

drift_total, originally uploaded by hc gilje.

(Slideshow of more images from Drifter here)

Drifter is a 12-channel audiovisual installation: 12 nodes, each with a computer, flat screen and speakers, placed in a circle. The nodes are connected over a wireless network, but each node only relates to its neighbours: it knows when an image is coming and where to pass it on. Images travel clockwise across the network, and they leave traces. The image and traces are processed in realtime individually on each node, and a sound is generated from the video, based on a given frequency. There are 4 base frequencies distributed among the different nodes, creating chords.
Each node has the same set of rules for how to behave, but makes individual choices (using the dice analogy: all the nodes follow the same rules for what happens if they roll a 1 or a 6, but they throw their own dice, which gives different results on the different nodes).
There are also a few states or moods which change on a global level: the change happens on all the nodes simultaneously, switching between nervous, relaxed or more intense behaviour.
The overall result is an ever-changing surrounding audiovisual landscape.
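The dice analogy can be written out directly (a sketch with invented rule names and mood values; the real rules live in the max patch): every node shares the same rule table and the same global mood, but rolls its own die.

```python
import random

RULES = {1: "add_trace", 6: "pass_image_on"}  # invented rule names

MOOD_SPEED = {"nervous": 2.0, "relaxed": 0.5, "intense": 3.0}

def node_step(die, mood):
    """One update on one node: the rules and the global mood are
    shared by all nodes, but each node rolls its own die, so their
    behaviour diverges while staying within the same system."""
    roll = die.randint(1, 6)
    action = RULES.get(roll, "keep_playing")
    return action, MOOD_SPEED[mood]

# Two nodes, each with its own die, under one shared global mood.
node_a, node_b = random.Random(1), random.Random(2)
steps_a = [node_step(node_a, "relaxed") for _ in range(5)]
steps_b = [node_step(node_b, "relaxed") for _ in range(5)]
```

The shared mood keeps the whole circle coherent while the individual dice keep the nodes from ever behaving identically, which is where the ever-changing quality of the installation comes from.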

The first version of drifter was developed for my solo show at Trøndelag Senter for Samtidskunst in March 2006, and was then shown in April at Rom for kunst+arkitektur.

A documentation video from TSSK:

A video explaining the principles of the installation:

 

 

Dense, installation 2006

Dense 01, originally uploaded by hc gilje.

(Slideshow of more images from Dense here)

A double-sided video projection on six vertical strips of semi-transparent material at different depths in a black box space. One projection creates a downward movement and the other a movement from side to side, creating a video weave where the projections overlap. The audio is generated from the changes in the video: one part is a dry chirping sound which pans with the horizontal movement, the other is created by the downward movement and is a very loud, deep sound resonating in the space. Moving around in the space is like walking inside a video mixer; the perception of image and sound changes dramatically as you move inside the installation.

The installation was commissioned for the opening of the 2006 season of Black Box Teater in Oslo, and was developed during my residency at Tesla, Berlin in autumn 2005.

 

nodio 1st generation

nodio 1st generation, originally uploaded by hc gilje.

The series of experiments which I have called nodio (nodes of video and audio) started in the spring of 2005 when I got a short residency at BEK and an opportunity to show it at prøverommet BETA.

nodio is a networked multichannel audiovisual system, where each node in the system is a source of both video and audio. The nodes are linked over a wired or wireless (LAN/WLAN) network.

So far the nodio project has resulted in the installations dense and drifter, and the system was also used in the latest kreutzerkompani performance, irre.

My interest is to explore what happens when several audiovisual sources are combined, where perhaps the most interesting things happen in between the screens: the development of patterns and rhythms, and of audiovisual “powerchords”. I also want to look at the spatial aspect of having several sources in a physical space that you can move around in, where the image and sound change the space.

Each node can operate individually or as part of a bigger setup. One image can be split across the different nodes, or each node can have a separate image. Images can freely move between the different nodes, creating a distinct experience of movement in a physical space. Image transformations and fades can be triggered individually or globally.

Sound is created by image analysis of the current video being played. This creates a very tight relation between image and sound, and when images are moved around to the different nodes in a space, the sound moves with them.
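The principle can be sketched in a few lines (a simplified illustration, not the jitter implementation): analyse the frame, and use the result to shape a tone at the node's base frequency. Here mean brightness simply sets the amplitude, so a dark image is quiet and a bright image is loud.

```python
import math

SAMPLE_RATE = 44100

def frame_to_cycle(frame, freq):
    """One cycle of a sine tone at the node's base frequency, with
    the amplitude taken from the frame's mean brightness: a black
    frame is silent, a white frame plays at full volume."""
    pixels = [p for row in frame for p in row]
    amplitude = sum(pixels) / (len(pixels) * 255.0)
    n = int(SAMPLE_RATE / freq)  # samples in one cycle
    return [amplitude * math.sin(2 * math.pi * i / n) for i in range(n)]

# Two tiny 2x2 "frames" at a node's base frequency of 220 Hz.
white = [[255, 255], [255, 255]]
black = [[0, 0], [0, 0]]
loud = frame_to_cycle(white, 220.0)
silent = frame_to_cycle(black, 220.0)
```

Because the tone is derived from the frame itself, moving a clip to another node automatically moves its sound there too; no separate audio routing is needed.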

The software is made using maxmsp and jitter, and the hardware is a macmini for each node.

The following video shows some of the possibilities of the first generation of nodio, during residencies at BEK in Bergen (spring 2005) and Tesla in Berlin (autumn 2005).