Michael Snow: La Région Centrale

Michael Snow La Région Centrale

Image taken from the Medienkunstnetz site, which has extensive information about the project and also a short video documentation.

Snow constructed a device for creating quite complex camera movements and placed it in a remote mountain area in Canada. The result was a 3-hour film of the camera scanning this landscape.

Medienkunstnetz also has another good article relating cinematography to landscape.

The Vasulkas

Vasulka at zkm, originally uploaded by hc gilje.

The Vasulkas have inspired me with their playful approach to working with technology, very much in a lab situation like the one I am trying to set up, exploring the medium. Particularly Steina Vasulka's Machine Vision series and Woody Vasulka's Brotherhood series are interesting in relation to my current work.

The Vasulkas' website has tons of information on their own work and on other video artists of the last 30 years, and also covers the busy period at The Kitchen in New York, which the Vasulkas ran from 1971 to 1973.

I just saw the Mindframes exhibition at ZKM, with work from the Vasulkas, Gerald O'Grady, Hollis Frampton, Paul Sharits, James Blue, Tony Conrad and Peter Weibel, all involved with the media study department in Buffalo, New York, in the seventies.

I have had the opportunity to meet Steina several times through my work with 242.pilots, and we were also both in the exhibition Get Real in 2005, which also ended up as a book and DVD (with contributions from Lev Manovich, Steven Dixon, Mogens Jacobsen, HC Gilje, Morten Søndergaard, Steina Vasulka, Pink Twins, Arijana Kajfes, Björn Norberg, Elin Wikström, Jacob Kirkegaard, Thor Magnusson, Michael Scherdin, Jack Burnham, Charlie Gere, Perttu Rastas and Andreas Brøgger).

You can read my essay from the book here (pdf).


David Rokeby: Machine for Taking Time

Another project I wish I had done:

“A colour surveillance camera has been mounted outside the gallery on a computer controlled pan/tilt mechanism, allowing it to see most of the surrounding gardens. Every day since March 28, 2001, the system has been taking still images from 1079 pre-determined positions along a sweeping path around the garden.

[..] the computer software travels through this accumulating archive of images, wandering through time, but progressing very slowly and smoothly through the successive positions in the original path.

The software does four kinds of wandering. It sometimes moves along the path using images from a single day. Or it might dissolve sequentially from day to day as it progresses along the path. Alternatively it might dissolve from date to date randomly. Occasionally it will stop its movement along the path and show all the images taken from that position in rapid succession. The shifting of modes and the choices of dates is a function of a somewhat random process, and so the piece never repeats itself.”

from David Rokeby's website
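Purely as an illustration of that kind of logic (this is not Rokeby's code; the archive layout, probabilities and mode names are my own assumptions), a wandering process like the one he describes could be sketched along these lines:

```python
import random

NUM_POSITIONS = 1079  # the pre-determined positions along the path

def wander(archive, steps=1000):
    """Yield (day, position) pairs, drifting through an image archive
    indexed as archive[day][position], switching modes at random."""
    days = sorted(archive.keys())
    day = random.choice(days)
    pos = random.randrange(NUM_POSITIONS)
    mode = "single_day"
    for _ in range(steps):
        if random.random() < 0.01:  # occasionally switch wandering mode
            mode = random.choice(
                ["single_day", "sequential_days", "random_days", "hold_position"])
        if mode == "single_day":         # move along the path within one day
            pos = (pos + 1) % NUM_POSITIONS
        elif mode == "sequential_days":  # dissolve day to day while moving along the path
            pos = (pos + 1) % NUM_POSITIONS
            day = days[min(days.index(day) + 1, len(days) - 1)]
        elif mode == "random_days":      # dissolve between randomly chosen dates
            pos = (pos + 1) % NUM_POSITIONS
            day = random.choice(days)
        else:                            # stop on the path, flip through the dates
            day = random.choice(days)
        yield day, pos
```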

Rokeby has a lot of video documentation available through YouTube.

Michael Naimark: Displacements

michael naimark displacements

This image was found in the archives of the eyeteeth blog.

This is one of the projects I wish I had done, uniting the capturing of a space and the projection back into the same space using motion (camera and projector rotating at the same speed).

The camera is mounted in the middle of the room on a turntable, recording the space and the actions in it (top image). Then the whole space is spray-painted white (middle image). Finally, the recorded film/video is projected back into the space, with the projector mounted on the same turntable the camera was on (bottom image).

The first version was made on film in 1980-84, and it was then remade for digital video in 2005.

Naimark has written two very interesting papers relating to this work, and you can also find two video clips from the installation on his website, one from the film version and one from the digital video version.

lab jan 2007: masking projections

lab jan 2007, originally uploaded by hc gilje.

(Slideshow of more images from the projection sketches here)

The second focus of this lab session was to work with video projections and mask them to create several projection surfaces from one projector, and to be able to relate to physical shapes in the room, like my projection onto a sphere in Iball, or the masking of video to fit the gallery spaces in one of the sleepers installations:

The simplest way to do this is to work with a 2-dimensional mask. I also wanted to try working with 3D masks: projecting a virtual 3D version of an object back onto itself. In this way I managed to cover four sides of a cube with two projectors. Another thing I did was to link the individual projection surfaces (from one projector) so that focus could move from one area to another, much like how the nodes in nodio operate.
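The 2D masking principle is easy to describe outside of the actual Max/MSP/Jitter patch: the outgoing frame is simply multiplied with a black-and-white matte before it reaches the projector. A minimal sketch in Python/NumPy (the resolution and mask shapes are placeholders, not the values I used):

```python
import numpy as np

W, H = 1024, 768                       # projector resolution (placeholder)

# Build a matte with two rectangular "projection surfaces" from one projector.
mask = np.zeros((H, W, 1), dtype=np.float32)
mask[100:400, 80:380] = 1.0            # surface A
mask[300:700, 600:950] = 1.0           # surface B

def masked_frame(frame: np.ndarray) -> np.ndarray:
    """Multiply a video frame (H x W x 3, values 0-255) with the matte,
    so everything outside the two surfaces stays black."""
    return (frame.astype(np.float32) * mask).astype(np.uint8)

# Example: a full white frame only lights up the two masked areas.
out = masked_frame(np.full((H, W, 3), 255, dtype=np.uint8))
```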

Finally, I did some simple tests using the projector as a light beam.

 

lab jan 2007: nodio 3rd generation

nodio 2, originally uploaded by hc gilje.

In January 2007 I got the first opportunity to spend two intense weeks working in a project space, enabling me to test out some new elements in the nodio system.

I set up 9 nodes (Mac minis + monitors) in a local network and expanded my nodio composer tool to work with the 9-channel setup. A continuous discussion I have with myself is where to put the control: on the individual nodes or on a master computer talking to the nodes. Usually I end up with a combination, where one of the nodes does slightly more work than the others. During this work period I managed to sync the nodes with an audio signal, making it easier to put more control on each node.
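To illustrate the master/node split (the actual system consists of Max/MSP/Jitter patches talking over the local network; the port, addresses and message format below are made up for the sketch):

```python
import socket, json, time

PORT = 9000                                          # hypothetical port
NODES = [f"192.168.1.{10 + i}" for i in range(9)]    # hypothetical node addresses

def broadcast_state(state: dict):
    """The 'master' node sends a small global state message to every node;
    each node then makes its own local decisions based on it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = json.dumps(state).encode()
    for ip in NODES:
        sock.sendto(payload, (ip, PORT))

# e.g. once per bar, push the global tempo/mood and let the nodes do the rest
broadcast_state({"beat": time.time(), "mood": "relaxed", "tempo": 120})
```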

This period resulted in a more refined way of moving elements between the sources:

I also played with using the soundtrack of the video clips (instead of generating sound from the video, which is usually the case), and with moving clips between the different nodes:

In general I was able to improve the sequencer aspects of the system a lot, making more complex patterns and rhythms possible:

 

nodio composer, 2006

nodio composer, originally uploaded by hc gilje.

nodio composer is a composer/sequencer for real-time multichannel video and audio, developed in Max/MSP and Jitter.

The nodio composer system consists of 4 modules:
The client is installed on each node and does real-time processing of image and sound.

The motor talks to the clients and coordinates them; it sets and saves the state of the whole system and plays back sequences.

The composer is the GUI for the motor: it allows the user to set and save the state of the system and to program and play sequences. The motor sends feedback to the composer about the current state of the system and the sequencer.

The simulator is a fully working simulation of the system. It contains the three clients, shows the three screens and pans the sound according to which client generates the sound. It is a slight modification of the node clients, but is made to be easy to replace with updates of the clients. As with the node clients, the simulator communicates with the motor.
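A rough sketch of how these modules relate, written in Python for clarity (the real modules are Max/MSP/Jitter patches; the class names and messages here are invented for illustration):

```python
class Client:
    """Runs on each node: real-time processing of image and sound."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = {}
    def apply(self, message):                # a state change pushed by the motor
        self.state.update(message)

class Motor:
    """Talks to the clients, sets and saves the state of the whole system,
    and plays back sequences of saved states."""
    def __init__(self, clients):
        self.clients = clients
        self.saved_states = {}
    def set_state(self, message):
        for c in self.clients:
            c.apply(message)
    def save(self, name):
        self.saved_states[name] = [dict(c.state) for c in self.clients]
    def play_sequence(self, names):
        for n in names:                      # step through saved states as a sequence
            for c, s in zip(self.clients, self.saved_states[n]):
                c.state = dict(s)

# The composer would be a GUI in front of the motor; the simulator stands in
# for the clients when the real nodes are not available.
motor = Motor([Client(i) for i in range(3)])
motor.set_state({"brightness": 0.8})
motor.save("scene1")
```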

The system is intended to be operated in 4 different modes:
composer, standalone, performer and composer_offline.

composer:
each node has a client application, one of the nodes contains the motor, and the composer is on another computer.

standalone installation:
each node has a client application, and one of the nodes contains the motor.

performer:
each node has a client application. The motor and the composer are on another computer.

composer offline:
when the network or individual nodes are not available, it is possible to run the composer and motor together with a simulator on the same computer.
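Summarised as a hypothetical configuration table (module placement only, matching the descriptions above; this is not an actual config file from the system):

```python
# Where each module runs in the four modes.
MODES = {
    "composer":         {"every node": ["client"], "one node also": ["motor"],
                         "separate computer": ["composer"]},
    "standalone":       {"every node": ["client"], "one node also": ["motor"],
                         "separate computer": []},
    "performer":        {"every node": ["client"], "one node also": [],
                         "separate computer": ["motor", "composer"]},
    "composer_offline": {"every node": [], "one node also": [],
                         "separate computer": ["composer", "motor", "simulator"]},
}
```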

Here is a video with documentation from one composition and a brief description of how it works:

 

drifter, installation 2006

drift_total, originally uploaded by hc gilje.

(Slideshow of more images from Drifter here)

Drifter is a 12-channel audiovisual installation: 12 nodes, each with a computer, flatscreen and speakers, are placed in a circle. The nodes are connected over a wireless network, but each node only relates to its neighbour: it knows when an image is coming and knows where to pass it on. Images travel clockwise across the network, and they leave traces. The image and traces are processed in realtime individually on each node, and a sound is generated from the video, based on a given frequency. There are 4 base frequencies for the sound, distributed among the different nodes, creating chords.
Each node has the same set of rules for how to behave, but they make individual choices (using the dice analogy, all the nodes follow the same rules for what happens if they get a 1 or a 6, but they throw their own dice, which gives different results on the different nodes).
There are also a few states or moods which change on a global level: the change happens on all the nodes simultaneously, switching between nervous, relaxed or more intense behaviour.
The overall result is an ever-changing, surrounding audiovisual landscape.
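A minimal sketch of that rule set (the installation itself runs in Max/MSP/Jitter; the frequencies, dice thresholds and mood names below are only illustrative):

```python
import random

NUM_NODES = 12
BASE_FREQUENCIES = [110, 165, 220, 275]          # 4 base frequencies spread around the circle

class Node:
    def __init__(self, index):
        self.index = index
        self.freq = BASE_FREQUENCIES[index % 4]  # together the nodes form chords
        self.has_image = False
    def neighbour(self):
        return (self.index + 1) % NUM_NODES      # images travel clockwise
    def step(self, mood):
        """Same rules on every node, but each node throws its own dice."""
        dice = random.randint(1, 6)
        if self.has_image and dice >= (3 if mood == "intense" else 5):
            self.has_image = False
            return self.neighbour()              # pass the image on to the next node
        return None

nodes = [Node(i) for i in range(NUM_NODES)]
nodes[0].has_image = True
mood = "relaxed"                                 # changes globally, on all nodes at once
for _ in range(20):
    for n in nodes:
        target = n.step(mood)
        if target is not None:
            nodes[target].has_image = True
```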

The first version of drifter was developed for my solo show at Trøndelag Senter for Samtidskunst in March 2006, and then in April at Rom for kunst+arkitektur.

A documentation video from TSSK:

A video explaining the principles of the installation:

 

 

Dense, installation 2006

Dense 01, originally uploaded by hc gilje.

(Slideshow of more images from Dense here)

A double-sided video projection on six vertical strips of semi-transparent material at different depths in a black box space. One projection creates downward movement and the other a movement from side to side, thus creating a video weave on the projection surface where the projections overlap. The audio is generated by the changes in the video: one part is a dry chirping sound which pans with the horizontal movement of the video, the other is created by the downward movement of the other video, a very loud, deep sound resonating in the space. Moving around in the space is like walking inside a video mixer: the perception of image and sound changes dramatically as you move inside the installation.

The installation was commissioned for the opening of the 2006 season of Black Box Teater in Oslo, and was developed during my residency at Tesla, Berlin, in autumn 2005.

 

nodio 1st generation

nodio 1st generation, originally uploaded by hc gilje.

The series of experiments which I have called nodio (nodes of video and audio) started in the spring of 2005 when I got a short residency at BEK and an opportunity to show it at prøverommet BETA.

nodio is a networked multichannel audiovisual system, where each node in the system is a source both for video and audio. The nodes are linked either by a LAN or WLAN network.

So far the nodio project has resulted in the installations dense and drift, and the system was also used in the most recent kreutzerkompani performance, irre.

My interest is to explore what happens when combining several audiovisual sources, where maybe the most interesting things happen in between the screens: the development of patterns and rhythms, and of audiovisual “powerchords”; but also to look at the spatial aspect of having several sources in a physical space that you can move around in, and where the image and sound change the space.

Each node can operate individually or as part of a bigger setup. One image can be split up across the different nodes, or each node can have a separate image. Images can move freely between the different nodes, creating a distinct experience of movement in a physical space. Image transformations and fades can be triggered individually or globally.

Sound is created by image analysis of the current video being played. This creates a very tight relation between image and sound, and when images are moved around between the different nodes in a space, the sound moves with them.
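As an illustration of the principle (the real analysis and synthesis happen in the Max/MSP/Jitter patches; the brightness-to-amplitude mapping below is just an assumption for the sketch):

```python
import numpy as np

SAMPLE_RATE = 44100

def frame_to_sound(frame: np.ndarray, base_freq: float, duration: float = 0.04) -> np.ndarray:
    """Map a video frame (H x W x 3, values 0-255) to a short audio buffer:
    brightness controls amplitude, so the sound follows the image."""
    brightness = frame.mean() / 255.0                     # simple image analysis
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return brightness * np.sin(2 * np.pi * base_freq * t)

# Example: a brighter frame produces a louder tone at the node's base frequency.
dark   = frame_to_sound(np.zeros((240, 320, 3), dtype=np.uint8), base_freq=220)
bright = frame_to_sound(np.full((240, 320, 3), 200, dtype=np.uint8), base_freq=220)
```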

The software is made using Max/MSP and Jitter, and the hardware is a Mac mini for each node.

The following video shows some of the possibilities of the first generation of nodio, developed during residencies at BEK in Bergen (spring 2005) and Tesla in Berlin (autumn 2005).