Michael Snow: La Région Centrale

Michael Snow: La Région Centrale

Image taken from the Medienkunstnetz site, which has extensive information about the project and also a short video documentation.

Snow constructed a device for creating quite complex camera movements and placed it in a remote mountain area in Canada. The result was a three-hour film of the camera scanning this landscape.

Another quite good article from Medienkunstnetz relating cinematography to the landscape.

The Vasulkas

Vasulka at zkm, originally uploaded by hc gilje.

The Vasulkas have inspired me with their approach to working with technology in a playful manner, exploring the medium very much in a lab situation, as I am trying to do. Particularly Steina Vasulka's Machine Vision series and Woody Vasulka's Brotherhood series are interesting in relation to my current work.

The Vasulkas' website has tons of information on their own work and on other video artists of the last 30 years, and it also covers the busy period at The Kitchen in New York, which the Vasulkas ran from 1971-73.

I just saw the Mindframes exhibition at ZKM, with work from the Vasulkas, Gerald O'Grady, Hollis Frampton, Paul Sharits, James Blue, Tony Conrad and Peter Weibel, all involved with the media study department in Buffalo, New York, in the seventies.

I have had the opportunity to meet Steina several times through my work with 242.pilots, and we were also both in the Get Real exhibition in 2005, which ended up as a book and DVD (with contributions from Lev Manovich, Steven Dixon, Mogens Jacobsen, HC Gilje, Morten Søndergaard, Steina Vasulka, Pink Twins, Arijana Kajfes, Björn Norberg, Elin Wikström, Jacob Kirkegaard, Thor Magnusson, Michael Scherdin, Jack Burnham, Charlie Gere, Perttu Rastas and Andreas Brøgger).

You can read my essay from the book here (pdf).


David Rokeby: Machine for Taking Time

Another project I wish I had done:

A colour surveillance camera has been mounted outside the gallery on a computer controlled pan/tilt mechanism, allowing it to see most of the surrounding gardens. Every day since March 28, 2001, the system has been taking still images from 1079 pre-determined positions along a sweeping path around the garden.

[..] the computer software travels through this accumulating archive of images, wandering through time, but progressing very slowly and smoothly through the successive positions in the original path.

The software does four kinds of wandering. It sometimes moves along the path using images from a single day. Or it might dissolve sequentially from day to day as it progresses along the path. Alternatively it might dissolve from date to date randomly. Occasionally it will stop its movement along the path and show all the images taken from that position in rapid succession. The shifting of modes and the choices of dates are a function of a somewhat random process, and so the piece never repeats itself.

from David Rokeby's website
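The four modes of wandering lend themselves to a small sketch. The following Python is only my own rough reconstruction of how such a scheme could be organised (all names and step counts are made up); Rokeby's actual software also dissolves smoothly between images, which is left out here:

```python
import random

# archive[day][position] would hold the image recorded on a given day at one of
# the 1079 positions along the camera path.
NUM_POSITIONS = 1079

def wander(archive, day, position, steps=30):
    """Pick one of the four wandering modes and yield (day, position) indices."""
    mode = random.choice(["single_day", "sequential_days", "random_days", "hold_position"])
    if mode == "single_day":
        # move along the path using images from a single day
        for step in range(steps):
            yield day, (position + step) % NUM_POSITIONS
    elif mode == "sequential_days":
        # dissolve from day to day while progressing along the path
        for step in range(steps):
            yield (day + step) % len(archive), (position + step) % NUM_POSITIONS
    elif mode == "random_days":
        # dissolve between randomly chosen dates while progressing along the path
        for step in range(steps):
            yield random.randrange(len(archive)), (position + step) % NUM_POSITIONS
    else:
        # stop on the path and show every image taken at this position in succession
        for d in range(len(archive)):
            yield d, position
```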

Rokeby has a lot of video documentation available through YouTube.

Michael Naimark: Displacements

michael naimark displacements

This image was found in the archives of the Eyeteeth blog.

This is one of the projects I wish I had done, uniting the capture of a space with the projection back into the same space using motion (camera and projector rotating at the same speed).

The camera is mounted in the middle of the room on a turntable, recording the space and the actions in it (top image). Then the whole space is spray-painted white (middle image). Finally the recorded film/video is projected back into the space, with the projector mounted on the same turntable the camera was on (bottom image).

The first version was done with film in 1980-84, and it was then remade for digital video in 2005.

Naimark has written two very interesting papers relating to this work, and on his website you can also find two video clips from the installation, one from the film version and one from the digital video version.

lab jan 2007: masking projections

lab jan 2007, originally uploaded by hc gilje.

(Slideshow of more images from the projection sketches here)

The second focus of this lab session was video projection: masking the projected image to create several projection surfaces from a single projector, and relating the projection to physical shapes in the room, like my projection onto a sphere in Iball, or the masking of video to fit the gallery spaces in one of the sleepers installations:

The simplest way to do this is to work with a two-dimensional mask. I also wanted to try working with 3D masks: projecting a virtual 3D version of an object back onto the object itself. In this way I managed to cover four sides of a cube with two projectors. Another thing I did was to link the individual projection surfaces (from one projector) so that focus could move from one area to another, quite like how the nodes in nodio operate.
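Conceptually, the 2D masking is just a per-pixel multiplication of the projector output with a mask image (in Jitter this is done on matrices). As a minimal illustration, here is a numpy sketch of splitting one projector frame into two lit surfaces; the resolution and regions are arbitrary, and this is not the actual patch:

```python
import numpy as np

def apply_mask(frame, mask):
    """Multiply a projector frame by a 2D mask: 1 = project, 0 = black."""
    # frame: (H, W, 3) uint8 video frame, mask: (H, W) floats in [0, 1]
    return (frame.astype(np.float32) * mask[..., None]).astype(np.uint8)

# split one 1024x768 projector output into two rectangular projection surfaces
h, w = 768, 1024
mask = np.zeros((h, w), dtype=np.float32)
mask[100:400, 50:450] = 1.0    # surface A
mask[450:700, 550:950] = 1.0   # surface B

frame = np.full((h, w, 3), 255, dtype=np.uint8)  # plain white test frame
out = apply_mask(frame, mask)                    # only the two surfaces are lit
```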

Finally, I did some simple tests using the projector as a light beam.

 

lab jan 2007: nodio 3rd generation

nodio 2, originally uploaded by hc gilje.

In January 2007 I got the first opportunity to work for two intense weeks in a project space, enabling me to test out some new elements in the nodio system.

I set up nine nodes (Mac minis + monitors) on a local network, and expanded my nodio composer tool to work with the nine-channel setup. A continuous discussion I have with myself is where to put the control: on the individual nodes, or on a master computer talking to the nodes. Usually I end up with a combination, where one of the nodes does slightly more work than the others. During this work period I managed to sync the nodes with an audio signal, making it easier to put more of the control on each node.
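To make the master/node split concrete: the master only needs to send small state messages to each node, which then does the heavy lifting locally. The real system is built in Max/MSP (typically with OSC-style messages over the local network), so the following Python sketch, with made-up addresses and message format, is only an illustration of the idea:

```python
import json
import socket

# nine hypothetical nodes (Mac minis) on the local network
NODES = [("10.0.0.%d" % i, 9000) for i in range(101, 110)]

def send_state(node, state):
    """Send a small JSON-encoded state update to a single node over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(state).encode("utf-8"), node)
    sock.close()

# e.g. tell node 3 which clip to loop and how bright to be
send_state(NODES[2], {"clip": "clip_07.mov", "brightness": 0.4})
```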

This work period resulted in a more refined way of moving elements between the sources:

I also played with using the soundtracks of the video clips (instead of generating sound from the video, which is usually the case), and with moving clips between the different nodes:

In general I was able to improve the sequencer aspects of the system a lot, making more complex patterns and rhythms possible:
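To illustrate what I mean by the sequencer aspect (the actual sequencer lives inside the Max/MSP patches, so this is only a hypothetical sketch): a pattern can be thought of as a list of steps, each step naming which of the nine nodes should be triggered on that beat:

```python
import time

# each step of the pattern names the nodes to trigger on that beat
pattern = [
    {0, 3, 6},   # step 1
    {1, 4, 7},   # step 2
    {2, 5, 8},   # step 3
    {0, 4, 8},   # step 4: a diagonal across the 3x3 grid of monitors
]

def run_pattern(pattern, bpm=120, trigger=print):
    """Step through the pattern at the given tempo, triggering the listed nodes."""
    step_length = 60.0 / bpm
    for step, nodes in enumerate(pattern):
        for node in nodes:
            trigger(f"node {node}: flash on step {step}")
        time.sleep(step_length)

run_pattern(pattern)
```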

 

nodio composer, 2006

nodio composer, originally uploaded by hc gilje.

nodio composer is a composer/sequencer for realtime multichannel video and audio, developed in Max/MSP and Jitter.

The nodio composer system consists of four modules:

The client is installed on each node, and does realtime processing of image and sound.

The motor talks to the clients and coordinates them; it sets and saves the state of the whole system, and plays back sequences.

The composer is the GUI for the motor: it allows the user to set and save the state of the system and to program and play sequences. The motor sends feedback to the composer about the current state of the system and the sequencer.

The simulator is a fully working simulation of the system. It contains the three clients, shows the three screens, and pans the sound according to which client generates the sound. It is a slight modification of the node clients, but is made so it is easy to replace when the clients are updated. As with the node clients, the simulator communicates with the motor.
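To make the division of labour between the modules a bit more concrete, here is a hypothetical Python sketch of what the motor does conceptually; the real motor is a Max/MSP patch, and the class, method and parameter names below are my own:

```python
import json

class Motor:
    """Keeps the state of every client, saves/recalls presets, forwards changes."""

    def __init__(self, clients):
        self.clients = clients                     # {"node1": send_function, ...}
        self.state = {name: {} for name in clients}
        self.presets = {}

    def set_param(self, node, param, value):
        self.state[node][param] = value
        self.clients[node](param, value)           # forward the change to the client

    def save_preset(self, name):
        self.presets[name] = json.loads(json.dumps(self.state))  # deep copy

    def recall_preset(self, name):
        for node, params in self.presets[name].items():
            for param, value in params.items():
                self.set_param(node, param, value)

# usage sketch with two fake clients that just print what they receive
motor = Motor({"node1": lambda p, v: print("node1", p, v),
               "node2": lambda p, v: print("node2", p, v)})
motor.set_param("node1", "brightness", 0.8)
motor.save_preset("opening")
motor.recall_preset("opening")
```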

The system is intended to be operated in 4 different modes:
composer, standalone, performer and composer_offline.

composer:
each node has a client application, one of the nodes contains the motor, and the composer is on another computer.

standalone installation:
each node has a client application, and one of the nodes contains the motor.

performer:
each node has a client application. The motor and the composer are on another computer.

composer offline:
When the network or individual nodes are not available, it is possible to run the composer and motor together with a simulator on the same computer.
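As a summary of the four modes (this table-like dictionary is just my own way of restating the text above, not part of the actual software):

```python
# which module runs where in each mode
MODES = {
    #                   on every node   on one of the nodes   on a separate computer
    "composer":          (["client"],    ["motor"],            ["composer"]),
    "standalone":        (["client"],    ["motor"],            []),
    "performer":         (["client"],    [],                   ["motor", "composer"]),
    "composer offline":  ([],            [],                   ["motor", "composer", "simulator"]),
}

for mode, (every_node, one_node, other_computer) in MODES.items():
    print(f"{mode}: every node runs {every_node}, one node also runs {one_node}, "
          f"another computer runs {other_computer}")
```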

Here is a video with documentation from one composition and a brief description of how it works: