stereolux

For my performance and installation at Stereolux in Nantes, I answered a few questions about my work from Laurent Diouf, chief editor of MCD. The short version will be on the Stereolux website, but here is the long version:

01_ First, a few sentences about Radiant… about the meaning of this installation… and also about the use of laser and phosphorescent paint…

My work with Radiant started out with some thoughts on extinction, growth and decay, a fascination with how plants create food from light, and the material qualities of laser light and phosphorescent pigment. Laser light is more intense than sunlight, and phosphorescent pigments are natural minerals that capture light and slowly release it as a green glow (in Galileo's time they were called solar sponges).
Radiant is also very much about time and speed: the intense, quick drawings of the laser point versus the slow fading as the light is released. The interesting things happen in the layering of these drawings, where you can see traces of multiple pasts mixed with fresh drawings: different time scales (or Bergson's duration) coexist on the surface of the screen.
For the audience I think it also works on a different level when thinking about scale. It contains both a macro and a micro scale: you could be staring into the universe, or looking at a cell or at subatomic processes.

02_ Your works / installations are “conversations with spaces” (with light, projection, sound and motion)… is it the same way for/with Radiant ?

Normally my process is to start with the space where I will make a work, spend time exploring it, and use improvisation as a method: I bring my set of tools and experiment my way towards a path that creates an interesting amplification or transformation of the space.
With Radiant it is a bit different, since I started out making a big, flat, square light painting (the one at Stereolux will be 3.6 × 3.6 m) without particularly thinking about the space it would be presented in.
However, the installation is transformed by the spaces it is presented in, and it also has the power to transform and intensify those spaces. The settings have been quite different, from the rough concrete environment of Kraftwerk during Berlin Atonal to a circular floor projection with a custom-made arched wall at Kunsthall Grenland in Norway.
Also, the light of the white laser beam is quite intense, so the resulting shadows in the space are quite similar to those in my other light installations.

03_ Following the opening of the exhibition, you’ll do a performance. What about this live a/v ? Will you use some elements from the installation in/for your set ?

Radiant started out as an installation, and I always thought of it as a loop piece because of the constant layering of time creating new images, where the installation becomes a place or a state you walk into. In a live performance time is not circular, it has a direction, which I think changes the experience of the piece quite a lot. Also, a live set is more of a communal experience, while the installation is maybe better experienced alone or with a few people.
The raw material of the installation and the live set is more or less the same; the main difference is the structuring.
Also, the live set introduced the soundtrack, which is created in real time using the amplified mechanical sounds produced by the laser mirrors. The sound from the laser is amplified and played through the speakers, but also recorded and reappears as new sound layers (similar to the visual material) during the performance.
My exhibition at Stereolux will actually be the first time I try to use this sound setup for the installation as well.

04_ Generally speaking, how do you manage your live’ set ? What do you want to show, to give to the audience through a live A/V. ? Is it also a “space for co-working” with other artists ?

I guess I partly answered that in the previous question, but for me this performance is about creating a focused, intensified experience for the audience, in contrast to the installation, which is more of a meditative piece.
Almost all of my other live A/V performances are quite different from this one, as they are real-time free improvisations with musicians or other visual artists, where the process unfolding through the collaboration is the interesting part.
So Radiant Live is a very controlled piece in that sense, with a quite fixed structure but with room for variations.

05_ If you have some other projects…

I recently created a site-specific light installation for four interconnected rooms in a gallery space. The piece was called Red White Black and consisted of two rails of LED strip that followed the contours of the rooms and doorways. One pulse of white light moved in one direction, one pulse of red light moved in the other. Super simple in one way, but it created a very dynamic space of opening and closing, revealing and hiding; a space that expands, collapses, twists and turns in the light of the red and white pulses chasing along the walls, corners and door openings. Probably one of my favourite pieces 🙂

A very different work, and the one I have shown most in recent years, is Barents (mare incognitum). It is a single-channel video installation of the Barents Sea slowly turning around. It was filmed at the border between Norway and Russia with my custom-built camera pointing towards the North Pole. It is one of many works that came out of my involvement with the Dark Ecology project, a series of journeys and projects initiated by Sonic Acts and Hilde Methi in the Norwegian-Russian border area.
Another work that came out of the Dark Ecology project was my film rift, combining my love for the experimental filmmaker Len Lye with an interest in the deep time of plastic. It was part of the Vertical Cinema program: experimental 35 mm CinemaScope films in vertical format.

Finally I would like to mention speiling, which is the latest in a long series of projection spaces: solid coloured organic forms projected onto a highly reflective floor, creating a dynamic light space.

Right now I am working on two quite different projects: an installation for a stalactite cave in an old fortress to be presented in August, and a series of installations for next year where I give myself the challenge to work with light, sound and motion but in a normally lit space.

06_ Feel free to add or highlight anything you might think relevant.

This is my first solo show in France, although I showed my installation in transit X in Marseille as part of Chroniques Festival in 2017. I have also had a few screenings in various locations in France, including the Cinémathèque Française and the Centre Pompidou, due to my involvement with the video art publisher lowave, which released my DVD Cityscapes back in 2005.

I made a book in 2017 documenting many of the Conversations with Spaces projects.
It is available through motto distribution.


junction: light motion installation under King William Bridge for the Adelaide Festival of the Arts


Photos by Paul Armour. For larger versions (and a few more images), go to the junction album on Flickr.

I was commissioned to make a new installation as part of the blinc program for the Adelaide Festival of the Arts, which took place at the end of February / beginning of March 2015. Blinc, curated by Craig Morrison and Joel Cockrill, was a series of outdoor projection and light-based artworks, including works by Squidsoup and Bordos Artworks, among others.

I chose my location based on some images Joel sent me, and I was immediately drawn to the beautiful arches of the pedestrian walkway / bike path underneath the King William Bridge, situated in Elder Park in the center of Adelaide.

Elder Park, Adelaide

The underpass is about 40 m (130 ft) long and 9 m (30 ft) wide, with 9 arches. I wanted to work with the space in a similar way to what I did with trace at LIAF and TodaysArt: constructing a grid of LED lines that animates and activates the structures of the space, in this case mainly the arches.

The unusual thing about this project for me was that it was a site-specific project where I didn't have the possibility to go there myself. I selected the location based on photos and created the setup based on drawings and an idea of how I wanted it to look. I prepared a detailed manual for the construction of the installation, and the festival crew did a good job with the remote install.

This was obviously not an ideal situation, as for this type of work I am really interested in a more direct dialogue with the space, leaving room for improvisation and adjustments, and I quite enjoy the manual labour involved in building up installations. Also, not having been there myself makes the project less real somehow, especially since I like to emphasize the physical experience of my installations. That being said, I was extremely lucky to get in touch with Paul Armour, based in Adelaide, who did a great job documenting the installation under difficult conditions. It gave me a chance to experience my own work the way most people do: through images and video published online.

One thing I discovered watching the video documentation was the fantastic soundtrack created by the cars passing over the bridge, which I think worked really well with the motion of the light pulses.

multimorf 3: a composition for trombone and waterfall

I just finished a new collaboration with composer Knut Vaage, Multimorf 3, a composition for waterfall and trombone.

The image above shows a snapshot from each of the nine sections of the piece.

The trombone plays together with recordings of a waterfall and layered trombone overtones (also combined with live processing of the trombone).
The composition has an open form:
Vaage, together with sound designer Thorolf Thuestad, has created nine morphed backgrounds of waterfall recordings and trombone overtones. The trombone player, John Arild Suther, chooses how long each morph section should be played and in which order, as well as which parts of the composed score he wants to play within each section. This allows for very different performances based on the same material.

The video is conceived in a similar way to the morph backgrounds. The material is based on waterfall recordings and generated material, and each of the nine sections can be viewed as an individual motion painting. My intention was not to create something that followed the music directly, but to create a visual component that responds to the energy contained in a waterfall and reflects the individual qualities of the nine sections. Since the length and order of each section can vary from performance to performance, I wanted each section to focus on layers instead of creating linear trajectories: whether you see thirty seconds or four minutes of a section, you will get more or less the same visual content; it is the combination and length of the different sections that creates the linear dynamic of the piece.

The trombone player controls the playback of both the video and the sound backgrounds using MIDI foot pedals, making it relatively easy to tour with the piece without involving too many people.

Multimorf 3 premiered on August 12th at Baroniet in Rosendal (probably one of the most beautiful places in Norway). We played outside in the courtyard of the Manor from 1665 to an enthusiastic audience.

It will also be performed at the Bergen School of Architecture on September 1st (together with multimorf 2), and at Integra in Copenhagen on October 1st.

video projection tool v2.5 + cornerpin keystone app

This post is really an announcement of an updated version of my videoprojectiontool, plus a handy new cornerpin keystone application.
But first, some lines about another program, vvvv:

Over the last few months I have spent some time working with vvvv, which is a graphical programming language similar to max and pd, but with a few distinct differences.

First of all, it is Windows-only (it is based on DirectX instead of OpenGL), which is the main reason why I haven't used it before.
The vvvv developers' attitude is that it is better to make the program as good as possible on one platform rather than trying to make everybody happy.

Second, it has a clearer focus on 3D than max/jitter.
The main reason I have looked at vvvv is that the developers seem quite interested in making it a practical tool for real-life projects, making it easier for users to focus on content rather than solving programming problems all the time.
Especially for people like me, who work with projectors, multi-screen setups, mapping etc., it was a relief to see dedicated modules for multiscreen blending and cornerpin keystoning, and a projector module that makes it easier to combine virtual models with physical environments. Also, there are easy-to-find modules and examples for how to interface vvvv with different hardware.

I had an interesting exchange with one of the jitter developers on the cycling74 forum a little while ago, where I was complaining that such a useful thing as cornerpin keystoning is not available in jitter. I have attempted several times to make this in max/jitter without luck. It took the developer fifteen minutes to make it, but I would never have found that solution myself.
I guess my point is that I wish there were a set of useful abstractions and instructions, especially for working with 3D, instead of always being referred to the "red book" (the OpenGL reference book). Something like what auv-i was for working with video in earlier versions of max/jitter.

And now, to the announcements: the cornerpin keystone app + v2.5 of the videoprojectiontool.
I have made a small application, based on the above-mentioned exchange on the cycling74 forum, for very simple but effective cornerpin keystoning. So, instead of tweaking the projector settings to get the projection to fit your projection surface, you can now just click on the corners of your desired projection surface and the image will fit. The source code for the cornerpin keystoning is available on the cycling74 forums.
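
In case you are curious about the principle behind it (this is the general corner-pin math, not a description of the actual patch from the forum): clicking the four corners defines a projective transform (a homography) from the unit square of the source image to the target quadrilateral,

$$
\begin{pmatrix} x' \\ y' \\ w \end{pmatrix}
=
\begin{pmatrix}
h_{11} & h_{12} & h_{13}\\
h_{21} & h_{22} & h_{23}\\
h_{31} & h_{32} & 1
\end{pmatrix}
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix},
\qquad
(x, y) = \left( \frac{x'}{w},\ \frac{y'}{w} \right)
$$

The eight unknowns $h_{ij}$ are fixed by requiring the four source corners $(u,v) \in \{0,1\}^2$ to land on the four clicked screen points, which gives exactly eight equations.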

Videoprojectiontool v2.5 is now available, for Mac OS X and Windows!
The new features are corner-pin distortion of the individual layers, plus enhanced functionality for drawing directly on the projected output for advanced masking. Please watch the video tutorials to learn more.

If you are working with mapping image content onto physical objects or surfaces, there are two main approaches. The first is creating a virtual 3D model that matches the physical environment and then projecting that 3D model back onto the physical environment. This is not an easy task and involves a lot of calibration, as everything has to match exactly. For this I think vvvv is by far the best tool; check out their tutorial.
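
To give a rough idea of what "matching exactly" means mathematically (this is the standard pinhole model, not anything vvvv-specific): the projector is treated as an inverse camera, so a point $\mathbf{X}$ of the virtual model should land on the projector pixel

$$
\mathbf{x} \simeq K \, [\, R \mid \mathbf{t} \,] \, \mathbf{X}
$$

where $K$ describes the projector's internal parameters (focal length, lens shift) and $R$, $\mathbf{t}$ its orientation and position. Calibration amounts to estimating $K$, $R$ and $\mathbf{t}$ accurately enough that the rendered model lines up with the real surfaces.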

The second approach consists of positioning and masking individual layers in the physical environment. I think my videoprojectiontool is a good starting point for this approach. It involves no projector calibration, and you can get some great results in a very short time. By combining 3D placement of layers with advanced masking and a sequencer for storing presets and creating transitions, it has become quite a powerful tool.

Go to the resource page for the videoprojectiontool for links to the new applications and several new video tutorials related to the update.

Controlling an xbee network using an arduino


As promised a few days ago, here is the follow-up to the minimal arduino post. I share the arduino code used for controlling a znet 2.5 / xbee series 2 network, as well as the schematics for the controller itself.
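
To give an idea of the general shape of talking to a series 2 radio from an arduino, here is a minimal illustrative sketch (not the actual project code): it builds a Remote AT Command frame by hand and sends it over the hardware serial port to toggle a digital output on a remote node. The 64-bit destination address and the choice of pin D0 are placeholders, and it assumes the local xbee is a coordinator in API mode without escaping (AP=1) at 9600 baud.

```cpp
// Minimal illustrative sketch: send a Remote AT command ("D0") to an
// XBee series 2 / ZNet 2.5 node, switching its DIO0 pin high or low.
// Assumes the local radio is a coordinator in API mode (AP=1) on the
// hardware serial port at 9600 baud. The addresses below are placeholders.

const uint8_t DEST_ADDR64[8] = {0x00, 0x13, 0xA2, 0x00, 0x40, 0x00, 0x00, 0x01}; // hypothetical
const uint8_t DEST_ADDR16[2] = {0xFF, 0xFE};  // 0xFFFE = 16-bit address unknown

void sendRemoteAT(char c1, char c2, uint8_t param) {
  uint8_t frame[16];
  uint8_t i = 0;
  frame[i++] = 0x17;                     // frame type: Remote AT Command Request
  frame[i++] = 0x01;                     // frame ID (non-zero to get a response)
  for (uint8_t j = 0; j < 8; j++) frame[i++] = DEST_ADDR64[j];
  for (uint8_t j = 0; j < 2; j++) frame[i++] = DEST_ADDR16[j];
  frame[i++] = 0x02;                     // options: apply changes immediately
  frame[i++] = c1;                       // AT command, first character
  frame[i++] = c2;                       // AT command, second character
  frame[i++] = param;                    // parameter value

  // Checksum: 0xFF minus the low byte of the sum of all frame-data bytes.
  uint8_t sum = 0;
  for (uint8_t j = 0; j < sizeof(frame); j++) sum += frame[j];

  Serial.write(0x7E);                    // start delimiter
  Serial.write((uint8_t)0x00);           // length MSB
  Serial.write((uint8_t)sizeof(frame));  // length LSB (16 bytes)
  Serial.write(frame, sizeof(frame));
  Serial.write(0xFF - sum);              // checksum
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  sendRemoteAT('D', '0', 0x05);          // DIO0 = digital output high
  delay(1000);
  sendRemoteAT('D', '0', 0x04);          // DIO0 = digital output low
  delay(1000);
}
```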