Iain Mott is a sound artist and a lecturer (professor adjunto) in the area of voice and performance in the Departamento de Artes Cênicas (theatre arts), Universidade de Brasília. His sound installations are characterised by high levels of audience participation and novel approaches to interactivity. He has exhibited widely in Australia and at shows including the Ars Electronica Festival in Linz, Emoção Art.ficial in São Paulo and the Dashanzi International Art Festival and Multimedia Art Asia Pacific (MAAP) in Beijing. His most recent installation with Simone Reis, O Espelho, was exhibited at the Centro Cultural Banco do Brasil (CCBB) in Brasília in the second half of 2012. Iain has received numerous awards and grants and has successfully managed innovative projects for almost 20 years. His GPS-based project Sound Mapping was awarded an Honorary Mention in the 1998 Prix Ars Electronica. In 2005 he was awarded an Australia China Council Arts Fellowship to work with the Beijing arts company the Long March Project. His work Zhong Shuo was created as part of the fellowship in collaboration with Chinese artists and was awarded third prize in the UNESCO Digital Art Awards. The project was also selected by MAAP for two further installations in Shanghai and Brisbane in 2006. Iain was artist in residence at the CSIRO Mathematical and Information Sciences in Canberra for 12 months in 1999/2000. The notion of collaboration between artist and audience has ongoing importance in Iain's work. His PhD from the University of Wollongong was supervised by Greg Schiemer and is entitled Sound Installation and Self-listening.
A new version of the software Mosca will shortly be released, implementing work by Thibaud Keller. The release will bring support for new ambisonic libraries as well as VBAP, OSC support and integration with OSSIA-score, an improved GUI, support for higher-order ambisonic signals, banks of RIRs selectable on a per-source basis and many other improvements. For details on the upcoming release, see the conference paper by Iain Mott and Thibaud Keller: "Three-dimensional sound design with Mosca". See also https://escuta.org/mosca.
Tutorial on using the GUI interface of the Mosca quark for SuperCollider. Please listen with headphones and please view in full-screen mode.
Bash shell script and other settings to download and display the last photo taken on an Android phone. The script is run on a computer and communicates with an Android phone via WiFi. Tested on Linux. Not particularly efficient: because security restrictions on Android prevent running commands on the remote device via SSH, it is necessary to download all photos in a designated directory in each cycle (if anyone knows how to get around this, please leave a comment below!). Requires an SSH server app to be running on the Android and for the Android to be configured as a WiFi hub. It also requires the Android to have a copy of an RSA or DSA SSH public key. It is best to configure the phone to take low-resolution photos, and the person taking photos will need to periodically delete all the photos in the selected directory to keep the downloads fast. Quite quick to do.
Used for a theatre performance of the play "As Três Patetas em Chamas" (adaptation of Chekhov's "Three Sisters") at the Departamento de Artes Cênicas, Universidade de Brasília, July, 2017, directed by Simone Reis. Images taken by the actors were projected during the performance.
#!/bin/bash
# Mobile phone data - edit these to match your phone and SSH Server settings
dst=username@192.168.43.1     # SSH user and WiFi hotspot address of the phone
port=2222                     # port allocated in the SSH Server app
remote_dir=/sdcard/DCIM/Camera
# local directory on computer where photos are to be downloaded to
local_dir=/home/username/projects/3patetas
cd "$local_dir" || exit 1
# create a start up image of your choice in the directory where you will store images
# in my case I created a black image called black.jpg
cp black.jpg image.jpg
# keep displaying the most recent image with the utility "feh"
feh -F --reload 1 image.jpg &
feh_pid=$!
# fetch images from the phone each cycle (should probably be less frequent)
# and name the most recent photo "image.jpg"
while true; do
    scp -P $port -oHostKeyAlgorithms=+ssh-dss "$dst:$remote_dir/*" "$local_dir"
    [ -f "2" ] && rm 2 # remove strange file downloaded from mobile
    newest=$(ls -t | grep -v '^image\.jpg$' | head -1)
    [ -n "$newest" ] && cp "$newest" image.jpg
    # In the following line -t is for timeout (it also sets the polling
    # interval, here 2 seconds) and -N for just 1 character
    read -t 2 -N 1 input
    if [[ $input = "q" ]] || [[ $input = "Q" ]]; then
        echo    # so that the prompt reappears on a new line
        kill $feh_pid
        break
    fi
done
How to use:
1. On the computer, make sure you have SSH and "feh" installed (and anything else required in the script not installed by default).
2. On the Android, install an SSH server app. I chose "SSH Server" and it works well.
3. On the computer, create an RSA key pair and email or transfer a copy of the public key to your phone.
4. On the phone, save the RSA public key to a known location and also discover to which directory the phone saves its photos.
5. In SSH Server, create a new server, give it a name and allocate a port number (or note the port number automatically allocated).
6. Click the "user" tab in SSH Server and create a new user, giving the user a name.
7. Deselect "enable password" and select "enable public key" then browse to find the key previously stored in step 4.
8. Leave the SSH Server app, saving your settings.
9. Still on the Android, turn WiFi off and set up your phone as an active WiFi router, WiFi Hotspot or whatever.
10. On your computer, connect to the Android's WiFi.
11. Edit the script changing the remote_dir, port, username, local download directory and anything else that needs changing.
Also edit the name of your start up image, if not "black.jpg" (create this image too, and place it in the local download directory).
12. Mark the script as executable and run it.
13. Use "q" to quit the script.
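For step 3, the key pair can be created on the computer with ssh-keygen. The file name below is just an example, not prescribed by the SSH Server app (any name works, provided the matching ".pub" file is the one transferred to the phone):

```shell
# Create an RSA key pair for the phone connection (path is an example).
# -N "" gives an empty passphrase so the script can connect unattended.
keydir="$HOME/.ssh"
mkdir -p "$keydir" && chmod 700 "$keydir"
ssh-keygen -t rsa -b 2048 -N "" -q -f "$keydir/android_key"
# android_key.pub is the file to copy to the phone in step 4
```

If a non-default file name like this is used, add "-i ~/.ssh/android_key" to the scp options in the script so that the right identity is offered to the phone.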
Please listen with high quality headphones and ensure that the left and right channels are correctly aligned.
This video documents Botanica with unedited footage made from the perspective of the participant. The camera follows the participant's head movements and the sounds heard in the headphones move correspondingly. As the participant walks, he/she passes through various dynamic sound fields mapped to the garden.
Recordings used in this demonstration of Botanica made by Iain Mott include: sounds of the cerrado environments of central Brazil, aquatic sounds from the Melbourne Aquarium and Port Phillip Bay in Australia and of a Buddhist temple in Chongqing, China. The sounds of geese and of a choir were recorded by John Leonard and the sounds of satellite transmissions were recorded by Roy Welch and Don Woodward. These recordings are used with kind permission. Electro-acoustic music in the video is by Iain Mott.
This audio demonstration of Mosca involves the spatialisation of mono, stereo and B-format material. As well as using music and B-format recordings made by Iain Mott, it also contains three B-format recordings by John Leonard of geese, a Chinook helicopter and of the Stanbrook Abbey Choir. These recordings by John are presented here with kind permission. Other recorded sounds include those of monks in the Luohan Temple in Chongqing, China, frogs in São Jorge, Goiás in Brazil, a matchbox and an original recording of Sputnik 1 made by Roy Welch, reproduced with Roy's permission.
Please listen with headphones and ensure that the left and right channels are positioned correctly.
Use the three scripts contained in the zip file below in "Download attachments" to batch convert a directory of B-format audio to binaural and UHJ stereo. Requires that SuperCollider is installed with the standard plugins and that the "Ctk" quark is enabled. The main shell script also encodes mp3 versions of the binaural and UHJ files and if this feature is used (not commented out), the system will require that "lame" is installed. The SuperCollider script uses the ATK and is adapted directly from the SynthDef and NRT examples for ATK.
Unzip the scripts in a directory, edit the paths to match your installation and distribution of Linux in the file renderbinauralUHJ.sh, make this file executable and run in the directory containing the b-format files to perform the conversions.
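As a rough sketch of the optional mp3 stage only (the actual renderbinauralUHJ.sh in the zip differs, and the "*-binaural.wav" file-name pattern here is an assumption), a lame pass over the rendered files might look like this:

```shell
# Hypothetical helper: encode an mp3 copy of each rendered binaural file.
# Requires "lame"; the *-binaural.wav naming is an assumption.
encode_mp3s() {
    for f in *-binaural.wav; do
        [ -e "$f" ] || continue          # no matching files: do nothing
        lame --preset standard "$f" "${f%.wav}.mp3"
    done
}
```

Run encode_mp3s in the directory containing the rendered files; a second loop over the UHJ files would handle those versions the same way.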
The following procedure shows how to make B-format impulse responses (IRs) with the Linux software Aliki by Fons Adriaensen. A detailed user manual is available for Aliki; the guide presented here on escuta.org, however, is intended to show how to produce IRs without the need to run the software in the field, enabling the use of portable audio recorders such as the Tascam DR-680. The procedure was arrived at through email correspondence with Fons. His utility "bform2ald" is included here with permission.
Field Equipment used:
- Zoom H4n - for sine sweep playback
- Core Sound Tetramic, stand and cabling
- Tascam DR-680 multi-channel recorder
- Yorkville YSM1p powered audio monitor with cabling to play Zoom H4n output
- 12V sealed lead acid battery and recharging unit
- Power inverter to suit monitor
1. Launch Aliki in the directory in which you wish to create and store your "session" files and sub-directories, select the "Sweep" window and create a sweep file with these or other values:
- rate: 48000
- fade in: 0.1
- start freq: 20
- Sweep time: 10
- End freq: 20e3
- Fade out: 0.03
2. Select "Load" to load the sweep into Aliki and perform an export as a 24bit wav file or file type of your choosing.
3. Import the "*-F.wav" export into Ardour or another sound editor and insert an 800Hz blip or other audio marker 5 seconds before the start of the sweep. Insert some silence before the blip as some players (the Zoom H4n for example) may miss the initial milliseconds of a file on playback. Export as a 24-bit, 48kHz stereo file since the Zoom doesn't accept mono files.
4. Import file into Zoom H4n recorder for playback.
5. In the field, connect the line out of the Zoom H4n to the Yorkville YSM1p and play the file, recording with the Tetramic and Tascam DR-680. In my first test I recorded with the meter reading at around -16dB. I could have given the amp more gain, but the speaker casing was beginning to buzz with the low frequencies.
6. The Tascam creates 4 mono files. Use a script to combine them into a single A-format file, then use Tetrafile to convert to B-format with the mic's calibration data (with the "def" setting).
7. Install and use the utility bform2ald (see "Download attachments" below) to convert the B-format capture to Aliki's native "ald" file format.
8. Load the "ald" sweep capture into Aliki. Enter into edit mode and right-click to place a marker at the beginning of the blip. Use the logarithmic display to make the positioning easier. Once positioned, left-click "Time ref" to zero the location of the blip, then slide the marker to the 5 second mark and again left-click "Time ref" to zero the location of the start of the capture.
9. Right-click a second time a little to the right of the blue start marker. This will create a second olive coloured marker, marking the point at which a raised cosine fade-in starting at the blue marker will reach unity gain. When positioned, left-click "Trim start". Zoom out and drag the two markers to the end of the capture in order to perform a fade out in the same way with "Trim end". Use the log view to aid with this process.
10. Save this trimmed capture in the edited directory with "Save section".
11. Select "Cancel" and then "Load" to reload the freshly trimmed capture in the edited directory, then select "Convol". In this window, select the original sweep file used to create the capture in the "Sweep" dialogue. Enter "0" in the "Start time" field and in the "End time" field enter a number in seconds that represents the expected reverberation time plus two or three more seconds. Finally, select apply to perform the deconvolution, then perform a "Save section" to save the complete IR in the "impresp" directory.
12. Select "Cancel" and "Load" to load the recently created impulse in the "impresp" directory, then enter edit mode. The impulse may not be visible so use the zoom tools and in Log view, identify the first peak in the IR which should appear shortly after 0 seconds. This peak should represent the direct sound. While we may decide not to keep this peak, we will use it now to normalise the IR so that a 0 dB post fader aux send to the convolver will reproduce the correct ratio of direct sound to reverberation when using "tail IRs" or IRs without the direct impulse (see 13 below). To normalise, right-click to position the blue marker on the peak then left-click "Time-ref" to zero the very start of the direct impulse and shift-click "Gain / Norm".
13. The complete IR created above in step 12, containing the impulse of the direct signal as well as those of the first reflections and of the diffuse tail, may be convolved with an anechoic source to position that source in the sound field. If used in this way, the "dry" signal of the source should not be mixed with the "wet" or convolved signal and there will be no control over the degree of reverberation. If however the first 10msec of the IR are silenced (using the blue and olive markers and "Trim start" in Aliki to fade in from silence just before 10msec, for example), the anechoic signal may be positioned in the sound field by including the dry signal in the mix (panned by ambisonic means to a position corresponding to that of the original source in the IR) and varying the gain on the "wet" or convolved signal to adjust the level of reverberation and reinforce the apparent position of the virtual source through the first reflections encoded in the IR. Another alternative is to silence the first 120msec of the IR to create a so-called "tail IR". This removes the first-reflections information entirely from the IR and enables the sound to be moved freely by ambisonic panning. The level of reverberation is adjustable, however there will be no first-reflections information to aid the listener's localisation of the virtual source or to contribute to the illusion of its "naturalness". A fourth possibility is to use a tail IR in conjunction with various first-reflections IRs for different locations. These IRs, encoding only the reflections occurring between 10 and 120msec, could be chosen for example to match the positions of specific musicians on a stage. The engineer first pans the dry signal of a source to a particular position, then mixes in the wet signal derived from convolution with the first-reflections IR for the corresponding location, and additionally sends a feed from the dry signal to a global tail IR common to all sources.
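Two of the steps above can be sketched as shell helpers using sox (the function and file names are mine, not part of Aliki or the scripts attached here). The first merges the four Tascam mono captures of step 6 into a single 4-channel A-format file; the second is a command-line alternative to the Aliki editing in step 13, silencing the first 120msec of an IR to make a tail IR while preserving its overall length:

```shell
# Step 6 (hypothetical helper): interleave four mono files into one
# 4-channel A-format file ready for Tetrafile, using sox's -M (merge).
merge_aformat() {
    sox -M "$1" "$2" "$3" "$4" "$5"   # ch1 ch2 ch3 ch4 -> output
}

# Step 13 (hypothetical alternative to trimming in Aliki): discard the
# first 120 ms, then pad 120 ms of silence back at the start so the tail
# keeps its original time alignment. Note this is a hard cut rather than
# the raised-cosine fade that Aliki applies.
make_tail_ir() {
    sox "$1" "$2" trim 0.120 pad 0.120
}
```

For example: merge_aformat ch1.wav ch2.wav ch3.wav ch4.wav aformat.wav, or make_tail_ir full-ir.wav tail-ir.wav.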