Saturday, 29 October 2016 12:32

Ambisonic Map


An ambisonic map with high-quality B-format field recordings for download (48kHz, 24bit, wav format). Each recording is accompanied by UHJ stereo and binaural mixes for listening directly online. The map was produced as part of the Cerrado Ambisônico project.

SEE THE MAP

 

Bruna Martini: winner of the Best Actress award in the Mostra Competitiva Nacional, FESTU-Rio, for the play "Stanisloves-me". Simone Reis: nominated for Best Direction.

"Stanisloves-me"

Performance: Bruna Martini
Direction: Simone Reis
Conception and dramaturgy: Simone Reis and Bruna Martini
Texts: Hilda Hilst, Bruna Martini and Simone Reis

Sunday, 11 September 2016 13:53

B-Format to Binaural & UHJ Stereo


Use the three scripts contained in the zip file below under "Download attachments" to batch convert a directory of B-format audio to binaural and UHJ stereo. This requires that SuperCollider is installed with the standard plugins and that the "Ctk" quark is enabled. The main shell script also encodes mp3 versions of the binaural and UHJ files; if this feature is left enabled (not commented out), "lame" must also be installed. The SuperCollider script uses the ATK and is taken directly from the SynthDef and NRT examples for the ATK.

Unzip the scripts into a directory, edit the paths in the file renderbinauralUHJ.sh to match your installation and distribution of Linux, make the file executable and run it in the directory containing the B-format files to perform the conversions.
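
The attached scripts do the actual work, but as a rough sketch of the overall shape of such a wrapper (the script name, file naming and bitrate below are hypothetical and not taken from the attachment), something like the following loops over the B-format files, renders each with a SuperCollider NRT script and optionally encodes mp3s with lame:

#!/bin/bash
# Hypothetical sketch only, not the attached renderbinauralUHJ.sh. Assumes an
# NRT SuperCollider script "render.scd" that reads the INFILE environment
# variable and writes INFILE-binaural.wav and INFILE-uhj.wav alongside it.

SCLANG=/usr/bin/sclang    # adjust to your SuperCollider installation

for infile in *.wav
do
    # render binaural and UHJ stereo mixes in non-realtime mode
    INFILE="$infile" "$SCLANG" render.scd

    # optional mp3 encoding; comment out if lame is not installed
    lame -b 320 "${infile%.wav}-binaural.wav" "${infile%.wav}-binaural.mp3"
    lame -b 320 "${infile%.wav}-uhj.wav" "${infile%.wav}-uhj.mp3"
done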

Sunday, 11 September 2016 13:50

Creating Impulse Responses with Aliki


The following procedure shows how to make B-format impulse responses (IRs) with the Linux software Aliki by Fons Adriaensen. A detailed user manual is available for Aliki; the guide presented here on escuta.org, however, is intended to show how to produce IRs without the need to run the software in the field, enabling the use of portable audio recorders such as the Tascam DR-680. The procedure was arrived at through email correspondence with Fons. His utility "bform2ald" is included here with permission.

Field Equipment used:

  • Zoom H4n - for sine sweep playback
  • Core Sound Tetramic, stand and cabling
  • Tascam DR-680 multi-channel recorder
  • Yorkville YSM1p powered audio monitor with cabling to play Zoom H4n output
  • 12V sealed lead acid battery and recharging unit
  • Power inverter to suit monitor

1. Launch Aliki in the directory in which you wish to create and store your "session" files and sub-directories, select the "Sweep" window and create a sweep file with these or other values:

  • Rate: 48000
  • Fade in: 0.1
  • Start freq: 20
  • Sweep time: 10
  • End freq: 20e3
  • Fade out: 0.03

2. Select "Load" to load the sweep into Aliki and perform an export as a 24bit wav file or file type of your choosing.

3. Import the "*-F.wav" export into Ardour or another sound editor and insert an 800Hz blip or other audio marker 5 seconds before the start. Insert some silence before the blip, as some players (the Zoom H4n, for example) may miss the initial milliseconds of a file on playback. Export as a 24bit 48kHz stereo file, since the Zoom doesn't accept mono files.
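
As an alternative to a sound editor for this step, a sketch along these lines with SoX may serve (SoX is not part of the original workflow, the filenames are hypothetical and the sweep export is assumed to be mono at 48kHz):

# generate a 50ms 800Hz blip, 24bit at 48kHz
sox -n -r 48000 -c 1 -b 24 blip.wav synth 0.05 sine 800
# some leading silence so the player doesn't clip the start of the file
sox -n -r 48000 -c 1 -b 24 lead.wav trim 0 1
# 5 seconds of silence between the blip and the sweep
sox -n -r 48000 -c 1 -b 24 gap.wav trim 0 5
# concatenate and convert to the stereo file required by the Zoom H4n
sox lead.wav blip.wav gap.wav sweep-F.wav playback-mono.wav
sox playback-mono.wav -c 2 playback-stereo.wav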

4. Import the file into the Zoom H4n recorder for playback.

5. In the field, connect the line out of the Zoom H4n to the Yorkville YSM1p and play the file, recording with the Tetramic and Tascam DR-680. In my first test I recorded with the meter reading at around -16dB. I could have given the amp more gain, but the speaker casing was beginning to buzz with the low frequencies.

6. The Tascam creates 4 mono files. Use the script described further below to combine them into an A-format file and convert this to B-format with tetrafile, using the mic's calibration data (with the "def" setting).
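
In essence this amounts to the two commands below (the filenames are hypothetical, the calibration file shown is specific to one microphone, and the full batch script is given further down the page):

# combine the four Tascam mono files into a single 4-channel A-format wav
# (input channel order must match the Tetramic capsules)
/usr/local/bin/makemulti --wav --24bit ch1.wav ch2.wav ch3.wav ch4.wav capture_a-format.wav
# convert A-format to B-format using the Tetramic calibration data ("def" setting);
# substitute the .tetra file generated for your own microphone
/usr/local/bin/tetrafile --fuma --wav --hpf 20 ~/.tetraproc/CS2293-def.tetra capture_a-format.wav capture_b-format.wav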

7. Install and use the utility bform2ald (see "Download attachments" below) to convert the B-format capture to Aliki's native "ald" file format.

8. Load the "ald" sweep capture into Aliki. Enter into edit mode and right-click to place a marker at the beginning of the blip. Use the logarithmic display to make the positioning easier. Once positioned, left-click "Time ref" to zero the location of the blip, then slide the marker to the 5 second mark and again left-click "Time ref" to zero the location of the start of the capture.

9. Right-click a second time a little to the right of the blue start marker. This will create a second olive coloured marker, marking the point at which a raised cosine fade-in starting at the blue marker will reach unity gain. When positioned, left-click "Trim start". Zoom out and drag the two markers to the end of the capture in order to perform a fade out in the same way with "Trim end". Use the log view to aid with this process. 

10. Save this trimmed capture in the edited directory with "Save section".

11. Select "Cancel" and then "Load" to reload the freshly trimmed capture in the edited directory, then select "Convol". In this window, select the original sweep file used to create the capture in the "Sweep" dialogue. Enter "0" in the "Start time" field and in the "End time" field enter a number in seconds that represents the expected reverberation time plus two or three more seconds. Finally, select apply to perform the deconvolution, then perform a "Save section" to save the complete IR in the "impresp" directory.

12. Select "Cancel" and "Load" to load the recently created impulse in the "impresp" directory, then enter edit mode. The impulse may not be visible, so use the zoom tools and, in Log view, identify the first peak in the IR, which should appear shortly after 0 seconds. This peak should represent the direct sound. While we may decide not to keep this peak, we will use it now to normalise the IR so that a 0 dB post-fader aux send to the convolver will reproduce the correct ratio of direct sound to reverberation when using "tail IRs" or IRs without the direct impulse (see 13 below). To normalise, right-click to position the blue marker on the peak, then left-click "Time ref" to zero the very start of the direct impulse and shift-click "Gain / Norm".

13. The complete IR created above in step 12, containing the impulse of the direct signal as well as those of the first reflections and of the diffuse tail, may be convolved with an anechoic source to position that source in the sound field. If used in this way, the "dry" signal of the source should not be mixed with the "wet" or convolved signal and there will be no control over the degree of reverberation.

If however the first 10msec of the IR are silenced (using the blue and olive markers and "Trim start" in Aliki to fade in from silence just before 10msec, for example), the anechoic signal may be positioned in the sound field by including the dry signal in the mix (panned by ambisonic means to a position corresponding to that of the original source in the IR) and varying the gain on the "wet" or convolved signal to adjust the level of reverberation and reinforce the apparent position of the virtual source through the first reflections encoded in the IR.

Another alternative is to silence the first 120msec of the IR to create a so-called "tail IR". This removes the first reflections information entirely from the IR and enables the sound to be moved freely by ambisonic panning. The level of reverberation is adjustable, however there will be no first reflections information to aid in the listener's localisation of the virtual source or to contribute to the illusion of its "naturalness".

A fourth possibility is to use a tail IR in conjunction with various IRs for different locations. These IRs, encoding first reflections only (those occurring between 10 and 120msec), could be chosen for example to match the positions of specific musicians on a stage. The engineer will first pan the dry signal of a source to a particular position, then mix in the wet signal derived from convolution with the first reflections IR for the corresponding location and additionally send a feed from the dry signal to a global tail IR common to all sources.
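
For reference, a crude command-line approximation of a tail IR can also be made outside Aliki with SoX (not part of the original workflow; this makes a hard cut at 120msec rather than Aliki's raised-cosine fade, and the filenames are hypothetical):

# remove the first 120msec of the IR, then pad the same amount of silence
# back onto the start so the timing of the reverberation is preserved
sox full-ir.wav cut.wav trim 0.120
sox cut.wav tail-ir.wav pad 0.120
rm cut.wav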


The script and other configurations detailed on this page convert mono files generated by a Tascam DR-680 with a Core Sound Tetramic soundfield microphone to B-format 4-channel wav files. It requires that the Tascam DR-680 is configured to save recordings as mono sound files on channels 1, 2, 3 & 4 and that these channel numbers match the corresponding capsules on the Tetramic. The script also requires that the Tetramic calibration files are installed (see below) and that the following programs by Fons Adriaensen are installed:

tetraproc-0.8.2.tar.bz2 

jconvolver-0.9.2.tar.bz2 (includes the necessary utility "makemulti")

Fons Adriaensen provides a free calibration service for Tetramics which generates calibration files specific to each microphone based on data provided with the microphone on purchase. See "TetraProc / TetraCal" and "Calibration service for Core Sound's TetraMic" on this page for further information.

Run the script in a directory containing the mono files. Change paths and configuration filenames in the script as necessary. Use the command line argument --elf to enable extended low frequency response in the B-format output (-3dB at 40Hz), or omit it to use the default roll-off at 80Hz.

The B-format script is contained in the attachment "mono2bformat.zip" below. Alternatively, copy the following code:

#!/bin/bash

# Converts dated mono files generated by a Tascam DR-680 with a Core Sound
# Tetramic ambisonic microphone to B-format 4-channel wav files. Run this
# script in the directory containing the mono files. Change paths as necessary.
# Use the command line argument --elf to enable extended low frequency response
# in the B-format output (-3dB at 40Hz) or none to use the default roll-off at 80Hz.

if [ "$1" = "--elf" ]; then
    config="elf"
else
    config="def"
fi

[ -d aformat ] || mkdir aformat
[ -d bformat ] || mkdir bformat

for file in *.wav
do
    # the date-based prefix shared by the four channel files, and the channel number
    base=${file:0:11}
    channelnumber=${file:15:1}

    if [ "$channelnumber" = "1" ]; then
        # first channel: begin assembling the makemulti command
        command="/usr/local/bin/makemulti --wav --24bit $file"
    fi
    if [ "$channelnumber" = "2" ]; then
        command="$command $file"
    fi
    if [ "$channelnumber" = "3" ]; then
        command="$command $file"
    fi
    if [ "$channelnumber" = "4" ]; then
        # fourth channel: complete the command and create the A-format file
        command="$command $file $base"
        suffix="a-format.wav"
        command=$command$suffix
        $command
        aformatfile=$base$suffix
        mv ./$aformatfile ./aformat

        if [ "$config" = "elf" ]; then
            suffix="b-format_elf.wav"
        else
            suffix="b-format.wav"
        fi
        bformatfile=$base$suffix

        # convert A-format to B-format with the Tetramic calibration data
        if [ "$config" = "elf" ]; then
            /usr/local/bin/tetrafile --fuma --wav --hpf 20 /home/iain/.tetraproc/CS2293-elf.tetra aformat/$aformatfile bformat/$bformatfile
        else
            /usr/local/bin/tetrafile --fuma --wav --hpf 20 /home/iain/.tetraproc/CS2293-def.tetra aformat/$aformatfile bformat/$bformatfile
        fi
    fi
done

# remove the intermediate A-format files
rm -r aformat

exit 0
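
A typical invocation might look like this, assuming the script has been saved as mono2bformat.sh and made executable (the path is hypothetical):

cd /path/to/dr680/recordings    # directory containing the Tascam mono files
./mono2bformat.sh --elf         # or omit --elf for the default 80Hz roll-off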

 

A binaural demonstration of the Mosca SuperCollider class using the voice of Simone Reis reciting from the drama Gota d'Água, B-format recordings of a Chinook helicopter and of Spitfires by John Leonard, B-format recordings of insects and frogs from Brasilia and Chapada dos Veadeiros, a galloping horse (spatialised mono source with Doppler effect and Chowning-style reverberation) and some Schubert (stereo). Binaural decoding performed with the CIPIC HRTF database's subject ID# 21, included in the ATK.

Please listen with headphones and ensure that they are correctly orientated.

Related video


Mosca is a SuperCollider class for GUI-assisted authoring of ambisonic sound fields with simulated moving or stationary sound sources. The class makes extensive use of the Ambisonic Toolkit (ATK, see: http://www.ambisonictoolkit.net/) by Joseph Anderson and the Automation quark (https://github.com/neeels/Automation) by Neels Hofmeyr. Mosca is written by Iain Mott and licensed under a Creative Commons Attribution-NonCommercial 4.0 International License: http://creativecommons.org/licenses/by-nc/4.0/

Sound fields may be decoded using a variety of built-in 1st order ambisonic SuperCollider decoders (including binaural) or with external 2nd order decoders such as Ambdec in Linux. Input sources may be any combination of mono, stereo or B-format material and the signals may originate from file, from hardware inputs (physical or from other applications such as a DAW via Jack) or from SuperCollider's own synths. In the case of synth input, synths are associated by the user with a particular source in the GUI and registered in a synth registry. In this way, they are spatialised by the GUI and also receive all of the data from the GUI pertaining to the source (e.g. x, y and z coordinates and auxiliary fader data). Mosca has its own transport, provided by the Automation quark, for recording and playback of source data. This may be used independently or may be synchronised to a DAW using Midi Machine Control (MMC) messages. This function has been tested with Ardour and Jack.

Mono and stereo sources are encoded as 2nd order ambisonic signals whereas B-format signals remain 1st order and are angled in space using "push" transformations. Source signals are attenuated in proportion to the inverse of the square root of proximity, or linearly with distance, selectable on a per-source basis via the GUI. All sources are subject to high-frequency attenuation with distance and, if decoding is performed by one of the ATK's 1st order decoders, a proximity effect is generated, adding a bass boost to proximal sources among other phase effects to simulate wave curvature (see: http://doc.sccode.org/Classes/FoaProximity.html).

Reverberation is performed either using a B-format tail room impulse response (RIR), the preferred method, or using simple built-in allpass filters, options selectable on creation of a Mosca instance. With both options, two reverberation level controls are included in the GUI to set close and distant levels. A further two reverb types are selectable in the GUI on a per-source basis for both RIR and allpass reverberation modes. The default reverb type uses John Chowning's technique of applying "local" and "global" reverberation to sources (CHOWNING). The "Close" reverberation of the GUI in this case is "global" and is audible to the listener from all directions when the source is close, whereas "distant" reverb is "local" in scope and is encoded as a 2nd order ambisonic signal along with the dry signal. This predominates as the source becomes more distant. The second type of reverberation may be described as a "2nd order diffuse A-format reverberation". This technique produces reverberation weighted in the direction of sound events encoded in the dry ambisonic signal and involves conversion to and from A-format in order to apply the effect (ANDERSON). The encoded 2nd order ambisonic signal is converted to a 12-channel A-format signal and then either a) convolved with a B-format RIR which has been "upsampled" to 2nd order and converted to an A-format impulse response, or, in the case of the allpass option, b) passed through a 12-channel bank of allpass filters, before being converted back to a 2nd order B-format diffuse signal. Please note that the 2nd order diffuse reverberation may require the user to set a larger audio output buffer and thus increase the latency of the system. The "Chowning" type reverberation is more efficient and the "allpass" option more efficient still.

Mosca also has other features including a scalable Doppler effect on moving sources, looping of sources loaded from file, adjustment of the virtual loudspeaker angle of stereo sources and, in the case of B-format sources, a rotation control, adjustment of "directivity" (see the ATK documentation) and a "contraction" control, whereby the B-format signal may be crossfaded with its W component, which is spatialised as a 2nd order ambisonic signal.

Additionally, Mosca supports methods for making "A-format inserts" on any source spatialised in the GUI. In this way, the user may write a filtering synth and apply it to the sound without disrupting the encoded spatial characteristics.

If you use these resources or have suggestions, please get in contact!

DOWNLOAD

To use Mosca, note that SuperCollider must be installed with its full assortment of plugins. The Mosca class has been prepared as a SuperCollider quark and is available here: https://github.com/escuta/mosca

You may clone the Mosca quark using "git clone https://github.com/escuta/mosca" or download the project as a Zip file from the github page, then place it in your quarks directory to install. Alternatively, if using SuperCollider 3.7 or higher, simply run the following command in SuperCollider to install: Quarks.install("https://github.com/escuta/mosca");
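
For a manual installation, the steps on Linux might look like the following (the downloaded-quarks path is an assumption based on SuperCollider's usual default location; adjust to your setup):

# assumed default quarks location for SuperCollider on Linux; adjust as needed
cd ~/.local/share/SuperCollider/downloaded-quarks
git clone https://github.com/escuta/mosca
# then enable the quark from within SuperCollider, or use Quarks.install as above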

See the Mosca quark help file for instructions. You may also choose to download the following zipped example project directory:

moscaproject.zip

This archive contains the file structure necessary to run Mosca as well as example room impulse responses (RIRs). B-format material is also provided in the archive, including a B-format Spitfire recording by John Leonard, used with kind permission, as well as other B-format material recorded by Iain Mott in Chapada dos Veadeiros and Brasilia. A README file, included in the archive and attached separately below, gives complete instructions on installing Mosca from scratch on Linux. Importantly, the README also details how to use the Mosca GUI.

Listen with headphones to an audio demonstration of Mosca.

ACKNOWLEDGEMENTS

Many thanks to Joseph Anderson, Neels Hofmeyr and members of the SuperCollider list for their assistance and valuable suggestions.

REFERENCES

ANDERSON, Joseph. Authoring complex Ambisonic soundfields: An artist's tips & tricks. In: DIGITAL HYBRIDITY AND SOUNDS IN SPACE JOINT SYMPOSIUM. University of Derby, UK, 2011.

CHOWNING, John M. The Simulation of Moving Sound Sources. Computer Music Journal, v. 1, n. 3, p. 48–52, 1977. 

 

Friday, 23 October 2015 18:12

Welcome to Escuta.org!

Escuta.org brings together sound art and performance projects by Simone Reis and Iain Mott, along with other collaborating artists and technicians, including Marc Raszewski, Jim Sosnin, Nelson Maravalhas and students of the Department of Performing Arts at the University of Brasília (UnB), among others.

To see the individual projects, consult Projects in the menu above. The projects are divided into five groups: 1) sound and scenic installation by Mott, Reis and others; 2) performance by Simone Reis; 3) sound art and musical composition by Iain Mott; 4) research by Iain Mott; 5) pedagogical projects by Reis and Mott at the University of Brasília.

We hope you enjoy Escuta.org!

New B-format ambisonic recordings. See this link.

Sunday, 27 September 2015 18:30

Zhong Shuo

Zhong Shuo (People Say) is a sound installation made collaboratively by the Australian sound artist Iain Mott, the Beijing visual artist Ding Jie and the Chongqing collective The Li Chuan Group, comprising Li Chuan, Ren Qian and Li Yong. The work functions as a system for the collection and narration of stories, focused on the rapid force of change in China. In 2005 the work consisted of two installations, one at the Long March Space in the Dashanzi district of Beijing and the second at the Chongqing Planning Exhibition Gallery in Chaotianmen Square, Chongqing. Connected by the internet, the installations shared stories and played them back automatically on site and online using MP3 streaming. Third and fourth installations were made in Shanghai and Brisbane in 2006 as part of Multimedia Art Asia Pacific - MAAP.

