
Scott Wheeler wrote:
The main thing this boils down to from my side is doing controlled randomization, parameter envelopes or event sequences that are too complicated or tedious to do directly in the sequencer.
I can see doing that myself. For the last couple of years I have been using Reaktor to build instruments that generate sound through simple interaction... no complex sequencing. For example, I slow a drum machine to 2 BPM and run the hits through a resonant filter and delay, with one or two LFOs cycling some parameters. This might create odd popping and chirping sounds at randomish intervals. This is all well and good, but the only algorithmic devices I've used have been made by other people, since Reaktor is not the best environment for writing equations. That said, I would rarely want to simply feed in an equation and watch it run. What is great about Reaktor is that it is easy for me to "play" these instruments in real time, since any of their parameters can be exposed to controls and mapped with MIDI. So I am able to jam with my creations with some sound factors under strict control, others wandering, and still others directly played.
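
To give a rough feel for that setup in code: here is a toy Python sketch, not my Reaktor patch, with made-up names and numbers. It just prints an event list for sparse hits at 2 BPM, with the resonance knob held fixed, a slow LFO wandering the filter cutoff, and the delay feedback lightly randomized.

    # Toy sketch of "some parameters fixed, some wandering, some randomized".
    # Purely illustrative; values and names are invented.
    import math
    import random

    BPM = 2                      # drum machine slowed way down
    BEAT_SECONDS = 60.0 / BPM    # 30 seconds between hits
    LFO_RATE_HZ = 0.01           # very slow LFO sweeping the cutoff
    RESONANCE = 0.85             # held under strict control

    def lfo(t_seconds):
        """Slow sine LFO scaled into [0, 1]."""
        return 0.5 + 0.5 * math.sin(2 * math.pi * LFO_RATE_HZ * t_seconds)

    def generate_events(duration_seconds=240):
        """Yield (time, cutoff_hz, resonance, delay_feedback) per hit."""
        t = 0.0
        while t < duration_seconds:
            cutoff = 200 + 3000 * lfo(t)          # wandering parameter
            feedback = random.uniform(0.3, 0.7)   # controlled randomization
            yield (t, cutoff, RESONANCE, feedback)
            t += BEAT_SECONDS

    if __name__ == "__main__":
        for time_s, cutoff, res, fb in generate_events():
            print(f"hit @ {time_s:6.1f}s  cutoff={cutoff:6.1f}Hz  "
                  f"resonance={res:.2f}  delay_feedback={fb:.2f}")

In the real instrument those three kinds of parameters are the ones I expose to MIDI controls, so the "fixed" ones can be grabbed and played by hand mid-set.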
From what I gather on the list, I'm also different from most of the crowd here in that I've defected from the artsy world (to the general direction of repetitive dance music with lots of beeping sounds) and performances tend to be in warehouses or bars rather than conservatory concert halls. :-)
I gigged last weekend at just such a venue in Dublin, in what might be called "enhanced DJ" mode. I mixed other people's music with my own, played live out of Reaktor. Whether the audience knew it or not, they were witness to a one-of-a-kind audio landscape. -- robin