On 5/30/07, Stephen Sinclair <radarsat1@gmail.com> wrote:
> when you do this, how can you really write the code completely from
> scratch?
Yeah. To explain where I am now: I signed up to write ChucK in a livecoding competition this Friday evening. These will be audience-graded six-minute sessions.
I don't really like that time span in combination with the competitive context, as I think it promotes prepared and practiced party tricks. I'd rather explore longer formats with an element of interaction. Interaction would force people to adapt on the fly, and would also take advantage of livecoding as a supremely adaptable instrument.
You can play for six minutes on a guitar too, but it's quite hard to turn a guitar into a different instrument within an hour...
> how come there isn't a 30-minute pause at the beginning of
> your set while you figure out what sound effect you want to make and
> then you perform a z-transform to figure out the filter coefficients
> you need or whatnot. no i'm kidding, I know there are a lot of
> pre-built filters in Chuck, but my point is, how much of your code do
> you write in advance, or is it really _completely_ on the fly?
While practising for this I've written everything on the fly, which is actually a lot of fun. With ChucK it's very easy to write something that makes some sound within half a minute to a minute, and it can be elaborated on in similar steps. I'm finding the STK a treasure trove for this. The STK has a lot of toys that will make very nice sounds in a very simple way, yet that can still be elaborated on to get very personalised results.
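To give an idea of what I mean, here's a minimal sketch (the particular instrument and numbers are just my choices, not a recipe): one STK instrument plus a reverb already sounds like something, and every line is a knob you can start turning.

```chuck
// one STK "toy": a plucked mandolin through a reverb
Mandolin m => JCRev rev => dac;
0.1 => rev.mix;

while (true)
{
    // pick a random note within an octave above middle C
    Std.mtof(60 + Math.random2(0, 12)) => m.freq;
    // pluck with a given strength
    0.8 => m.pluck;
    300::ms => now;
}
```

From here you can swap the instrument, constrain the notes to a scale, vary the timing, and so on, each in a few seconds of typing.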
> i'm imagining that a livecoder would have at least prewritten some
> sound effects and is triggering them live on stage by sporking them...
> but how many actual sound routines do you make up right there on the
> stage?
I imagine this differs per person and per occasion. If you were asked to supply an entire evening of music, it would be nice to have some files at hand to fill at least some of it...
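For what it's worth, the sporking approach you describe might look something like this (the function here is just an illustrative example, not anyone's actual set material): a prewritten sound routine becomes a function, and triggering it live is one `spork` line.

```chuck
// a prewritten "effect": a short enveloped sine blip
fun void blip(float freq, dur len)
{
    SinOsc s => ADSR env => dac;
    env.set(10::ms, 50::ms, 0.3, 100::ms);
    freq => s.freq;
    0.5 => s.gain;
    env.keyOn();
    len => now;
    env.keyOff();
    150::ms => now;  // let the release finish before the shred ends
}

// trigger prepared sounds live by sporking them
spork ~ blip(440.0, 200::ms);
spork ~ blip(660.0, 400::ms);

// keep the parent shred alive so the children can run
1::second => now;
```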
> personally it takes me minutes or hours of coding to get the sound
> that i want, that's why i can't figure out how you'd do it on a stage
> in front of people..
Yes; that's most definitely a question, but I find it's a question that affects all live performance. For example, I use a fairly large analogue mixing desk in my little studio, as well as a stack of analogue effects and some of those big old samplers with the crunchy DACs. To me those are great, if time-consuming, toys for getting the exact sound I want, but I'm NOT hauling all that to a gig. As you mention, there would be no time, but another matter is that I couldn't even lift all of it at the same time. So, for live performance there has to be a compromise, which basically means recording some of it to the laptop. If you make that choice you get the same, or nearly the same, sound, but with less of a live element.
I think it's safe to say that live performance of electronic music is a compromise anyway. With livecoding you don't get the same sound that you could otherwise get, because there is no time for that. On the other hand, you do get a lot of surprises that can be stimulating, and nobody will complain it's not live enough.
Another matter is that, in my own experience, sets that sound perfect but are etched in stone aren't that much fun to perform on stage. For my own primary live set right now, the compromise I make is using prepared sounds and sequencing them (in a ChucK-made sequencer) on the fly. I don't think I'll move to livecoding as a primary instrument any time soon, but as a challenge and an experiment it sounded like fun, and so far it is.
> Maybe there's something about the concept that i'm just not getting..
I think the easiest way to "get it" would be to sit down one afternoon and see how much you can do from scratch in an hour. If that's fun, do it a few times and I'm sure you'll soon be able to make something simple yet fun in a few minutes without using the manual. At that point you can consider "showing it off", kind of like guitarists who will pick up a guitar in mid-conversation.
This isn't a hard process; it's fun, and there is no need to fill a stadium for three hours by next week.
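As a starting point for such an afternoon, the "hello world" of from-scratch livecoding is only a handful of lines (again, just one possible starting point, not a prescription):

```chuck
// about the simplest thing that makes sound
SinOsc s => dac;
0.3 => s.gain;

while (true)
{
    // jump to a random frequency four times a second
    Math.random2f(220.0, 880.0) => s.freq;
    250::ms => now;
}
```

Everything else grows out of edits to a loop like this while it runs.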
> As for my personal interest in Chuck, I'm actually planning on using
> it as an audio back-end for some GUI-style audio applications, I think
> it makes for a really nice, OSC-enabled, cross-platform sound driver.
> (Well, as soon as I have time to work on something new...)
Well, that's perfectly fine as well, but it's not at all mutually exclusive with occasional one-hour livecoding jam sessions for personal enjoyment. In fact, I think those two would benefit from each other.
I think it's clear that I'd like to take this moment to encourage people to try livecoding a few times, but it should also be clear that there is no pressure on anyone. There is most definitely nothing wrong with making softsynths, "OSC-enabled, cross-platform sound drivers", or MIDI-learn tools for devices that lack one!
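For anyone curious about the OSC-driven back-end idea, a minimal sketch in ChucK could look like this (the address `/play/note` and port 6449 are just placeholders I picked for illustration):

```chuck
// listen for OSC messages and play the MIDI notes they carry
OscRecv recv;
6449 => recv.port;
recv.listen();

// expect messages like: /play/note <int>
recv.event("/play/note, i") @=> OscEvent oe;

SinOsc s => dac;
0.3 => s.gain;

while (true)
{
    oe => now;  // wait for an incoming message
    while (oe.nextMsg())
    {
        oe.getInt() => int note;
        Std.mtof(note) => s.freq;
    }
}
```

A GUI in any language that can send OSC could then drive the sound from the other side.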
Cheers,
Kas.
ps, wow, that turned out long....