Lisa, voice-gain convolution, etc.
Hi Dan, Hi list!

I was thinking about a little ChucK convolution patch, kind of as an exercise, kind of as a joke. Basically I was thinking about taking an array of SndBufs (as many as there are samples in the impulse response, all holding the same impulse response) and firing them off in turn each samp, with a gain determined by the .last() of the input sound. Simple, fun, and probably a bad idea on any practical level, since excellent free convolution reverbs are around already.

Then I thought of LiSa... LiSa is already a lot like an array of identical bufs, so this might even make a bit of sense. Sadly, it turns out that LiSa lacks something like this:

mylisa.voicegain( voice, float )

The more I think about that, the more it makes sense as a practical addition, not just for convolution. For example, this might be mapped to velocity for some cheap & cheerful soft-sampler, and per-grain variable volume is useful in grain-clouds as well.

Could this perhaps be considered for a future update?

Cheers, Kas.
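For reference, the scheme described here (firing one delayed, gain-scaled copy of the impulse response per input sample) computes plain time-domain convolution. A minimal Python sketch of that sum, using toy signals of my own choosing, not anything from the thread:

```python
def convolve(signal, impulse_response):
    """Direct (time-domain) convolution: every input sample fires one
    copy of the impulse response, scaled by that sample's value."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):        # one "voice" per input sample
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h           # delayed, gain-scaled copy
    return out

# A unit impulse through any IR returns the IR itself:
print(convolve([1.0], [0.5, 0.25]))       # [0.5, 0.25]
```

The nested loop is exactly what makes the per-sample voice scheme expensive: the inner loop runs once per IR sample for every input sample.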
hi Kas, wow i never thought LiSa would ever be used to approximate convolution; cool! that's a very easy addition and i can see how it might be generally useful; will definitely add. best, dan On May 14, 2007, at 7:58 PM, Kassen wrote:
_______________________________________________
chuck-users mailing list
chuck-users@lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/chuck-users
On 5/15/07, dan trueman wrote:
hi Kas, wow i never thought LiSa would ever be used to approximate convolution; cool!
Approximate? I thought that my outlined method came down to actual convolution, but it's entirely possible that I grossly misunderstand what's involved; this happens to me a lot, but often it's possible to mask this by claiming my misunderstanding was actually a completely new idea :¬).

Actually, I've been thinking about ChucKian convolution on and off since I started ChucKing, since as I see it convolution deals with spectral characteristics following from timed information, so in a way it fits. It's just that convolution is notoriously CPU-heavy and ChucK is not so famous for brute efficiency (at least not with the CPU's time; it is with mine), so I never got round to actually coding it up.

More generally, I think LiSa might be good for many, many unexpected things because it's quite general and open to interpretation, so that's good.

that's a very easy addition and i can see how it might be generally useful; will definitely add.

Wonderful! Many thanks for your quick response. Yours, Kas.
Approximate? I thought that my outlined method came down to actual convolution, but it's entirely possible that I grossly misunderstand what's involved; this happens to me a lot, but often it's possible to mask this by claiming my misunderstanding was actually a completely new idea :¬).
i re-read and see that you actually plan to throw off a voice *every sample* (more wonderful chuck-inspired insanity!) so i withdraw the "approximate." how fast is your machine? ;--}
Actually, I've been thinking about ChucKian convolution on and off since I started ChucKing, since as I see it convolution deals with spectral characteristics following from timed information, so in a way it fits. It's just that convolution is notoriously CPU-heavy and ChucK is not so famous for brute efficiency (at least not with the CPU's time; it is with mine), so I never got round to actually coding it up.
i believe there is a bunch of spectral stuff in the works, which i am hankering to get a hold of meself.
More generally, I think LiSa might be good for many, many unexpected things because it's quite general and open to interpretation, so that's good.
i'm glad to hear it! dan
i re-read and see that you actually plan to throw off a voice *every sample* (more wonderful chuck-inspired insanity!) so i withdraw the "approximate." how fast is your machine? ;--}
2GHz Pentium 4 Mobile with 1GB of RAM. In other words: "not fast enough for my dreams". Still, we can render, and I said up front that it was an exercise and a joke. I don't mind rendering if it gives me something I can't otherwise obtain. Convolution reverbs tend to be all about "quality" and leave little room for messing things up and seeing what happens, so rolling your own might have some advantages. We'll see where and if it collapses; safety seems very un-ChucKian to me :¬).
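To put rough numbers on that (my own back-of-envelope figures, assuming CD-rate audio and a hypothetical one-second impulse response; neither number comes from the thread): direct convolution costs one multiply-add per input sample per IR sample.

```python
# Back-of-envelope cost of direct (one-voice-per-sample) convolution.
# Assumed figures: 44.1 kHz sample rate, 1-second impulse response.
sample_rate = 44_100                       # samples per second
ir_samples = int(sample_rate * 1.0)        # IR length in samples

# Each output sample needs one multiply-add per IR sample:
macs_per_second = sample_rate * ir_samples
print(f"{macs_per_second:,} multiply-adds per second")  # 1,944,810,000
```

Roughly two billion multiply-adds per second, before the machine does anything else, which illustrates why FFT-based engines exist.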
i believe there is a bunch of spectral stuff in the works, which i am hankering to get a hold of meself.
OOOH! And now so am I.
i'm glad to hear it!
It's good. I like how it's a very open-ended way of doing grains; I was a bit tired of the typical ready-made grain-based tool that's a bit heavy on the random stuff. Kas.
On Tue, May 15, 2007 at 03:02:46AM +0200, Kassen wrote:

More generally I think LiSa might be good for many, many unexpected things because it's quite general and open to interpretation, so that's good.
Consider maybe adapting code Fons is writing: http://www.kokkinizita.net/linuxaudio/ His JACE is really fast. He's also rewriting it and creating a new convolution engine, with a goal of not having any latency at all and being very CPU-efficient.

Or, use JACK to feed ChucK stuff out to an external convolution engine and back again. One of the bummers about ChucK using RtAudio is that it can't mess about directly with JACK graphs. It seems like a missed opportunity when a sample-accurate, real-time music programming language doesn't have direct access to a sample-accurate, real-time audio engine like JACK.

LADSPA and/or LV2 support inside ChucK would be really nice too. There are some fine "ugens" available in LADSPA format, and new ones coming for LV2 that are quite nice.

-ken
Ken Restivo
Consider maybe adapting code Fons is writing.
http://www.kokkinizita.net/linuxaudio/
His JACE is really fast. He's also rewriting it and creating a new convolution engine, with a goal of not having any latency at all and being very CPU-efficient.
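A sketch of why FFT-based engines like JACE win: multiplying two signals' spectra is equivalent to circularly convolving them in time. A pure-Python illustration using a naive DFT (toy-sized, my own illustration and not from the thread; real engines use O(n log n) FFTs and partition the impulse response to keep latency down):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    """Inverse DFT, same naive form."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

def circular_convolve(x, h):
    """Convolve by multiplying spectra bin-by-bin, the trick fast
    convolution engines exploit."""
    X, H = dft(x), dft(h)
    y = idft([a * b for a, b in zip(X, H)])
    # round off float noise; + 0.0 normalizes any -0.0
    return [round(c.real, 6) + 0.0 for c in y]

# Zero-padded inputs give the same result as direct convolution:
print(circular_convolve([1, 2, 0, 0], [1, 1, 0, 0]))  # [1.0, 3.0, 2.0, 0.0]
```

Per block, the spectral route costs a few transforms plus one multiply per bin, instead of one multiply-add per IR sample per input sample.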
Hi, Ken! That one (I read the intro just now) is FFT-based. That's nice, and I'm sure it's way faster than what I described, but it doesn't really fit with this "do new things with ChucK's timing" thought experiment. Depending on exactly what the spectral stuff Dan hinted at involves, that might be an option for the future. Doing FFT math yourself still sounds a bit scary to me.

Or, using JACK to feed ChucK stuff out to an external convolution engine and back again. One of the bummers about ChucK using RtAudio is that it can't mess about directly with JACK graphs. It seems like a missed opportunity, when a sample-accurate, real-time music programming language doesn't have direct access to a sample-accurate, real-time audio engine like JACK.
LADSPA and/or LV2 support inside ChucK would be really nice too. There are some fine "ugens" available in LADSPA format, and new ones coming for LV2 that are quite nice.
Sounds like good ideas. On a practical level, though, I have to confess to being a bit behind the times and actually using a spring reverb when I need reverb, most of the time. ChucKian convolution (ChucKvolution!) was a bit of a thought experiment, but I could imagine it being practical at times. For example, one could use an IR that ends in a tail of noise that eventually decays. Using LiSa, one could loop the noisy end before the decay during quiet passages or at the end of a phrase, then drop that loop at other moments to preserve clarity and headroom. Convolution, like my spring reverb, is nice, but it's usually not very versatile once set up, and rolling your own, then doing new and weird stuff to it, seems easier than adding a CV-controlled damper to a spring reverb. I'll be the first to admit that if efficiency is an issue then mine is a very bad idea. A less bad idea might be using LiSa to try to make a grain-based reverb like Ableton has? Yours, Kas.
participants (3)

- dan trueman
- Kassen
- Ken Restivo