Hi there I'd like to set up a patch like this... noise n => biquad f => dac; ...and then have an ADSR modulate f.pfreq. I know how to use ADSR for modulating volume (just send it through) and I also know that the modulation can be done with explicit timing using a loop and chucking to now at "control rate". But is it done with an ADSR? -- peace, love & harmony Atte http://www.atte.dk | quartet: http://www.anagrammer.dk http://www.atte.dk/gps | compositions: http://www.atte.dk/compositions
Atte André Jensen wrote:
But is it done with an ADSR?
...*how* is it... -- peace, love & harmony Atte http://www.atte.dk | quartet: http://www.anagrammer.dk http://www.atte.dk/gps | compositions: http://www.atte.dk/compositions
I think I understand what you want. Chances are you are used to modular systems, so "noise n => biquad f => dac;" looks sensible and you now also want to go "ADSR e => f.pfreq;" (or something similar) like a patch cable, right? Well, I'm sorry, but that won't work. Writing to parameters has to be done explicitly at some control rate (or at least in some sort of known series of events).

The good news, though, is that the control rate can be the sample rate if you need it to be, or it can be the highest rate that doesn't cause glitches on your CPU, or you can dynamically modulate it through the piece, or whatever. This is a lot more powerful than Csound (where everything has the same control rate, set at the beginning) or the Nord Modular (where it's always a quarter of the sample rate), but it still inherits from the traditional computer music concept of "control rates" (meant to save CPU).

I'm not sure about you, but to me this at first looked clumsy compared to the Nord Modular (and similar systems) for simple synthesis. However, now that I've figured out that this makes "control rate" in ChucK work the same as "musical events", it kind of makes sense to me. Hz and BPM, after all, are different words for the same sort of parameter.

I hope that clarifies a little and doesn't disappoint too much. Relative to other systems I think this is a very good idea, but nothing is perfect, especially not ChucK. Then again, ChucK doesn't claim it will be perfect as soon as you buy it, which lots of other things seem to do.....

Kas.
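(For reference, a minimal, untested sketch of such a "control rate" loop, using the capitalized ugen names Noise and BiQuad; the 5::ms step and the frequency range are arbitrary choices:)

// sweep the filter frequency "by hand" -- this loop *is* the control rate
Noise n => BiQuad f => dac;
0.99 => f.prad;      // pole radius: narrow the resonance
1 => f.eqzs;         // equal-gain zeros
0.1 => n.gain;       // keep the noise at a sane level

while( true )
{
    for( 100.0 => float freq; freq < 4000.0; 1.05 *=> freq )
    {
        freq => f.pfreq;
        5::ms => now;    // smaller value = finer (and more CPU-hungry) control
    }
}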
Kassen wrote:
Well, I'm sorry, but that won't work. Writing to parameters has to be done explicitly at some control rate (or at least in some sort of known series of events).
:-( Well, I appreciate the power inherent in the way ChucK works. My problem with accepting that is not so much conceptual as one of code modularity. All of the examples (however fine they are) are stand-alone programs that have to think of all aspects. But what I really, really need is to be able to (sometimes) separate "the patch" from "the notes". IOW I need to encapsulate some kind of instrumental idea and be able to reuse it everywhere.

AFAICS the need to explicitly worry about timing when modulating something makes that close to impossible. The instrument shouldn't care whether it's requested to play for 1::ms or 1::week, but to me (still new to ChucK concepts and tricks) this seems to be totally intertwined with what information the caller has. How does everyone go about modularizing their setup, thus separating noise making from note making?

BTW: I can of course think of a lot of situations where this separation of instrument/score (old Csounder here) doesn't make sense, and I already use a lot of that. But sometimes I just want to encapsulate that nice bass drum sound for reuse...

-- peace, love & harmony Atte http://www.atte.dk | quartet: http://www.anagrammer.dk http://www.atte.dk/gps | compositions: http://www.atte.dk/compositions
BTW: I can of course think of a lot of situations where this separation of instrument/score (old Csounder here) doesn't make sense, and I already use a lot of that. But sometimes I just want to encapsulate that nice bass drum sound for reuse...
Yeah, I hear you. I think you'll have to live with seeing a sporked shred that takes care of the "control rate" for the envelope as a part of the patch. It's just a few lines anyway. That's what I do: I just spork some functions, then focus on the control structure for the notes/beats/interface/whatever.

To be honest, I haven't been nearly as interested in synthesis as I've been in control structures for the notes lately. I just set up whatever the sound needs, then leave it be, only returning to it to look up how to hook new parameters to it. That being said, having the sound integrated with the controlling structure does have large advantages for expressive playing. Too many abstraction layers make me feel like I'm wearing oven-mittens (I never liked MIDI either).

Kas.
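(Roughly, the sporked-shred approach might look like the following untested sketch; the sweep() function, its ranges and the note timing are invented for illustration:)

Noise n => BiQuad f => ADSR e => dac;
0.99 => f.prad;
1 => f.eqzs;
e.set( 5::ms, 50::ms, 0.2, 100::ms );

// this sporked shred is simply "part of the patch":
// it supplies the control rate for the filter sweep
fun void sweep( dur step )
{
    while( true )
    {
        for( 200.0 => float freq; freq < 3000.0; 1.1 *=> freq )
        {
            freq => f.pfreq;
            step => now;
        }
    }
}
spork ~ sweep( 2::ms );

// ...while this shred only thinks about the notes
while( true )
{
    e.keyOn();
    150::ms => now;
    e.keyOff();
    350::ms => now;
}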
BTW: I can of course think of a lot of situations where this separation of instrument/score (old Csounder here) doesn't make sense, and I already use a lot of that. But sometimes I just want to encapsulate that nice bass drum sound for reuse...
Yeah, I hear you. I think you'll have to live with seeing a sporked shred that takes care of the "control rate" for the envelope as a part of the patch. It's just a few lines anyway.
The goal of course is to be able to comfortably do either: abstract/encapsulate vs. integrate (the kitchen-sink approach), or any mix in between. I think it should be totally possible to separate/combine instrument/score in any number of ways, and to customize it to your liking. So far it's possible with the class system, but parts are cumbersome (lack of #include or auto-class discovery, classpath, namespaces). But we are working on it - it's essential to provide a flexible framework to build abstractions and libraries for reuse. Best, Ge!
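(As a rough, untested illustration of the class-system route: the Kick class below, its member names and all its numbers are invented for the example, not part of any library:)

// save as e.g. kick.ck and add it to the VM before any "score" files
public class Kick
{
    Noise n => BiQuad f => ADSR e => dac;
    0.99 => f.prad;
    1 => f.eqzs;
    e.set( 1::ms, 80::ms, 0.0, 10::ms );

    // the instrument owns its own timing; the caller never sees this loop
    fun void hit()
    {
        e.keyOn();
        for( 400.0 => float freq; freq > 40.0; 0.9 *=> freq )
        {
            freq => f.pfreq;
            2::ms => now;
        }
        e.keyOff();
    }
}

A "score" file could then just do "Kick k;" and "spork ~ k.hit();" on every beat, without caring how the sound is made.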
Ge Wang wrote:
So far it's possible with the class system, but parts are cumbersome (lack of #include or auto-class discovery, classpath, namespaces). But we are working on it - it's essential to provide a flexible framework to build abstractions and libraries for reuse.
That's good to know! What do you think of my suggestion about reusing shred IDs and/or reshuffling IDs OTF? I mean, if I spork a shred on every note I quite quickly end up with IDs in the 1000 or even 10000 range, which are harder to replace/stop OTF. I would imagine it's non-trivial to implement, though...

If we could kill or replace shreds by name it would be possible to live with huge shred IDs, and I believe this is also on the TODO...

-- peace, love & harmony Atte http://www.atte.dk | quartet: http://www.anagrammer.dk http://www.atte.dk/gps | compositions: http://www.atte.dk/compositions
Remember that when you spork a shred, that operation returns a Shred, and you can get the int of that with Shred.id(). You can store that int in an intermediate variable that you can access by name. You can't access that name directly from the command line, but you could store those names in an object and have a little script that you can launch that polls your objects (public ones that can be seen outside the scope of their shred) and prints out the shred IDs you are looking for. Not totally elegant, but effective.

For your other problem, modularity of code, I solve that with the object structure. You can make a synth a public object and then load it into a running VM. Then you can load separate score files that use your synth object. I regularly do things like this:

%> chuck mysynth.ck score1.ck
%> score2.ck

(where score2 also uses the mysynth object in mysynth.ck)

Also, with ADSR, can you poll its output? I thought you would have to do something like this:

step s => ADSR env => blackhole;
1. => s.value;

Doesn't ADSR multiply its input with its current value?

Best of luck. Thanks for the multitude of questions.

--art
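(For what it's worth, ADSR does scale whatever it is fed, so driving it with a constant 1.0 and polling .last() gives the bare envelope. A minimal, untested sketch of that idea; Step's .next is used instead of .value, and the frequency mapping is arbitrary:)

Step s => ADSR e => blackhole;
1.0 => s.next;               // constant 1.0 in, so e.last() *is* the envelope
e.set( 20::ms, 200::ms, 0.5, 300::ms );

Noise n => BiQuad f => dac;
0.99 => f.prad;
1 => f.eqzs;

e.keyOn();
now + 1::second => time stop;
while( now < stop )
{
    // ADSR multiplies its input by the current envelope value
    100.0 + e.last() * 3000.0 => f.pfreq;
    1::ms => now;
}
e.keyOff();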
Kassen wrote:
and you now also want to go "ADSR e => f.pfreq;" (or something similar) like a patch cable, right?
Exactly! I still don't see why this wouldn't be possible within the current conventions. I admit that I'm coming from another way of thinking, but who isn't :-) So allow me to think as I'm used to for a minute.

If I have an ADSR connected to a blackhole, it will generate samples when time passes, right? Normally an ADSR scales an AC signal. If I input a DC of say 1 (could be hacked with a slooow sqr) to an ADSR, it should scale this according to its settings. And it should do so in free-wheeling time (between chucks to now), right? All that is needed is that I can connect this output to something like f.pfreq, and I would have a free-wheeling, simple-to-grasp CPU-eater (updated at sample rate).

Simple? Yes. Flexible? Not that much. Newbie/Nord-Modular-Moogie friendly? Very. Elegant? In its own way, yes. Sufficient? For basic stuff, yes; for the rest, no.

Sorry for bringing this pragmatic garbage into the beautiful world of chuck :-)

-- peace, love & harmony Atte http://www.atte.dk | quartet: http://www.anagrammer.dk http://www.atte.dk/gps | compositions: http://www.atte.dk/compositions
Atte, you have some good points; we'll see what Ge thinks, but when you write this:
If I input a DC of say 1 (could be hacked with a slooow sqr)...
I think you should really look up the "step" ugen. Step is like the mirror image of "My_ugen.last()" and very useful if you are battling with the sort of thing you are currently running into. From the top of my head, "step" has two important members called ".value" and ".next" which sorta kinda do the same thing, except that I meant to tell Ge that he should have a look at ".value", because I think that one called in sick a while ago and hasn't yet returned to work. I think it's in the documentation, but the VM goes all "what are you talking about?" on me with that one. Anyway, "step" is good.

For audio-rate multiplications, using multiple instances of step combined with gains set to multiply will outperform doing it manually by a factor of 3 or so in my experience (at the expense of hard-to-read code).

I would be in favour of explaining how step and .last link the control stuff to the "patching" style stuff in a separate manual section, because for me understanding that was a big moment in seeing how ChucK relates to Serge modulars (Serge only has banana plugs; Buchla uses separate plugs for control and audio).

Kas.
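(For reference, a small, untested illustration of the "gains set to multiply" trick; it assumes the usual UGen .op behaviour where chucking 3 selects multiplication of the inputs instead of summing them, which keeps the whole thing at audio rate with no loop:)

SinOsc carrier => Gain g => dac;
SinOsc lfo => g;          // second input into the same Gain
3 => g.op;                // 3 = multiply inputs rather than add them

220.0 => carrier.freq;
4.0 => lfo.freq;          // a slow oscillator, multiplied in at audio rate

5::second => now;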
I would be in favour of explaining how step and .last link the control stuff to the "patching" style stuff in a separate manual section, because for me understanding that was a big moment in seeing how ChucK relates to Serge modulars (Serge only has banana plugs; Buchla uses separate plugs for control and audio).
Good idea. The tutorial section is highly lacking at this point. I will try my best to write something up to add. If you have any code or insights that you would like to share that could help this tutorial please send them along and I will try to add them to the manual. --art
Good idea. The tutorial section is highly lacking at this point. I will try my best to write something up to add. If you have any code or insights that you would like to share that could help this tutorial please send them along and I will try to add them to the manual.
Very good. "Buchla vs. Serge" as an analogy is probably too obscure for most, even if it's basically the origin of this sort of question. Perhaps it would be good to have a better analogy. I'd be very happy to proof-read your attempt and add to it where I might have some ideas.

Right now I don't have any good, clear example code that doesn't also involve really strange stuff I was trying out myself. Perhaps it'd be good to have some example of a SinOsc used as an LFO? Maybe something involving a saw osc made with a counter and a modulo function too? We need some really simple sweeping oscs, because with those it's easy to hear what's going on, and also to demonstrate the effect of various control rates and maybe even aliasing of "cv" signals. This stuff involves both the fundamentals of simple modulated sounds and the fundamentals of computer music, so I imagine a section like that would get quite large very quickly, and it'd need some solid proof-reading before we get all kinds of strange ideas into the Princeton students....

I'll have a look at the current version of step's documentation. Not even sure whether it currently notes how step is similar to a S&H module; I think that's an interesting perspective for explaining it to people coming from a modular synth background.

Kas.
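(In that spirit, a rough, untested sketch of both suggestions: a SinOsc used as an LFO and read at control rate via .last(), plus a "saw" LFO built from a counter and a modulo; all rates and ranges are arbitrary:)

SinOsc lfo => blackhole;     // a SinOsc used purely as an LFO
0.5 => lfo.freq;

Noise n => BiQuad f => dac;
0.99 => f.prad;
1 => f.eqzs;

0 => int tick;
while( true )
{
    // sine LFO: map -1..1 onto a frequency range
    1000.0 + lfo.last() * 800.0 => float sweep;

    // saw LFO: the counter wraps every 500 control steps
    ( tick % 500 ) / 500.0 => float sawPhase;

    sweep + sawPhase * 500.0 => f.pfreq;

    tick++;
    2::ms => now;    // raise this to hear the control rate (and its aliasing)
}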
participants (4)
- Adam Tindale
- Atte André Jensen
- Ge Wang
- Kassen