Hans, Yes, you can do it more simply than that. I don't know for sure, but I think the degeneration problem is because you are ChucKing up patches dynamically but never unChucKing them, so they build on top of one another and corrupt. Not sure, but that's my guess. This may not be quite your answer, but I did implement a monome-like keyboard interface in my Digitar program. It does a crude attempt at Karplus-Strong string synthesis, which amazingly sounds like a real guitar. When you start the program it is silent; then pressing alphanumeric keys puts logic table entries into the boolean sequencer that plays the guitar. Press F1 through F4 to page the monome-like keyboard interface among the four vertical positions in the logic table, and press other function keys followed by the +/- keys to adjust things like distortion, reverb, etc. Oh, and watch the console monitor for the user interface. Enjoy! Les
On 17 Apr 2009, at 18:21, Les Hall wrote:
Yes, you can do it simpler than that. I don't know for sure but I think the degeneration problem is because you are ChucKing up patches dynamically but never unChucKing them. So they build on top of one another and corrupt. Not sure but that's my guess.
Yes, that is most surely the reason. The tricky part is knowing exactly how advanced ChucK is, knowing what has to be done by hand. It would have been cool if one could just feed those patches into the output and the program figured it out.
This may not be quite your answer, but i did implement a monome-like keyboard interface in my Digitar program. It does a crude attempt at Karplus-Strong string synthesis, which amazingly sounds like a real guitar. When you start the program it is silent, then when you press alphanumeric keys that puts logic table entries into the boolean sequencer that plays the guitar. Press F1 thru F4 to page the monome-like keyboard interface among the four vertical positions in the logic table, and press other function keys followed by the +/- keys to adjust stuff like distortion, reverb, etc. Oh, and watch the console monitor for the user interface. Enjoy!
Those instructions should perhaps be in the file. (And note that the location of the +/- keys differs on keyboards other than the US one. So if you want them on the +/- keys in all layouts, one should probably use the ASCII key function.) Though I intend to add more features later, and your input is welcome, I do not yet see how to avoid that problem with patch buildup. Hans
Hans; Yes, that is most surely the reason. The tricky part is knowing exactly how
advanced ChucK is, knowing what has to be done by hand. It would have been cool if one just fed those patches into the output, and the program figured it out.
Yes, I see. I had a quick glance at your code, and with your setup you'll end up with a lot of connected yet silent series of UGens. On top of that you'll be creating double connections, so even if your CPU could keep up you would run into clipping. The most obvious strategy might be disconnecting the UGens again at an isButtonUp() event, but that would mean cutting off the envelope's decay, which is likely undesirable. What ChucK is doing here indeed isn't very "smart", but on the bright side it is behaving according to the specs, and this behaviour can be useful in other cases. For your needs I'd suggest looking into voice cycling; there are examples about polyphony in the /midi/ directory. I know you aren't using MIDI here, but the strategy for dealing with polyphony without overloading the CPU will be useful, at least as a starting point. Give a shout if you get stuck there. Those instructions should perhaps be in the file. (And note that the
location of the +/- keys are different on other keyboards than the US. So you want them on the +/- keys in all layouts, one should probably use the ASCII key function.)
Yes, that's a good point, you are quite right. Yours, Kas,
On 17 Apr 2009, at 19:38, Kassen wrote:
Yes, that is most surely the reason. The tricky part is knowing exactly how advanced ChucK is, knowing what has to be done by hand. It would have been cool if one just fed those patches into the output, and the program figured it out.
Yes, I see. I had a quick glance at your code and with your setup you'll end up with a lot of connected yet silent series of UGens. On top of that you'll be creating double connections so even if your CPU could keep up you would run into clipping.
I think both may happen (by my testing): the CPU load may increase to the point where the delays cause the crash, but it can also crash before that.
The most obvious strategy might be disconnecting the UGens again at an isButtonUp() event but that would mean cutting off the envelope's decay which is likely undesirable.
Yes, or having fewer generators and letting the keys circulate assignment around them, which may have the same effect (which, reading ahead, I see you mention too).
What ChucK is doing here indeed isn't very "smart" but on the bright side it is behaving according to the specs and this behaviour can be useful in other cases. For your needs I'd suggest looking into voice cycling; there are examples about polyphony in the /midi/ directory. I know you aren't using MIDI here but the strategy for dealing with polyphony without overloading the CPU will be useful, at least as a starting point.
In the case of MIDI, it may be necessary, due to a limited number of channels. Scala uses complex algorithms for that. But on a first try, I was hoping to avoid that.
Give a shout if you get stuck there.
Sure. Hans
Hans; I think both may happen (by my testing): the CPU load may increase to the
point where the delays cause the crash, but also before.
Yes, I wouldn't be surprised if that were true. Yes, or having fewer generators, and letting the keys circulate assignment
around them, which may have the same effect (which, when I read ahead, see that you are mentioning, too).
Yes, that would come down to voice cycling as well. There are a lot of possible strategies for that, depending on taste and how complex you are willing to get as well as how sensitive you are to inappropriate voice-stealing. In the case of MIDI, it may be necessary, due to a limited number of
channels. Scala uses complex algorithms for that.
I don't think either MIDI or Scala affects this matter all that much; as I see it the core of the issue is that CPU resources are limited so we need to conserve them. MIDI could still generate 128 * 16 concurrent notes if you really wanted to and such amounts of voices will cause issues in any realistic and practical system. I may be misunderstanding your comment here. Kas.
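For reference, the round-robin scheme Kassen mentions above can be sketched in a few lines of ChucK. All names, voice counts, and envelope settings here are my own illustration, not taken from the /midi/ examples:

```chuck
// a fixed pool of voices, allocated round-robin (the simplest cycling scheme)
4 => int POLY;

SinOsc osc[POLY];
ADSR env[POLY];

// connect every voice up front; no chains are created at play time,
// so the CPU load stays constant no matter how many keys are pressed
for( 0 => int i; i < POLY; i++ )
{
    osc[i] => env[i] => dac;
    env[i].set( 5::ms, 50::ms, .5, 300::ms );
    .2 => osc[i].gain;
}

0 => int next;  // index of the next voice to (re)use

fun void noteOn( float freq )
{
    freq => osc[next].freq;
    env[next].keyOn();
    ( next + 1 ) % POLY => next;  // cycle; the oldest voice gets stolen first
}

// quick test: an endless arpeggio reusing the same four voices forever
while( true )
{
    noteOn( Std.mtof( 60 + Math.random2( 0, 12 ) ) );
    250::ms => now;
}
```

The trade-off is exactly the one discussed here: pressing a fifth key steals the oldest still-sounding voice, cutting its decay short.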
On 17 Apr 2009, at 20:11, Kassen wrote:
Yes, or having fewer generators, and letting the keys circulate assignment around them, which may have the same effect (which, when I read ahead, see that you are mentioning, too).
Yes, that would come down to voice cycling as well. There are a lot of possible strategies for that, depending on taste and how complex you are willing to get as well as how sensitive you are to inappropriate voice-stealing.
Voice-stealing sounds awful, but I now mainly want to focus on pitches and keyboard layouts.
In the case of MIDI, it may be necessary, due to a limited number of channels. Scala uses complex algorithms for that.
I don't think either MIDI or Scala affects this matter all that much; as I see it the core of the issue is that CPU resources are limited so we need to conserve them. MIDI could still generate 128 * 16 concurrent notes if you really wanted to and such amounts of voices will cause issues in any realistic and practical system.
I may be misunderstanding your comment here.
The methods are similar, but the causes differ: Scala needs a lot of MIDI channels for the microtonality (and by default uses all but one). So when playing on the layout I gave, one uses several different channels (or so is my impression). Hans
Hans; The methods are similar, but the causes differ: Scala needs a lot of MIDI
channels for the microtonality (and by default uses all but one). So when playing on the layout I gave, one uses several different channels (or so is my impression).
Ah, I get it. In this case though we are dealing with input from the computer keyboard which we send internally to a set of voices in ChucK so we aren't limited by MIDI's rather out-dated concepts. What we do need is some way to translate key numbers to pitches. I'd do that with a simple array of floats mapping key-numbers to pitches. I'd likely deal with pitchbend using some sort of scaling on those values, if I needed pitchbend, that could then get rid of any issues arising from non-equal spacing between notes. To clarify; what is interesting about the polyphonic examples in the MIDI dir is how they deal with polyphony, not the MIDI as such.
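The float-array mapping described above might look something like the following sketch; the table size, base pitch, and 12-tET fill are placeholders (a microtonal tuning would simply write different values into the same array):

```chuck
// map raw key numbers to pitches; fill with whatever tuning you like
float pitch[256];

// placeholder fill from 12-tET via Std.mtof; replace with your own tuning table
for( 0 => int i; i < 256; i++ )
    Std.mtof( 36 + i % 64 ) => pitch[i];

// pitchbend as a simple scaling of the looked-up values, which works
// regardless of non-equal spacing between the notes in the table
1.0 => float bend;

fun float lookup( int key )
{
    return pitch[key] * bend;
}
```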
From your other mail;
It strikes me that ChucK may not be good for mixing key numbers with the translated "ASCII" numbers. It seems one has to make a choice of which to use. When mixing, one may want to pick up the event first and look at both the key and the (eventual) Unicode number.
Well, both methods can be used in a single set of keyboard-parsing rules, though that would mean that we'd need to make sure we're not mixing up all the numbers that will be flying around. We might, for example, look at the key number only when there is no character associated with the key that was pressed. Many other strategies are possible. ChucK may be built on decades of knowledge about sound and computation, but sadly we are also stuck with decades of legacy and semi-standards that occasionally make our life more annoying than it would ideally be. Personally I hate writing tens if not hundreds of lines parsing the keyboard, because it's so boring and there are so many nearly random numbers, yet it all needs to be exactly right. I'm terribly sorry, but I don't think there is anything I can do for you there aside from showing understanding. Yours, Kas.
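A minimal shape for such a rule set in ChucK, using the HidMsg fields: `ascii` when the key carries a character, falling back to the raw key number `which` when it does not. This fallback policy is only one of the possible strategies mentioned above:

```chuck
Hid kb;
HidMsg msg;

// open the first keyboard (exit if none is found)
if( !kb.openKeyboard( 0 ) ) me.exit();

while( true )
{
    kb => now;
    while( kb.recv( msg ) )
    {
        if( msg.isButtonDown() )
        {
            if( msg.ascii > 0 )
            {
                // the key has a character: parse by layout-independent ASCII
                <<< "ascii:", msg.ascii >>>;
            }
            else
            {
                // no character (function keys etc.): use the raw key number
                <<< "key number:", msg.which >>>;
            }
        }
    }
}
```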
smelt.cs.princeton.edu has some nice examples for handling this kind of stuff... dt On Apr 17, 2009, at 2:45 PM, Kassen wrote:
Hans;
The methods are similar, but the causes differ: Scala needs a lot of MIDI channels for the microtonality (and by default uses all but one). So when playing on the layout I gave, one uses several different channels (or so is my impression).
Ah, I get it. In this case though we are dealing with input from the computer keyboard which we send internally to a set of voices in ChucK so we aren't limited by MIDI's rather out-dated concepts.
What we do need is some way to translate key numbers to pitches. I'd do that with a simple array of floats mapping key-numbers to pitches. I'd likely deal with pitchbend using some sort of scaling on those values, if I needed pitchbend, that could then get rid of any issues arising from non-equal spacing between notes.
To clarify; what is interesting about the polyphonic examples in the MIDI dir is how they deal with polyphony, not the MIDI as such.
From your other mail;
It strikes me that ChucK may not be good for mixing key numbers with the translated "ASCII" numbers. It seems one has to make a choice of which to use. When mixing, one may want to pick up the event first and look at both the key and the (eventual) Unicode number.
Well, both methods can be used in a single set of keyboard parsing rules though that would mean that we'd need to make sure we're not mixing up all the numbers that will be flying around. We might, for example, look at the key number only when there is no character associated with the key that was pressed. Many other strategies are possible.
ChucK may be built on decades of knowledge about sound and computation but sadly we are also stuck with decades of legacy and semi-standards that occasionally make our life more annoying than it would ideally be. Personally I hate writing tens if not hundreds of lines parsing the keyboard because it's so boring and there are so many nearly random numbers yet it all needs to be exactly right. I'm terribly sorry but I don't think there is anything I can do for you there aside from showing understanding.
Yours, Kas. _______________________________________________ chuck-users mailing list chuck-users@lists.cs.princeton.edu https://lists.cs.princeton.edu/mailman/listinfo/chuck-users
On 17 Apr 2009, at 20:45, Kassen wrote:
In this case though we are dealing with input from the computer keyboard which we send internally to a set of voices in ChucK so we aren't limited by MIDI's rather out-dated concepts.
Exactly. If I could, I would bypass all the MIDI stuff. The organ patch is fully sufficient for my purposes, so I already have at least a minimum of sound patches.
What we do need is some way to translate key numbers to pitches. I'd do that with a simple array of floats mapping key-numbers to pitches. I'd likely deal with pitchbend using some sort of scaling on those values, if I needed pitchbend, that could then get rid of any issues arising from non-equal spacing between notes.
To clarify; what is interesting about the polyphonic examples in the MIDI dir is how they deal with polyphony, not the MIDI as such.
Yes, I understood that. Thanks.
From your other mail;
It strikes me that ChucK may not be good for mixing key numbers with the translated "ASCII" numbers. It seems one has to make a choice of which to use. When mixing, one may want to pick up the event first and look at both the key and the (eventual) Unicode number.
Well, both methods can be used in a single set of keyboard parsing rules though that would mean that we'd need to make sure we're not mixing up all the numbers that will be flying around. We might, for example, look at the key number only when there is no character associated with the key that was pressed. Many other strategies are possible.
The problem is that it might not be possible if some keys are used for playing and others for manipulating data.
ChucK may be built on decades of knowledge about sound and computation but sadly we are also stuck with decades of legacy and semi-standards that occasionally make our life more annoying than it would ideally be. Personally I hate writing tens if not hundreds of lines parsing the keyboard because it's so boring and there are so many nearly random numbers yet it all needs to be exactly right. I'm terribly sorry but I don't think there is anything I can do for you there aside from showing understanding.
Some things are similar to a Swedish Electronic Music Studio computer from 1970, which sported digitally controlled analog oscillators and filters. But as for the voice-assignment problem, perhaps there might be a need for some structure handling it. Hope for the future :-). Hans
Hans; The problem is that it might not be possible, if some keys are used for
playing and others for manipulating data.
I don't think there will be an issue with that. Your program would have its data and its structure for playing sounds, and both would be affected by a single shred that would parse the keyboard. That sort of thing is quite possible and a fairly normal type of structure for a ChucK program.
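A toy version of that structure: one sporked shred parses the keyboard and touches both the shared data and the sound. Everything here (the scale array, the key assignments, the ASCII codes) is invented purely for illustration:

```chuck
// shared program state, visible to everything below
[ 60, 62, 64, 65 ] @=> int scale[];   // "data", editable from the keyboard
SinOsc voice => dac;                  // "sound", played from the keyboard
.3 => voice.gain;

fun void keyboardShred()
{
    Hid kb;
    HidMsg msg;
    if( !kb.openKeyboard( 0 ) ) me.exit();

    while( true )
    {
        kb => now;
        while( kb.recv( msg ) )
        {
            if( !msg.isButtonDown() ) continue;

            // keys '1'..'4' (ASCII 49..52) play notes from the shared scale...
            if( msg.ascii >= 49 && msg.ascii <= 52 )
                Std.mtof( scale[ msg.ascii - 49 ] ) => voice.freq;
            // ...while 't' (ASCII 116) manipulates the data instead
            else if( msg.ascii == 116 )
                2 +=> scale[0];  // transpose the first degree up a whole tone
        }
    }
}

spork ~ keyboardShred();
while( true ) 1::second => now;  // keep the VM alive
```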
But as for the voice assignment problem, perhaps there might be the need of some structure handling it. Hope for the future :-).
I'm certain you can do way better than hope. You'll need to define for yourself what voice assignment means to you here. From there on it's a matter of translating that definition to formal code. Personally I think it's a good thing that ChucK has no inherent idea of what voice assignment and cycling mean as that means we get to figure out what we need in our specific situation. The developers can't foresee what any individual programmer might want or need (I'm quite certain they are already very aware of that...)
I was only hoping not having to program so much...
Sorry :¬). I fear you're going to have to program a bit. It's not all *that* much and 90% is coming up with a good plan. Yours, Kas.
On 17 Apr 2009, at 22:22, Kassen wrote:
The problem is that it might not be possible, if some keys are used for playing and other for manipulating data.
I don't think there will be an issue with that. Your program would have its data and its structure for playing sounds and both would be affected by a single shred that would parse the keyboard. That sort of thing is quite possible and a fairly normal type of structure for a ChucK program.
ASCII would not be needed at this point. And I could think of workarounds: using escape sequences or multiple keyboards. I was thinking more in general: it's not good having these limitations.
But as for the voice assignment problem, perhaps there might be the need of some structure handling it. Hope for the future :-).
I'm certain you can do way better than hope. You'll need to define for yourself what voice assignment means to you here. From there on it's a matter of translating that definition to formal code. Personally I think it's a good thing that ChucK has no inherent idea of what voice assignment and cycling mean as that means we get to figure out what we need in our specific situation. The developers can't foresee what any individual programmer might want or need (I'm quite certain they are already very aware of that...)
To begin with, support might be in a combination of library and program. It's more that it's nice to have a feature like, say, GC (garbage collection) rather than having to implement it on your own. If there were a simple way to just add generators and have the program sort out which ones were needed for the sound output, would that not be great?
I was only hoping not having to program so much...
Sorry :¬).
I fear you're going to have to program a bit. It's not all *that* much and 90% is coming up with a good plan.
I have done a lot of programming in the past. But I think a programming language that forces you to find workarounds rather than letting you focus on the task ahead is flawed. Hans
On 17 Apr 2009, at 22:22, Kassen wrote:
Personally I think it's a good thing that ChucK has no inherent idea of what voice assignment and cycling mean as that means we get to figure out what we need in our specific situation. The developers can't foresee what any individual programmer might want or need (I'm quite certain they are already very aware of that...)
I have checked a bit with 'top -uR' what happens: each new key, which initiates a sound generator, adds about 3 times as much CPU load as the full program at startup, even after it has been allowed to settle down and is no longer producing sound. So there is a quick buildup from these idle generators. Such a resource-consuming problem might be fixed if each sound generator could put itself to sleep, say below some cutoff level. Hans
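ChucK has no such automatic cutoff, but a per-note shred can approximate the "put itself to sleep" behaviour by disconnecting its own chain once the envelope has finished. A sketch with arbitrary envelope and hold times:

```chuck
fun void playNote( float freq )
{
    // build the per-note chain
    SinOsc s => ADSR e => dac;
    e.set( 5::ms, 50::ms, .3, 300::ms );
    freq => s.freq;

    e.keyOn();
    500::ms => now;   // hold the note
    e.keyOff();
    300::ms => now;   // let the release finish

    // disconnect, so the now-silent chain stops consuming CPU;
    // once the shred exits, the UGens can be reclaimed
    e =< dac;
}

spork ~ playNote( 440.0 );
spork ~ playNote( 660.0 );
1::second => now;  // keep the VM alive until the notes are done
```

This avoids the buildup of idle generators at the cost of a fixed note duration per spork; driving keyOff() from an isButtonUp() event instead would make the hold time interactive.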
On 17 Apr 2009, at 19:38, Kassen wrote:
... (And note that the location of the +/- keys are different on other keyboards than the US. So you want them on the +/- keys in all layouts, one should probably use the ASCII key function.)
Yes, that's a good point, you are quite right.
Even the ASCII keys may differ from the US layout, as with the German keyboard layout, which swaps "z" and "y". Hans
Have you thought about having a voice map correspond to your key map?
As in, have Db and C# assigned to the same voice, because presumably
if you're using E31 you're looking for pure consonances and not
extreme dissonances. That way you would only need 12 voices (if each
enharmonic pitch shared a voice with its other enharmonic pitch)
instead of 256. It would limit your playing somewhat, but it all
depends on what you're going for.
I'm not a very good ChucKist, and I didn't really fully understand
your code, but I do understand the garbage collection problem. Also,
I'm a dedicated microtonalist, which is one reason why I'm following
this thread. Good luck!
Andrew
On Fri, Apr 17, 2009 at 11:12 AM, Hans Aberg
On 17 Apr 2009, at 19:38, Kassen wrote:
... (And note that the location of the +/- keys are different on other keyboards than the US. So you want them on the +/- keys in all layouts, one should probably use the ASCII key function.)
Yes, that's a good point, you are quite right.
Even the ASCII keys may differ from the US, like the German keyboard layout, which swaps "z" and "y".
Hans
On 17 Apr 2009, at 20:38, Andrew C. Smith wrote:
Have you thought about having a voice map correspond to your key map? As in, have Db and C# assigned to the same voice, because presumably if you're using E31 you're looking for pure consonances and not extreme dissonances.
This is a good idea, though in normal playing one moves between scale degrees, that is, pitches that differ only by a number of flats or sharps. (One might define a scale degree d = p + q of the note p M + q m, where M is a major and m a minor second. Also, scale degrees are important in much music: 12-tone atonal music, I think, might be described as having 12 scale degrees per octave instead of 7.)
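That note representation maps directly onto E31 if one takes the usual meantone reading, where the major second M spans 5 steps of 31 and the minor second m spans 3 (so an octave is 5M + 2m = 31 steps). A sketch; the reference pitch is my own choice:

```chuck
// frequency of the note p*M + q*m in 31-tone equal temperament,
// counting steps up from a reference pitch (here middle C, ~261.63 Hz)
261.626 => float c4;

fun float e31freq( int p, int q )
{
    5 * p + 3 * q => int steps;  // M = 5 steps, m = 3 steps in E31
    return c4 * Math.pow( 2.0, steps / 31.0 );
}

// the scale degree is simply d = p + q
fun int degree( int p, int q ) { return p + q; }

// sanity check: an octave is 5M + 2m, exactly double the reference
<<< e31freq( 5, 2 ) >>>;  // ~523.25 Hz
```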
That way you would only need 12 voices (if each enharmonic pitch shared a voice with its other enharmonic pitch) instead of 256. It would limit your playing somewhat, but it all depends on what you're going for.
But one cannot be sure: not even Mozart follows that convention, though it holds in most music. So one needs extra generators.
I'm not a very good ChucKist, and I didn't really fully understand your code, but I do understand the garbage collection problem.
So you might be the man on this problem, then :-).
Also, I'm a dedicated microtonalist, which is one reason why I'm following this thread. Good luck!
That is one reason I am interested in it. I might move ahead to oriental-style intermediate pitches. Then making a wholly new key map might be too difficult to learn to play, so I want to somehow make the key map deformable so that those scales fit. Hans
On 17 Apr 2009, at 20:38, Andrew C. Smith wrote:
As in, have Db and C# assigned to the same voice, because presumably if you're using E31 you're looking for pure consonances and not extreme dissonances. That way you would only need 12 voices (if each enharmonic pitch shared a voice with its other enharmonic pitch)
Traditionally such chromaticisms may have been notated by simply taking 5 enharmonic accidentals to make up a total of 12 pitches (as in meantone keyboards). And in the program, they might be reassigned. But in some cases one may not want to. Take the "Harry Lime Theme" from "The Third Man" movie, which starts (written) G G# A G# A. One might reason that the first G# is a successor to G, and then it might be better to change scale degree, writing it Ab. Then G# and Ab would appear close in time: G Ab A G# A. And in E31 one can make glissando-like sequences like G Abb G# Ab G## A. So in principle, such situations might appear. Hans
participants (5)
-
Andrew C. Smith
-
Daniel Trueman
-
Hans Aberg
-
Kassen
-
Les Hall