polyphony (what happens after note off)? was: Killing thread from without
Sorry, this was meant to go to the list, not just to Hans.
---------- Forwarded message ----------
From: Kassen
Since there are no such limits in ChucK, I want in principle to enable the same generator to do note-on effects emulating real instruments.
Well, I'd argue that modern computers do have the same limitation that hardware synths have; a finite amount of CPU resources. To complicate matters further; there are big differences between how various acoustical instruments deal with "voices"; a guitar is rather different from a harpsichord there even when the way of creating the sound is quite similar.
You mean apart from scheduling decay cutoffs and the like.
Yes, well, those are implicit in UGen behaviour and not so much "scheduled" as such, I feel.
It seems that the problem folks try to program around is that ChucK returns references to class objects without having a root-tracing mechanism that could be used for a GC.
Sorry, that's beyond my expertise. I don't think we lack devices that "could be used for GC" as GC is being implemented right now. The issue, as I understand it, is that combining GC with realtime performance is tricky. Beyond that one of the DEV's will have to help you.
ChucK probably just gets its threads from the OS rather than implementing, in effect, an OS on top of the one already there, so that determines what it should be capable of.
Sorry, that's not quite right. ChucK runs in a single thread from the OS perspective, and several "shreds" may run on the ChucK VM. The purpose of shreds is mainly to determine execution order in a deterministic yet convenient manner. We can't depend on the OS for that, as not all OSes that ChucK runs on are capable of realtime performance (why MS and Apple don't implement this at least as an option is beyond me, but they don't).

In short, there is a very real difference between "shreds" running in the ChucK VM and "threads" running on the OS (and supported by cores, etc.); we don't just use those funny words because they are cute, even when some of us may enjoy such exercises in creative linguistics. That means that ChucK shred timing should be completely deterministic and extremely precise, and when it's not, that's a bug. It also means that for chucking, multi-core CPUs won't do you much good.

While it may sound like a complicated extra layer to implement "a new OS" on top of systems like Linux, OSX or Windows, it does mean that we can share code across OSes with little regard for who runs what; I think that's quite nice.

For a more in-depth treatment of the how and why of ChucK's internal structure I'd like to refer you to this: http://www.cs.princeton.edu/~gewang/thesis.pdf

Hope that helps,
Kas.
On 26 Apr 2009, at 22:04, Kassen wrote:
Since there are no such limits in ChucK, I want in principle to enable the same generator to do note-on effects emulating real instruments.
Well, I'd argue that modern computers do have the same limitation that hardware synths have; a finite amount of CPU resources. To complicate matters further; there are big differences between how various acoustical instruments deal with "voices"; a guitar is rather different from a harpsichord there even when the way of creating the sound is quite similar.
Though I mainly had keyboard instruments in mind; there it is quite uncommon to have a large number of generators on the same pitch set off in rapid succession. You can try that effect with two generators on my keyboard layout by playing a tremolo between the tuning reference key and the key that sets it off. It doesn't sound bad, but resetting the generator might be somewhat more distinct.
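For contrast with ChucK's "no such limits", the finite-voice behaviour of hardware synths mentioned earlier can be sketched as a toy allocator. Everything here (the `Voice` and `VoicePool` names, the pool size, the oldest-first stealing policy) is a hypothetical illustration of one common scheme, not ChucK code or any real synth's firmware:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Toy fixed-pool voice allocator: retrigger a voice already sounding the
// requested pitch, else take a free voice, else steal the oldest one.
struct Voice {
    int      note = -1;   // MIDI note currently assigned
    uint64_t age  = 0;    // allocation timestamp, for oldest-first stealing
    bool     active = false;
};

class VoicePool {
    std::array<Voice, 4> voices;  // finite resources, as on hardware
    uint64_t clock = 0;
public:
    // Returns the index of the voice assigned to this note-on.
    int noteOn(int note) {
        ++clock;
        // 1) Retrigger: reuse a voice already sounding this pitch.
        for (std::size_t i = 0; i < voices.size(); ++i)
            if (voices[i].active && voices[i].note == note) {
                voices[i].age = clock;
                return (int)i;
            }
        // 2) Otherwise take a free voice.
        for (std::size_t i = 0; i < voices.size(); ++i)
            if (!voices[i].active) {
                voices[i] = {note, clock, true};
                return (int)i;
            }
        // 3) All busy: steal the oldest voice (one policy of many).
        std::size_t oldest = 0;
        for (std::size_t i = 1; i < voices.size(); ++i)
            if (voices[i].age < voices[oldest].age) oldest = i;
        voices[oldest] = {note, clock, true};
        return (int)oldest;
    }
    void noteOff(int note) {
        for (auto& v : voices)
            if (v.active && v.note == note) v.active = false;
    }
};
```

The retrigger branch is what makes rapid repetition on the same pitch reuse one generator rather than stacking new ones, which is roughly the keyboard behaviour described above.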
You mean apart from scheduling decay cutoffs and the like.
Yes, well, those are implicit in UGen behaviour and not so much "scheduled" as such, I feel.
If you want to automate that, then the UGen must be able to generate a signal to do the disconnect. The future scheduling is just a workaround in the absence of that.
It seems that the problem folks try to program around is that ChucK returns references to class objects without having a root-tracing mechanism that could be used for a GC.
Sorry, that's beyond my expertise.
It seems ChucK avoids C++ "automatic elements" in favor of returning references. This is what is causing the cleanup problem, because when the function has finished, there is no way to tell which references are used and which are not. One way to do that is to trace all references from the function stack and global objects. This way, the unused elements can be removed.
I don't think we lack devices that "could be used for GC" as GC is being implemented right now. The issue, as I understand it, is that combining GC with realtime performance is tricky. Beyond that one of the DEV's will have to help you.
Yes. A simple tracing GC is a two-space copier: have two memory chunks, allocate new memory in one of them by moving a pointer, and when it fills up, trace the roots and copy the live data over to the unused space. The problem is that it runs fast until GC time, when a lot of copying is done. So it needs to run in a thread, and do a little GC now and then, when the CPU allows it.
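The two-space copier described above can be sketched in miniature. This is a toy Cheney-style collector over two-field cells, illustrative only (it is not ChucK's collector, and a real one would also handle varying object sizes and run incrementally, as the paragraph suggests):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Cell {                  // a two-field heap object
    Cell* left;  Cell* right;  // child pointers (nullptr = leaf)
    Cell* forward = nullptr;   // forwarding pointer set during copying
    int   value   = 0;
};

class Semispace {
    std::vector<Cell> a, b;    // the two memory chunks
    std::vector<Cell>* from = &a, * to = &b;
    std::size_t next = 0;      // bump-pointer allocation index
public:
    explicit Semispace(std::size_t cells) : a(cells), b(cells) {}

    // Allocation is just moving a pointer, as described above.
    Cell* alloc(int v, Cell* l = nullptr, Cell* r = nullptr) {
        assert(next < from->size() && "space full: run collect() first");
        Cell* c = &(*from)[next++];
        *c = {l, r, nullptr, v};
        return c;
    }

    // Copy one object into to-space, leaving a forwarding pointer so
    // shared (or cyclic) structure is copied only once.
    Cell* evacuate(Cell* c, std::size_t& top) {
        if (!c) return nullptr;
        if (c->forward) return c->forward;   // already moved
        Cell* d = &(*to)[top++];
        *d = {c->left, c->right, nullptr, c->value};
        c->forward = d;
        return d;
    }

    // Trace the roots, copy the live data over, then flip the spaces.
    void collect(std::vector<Cell*>& roots) {
        std::size_t scan = 0, top = 0;
        for (auto& r : roots) r = evacuate(r, top);
        while (scan < top) {                 // Cheney scan loop
            Cell& c = (*to)[scan++];
            c.left  = evacuate(c.left,  top);
            c.right = evacuate(c.right, top);
        }
        std::swap(from, to);
        next = top;          // dead cells are simply left behind
    }
    std::size_t live() const { return next; }
};
```

Note that the cost of `collect()` is proportional to the live data, not the garbage, which is exactly why it "runs fast until GC time" and then does all the copying at once.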
ChucK probably just gets its threads from the OS rather than implementing, in effect, an OS on top of the one already there, so that determines what it should be capable of.
Sorry, that's not quite right. ChucK runs in a single thread from the OS perspective, and several "shreds" may run on the ChucK VM. The purpose of shreds is mainly to determine execution order in a deterministic yet convenient manner. We can't depend on the OS for that, as not all OSes that ChucK runs on are capable of realtime performance (why MS and Apple don't implement this at least as an option is beyond me, but they don't).
They both use the Mach kernel, I think (at least Mac OS X does), which uses preemptive multitasking. Mac OS X may still have support for cooperative multitasking, where each program must release time to the others - I have a vague memory that that may be better for time-critical situations, though a chore from the programming point of view.
In short; there is a very real difference between "shreds" running in the ChucK VM and "threads" running on the OS (and supported by cores, etc); we don't just use those funny words because they are cute, even when some of us may enjoy such exercises in creative linguistics.
In short that means that ChucK shred timing should be completely deterministic and extremely precise, and when it's not, that's a bug. It also means that for chucking, multi-core CPUs won't do you much good.
Aha, so you may get a timing problem there.
While it may sound like a complicated extra layer to implement "a new OS"
Sorry for exaggerating :-).
on top of systems like Linux, OSX or Windows it does mean that we can share code across OS's with little regard for who runs what; I think that's quite nice.
A VM, though, usually means overhead.
For a more in-depth treatment of the how and why of ChucK's internal structure I'd like to refer you to this: http://www.cs.princeton.edu/~gewang/thesis.pdf
Hope that helps,
Yes, thank you. That timing aspect of threads (or shreds) is important. Hans
Hans;
If you want to automate that, then the UGen must be able to generate a signal to do the disconnect. The future scheduling is just a workaround in the absence of that.
While I can see your perspective, I think UGens mainly generate units and don't do much else, and that that's nice because it's clear. I could imagine optimising the STKInstruments so they act as though they were set to .op(0) after the .noteOff() decay ran out, but that's an optimisation issue and not a ChucK syntax issue.
It seems ChucK avoids C++ "automatic elements" in favor of returning references. This is what is causing the cleanup problem, because when the function has finished, there is no way to tell which references are used and which are not. One way to do that is to trace all references from the function stack and global objects. This way, the unused elements can be removed.
Yes, I think we'll get a method based on reference counting.
They both use the Mach kernel, I think (at least Mac OS X does), which uses preemptive multitasking. Mac OS X may still have support for cooperative multitasking, where each program must release time to the others - I have a vague memory that that may be better for time-critical situations, though a chore from the programming point of view.
Yes, the concerns are quite different; there are good reasons why business users aren't very interested in investing in the Linux RT kernel variants either. (or so I hear, most of those reasons are way beyond me but that's fine as I don't design OS's)
In short that means that ChucK shred timing should be completely deterministic and extremely precise, and when it's not, that's a bug. It also means that for chucking, multi-core CPUs won't do you much good.
Aha, so you may get a timing problem there.
Well, the ChucK VM runs as a single thread. As long as that thread gets enough CPU to do its calculations by the time it needs to turn in a buffer's worth of samples, we should be fine, timing-wise. For better or worse we are fairly independent from the rest of the OS and from the hardware, at least as long as the CPU (or the core assigned to us) holds. Distributing something like ChucK over multiple CPUs is non-trivial.
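One way to picture how a single OS thread can give many "shreds" deterministic timing is a time-ordered run queue: pending shreds run in wake-time order, with creation order breaking ties, before the audio for a buffer is considered done. The sketch below is a hypothetical illustration of that idea in C++ (the `Vm`/`Shred` names and the API are invented; this is not ChucK's actual implementation):

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Shred {
    uint64_t wakeTime;                  // absolute time in samples
    uint64_t id;                        // tie-breaker: earlier shreds run first
    std::function<void(uint64_t)> run;  // body, invoked at wakeTime
};

struct Later {  // min-heap ordering: by time, then by creation order
    bool operator()(const Shred& x, const Shred& y) const {
        return x.wakeTime != y.wakeTime ? x.wakeTime > y.wakeTime
                                        : x.id > y.id;
    }
};

class Vm {
    std::priority_queue<Shred, std::vector<Shred>, Later> ready;
    uint64_t now = 0, nextId = 0;
public:
    void spork(uint64_t at, std::function<void(uint64_t)> f) {
        ready.push({at, nextId++, std::move(f)});
    }
    // Run every shred whose wake time falls before `end` (e.g. the end of
    // the current audio buffer). Because ordering depends only on the
    // queue, not on OS thread scheduling, the result is deterministic.
    void runUntil(uint64_t end) {
        while (!ready.empty() && ready.top().wakeTime < end) {
            Shred s = ready.top();
            ready.pop();
            now = s.wakeTime;
            s.run(now);
        }
        now = end;
    }
    uint64_t time() const { return now; }
};
```

The key property is the one Kassen describes: as long as `runUntil` finishes before the buffer deadline, shred timing is exact regardless of how the OS schedules the single host thread.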
A VM, though, usually means overhead.
Yes. However, for us here that's worthwhile as we have a VM that shares memory space with the parser and compiler. This should -hopefully- lead to the ability to update running code in ways that plain compiled languages aren't able to do. With some trickery with public classes we can already do some of that. There has been some speculation about stand-alone compiled ChucK programs, there is no reason why we couldn't have those in the future but personally I'm still more interested in more interaction with the VM.
Yes, thank you. That timing aspect of threads (or shreds) is important.
Yes. It's quite essential. IMHO Ge's thesis is one of the best resources we have for understanding the how and why of ChucK's architecture in detail (there actually is a method to the madness, you see...). It's one of the more interesting aspects to chucking and you seem to have run into it quite early on. Maybe we should archive the last few days worth of posts labeled as "how to jump in at the deep end" :¬). I think we covered nearly all of it now. Yours, Kas.
On 26 Apr 2009, at 23:47, Kassen wrote:
If you want to automate that, then the UGen must be able to generate a signal to do the disconnect. The future scheduling is just a workaround in the absence of that.
While I can see your perspective, I think UGens mainly generate units and don't do much else, and that that's nice because it's clear. I could imagine optimising the STKInstruments so they act as though they were set to .op(0) after the .noteOff() decay ran out, but that's an optimisation issue and not a ChucK syntax issue.
The UGen could be extended so that those that wish to can send a special generator-terminator event. Then, on a more basic level, it could be used to disconnect by hand in the circumstances where that is needed. But one could also permit calling a special handler (built into ChucK) that disconnects the generator; one would then still need some construct to indicate which ones. Such constructs would be backwards compatible in the sense that no existing UGen would be required to have these capabilities.
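Hans's proposal can be sketched as a generator interface with a self-reported "finished" signal, plus a mixer that disconnects finished generators automatically. All names here (`Gen`, `Decay`, `Mixer`) are hypothetical; this is a toy model of the idea, not real ChucK API:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct Gen {
    virtual float tick() = 0;           // produce one sample
    virtual bool finished() const = 0;  // the proposed "terminator" signal
    virtual ~Gen() = default;
};

// A decaying envelope that reports itself finished once inaudible.
struct Decay : Gen {
    float level;
    explicit Decay(float start) : level(start) {}
    float tick() override { return level *= 0.5f; }
    bool finished() const override { return level < 1e-3f; }
};

struct Mixer {
    std::vector<std::unique_ptr<Gen>> inputs;
    float tick() {
        float sum = 0;
        for (auto& g : inputs) sum += g->tick();
        // Disconnect (and free) every generator that signalled done, so
        // no future-dated manual disconnect needs to be scheduled.
        inputs.erase(
            std::remove_if(inputs.begin(), inputs.end(),
                           [](const std::unique_ptr<Gen>& g) {
                               return g->finished();
                           }),
            inputs.end());
        return sum;
    }
};
```

Note that existing generators without a meaningful end-of-life could simply always return false from `finished()`, which is the backwards compatibility Hans mentions.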
It seems ChucK avoids C++ "automatic elements" in favor of returning references. This is what is causing the cleanup problem, because when the function has finished, there is no way to tell which references are used and which are not. One way to do that is to trace all references from the function stack and global objects. This way, the unused elements can be removed.
Yes, I think we'll get a method based on reference counting.
That is about the only method one can implement in C++ (if that is what is used to implement ChucK), and it is not so difficult to implement (I have used it in my own computer languages). It is supposed to be slow relative to other GC methods and cannot remove reference loops, but it is better than nothing.
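The reference-loop limitation mentioned above can be seen directly with the reference counting C++ itself ships, std::shared_ptr; a weak reference is the standard way to break such a loop. The `Node` type and the `alive` counter below are just instrumentation for the demonstration:

```cpp
#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // strong reference: keeps its target alive
    std::weak_ptr<Node>   back;  // weak reference: does not keep it alive
    static int alive;            // instrumentation: live-node count
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

void leakyCycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;  // strong cycle: counts never reach zero
}                 // a and b leak here

void safeCycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->back = a;  // weak back-edge: no strong cycle
}                 // both nodes freed here
```

A tracing collector, by contrast, would reclaim the first cycle too, which is the trade-off being discussed in this thread.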
They both use the Mach kernel, I think (at least Mac OS X does), which uses preemptive multitasking. Mac OS X may still have support for cooperative multitasking, where each program must release time to the others - I have a vague memory that that may be better for time-critical situations, though a chore from the programming point of view.
Yes, the concerns are quite different; there are good reasons why business users aren't very interested in investing in the Linux RT kernel variants either. (or so I hear, most of those reasons are way beyond me but that's fine as I don't design OS's)
It is hard to write a Unix-style kernel that way because, for security reasons, many low-level lookups may have to be done, making it slow. I think there are some discussions about that on the Wikipedia GNU Hurd page: http://en.wikipedia.org/wiki/Hurd
There has been some speculation about stand-alone compiled ChucK programs, there is no reason why we couldn't have those in the future but personally I'm still more interested in more interaction with the VM.
You might check the difference between the Haskell <http://haskell.org/> programs Hugs, an interpreter, and GHCi, an interactive compiler. Mostly, I prefer Hugs, but the ability to compile and get a faster program might be good too.
Yes, thank you. That timing aspect of threads (or shreds) is important.
Yes. It's quite essential. IMHO Ge's thesis is one of the best resources we have for understanding the how and why of ChucK's architecture in detail (there actually is a method to the madness, you see...). It's one of the more interesting aspects to chucking and you seem to have run into it quite early on. Maybe we should archive the last few days worth of posts labeled as "how to jump in at the deep end" :¬).
I think we covered nearly all of it now.
I am interested in these questions, but it helps doing programming, too :-). Hans
participants (2)
- Hans Aberg
- Kassen