
Hans;
If you want to automate that, then the UGen must be able to generate a signal to do the disconnect. The future scheduling is just a workaround in the absence of that.
While I can see your perspective, I think UGens mainly generate signal and don't do much else, and that's nice because it's clear. I could imagine optimising the STKInstruments so they act as though they were set to .op(0) after the .noteOff() decay has run out, but that's an optimisation issue and not a ChucK syntax issue.
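To make the workaround concrete, here is a minimal ChucK sketch of scheduling the cleanup by hand; the instrument choice and the decay length are just guesses for illustration:

    // manual cleanup after noteOff: wait out the decay, then unchuck
    StifKarp s => dac;     // any STKInstrument would do here
    1.0 => s.noteOn;
    1::second => now;
    1.0 => s.noteOff;
    2::second => now;      // a guess at how long the decay needs
    s =< dac;              // disconnect by hand; no signal from the UGen tells us when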
It seems ChucK avoids C++ "automatic" variables, in favor of returning references. This is what is causing the cleanup problem: when the function has finished, there is no way to tell which references are still in use and which are not. One way to do that is to trace the references from the function stack and global objects. This way, the unused elements can be removed.
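A tiny ChucK illustration of the problem as I read it (the function name is made up):

    // the local name 's' goes out of scope when beep() returns, but
    // the SinOsc stays connected to dac and keeps sounding; without
    // tracing or refcounting the VM can't tell whether it is garbage
    fun void beep()
    {
        SinOsc s => dac;
        440.0 => s.freq;
    }
    beep();
    1::second => now;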
Yes, I think we'll get a method based on reference counting.
They both use the Mach kernel, I think (at least Mac OS X does), which uses preemptive multitasking. Mac OS X may still have support for cooperative multitasking, where each program must release time to the others - I have a vague memory that it may be better for time-critical situations, though a chore from the programming point of view.
Yes, the concerns are quite different; there are good reasons why business users aren't very interested in investing in the Linux RT kernel variants either (or so I hear; most of those reasons are way beyond me, but that's fine, as I don't design OSes).
In short, that means ChucK shred timing should be completely deterministic and extremely precise, and when it's not, that's a bug. It also means that, for chucking, multi-core CPUs won't do you much good.
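For instance, the interleaving of sporked shreds is sample-accurate and repeatable; a quick sketch:

    // two shreds advancing time independently; the printed
    // timestamps interleave identically on every run
    fun void tick( string name, dur t )
    {
        while( true )
        {
            <<< name, now >>>;
            t => now;
        }
    }
    spork ~ tick( "a", 100::ms );
    spork ~ tick( "b", 150::ms );
    1::second => now;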
Aha, so you may get a timing problem there.
Well, the ChucK VM runs as a single thread. As long as that thread gets enough CPU to do its calculations by the time it needs to turn in a buffer's worth of samples, we should be fine, timing-wise. For better or worse we are fairly independent of the rest of the OS and of the hardware, at least as long as the CPU (or the core assigned to us) holds up. Distributing something like ChucK over multiple CPUs is non-trivial.
A VM, though, usually means overhead.
Yes. However, for us here that's worthwhile, as we have a VM that shares memory space with the parser and compiler. This should - hopefully - lead to the ability to update running code in ways that plain compiled languages aren't able to do. With some trickery with public classes we can already do some of that (see the sketch below). There has been some speculation about stand-alone compiled ChucK programs, and there is no reason why we couldn't have those in the future, but personally I'm still more interested in more interaction with the VM.
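A sketch of the kind of public-class trickery I mean, assuming two files chucked to the same VM (the file names and the Bus class are made up for the example):

    // globals.ck -- chucked first; the public class outlives this file
    public class Bus
    {
        static Gain @ out;
    }
    Gain g @=> Bus.out;
    Bus.out => dac;
    1::week => now;    // keep this shred alive so Bus stays usable

    // voice.ck -- added later (chuck + voice.ck); it finds Bus.out
    SinOsc s => Bus.out;
    440.0 => s.freq;
    2::second => now;

Replace voice.ck while globals.ck keeps running and the routing survives; that's the sort of live update I mean.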
Yes, thank you. That timing aspect of threads (or shreds) is important.
Yes. It's quite essential. IMHO Ge's thesis is one of the best resources we have for understanding the how and why of ChucK's architecture in detail (there actually is a method to the madness, you see...). It's one of the more interesting aspects of chucking and you seem to have run into it quite early on. Maybe we should archive the last few days' worth of posts, labeled "how to jump in at the deep end" :¬). I think we covered nearly all of it now. Yours, Kas.