On 26 Apr 2009, at 23:47, Kassen wrote:
If you want to automate that, then the UGen must be able to generate a signal to do the disconnect. The future scheduling is just a workaround in the absence of that.
While I can see your perspective, I think UGens mainly generate signal and don't do much else, and that's nice because it's clear. I could imagine optimising the STKInstruments so they act as though they were set to .op(0) after the .noteOff() decay has run out, but that's an optimisation issue and not a ChucK syntax issue.
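For now, the scheduling workaround might look roughly like this (a minimal sketch; the 2::second decay tail is just a guess, not a measured value):

Mandolin m => dac;

220 => m.freq;
0.8 => m.noteOn;     // start the note
500::ms => now;      // let it ring
1.0 => m.noteOff;    // release

2::second => now;    // wait out the (assumed) decay tail
m =< dac;            // disconnect by hand; the Mandolin no longer ticks into dac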
The UGen class could be extended so that those that wish to can send a special generator-terminator event. Then, on a more basic level, it could be used to disconnect by hand in the circumstances where it is needed. But one could also allow calling a special handler (built into ChucK) that disconnects the generator. One would then still need some construct to indicate which ones. Such constructs would be backwards compatible in the sense that no existing UGen is required to have these capabilities.
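As a rough sketch of the idea using only what ChucK already has - a plain Event plus a watcher shred that does the disconnect (the names "done" and "watcher" are made up here; there is no built-in terminator event today):

Mandolin m => dac;
Event done;              // stands in for the hypothetical terminator event

// watcher shred: waits for the signal, then disconnects the generator
fun void watcher()
{
    done => now;         // block until signalled
    m =< dac;            // do the disconnect
}
spork ~ watcher();

0.8 => m.noteOn;
1::second => now;
1.0 => m.noteOff;
2::second => now;        // assumed decay time
done.signal();           // "terminate" the generator
me.yield();              // let the watcher run before this shred ends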
It seems ChucK avoids C++ "automatic objects" in favor of returning references. This is what is causing the cleanup problem: when the function has finished, there is no way to tell which references are still in use and which are not. One way to find out is to trace references from the function stack and global objects; that way, the unused elements can be removed.
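To see the cleanup problem from the ChucK side: a UGen created inside a function stays reachable through the UGen graph after the function returns, so the VM cannot simply free everything that was local to the call. A small sketch:

// the SinOsc made inside the function keeps sounding after the call returns,
// because dac still holds a connection to it
fun void beep()
{
    SinOsc s => dac;
    440 => s.freq;
}   // s goes out of scope here, but the oscillator is still connected

beep();
1::second => now;   // the tone is still audible during this second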
Yes, I think we'll get a method based on reference counting.
That is about the only method one can implement in C++ (if that is what is used to implement ChucK), and it is not so difficult to implement (I have used it in my own languages). It is supposed to be slow relative to other GC methods and cannot collect reference loops, but it is better than nothing.
They both use the Mach kernel, I think (at least Mac OS X does), which uses preemptive multitasking. Mac OS X may still have support for cooperative multitasking, where each program must release time to the others - I have a vague memory that it may be better for time-critical situations, though it is a chore from the programming point of view.
Yes, the concerns are quite different; there are good reasons why business users aren't very interested in investing in the Linux RT kernel variants either (or so I hear; most of those reasons are way beyond me, but that's fine as I don't design OSes).
It is hard to write a Unix-style kernel that way: because of security concerns, many low-level lookups may have to be done, making it slow. I think there are some discussions about that on the Wikipedia GNU Hurd page: http://en.wikipedia.org/wiki/Hurd
There has been some speculation about stand-alone compiled ChucK programs; there is no reason why we couldn't have those in the future, but personally I'm still more interested in more interaction with the VM.
You might check the difference between the Haskell <http://haskell.org/> programs Hugs, an interpreter, and GHCi, an interactive compiler. Mostly, I prefer Hugs, but the ability to compile and get a faster program might be good too.
Yes, thank you. That timing aspect of threads (or shreds) is important.
Yes. It's quite essential. IMHO Ge's thesis is one of the best resources we have for understanding the how and why of ChucK's architecture in detail (there actually is a method to the madness, you see...). It's one of the more interesting aspects of chucking and you seem to have run into it quite early on. Maybe we should archive the last few days' worth of posts labeled "how to jump in at the deep end" :¬).
I think we covered nearly all of it now.
I am interested in these questions, but it helps to do some programming, too :-). Hans