Hi Robin and all!
ok.. I've been happily hacking away at the ChucK 1.2.0.0 sources, with good results so far.
Wow, that is awesome!!
(You can have them now, if you like -- a .cpp and .h file, with a query function to register the ugens, all in the same style as the current ugens; but note I'd still like to do a bit more testing if you do take them now).
We are leaving for ICMC soon (anyone else here going?) but we would love to add your code, either as plug-ins (we will enable it soon) or as statically linked default ugens. Thank you very much!
Next on the slab: VSTs. And there, things didn't go so well.
Hehe, the sample-at-a-time thing is likely the biggest hurdle.
I suppose it is possible to try running VSTs one sample at a time.
We should try this first (as you suggested), though I also expect the performance to be bad to horrendous. Still, it would give us a baseline, so it is probably worth doing.
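Just to make that baseline experiment concrete, the wrapper could be as dumb as this (a rough sketch only -- the process() shape, BlockProcessFn, and kNumChannels are stand-ins for whatever the VST entry point actually looks like, not real SDK types):

    // stand-in for a VST-style block process callback:
    //   process( float** inputs, float** outputs, int nframes )
    typedef void (*BlockProcessFn)( float ** inputs, float ** outputs, int nframes );

    static const int kNumChannels = 2;  // assumed stereo for illustration

    // called once per ChucK tick: present one input sample per channel to the
    // plugin as a 1-frame block, and collect one output sample per channel
    void tick_plugin( BlockProcessFn process,
                      const float in[kNumChannels], float out[kNumChannels] )
    {
        float inFrame[kNumChannels];
        float outFrame[kNumChannels];
        float * inPtrs[kNumChannels];
        float * outPtrs[kNumChannels];

        for( int c = 0; c < kNumChannels; c++ )
        {
            inFrame[c] = in[c];
            outFrame[c] = 0.0f;
            inPtrs[c] = &inFrame[c];
            outPtrs[c] = &outFrame[c];
        }

        // nframes == 1: correct results, but we pay the plugin's per-block
        // overhead on every single sample -- hence "bad to horrendous"
        process( inPtrs, outPtrs, 1 );

        for( int c = 0; c < kNumChannels; c++ )
            out[c] = outFrame[c];
    }

That at least gives us a fair worst case to measure any block-processing scheme against.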
What do you think about converting Chuck_VM::run( ) to execute Ugens on blocks of samples at a time? As far as I can see, it's do-able. And the across-the-board performance gains would probably be worth the effort.
We have certainly thought about this possibility, mostly when lamenting ChucK's less-than-optimal throughput. It is not high priority right now because we are still trying to make it work and make it right, before making it fast. There are still holes in the compiler and language that need to be filled. But I totally agree with you: the potential gain from block processing could be tremendous, and it may be possible without compromising the timing/granularity properties of the system.
Roughly, I think the revised code would look like this:
    #define MAX_UGEN_BLOCKSIZE 32  // or so

    while( m_running )
    {
        // run shreds for 1 tick
        // broadcast queued events

        // determine how many ticks we can run ugens for
        int ugenBlockSize;
        if( shreds active )
            ugenBlockSize = 1;
        else
            ugenBlockSize = min( MAX_UGEN_BLOCKSIZE, ticks_to_next_shred() );

        run_ugens( ugenBlockSize );

        // process messages
    }
This plan is solid and right on. I would venture to get rid of even the 1 tick when running shreds, since shreds technically always operate between samples, never on samples. Of course we need to handle loops...
The only breaking change I can see is Ugens that are chucked into loops...
You are right on once again. Here is how we might handle that, maybe:

1) detect/mark cycles in the ugen graph when changing connections
2) when computing samples, nodes in cycles still compute sample at a time, while the rest can compute in blocks
3) furthermore, non-cyclic nodes need to be partitioned and sorted: subgraphs that a cycle depends on must compute the whole block before the first sample of that cycle computes
4) we may be able to further optimize if delay lines are involved in the cycle

Hmm, I don't have much confidence in the validity of the above. Let me know if you see a problem or a better way altogether.
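For what it's worth, here's a standalone sketch of just step 1) -- marking which ugens sit on a cycle after a connection change. UGenNode, srcs, and mark_cycles() are made-up names for illustration, not the actual ChucK structures, and it does the dumb reachability walk rather than anything clever:

    #include <vector>
    #include <set>

    struct UGenNode
    {
        std::vector<int> srcs;   // indices of upstream ugens feeding this one
        bool inCycle;            // does this ugen sit on a feedback cycle?
    };

    // can we reach 'target' from 'from' by walking upstream connections?
    static bool reaches( const std::vector<UGenNode> & graph, int from,
                         int target, std::set<int> & seen )
    {
        if( from == target ) return true;
        if( !seen.insert( from ).second ) return false;  // already visited
        for( size_t i = 0; i < graph[from].srcs.size(); i++ )
            if( reaches( graph, graph[from].srcs[i], target, seen ) )
                return true;
        return false;
    }

    // step 1): call whenever a connection is made or broken; a ugen is on a
    // cycle exactly when one of its own sources can reach it again.
    void mark_cycles( std::vector<UGenNode> & graph )
    {
        for( size_t n = 0; n < graph.size(); n++ )
        {
            graph[n].inCycle = false;
            for( size_t i = 0; i < graph[n].srcs.size() && !graph[n].inCycle; i++ )
            {
                std::set<int> seen;
                if( reaches( graph, graph[n].srcs[i], (int)n, seen ) )
                    graph[n].inCycle = true;
            }
        }
    }

Step 2) would then tick the inCycle ugens one sample at a time and the rest a block at a time, with step 3)'s partitioning deciding the order -- that part is the piece I'm least sure about.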
I'm willing to take a crack at an experimental version some time, but I don't imagine your sources are too stable right now. I'd be a bit concerned about making this kind of wide-ranging change on a very active source tree, with no prospect of ever being able to merge it back into mainline sources.
The source tree is indeed in flux these days/hours/seconds/samps. However, I really want to do this too. Let's work together to give this a shot. If you like, sign up and join 'chuck' on CVS: http://cvs.cs.princeton.edu/ Thanks again for your great work! This rocks. Best, Ge!