towards matching client / server messages in chuck
Hello All,

I'm looking to build a minimal collaborative music production / jamming system using chuck + pygtk + possibly pyORBit2 if it is needed. Basically, the flow is this: I see there being a 'live' chuck (remote) running in loop mode which goes to the 'house' mix, and independent workstations (local) where people can edit shreds with the pygtk apps and preview them. Once they are satisfied with their previews, they can submit them to the 'live' mix.

However, there are 2 missing features for that as of 1.1.5.3:
- as far as I can tell, it is not possible to synchronize the clocks of 2 chuck processes
- submitting a shred to chuck in loop mode does not return an ID to the clients

The former means that 'remote' loops will not play in time with 'local' loops, and the latter means clients have no way of knowing which shreds they have submitted into the 'live' mix.

I wasn't prepared to fix the first problem yet (big surprise :) ), so I went ahead and wrote this patch against chuck 1.1.5.3 for the second problem. (I would have written it against -current, but didn't know about anoncvs.) Basically, the second data field in the Net_Msg is filled with the return code of the desired operation (instead of the more global 'ok'/'nok' that seems to be stored in the first data field). I had to tweak the shred queuing a little bit so that the shred id was returned up the call graph, tweak the client such that it would actually print the corresponding message, and hold on to the initial message type for the print.

I'll be happy to integrate it in a cleaner way with the cvs chuck if desired / add support for other status messages. Ideally, I'd like all the messages to match on the client and server, but I didn't want to go mucking around with the protocol without seeing how well it would be received / fit in with other plans.

As a side note, I wasn't sure what 'immediate' mode was for shred submission, so I made all the shreds immediate as a test (because immediate shreds get an id at that point, which made it -much- easier to know the shred id :) ). It seemed to have cleared up a strange bug whereby shred submission triggers an un-queued sound to play at the beginning of the new shred. (I'm developing on an apple laptop, but can do linux/jack testing; this patch has only been tested on osx.)

Thanks for the coolest audio thingy around.. it is exactly what I was looking for.

- Chris
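PS - to make the reply format concrete, here is a rough C++ sketch of the idea. The names below (net_reply, reply_for_add, print_reply) are made up purely for illustration; the actual Net_Msg layout and call paths in the chuck source may differ:

#include <cstdio>

// illustrative only: status in one field, operation result (e.g. shred id) in a second
struct net_reply
{
    int status;    // 1 = ok, 0 = not ok (the "first data field")
    int value;     // operation-specific result, e.g. the new shred id (the "second data field")
};

// server side: after queuing the shred, report the assigned id back to the client
net_reply reply_for_add( int assigned_id )
{
    net_reply r;
    r.status = ( assigned_id > 0 ) ? 1 : 0;
    r.value = assigned_id;
    return r;
}

// client side: print a message that matches what the server actually did
void print_reply( const net_reply & r )
{
    if( r.status ) printf( "[chuck](remote): added shred %d\n", r.value );
    else printf( "[chuck](remote): operation failed\n" );
}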
On Nov 23, 2004, at 9:43 AM, chris turner wrote:
I'm looking to build a minimal collaborative music production / jamming system using chuck + pygtk + possibly pyORBit2 if it is needed.
This is excellent.
However, there are 2 missing features for that as of 1.1.5.3:
- as far as I can tell, it is not possible to synchronize the clocks of 2 chuck processes
- submitting a shred to chuck in loop mode does not return an ID to the clients
I wasn't prepared to fix the first problem yet (big surprise :) ) so I went ahead and wrote this patch against chuck 1.1.5.3 for the second problem.
Yes, we need this very much - thank you! I will put it into the next release; we should work together to integrate it.
I had to tweak the shred queuing a little bit so that the shred id was returned up the call graph, tweak the client such that it would actually print the corresponding message, and hold on to the initial message type for the print.
I'll be happy to integrate it in a cleaner way with the cvs chuck if desired / add support for other status messages. Ideally, I'd like all the messages to match on the client and server, but I didn't want to go mucking around with the protocol without seeing how well it would be received / fit in with other plans.
We need to do some proper design on this part of the protocol...
As a side note, I wasn't sure what 'immediate' mode was for shred submission, so I made all the shreds immediate as a test (because immediate shreds get an id at that point, which made it -much- easier to know the shred id : ) )
This could be problematic. Immediate mode was for skipping the message queue and calling the VM on the current thread. This is how machine.add() and friends work. However, it could crash when used from the server thread, because the VM runs on another thread and is not reentrant. Sorry for the lack of documentation on this...
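Roughly, the difference looks like this - a sketch only, with hypothetical types (shred_request, submit_queued, submit_immediate) and standard-library threading just to illustrate the point; this is not the actual chuck source:

#include <deque>
#include <mutex>

struct shred_request { /* compiled code, args, ... */ };

std::mutex g_queue_mutex;
std::deque<shred_request> g_queue;   // drained by the VM, on its own thread, each pass

// safe from any thread: just enqueue; the VM picks it up on its next pass
void submit_queued( const shred_request & req )
{
    std::lock_guard<std::mutex> lock( g_queue_mutex );
    g_queue.push_back( req );
}

// what 'immediate' effectively does: run it right now on the calling thread.
// fine from the VM thread itself (this is how machine.add() and friends can
// hand back an id right away), but unsafe from the network/server thread,
// since the VM lives on another thread and is not reentrant.
void submit_immediate( const shred_request & req )
{
    // ... direct call into the VM would go here (placeholder) ...
}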
It seemed to have cleared up a strange bug whereby shred submission triggers an un-queued sound to play at the beginning of the new shred. (I'm developing on an apple laptop, but can do linux/jack testing; this patch has only been tested on osx.)
Does the unwanted audio occur with all client/server network shreds? One thing to test is whether it's the audio interface part of ChucK or whether there are actual unwanted samples being generated. You could record the server using rec.ck - this is sample-synchronous with the dac, so it will only catch things that are actually synthesized, and is immune to timing artifacts or audio interrupts. If you don't hear the glitches in the recording, then it's likely something in the machine or the audio interface code. If you do hear the glitches, then something is actually sending them to the dac. Since changing to immediate got rid of the problem, it's likely something in the machine code. Please post an example when you have time.
Thanks for the coolest audio thingy around.. it is exactly what I was looking for.
Thank _you_! Please keep posting dev issues here - we will start a (gasp) documentation effort soon for us developers!

Best, Ge!
Ge,

Thanks for the kind words, and sorry for the delay in the reply. I've been working on completing the 'alpha' prototype of my system (so I knew I wouldn't have time to plan on coding chuck patches). However, this is nearing completion, so I'll probably be able to devote more effort to patching in support for matching client/server messages.

I'm thinking the plan might go like this (pending your approval, other plans, etc):
- creating a central message table or message printing class/function
- modifying the existing server-side messages to use this table
- sorting out the request/response protocol to support message return codes
- implementing client-side printouts based on the protocol + table

As for the immediate mode / 'bug': it looks like the 'bug' I was talking about was possibly related to the script I was using. I'll come up with an example to post / work on debugging my scripts. I don't remember the exact method calls, but there was a 'submit-immediate' and a 'submit-for-later' call to the chuck vm class. I'm wondering if there would be any negative impact on the networking if there were a 'submit-eventually' that would block until an ID was properly returned to the caller. If this could work, it might be a better approach to the problem I was addressing (getting a shred ID on any submission) than kludging around it. I'll probably just do some tests with this to see.

Assuming we're able to create a plan to get the changes into chuck-proper, would you prefer I work against the stable release or the development copy? (I'm assuming the latter.)

Thanks again for the great software. I look forward to working on this project in the future.

- Chris
Hi Chris!
I've been working on completing the 'alpha' prototype of my system (so I knew I wouldn't have time to plan on coding chuck patches)
This is a python + ChucK jamming system, right? How many critical features of ChucK are missing for doing this, besides the proper network/id protocol?
I'm thinking the plan might go like this (pending your approval, other plans, etc):
- creating a central message table or message printing class/function
- modifying the existing server-side messages to use this table
- sorting out the request/response protocol to support message return codes
- implementing client-side printouts based on the protocol + table
This sounds very good - that would clear up much of the existing mess. We should wait for a little while longer, however, before implementing - because the major version will be released soon, and there are many changes to existing code.
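Something along these lines, perhaps - the names and codes below (net_status, g_status_msg, client_print) are hypothetical, just to sketch the shape of a shared table, not what is in the source today:

#include <cstdio>

enum net_status
{
    NET_OK_ADDED = 0,    // shred added; reply carries the new id
    NET_OK_REMOVED,      // shred removed
    NET_ERR_COMPILE,     // file failed to compile
    NET_ERR_NO_SHRED,    // no shred with that id
    NET_STATUS_COUNT
};

// one table, shared by the server (when building replies) and the client
// (when printing), so the two sides cannot drift apart
static const char * g_status_msg[NET_STATUS_COUNT] =
{
    "added shred %d",
    "removed shred %d",
    "compile error",
    "no such shred: %d"
};

void client_print( int status, int value )
{
    if( status < 0 || status >= NET_STATUS_COUNT )
    { printf( "[chuck](remote): unknown status %d\n", status ); return; }
    printf( "[chuck](remote): " );
    printf( g_status_msg[status], value );
    printf( "\n" );
}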
As for the immediate mode / 'bug': it looks like the 'bug' I was talking about was possibly related to the script I was using. I'll come up with an example to post / work on debugging my scripts.
I see. But it is still dangerous to call the VM in this mode from another thread. I will make some passes over this and see if there are any safe/quick workarounds.
I don't remember the exact method calls, but there was a 'submit-immediate' and a 'submit-for-later' call to the chuck vm class.
I'm wondering if there would be any negative impact on the networking if there were a 'submit-eventually' that would block until an ID was properly returned to the caller. If this could work, it might be a better approach to the problem I was addressing (getting a shred ID on any submission) than kludging around it.
submit-eventually would make sense - the caller wouldn't have to wait for too long before the id is assigned or the thing is rejected anyway.
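To sketch what the blocking path could look like (hypothetical types again - pending_add, submit_eventually, vm_answer - and standard-library primitives used only for illustration, not the actual chuck source):

#include <condition_variable>
#include <mutex>

struct pending_add
{
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    int shred_id = 0;    // 0 = rejected, > 0 = the assigned id
};

// called on the network thread: enqueue the shred elsewhere, then block here
// until the VM thread has either assigned an id or rejected the request
int submit_eventually( pending_add & pending )
{
    std::unique_lock<std::mutex> lock( pending.m );
    pending.cv.wait( lock, [&]{ return pending.done; } );
    return pending.shred_id;
}

// called by the VM thread once it has actually sporked (or rejected) the shred
void vm_answer( pending_add & pending, int id )
{
    {
        std::lock_guard<std::mutex> lock( pending.m );
        pending.shred_id = id;
        pending.done = true;
    }
    pending.cv.notify_one();
}

Since the queue is drained regularly, the wait should only be on the order of one VM pass.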
Assuming we're able to create a plan to get the changes into chuck-proper, would you prefer I work against the stable release or the development copy?
Right now there is a manual branch in CVS for 1.2.0.0 (chuck_dev/v2), which is going back into src/ once it's ready - we should start there (and then).
I look forward to working on this project in the future.
That's great to hear. Thank you very much!

Best, Ge!
participants (2)
- chris turner
- Ge Wang