Hello ChucKers,

I have a ChucK class that maps all the samples in a drum kit to SndBufs:

https://github.com/heuermh/lick/blob/deab64e5b1eb1783e9a68bb508a0317c0287bf1...

Altogether this creates 238 SndBufs, which appears to be more than ChucK can handle. Sample playback skips, repeats, or is otherwise corrupted. Removing some of the mappings improves playback.

Is there a way to increase the memory allocated to ChucK at startup? Alternatively, is there a better way to handle loading and playback for hundreds of samples?

Thanks,

michael
On Thu, Jan 26, 2012 at 03:06:24PM -0600, Michael Heuer wrote:
Hello ChucKers,
Hey Michael.
I have a ChucK class that maps all the samples in a drum kit to SndBufs
https://github.com/heuermh/lick/blob/deab64e5b1eb1783e9a68bb508a0317c0287bf1...
Altogether this creates 238 SndBufs, which appears to be more than ChucK can handle. Sample playback skips, repeats, or is otherwise corrupted. Removing some of the mappings improves playback.
That's rather a lot! :-)

When I read your mail I suspected that you were connecting all of those SndBuf's to the DAC, which would indeed cost quite a lot of CPU (only connecting the ones that you need when you need them and disconnecting them after that would save a lot in your case)... However looking at your file I don't see either a SndBuf or a dac at all. Those must be in another file.

So, let's talk about it; do you mean that simply allocating this amount of memory causes a large CPU load in and of itself? It shouldn't; IMHO only UGens and code should cost CPU. It could be that the garbage collection is going bonkers? I don't think memory allocation should do this, at least not once it's allocated. 200 mono drumhits doesn't sound like a lot of memory to put into the ram of a computer these days; expecting it all to play *while* the samples are being pulled from the drive into ram might be a bit much.

Does that help at all?

Kas.
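PS: by "only connecting the ones that you need" I mean something along these lines (just a sketch, untested; the file name is a placeholder):

    SndBuf buf;
    "kick.wav" => buf.read;

    fun void hit()
    {
        buf => dac;           // connect only for the duration of the hit
        0 => buf.pos;         // play from the start
        buf.length() => now;  // wait until the sample has finished
        buf =< dac;           // disconnect again, so it stops costing CPU
    }

    spork ~ hit();
    1::second => now;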
Kassen wrote:
Michael wrote:
Altogether this creates 238 SndBufs, which appears to be more than ChucK can handle. Sample playback skips, repeats, or is otherwise corrupted. Removing some of the mappings improves playback.
That's rather a lot! :-)
When I read your mail I suspected that you were connecting all of those SndBuf's to the DAC, which would indeed cost quite a lot of CPU (only connecting the ones that you need when you need them and disconnecting them after that would save a lot in your case)... However looking at your file I don't see either a SndBuf or a dac at all. Those must be in another file.
Yep, every instance of Sample has a SndBuf chucked to dac:

https://github.com/heuermh/lick/blob/deab64e5b1eb1783e9a68bb508a0317c0287bf1...
So, let's talk about it; do you mean that simply allocating this amount of memory causes a large CPU load in and of itself? It shouldn't, IMHO only UGens and code should cost CPU. It could be that the garbage collection is going bonkers?
ChucK appears to be hitting one CPU on a two CPU machine pretty hard, but is only using 305M of RAM. Overall CPU usage on the machine is around 55%.
I don't think memory allocation should do this, at least not once it's allocated. 200 mono drumhits doesn't sound like a lot of memory to put into the ram of a computer these days; expecting it all to play *while* the samples are being pulled from the drive into ram might be a bit much.
Good point. I tried adding a 20 second wait between creating the BigMono object and all the samples and playing them, to allow for disk IO, and I still get the playback problems.

If anyone wants to play along at home, you'll need to download LiCK from github and the Big Mono drum kit from here

http://www.analoguedrums.com

and rename the .wav files according to the attached (some files in bigmono.zip are named incorrectly), or rename some other samples, or change the file paths in BigMono.ck to match your samples, or code-reuse by cut-and-pasting BigMono.ck into a different ChucK class to match your samples. See e.g.

https://github.com/heuermh/lick/blob/master/RolandTr606.ck

Then

$ chuck --loop &
$ chuck + import.ck
$ chuck + examples/bigMonoDemo.ck

michael
On Thu, Jan 26, 2012 at 04:01:09PM -0600, Michael Heuer wrote:
Yep, every instance of Sample has a SndBuf chucked to dac
Then that is where your CPU is going. The UGens that aren't "playing" are still being calculated so you have 200+ UGens generating 0's. Clearly that's not so efficient so you will need to implement some sort of system for "voice cycling" where at any time only -say- 10 of these buffers are connected to the dac.
ChucK appears to be hitting one CPU on a two CPU machine pretty hard, but is only using 305M of RAM. Overall CPU usage on the machine is around 55%.
Yes, that sounds like what I expected. The bad news is that ChucK doesn't use multiple CPUs (or cores, or threading, which comes down to the same thing). This is because of the design, and it'll be non-trivial to have ChucK figure out in what cases we can use multiple CPUs.

Sorry to have to bring the bad news, but I don't think there is any way around it; you'll have to implement voices to limit how many UGens are connected at any time. I wish I had better news.

Yours,
Kas.
Kassen wrote:
Michael wrote:
Yep, every instance of Sample has a SndBuf chucked to dac
Then that is where your CPU is going. The UGens that aren't "playing" are still being calculated so you have 200+ UGens generating 0's. Clearly that's not so efficient so you will need to implement some sort of system for "voice cycling" where at any time only -say- 10 of these buffers are connected to the dac.
Hmm. That sounds difficult, so it might have to wait until the RPM challenge is over. :) michael
On Thu, Jan 26, 2012 at 05:09:52PM -0600, Michael Heuer wrote:
Hmm. That sounds difficult, so it might have to wait until the RPM challenge is over. :)
If you keep all of them in a big array then they are all indexed. Then you can have an array of 10 (or so) integers that keeps track of the current 10 in use. When a new sample needs playing you look up in that array what the oldest sample is, unchuck that, replace the number with the number of the new one and chuck the new one to dac. Then you just need to cover for re-triggering buffers that are already connected, but that's quite simple.

That will work, but it won't deal with clicks for you. How hard clicks are to deal with will depend on how dense your drumming gets, compared to the length of the sample. If you are lucky you won't need to care at all.

Yours,
Kas.
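PS: in rough ChucK, that could look something like this (untested sketch; the kit size, polyphony and function name are just placeholders):

    SndBuf kit[238];        // all samples, loaded elsewhere
    10 => int POLY;         // how many voices may be connected at once
    int active[POLY];       // which kit index each voice slot holds (-1 = empty)
    time started[POLY];     // when each slot was last (re)triggered

    for (0 => int i; i < POLY; i++) -1 => active[i];

    fun void trigger(int which)
    {
        // re-trigger if this buffer is already connected
        for (0 => int i; i < POLY; i++)
        {
            if (active[i] == which)
            {
                0 => kit[which].pos;
                now => started[i];
                return;
            }
        }

        // otherwise steal the oldest (or an empty) slot
        0 => int oldest;
        for (1 => int i; i < POLY; i++)
            if (started[i] < started[oldest]) i => oldest;

        if (active[oldest] >= 0) kit[active[oldest]] =< dac;

        which => active[oldest];
        now => started[oldest];
        kit[which] => dac;
        0 => kit[which].pos;
    }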
On Thu, Jan 26, 2012 at 15:22, Kassen
If you keep all of them in a big array then they are all indexed. Then you can have an array of 10 (or so) integers that keeps track of the current 10 in use. When a new sample needs playing you look up in that array what the oldest sample is, unchuck that, replace the number with the number of the new one and chuck the new one to dac. Then you just need to cover for re-triggering buffers that are already connected, but that's quite simple.
That will work, but it won't deal with clicks for you. How hard clicks are to deal with will depend on how dense your drumming gets, compared to the length of the sample. If you are lucky you won't need to care at all.
Yours, Kas.
It's not too hard to create an instrument with noteOn / noteOff methods for each SndBuf. A noteOn patches the instrument into a chain with a quick gain ramp up from zero; a noteOff ramps it down and sporks a thread to unpatch it after the ramp hits zero. At least, that's what I think I used to do... ;)
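Roughly like this, if memory serves (just a sketch, untested; the Envelope-based ramp and all the names are placeholders for whatever you prefer):

    class SampleVoice
    {
        SndBuf buf;
        Envelope env;
        buf => env;              // internal chain, not yet connected to dac
        5::ms => env.duration;   // quick ramp time

        fun void noteOn()
        {
            env => dac;          // patch into the chain
            0 => buf.pos;        // restart the sample
            env.keyOn();         // ramp up from zero
        }

        fun void noteOff()
        {
            env.keyOff();        // ramp back down
            spork ~ unpatch();   // unpatch once the ramp has hit zero
        }

        fun void unpatch()
        {
            env.duration() => now;
            env =< dac;
        }
    }

    // usage
    SampleVoice v;
    "snare.wav" => v.buf.read;
    v.noteOn();
    200::ms => now;
    v.noteOff();
    50::ms => now;               // give the sporked unpatch time to run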
On Thu, Jan 26, 2012 at 03:52:07PM -0800, Robert Poor wrote:
It's not too hard to create an instrument with noteOn / noteOff methods for each SndBuf. A noteOn patches the instrument into a chain with a quick gain ramp up from zero; a noteOff ramps it down and sporks a thread to unpatch it after the ramp hits zero. At least, that's what I think I used to do... ;)
That's cleanest sound-wise, not very hard and very ChucKist in structure. On the downside it's hard to guarantee that you won't run out of CPU that way. Ah, DSP... somehow it always starts interesting and then gets to choices that hurt a bit however you make them. Kas.
On Thu, Jan 26, 2012 at 16:04, Kassen
That's cleanest sound-wise, not very hard and very ChucKist in structure. On the downside it's hard to guarantee that you won't run out of CPU that way.
Okay, the devil is in the details. I actually had a fixed array of N instruments, and would steal the oldest instrument if the limit was hit. There are other strategies. For example, you can have a soft limit that allows any number of noteOn's, but starts ramping down the oldest notes to keep the number of instruments at or below N (rough sketch below).

Ah, DSP... somehow it always starts interesting and then gets to choices that hurt a bit however you make them.
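The soft-limit variant, very roughly (untested sketch; Envelope for the fade and all the names and numbers are placeholders):

    // assumes each SndBuf was wired into its own Envelope once at load
    // time (buf => env), and that a voice is only re-triggered after its
    // fade-out has finished; re-triggering a still-sounding voice is not
    // handled here
    8 => int MAX;                // target ceiling on sounding voices
    10::ms => dur RAMP;
    Envelope sounding[0];        // every triggered voice, in trigger order
    0 => int head;               // index of the oldest voice not yet faded

    fun void fadeOut(Envelope env)
    {
        env.keyOff();
        RAMP => now;
        env =< dac;
    }

    fun void noteOn(SndBuf buf, Envelope env)
    {
        RAMP => env.duration;
        env => dac;
        0 => buf.pos;
        env.keyOn();
        sounding << env;         // newest at the end

        // soft limit: any number of noteOn's is allowed, but start
        // fading the oldest voices once we are over MAX
        while (sounding.size() - head > MAX)
        {
            spork ~ fadeOut(sounding[head]);
            head++;
        }
    }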
Totally off topic, but did everyone see the paper on the new 'faster than fft' algorithms?

lightweight: http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
heavyweight: http://arxiv.org/abs/1201.2501v1

Enjoy...
On Thu, Jan 26, 2012 at 04:25:27PM -0800, Robert Poor wrote:
Okay, the devil is in the details. I actually had a fixed array of N instruments, and would steal the oldest instrument if the limit was hit. There are other strategies. For example, you can have a soft limit that allows any number of noteOn's, but starts ramping down the oldest notes to keep the number of instruments at or below N.
Another possible optimisation is to first scan the array of running notes for any that may have stopped playing on their own (because they might be short samples) and prefer to recycle those first. The annoying bit is that that would work nicely with a list but would be awkward with an array, as the new note would still have to go at the end since it'd be the newest.
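In rough code (just a sketch, untested; assumes the voice-cycling arrays from earlier, i.e. one SndBuf and one start time per slot):

    fun int findSlot(SndBuf bufs[], time started[])
    {
        // first pass: prefer a voice whose SndBuf has already played
        // to its end (pos has reached the number of samples)
        for (0 => int i; i < bufs.size(); i++)
            if (bufs[i].pos() >= bufs[i].samples()) return i;

        // otherwise fall back to stealing the oldest voice
        0 => int oldest;
        for (1 => int i; i < started.size(); i++)
            if (started[i] < started[oldest]) i => oldest;
        return oldest;
    }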
Totally off topic, but did everyone see the paper on the new 'faster than fft' algorithms?
I saw the summary. Looked like something that might apply to us. I was hoping our MIR experts would be looking into this. Kas.
participants (3)

- Kassen
- Michael Heuer
- Robert Poor