[chuck-users] 147th shred removed...segmentation fault? (Ryan Supak)

Ryan Supak ryansupak at gmail.com
Mon Jun 23 22:17:26 EDT 2014


Yuck, it behaves badly on Mac, but even worse on RPi. I get to maybe 30
shreds before it tanks... rs


On Mon, Jun 23, 2014 at 5:33 PM, Michael Heuer <heuermh at gmail.com> wrote:

> Oops, forgot to increment count
>
>   <<<"count", count++>>>;
>
> ...
>
> $ chuck --silent examples/dmxBug.ck
> [chuck](VM): removing shred: 2 (spork~exp)...
> count 0
> [chuck](VM): removing shred: 3 (spork~exp)...
> count 1
> ...
> [chuck](VM): removing shred: 144 (spork~exp)...
> count 142
> [chuck](VM): removing shred: 145 (spork~exp)...
> count 143
> [chuck](VM): removing shred: 146 (spork~exp)...
> Segmentation fault: 11
>
>
> On Mon, Jun 23, 2014 at 5:31 PM, Michael Heuer <heuermh at gmail.com> wrote:
> > Interesting find; even this simple example blows up for me.
> >
> > dmxBug.ck:
> >
> > fun void blink()
> > {
> >   while (true)
> >   {
> >     50::ms => now;
> >   }
> > }
> >
> > Shred shred;
> > spork ~ blink() @=> shred;
> >
> > 0 => int count;
> >
> > while (true)
> > {
> >   200::ms => now;
> >   shred.id() => Machine.remove;
> >   spork ~ blink() @=> shred;
> >   <<<"count", count>>>;
> > }
> >
> > $ chuck --silent examples/dmxBug.ck
> > ...
> > [chuck](VM): removing shred: 144 (spork~exp)...
> > count 0
> > [chuck](VM): removing shred: 145 (spork~exp)...
> > count 0
> > [chuck](VM): removing shred: 146 (spork~exp)...
> > Segmentation fault: 11
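> >
> > (A sketch of one possible workaround while this gets tracked down, untested:
> > instead of Machine.remove'ing the looping shred, have it watch a generation
> > counter and exit on its own when it is superseded. The names and timing
> > below are arbitrary.)
> >
> > 0 => int generation;
> >
> > fun void blink( int myGen )
> > {
> >   // exit as soon as a newer blink shred has been sporked
> >   while( myGen == generation )
> >   {
> >     50::ms => now;
> >   }
> > }
> >
> > spork ~ blink( generation );
> >
> > while( true )
> > {
> >   200::ms => now;
> >   // bumping the counter lets the previous shred fall out of its loop
> >   1 +=> generation;
> >   spork ~ blink( generation );
> > }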
> >
> >    michael
> >
> >
> > On Mon, Jun 23, 2014 at 5:17 PM, Ryan Supak <ryansupak at gmail.com> wrote:
> >> Update: I eliminated everything I could from the Shreds that were being
> >> Sporked, and I still get the exact same error (at exactly 147 Shreds).
> >>
> >> Here is what I shortened the Shreds to:
> >>
> >> fun void Blink()
> >> {
> >>     while( true )
> >>     {
> >>         50::ms => now;
> >>     }
> >> }
> >>
> >> fun void LFOMod()
> >> {
> >>     while( true )
> >>     {
> >>         50::ms => now;
> >>     }
> >> }
> >>
> >>
> >> On Mon, Jun 23, 2014 at 4:22 PM, Ryan Supak <ryansupak at gmail.com> wrote:
> >>>
> >>> Thanks for the thorough reply. I'm doing something even safer than "voice
> >>> stealing": each Spork is, at most, reconfiguring a single global
> >>> oscillator.
> >>>
> >>> Attached is the entire source. Lines 128-142 contain initialization code
> >>> that creates a Shred to make an LED blink by sending a MIDI message within
> >>> a time loop, and another Shred that sets the frequency of an LFO, polls it
> >>> every ten milliseconds, and sends some Serial output based on the LFO
> >>> position.
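> >>>
> >>> For reference, a minimal sketch of that shape (this is not the attached
> >>> source; the names, MIDI bytes, and rates below are placeholders, and the
> >>> Serial output is stubbed out with a console print):
> >>>
> >>> MidiOut mout;
> >>> mout.open( 0 );             // device number is a placeholder
> >>>
> >>> SinOsc lfo => blackhole;    // one shared LFO, polled rather than heard
> >>>
> >>> fun void Blink( dur rate )
> >>> {
> >>>     MidiMsg msg;
> >>>     while( true )
> >>>     {
> >>>         0x90 => msg.data1;  // placeholder note-on to toggle the LED
> >>>         60 => msg.data2;
> >>>         127 => msg.data3;
> >>>         mout.send( msg );
> >>>         rate => now;
> >>>     }
> >>> }
> >>>
> >>> fun void LFOMod( float freq )
> >>> {
> >>>     freq => lfo.freq;       // reconfigure the shared oscillator
> >>>     while( true )
> >>>     {
> >>>         // stand-in for the Serial output based on LFO position
> >>>         <<< "lfo position", lfo.last() >>>;
> >>>         10::ms => now;
> >>>     }
> >>> }
> >>>
> >>> spork ~ Blink( 500::ms ) @=> Shred blinkShred;
> >>> spork ~ LFOMod( 0.5 ) @=> Shred lfoShred;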
> >>>
> >>> Anytime that the parameters controlling the blinking rate and the LFO
> >>> rate/phase have occasion to change (when MIDI events come in to change
> >>> them), the "old" Shreds are Machine.Remove'd and they're recreated
> >>> immediately following. You can see this at lines 343 and 589.
> >>>
> >>> Notice that the LFOMod function and the Blink function aren't creating
> >>> anything new (with the exception of local arguments being instantiated),
> >>> but if you think those local arguments could be causing my problem I'll
> >>> eliminate them too.
> >>>
> >>> Please see the attached.
> >>> rs
> >>>
> >>>
> >>> On Mon, Jun 23, 2014 at 11:59 AM, Perry R Cook <prc at cs.princeton.edu>
> >>> wrote:
> >>>>
> >>>> I have lots of programs that spork hundreds or thousands
> >>>> of shreds without this error.  So, without seeing your
> >>>> code, my guesses (and questions) are as follows:
> >>>>
> >>>> Are you running this on a small memory architecture? Since
> >>>> you say headless, I'm guessing maybe Raspberry Pi?  It
> >>>> could be that you're running out of memory, so see next
> >>>> question.
> >>>>
> >>>> Does the shred that you're sporking declare new UGens or
> >>>> require memory to be allocated (arrays, lots of string
> >>>> manipulation, etc.)?  In general, avoid this if you
> >>>> can.  ChucK does no garbage collection, which
> >>>> means that you need to be cautious about declaring memory.
> >>>>
> >>>> So even a shred as simple as:
> >>>>
> >>>> fun void mySine()  {
> >>>>     SinOsc s => dac;
> >>>>     Math.random2f(100,1000) => s.freq;
> >>>>     second => now;
> >>>>     s =< dac;
> >>>> }
> >>>>
> >>>> will eat up memory, because even though the SinOsc
> >>>> is unchucked and never computes again, the memory
> >>>> for that structure is still around.  We want, someday,
> >>>> to make ChucK a proper garbage-collecting language,
> >>>> but it's hard, especially for real-time systems.
> >>>>
> >>>> If this turns out to be your issue, then one way to
> >>>> handle this is to make a global pool of fixed
> >>>> resources and have your shred grab from that.
> >>>> Like classic "round robin voice stealing" in
> >>>> synthesizers:
> >>>>
> >>>> SinOsc s[100];
> >>>> 0 => int next2Use;
> >>>>
> >>>> fun void mySine() {
> >>>>     next2Use => int thisOne;
> >>>>     1 +=> next2Use;
> >>>>     if (next2Use > 99) 0 => next2Use;
> >>>>     s[thisOne] => dac;
> >>>>     Math.random2f(100,1000) => s[thisOne].freq;
> >>>>     second => now;
> >>>>     s[thisOne] =< dac;
> >>>> }
> >>>>
> >>>> // to test:
> >>>> while (1) {
> >>>>     Math.random2f(0.01,0.1) :: second => now;
> >>>>     spork ~ mySine();
> >>>> }
> >>>>
> >>>> I just fired up a few dozen of these running in
> >>>> parallel; the VM showed 2600 total shreds running,
> >>>> no problem (and no clicks or dropouts!!).
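> >>>>
> >>>> (For reference, one way to get that kind of load from a single file is a
> >>>> sketch like the following, appended to the pool example above; the count
> >>>> of 36 loops is arbitrary.)
> >>>>
> >>>> fun void testLoop()
> >>>> {
> >>>>     while( true )
> >>>>     {
> >>>>         Math.random2f(0.01,0.1) :: second => now;
> >>>>         spork ~ mySine();
> >>>>     }
> >>>> }
> >>>>
> >>>> // a few dozen copies of the test loop, all drawing voices from the pool
> >>>> for( 0 => int i; i < 36; i++ )
> >>>> {
> >>>>     spork ~ testLoop();
> >>>> }
> >>>>
> >>>> // keep the parent shred alive
> >>>> while( true ) 1::second => now;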
> >>>>
> >>>> Hope this helps; if none of this applies, then you
> >>>> may have discovered a strange bug.  Source code please,
> >>>> and we can all scratch our heads on it.
> >>>>
> >>>>  Thanx!!  PRC