Ah, the digital world - never enough bits! My first reaction was that this extra sample was a floating-point truncation error in the s.phase() and s.period() math, but then I came to think: hmm... would those minuscule fractions of a sample make any difference to the zero-crossing timing?
Yes, those are good questions. Of course a samp is a very short period, but thanks to double floats we have an almost obscene amount of resolution at that scale. I also think that floating-point errors only become significant when multiplying very large numbers by very small ones, which I don't think is the case here (but I'm not sure).
Furthermore, why would a truncation error be weighted toward one side?
That bit may not be very hard; it may well be the case that floats, when truncated, are truncated towards zero. As we are always dealing with positive numbers here, that could explain something?
Your declaration that "the value is on -average- lower in magnitude" started me thinking: 'okay, how average?' So I wrote up a little script:
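Roughly along these lines (a sketch of the idea; the 440 Hz test tone and the trial count are arbitrary):

// tally where the lowest-magnitude sample lands, relative to the
// calculated end of the period
SinOsc s => blackhole;   // blackhole keeps the UGen ticking without making sound
440 => s.freq;           // 44100/440 samples per period: not a whole number

0 => int atCalculated;   // minimum landed on the calculated sample
0 => int oneLater;       // minimum landed one sample later

for (0 => int i; i < 1000; i++)
{
    (1 - s.phase())::s.period() => now;   // try to reach the end of the period
    Std.fabs(s.last()) => float a;        // magnitude where we landed
    1::samp => now;
    Std.fabs(s.last()) => float b;        // magnitude one sample later
    if (a < b) atCalculated++;
    else oneLater++;
}
<<< "lowest at calculated sample:", atCalculated, "| one sample later:", oneLater >>>;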
Ah-Ha! A scientific approach to investigating an unknown phenomenon. :¬)
I'll admit, at first I didn't have the zero crossing checks in, so I was understanding very little, but once I realized that the zero crossings were equally distributed between the two possible locations, things clicked. Obviously the extra sample would "on average" (about 75% of the time) result in a lower s.last() value, because it's the sample around which the zero crossing would pivot. If the zero crossing didn't pivot, then you'd see an even distribution of probability for the lowest value.
I think you are right.
Your initial calculation is advancing time to the end of the period (or tries to). I think there is a truncation error hidden in the math (which would explain why the zero crossing jumps back and forth between the two locations with an even distribution), so only half the time have you advanced fully to the end of the period; the other half of the time you're off by one sample.
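One cheap way to see the hidden fraction is to print the advance expressed in samples before taking it; a sketch:

// how many samples is "advance to the end of the period", really?
SinOsc s => blackhole;
440 => s.freq;   // the period is not a whole number of samples

for (0 => int i; i < 8; i++)
{
    (1 - s.phase())::s.period() => dur toEnd;
    <<< "advance in samples:", toEnd / samp >>>;   // note the fractional part
    toEnd => now;
}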
Hmmmm, I really would expect it to truncate towards zero. Another thing is that I'm not sure at what point .phase() is actually calculated. When you call .last() you get the sample from the last UGen tick, so that value is, at that point, on average .5::samp out of date. I'm not sure whether .phase() is also calculated at that moment or whether it can be calculated in between ticks. This could be another factor.
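That part should be easy to poke at, if I'm reading the timing right; a sketch (assumes we start on a sample boundary):

// .last() only updates when the UGen ticks, once per samp
SinOsc s => blackhole;
440 => s.freq;
100::samp => now;                         // land on a sample boundary

<<< "on the boundary:  ", s.last() >>>;
0.5::samp => now;                         // advance half a sample...
<<< "half a samp later:", s.last() >>>;   // ...same value; no tick has happened
1::samp => now;                           // cross the next sample boundary
<<< "a full samp later:", s.last() >>>;   // the UGen has ticked again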
The calculation is not finding the sample closest to the zero crossing; doing that might be a more precise, yet more laborious, approach with little real-world improvement.
I think it could be done by dividing "(1-s.phase())::s.period()" by a samp, then rounding to the nearest integer and advancing time by that many samples. However, that's assuming "(1-s.phase())::s.period()" doesn't cause rounding errors itself, an assumption that now seems far from safe. Also, this would demand that we know how the shred's timing and the UGen ticks line up, as we want to disconnect right after the lowest sample and not right before.
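Something like this, maybe; an untested sketch (it leans on Math.round, and it still dodges the tick-alignment question):

// advance by a whole number of samples, rounded to the nearest
(1 - s.phase())::s.period() => dur toEnd;   // time left in this cycle
toEnd / samp => float nSamples;             // the same span, counted in samples
Math.round(nSamples) $ int => int n;        // nearest whole sample
n::samp => now;                             // advance by whole samples only
s =< dac;                                   // disconnect at (we hope) the lowest sample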
I'd unchuck items by adding a while loop:
while (s.gain() > 0.000000001)
{
    s.gain() / 1.005 => s.gain;  // fade the gain down a little...
    samp => now;                 // ...once per sample
}
s =< dac;                        // all but silent now; safe to disconnect
Though who knows how long that can take. (From a gain of 1, it's about ln(10^9)/ln(1.005), roughly 4150 samples, or some 94 ms at 44.1 kHz.)
That will indeed work, but it's far from cheap. For audio-range signals this should finish sooner (nearly always):
while (Std.fabs(s.last()) > 0.000000001)
{
    s.gain() / 1.005 => s.gain;  // keep fading the gain down...
    samp => now;                 // ...while waiting for a sample to land on a zero crossing
}
s =< dac;                        // disconnect while the output itself is (nearly) zero