If it were me, I'd make a new pair of rev/g in smack(), chuck the
voice to it, then unchuck it on the way out. You're making shreds
pretty fast so that may not lead to a script you can leave running for
a long time, but it'd have the desired effect.
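Something along these lines, maybe (a hedged sketch only; the names smack(), the SndBuf voice, and the sample file are my own illustrative assumptions, since the original code isn't shown):

```chuck
// each call to smack() gets its own reverb/gain pair,
// chucks the voice to it, and unchucks on the way out
fun void smack()
{
    // fresh rev/g pair per call
    SndBuf voice => NRev rev => Gain g => dac;
    0.1 => rev.mix;
    0.5 => g.gain;

    "special:dope" => voice.read;   // any sample will do
    0 => voice.pos;

    // wait for the sound plus a little reverb tail
    voice.length() + 2::second => now;

    // unchuck everything so the UGens can be reclaimed
    voice =< rev;
    rev =< g;
    g =< dac;
}

spork ~ smack();
3::second => now;
```

The caveat above still applies: if you spork these very fast, each call allocates a new reverb, so a long-running script may accumulate a lot of processing.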
I'm not sure about this approach. Reverbs are the very last thing I'd want a large number of, due to the processing involved. If at all possible I'd always try to deal with the issue by setting volume and mix. Clearly this isn't always an option for all reverbs; we may want to throw acoustic viability to the wind and instead treat reverb as a per-voice parameter for a "purely synthetic" aesthetic.
It's an interesting issue; LiSa doesn't allow for per-voice outputs, nor do I think the ChucK architecture offers any options for that type of functionality, at least to my knowledge.
I'm not sure I understand the problem; the original post and code don't really go into what exactly Andrew is trying to do, which makes it hard to determine what the problem is, especially as I don't have a computer with an SMS (that would have to be a Mac in this context, I think) at hand.
I would think that any behaviour we care to define here should be available using one reverb and two instances of LiSa: one fed directly to the DAC and one fed exclusively to the reverb (which would end up at the DAC as well, of course), combined with LiSa.voiceGain(). Because the built-in reverbs are linear and time-invariant, we could safely use the two instances of LiSa in parallel, in identical ways, except that one would set the amount of straight signal for each voice while the other sets the reverb level, with no difference in the final result from using many reverbs summed.
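A rough sketch of what I mean (the recording source, durations, and gain values are just placeholder assumptions):

```chuck
// one reverb, two LiSas run in lockstep:
// "dry" goes straight to the dac, "wet" only through the reverb
adc => LiSa dry => dac;
adc => LiSa wet => NRev rev => dac;
0.1 => rev.mix;

6::second => dry.duration;
6::second => wet.duration;

// record the same input into both instances
1 => dry.record;
1 => wet.record;
2::second => now;
0 => dry.record;
0 => wet.record;

// trigger the "same" voice on each instance
dry.getVoice() => int vd;
wet.getVoice() => int vw;

// per-voice dry/wet is now just two gain settings;
// since the reverb is linear and time-invariant, summing at the
// dac equals giving this voice its own reverb send
dry.voiceGain( vd, 0.8 );   // straight signal level for this voice
wet.voiceGain( vw, 0.3 );   // reverb send level for this voice

dry.play( vd, 1 );
wet.play( vw, 1 );
2::second => now;
```

Any per-voice operation (rate, position, looping) would simply be applied identically to both instances.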