2008/5/20 Mike McGonagle <mjmogo@gmail.com>:
Just curious, Kassen, you used the word "Mapping" so much that it seems to have lost some sort of context. Might you give a description of what you mean, and maybe an example of what you do with it?


Yes, of course!

What I mean is that on the one hand we have some sort of sound-generating setup (say, a few Ugens in our case), and on the other we have some input like MIDI, HID, a musical score, or a table containing the weather, tides and wildlife observed on a local beach over a month.

By "mapping" I mean the way the second is linked to the first, especially if this is done in a particularly interesting way. On a simple hardware synth, for example, you might link a high MIDI "velocity" to a higher filter cutoff and a shorter envelope attack time. There it's fairly easy and straightforward (most of the time); it stands to reason that for weather and tides to become "musical" a bit more thought may be needed.
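To make the hardware-synth example concrete, here's a minimal sketch in plain Python (not tied to any synth or to ChucK; the function name and parameter ranges are my own made-up illustrations) of one input, MIDI velocity, steering two synthesis parameters at once:

```python
def map_velocity(velocity):
    """Map MIDI velocity (0-127) to a filter cutoff (Hz) and attack time (s).

    The ranges here are illustrative, not from any real instrument.
    """
    v = max(0, min(127, velocity)) / 127.0  # normalise to 0..1
    cutoff = 200.0 + v * 7800.0             # harder hit -> brighter tone
    attack = 0.100 - v * 0.095              # harder hit -> snappier envelope
    return cutoff, attack

# A soft note comes out dark and slow, a hard note bright and snappy:
soft_cutoff, soft_attack = map_velocity(20)
hard_cutoff, hard_attack = map_velocity(120)
```

The point is simply that a single gesture fans out into several parameters, much like blowing a sax harder changes more than just the volume.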

Now, why it's important to me: some of this is personal observation, some of it known science. In acoustical instruments, and in the "real world" in general, a single factor will often affect the sound of an object in several ways at the same time. A sax, for example, may produce low, deep and sensitive notes, and it gets louder when blown harder... but you can't (easily) get tones that are loud as well as deep and sensitive, because blowing harder won't just affect the volume but also the tone. Notes on the higher keys of the piano don't last as long as lower ones either, for example, despite centuries of research.

These linked parameters are limitations of acoustical instruments that we can get around electronically with relative ease, but I wonder if sidestepping those limitations is always useful. Typically a sound gets generated by something acting on something else. The human ear (and hearing psychology) is very sensitive to this, as it tells us a lot about our surroundings. If we hear a sound at night, our survival may depend on determining very quickly whether it was made by something larger or smaller than ourselves and what it was doing, so we -very quickly- try to analyse what made the sound and why. We turn out to be quite good at this; if we weren't, we wouldn't be here anymore.

So: acoustical instruments are relatively easy for our ears to analyse. By listening, we can figure out a lot about their structure, and from that a lot about the forces acting on them, which in turn tells us something about the way the musician is acting on the instrument.

As musicians, on the other hand, our experience with natural phenomena makes acoustical instruments relatively predictable on an intuitive level.

Hence: I feel that if we want to make an instrument that's intuitive to control while being quite complicated internally, or if we want to convey a certain feeling through some set of sounds, it makes a lot of sense to take inspiration from nature and physics for the way we link parameters to each other and to the interface.

This need not all be completely direct. One strategy I use at times is to take into account how acoustical instruments have a "state", and how this state may have to "change its balance" when some force acts on it. To (crudely) represent this for non-physically-modelled instruments, I sometimes determine the rate of change of a given control and use that as a parameter for some element (like the amount of noise), while the actual value of the control itself may affect something else (like pitch).
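A rough sketch of that strategy, again in plain Python with names and scaling factors of my own invention: track how fast a control is moving, feed the rate to one parameter (noise amount) and the value itself to another (pitch).

```python
class ControlState:
    """Crude stand-in for an instrument's 'state': remembers the last
    control reading so the rate of change can be derived."""

    def __init__(self):
        self.prev = 0.0

    def update(self, value, dt):
        """Take a new control reading and the time since the last one;
        return (pitch, noise_amount)."""
        rate = abs(value - self.prev) / dt  # how hard the control is "pushed"
        self.prev = value
        pitch = value                       # the value itself drives pitch
        noise = min(1.0, rate * 0.1)        # fast movement adds noise, clamped
        return pitch, noise
```

A slow, gentle gesture then yields an almost pure tone, while yanking the same control through the same range adds a burst of noise, a bit like the breathiness you get when over-blowing a wind instrument.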

I can't give any hard and fast rules for how to do this, and lots of trial and error is often needed, but reading a bit about acoustics, psychoacoustics and interface design does help a lot. Mentally picturing an instrument while fingering (or footing, or...) the interface before you start coding helps as well.

There is -obviously- no need for electronic sound to have anything to do with nature at all, but it does pay to keep in mind that when sounds become too "unlike nature", the ear (and hearing psychology) will "disbelieve" them, give up trying to relate them to the listener on a practical level, and a different kind of hearing kicks in. This may be exactly what you want at times, but personally I think it's easier to get an emotional response from a listener if you (partially) engage the side of hearing psychology that's in charge of telling the sound of a lover's breath from that of a tiger.

Right, that was quite a bit of stuff to gloss over really quickly, but that's basically the core of it for me.

Hope that clarifies things; I also hope it gave rise to lots of questions (I know I have a lot, and they are lots of fun!).
 
Yours,
Kas.