Yo All....I'm trying to download and study some of the best music out there which features granular synthesis. I haven't heard much yet beyond Iannis Xenakis, but I'd love to know what your favorites are. Perhaps it would be worthwhile putting up a torrent of the same. If possible, kindly provide links to mp3 files or any other type of free downloads. (What do you think of Jaron Lanier's music?) http://www.jaronlanier.com/musicdownload.html cheers! ------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
2008/5/19 AlgoMantra
Yo All....I'm trying to download and study some of the best music out there which features granular synthesis. I haven't heard much yet beyond Iannis Xenakis, but I'd love to know what your favorites are.
Replying on the ChucK list since it seems more topical (?).

Mostly I'm not so into either the "6 speakers, grains, stroke your chin while sitting on a chair, debate afterwards" style or the "pull an amen break through the latest, hottest VST" thing. I've heard a bit too much of that and to my ear most of it lacks feeling, probably because grains are often quite hard to control. Many times I feel randomness plays too large a role as well.

What I really like, though, is Michel Waisvisz's performances, which are based on STEIM's "LiSa", which in turn uses grains and is controlled in realtime using his "Hands" controller. http://www.youtube.com/watch?v=SIfumZa2TKY

I'm not sure how much of his music was recorded at all; I think he stuck to live performing for years. He lives in (I think) Amsterdam and I in The Hague (45 minutes by train), so I got to see him play a few times. I think the main thing that attracts me in his work is how much of it is done directly and in realtime with very little (if any?) randomness.

Oh, and AGF's song "Piano's", I like that one a lot as well.

Yours, Kas.
Hello,
I'll add my voice to the chorus (cloud? swarm?) recommending Roads'
Microsound. I must admit that I've never really managed to properly
listen to and enjoy any of his compositions, finding them a bit on the
cold academic side, but that may reflect a lack of time and effort on my
part more than anything. Hey, who said music had to be about 'feeling',
anyway? ;-)
Also, Tim Blackwell has done some good work with swarms / flocking behaviour
simulations mapped to granular synthesis:
http://www.timblackwell.com/
Actually, I've done some similar things too FWIW, but nothing online etc at
the moment. I think it's an interesting approach as one has lots of data
that might otherwise be generated randomly / stochastically that can be
mapped quite naturally to granular synthesis. At the same time, it is
possible to interact with the system quite intuitively using a device with a
few degrees of freedom (like analysis of a normal acoustic instrument, in
Tim's case). To me, that kind of interaction is more interesting than total
'control'; that may be getting off-topic in a way, but given the sheer
volume of numbers that are required to drive granular synthesis, the mapping
and interaction tends to be particularly important.
Even straight randomness can have its place, though, and often
'randomness' is stochastic in a way that is informed by physics equations
etc.; I think that was the case with Riverrun, for example. I suppose that
may be where one starts getting into chin-stroking territory...
Cheers,
Peter
p.s. there is another list called microsound; might be of interest.
http://microsound.org/
_______________________________________________ chuck-users mailing list chuck-users@lists.cs.princeton.edu https://lists.cs.princeton.edu/mailman/listinfo/chuck-users
2008/5/20 Peter Todd
Hello,
Hi!
I'll add my voice to the chorus (cloud? swarm?) recommending Roads' Microsound. Although I must admit that I've never really managed to properly listen to and enjoy any of his compositions, finding them a bit on the cold academic side, but that may reflect a lack of time and effort on my part more than anything. Hey, who said music had to be about 'feeling', anyway? ;-)
To be clear: I too recommended Microsound last evening (my time) in response to Algo mentioning Gabor (off list). Roads' compositions are indeed very abstract, but I love the book in how it addresses many of the issues I had with grains in how they were commonly used, and the book is pleasantly personal, I feel.
Also, Tim Blackwell has done some good work with swarms / flocking behaviour simulations mapped to granular synthesis: http://www.timblackwell.com/
Actually, I've done some similar things too FWIW, but nothing online etc at the moment. I think it's an interesting approach as one has lots of data that might otherwise be generated randomly / stochastically that can be mapped quite naturally to granular synthesis. At the same time, it is possible to interact with the system quite intuitively using a device with a few degrees of freedom (like analysis of a normal acoustic instrument, in Tim's case). To me, that kind of interaction is more interesting than total 'control'; that may be getting off-topic in a way, but given the sheer volume of numbers that are required to drive granular synthesis, the mapping and interaction tends to be particularly important.
Yes, I agree. I used to work a lot with "chorus" type sounds, not the popular effect but the way an actual chorus (of people) works; I'd use a few parallel tone generators to build up a single sound. At the start of a note they'd play at the set pitch plus some random offset, and over the course of the note the scaling of that offset would decrease, leading to a single pitch, like singers tuning to each other. Here it's quite natural to map the amount of randomness at the start to the note's velocity, as real instruments tend to be harder to control at higher volumes. These particular tones probably wouldn't be called "granular", but that's one example of how I look at randomness and controller mappings where some randomness makes a lot of sense.

What really changed the way I look at mappings, and what I'd recommend in addition to Roads' notes on controller mappings for grains, is Stephen Beck's article "Designing Acoustically Viable Instruments in Csound", which can be found in The Csound Book. This article contains some examples in Csound but it's very readable for ChucKists who may not read Csound. Recommended.
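[Not Kassen's actual code, but a rough ChucK sketch of the idea he describes: parallel oscillators start with a velocity-scaled random detune that shrinks to zero over the note. All the numbers are arbitrary choices.]

```chuck
// sketch: a "chorus" of oscillators converging on a single pitch,
// where louder (higher-velocity) notes start more out of tune
4 => int N;
SinOsc voices[N];
float offsets[N];

fun void playNote( float freq, float velocity, dur length )
{
    for( 0 => int i; i < N; i++ )
    {
        voices[i] => dac;
        0.2 / N => voices[i].gain;
        // random detune in Hz, scaled by velocity
        Math.random2f( -1.0, 1.0 ) * velocity * 15.0 => offsets[i];
    }
    // over the course of the note, scale the offsets down to zero
    100 => int steps;
    for( 0 => int s; s < steps; s++ )
    {
        1.0 - ( s $ float ) / steps => float scale;
        for( 0 => int i; i < N; i++ )
            freq + offsets[i] * scale => voices[i].freq;
        length / steps => now;
    }
    for( 0 => int i; i < N; i++ ) voices[i] =< dac;
}

playNote( 220.0, 0.8, 2::second );
```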
Even straight randomness can be have its place, though and often, 'randomness' is stochastic in a way that is informed by physics equations etc; I think that was the case with Riverrun, for example. I suppose that may be where one starts getting into chin-stroking territory...
Yes, of course. Sounds like rain or the ocean can be beautiful and indeed touch one emotionally without any need for a composer to get involved... and those are purely random (at least for practical purposes).

I also have to say that concepts are great, but what gets to me is pieces where the concept seems to have been turned into sound directly without the composer keeping an ear on the end result (some schools of composition actively encourage that at times) or without relating it back to a listener. I still go to concerts like this at times, and while these pieces are often interesting on a technical level they tend not to touch me emotionally, which, to return to your point above, I would dare say is a nice property for art to have. Call me old-fashioned, but all of my favourite works of art (some of which are *also* quite abstract) touch me emotionally.

I don't feel this is an inherent issue of grains but more one of mappings, which just happen to be very hard to do well for grains.

On an entirely personal note: I used to spend most of my time for a given piece on sound design, yet lately I've been most happy with relatively simple sounds with very careful mappings. Right now I'm working on an instrument that sound-wise is (at the moment) just a PulseOsc with a LPF, ADSR and a (custom) overdrive. Extremely simple stuff, yet with good mappings (I'm using a tilt-sensing joypad) even something simple like that can be very evocative as an instrument. I plan to develop the sound-generating bits further later, but first I want to get my mappings right.
p.s. there is another list called microsound; might be of interest. http://microsound.org/
Yes, I used to be on it years ago. I had to unsubscribe when it became swamped with politics and increasingly abstract language. IMHO grains are already hard enough to control without using language seemingly meant to obfuscate what's actually being said. I thought I'd leave those debates to those who enjoy them and try to have fun with music on my own instead (which is not to say there weren't people sharing interesting ideas as well, and in the time since it may have changed!). Yours, Kas. PS: IMHO, IMHO & IMHO. No offence intended at all to others with different tastes and experiences.
Just curious, Kassen, you used the word "mapping" so much that it seems to
have lost some sort of context. Might you give a description of what you
mean, and maybe an example of what you do with it?
Thanks...
Mike
-- Peace may sound simple—one beautiful word— but it requires everything we have, every quality, every strength, every dream, every high ideal. —Yehudi Menuhin (1916–1999), musician
2008/5/20 Mike McGonagle
Just curious, Kassen, you used the word "mapping" so much that it seems to have lost some sort of context. Might you give a description of what you mean, and maybe an example of what you do with it?
Yes, of course! What I mean is that on the one hand we have some sort of sound-generating setup (say a few UGens, in our case) and on the other we have some input like MIDI, HID, a musical score or a table containing weather, tides and wildlife observed on a local beach over a month. By "mapping" I mean the way the second is linked to the first, especially if this is done in a particularly interesting way. On a simple hardware synth you might, for example, link a high MIDI "velocity" to a higher filter cutoff and a shorter envelope attack time. There it's fairly easy and straightforward (most of the time); it stands to reason that for weather and tides to become "musical" a bit more thought may be needed.

Now, why it's important to me: some of this is personal observation, some of it known science. In acoustical instruments, and in the "real world" in general, a single factor will often affect the sound of an object in various ways at the same time. A sax, for example, may produce low, deep and sensitive notes, and it gets louder when blown harder... but you can't (easily) get tones that are loud as well as deep and sensitive, because blowing harder won't just affect volume but also the tone. Notes on the higher keys of the piano don't last as long as lower ones either, for example, despite centuries of research. These linked parameters are limitations of acoustical instruments that we can get around electronically with relative ease, but I wonder if sidestepping those limitations is always useful.

Typically a sound gets generated by something acting on something else. The human ear (and hearing psychology) is very sensitive to this, as it tells us a lot about our surroundings. If we hear a sound at night our survival may depend on determining very quickly whether it was made by something larger or smaller than ourselves and what it was doing, so we -very quickly- try to analyse what made the sound and why.

We turn out to be quite good at this; if we weren't, we wouldn't be here anymore. So: acoustical instruments are relatively easy for our ears to analyse; by listening, we can figure out a lot about the structure, and because of this figure out a lot about the forces acting on it, which in turn tells us something about the way the musician is acting on it. As musicians, on the other hand, our experience with natural phenomena makes acoustical instruments relatively predictable on an intuitive level.

Hence, I feel that if we want to make an instrument that's intuitive to control while it may be quite complicated internally, or if we want to convey a certain feeling from some set of sounds, it makes a lot of sense to take inspiration from nature and physics for the way we link parameters to each other and to the interface.

This need not all be completely direct. One strategy I use at times is to take into account how acoustical instruments have a "state", and that this state may have to "change its balance" when some force is acting on it. To (crudely) represent this for non-physically-modeled instruments I sometimes determine the rate of change for a given control and use that as a parameter for some element (like the amount of noise) while the actual value generated by the parameter itself may affect something else (like pitch). I can't give any hard and fast rules for how to do this, and lots of trial and error is often needed, but reading a bit about acoustics, psychoacoustics and interface design does help a lot. Mentally picturing an instrument while fingering (or footing, or...) the interface before you start coding helps as well.

There is -obviously- no need at all for electronic sound to have anything to do with nature, but it does pay off to keep in mind that when sounds become too "unlike nature" the ear (and hearing psychology) will "disbelieve" them, give up trying to relate them to the listener on a practical level, and a different kind of hearing kicks in.

This may be exactly what you want at times, but personally I think it's easier to get an emotional response from a listener if you (partially) engage the side of hearing psychology that's in charge of sorting the sound of a lover's breath from that of a tiger.

Right, that was quite a bit of stuff to glance over really quickly, but that's basically the core of it for me. Hope that clarifies; I also hope it gave rise to lots of questions (I know I have a lot and they are lots of fun!). Yours, Kas.
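[A small, made-up ChucK illustration of that rate-of-change idea, not Kassen's actual instrument: the control value drives pitch while how fast it moves drives a noisy "forced" component. A random walk stands in for a real controller here.]

```chuck
// sketch: value -> pitch, rate of change -> noise amount
SawOsc tone => LPF filter => dac;
Noise breath => Gain noiseAmp => dac;
0.3 => tone.gain;
1200.0 => filter.freq;
2.0 => filter.Q;
0.0 => noiseAmp.gain;

0.5 => float val;
0.0 => float lastVal;
while( true )
{
    // random walk standing in for a joystick axis or MIDI CC
    Math.random2f( -0.1, 0.1 ) +=> val;
    Math.min( 1.0, Math.max( 0.0, val ) ) => val;

    // the value itself maps to pitch
    110.0 + val * 440.0 => tone.freq;

    // how fast the control moves maps to the noisy, "forced" part
    Math.fabs( val - lastVal ) * 2.0 => noiseAmp.gain;

    val => lastVal;
    50::ms => now;
}
```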
On Mon, May 19, 2008 at 8:05 PM, Kassen
Hope that clarifies, I also hope it gave rise to lots of questions (I know I have a lot and they are lots of fun!).
Thanks, while I still have to read your whole post, I kind of figured this is what you meant. Guess after a while of hearing a particular word, it starts to lose some of its meaning... I will have to read your whole post now, as it looks on first glance to be worth reading MORE than once... Just as a thank you, I also ordered a copy of Microsound, and hope to have it next week. Mike
Yours, Kas.
2008/5/20 Mike McGonagle
Thanks, while I still have to read your whole post, I kind of figured this is what you meant. Guess after a while of hearing a particular word, it starts to lose some of its meaning... I will have to read your whole post now, as it looks on first glance to be worth reading MORE than once...
If it looks like this is "hard" stuff, that's entirely my fault. On some level you already know all of this: knocking on a door with the palm of your hand will sound different from how it does when you knock with your knuckles. It'll sound different in a few ways at the same time (spectral content, volume, decay time, etc.) after changing a single parameter. The ways in which it sounds different could also be accomplished in other ways, like for example knocking harder. There's nothing new here; it's just like so many other things one might want to put in code: you have to be conscious of the phenomenon to type it up (or take it into account).
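[A made-up ChucK sketch of that "one cause, several sonic effects" idea, not anyone's actual instrument: a single "force" value makes a simple knock louder, brighter and longer-ringing at once.]

```chuck
// sketch: one "force" parameter changes several aspects of a knock
Impulse hit => ResonZ body => dac;

fun void knock( float force )
{
    // harder knocks are louder, brighter, and ring a bit longer
    150.0 + force * 120.0 => body.freq;  // brighter
    2.0 + force * 30.0 => body.Q;        // longer ring
    force => hit.next;                   // louder
    500::ms => now;
}

knock( 0.2 );
knock( 0.9 );
```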
Just as a thank you, I also ordered a copy of Microsound, and hope to have it next week.
Not being Curtis Roads I'm not sure exactly *how* this thanks me but I appreciate the gesture :¬). I don't think you'll regret that buy, it's the kind of book that'll be with you for years as a reference and source of inspiration. Happy synthesising, Kas.
Not being Curtis Roads I'm not sure exactly *how* this thanks me but I appreciate the gesture :¬). I don't think you'll regret that buy, it's the kind of book that'll be with you for years as a reference and source of inspiration.
I read the first few pages on Google's Preview that you sent, and it's a bomb. Immediately requested a friend in the US to ship one to me. I guess I'm riding a hunch that the applications of granular synthesis will not be limited to 'music'.
2008/5/21 AlgoMantra
I read the first few pages on Google's Preview that you sent, and it's a bomb. Immediately requested a friend in the US to ship one to me. I guess I'm riding a hunch that the applications of granular synthesis will not be limited to 'music'.
Oh, no, it'll work nicely for sand paintings or mosaics and... :¬) BTW, to return to your question on AGF's song: I used to have an mp3 of it but it got lost, which I'm not that sad about because I have it on vinyl as well. Ah, re-found it, it's on her page: http://www.poemproducer.com/freemusic.php Not the be-all and end-all of grains, but it's a cute little song that touches me emotionally, which is worth something as well :¬) Cheers, Kas.
On Tue, May 20, 2008 at 2:37 PM, Kassen
2008/5/20 Mike McGonagle
: Thanks, while I have to still read your whole post, I kind of figured this is what you meant. Guess after a while of hearing a particular word, it starts to lose some of its meaning... I will have to read your whole post now, as it looks on first glance to be worth reading MORE than once...
If it looks like this is "hard" stuff, that's entirely my fault. On some level you already know all of this: knocking on a door with the palm of your hand will sound different from how it does when you knock with your knuckles. It'll sound different in a few ways at the same time (spectral content, volume, decay time, etc.) after changing a single parameter. The ways in which it sounds different could also be accomplished in other ways, like for example knocking harder.
My comment about having to read it was more because I am at work, and didn't really have time at that moment. I guess my interest in hearing your descriptions is because I don't (at least I haven't yet) tried to create a realtime interface, and my idea of "mapping" is how to control various parameters with various types of control signals. One thing that I have done is to use fractal equations (who hasn't, right?) to control various parameters of a simple sinewave grain. The parameters I use are onset, pitch, amplitude, attack slope, decay slope, phase, and placement. On the one hand, I have found some really nice combinations, but more often than not it still feels like I am "shooting in the dark".

I have thought about trying to implement some sort of "searching" method that allows me to "audition" various random parameter-set possibilities, and then use those results to "compose" something later.

Also, the effect of "changing a single parameter" can be very minimal or extremely different. I have used my same set of parameters (onset, etc.) without addressing the phase of the sinewave, and it amazes me how "dull" the sound is. Just that one parameter can make the difference between "an experiment" and a "piece of music/sound sculpture".
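[A single sinewave grain along those lines might look something like this in ChucK. This is a sketch, not Mike's code; "placement" is interpreted here as stereo pan, and onset is simply when the grain is sporked.]

```chuck
// sketch: one sinewave grain with explicit phase, attack and decay
fun void grain( float freq, float amp, float phase,
                dur attack, dur decay, float placement )
{
    SinOsc s => ADSR env => Pan2 pan => dac;
    freq => s.freq;
    phase => s.phase;      // the parameter that's easy to forget
    amp => s.gain;
    placement => pan.pan;  // "placement" as stereo position
    env.set( attack, 0::ms, 1.0, decay );
    env.keyOn();
    attack => now;
    env.keyOff();
    decay => now;
}

// onset is just when you spork each grain
spork ~ grain( 440.0, 0.5, 0.0, 5::ms, 40::ms, -0.5 );
50::ms => now;
spork ~ grain( 660.0, 0.3, pi / 2, 2::ms, 80::ms, 0.5 );
200::ms => now;
```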
There's nothing new here, it's just like with so many other things one might want to put in code; you have to be concious of the phenomenon to type it up (or take it into account).
I think this concept is very similar to something that Wendy Carlos said in an interview. She had originally thought that through the use of recording she would be able to eliminate the "performer", but in doing so she found herself being forced into the role of "performer". So to speak, to get the "ghost in the machine" to sing, she had to learn how to play the instrument. Isn't that what we are trying to do here? Learn how to play this "granular instrument"?
Just as a thank you, I also ordered a copy of Microsound, and hope to have it next week.
Not being Curtis Roads I'm not sure exactly *how* this thanks me but I appreciate the gesture :¬). I don't think you'll regret that buy, it's the kind of book that'll be with you for years as a reference and source of inspiration.
I was thinking it was you that mentioned it on the list here, maybe it was someone else..
Happy synthesising,
I will have to keep remembering this when I get to some of those parameter combinations that aren't all that good... Mike
Kas.
Hi Mike, I am Prabhu Ram, a newbie to the ChucK world. Reading the post, could you kindly mention which book is being appreciated? Regards, Prabhu
On Thu, May 22, 2008 at 1:23 PM, Prabhu Ram
Hi Mike, I am Prabhu Ram, a newbie to the ChucK world. Reading the post, could you kindly mention which book is being appreciated?
MICROSOUND by Curtis Roads. MIT Press. (Err Prabhu....are you writing from anywhere within the Indian subcontinent? That would make two of us. ) ------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
Tue 2008-05-20 at 21:37 +0200, Kassen wrote:
2008/5/20 Mike McGonagle
: Just as a thank you, I also ordered a copy of Microsound, and hope to have it next week.
Not being Curtis Roads I'm not sure exactly *how* this thanks me but I appreciate the gesture :¬). I don't think you'll regret that buy, it's the kind of book that'll be with you for years as a reference and source of inspiration.
Happy synthesising, Kas.
I'm not 100% sure I understand the concept of granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
I'm not 100% sure I understand the concept of granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
I would lurrrv an example in Chuck too! ------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
I can't give you a much better description than what Kassen's already given,
as I'm not really that knowledgeable on the subject either, but baeksan over
on the monome forums has posted what he calls a "multigrain granular synth"
patch written in ChucK.
The forum thread is at http://post.monome.org/comments.php?DiscussionID=1011
Best,
-K
On Thu, May 22, 2008 at 4:55 AM, AlgoMantra
I'm not 100% sure I understand the concept of
granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
I would lurrrv an example in Chuck too!
------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
Trying to add a little bit... the main idea of granular synthesis is to
make a continuous sound from a bunch of sound grains. There are different
specifications for the typical grain size; I prefer to stick with Barry
Truax, saying that _typical_ grains have durations between 20 and 50 ms.
These collections of grains are called clouds, which preferably evolve in
time in different ways (static sounds are usually not interesting) and
in which the grains are superimposed at least a little bit.
IMHO, to start with, it's a good idea to make a cloud with sinusoidal grains,
all equal and overlapping equally as well. Then, as time passes, change just
one parameter of them all, like duration, fade time, or spectral content. In
a second experiment I would play with sampled grains, which usually
give nice results ;o)
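For what it's worth, that first experiment (a cloud of equal, equally overlapping sinusoidal grains, with one parameter changed over the cloud's lifetime) can be sketched in a few lines of Python. This is purely my own illustration, not anything from the thread: the function names, the 44.1 kHz rate, the Hann window, and the choice of sweeping grain duration are all assumptions.

```python
import math

SR = 44100  # sample rate (assumption)

def sine_grain(freq, dur_s, sr=SR):
    """One Hann-windowed sinusoidal grain."""
    n = int(dur_s * sr)
    return [math.sin(2 * math.pi * freq * i / sr)
            * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

def cloud(freq, grain_dur, hop, total_s, sr=SR):
    """Overlap-add equal grains; hop < grain_dur gives the overlap
    described above. As time passes, change just one parameter --
    here the grain duration shrinks linearly from 50 ms toward 20 ms."""
    out = [0.0] * int(total_s * sr)
    t = 0.0
    while t < total_s:
        d = grain_dur * (1.0 - 0.6 * t / total_s)  # the evolving parameter
        g = sine_grain(freq, d, sr)
        start = int(t * sr)
        for i, s in enumerate(g):
            if start + i < len(out):
                out[start + i] += s * 0.5  # scale so overlaps stay in range
        t += hop
    return out

# one second of cloud: 50 ms grains at 25 ms hops (50% overlap at the start)
sig = cloud(440.0, 0.05, 0.025, 1.0)
```

As the grains shrink below the hop size, gaps appear between them, so the texture drifts from continuous toward rhythmical, which is exactly the kind of audible change a single-parameter sweep is meant to expose.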
About an example of it: I've made a highly experimental Python script to
impose group theory on granular synthesis. It is not a typical GS algorithm
as it does not deal with a high density of grains, and I find it better for
making melodic lines. It runs only on Linux and requires Sage
(http://www.sagemath.org/) installed, with external Python packages
installed on its own Python interpreter.... Anyway, it's here:
http://cortex.lems.brown.edu/~renato/sonic-art/nics
cheers,
renato
2008/5/22 kevin
I can't give you a much better description than what Kassen's already given, as I'm not really that knowledgeable on the subject either, but baeksan over on the monome forums has posted what he calls a "multigrain granular synth" patch written in ChucK.
The forum thread is at http://post.monome.org/comments.php?DiscussionID=1011
Best, -K
On Thu, May 22, 2008 at 4:55 AM, AlgoMantra
wrote: I'm not 100% sure I understand the concept of
granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
I would lurrrv an example in Chuck too!
------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
thank you
2008/5/22 AlgoMantra
I'm not 100% sure I understand the concept of
granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
I would lurrrv an example in Chuck too!
OK, pasted below is a small ChucktudE in grains and blitsaw. First it
synthesises a theme, then it uses grains to manipulate the theme..... Well,
that was the plan. The result is more like "first it synthesises a theme,
then does stuff with it, and an ending is bolted on because I got a bit bored
with it and grains are hard to use on themes". :¬)
Still, it demonstrates a few nice tricks with grains and it's a start for
exploration, I hope. I also strongly suspect that requesting and using
voices from LiSa too quickly (a few hundred per second) will crash the whole
VM. Badly. These crashes may also have been caused by some other factor;
there are quite a few variables flying around, after all. It uses some tricks
with time and timing which could be educational and/or confusing.
==================================
//"A night with LiSa", composed by Kassen
//permision granted to copy for fun and educational value
//No waranties, no refunds; mind your speakers and CPU
//remixing and extending strongly encouraged
<<<"let's pretend we're Bach and base the theme on a name!", "">>>;
float G, E;
43 => Std.mtof => G;
52 => Std.mtof => E;
BlitSaw s => Gain amp => dac;
.8=> s.gain;
dac => LiSa l => dac;
4::second => l.duration;
1 => l.record;
3=> s.harmonics;
3 => amp.op;
SinOsc lfo => amp;
second => lfo.period;
G => s.freq;
second => now;
//use the LFO to cover up clicks in the sound
.25::second => lfo.period;
repeat(8)
{
s.harmonics() + 2 => s.harmonics;
.125::second => now;
}
.5::second => lfo.period;
for (0 => int n; n< 4; n++)
{
s.harmonics() -3 => s.harmonics;
if (n%2) G => s.freq;
else E => s.freq;
.25::second => now;
}
5 => s.harmonics;
2::second => lfo.period;
1::second => now;
//stop recording, disconnect blitsaw
0 => l.record;
amp =< dac;
<<<"Now we use simple granualtion to repeat the theme a octave down", "">>>;
100 => int slices;
(l.duration() / slices) / 2 => dur ramprate;
l.rate(0, .5);
l.rate(1, .5);
for (0 => int n; n
2008/5/22 Martin Ahnelöv
I'm not 100% sure I understand the concept of granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
Well, the basic idea is to take some signal in a buffer and play back bits (grains) of it, especially small bits, typically around 50::ms (give or take a few octaves). The 50::ms comes from that being the threshold where human hearing will still detect pitch (20Hz). You can play back just a few grains every once in a while, resulting in a rhythmical pattern, or lots & lots of them for noisy textures. A simple intro is found here as well: http://en.wikipedia.org/wiki/Granular_synthesis There are some examples in the LiSa examples ( /examples/special/ ), but those might not be general enough for you. I'll cook up an example for you tonight or tomorrow or so, could be fun. :¬) Kas.
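The "play back small windowed bits of a buffer" idea can be sketched outside ChucK too. Below is my own minimal Python illustration (not LiSa code); the Hann window, the gain scaling, and the `read_jitter` parameter are assumptions I've added for the example:

```python
import math
import random

SR = 44100  # sample rate (assumption)

def granulate(src, grain_ms=50, hop_ms=25, read_jitter=0, sr=SR):
    """Replay short Hann-windowed slices (grains) of src, overlap-added.
    hop_ms < grain_ms overlaps grains into a continuous texture; a much
    larger hop leaves gaps between grains and sounds rhythmical instead."""
    glen = int(grain_ms * sr / 1000)
    hop = int(hop_ms * sr / 1000)
    out = [0.0] * len(src)
    for start in range(0, len(src) - glen, hop):
        # optionally scatter the read position: the grain played at
        # `start` may come from somewhere else in the buffer
        pos = max(0, min(len(src) - glen,
                         start + random.randint(-read_jitter, read_jitter)))
        for i in range(glen):
            win = 0.5 * (1 - math.cos(2 * math.pi * i / (glen - 1)))  # Hann
            out[start + i] += src[pos + i] * win * 0.5
    return out

# stand-in source material: one second of a 220 Hz sine
src = [math.sin(2 * math.pi * 220 * i / SR) for i in range(SR)]
texture = granulate(src, read_jitter=2000)
```

With `read_jitter` at zero this nearly reconstructs the input; raising it smears the buffer's time structure while keeping its timbre, which is a large part of what makes granulation interesting.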
I'd suggest it can even be generalised a little more than Kassen has
described; grains may be purely synthesised, rather than using a signal in a
buffer. Also, I noticed someone posted this link from Sound On Sound
before; it seems to function as a good intro, so I'll repeat it:
http://www.soundonsound.com/sos/dec05/articles/granularworkshop.htm
Cheers,
Peter
On Thu, May 22, 2008 at 2:54 PM, Kassen
2008/5/22 Martin Ahnelöv
: I'm not 100% sure I understand the concept of granular synthesis, and I can't afford a book right now, so I'm asking if you guys got an example/demo of the technique which I could study (preferably in ChucK)?
Well, the basic idea is to take some signal in a buffer and play back bits (grains) of it, especially small bits, typically around 50::ms (give or take a few octaves). The 50::ms comes from that being the threshold where human hearing will still detect pitch (20Hz). You can play back just a few grains every once in a while, resulting in a rhythmical pattern, or lots & lots of them for noisy textures. A simple intro is found here as well: http://en.wikipedia.org/wiki/Granular_synthesis There are some examples in the LiSa examples ( /examples/special/ ), but those might not be general enough for you.
I'll cook up an example for you tonight or tomorrow or so, could be fun. :¬)
Kas.
2008/5/22 Peter Todd
I'd suggest it can even be generalised a little more than Kassen has described; grains may be purely synthesised, rather than using a signal in a buffer.
Yes! Shakers are great for this. The following simple example uses a set of parallel shakers. Each is playing a random harmonic of the current note and is retriggered after an amount of time that's a sub-harmonic of the current pitch. The result is a sort of random-ish noise that still conveys a hint of melody. I added some volume modulation to spice it up. To demonstrate how different sounds can still get a perception of pitch across, the sounds used are randomised every time the code is run. No warranties, no refunds. Please copy, please remix.
====================
8 => int grains;
64 => int root;
root => int note;
[0, 3, 9, 5, 7] @=> int melody[];

//a different sound every time we play it!
Std.rand2(0,22) => int offset;
now => time start;

repeat(grains) spork ~ synth();
me.yield();

while(1)
{
    for (0 => int n; n < melody.cap(); n++)
    {
        root + melody[n] => note;
        4::second => now;
    }
}

fun void synth()
{
    Shakers s => dac;
    me.id() + offset => s.preset;
    while(1)
    {
        //the "*1.0" is just to force the fraction into becoming a float
        Std.mtof(note) * ( (Std.rand2(1,3) * 1.0) / Std.rand2(1, 2) ) => s.freq;
        //add volume modulation
        s.noteOn( ((now - start) % second) / second );
        //the time between triggers affects the perception of pitch
        Std.rand2(8, 16)::second / Std.mtof(note) => now;
    }
}
========================
Cheers, Kas.
Kassen wrote:
... Yes, I agree. I used to work a lot with "chorus" type sounds, not the popular effect but the way an actual chorus (of people) works; I'd use a few parallel tone generators to build up a single sound. At the start of a note they'd play at the set pitch + some random offset, and over the course of the note the scaling of that offset would decrease, leading to a single pitch, like singers tuning to each other.
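As an aside, that detune-then-converge behaviour is easy to sketch numerically. Here is a hedged Python sketch of just the per-voice pitch trajectories (the function name, parameters, and linear decay are my own choices; a real implementation would feed these values to parallel oscillators):

```python
import random

def chorus_pitches(base_hz, voices=5, steps=10, spread_hz=8.0, seed=1):
    """Per-voice pitch over the course of one note: each voice starts at
    base_hz plus a random offset; the offset's scaling shrinks linearly
    to zero, so the voices 'tune to each other' like singers."""
    rng = random.Random(seed)
    offsets = [rng.uniform(-spread_hz, spread_hz) for _ in range(voices)]
    trajectory = []
    for k in range(steps):
        scale = 1.0 - k / (steps - 1)  # 1 at note start, 0 at note end
        trajectory.append([base_hz + o * scale for o in offsets])
    return trajectory

# five voices converging on A440 over ten control steps
traj = chorus_pitches(440.0)
```

Swapping the linear `scale` ramp for something non-linear (or for an optimiser nudging each voice toward the ensemble mean, as Michael suggests below with PSO) changes how the "tuning up" gesture feels.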
Cool, I would love to have a guitar-pedal-style effect that worked that way. I was thinking of using my implementation of PSO http://en.wikipedia.org/wiki/Particle_swarm_optimization but I suppose any non-linear optimization technique to bring the "chorusers" together would sound cool. Not sure how to do the realtime pitch-shifting though. michael
On Tue, May 20, 2008 at 7:19 AM, Michael Heuer
Not sure how to do the realtime pitch-shifting though.
Well, I'd probably use granular synthesis :-) In ChucK, that would involve some fairly intensive LiSa work, I guess.
2008/5/20 Peter Todd
On Tue, May 20, 2008 at 7:19 AM, Michael Heuer
wrote: Not sure how to do the realtime pitch-shifting though.
Well, I'd probably use granular synthesis :-)
In ChucK, that would involve some fairly intensive LiSa work, I guess.
We do have a pitch-shift UGen:
http://chuck.cs.princeton.edu/doc/program/ugen_full.html#PitShift You'd need to figure out where notes start, though, and it will only work on monophonic material as a signal treatment, but yes; you could. Yours, Kas.
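Since granular pitch shifting comes up here, this is my own naive Python sketch of the underlying trick (not how PitShift or LiSa work internally): read each grain from the buffer at a faster or slower rate than you overlap-add it back, so pitch changes while duration stays put. The grain/hop sizes, Hann window, and gain compensation are assumptions.

```python
import math

SR = 44100  # sample rate (assumption)

def grain_pitch_shift(src, ratio, grain=1024, hop=256, sr=SR):
    """Naive granular pitch shifter: read Hann-windowed grains from src
    resampled by `ratio` (linear interpolation), then overlap-add them
    at their original positions. ratio=2.0 shifts up an octave, 0.5 down."""
    out = [0.0] * len(src)
    for start in range(0, len(src) - grain, hop):
        for i in range(grain):
            pos = start + i * ratio        # read faster/slower than we write
            j = int(pos)
            if j + 1 >= len(src):
                break
            frac = pos - j
            sample = src[j] * (1 - frac) + src[j + 1] * frac
            win = 0.5 * (1 - math.cos(2 * math.pi * i / (grain - 1)))
            # 75%-overlapped Hann windows sum to ~2, so scale by 0.5
            out[start + i] += sample * win * 0.5
    return out

# quick demo: shift a quarter second of a 220 Hz sine up an octave
src = [math.sin(2 * math.pi * 220 * i / SR) for i in range(SR // 4)]
shifted = grain_pitch_shift(src, 2.0)
```

The grain boundaries introduce the phase discontinuities that give cheap granular shifters their characteristic grainy sound; with `ratio=1.0` the windows sum back to roughly the original signal.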
Oh, and AGF's song "Piano's", I like that one a lot as well.
I liked the Hands performance, Kas......do you have this AGF song on mp3? ------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
Oh, and...if you guys would like to share some mp3s on your computer, which include gran-synth wizardry, kindly do not hesitate to email them as attachments to: algomantra@gmail.com (I can't buy CDs and books off the internet, cuz I don't have a credit card. I'm a shameless pirate and freeloader.)
granular ... kim cascone: some samples from dust theories at http://www.cycling74.com/c74music/004 of course, many of the artists who released records on mille plateaux in the late 90s - early 2000s (check the clicks + cuts comps): http://www.mille-plateaux.net/mp/index.php4 take a look at the following labels as well: 12k, agf, apestaartje, häpna, anticipate, humme, kranky, room 40, raster-noton but really you should just get a hold of curtis roads' book microsound http://www.amazon.com/Microsound-Curtis-Roads/dp/0262681544 everything you ever wanted to know on the subject plus recommended listening. be well, eli On May 19, 2008, at 1:13 AM, AlgoMantra wrote:
Yo All....I'm trying to download and study some of the best music out there which features granular synthesis. I haven't heard much yet beyond Iannis Xenakis, but I'd love to know what your favorites are.
Perhaps it would be worthwhile putting up a torrent of the same. If possible, kindly provide links to mp3 files or any other type of free downloads.
(What do you think of Jaron Lanier's music?) http://www.jaronlanier.com/musicdownload.html
cheers! ------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
One of my all-time granular synthesis faves is Barry Truax' piece "Riverrun". It's a real tour-de-force of granular fun, and it is an elegant piece of music to boot. I don't think Barry has it on-line, but you should be able to get the CD (or maybe even get it through amazon's mp3 or iTunes). brad http://music.columbia.edu/~brad On May 19, 2008, at 1:13 AM, AlgoMantra wrote:
Yo All....I'm trying to download and study some of the best music out there which features granular synthesis. I haven't heard much yet beyond Iannis Xenakis, but I'd love to know what your favorites are.
Perhaps it would be worthwhile putting up a torrent of the same. If possible, kindly provide links to mp3 files or any other type of free downloads.
(What do you think of Jaron Lanier's music?) http://www.jaronlanier.com/musicdownload.html
cheers! ------- -.- 1/f ))) --. ------- ... http://www.algomantra.com
participants (12)
- AlgoMantra
- Brad Garton
- eli queen
- Kassen
- kevin
- Martin Ahnelöv
- Michael Heuer
- Mike McGonagle
- muhammedkrkyn@gmail.com
- Peter Todd
- Prabhu Ram
- Renato Fabbri