Hi there - I'm very new to ChucK so please be kind if I sound incredibly stupid, and very angry because I am going nuts here. (Kassen, I'm here - we had a very brief exchange about ChucK with toplappers)

I'm writing my scripts using SciTE and using the command-line VM. I have tried some ChucK with tutorials, the manual and helpful tips on forums, usually by someone too good to be true like Kassen. I want to know what people really mean by OTF (on the fly) or by "livecoding" in ChucK. If you prepare your files and code in advance and then just chuck the shreds in and out of the VM, it really is a bit like sequencing, rather than livecoding. And if I change the code in the file, save it, then the effects don't appear live, do they? I really don't see how I'm changing the code "while it's running" here.

Maybe I'm missing something freakin obvious, but I'm so frustrated having had to learn Csound, ChucK, SuperCollider and all sorts of new languages just because Python did not provide me with a simple audio processing module. All I wanted to do using Python was analyse the sound of a live flute playing and plot its frequency, and other characteristics, straight off the audio port. Now I'm a newbie at all of the above, and that's really fantastic because it hasn't solved a single problem. It has certainly added a few more. What a royal mess.

-------
1/f )))
-------
http://www.algomantra.com
On 9/17/07, AlgoMantra wrote:
(Kassen, I'm here - we had a very brief exchange about ChucK with toplappers)
Very good, for a moment I feared you had gone up the mountain and started meditating until GlucK arrived!
usually by someone too good to be true like Kassen.
This online communications stuff is really great, nobody is ever confronted with morning moods or remembers the time one climbed in through the window while drunk and so on. :¬)
I want to know what people really mean by OTF (on the fly) or by "livecoding" in ChucK.
Yes, very good, let's talk about this because you have a great point and IMHO it needs more emphasis, so that we could perhaps persuade Spencer to set some time aside and help us, because this would be a huge improvement to both livecoding and rapid prototyping.
If you prepare your files and code in advance and then just chuck the shreds in and out of the VM, it really is a bit like sequencing, rather than livecoding. And if I change the code in the file, save it, then the effects don't appear live, do they? I really don't see how I'm changing the code "while it's running" here.
Yes, true. It's still better than sequencing waves because you can have random or rule-based variations, but still. Changing, then saving files won't make them run; you need to replace them in the VM, but to do that they indeed need to be saved (or be inside of a (mini)Audicle window). So, you make a file called "mantra.ck" that does something marvellously interesting.
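For illustration only, "mantra.ck" could be as small as this sketch (the contents are invented; any ChucK program will do):

// mantra.ck : a minimal example shred
SinOsc s => dac;
.2 => s.gain;
// loop forever, picking a random pitch five times a second
while( true )
{
    Std.rand2f( 220., 880. ) => s.freq;
    200::ms => now;
}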
chuck --loop mantra.ck
Now it's running. So: you edit it into something yet better, then save it. Open a new terminal and go:
chuck = 1 mantra.ck
This will replace the first shred (being your original file) with the new one. Presto. This is cool and this is what people do when livecoding, though in that case the miniAudicle will make things a lot more convenient.

Now for the bad news: all variables that your file had, and all arrays that might by now be filled with data that determines what music is being generated, will be re-initialised. At least all other running shreds will be unaffected, and you can try to make the replacement at a musically sensible moment, but if your old code had just written a random yet nice melody that you now wanted to add reverb to, that melody will be gone. A workaround would be to store all such data in static members of public classes; those will survive until the end of the VM, so they are dependable, but doing this leads to significant overhead in code, which is one of the last things you need when livecoding. I've been meaning to experiment with this more and see how convenient I can make it for myself, but it's still ugly.

Now, ChucK does have a way to edit just parts of running code; it's part of the functionality of chuck --shell, but it's hard to use (meaning I didn't get it to work) and almost completely undocumented (it was in some corner of the wiki). The problem was that you had to navigate through the namespace, like a directory structure, using the terminal, then type exactly what you wanted to do.

What I would - perhaps naïvely - think would be *the* solution is having the miniAudicle provide a graphical front-end to this aspect of chuck --shell, with highlighting and hotkeys used to indicate what element in what scope we are trying to edit and replace/add. Last I heard on this was that Spencer (who manages the mini) seemed to find the idea interesting, but I gather Spencer's time is limited (which I stress I have a lot of respect for) and I have to admit that this idea could fall squarely in the "likely to explode if you sneeze" category. What's slightly mysterious to me is that so far nobody else has been particularly vocal about how useful this would be, so I'm quite happy that you bring it up now.
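To make that static-member workaround concrete, here is a minimal sketch (class and member names are invented; it assumes the usual ChucK rule that static object members are declared empty and allocated separately):

// memory.ck : add this once; the data survives later shred
// replacements, until the VM exits
public class Memory
{
    // static object members are declared, then allocated externally
    static int melody[];
}
// allocate storage once, e.g. right after the class definition
new int[16] @=> Memory.melody;

// later versions of mantra.ck then read/write Memory.melody
// instead of shred-local variables, e.g.:
// 60 + Std.rand2( 0, 12 ) => Memory.melody[0];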
What a royal mess.
Let's not exaggerate; writing musical algorithms on stage is still exciting and fun, even if modifications mean re-starting them, but there is certainly room for improvement.... That said, there is a lot of room for growth in a lot of aspects of ChucK and so there are priorities. Perhaps Ge, Graham and Perry, who have been livecoding in ChucK more than I have and at a higher profile, could share some ideas about how they deal with this? Perhaps I too am missing something fundamental.

Yours, Kas.
AlgoMantra wrote:
Maybe I'm missing something freakin obvious, but I'm so frustrated having had to learn Csound, ChucK, SuperCollider and all sorts of new languages just because Python did not provide me with a simple audio processing module. All I wanted to do using Python was analyse the sound of a live flute playing and plot its frequency, and other characteristics, straight off the audio port.
Have you tried PySndObj? Seems pretty powerful and not very complicated to use: http://sndobj.sourceforge.net/ It should be able to do fft etc.

best
joerg

--
http://joerg.piringer.net
http://www.transacoustic-research.com
http://www.iftaf.org
http://www.vegetableorchestra.org/
--- AlgoMantra wrote:
If you prepare your files and code in advance and then just chuck the shreds in and out of the VM, it really is a bit like sequencing, rather than livecoding. And if I change the code in the file, save it, then the effects don't appear live, do they?
In the little free time I am spending with ChucK, I am trying to figure this out as well! The best I get is editing one file while another is playing. This feels more like batch programming than real time.
Maybe I'm missing something freakin obvious, but I'm so frustrated having had to learn Csound, ChucK, SuperCollider and all sorts of new languages just because Python did not provide me with a simple audio processing module. All I wanted to do using Python was analyse the sound of a live flute playing and plot its frequency, and other characteristics, straight off the audio port.
It is annoying that no-one has wrapped a decent library for Python. But have you checked out my article on this topic? It could be that if you have simple needs, PyMedia or one of the other mentioned tools might do. Surf: http://diagrammes-modernes.blogspot.com/2007/08/music-control-tools-python-b...

-- robin

-----
Robin Parmar
robinparmar.com
See guys, I wouldn't raise a hellcry without having tinkered enough with what was available on the internet. Far too many people have suggested that I use PySndObj without having done any kind of realtime audio analysis on it themselves.
Allow me to restate my objective. I want "to read the sound of a live flute off the audio port in realtime, and analyse it using Python". Now...
This is a reply to me from one of the main PySndObj developers:
thinking a little more about this, I think there is no pitch tracker there (I need to add one...). So you can try csound:
See that? At least he understood my question somewhat.
Now here's a response from the gentleman at Pymedia:
Can you please check voice_recorder_player.py or voice_recorder.py from examples tar ball? May be it will resolve most of the issues.
He is answering a completely different question! I'm talking about intercepting data off a port, and he's talking of recording it. I had seen the example he's talking about but it made no sense in the context. And I was kinda lucky in that I know what a tarball is - most artists who dabble in technology come from diverse backgrounds. (I am one of the 2 or 3 new media artists in India.) So I find it odd that when newbies ask questions, developers answer very sweetly, but in code.
Perhaps the truth really is that adc => FFT => dac, which is so simple for ChucK etc., has no analog in Python, and people are just too ashamed to admit that they don't know how it's done. To use ChucK to do this, I will need to learn YET ANOTHER LANGUAGE called OSC or something, which will talk to messages from Python (which are messages originating in my phone coming via Bluetooth), so I can pretty much give up on realtime.
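(For what it's worth, the ChucK side really is short. A rough sketch, assuming a ChucK build with the UAna analysis framework; the FFT size and bin number below are arbitrary:)

// analyse the mic input: adc => FFT, print one bin's magnitude
adc => FFT fft => blackhole;
1024 => fft.size;
Windowing.hann( 1024 ) => fft.window;

while( true )
{
    // transform the most recent window of input
    fft.upchuck() @=> UAnaBlob blob;
    // magnitude of bin 10, via a complex-to-polar cast
    <<< ( blob.cval( 10 ) $ polar ).mag >>>;
    // hop by half a window
    512::samp => now;
}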
I hate Python. Ugh, no! I love it, but I hate where I am with this damn project.
*looks despondently at the wall picture of Lord Shiva, who has a familiar serpent tied around his neck like a scarf*
-------
1/f )))
-------
http://www.algomantra.com
Hi,
On 9/17/07, AlgoMantra wrote:
Allow me to restate my objective. I want "to read the sound of a live flute off the audio port in realtime, and analyse it using Python". Now...
What I don't understand is why you would learn CSound, SuperCollider and ChucK if all you want to do is use Python.
This is a reply to me from one of the main PySndObj developers:
thinking a little more about this, I think there is no pitch tracker there (I need to add one...). So you can try csound:
See that? At least he understood my question somewhat.
The question is a little vague so he pointed you in *one* direction.
Now here's a response from the gentleman at Pymedia:
Can you please check voice_recorder_player.py or voice_recorder.py from examples tar ball? May be it will resolve most of the issues. He is answering a completely different question! I'm talking about intercepting data off a port, and he's talking of recording it.
I think you are misunderstanding. He is telling you to look into the recorder example so that you can see how to capture live audio.
I had seen the example he's talking about but it made no sense in the context.
Which context? The context of capturing real-time audio, or the context of doing "an analysis" on the signal? It certainly makes no sense in the latter but a lot in the former.
And I was kinda lucky in that I know what a tarball is
You mean you acquire knowledge through luck? Or am I misunderstanding something.
- most artists who dabble in technology come from diverse backgrounds. (I am one of the 2 or 3 new media artists in India.) So I find it odd that when newbies ask questions, developers answer very sweetly, but in code.
All artists come from diverse backgrounds. And regardless of what you dabble at, you still have to follow the learning path, especially if you want to become somewhat proficient at it. If computers are your thing, you have to learn some basics about the computer and operating systems and how to use different applications. If you are proficient enough with computers to start coding audio applications in Python and you don't state your background but, instead, you ask a vague question, it is understandable that developers assume that you know what you're talking about. If you need hand-holding, which all newbies of the world need, you have to say so and state your problem with as much precision as possible, so that those who would like to help you do not need to do much guesswork.

Now, do you mean there are only 2-3 new media artists in India? Strange. A random Google hit: http://www.newmedia.sunderland.ac.uk/nmcr/india/ilinks.htm suggests that there are a few more.
Perhaps the truth really is that adc => FFT => dac, which is so simple for ChucK etc., has no analog in Python, and people are just too ashamed to admit that they don't know how it's done.
I never used Python for audio but I would assume that it is, in fact, possible. In any case, I find it hard to believe that after having learned CSound and SuperCollider you have not been able to achieve your goal of reading a live flute and analysing it (I don't know what kind of analysis you want to do and what you want to use the analysis data for). Have you looked at Pure Data? puredata.info. Perhaps this is a little more high-level than CSound or SC (or even ChucK). Also, there are Python wrappers for csound, so you can script the whole csound shebang with Python, if you're so inclined. So, if PySndObj doesn't cut it for you, do it with pyCSound.
To use ChucK to do this, I will need to learn YET ANOTHER LANGUAGE called OSC or something, which will talk to messages from Python (which are messages originating in my phone coming via Bluetooth), so I can pretty much give up on realtime.
OSC is a protocol. It should not be needed for such a simple task as reading the audio port, analysing the signal and (insert your action here). However, if you intend to control your computer by messages you type on your phone, you can certainly forget about realtime, unless you're a hyper-fast phone-keypad typist.
I hate Python. Ugh, no! I love it, but I hate where I am with this damn project.
Whining is certainly not going to help you. What will help, however, is that you think about what you want to achieve, clearly state your needs, problems and issues and then write to the appropriate mailing list.

Regards,
./MiS
Dear Michal Seta,

I admit I was a bit tired at the end of a long, frustrating week and my email may have been a bit "whiny". I apologise. Here are answers to some of your questions/doubts:

What I don't understand is why you would learn CSound, SuperCollider and ChucK if all you want to do is use Python.

Because some of my friends suggested that I do, and I was curious to learn new stuff. For reasons of speed, simplicity and elegance I try to stick to Python. It is not a hard and fast rule so much as an aesthetic.

The question is a little vague so he pointed you in *one* direction.

I can't begin to see how you could have followed our entire private exchange, which is very different from the short public summary I gave you.

I think you are misunderstanding. He is telling you to look into the recorder example so that you can see how to capture live audio.

Have YOU looked into that example? It's crawling with reindeers singing Jingle Bells.

You mean you acquire knowledge through luck? Or am I misunderstanding something.

How one acquires knowledge is one's own business. How one can show it off is what seems to be yours. You have some really bizarre assumptions about who I might be and what I might not know.

Now, do you mean there are only 2-3 new media artists in India? Strange. A random Google hit: http://www.newmedia.sunderland.ac.uk/nmcr/india/ilinks.htm suggests that there are a few more.

A random Google search on "Michal Seta" suggests that you probably don't exist, and if you do - your existence is not too consequential for mankind. Next time, try selling your work in the art district of Bombay to earn your dinner. You'll probably run into me, and the only other artist in "new media" who is not even mentioned on that strange and funny website you quote as divine proof. "New media" - now I wonder what you understand by that phrase. The phrase itself, right?

Whining is certainly not going to help you. What will help, however, is that you think about what you want to achieve, clearly state your needs, problems and issues and then write to the appropriate mailing list.

You seem to have it all figured out. That is the universal mantra of success, right? Good luck, mate! You're going to need it.
-------
1/f )))
-------
http://www.algomantra.com
Thank you for your comments.
This has gone off-topic enough.
./MiS
Right. I totally realise now that you own this list.
On 9/18/07, Michal Seta wrote:
Thank you for your comments. This has gone off-topic enough.
./MiS
--
-------
1/f )))
-------
http://www.algomantra.com
This has gone off-topic enough.
Speaking just for myself: I don't really mind a certain amount of off-topic discussion. ChucK doesn't exist in a vacuum, nor do we. Some discussion about interfacing with other languages or even how ChucK compares to other languages, how it relates to devices, how we ourselves perform or compose and how we relate to other musicians/artists/engineers seems unavoidable. I would even say that because of the nature of ChucK it would be cause for grave concern if discussions didn't branch out from time to time. I don't think the development of ChucK (and the Audicles) is just about implementation (for some) and bug reports/feature suggestions (for the rest); it's also about establishing what all of this is good for and how we want to use it. Writing one's own instrument inherently leads to deep questions about what we think an instrument is, what music is, perhaps even who we ourselves are, on some level.

What I do mind is that here this discussion, through some combination of factors, has become unpleasant and because of that has also stopped being productive. I'd like to suggest backing up a bit, attempt to (re)establish clearly what the real questions are here and go from there. IMHO, of course - I realise I have no authority to tell anyone what to do (nor would I want this!), this is just a friendly suggestion.

I hope everyone is well and in a situation where pleasant walks with fresh air are an option.

Yours, Kas.
I'd like to suggest backing up a bit, attempt to (re)establish clearly what the real questions are here and go from there.
I would absolutely love that, Kassen - and I would appreciate it if Mr. Michal Seta would do the same. In advance, I'm sorry if I ticked off some circuits in his brain without intending to. Of course, my last missive to him was written with the intent of putting him off. With that silly exception.....
Writing one's own instrument inherently leads to deep questions about what we think an instrument is, what music is, perhaps even who we ourselves are, on some level.
I think this discussion started from me because ChucK appeared to me (from its website) as a "language" whose scripts could be modified on-the-fly. After a couple of days of play, it seems a great type of software to play with sound at a fundamental level. You can write your own plugins etc., but it does not qualify as a language. A language can be ported across platforms more easily - like a population across a border. Software is much heavier, like the freakin Taj Mahal someone made for his dead wife. (Forgive the metaphors there. I'm just trying to be funny!)
On 9/18/07, AlgoMantra wrote:
I would absolutely love that, Kassen - and I would appreciate if Mr. Michal Seta would do the same. In advance, I'm sorry if I ticked off some circuits in his brain without intending to. Of course, my last missive to him was written with the intent of putting him off. With that silly exception.....
Very well. Out of pure curiosity I too Googled for Michal Seta (partially because I recognised the name but couldn't remember from where) and the results I got would indicate that Michal too has a strong interest in programming audio and discussing that, so I have high hopes all will be well now. Let's just pretend, if that's possible for all involved, that we didn't have this side-track.
I think this discussion started from me because ChucK appeared to me (from its website) as a "language" whose scripts could be modified on-the-fly. After a couple of days of play, it seems a great type of software to play with sound at a fundamental level. You can write your own plugins etc., but it does not qualify as a language. A language can be ported across platforms more easily - like a population across a border. Software is much heavier, like the freakin Taj Mahal someone made for his dead wife.
Well, to me it's a language; for one thing I feel like I can express myself in it, which is a good property for a language to have. It's also somewhat portable; you can take your ChucK code from Mac to Linux to Windows and it should work. You can't take it on a mobile phone, at least not yet that I know of. As an instrument it's more portable than a grand piano but not as portable as a flute. Unlike with some other software, I believe the ChucK developers would applaud and try to help anyone who would like to make ChucK more portable. Perhaps I misunderstand what you are aiming at here, but I don't see how much more portability than having the source and a GPL license you can expect to have at this point in ChucK's development.

As for OTF modifications: I already wrote all I really have to say about that at this stage in my first reply to this discussion. There is room for growth there, you are right, but I'm sure there are big questions about the interface to this and I'm willing to bet there are huge questions about practical implementations. Nobody ever claimed ChucK was mature or that it would effortlessly let you save the world within a week.

Yours, Kas.
You can't take it on a mobile phone, at least not yet that I know of.
Hey Kassen, I did mention at the livecode list that I am interested in using mobile phones as controllers for all sorts of things - especially audio synthesis. However, I never expected to load ChucK onto a phone! The current Nokia series allows developers only wave playback, and that too one file at a time. I also have no access to mic input directly; it has to be recorded. I am working with extreme constraints here, so I am a bit lost, you see. The only good thing is that I can write for the phone using Python. However, one solution was this. I can easily control ChucK running off a laptop through a Nokia handset via Bluetooth. Now I would probably project a graphical interface to control ChucK onto the screen, and press various blocks on it using the cellphone. The rest, as you suggested, can be done in various ways, but I'm still trying all those things out.
On 9/18/07, AlgoMantra wrote:
You can't take it on a mobile phone, at least not yet that I know of.
Hey Kassen, I did mention at the livecode list that I am interested in using mobile phones as controllers for all sorts of things - especially audio synthesis. However, I never expected to load ChucK onto a phone!
Yes, but we can dream about the future! As I understand the way GSM works, mobile phones *need* to be capable of DSP at rather high frequencies, because otherwise they couldn't encode your calls to whatever frequency GSM works at. I just looked it up and those GSM bands are slightly below and in the GHz range, so whatever is doing that must be quite an amusing little chip; maybe that one isn't open for abuse. Generally mobile phones keep getting more advanced and people keep expecting them to do more (games, calendars; one friend of mine lamented that the interface on his phone was no good for working with the copy of Word it came with), so in the future.... why not?

The current Nokia series allows developers only wave playback, and that too one file at a time. I also have no access to mic input directly; it has to be recorded. I am working with extreme constraints here, so I am a bit lost, you see. The only good thing is that I can write for the phone using Python.
However, one solution was this. I can easily control ChucK running off a laptop through a Nokia handset via Bluetooth. Now I would probably project a graphical interface to control ChucK onto the screen, and press various blocks on it using the cellphone. The rest, as you suggested, can be done in various ways, but I'm still trying all those things out.
Yes, I see. Now, what if the FFT was done in ChucK and you built bars for the graphical representation of the spectrum (that I gather you want on the phone?) in Python, then scaled the update rate and the number of bars to whatever Bluetooth can take, sending those values over OSC?

Kas.
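(As an illustration of what Kassen suggests - a sketch only: the OSC address, port and update rate are invented, and it assumes a ChucK build with the UAna framework and the classic OscSend object; the Python end would listen on the same port and draw the bars:)

// track a rough spectral feature in ChucK, ship it to Python via OSC
adc => FFT fft =^ Centroid cent => blackhole;
512 => fft.size;

OscSend xmit;
// hypothetical host/port for the Python listener
xmit.setHost( "localhost", 6449 );

while( true )
{
    // spectral centroid of the latest window, normalised 0..1
    cent.upchuck().fval( 0 ) => float c;
    xmit.startMsg( "/flute/centroid, f" );
    c => xmit.addFloat;
    // throttle the update rate for Bluetooth's sake
    100::ms => now;
}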
From: chuck-users-bounces@lists.cs.princeton.edu on behalf of Kassen
Sent: Tue 18/09/2007 13:43
To: ChucK Users Mailing List
Subject: Re: [chuck-users] how is this "on the fly"?
On 9/18/07, AlgoMantra wrote:
Kas,

Your idea about what kind of DSP is happening inside the phone, and the access available right now, was answered quite nicely by Gatti Lorenzo. That IS how it is for now. I do think, however, that if one obtains sufficient knowledge of PIC/ARM or some sort of microcontrollers, one might be able to add on extra sensors (thermistor, accelerometer, gyroscope, piezoelectric) and a small microprocessor to go with it, to eat all the data where it is collected and sing a nice tune to the intestines of the phone. I should be able to comment more authoritatively on this subject in the next few months. Modding the phone is the way out, for sure. That's what I intend to do.

I just did some experiments and now finally I have decided to take the following route. Motion detection on my phone is proving too slow (there's a 1-2 second raster, which needs to be cut by ten times), so..... I have the camera of the phone watching me and sending pictures every few microseconds via Bluetooth to a buffer in the PC, where they are analysed for motion. I am not sure whether to go with the regular pixel matcher algorithms, or try something new like R-G-B comparison or edge analysis. I don't know yet - but finally it is the result of this analysis that will drive a graphic board, the ChucK sequencer. The graphical representation of the ChucK sequencer has to appear in sync with, and its dynamics a direct result of, my movement. That's the project, actually..............now back to burning my brain up.

Now you might say, why not a webcam then? The answer to that I have, but it's pretty long-winded and I will provide it if you ask ;) Unless you can guess it first. Heh heh!

- 1/f
AlgoMantra wrote:
Motion detection on my phone is proving too slow (there's a 1-2 second raster, which needs to be cut by ten times), so.....
[snip]
Now you might say, why not a webcam then? The answer to that I have, but it's pretty long-winded and I will provide it if you ask ;)
I plan on using one of these: http://www.sparkfun.com/commerce/product_info.php?products_id=639 Read distances from 0 to 6.45m with no dead zone. 26 bucks. Put some in an array.

-- robin

-----
Robin Parmar
robinparmar.com
Like a previous writer, I am interested in how ChucK can be used for "live coding". From my limited explorations I have discovered some of the same limitations, but in this thread would prefer to ask: "How do you use ChucK for live coding?" And: "What techniques are there to facilitate the process?" Further: "How has ChucK informed your practice? What attributes of the language and environment have changed your sound production?" Maybe there are already some articles on this I can be pointed to.

While we're on the topic, the term "live coding" seems deficient to me, because it totally fails to mention the sound. None of the common alternatives remedy this fundamental (music pun) problem.

-- robin
The very first "live coding" performance I have seen (in 2002), was
Alex Burton using MaxMSP, starting with blank canvas, putting down
first objects in silence until there was a possibility of making sound
and taking it from there for about 30 minutes and eventually taking it
apart, piece by piece until he was back to blank canvas again. The
title of he piece was "Live programming of an improvised musical
structure". I have done similar things with pd but never in a solo
setting.
While many will argue that MaxMSP, pd and other visual/dataflow
programming environments are not "real" programming languages, they
seem to be closer to the "sound" rather than "programming" in live
performance (perhaps because of the "patching" paradigm of
programming, similar to the modular analogue synthesisers). Also,
visually they are more appealing to the non-geek members in an
audience (if the coding is, in fact, projected, as seems to be the
trend with live coding events) as, apparently, the dataflow is easier
to follow than textual control structures.
I have tinkered with ChucK and realized that using a system which
forces me to think in algorithms is not "natural" enough for me. It
does not have the tangibility I am used to (I come from music
background, I played with analog synths, I've built acoustic and
electronic instruments and controllers and much of my noise making
activity was based on the cause-effect principle rather than
algorithmic virtuosity). Yes, I realize that one can set up scripts
that will allow some control of various aspects of sound with a
variety of interfaces but this would be getting further from "live
coding". Not that live coding is an absolute must for me, but
attractive enough to keep on lurking. ChucK's simple syntax and shred
approach seems like a good compromise for a text-based programming
language to make noise quickly.
All that said, I think that there are limits as to what we can do
"live" through programming. One needs some pre-made code snippets.
While simply chucking and unchucking pre-made code would qualify
simply as sequencing, I do not think that live coding an audio driver
to be used in a performance would qualify as an interesting experience
(for the listener, providing that s/he came for the music, not the
hacking). there needs to be a middle ground and ChucK is one of the
valid solutions.
./MiS
robin.escalation wrote:
Like a previous writer, I am interested in how ChucK can be used for "live coding". From my limited explorations I have discovered some of the same limitations, but in this thread would prefer to ask "How do you use ChucK for live coding?" And: "What techniques are there to facilitate the process?"
Further: "How has ChucK informed your practice? What attributes of the language and environment have changed your sound production?"
I'll go ahead and chime in here, as I suspect my usage of ChucK is rather different from others': I use ChucK mostly for MIDI-based automation. I send a note out of my sequencer (Live) over a virtual MIDI cable, which usually triggers a series of MIDI CCs (and less often notes -- usually all of my audio is pre-rendered to save on CPU cycles during performances) sent back to the sequencer.

The main thing this boils down to from my side is doing controlled randomization, parameter envelopes or event sequences that are too complicated or tedious to do directly in the sequencer. I've written some stuff doing audio processing in ChucK, but I've only ever used that once in a track, and that was mostly just for running some random number generators into the envelopes of a few oscillators. I don't do live coding with ChucK at all.

From what I gather on the list, I'm also different from most of the crowd here in that I've defected from the artsy world (in the general direction of repetitive dance music with lots of beeping sounds), and performances tend to be in warehouses or bars rather than conservatory concert halls. :-)

-Scott
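[To make that concrete, a minimal sketch of this kind of controlled CC randomization; the device number, CC number, value range and rate are illustrative assumptions, not Scott's actual setup:

// send randomized control changes out a (virtual) MIDI port
MidiOut mout;
if( !mout.open( 0 ) ) me.exit();   // device 0 is an assumption

MidiMsg msg;
176 => msg.data1;   // control change, channel 1
74  => msg.data2;   // CC number, e.g. a filter cutoff

while( true )
{
    // a fresh random value in the MIDI range, four times a second
    Std.rand2( 0, 127 ) => msg.data3;
    mout.send( msg );
    250::ms => now;
}

The sequencer on the other end of the virtual cable then maps CC 74 to whatever parameter should wander.]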
Scott Wheeler wrote:
The main thing this boils down to from my side is doing controlled randomization, parameter envelopes or event sequences that are too complicated or tedious to do directly in the sequencer.
I can see doing that myself. For the last couple of years I have been using Reaktor to build instruments that generate sound through simple interaction... no complex sequencing. For example, I slow a drum machine to 2 BPM and run the hits through a resonant filter and delay, with one or two LFOs cycling some parameters. This might create odd popping and chirping sounds at randomish intervals.

This is all well and good, but the only algorithmic devices I've used have been made by other people, since Reaktor is not the best environment for writing equations. That said, I would rarely want to simply feed an equation and watch it run.

What is great about Reaktor is that it is easy for me to "play" these instruments in real time, since any of their parameters can be exposed to controls and mapped with MIDI. So I am able to jam with my creations with some sound factors under strict control, others wandering, and still others directly played.
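[A rough ChucK analogue of that kind of patch might look like the sketch below; the unit generators and every parameter value are guesses for illustration, not robin's actual Reaktor settings:

// sparse "hits" through a resonant filter and delay,
// with a slow LFO wandering the filter frequency
Impulse imp => ResonZ filt => Echo echo => dac;
SinOsc lfo => blackhole;        // control-rate LFO, not heard directly

50 => filt.Q;                   // high Q for chirpy rings
0.05 => lfo.freq;               // slow parameter cycling
1::second => echo.max;
350::ms => echo.delay;
0.5 => echo.mix;

while( true )
{
    // sample the LFO at each hit to move the filter frequency
    600 + 400 * lfo.last() => filt.freq;
    1 => imp.next;              // one "hit"
    30::second => now;          // 2 BPM: one hit every 30 seconds
}
]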
From what I gather on the list, I'm also different from most of the crowd here in that I've defected from the artsy world (to the general direction of repetitive dance music with lots of beeping sounds) and performances tend to be in warehouses or bars rather than conservatory concert halls. :-)
I gigged last weekend at just such a venue in Dublin, in what might be called "enhanced DJ" mode. I mixed other people's music with my own, played live out of Reaktor. Whether the audience knew it or not, they were witness to a one-of-a-kind audio landscape. -- robin
robin.escalation wrote:
Scott Wheeler wrote:
The main thing this boils down to from my side is doing controlled randomization, parameter envelopes or event sequences that are too complicated or tedious to do directly in the sequencer.
I can see doing that myself. For the last couple of years I have been using Reaktor to build instruments that generate sound through simple interaction... no complex sequencing. For example, I slow a drum machine to 2 BPM and run the hits through a resonant filter and delay, with one or two LFOs cycling some parameters. This might create odd popping and chirping sounds at randomish intervals.
This is all well and good, but the only algorithmic devices I've used have been made by other people, since Reaktor is not the best environment for writing equations. That said, I would rarely want to simply feed an equation and watch it run.
What is great about Reaktor is that it is easy for me to "play" these instruments in real time, since any of their parameters can be exposed to controls and mapped with MIDI. So I am able to jam with my creations with some sound factors under strict control, others wandering, and still others directly played.
Well, you can also map stuff to MIDI in ChucK, you just have to build a few classes to make the setup easy. I've attached an example built on my collection of classes. (Most of which are hacked out of a recent set.) Once the base classes are there (which shouldn't ever need to be changed, really), this gives you a way to map functionality to a hardware MIDI control either based on subclassing:

class FooControl extends Control
{
    1 => cc;
    fun void set(int value)
    {
        <<< "Foo: ", value >>>;
    }
}

FooControl foo;

Or events:

EventControl bar;
2 => bar.cc;

fun void listener()
{
    while(true)
    {
        bar.changed => now;
        <<< "Bar: ", bar.changed.value >>>;
    }
}

spork ~ listener();

Combined with the dummy code I inserted below that to simulate a knob and then a button, we get:

localhost: /Users/scott/Documents/ChucK> chuck Control.ck
Foo: 0
Bar: 127
Foo: 1
Foo: 2
Foo: 3
Foo: 4
Foo: 5
Foo: 6
Foo: 7
Foo: 8
Foo: 9

-Scott (aka Self Appointed ChucK Algorithms Wonk)

Extra Credit: One thing that I've noticed in a couple of places is that sometimes sporking doesn't do what I would expect it to. For instance, if I change line 223 (the node.item.set(value) call in ControlDispatcher.controlChange) to "spork ~ node.item.set(value);" to make it non-blocking, the program doesn't work. I don't see why.

// MIDI Event IDs
int codes[0];
144 => codes["NoteOn"];
128 => codes["NoteOff"];
176 => codes["ControlChange"];

// base class for MIDI messages; subclasses fill in id and data()
class MidiMessage
{
    int id;
    fun int[] data()
    {
        return [ 0, 0 ];
    }
}

class NoteMessage extends MidiMessage
{
    int pitch;
    int velocity;
    fun int[] data()
    {
        return [ pitch, velocity ];
    }
}

class NoteOnMessage extends NoteMessage
{
    codes["NoteOn"] => id;
    100 => velocity;
}

class NoteOffMessage extends NoteMessage
{
    codes["NoteOff"] => id;
    0 => velocity;
}

class ControlChangeMessage extends MidiMessage
{
    codes["ControlChange"] => id;
    8 => int control;
    127 => int value;
    fun int[] data()
    {
        return [ control, value ];
    }
}

// opens the MIDI devices, sends messages and dispatches incoming ones
class MidiHandler
{
    // Members
    MidiIn input;
    MidiOut output;
    0 => int inputDevice;
    0 => int outputDevice;

    // Constructor
    if(!input.open(inputDevice))
    {
        <<< "Could not open MIDI input device." >>>;
        me.exit();
    }
    if(!output.open(outputDevice))
    {
        <<< "Could not open MIDI output device." >>>;
        me.exit();
    }

    fun void send(MidiMessage message)
    {
        message.data() @=> int data[];
        if(data.cap() == 2)
        {
            MidiMsg out;
            message.id => out.data1;
            data[0] => out.data2;
            data[1] => out.data3;
            output.send(out);
        }
        else
        {
            <<< "Invalid data() for MidiMessage." >>>;
        }
    }

    fun void run()
    {
        // Now handle incoming events.
        MidiMsg message;
        while(true)
        {
            input => now;
            while(input.recv(message))
            {
                message.data1 => int code;
                if(code == codes["NoteOn"])
                {
                    spork ~ noteOn(message.data2, message.data3);
                }
                else if(code == codes["NoteOff"])
                {
                    spork ~ noteOff(message.data2, message.data3);
                }
                else if(code == codes["ControlChange"])
                {
                    spork ~ controlChange(message.data2, message.data3);
                }
                else
                {
                    <<< "Unhandled MIDI Message: ", message.data1, message.data2, message.data3 >>>;
                }
            }
        }
    }

    fun void noteOn(int pitch, int velocity)
    {
        <<< "Note On: ", pitch, velocity >>>;
    }

    fun void noteOff(int pitch, int velocity)
    {
        <<< "Note Off: ", pitch, velocity >>>;
    }

    fun void controlChange(int control, int value)
    {
        <<< "Control Change: ", control, value >>>;
    }
}

// a control registers itself with the dispatcher on creation
class Control
{
    -1 => int cc;
    ControlDispatcher.register(this);

    fun void set(int value)
    {
        <<< "Control Changed: ", cc, ", ", value >>>;
    }
}

class ControlEvent extends Event
{
    int control;
    int value;
}

class EventControl extends Control
{
    ControlEvent changed;
    fun void set(int value)
    {
        cc => changed.control;
        value => changed.value;
        changed.broadcast();
    }
}

// simple singly-linked list of registered controls
class ControlNode
{
    ControlNode @ next;
    Control @ item;
}

class ControlList
{
    static ControlNode @ first;
    static ControlNode @ last;

    fun void append(Control control)
    {
        if(first == null)
        {
            new ControlNode @=> first;
            first @=> last;
            control @=> first.item;
        }
        else
        {
            new ControlNode @=> last.next;
            last.next @=> last;
            control @=> last.item;
        }
    }
}

// routes each incoming CC to every control registered for that CC number
class ControlDispatcher extends MidiHandler
{
    static ControlList @ controls;

    fun void controlChange(int control, int value)
    {
        if(controls == null)
        {
            return;
        }
        controls.first @=> ControlNode @ node;
        while(node != null)
        {
            if(node.item.cc == control)
            {
                node.item.set(value);
            }
            node.next @=> node;
        }
    }

    fun static void register(Control control)
    {
        if(controls == null)
        {
            new ControlList @=> controls;
        }
        controls.append(control);
    }
}

ControlDispatcher controller;

// Two demos here: one with subclassing, one with events:
class FooControl extends Control
{
    1 => cc;
    fun void set(int value)
    {
        <<< "Foo: ", value >>>;
    }
}

FooControl foo;

// And now with events.
EventControl bar;
2 => bar.cc;

fun void listener()
{
    while(true)
    {
        bar.changed => now;
        <<< "Bar: ", bar.changed.value >>>;
    }
}

spork ~ listener();

// And now let's create some fake hardware controls to test things.
fun void fakeKnob()
{
    ControlChangeMessage message;
    1 => message.control;
    for(0 => int i; i < 10; i++)
    {
        i => message.value;
        controller.send(message);
        10::ms => now;
    }
}

fun void fakeButton()
{
    ControlChangeMessage message;
    2 => message.control;
    127 => message.value;
    controller.send(message);
}

spork ~ fakeKnob();
spork ~ fakeButton();
controller.run();
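[On the "Extra Credit" question, one plausible explanation, offered as an unverified assumption rather than a confirmed diagnosis: a sporked shred does not start running until the VM regains control, and child shreds are removed when their parent exits. A shred that sporks and then returns without advancing time can therefore take its children with it before they ever run. A minimal sketch of that gotcha:

// suspected spork gotcha (an assumption, not verified against
// the program above): the child is sporked, but the parent
// returns without advancing time, so the child is removed
// along with its parent before it ever gets scheduled
fun void child()
{
    <<< "child ran" >>>;
}

fun void parent()
{
    spork ~ child();
    // no time advance here; try adding `me.yield();` so the
    // child can run before parent() returns
}

spork ~ parent();
1::second => now;   // "child ran" likely never prints
]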
AlgoMantra wrote:
Perhaps the truth really is that adc => FFT => dac, which is so simple for ChucK etc. - has no analog in Python, and people are just too ashamed to admit that they don't know how it's done. To use Chuck to do this, I will need to learn YET ANOTHER LANGUAGE called OSC or something, which will talk to messages from Python (which are messages originating in my phone coming via Bluetooth), so I can pretty much give up on realtime.
pysndobj can do FFT. just look it up in the docs. and adc => FFT => dac won't work in chuck either. you'd need at least an inverse FFT.

best
joerg

--
http://joerg.piringer.net
http://www.transacoustic-research.com
http://www.iftaf.org
http://www.vegetableorchestra.org/
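[For completeness, a rough sketch of what the corrected chain (analysis plus resynthesis) can look like in ChucK, assuming a build with the UAna framework (FFT/IFFT); the window size and hop are arbitrary choices:

// adc -> FFT -> IFFT -> dac pass-through
// (assumes a ChucK version with the UAna framework)
adc => FFT fft =^ IFFT ifft => dac;

1024 => fft.size;
Windowing.hann( 1024 ) => fft.window;
Windowing.hann( 1024 ) => ifft.window;

while( true )
{
    // taking the ifft pulls the fft analysis through the =^ chain
    ifft.upchuck();
    // hop by half a window
    512::samp => now;
}

Any spectral processing would go between the analysis and the resynthesis inside the loop.]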
Thank you very much, Joerg. I haven't been through all the docs; I tried some examples and moved on... will let you know if it works.

- f
-- ------- 1/f ))) ------- http://www.algomantra.com
participants (7)
- AlgoMantra
- Gatti Lorenzo
- joerg piringer
- Kassen
- Michal Seta
- robin.escalation
- Scott Wheeler