file open semantics for clustered chuck
Since an important usage mode for ChucK is analogous to a server for multiple performers, I'd like to suggest that the file open semantics be extended. When a ChucK script is submitted from a performer's host to a remotely running ChucK server, that script may open files, and those files must reside on the server's file system, not on the submitter's host. This means that wave files or other data must first be copied onto the server, or otherwise exported.

I suggest extending the semantics of ChucK's "open" so that instead of expecting a file name, it accepts a URL. If the filename starts with http:// or ftp:// or another legal prefix, hand the URL to a curl library for access. For this to work, the performer's system need only have an httpd, ftpd, or ssh daemon running locally to give the remote ChucK access to the needed files. As a side effect, the files need only be present "somewhere" out there in cluster land. The URL will rule.

Jim Hinds
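P.S. To make the suggestion concrete, here is a rough sketch of what the server side could do, assuming libcurl is linked in. The helper names (fetch_url_to_temp, resolve_open_path) are made up for illustration and are not part of ChucK's source; think of it as a sketch of the idea, not a tested patch.

    #include <curl/curl.h>   // libcurl "easy" interface
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    // hypothetical helper: fetch a remote URL into a local temp file via libcurl;
    // returns the temp path on success, or an empty string on failure
    static std::string fetch_url_to_temp( const std::string & url )
    {
        char path[] = "/tmp/chuck_fetch_XXXXXX";
        int fd = mkstemp( path );                 // create a unique temp file
        if( fd < 0 ) return "";
        FILE * fp = fdopen( fd, "wb" );
        if( !fp ) return "";

        CURL * curl = curl_easy_init();
        if( !curl ) { fclose( fp ); return ""; }

        curl_easy_setopt( curl, CURLOPT_URL, url.c_str() );
        curl_easy_setopt( curl, CURLOPT_WRITEDATA, fp );       // default callback fwrite()s into fp
        curl_easy_setopt( curl, CURLOPT_FOLLOWLOCATION, 1L );  // follow HTTP redirects
        CURLcode res = curl_easy_perform( curl );              // blocking transfer

        curl_easy_cleanup( curl );
        fclose( fp );
        return ( res == CURLE_OK ) ? std::string( path ) : "";
    }

    // hypothetical dispatch: URL-looking names get fetched first; anything else
    // is treated as an ordinary path on the server's file system
    static std::string resolve_open_path( const std::string & name )
    {
        if( name.rfind( "http://", 0 ) == 0 || name.rfind( "ftp://", 0 ) == 0 )
            return fetch_url_to_temp( name );
        return name;
    }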
Hi!
Since an important usage mode for ChucK is analogous to a server for multiple performers, I'd like to suggest that the file open semantics be extended.
Great idea, though there are certainly more issues to consider for remote operations, since they aren't always as transparent as their interface advertises. For example, network timeouts when retrieving a file need to be handled. Actually, this seems to be part of a similar problem of pre-loading/chunking sound files to avoid interruption on large files. So we should probably add some options to control load behavior.
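To sketch what I mean by options (the struct and field names here are invented for illustration, not an existing ChucK interface), at least the timeout side maps directly onto knobs libcurl already exposes:

    #include <curl/curl.h>

    // hypothetical per-load options controlling how a remote file is retrieved
    struct RemoteLoadOptions
    {
        long connect_timeout_ms;   // give up if the host can't be reached in time
        long transfer_timeout_ms;  // give up if the whole transfer takes too long
        bool preload_fully;        // fetch the entire file before playback begins
    };

    // apply the timeout options to a libcurl easy handle
    static void apply_timeouts( CURL * curl, const RemoteLoadOptions & opt )
    {
        curl_easy_setopt( curl, CURLOPT_CONNECTTIMEOUT_MS, opt.connect_timeout_ms );
        curl_easy_setopt( curl, CURLOPT_TIMEOUT_MS, opt.transfer_timeout_ms );
    }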
I suggest extending the semantics of ChucK's "open" so that instead of expecting a file name, it accepts a URL. If the filename starts with http:// or ftp:// or another legal prefix, hand the URL to a curl library for access.
Cool. The .open function already examines the path to catch "special" internal data (e.g. "special:glot_pop"). It would make sense to extend that to URLs. We just need to find a way to semi-gracefully deal with potential network timeouts and lag. Thanks. We shall look into this. Best, Ge!
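P.S. Roughly the kind of path check I have in mind, as a sketch only (the names are illustrative and the actual .open code is organized differently):

    #include <string>

    enum PathKind { PATH_SPECIAL, PATH_URL, PATH_LOCAL };

    // classify an "open" argument by prefix, keeping the existing "special:"
    // check and adding URL schemes alongside it
    static PathKind classify_path( const std::string & name )
    {
        // existing behavior: "special:" selects built-in internal data
        if( name.rfind( "special:", 0 ) == 0 ) return PATH_SPECIAL;
        // proposed extension: URL schemes get handed to a curl-based fetcher
        if( name.rfind( "http://", 0 ) == 0 ||
            name.rfind( "ftp://",  0 ) == 0 ) return PATH_URL;
        // everything else: an ordinary file on the server's file system
        return PATH_LOCAL;
    }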