
sean's final project ::: granular synthesis in ChucK

granular synthesis

granular synthesis is a type of audio synthesis that works by dissolving a sampled waveform into many microchunks, called grains, which are then reassembled to form an altogether different sound. grains are typically 1 to 100 ms long, and each can be transformed as an individual waveform. this ability to operate on a micro time scale allows a vast amount of control over how the sound is treated. (as always, wikipedia is not a bad place to start for some reading.)

what i'm trying to do here

my hope for this project was to build a granular synthesizer from the ground up. luckily, ChucK is readily prepared for this task: its unit analyzers and its live sampling unit generator, LiSa, both provide ample tools for the audio analysis and playback we need. i opted to use LiSa, for no particular reason.

the code is (conceptually) pretty simple: begin by loading a soundfile into a LiSa buffer, then specify what grain you want to pull out and how you want to play it, in terms of its duration, its position in the buffer, its rate of playback (pitch), and how hard or soft you'd like the attack and decay. the code then works (and loops) its way through the soundfile, pulling grains and reassembling them into new soundforms according to these settings. what gets interesting, however, is when a bit of randomness is involved: each grain parameter has an accompanying scatter value that significantly shapes the output, and it is easy to get creative with these quickly. for instance, a scatter value for grain length that is larger than the base grain length makes it possible to travel backwards in the file; e.g., if grainlen=20ms and grainlen_scatter=80ms, the next grain pulled can reach up to 60ms backwards in the soundfile (20ms - 80ms = -60ms).
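
for reference, here is a minimal sketch of the load-and-grain loop described above. this is not the actual granulator.ck: the filename, the base/scatter values, and the way a "negative" grain length is handled are placeholder assumptions, and Std.rand2f is the older name for what newer ChucK versions call Math.random2f. the LiSa calls themselves (duration, record, getVoice, playPos, rate, rampUp/rampDown) are the standard interface.

  // sketch only: load a mono soundfile into LiSa by recording a SndBuf into it,
  // then loop through the file pulling grains with "scattered" length and rate.
  SndBuf buf => LiSa lisa => dac;
  "mysound.wav" => buf.read;                 // placeholder path

  // size the LiSa buffer to the file and record the file into it
  // (this happens in real time, so the file plays once while being captured)
  buf.samples()::samp => lisa.duration;
  1 => lisa.record;
  buf.samples()::samp => now;
  0 => lisa.record;

  20 => lisa.maxVoices;

  // base grain parameters and their scatter amounts (placeholders)
  20::ms => dur grainlen;
  80::ms => dur grainlen_scatter;
  1.0 => float rate;
  0.2 => float rate_scatter;
  5::ms => dur ramp;                         // attack/decay time
  0::ms => dur pos;                          // position in the file

  while( true )
  {
      // scatter this grain's length and rate around the base values
      grainlen + Std.rand2f( -1.0, 1.0 ) * grainlen_scatter => dur len;
      rate + Std.rand2f( -1.0, 1.0 ) * rate_scatter => float r;
      pos => dur p;

      // one way to read a "negative" length: reach backwards in the file
      if( len < 0::ms ) { p + len => p; -1 * len => len; }
      if( len < 1::ms ) 1::ms => len;
      if( p < 0::ms ) 0::ms => p;

      // grab a free voice and play the grain with attack/decay ramps
      lisa.getVoice() => int v;
      if( v > -1 )
      {
          lisa.rate( v, r );
          lisa.playPos( v, p );
          lisa.rampUp( v, ramp );
          len => now;
          lisa.rampDown( v, ramp );
      }
      else len => now;

      // step (and loop) through the soundfile
      grainlen +=> pos;
      if( pos > lisa.duration() ) 0::ms => pos;
  }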

in addition to hardcoded values for these parameters, considerably more expression comes from dynamic control. unfortunately, i quite flatly ran out of time for a proper interface. what i was able to add was a bit of control via the laptop's tilt. in "standard mode" (the default), x-axis tilt adds more randomness to grain length, and y-axis tilt adds more variation to grain pitch. hitting capslock puts the patch into a "scrub mode" where the y-axis directly controls playback position while the x-axis controls a position scatter value; in other words, grains are pulled only from wherever the y-axis tilt points in the file, plus or minus the scatter value taken from the x-axis. while this is by no means a fully useful tool (oh how nice it would've been if i could've just gotten a few sliders to work before i had to give that computer back!), it's a start (and kind of fun to play with).
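
the tilt reading follows the pattern in ChucK's hid examples (tilt.ck); below is a rough sketch of the standard-mode mapping only. the scaling constants are made up, and in the real patch the scatter variables would be the ones used by the grain loop rather than stand-ins declared here.

  // sketch of the standard-mode tilt mapping (after ChucK's examples/hid/tilt.ck)
  // msg.x / msg.y ranges are hardware-dependent; scaling factors are placeholders
  Hid hi;
  HidMsg msg;

  // stand-ins for the scatter values used by the grain loop
  0::ms => dur grainlen_scatter;
  0.0 => float rate_scatter;

  // open the laptop's tilt (sudden motion) sensor
  if( !hi.openTiltSensor() )
  {
      <<< "tilt sensor unavailable", "" >>>;
      me.exit();
  }

  while( true )
  {
      // poll the sensor
      hi.read( 9, 0, msg );

      // x-axis tilt => more randomness in grain length
      Std.abs( msg.x ) * 1::ms => grainlen_scatter;
      // y-axis tilt => more variation in grain pitch (rate)
      Std.abs( msg.y ) * 0.005 => rate_scatter;

      50::ms => now;
  }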

a few examples

no video (sadly) but instead a few (pretty lame) audio clips showing different effects.

some tinkling at the piano: http://www.princeton.edu/~smmurphy/files/plork%20final/recordings/piano2.mp3

let some of the bruise blood come out and show them: http://www.princeton.edu/~smmurphy/files/plork%20final/recordings/comeout.mp3

lousy pop made into lousy glitch: http://www.princeton.edu/~smmurphy/files/plork%20final/recordings/imogen.mp3

things i really, really want to add

  • a gui (obviously)
    • sliders controlling base and scatter values, button indicating mode, more modes, etc.
  • a "jitter" delay (a rough sketch of this idea appears after the list)
    • more randomness: a feedback-based delay that draws from another LiSa buffer that is recording output
    • also controlled with semi-randomized parameters
  • lots of little micro-editing things
    • grain panspread (lisa.voicePan()???)
    • basic filtering tools
  • fix my LiSa setup to accept 2+ channels (currently only accepts mono)
  • fix everything to make it more user-friendly (including me-friendly)
  • polish the code in general
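
(none of the above exists yet; for the "jitter" delay specifically, here is a rough sketch of how the idea could look: a second LiSa continuously records the output bus, and grains are pulled back out of it at semi-random positions and times, feeding back into the same bus. everything here, including the stand-in source file, is an assumption rather than project code.)

  // unimplemented idea, sketched: a feedback "jitter" delay built on a second LiSa
  SndBuf src => Gain bus => dac;             // stand-in for the granulator's output
  "mysound.wav" => src.read;                 // placeholder path
  1 => src.loop;

  bus => LiSa echo => bus;                   // echo taps the bus and feeds back into it
  2::second => echo.duration;
  1 => echo.loopRec;                         // keep recording round-robin into the buffer
  1 => echo.record;
  10 => echo.maxVoices;
  0.35 => echo.gain;                         // feedback amount; keep well below 1

  while( true )
  {
      echo.getVoice() => int v;
      if( v > -1 )
      {
          // play a short grain from a random point in the recorded history
          Std.rand2f( 0.0, 1.0 ) * echo.duration() => dur p;
          echo.rate( v, 1.0 );
          echo.playPos( v, p );
          echo.rampUp( v, 10::ms );
          Std.rand2f( 50.0, 200.0 )::ms => now;
          echo.rampDown( v, 10::ms );
      }
      // semi-random gap before the next delay grain
      Std.rand2f( 20.0, 120.0 )::ms => now;
  }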

how to run

  • download granulator.ck from http://www.princeton.edu/~smmurphy/files/plork%20final/granulator.ck
  • put your MONO (important) soundfile, in .wav format, in the miniAudicle's working directory
  • open granulator.ck in the miniAudicle
  • change the path as indicated at the top of the code
  • tweak settings and shred 'er up