COS597b Fall2012

== Welcome ==

Welcome to COS 597B, Interactive Music Systems.

The homepage for the course is http://www.cs.princeton.edu/courses/archive/fall12/cos597B/index.php.

We will be using this Wiki to manage the course schedule and readings, and sign up for presentation/discussion leader slots.

== Course participants ==

Add your name here, with link to your webpage if you want:

* Rebecca Fiebrink, instructor
* Edward Zhang '13
* Jeff Snyder '13
* Avneesh Sarwate '14
* Joe Tylka, first year grad student
* [http://www.cs.princeton.edu/~roda/about/Home.html Reid Oda], second year grad student
* [http://www.princeton.edu/~rahulram Rahulram Sridhar], G1
* [http://www.cs.princeton.edu/~kewolf/home.html Katie Wolf], second year grad student
* Sasha Koruga, G2
* Ohad Fried, first year grad
* Abu Saparov '13
* [http://www.princeton.edu/~dbragg/ Danielle Bragg], second year grad student
* Daniel Ryan '13
* Alejandro Van Zandt-Escobar, CS '14
* Nikitas Tampakis '14
* Jennifer Guo, first year grad
== Assignments ==

* <b>Assignment 1, due 9/20</b>
** Post on Piazza one example of interactive computer/technology used in live performance (could be a software program, a digital instrument or controller, a performance, ...). Choose something you find exciting or inspiring. Provide a URL and/or citation. Describe how the technology works (at a high level) and how the human(s) interact with the technology. What do you find exciting about it? Any problems you see with it, or criticisms you might offer (including aesthetic or technical concerns)? How does the technology impact or constrain the type of interaction that is possible, and the type of music that is possible or easy to make?
** (Post publicly on Piazza, use hashtag #assignment1.)
* <b>Assignment 2, due 10/2</b>
** Choose two or more synthesis methods to experiment with in a music programming environment of your choosing. (Suggestions: [http://cycling74.com/products/max/ Max/MSP], [http://puredata.info/ pd] (a free Max/MSP-like environment), [http://chuck.cs.princeton.edu ChucK], [http://supercollider.sourceforge.net/ SuperCollider], ???)
** Post a thoughtful critique of the methods to Piazza, considering the quality of sounds that you can produce with a given method, the ease with which you can control the method, and any other characteristics that might influence someone's choice of whether to use the method in a performance or composition.
** Use hashtag #assignment2 in your post.
** If you've never used an audio programming environment before and want some tips, just post to Piazza. Feel free to start with existing code & tutorials on the internet (and see the short synthesis sketch below). Feel free to share code and programming tips with one another, but do the experimentation and response individually.
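For a quick offline taste of one of these methods before you commit to an environment, a few lines of Python are enough to render a sound file. This is a minimal additive-synthesis sketch (a sum of decaying sine partials) using only numpy and the standard-library wave module; it illustrates the technique and is not a substitute for a real-time environment like ChucK or pd.

<pre>
import wave
import numpy as np

SR = 44100                                   # sample rate (Hz)
t = np.arange(int(SR * 2.0)) / SR            # 2 seconds of sample times

# Additive synthesis: sum a few harmonics of 220 Hz, each with its
# own amplitude and exponential decay, for a plucked-string feel.
partials = [(220.0, 1.0, 1.5), (440.0, 0.5, 2.5), (660.0, 0.3, 4.0)]
signal = np.zeros_like(t)
for freq, amp, decay in partials:
    signal += amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

signal /= np.max(np.abs(signal))             # normalize to [-1, 1]
pcm = (signal * 32767).astype(np.int16)      # 16-bit PCM samples

with wave.open("additive.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
</pre>

Changing the partial frequencies to non-integer multiples of the fundamental is an easy way to hear why inharmonic spectra sound bell-like.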
* <b>Assignment 3, due 10/9</b>
** Create a gesturally-controlled "instrument" that allows you to interactively control sound in real time. Use an explicit mapping strategy that you program in whatever environment(s) you choose to use (i.e., no machine learning). Reflect on what was easy and hard to do in creating the mapping, what you found rewarding or frustrating about the process, and the process by which you chose the mapping you did. Submit your response on Piazza using #assignment3.
** Feel free to build on any of your previous assignments. Easy-to-use controllers include the built-in laptop inputs (see http://smelt.cs.princeton.edu), the Wiimote (OSCulator is recommended if you're on a Mac), or joysticks (we have some you can borrow).
** [http://en.wikipedia.org/wiki/Open_Sound_Control OpenSoundControl] is a good tool for patching together code in different environments, e.g. if you want to use Smelt to capture motion sensor input and send it to pd, or if you want a ChucK program to receive Wiimote messages from OSCulator. Google for OSC examples for the languages you're using, and/or post to Piazza and get others to share their code with you. (A minimal OSC sketch follows this assignment.)
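As a concrete reference, here is a minimal OSC sender and receiver in Python using the third-party python-osc package (the /motion address and port 9000 are arbitrary placeholders; match them to whatever your patch expects). In practice the sender and receiver would live in separate programs, one per environment you're bridging.

<pre>
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# --- Sender: forward a 3-axis motion reading to the receiver ---
client = SimpleUDPClient("127.0.0.1", 9000)
client.send_message("/motion", [0.1, -0.4, 0.9])

# --- Receiver: print every /motion message that arrives ---
def on_motion(address, *args):
    print(address, args)        # args is the tuple of floats sent above

dispatcher = Dispatcher()
dispatcher.map("/motion", on_motion)
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
</pre>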
* <b>Assignment 4, due 10/11</b>
** Write 1 paragraph (or more if you really want) reflecting on your experiences with the Fauvel seminar last week. Please post to Piazza using #assignment4. Feel free to start a new thread, or join an existing thread to chime in on someone else's post. (If you do the latter, please offer thoughtful commentary in response to their post, rather than posting something unrelated without starting a new thread.) You can write about any aspect you want, but here are some ideas for starting points:
*** Did the seminar change any of your thinking around what "interactive technology" is (and has been, historically)?
*** Musicology involves ways of studying and reasoning about the world that are quite different from those used in computer science. What might be some of the practices, perspectives, or research goals we have in common? How might computer scientists benefit from understanding more about musicology or other humanities disciplines?
*** Did your ideas about how to digitize Fauvel -- either for scholars or for the public -- change in any surprising and interesting ways following the two seminar sessions led by musicologists?
* <b>Assignment 5, due 10/23</b>
** Build at least one gesturally-controlled instrument using [http://wekinator.cs.princeton.edu/ Wekinator], with the controller(s) and synthesis method of your choosing. (For a toy illustration of the kind of learned mapping Wekinator builds, see the sketch after this assignment.)
** Post a response to Piazza (using #assignment5), including the following: 1) Describe what controller, gestures, learning algorithm(s), and synthesis method you used. 2) Reflect on what was easy, what was difficult, and how you might improve the software. 3) Also reflect on how the experience of building with Wekinator compared to your previous assignment of building a mapping explicitly using programming.
** Be sure to read the [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions Wekinator instructions] to help you get started. Please run the walkthrough ahead of time to verify that the code works for you. We have about a 95% success rate on running on arbitrary machines, but if your machine happens to be Wekinator-unfriendly, we'll want to know ASAP. (Also, note that you may have to turn off firewall & antivirus software for OSC to work properly on your machine.)
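For intuition about the kind of mapping Wekinator learns, here is a toy supervised-learning sketch in Python with scikit-learn (an illustration only, not Wekinator's actual algorithm set): a handful of demonstrated controller-pose/parameter pairs train a regressor, which then maps any new controller reading to synthesis parameters.

<pre>
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstrations recorded while posing the controller:
# each row of X is a controller reading (e.g., tilt_x, tilt_y);
# each row of Y is the synth parameters wanted at that pose.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
Y = np.array([[220.0, 0.1], [220.0, 0.9], [880.0, 0.1], [880.0, 0.9]])

model = KNeighborsRegressor(n_neighbors=2).fit(X, Y)

# At performance time, every incoming reading becomes (freq, gain).
freq, gain = model.predict([[0.3, 0.7]])[0]
print(f"freq={freq:.1f} Hz, gain={gain:.2f}")
</pre>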
* <b>Assignment 6, due 10/25</b>
** Look through the table of contents for at least one of the conference proceedings or journals below, covering at least the last 2 years. Choose a conference/journal that you expect to be closely related to your final project.
** Write a response that includes the following: 1) A description of the type of work you generally find published at this venue (and what, if anything, is surprisingly absent). 2) A list of 10 papers that you find exciting, intriguing, or potentially useful. Provide a 1-sentence description of each one, and say briefly why you've included it. You <b>don't</b> (necessarily) have to read these papers-- it's fine to base your choice on the title and abstract. Post your response on Piazza using #assignment6.
** Possible venues:
*** [http://quod.lib.umich.edu/i/icmc/ International Computer Music Conference (ICMC)]: A diverse mixture of technical and artistic work, held yearly. Includes just about any topic related to computer music.
*** [http://www.nime.org/archive/ International Conference on New Interfaces for Musical Expression (NIME)]: NIME grew out of the ACM CHI (human-computer interaction) conference, and it's mostly centered around hardware and software interfaces for performance and composition. However, there is a good mixture of other topics as well. (Note that 2012 proceedings might not be posted yet at the above link; 2012 is available [http://www.eecs.umich.edu/nime2012/Proceedings/NIME2012WebProceedings.html here].)
*** [http://www.dafx.de/ International Conference on Digital Audio Effects (DAFx)]: DAFx certainly includes some signal-processing-heavy research on cool-sounding audio effects, but it also includes broader topics like analysis (and synthesis) algorithms, spatial audio, interactive performance issues, audio perception, and others.
*** [http://www.computermusicjournal.org/ Computer Music Journal]: A journal with very diverse content, spanning both technical and artistic considerations in computer music. Breadth is similar to ICMC. Sometimes there are special issues on particular topics-- see the website if any special issues appeal to you!
*** [http://journals.cambridge.org/action/displayJournal?jid=OSO Organised Sound]: Another interdisciplinary journal; it can be more focused on musical issues than technical issues, but still includes a wide range of topics. Same as CMJ with respect to special issues.
*** [http://www.ismir.net/all-papers.html International Conference on Music Information Retrieval (ISMIR)]: Includes not just music "information retrieval," but music informatics more generally. Lots on audio analysis, most (but not all) of it not targeted at performance. The interactive aspects of systems are not usually explicitly considered, but there are definite exceptions. Lots of cool machine learning work here.
*** [http://icad.org/ International Conference on Auditory Display (ICAD)]: A conference not focused on music per se, but on sonification in many forms. There is considerable attention to human perception of audio, as well as focus on contexts in which sonification is useful.
*** [http://smcnetwork.org/ Sound and Music Computing Conference (SMC)]: Another international conference with a very broad focus on many issues related to sound, music, and computing. Lots of interesting stuff here. (Note that SMC offers a "summer school" session before each conference, targeted at students in the field-- something to consider!)
*** [http://cmmr2012.eecs.qmul.ac.uk/ International Conference on Computer Music Modeling and Retrieval (CMMR)]: A conference that is maybe not quite as broad as ICMC, focused on modeling and retrieval. Includes music emotion analysis, spatial audio, synthesis, computer models of perception and cognition, music information retrieval, computational musicology, and others. (You'll have to google for each year individually to find proceedings by year.)
*** If there is another venue you think would be appropriate to add, just say so (on Piazza or in class).
* <b>Written final project proposals due November 4.</b> More info will be posted soon & discussed in class.
* You will be scheduling a 30-minute meeting to discuss your project proposal for the week of <b>November 5</b>.
  
 
== Schedule ==

* 13 September: First day!
** Course overview & introduction
** [http://www.cs.princeton.edu/courses/archive/fall12/cos597B/protected/lec09-13.pdf Link to lecture slides]
** A brief history of computers in music composition, performance, listening, and scholarship
** <b>Assignment 0, due 9/18:</b>
* 18 September:
** Tutorial: Physics, perception, and digital representations of sound and music
*** Leader: Rebecca
*** [http://www.cs.princeton.edu/courses/archive/fall12/cos597B/protected/lec09-18.pdf Link to lecture slides]
** Reading: [http://music.columbia.edu/cmc/musicandcomputers/ Music and Computers], Chapters 1 and 2
*** (This is a tutorial, not a paper for discussion. No discussion leader needed.)
*** Response link: https://docs.google.com/spreadsheet/embeddedform?formkey=dEJxX1J1ckZzYXNTUUp0S3JWZThZekE6MQ
*** (A small sampling/quantization sketch follows this entry.)
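If the digital-representation material is new to you, a small numeric experiment (an illustration in Python, not taken from the reading) makes sampling and quantization concrete: sample a sine wave at rate SR, round it to a given bit depth, and look at the error that low resolutions introduce.

<pre>
import numpy as np

SR = 8000                        # sampling rate (Hz)
f = 440.0                        # tone frequency, safely below Nyquist (SR / 2)
t = np.arange(0, 0.01, 1 / SR)   # 10 ms of sample times
x = np.sin(2 * np.pi * f * t)    # the sampled "analog" signal

def quantize(signal, bits):
    """Round each sample to the nearest of 2**bits levels spanning [-1, 1]."""
    levels = 2 ** (bits - 1)
    return np.round(signal * levels) / levels

for bits in (3, 8, 16):
    err = np.max(np.abs(x - quantize(x, bits)))
    print(f"{bits:2d}-bit quantization: max error = {err:.6f}")
</pre>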
** <b>Assignment 1, due 9/20</b> (see above)
* 20 September: Synthesis algorithms & brief history of live electronic music
** Tutorials: 5-minute lightning tutorials on synthesis methods
*** Additive synthesis: Katie Wolf
*** Subtractive synthesis: Jeff Snyder
*** Granular synthesis: Alejandro Van Zandt-Escobar
*** Wavetable synthesis: Rahulram Sridhar
*** Physical modeling synthesis: Ohad <br /> [http://music.columbia.edu/cmc/musicandcomputers/chapter4/04_09.php Demo shown in class]
*** Waveshaping synthesis: Reid Oda
** Reading: Nicolas Collins, "Live electronic music" (Chapter 3 of <i>The Cambridge Companion to Electronic Music</i>, 2011)
*** To view this book chapter online, go to http://library.princeton.edu/, type "The Cambridge companion to electronic music" into the "Books+" search box, and select the first result. This chapter is within the book section called "Electronic music in context."
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dFpRY2lnRnFSR0Fmcm4waVMzSmJDLWc6MA
*** Discussion leader: Jeff Snyder
** If time permits, in-class discussion of Assignment 1
* 25 September: Music synthesis and programming environments
** Tutorials: 5-minute lightning tutorials on music programming environments
*** Max/MSP: Joe Tylka
*** pd: Avneesh
*** ChucK: Sasha
*** SuperCollider: Danielle Bragg
** Reading 1: Chris Chafe, [https://ccrma.stanford.edu/~cc/lyon/historyFinal.pdf "A Short History of Digital Sound Synthesis by Composers in the United States"]
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dDRGTEk3czAtanpNUTlmTWxaY2EwNXc6MA
*** Discussion leader: Alejandro Van Zandt-Escobar
** Reading 2: Ge Wang, "A history of programming and music"
*** This is another chapter of <i>The Cambridge Companion to Electronic Music</i>. See online access instructions above.
*** Discussion leader: Daniel Ryan
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dHdVcEJSaWw4R0hyWndjUDJkYThublE6MA
* 27 September: Introduction to Gestural Control of Sound
** System/Work of the day: Alejandro Van Zandt-Escobar - Serato Live
** Read: Bert Bongers, [http://www.create.ucsb.edu/~dano/594O/PhysicalInteractionBongers.pdf "Physical Interfaces in the Electronic Arts"]
*** Discussion leader: Nikitas Tampakis
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dHVJRUw2cnRhRVJ1YThvSXJUenN4aXc6MA
** Read: Wanderley and Depalle, [http://www.cim.mcgill.ca/~jer/courses/hci/ref/Wanderley_DePalle.pdf "Gestural Control of Sound Synthesis"]
*** Discussion leader: Rahulram Sridhar
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dFFNcWZzRVZQeGVHeU4zWGRrRlB0U1E6MA
* 2 October: Musicology crash-course on Fauvel
** <b>Assignment 2 due (post to Piazza: see instructions above)</b>
** Guest lecturer: Anna Zayaruznaya (Music @ Princeton)
** Reading: Introduction to <i>Fauvel Studies</i>, plus lines 1-666 of the poem (English translation + Gallica manuscript + French if possible) (posted on Piazza)
*** No discussion leader needed
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dHRUSmJoNVg0ZkEwNGMxa1hkV3QwbVE6MA
* 4 October: Meeting with Fauvel seminar: Mise en Page
* 9 October: Mappings for digital musical instruments
** <b>Assignment 3 due (see above)</b>
** Brief discussion on Fauvel
** System/Work of the day: Edward
** Reading 1: Hunt and Wanderley, [http://digitalmusicsstudio.wikispaces.asu.edu/file/view/Mapping+Performer+Parameters.pdf "Mapping performer parameters to synthesis engines,"] <i>Organised Sound</i> 7(2): 97–108, 2002.
*** Discussion leader: Katie Wolf
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dC1vdUlZaU9ITHRZWUVSV0xGMkZ0dnc6MA#gid=0
** Reading 2: Cort Lippe, [http://www.music.buffalo.edu/faculty/lippe/pdfs/Japan-2002.pdf Real-Time Interaction Among Composers, Performers, and Computer Systems], <i>Information Processing Society of Japan, SIG Notes</i>, 123:1–6, 2002.
*** Discussion leader: Abu Saparov
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dGRYUVhIRU05Yk94MGxrOFNfMWdQeEE6MA#gid=0
** Supervised learning overview & Wekinator demo, time permitting (Rebecca)
*** TODO: Post slides
* 11 October: Gesture analysis algorithms and tools: Part 1 of 2
** <b>Assignment 4 due</b> (see above)
** System/Work of the day: Sasha (ReplayGain)
** Tutorial: 5-to-10-minute overview on HMMs
*** Leader: PLEASESIGNUP
** Reading 1: Fiebrink, R., D. Trueman, and P. R. Cook. [http://www.cs.princeton.edu/%7Efiebrink/publications/FiebrinkTruemanCook_NIME2009.pdf “A meta-instrument for interactive, on-the-fly machine learning.”] Proceedings of New Interfaces for Musical Expression (NIME), Pittsburgh, June 4–6, 2009.
*** Response URL: https://docs.google.com/spreadsheet/viewform?formkey=dG5PaGNfRTBMbzdvTk5oT2twaDVDcXc6MA
*** Discussion leader: Rahulram Sridhar
** Reading 2: N. Gillian, R. B. Knapp, and S. O’Modhrain. [http://www.nickgillian.com/papers/Gillian_NDDTW.pdf "Recognition of multivariate temporal musical gestures using n-dimensional dynamic time warping."] In Proceedings of the 2011 International Conference on New Interfaces for Musical Expression (NIME11), Oslo, Norway, 2011. (A minimal DTW sketch follows this day's entry.)
*** Response URL: https://docs.google.com/spreadsheet/viewform?formkey=dEloWU9ZZVVpOHJSMzNBLTdYRG1WOVE6MA
*** Discussion leader: Edward Zhang
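As a companion to the Gillian et al. reading, here is a minimal dynamic time warping distance in Python (plain 1-D DTW, not the paper's N-dimensional variant): the cumulative-cost table lets the same gesture performed at different speeds still score as similar.

<pre>
import numpy as np

def dtw_distance(a, b):
    """Cost of the best time-aligned match between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

slow = [0.0, 0.1, 0.1, 0.5, 0.9, 1.0, 1.0]   # a gesture performed slowly
fast = [0.0, 0.5, 1.0]                        # the same shape, performed quickly
print(dtw_distance(slow, fast))               # small despite the tempo difference
</pre>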
* 16 October: Gesture analysis algorithms and tools: Part 2 of 2
** System/Work of the day: Katie Wolf
** Tutorial: 5-to-10-minute overview on neural networks (see the toy sketch after this day's entry)
*** Leader: Jennifer Guo
*** Optional reference (early use of NNs in music): M. Lee, A. Freed, and D. Wessel. [http://pdf.aminer.org/000/313/669/the_mixture_of_neural_networks_adapted_to_multilayer_feedforward_architecture.pdf Neural networks for simultaneous classification and parameter estimation in musical instrument control]. Adaptive and Learning Systems, 1706:244–55, 1992.
** Reading 1: F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guedy, and N. Rasamimanana. [http://www.springerlink.com/content/v8pn885111256625/?MUD=MP Continuous realtime gesture following and recognition]. <i>Springer Lecture Notes in Computer Science, Volume 5934, Gesture in Embodied Communication and Human-Computer Interaction</i>, pages 73–84, 2010. <b>Please be sure to download directly from SpringerLink, as not all PDFs you'll find online include the complete text.</b>
*** Discussion leader: Nikitas Tampakis
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dEtTNDFKOEpUYnpfeU9GaXoxa19Fc3c6MA
** Reading 2: N. Rasamimanana, E. Flety, and F. Bevilacqua. [http://www.springerlink.com/content/h710741265086351/ Gesture analysis of violin bow strokes]. In Proceedings of Gesture Workshop 2005 (GW05), pages 145–155, 2005.
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dHJxZFBaNGptMXVGV1VUVnRxelBEcGc6MA
*** Discussion leader: Alejandro Van Zandt-Escobar
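In the spirit of the Lee, Freed, and Wessel reference above (a toy sketch with scikit-learn, not their code), a small feedforward network can learn a smooth nonlinear mapping from gesture features to a synthesis parameter from just a few demonstrations:

<pre>
import numpy as np
from sklearn.neural_network import MLPRegressor

# Demonstrations: 2 gesture features in, 1 synthesis parameter out
# (e.g., a normalized filter cutoff).
X = np.array([[0.0, 0.0], [0.2, 0.9], [0.5, 0.4], [0.9, 0.1], [1.0, 1.0]])
y = np.array([0.0, 0.3, 0.5, 0.7, 1.0])

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

# The trained network interpolates between the demonstrated poses.
print(net.predict([[0.4, 0.6]]))
</pre>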
* 18 October: Composing the instrument; alternative views of interaction
** System/Work of the day: Avneesh Sarwate
** Reading 1: Fiebrink, R., D. Trueman, C. Britt, M. Nagai, K. Kaczmarek, M. Early, M.R. Daniel, A. Hege, and P. R. Cook. 2010. [http://www.cs.princeton.edu/~fiebrink/publications/Fiebrink_etal_ICMC2010.pdf “Toward understanding human-computer interactions in composing the instrument.”] Proceedings of the International Computer Music Conference (ICMC).
*** Discussion leader: Jennifer Guo
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dFd5LWlWSjVVWTZPaTVaVWNUR3lkY3c6MA
** Reading 2: J. Drummond. 2009. [http://www.sfu.ca/~eigenfel/Drummond-Understanind%20Interactive%20Systems.pdf Understanding interactive systems.] <i>Organised Sound</i>, 14(2):124–133.
*** Discussion leader: Sasha Koruga
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dGt5X01xUlpacjJGdVVZck1qYkhENVE6MA
* 23 October: State-of-the-art DMIs
** System/Work of the day: Jennifer Guo
** <b>Assignment 5 due</b> (see above)
** Readings: <b>Please read the abstracts of ALL the following papers, then pick two of the papers to read.</b> Submit responses for the two papers [https://docs.google.com/spreadsheet/viewform?formkey=dFp2eWlQeVJxMUdEX3lNZFZLOUp3SlE6MA using this form].
*** 1. Vamvakousis, Z. and R. Ramirez. 2012. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/215_Final_Manuscript.pdf Temporal Control In the EyeHarp Gaze-Controlled Musical Interface]. <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>. (This is an eye-controlled instrument, usable by people with motion impairments/paralysis. Please check out the video at http://www.youtube.com/watch?v=XyU8FyB0nZ8)
*** 2. Wang, J., N. d'Alessandro, S. Fels, and R. Pritchard. 2012. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/291_Final_Manuscript.pdf Investigation of Gesture Controlled Articulatory Vocal Synthesizer using a Bio-Mechanical Mapping Layer]. <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>. (This is pretty much what it sounds like. Please check out the video at http://www.youtube.com/watch?v=p2pAKMTNuWE)
*** 3. Mitchell, T., S. Madgwick, and I. Heap. 2012. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/272_Final_Manuscript.pdf Musical Interaction with Hand Posture and Orientation: A Toolbox of Gestural Control Mechanisms]. <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>. (This is a technical paper on the glove controller used by musician Imogen Heap, created in collaboration between Heap and some music technology academics. See http://www.wired.co.uk/magazine/archive/2011/10/how-to/how-to-make-music-with-gestures for one appearance in the popular press.)
*** 4. Gillian, N. and J. Paradiso. 2012. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/248_Final_Manuscript.pdf Digito: A Fine-Grain Gesturally Controlled Virtual Musical Instrument]. <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>. (This project uses the Kinect depth sensor not for skeleton tracking, but for fine-grained hand gestures. Please check out the video at https://vimeo.com/43068363)
*** 5. Smith, B. and G. Garnet. 2012. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/68_Final_Manuscript.pdf Unsupervised Play: Machine Learning Toolkit for Max]. <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>. (This paper provides a toolkit for unsupervised machine learning in music. The algorithms are complementary to those discussed previously in class. You can download it and view examples online at http://ben.musicsmiths.us/ml.phtml.)
*** 6. Donnarumma, M. 2011. [http://marcodonnarumma.com/publications/marco-donnarumma_xth-sense_a-study-of-muscle-sounds_ICMC2011.pdf Xth Sense: a study of muscle sounds for an experimental paradigm of musical performance.] Proceedings of the International Computer Music Conference (ICMC). (Xth Sense allows the performer to control sound via the <b>sounds of his own muscles</b>. This instrument won the 2012 [http://www.gatech.edu/newsroom/release.html?nid=110311 Guthman musical instrument competition] at Georgia Tech. Watch some video [https://vimeo.com/37921373#at=0 here].)
* 25 October: Wrap-up on gesturally-controlled instruments; discussion of Assignment 6 and final projects
** <b>Assignment 6 due</b> (see above)
* <b>30 October and 1 November: Fall break</b>
* <b>4 November: Written project proposals due</b>
* <b>Week of 5 November: Schedule a 30-minute meeting to discuss your project proposal. [http://tinyurl.com/9dbnjaw Sign up here.]</b>
* 6 November: Score following
** <b>Today's readings are a bit different-- both are by the same author, but one gives a technical presentation for an IEEE audience and one discusses the problem for an audience of musicians & music technologists.</b> Please read the IEEE paper to get as much of an understanding of the technical detail as possible, and read the SMC paper to get a feel for the argument the author is making to that audience (i.e., spend more time on Reading 1). Respond to both using the same response link.
** Reading 1: A. Cont. [http://hal.archives-ouvertes.fr/docs/00/47/97/37/PDF/PAMI5.pdf A coupled duration-focused architecture for real-time music-to-score alignment.] <i>IEEE Transactions on Pattern Analysis and Machine Intelligence</i>, 32(6):974–987, 2010.
** Reading 2: A. Cont. [http://hal.inria.fr/docs/00/69/25/75/PDF/ArshiaCont_SMC2011_1.pdf On the creative use of score following and its impact on research]. In <i>Proceedings of SMC 2011: 8th Sound and Music Computing Conference, Padova, Italy</i>, July 2011.
** Discussion leader: Reid Oda
** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dGtiR2VvRFJkR2djcjFvMl8yczZCVmc6MA
** System/Work of the day: PLEASESIGNUP
* 8 November: Improvisatory systems
** Reading 1 (of 1): F. Pachet. [http://www.tandfonline.com/doi/abs/10.1076/jnmr.32.3.333.16861 The continuator: Musical interaction with style.] <i>Journal of New Music Research</i>, 32(3):333–341, 2003. <b>Please be sure to download the JNMR version of the paper and not the earlier conference version!</b>
*** Discussion leader: Edward Zhang
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dDBVbUcyY2JlcXJ1Y2xXOGlUMTAxeVE6MA
** Possible informal class discussion of projects, if time allows.
* 13 November: Laptop orchestras
** Reading 1: D. Trueman. [http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=1185276 Why a laptop orchestra?] <i>Organised Sound</i>, 12(2):171–179, 2007.
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dDAtVE5wZHJZZTEzUzRxb3pjcWxjY3c6MA
*** Discussion leader: Nikitas Tampakis
** Reading 2: L. Dahl. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/259_Final_Manuscript.pdf Wicked problems and design considerations in composing for laptop orchestra]. In <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>, 2012.
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dFUwNTFyS2FfcmpIZFBtb1hkRmFzb2c6MA
*** Discussion leader: Katie Wolf
** System/Work of the day: Jennifer Guo
* 15 November: Mobile & social music
** "Reading" 1: Watch [http://www.youtube.com/watch?v=uHtCAAj8jFI Ge Wang, "Breaking Barriers With Sound," Google Tech Talk, November 2010]. <b>It is recommended that you watch this before reading the paper, since the talk includes some demos of the apps mentioned in the paper.</b>
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dG9iV0U1dUwxZGExTkFtbUFNcWZNQmc6MA
** Reading 2: Hamilton, R., J. Smith, and G. Wang. 2011. [https://ccrma.stanford.edu/groups/mcd/publish/files/2011-lmj-social.pdf "Social Composition: Musical Data Systems for Expressive Mobile Music."] <i>Leonardo Music Journal</i>, Vol. 21, pp. 57–64.
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dGNZYkxCOXF1eS1yN3E5MFdTck1HVWc6MA
*** Discussion leader: Jennifer Guo
** System/Work of the day: Joe Tylka
* 20 November: Percussion & networked collaboration
** Reading 1: M. Sarkar and B. Vercoe. 2007. [http://web.media.mit.edu/~mihir/documents/mihir_tablanet_nime2007.pdf Recognition and prediction in a network music performance system for Indian percussion.] In <i>Proceedings of the 7th International Conference on New Interfaces for Musical Expression (NIME ’07)</i>, pages 317–320.
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dDJaVGlhUEZlOGpYbm5laGozSU9WUVE6MA#gid=0
*** Discussion leader: Reid Oda
** Reading 2: Derbinsky, N. and G. Essl. 2012. [http://web.eecs.umich.edu/~gessl/georg_papers/NIME12-Drum.pdf “Exploring Reinforcement Learning for Mobile Percussive Collaboration.”] In <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>.
*** Response link: https://docs.google.com/spreadsheet/viewform?formkey=dFREelc3ai1QdXRQU2VGcC1OSVF0ZUE6MA#gid=0
*** Discussion leader: Alejandro Van Zandt-Escobar
** System/Work of the day: Avneesh Sarwate
* <b>22 November: Thanksgiving, no class</b>
* 27 November: No readings to do!
** Joe Tylka: Tutorial on 3D sound
** Invited guest: Jeff Snyder, Technical Director of Music at Princeton (not the undergrad)
* 29 November: Music production
** Reading 1: B. Pardo, D. Little, and D. Gergle. [http://www.eecs.umich.edu/nime2012/Proceedings/papers/74_Final_Manuscript.pdf Towards speeding audio EQ interface building with transfer learning.] In <i>Proceedings of the International Conference on New Interfaces for Musical Expression (NIME)</i>, 2012.
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dEFSd20tMmYxdFZpY2s3UVV4YlRaOVE6MA#gid=0
*** Discussion leader: Daniel Ryan
** Reading 2: J. Reiss. [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6004988 Intelligent systems for mixing multichannel audio]. In <i>Proceedings of the 17th International Conference on Digital Signal Processing (DSP)</i>, July 2011.
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dGZKc1VseXJ5cXFKQ0sxX2ZvQ1BsRnc6MA#gid=0
*** Discussion leader: Joe Tylka
** System/Work of the day: Danielle Bragg
* <b>Week of 3 December: [https://wass.princeton.edu/pages/viewcalendar.page.php?cal_id=1761&view=week&st_dt=2012-11-25&makeapp=1 Schedule a 30-minute meeting to discuss your project progress]</b>
* 4 December: Introduction to Music Information Retrieval
** Reading 1: Chapters 3 and 4 from N. Orio. [http://mlsp.cs.cmu.edu/courses/fall2012/lectures/Music_retr_tutorial.pdf Music retrieval: A tutorial and review.] <i>Foundations and Trends in Information Retrieval</i>, 1(1), 2006.
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dGJ6REN0bFM0RFpleFBZOWdpQzQwRVE6MA
** Reading 2: Please pick any paper from [http://ismir2012.ismir.net/event/programme ISMIR 2012], on a topic of your choosing (except for the papers we're reading on 6 December, below).
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dDQwTzNESXBvUUNYQmo4MURaMDg5WUE6MA#gid=0
** Discussion leader: Abu Saparov
** System/Work of the day: Rahulram Sridhar
* 6 December: Recent MIR reflection papers
** Reading 1: M. Schedl and A. Flexer. [http://ismir2012.ismir.net/event/papers/385-ismir-2012.pdf Putting the user in the center of music information retrieval]. In <i>Proceedings of the International Conference on Music Information Retrieval (ISMIR)</i>, 2012.
*** Discussion leader: Sasha Koruga
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dEhBcTh2TTJPMnZPS2l3bGR3WjIzdVE6MA#gid=0
** Reading 2: E. J. Humphrey, J. P. Bello, and Y. LeCun. [http://ismir2012.ismir.net/event/papers/403-ismir-2012.pdf Moving beyond feature design: Deep architectures and automatic feature learning in music informatics.] In <i>Proceedings of the International Conference on Music Information Retrieval (ISMIR)</i>, 2012.
*** Discussion leader: Ohad
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dFRIa1dUUkZTY2tCcVJGTWlEbkY2SEE6MA#gid=0
** System/Work of the day: Nikitas Tampakis
* 11 December: "Creativity support tools" in HCI
** Reading 1: B. Shneiderman. [http://dl.acm.org/citation.cfm?id=1323689 Creativity support tools: Accelerating discovery and innovation]. <i>Communications of the ACM</i>, 50:20–32, December 2007.
*** Discussion leader: Danielle Bragg
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dDNoUGRYYWo1N2JYNjUyUnFDMG92eEE6MA#gid=0
** Reading 2: E. Carroll, C. Latulipe, R. Fung, and M. Terry. [http://dl.acm.org/citation.cfm?id=1640255 Creativity factor evaluation: Towards a standardized survey metric for creativity support]. In <i>Proceedings of ACM Creativity & Cognition</i>, pages 127–136, 2009.
*** Discussion leader: Avneesh Sarwate
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dGw5S19SU0FDN3g5dlNHUnZFSjE1N3c6MA#gid=0
** System/Work of the day: Sasha Koruga - NASA Task Load Index
* 13 December:
** Reading: D. Stowell, A. Robertson, N. Bryan-Kinns, and M. D. Plumbley. [http://www.sciencedirect.com/science/article/pii/S107158190900069X Evaluation of live human–computer music-making: Quantitative and qualitative approaches]. <i>International Journal of Human-Computer Studies</i>, 67:960–975, 2009.
*** Discussion leader: Avneesh Sarwate
*** Response: https://docs.google.com/spreadsheet/viewform?formkey=dGVzcnJaa0dWaG5OVG9HWjZQLWRDRWc6MA#gid=0
** Wrap-up discussion
* <b>Winter break!</b>
* 8 January
** Final project presentations
* ???
  
== Tutorial topics ==

* Possible topic overviews:
** Programming tools (Max/MSP, ChucK, SuperCollider, ?)
** Music production / studio tools & practices (e.g., Logic demo; overview of editing, mixing, mastering processes)
** BCIs & biosignals for music
** ML tools (e.g., Marsyas, Wekinator, Gesture Follower)
** Sound synthesis methods (e.g., additive, wavetable, waveshaping?, subtractive?, physical modeling, FM, granular) (high-level overview)
** Summarize the state of the art regarding gesture analysis for conducting, dance (e.g., Laban analysis), or ancillary gestures of instrumentalists.
** Kinect basics (how it works, how to program for it)
** Basic real-time audio processing methods (not synthesis) (e.g., vocoders, Autotune, other live effects)?
** Summarize the state of the art regarding audio analysis of some sort (e.g., onset detection, beat tracking, pitch tracking)
** Summarize the state of the art regarding MIR topics (e.g., recommendation, tagging, playlist generation, collection visualization)
* Possible technical overviews:
** Digital audio (representation of audio in a computer, sampling & quantizing, Nyquist's theorem)
** Audio feature extraction
** Machine learning topics: classification, neural networks, graphical models
** HMMs, DTW
** ??

== Possible "cool systems" to highlight ==

* Reactable
* Theremin
* Ondes Martenot
* The Hands
* George Lewis' Voyager
* Monome
* The Continuator
* ???

== General Research Resources ==

* Course Piazza site: https://piazza.com/class#fall2012/597b
* Leading a discussion:
** Our discussions will focus on both the technical points in a paper (i.e., <i>how</i> was something done?) as well as a broader critical examination (why was it done? how is this work useful? what other questions does it raise? what are some shortcomings? etc.).
** [http://www.zotero.org/ Zotero] (free)
** [http://www.mekentosj.com/papers/ Papers]

== Resources for Topics Discussed in Class ==

* [http://www.cs.princeton.edu/courses/archive/fall12/cos597B/protected/ Slides from class]
* Recommended books on related topics
** [http://www.amazon.com/Digital-Signal-Processing-Primer-Applications/dp/0805316841 A Digital Signal Processing Primer: With Applications to Digital Audio and Computer Music] by Ken Steiglitz
** [http://www.amazon.com/Music-Cognition-Computerized-Sound-Psychoacoustics/dp/0262531909/ Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics] by Perry Cook
** [http://www.amazon.com/Electric-Sound-Promise-Electronic-Music/dp/0133032310/ Electric Sound: The Past and Promise of Electronic Music] by Joel Chadabe
* Online resources for sound synthesis
** Students: Add your tutorial links here
** Additive Synthesis:
*** [http://music.columbia.edu/cmc/musicandcomputers/chapter4/04_02.php Music and Computers, Chapter 4.2: Additive Synthesis]
** Wavetable Synthesis (see also the oscillator sketch below):
*** [http://www.mobileer.com/wp/direct_vs_wavetable.pdf Direct vs. Wavetable Synthesis] by Phil Burk
*** [http://www.musicdsp.org/files/Wavetable-101.pdf Wavetable Synthesis 101, A Fundamental Perspective] by Robert Bristow-Johnson
*** [http://www.music.mcgill.ca/~gary/307/week4/wavetables.html#fig:sinetable Wavetable Synthesis] by Gary P. Scavone, McGill University
*** [http://www.youtube.com/watch?v=7p06xVzE4yQ Understanding Wavetable Synthesis] (YouTube tutorial)
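To complement the wavetable links above, here is a minimal single-cycle wavetable oscillator in Python (an illustration of the basic technique, not taken from any of the linked tutorials): a stored cycle is read with a fractional phase increment, and linear interpolation between adjacent table entries reduces the lookup error that the Burk and Bristow-Johnson papers analyze.

<pre>
import numpy as np

TABLE_SIZE = 1024
SR = 44100
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)  # one sine cycle

def wavetable_osc(freq, num_samples):
    """Read the table at a fractional rate, linearly interpolating samples."""
    out = np.empty(num_samples)
    phase = 0.0
    inc = freq * TABLE_SIZE / SR        # table entries to advance per sample
    for n in range(num_samples):
        i = int(phase)
        frac = phase - i
        out[n] = (1 - frac) * table[i] + frac * table[(i + 1) % TABLE_SIZE]
        phase = (phase + inc) % TABLE_SIZE
    return out

samples = wavetable_osc(440.0, SR // 10)    # 0.1 s of a 440 Hz tone
print(samples[:5])
</pre>

Swapping the sine table for any other single-cycle waveform (or crossfading between several tables) is what gives wavetable synths their characteristic morphing timbres.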
* Online resources for music programming environments
** [http://en.wikipedia.org/wiki/Comparison_of_audio_synthesis_environments Software Comparison Wiki]
** [http://cycling74.com/docs/max6/dynamic/c74_docs.html#docintro Max/MSP developer's documentation & tutorials]
** [http://www.youtube.com/watch?v=5RYy8Cvgkqk Max/MSP user tutorials]
* Timbre perception
** Grey, John M. 1977. "Multidimensional Perceptual Scaling of Musical Timbres." <i>The Journal of the Acoustical Society of America</i>, 61(5):1270–77.
** Caclin, A., S. McAdams, B. K. Smith, and S. Winsberg. 2005. "Acoustic correlates of timbre space dimensions: A confirmatory study using synthetic tones." <i>The Journal of the Acoustical Society of America</i>, 118(1):471–82.
* Other?
