Revision as of 12:23, 15 October 2009 by RebeccaFiebrink
Welcome to the wiki home of the unofficial 2009 music & machine learning workshop. Who knows what will become of this page?
- Rebecca Fiebrink, organizer
- Dan Trueman
- Anne Hege
- Cameron Britt
- Michelle Nagai
- Konrad Kaczmarek
- MR Daniel
- Seth Cluett
- Michael Early
Stuff we want to try out
- pitch tracking
- percussion gesture identification
- general new instrument building (mapping)
- add your request to this list!
Stuff we want to find out more about
- Add your request to this list. For example: history of machine learning in NIME/ICMC/ISMIR; details of how one or more algorithms actually work; etc.
Bugs / Feature requests
- File I/O-based communication (as an alternative to OSC for large feature vectors)
- Allow synthesis module to communicate current parameters to GUI (just once, not in play-along mode), to allow parameter text boxes to be auto-filled (Konrad's idea)
- Ability to designate N-M mappings (not All-All), e.g. ability to train one classifier on vocal features and one on gesture features (Anne's request)
- Integrate MFCC feature extractor, possibly ramp time feature extractor (Cameron's request, ref. Adam Tindale's paper)
- Debug mode flag so that users don't get scared when they see code messages being printed out
- Add a way to examine the current training dataset (useful for debugging); it would also be good to be able to edit it (for fine-grained fixes)
- Allow training of 1 parameter (classifier) at a time (Konrad's request)
- Ability to work with multiple channels of audio input, for use both with audio and with sensors (Cameron)
- This could start as another number field in the feature extractor GUI
- Document the fact that GUI ports are for ChucK-to-GUI communication only; a different port must be used for Max-to-ChucK synth communication, for example
- Nice picture of architecture, what communicates with what and how
- Tip: prefix with ./ when running a .sh file from the terminal
- Tip: restart after changing OSC ports, and don't run miniAudicle concurrently if having OSC problems
- Clear API for play-along learning synth class (update this from what's currently on the webpage)
- Clear OSC API for updated system (feature extractors and synthesizers)
- ML bibliography page (Seth's request)
- More on how to implement your own feature extractor in Max/MSP (for Dan's 3D knob)
- A clear stub / skeleton for play-along learning class, chuck feature extractor, and OSC feature extractor
- Add Anne's vowel feature extractor to example set
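Since several of the requests above involve sending feature vectors to the Wekinator over OSC, here is a minimal sketch of how a vector of float features gets packed into a single OSC message, written from the OSC 1.0 spec. The `/features` address and the function names are illustrative only, not the Wekinator's actual OSC API:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    while len(b) % 4 != 0:
        b += b"\x00"
    return b

def encode_feature_message(address: str, features: list) -> bytes:
    """Pack a feature vector into one OSC message of float32 arguments."""
    msg = osc_pad(address.encode("ascii"))               # address pattern
    msg += osc_pad(("," + "f" * len(features)).encode("ascii"))  # type tags
    for f in features:
        msg += struct.pack(">f", f)                       # big-endian float32
    return msg

packet = encode_feature_message("/features", [0.5, 1.0, -0.25])
```

The resulting bytes would be the payload of one UDP datagram; libraries on the Max/MSP and ChucK sides handle this encoding for you, but seeing the layout makes it clear why very large feature vectors get costly (hence the file I/O request above).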
General topics for conversation
- Wekinator system architecture
- What is required to add your own synth class? feature extractor?
- October 5, 2:30pm
- Discuss goals for the semester
- Possible outcomes:
- Conference or journal publication
- Compositions to be performed by ourselves, with PLOrk in Spring, other?
- Community software repository: general tools for Wekinator, and/or specific to our compositions/interfaces
- Scheduling - let's do every week for now, as long as people can make it
- Possible papers to read together?
- Things we want Rebecca or someone else to lecture on?
- October 12, 2:30pm
- Walked through system architecture, what a SynthClass looks like
- Konrad showed his Max/MSP synthesis code, which integrates with a larger Max/MSP patch using pattr
- Dan brought in the magic 3D knob and hooked it up for feature extraction in Max, then we sent the features out to Wekinator via OSC and turned it into an obnoxious physical-model controller.
- Cameron brought in a drum w/ 2 pickups, verified that we could detect "hit" versus "silent" using basic audio features.
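The "hit" versus "silent" test on Cameron's drum boils down to thresholding a simple energy feature per frame. A minimal Python sketch of the idea, with illustrative frame size and threshold values (not the actual feature extractor or classifier we used):

```python
import math

def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def classify_frames(samples, frame_size=256, threshold=0.05):
    """Label each full frame 'hit' or 'silent' by comparing RMS to a threshold."""
    labels = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        labels.append("hit" if rms(frame) > threshold else "silent")
    return labels
```

In practice the Wekinator learns this boundary from labeled examples rather than using a hand-set threshold, which is exactly why even basic audio features were enough here.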
- October 19, 2:30pm
- Updates: some additions to documentation, skeleton code for you to start with
- Try out better feature extractors w/ Cameron's drum
- Discuss some interface options for data set creation / annotation