From CSWiki
Revision as of 20:18, 17 May 2012 by RebeccaFiebrink (talk | contribs) (Kinect)



What is the Wekinator? (Read this first)

A system for patching input features to output parameters

The Wekinator is, at heart, a system for patching input features to output parameters. Typically, your inputs will be gestural control signals (like an accelerometer indicating laptop tilt, or a joystick) or audio signals (like the audio captured from an acoustic flute player playing into a mic). The "features" are numbers describing the state of your inputs at a particular point in time. For example, for a joystick, we'd use a feature vector indicating whether each button is pressed and the current position of each axis. For audio, the raw audio samples are typically too complex a representation for use as a control input; instead we try to choose features that describe relevant aspects of the audio signal, such as the spectral centroid (which describes timbre).
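For instance, the spectral centroid can be computed from a block of samples as the magnitude-weighted average of the DFT bin frequencies. Here's a minimal, stdlib-only Python sketch (a naive DFT for illustration; a real feature extractor would use an FFT library):

```python
import math

def spectral_centroid(samples, sample_rate):
    """Magnitude-weighted mean frequency of a block of audio samples,
    computed with a naive DFT (illustrative only; real code uses an FFT)."""
    n = len(samples)
    freqs, mags = [], []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
        freqs.append(k * sample_rate / n)
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# A pure sine at 437.5 Hz (an exact DFT bin here) has its centroid at 437.5 Hz.
sr, n = 8000, 256
sine = [math.sin(2 * math.pi * 437.5 * t / sr) for t in range(n)]
print(spectral_centroid(sine, sr))
```

A brighter sound spreads more energy into high bins, pushing the centroid up; that single number is a far more usable control feature than the raw samples.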

At the other end of the system, you've got some parameters that you can plug into some piece of code that probably makes sound (or video, or something else, if you want), which we'll call your "synth." These parameters control your synth over time. For example, if you want your synth to play a melody, one parameter might control the pitch over time. Or if your synth is a drum machine, one parameter might control tempo, and another might control the number of loops currently playing.


Example-based learning

You, the user, specify the relationship from input features to output parameters by supplying the Wekinator with a bunch of example input/output pairs. For example, if you want to use the Wiimote accelerometer to control the pitch of a sound, you'd provide training examples of Wiimote positions/gestures along with the corresponding synth parameters. The Wekinator's job is to learn a model of the relationship between features and parameters, which will produce parameter outputs for new inputs (even those that may be different from the training examples).

The picture below illustrates how a supervised learning algorithm can create a webcam-based hand gesture labeler, using a training set consisting of example gestures (inputs) and labels (outputs). The trained model is capable of applying appropriate labels to new hand gestures.


Option 1: Classification

The type of algorithm the Wekinator can use to learn relationships from input features to output parameters is dependent on the type of parameters. A classifier such as AdaBoost or k-nearest neighbor is used to classify features into discrete categories. For a gestural control system using the motion sensor, you might want to trigger a different, specific note when the laptop is tilting left versus when it is tilting right. In the Wekinator, discrete classes are represented as integers, and they are numbered starting from 0. So your training set would be constructed by giving the Wekinator some examples of you tilting left and specifying a corresponding parameter of 0, then you tilting right and specifying a parameter of 1. You would program your synth to play one pitch (say, middle C) whenever it received a parameter value of 0, and another pitch (say, A 440) whenever it received a 1.
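To make the k-nearest neighbor idea concrete, here is a toy Python sketch of 1-nearest-neighbor classification (the Wekinator itself uses existing machine learning implementations; the tilt feature values and labels below are made up for illustration):

```python
def nearest_neighbor_classify(training, features):
    """Toy 1-nearest-neighbor classifier: return the label of the training
    example closest (in squared Euclidean distance) to the input features.
    `training` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], features))[1]

# Hypothetical 1-D tilt feature: class 0 = tilting left, class 1 = tilting right.
training_set = [([-0.9], 0), ([-0.7], 0), ([0.8], 1), ([0.6], 1)]
print(nearest_neighbor_classify(training_set, [-0.5]))  # -> 0 (a leftish tilt)
```

The trained model then maps any new tilt reading to 0 or 1, and your synth decides what each class number means (e.g., which pitch to play).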

Option 2: Neural networks

The other type of learning currently supported is a neural network, which can learn to output any real-valued parameter, not just integers. If you wanted to use a gestural control system in which the laptop motion sensor controlled your synth's pitch in a continuous way, you could use a neural network. For example, you could tilt all the way to the left and specify a parameter of 200, tilt in the middle and specify a parameter of 1000, and tilt to the right and specify a parameter of 500. Then in your synth, you could use the parameter value received and treat it as a frequency value; after doing some checking to make sure it's not below 0Hz and not above, say, 2000, you would set the frequency of an oscillator directly to this value.
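The range check described above might look like this (a hedged Python sketch; the 0–2000 Hz bounds are just the example values from the text):

```python
def safe_frequency(param, lo=0.0, hi=2000.0):
    """Clamp a model output into a legal oscillator frequency range.
    A neural network can produce values well outside the training range,
    so never feed its output to an oscillator unchecked."""
    return max(lo, min(hi, param))

print(safe_frequency(-1.0), safe_frequency(440.0), safe_frequency(500000.0))
# -> 0.0 440.0 2000.0
```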

Installing the Wekinator

Option A: Macintosh OS X: Simplest install

Option B: On Windows or Linux: Simple install

B.0. If you're on Linux, install ChucK

B.1. Download and unzip Wekinator

Grab the latest .zip of the project. (If you've been working on things yourself, updating this way will require you to re-merge your own files.)

Download it. Unzip it. Doesn't matter where, but avoid any place where a parent/ancestor directory has spaces or special characters in the directory name.

Feel free to unzip this to the same place every time you update the Wekinator to a new version, but be aware that any saved settings, learning systems, etc. that you've created in project/mySavedSettings/ will be overwritten unless you take care to copy them to another location first.

B.2. Optional: Install MiniAudicle

You may also wish to install the miniAudicle, a ChucK IDE, if you want to experiment with writing ChucK code yourself:

Option C: Installing from latest source code

Recommended only for people comfortable with the command line.

C.0. If you're on Linux, get a working version of ChucK, if you don't have one already.

Same as B.0. above.

C.1. Download and install Wekinator source from SVN

Step C.1.0 (setup: 1st time only):

  • For Mac: Set up SVN and ant by installing OS X developer tools if you haven't already. (Not sure if you've got svn? Type "which svn" in the terminal. If you've got it, you'll see a line printed that looks something like "/usr/bin/svn"; if not, you won't see anything but another command prompt.)
  • For Windows: Download and install SVN. Then download and install ant. (The ant installer may prompt you for the location of the JDK on your computer; assuming you have Java on your machine, it'll be something like C:\Program Files\Java\jdk_1.somethingsomething.) Finally, restart your computer for good measure.
  • For Linux: Get SVN and ant. You can do it.

Once you've got svn and ant, open up a terminal window and cd to a directory where you want the project to live from now on. For example you could cd to your desktop. Then type in the terminal:

svn checkout wekinator-read-only

This will grab you a copy of all the code.

Step C.1.1 Do this every time you update:

Open terminal. Go to the wekinator-read-only directory. Type:

svn update

This will pull the latest changes into your directory. If you've made modifications to the code, SVN will try to merge my changes with yours. Likewise, if you've created your own files, SVN will not move, delete, overwrite, or change them.

Step C.1.2 Do this after step C.1.1, every time you update:

In terminal, go to directory wekinator-read-only/project/java/Wekinator/. At the prompt, type:


You should see "BUILD SUCCESSFUL" after this finishes running. If you don't, make sure you've done everything else correctly; otherwise email Rebecca, because it's probably her fault.

(This creates and updates the java executable jar file, which we don't put in SVN.)

C.2. Optional: Install MiniAudicle

You may also wish to install the miniAudicle, a ChucK IDE, if you want to experiment with writing ChucK code yourself:

Quick and easy walkthrough for Mac OS X

(Tailored for people using the OSX installer.)

Run Wekinator by double-clicking on /Applications/Wekinator/Wekinator (the bird icon)


  • Click "Edit chuck configuration" and a window will pop up.
  • In the "synthesis" tab, select "use a chuck synth class" button, then click "Choose file..."
    • Select ""
    • Hit "OK" to use this chuck configuration.
  • Hit "Run" to run chuck


  • If you don't have a MacBook Air, select "Laptop motion sensor" option (i.e., you're going to use your laptop motion sensor to control the FM synthesis program)
  • OR, if you have a MacBook Air, check the box for "Webcam" with the edge tracking option, then launch the edge tracker manually by double-clicking on /Applications/Wekinator/processing/builtin_extractors/edgetracker/application.macosx/edgetracker
  • Hit "Go!"


This will set up 1 neural network for each FM synthesis parameter -- we'll just use the defaults, so don't do anything.

  • Hit "Go!"


1. Create your dataset, which will map a few laptop tilt positions to a few FM synthesis parameter settings

  • 1A. In the middle right side of the screen, click "randomize" followed by "play" to listen to a random set of FM synthesis parameters. Do this a few times until you find a sound you like.
  • 1B. Tilt your laptop (or make a gesture in front of your webcam, if using the edgetracker) and hit "Begin recording examples into dataset." After a short time (maybe .5 second), hit the button again to stop recording examples.
  • 1C. Repeat 1A and 1B with a different set of parameters and a different tilt/gesture.

2. Train to build a working model from the examples you just recorded:

  • Hit "2. Train" on the left side.
  • Hit "Train now" on the right side.

3. Run the model to perform in realtime.

  • Hit "3. Run" on the left side.
  • Gesture away and listen to the sound!

You can always add new examples by going back to step 1. If you just record additional examples, they'll make your mapping more complex. Or you can completely re-create your mapping by hitting the "clear examples" button on the "collect data" view, then recording new examples from scratch.

If you like what you made, save it by going to File -> "Save learning system". Next time you load wekinator, you can reload this learning system from the "Learning setup" tab using the "Load learning system" button. This will reload your trained models as well as all your recorded data.

Quick and easy walkthrough for Windows

(Tailored for people who installed using a version later than April 2011.)

Run Wekinator by opening [Directory where you downloaded Wekinator]/project/java/Wekinator/dist/ and double-clicking on Wekinator.jar.


  • Click "Edit chuck configuration" and a window will pop up.
  • In the "synthesis" tab, select "use a chuck synth class" button, then click "Choose file..."
    • Select ""
    • Hit "OK" to use this chuck configuration.
  • Hit "Run" to run chuck


  • Check the box for "Webcam" with the color tracking option, then launch the color tracker manually by double-clicking on [Directory where you downloaded wekinator]/processing/builtin_extractors/colortracker/
  • Choose two objects of different colors that you want to track. For example, markers or post-it notes work well. Hold the first one in front of your webcam and left-click on it in the color tracker application. Hold the second one in front of your webcam and right-click on it.
  • Hit "Go!" in the Wekinator.


This will set up 1 neural network for each FM synthesis parameter -- we'll just use the defaults, so don't do anything.

  • Hit "Go!"


1. Create your dataset, which will map a few colored object positions to a few FM synthesis parameter settings

  • 1A. In the middle right side of the screen, click "randomize" followed by "play" to listen to a random set of FM synthesis parameters. Do this a few times until you find a sound you like.
  • 1B. Hold your colored objects in some position and hit "Begin recording examples into dataset." After a short time (maybe .5 second), hit the button again to stop recording examples.
  • 1C. Repeat 1A and 1B with a different set of parameters and a different set of object locations.

2. Train to build a working model from the examples you just recorded:

  • Hit "2. Train" on the left side.
  • Hit "Train now" on the right side.

3. Run the model to perform in realtime.

  • Hit "3. Run" on the left side.
  • Gesture away and listen to the sound!

You can always add new examples by going back to step 1. If you just record additional examples, they'll make your mapping more complex. Or you can completely re-create your mapping by hitting the "clear examples" button on the "collect data" view, then recording new examples from scratch.

If you like what you made, save it by going to File -> "Save learning system". Next time you load wekinator, you can reload this learning system from the "Learning setup" tab using the "Load learning system" button. This will reload your trained models as well as all your recorded data.

Detailed walkthrough for all operating systems

0. Plug in any joysticks, controllers, etc. that you want to use.

1. Run the software.

  • If you installed using the OSX installer, double click on the Wekinator application bird icon in /Applications/Wekinator/.
  • Otherwise:
    • Find the directory where you downloaded the wekinator. For example, this directory might be /Users/rebecca/Desktop/wekinator-read-only/ (if you downloaded from SVN) or /Users/rebecca/Desktop/wekinator/ . I'm going to call this directory WEKINATOR_DIR from now on.
    • Go to WEKINATOR_DIR/project/java/Wekinator/dist/ and double-click on Wekinator.jar to run the Wekinator. (Alternatively, you can go to this directory in Terminal and type java -jar Wekinator.jar at the prompt.)

You should now see a GUI pop up on your screen.

2. Optionally, start any external feature extractors or synthesis engines that haven't been started automatically in step 1. For example, if you want to use Max/MSP to extract features from a Wiimote, launch your max patch now (see below for details on how to configure your patch to actually send the Max data to the Wekinator). Or, say you want to use Wekinator to control a Max patch or Processing animation (i.e., a "synth" that's not a ChucK class): launch that code now, too.

3. The first time you run (and any time you update Wekinator by downloading the whole project .zip), you'll have to do some configuration for your system. Click on "Edit ChucK configuration...", then select the "System" tab.

  • If you're using an older version (before April 2011) or a Linux version, you'll need to specify the location of the ChucK executable on your system. (Newer version users shouldn't have to do this.)
    • This may be in /usr/bin/chuck or C:\Windows\System32\chuck.exe
    • Hit "Choose file," navigate to this file (not a directory), and select it.
  • If you didn't use the OSX installer, you'll then need to specify the location of the Wekinator project directory, which you've just installed/downloaded. Hit "Choose directory," navigate to WEKINATOR_DIR/project (where WEKINATOR_DIR is your download location, like I said above), and select that directory.

4. You'll probably need to do some additional configuration of ChucK (both the first time you run and in the future). Hit "Edit ChucK configuration" to configure the files that ChucK launches automatically for you:

  • In the "Features (input)" tab, you can optionally enable a custom ChucK feature extractor. (Default is DISABLED.) This is what you'd use if you wrote your own piece of chuck code to extract features from something, say an arduino board, or a custom audio feature using UAnae. (Instructions for that are below.) If you did in fact do that, hit "Choose file" and select that .ck file. Otherwise leave it unchecked.
  • In the "Synthesis (output)" tab, you must choose to use either a ChucK synth class (i.e., a piece of ChucK code that makes sound) or a synth running in some other environment (e.g., Max/MSP, Processing, etc.), which receives the Wekinator's output parameters via OSC.
    • Selecting "Use a ChucK synth class" means that you're using the output of the Wekinator to control sound in a ChucK file. Hit "Choose file" and select the .ck that you want to run. Note that this file has to define a class called SynthClass that adheres to the Wekinator synth API (see below). There are some example synths ready to use in WEKINATOR_DIR/project/chuck/synths/, so you can choose one of these to start with.
    • Selecting "Use a different Max/OSC synth module" means that you're using the output of the Wekinator to control sound/video/etc. in something other than ChucK, which will receive the Wekinator's output using OSC. You'll have to explicitly specify the number of parameters (i.e., output by Wekinator, and input by your synth) and whether they are real-valued or integer-valued (the Wekinator needs to know these things in order to know which types of learning algorithms to offer you). For integer-valued parameters, Wekinator needs to know how many values are legal; for example, if your synth expects a number from 0 to 3, you have a maximum of 4 values for that parameter. Finally, you must specify whether you want only the parameters from Wekinator, or if you want an array for each parameter specifying a probability distribution over that parameter's possible values. Just choose the parameters for now; the other option is buggy! Your synth should listen for these parameters on port 12000.
  • In the "play-along learning" tab, you have the option of also loading a ChucK "playalong score." (Default DISABLED.) This is an advanced feature that allows you to create a ChucK "score" for your synth, which sets the parameters of your synth over time, bypassing the Wekinator system. This is used for play-along learning (see our ICMC 2009 demo). If you've created a score (see below), select Choose File and find that ChucK file; otherwise leave this unchecked. Make sure your score is sending the proper number and type of parameters for your synth. There are some sample scores included in WEKINATOR_DIR/project/chuck/score_players/.
  • You can save this configuration for later use, if you like it. It'll also be loaded by default next time you launch wekinator, provided there are no errors.
  • Hit OK!

5. Run the chuck component by hitting "Run" in the ChucK panel of the main GUI. You should see a status that tells you ChucK is running successfully, or be taken automatically to the "Features setup" tab.

  • Troubleshooting: If ChucK does not run successfully, you can try to debug by going to View->Console in the top menu, then try running Chuck again. Most likely you did not specify your chuck executable or wekinator chuck directory correctly in the Chuck configuration, or there is a typo in one of the ChucK files you're running.

6. Once connected, go to the "Features Setup" tab. (You should be taken there automatically.)

  • Check off any features that you want to use. This tells Wekinator what inputs you're going to use to control the sound. You can use multiple types of features.
    • For starters, try built-in features like:
      • the motion sensor (Mac only)
      • audio centroid (any OS; requires that you have working audio input; will allow you to control Wekinator using your voice -- e.g., different vowels and consonants like "ahhhh" "fffff", etc.)
      • or color tracker (any OS; requires that you launch the color tracker app separately in wekinator/project/processing/processing/builtin_extractors/colortracker/[your os name]/[executable name, e.g., colortracker.exe])
  • If you're running a custom OSC feature extractor or a custom ChucK feature extractor, check those boxes and enter the number of features being extracted.
  • Optionally, you can add "meta-features" to these features. By selecting one or more features, then clicking "add meta-features from these features..." you can also extract 1st- and 2nd-order differences (estimating velocity and acceleration), a smoothed version (for noisy inputs), and/or a history buffer (i.e., a history length of 10 will store the last 10 values of each feature). Check the boxes of the features you want to use.
  • When you're done choosing your features, hit GO!
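As a sketch of what those meta-features compute, here is an illustrative Python version (the names and the use of exponential smoothing are assumptions for illustration, not Wekinator's actual implementation):

```python
from collections import deque

class MetaFeatures:
    """Illustrative meta-features for one raw feature: first- and second-order
    differences (~velocity and ~acceleration), a smoothed value, and a
    fixed-length history buffer of recent values."""
    def __init__(self, history_length=10, alpha=0.5):
        self.history = deque(maxlen=history_length)
        self.prev = self.prev_diff = None
        self.smoothed = None
        self.alpha = alpha  # smoothing constant (assumed, for illustration)

    def update(self, x):
        diff = 0.0 if self.prev is None else x - self.prev            # 1st-order
        diff2 = 0.0 if self.prev_diff is None else diff - self.prev_diff  # 2nd-order
        self.smoothed = x if self.smoothed is None else \
            self.alpha * x + (1 - self.alpha) * self.smoothed
        self.history.append(x)
        self.prev, self.prev_diff = x, diff
        return {"diff": diff, "diff2": diff2,
                "smoothed": self.smoothed, "history": list(self.history)}

mf = MetaFeatures(history_length=3)
for v in [0.0, 1.0, 3.0]:
    out = mf.update(v)
print(out)
```

A history buffer of length 10 would hand the learner the last 10 values of each selected feature, letting a model respond to short gestures rather than single snapshots.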

7. Now you should be in the "Learning Setup" tab.

  • In the "Advanced" subtab ("Simple" is disabled, sorry):
    • The first time you run, check the radio button to "Create new dataset."
    • For each parameter (i.e., each number output by Wekinator and input by your synth), you'll be able to choose the type of algorithm (a classifier if it's discrete, or a neural network if it's continuous) and choose which of the input features affect it. For example, if I have 3 synth parameters, and I'm using the trackpad as my input device, and I want the x position to affect parameters 0 and 1 and the y position to affect parameter 2 only, I would click "View & choose features" for each of the 3 params. For param 0 and param 1, I'd check only "Trackpad_0" feature; for param 2, I'd check only "Trackpad_1".
    • When you're done configuring your models, hit GO!

8. Now you should be on the "Use it!" tab.

  • COLLECT DATA:
    • This view allows you to collect data used to train your model, telling the Wekinator about your intended relationship between input features and output parameters.
    • For starters, put some numbers into the number boxes and hit "play" to hear what different parameter settings sound like with your synth. When you hear something you like, configure your inputs in a way that you want to correspond to that sound (e.g., by moving the laptop into a certain tilt position, if you're using the motion sensor features). Then hit "Begin recording examples into dataset" to collect some data, then "Stop recording" when you're done. Do this a few times, for different parameters and features.
    • You can see exactly what Wekinator has recorded by hitting the "view examples" button at the bottom of this view. It's basically a spreadsheet where each column is a parameter or feature, and each row is a "snapshot" of features and the corresponding parameters at a moment in time.
    • Advanced interaction: TODO explain this
  • TRAIN:
    • Once you've collected some data, you can train your model(s) (one model will be trained for each parameter).
    • The simplest way to use the TRAIN view is to just hit "Train now."
    • Advanced interaction: TODO explain this
  • RUN:
    • Once you've trained your models, you can run them!
    • In the RUN view, hit the "run" button on the top right. This will send the input features to the model, get outputs from the model, and send them as parameters to the synth, repeatedly in real-time. Hit "stop" when you're done. You should hear your synth changing and see the parameters being updated in the GUI.
    • Advanced interaction: TODO explain this
    • You can optionally reconfigure your models and evaluate them using cross-validation accuracy. TODO explain this. (advanced)

9. Interaction!

  • Feel free to go back and collect more data and re-train and re-run your models as many times as you want. (You'll have to re-train after changing your dataset, for the changes to go into effect when you run.) You can also go back to the Features Setup and Model Setup tabs and change those settings.

Organization of the Wekinator

  • TODO show picture of architecture
  • The GUI is in Java. You won't need to recode any of this (but Rebecca will!). The GUI jar file lives in dist/Wekinator.jar.
  • The core ChucK system is in the core_chuck/ directory.
    • You shouldn't have to change any of this.
    • The main ChucK code responsible for communicating with Java and the feature extractors is
  • The ChucK synths are in the chuck/synths/ directory. Each synth implements a class called SynthClass, the structure of which you can read about below. If you want to create your own ChucK synth, you can start by editing
  • In play-along learning, a ScorePlayer object sends parameters to a synth object. (A score is simply a set of instructions at specific points in time.) The ScorePlayers live in the chuck/score_players/ directory. You don't have to implement a score player for your synth unless you want to do playalong learning from a ChucK score.
  • A feature extractor is a module that extracts features (also called attributes) from your inputs. For example, if you're using an audio input, a feature might be an FFT, or the peak amplitude of the audio signal. If you're using the laptop motion sensor, a feature might just be the values of each of the 3 sensor outputs. Your input is fully described to the wekinator by a vector of real-valued features, which typically describe the state of your input at a single point (snapshot) in time (though you could imagine a feature vector that describes its current state and its previous state). Custom feature extractors in ChucK live in the chuck/feature_extractors/ directory. They implement the CustomFeatureExtractor class. You can add your own custom feature extractor in chuck by editing (see for a simple example). You can also choose from existing custom feature extractors for more specialized audio or gestural features.
  • You can integrate synths in environments outside ChucK. For example, see the chuck/processing_synths directory. More info on how to do this below.
  • You can also integrate feature extractors in environments outside ChucK. See below.

Making Wekinator work with your own inputs & outputs

This section details how you can hook up Wekinator to be controlled by different feature extractors (e.g., inputs like Kinect, Wiimotes, custom sensors, etc.) and to control different "synthesis" (or animation, or whatever) processes (e.g., Processing patches, Unity3D game engine, your own C++/Java/Python/whatever code, etc.).

Your own feature extractors & "synths" (or animations, etc.) can be written in the language/environment of your choice, and communicate with Wekinator using OSC. Alternatively, you can write them in ChucK and communicate with Wekinator via function calls.

Implementing your own feature extractor in ChucK

Your custom feature extractor will live in a class named CustomFeatureExtractor. We recommend that you start with the provided skeleton file (in the WEKINATOR_DIR/chuck/feature_extractors/ directory), which you can edit according to the "TODO" instructions in the code. Basically, you'll have to implement function bodies for setting up the feature extractor and, most importantly, computing the features.

If you edit the code provided and put the computed features in the features[] array of your class, you're good to go! For an example of how to implement the setup and feature computation, check out WEKINATOR_DIR/chuck/feature_extractors/

To run, choose your custom chuck feature extractor from the ChucK configuration pane in the GUI. Make sure you set the proper number of features.

Also, when you run the GUI, make sure to check the box that says "Custom ChucK features" on the Features panel, and set the number of features to the number of features your feature extractor actually extracts (numFeats in the skeleton code).

Implementing your own feature extractor outside ChucK

All your feature extractor needs to do is send a feature vector to the Wekinator via OSC at the (possibly variable) rate of its choosing. The feature vector is a list of floats (MUST be floats), of arbitrary length (though this length must not change over time, and the order of the features must not change over time). Your extractor sends the features in one OSC message, "/oscCustomFeatures", to port 6448.
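For example, an OSC sender in Python might look like the sketch below. The raw encoding follows the OSC 1.0 message format (null-padded address string, a type-tag string like ",fff", then big-endian 32-bit floats); in practice you could use any OSC library in any language instead:

```python
import socket
import struct

def osc_message(address, floats):
    """Encode a minimal OSC 1.0 message whose arguments are all float32."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)                        # big-endian float32
    return msg

def send_features(features, host="127.0.0.1", port=6448):
    """Send one feature vector to the Wekinator's custom-OSC-features port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/oscCustomFeatures", features), (host, port))
    sock.close()

# e.g., a 3-element feature vector, sent at whatever rate your extractor runs:
# send_features([0.1, 0.5, -0.2])
```

The vector length (3 here) must stay constant over time and must match the feature count you enter in the Wekinator GUI.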

The feature extractor can also optionally send an OSC message containing the feature names, so that these names are displayed in the Wekinator GUI. This message uses the string "/oscCustomFeaturesNames" and contains a list of string names (one per feature, in the same order as the feature values are passed in the oscCustomFeatures message).

To run, start your feature extractor before starting the Java GUI. In the ChucK configuration, in the Features tab, check the box to indicate that you are using a custom OSC feature extractor, and indicate the number of features. Then, after running ChucK, in the main Wekinator "Features" panel, enter the number of custom OSC features to be equal to the length of the feature vector specified in your oscCustomFeatures message above. (Yes, this is redundant for now -- a fix will come soon.)

Implementing your own synthesis class in ChucK

Your synthesis code will live in a ChucK class called SynthClass and implement a pre-defined set of functions.

We recommend starting with synths/, which has instructions on what you should edit to make your synth work.

You'll need to implement the following functions:

  • fun int[] isDiscreteArray() : return an array whose length is equal to the number of parameters; each element is 1 if you want discrete classifier outputs, 0 if you want continuous outputs, for the corresponding parameter
  • fun int[] getNumClassesArray() : return an array whose length is equal to the number of parameters; each element is the number of classes expected by your synth for the corresponding parameter (where a class here means a member of the set of discrete outputs of a classifier like kNN; for example, to classify gestures into "moving" and "not moving", you would have 2 classes). Only matters if doing discrete classification.
  • fun int[] useDistributionArray() : return an array whose length is equal to the number of parameters; each element corresponds to a parameter. For each parameter, if you're using a discrete classifier, the array element is 1 if you want a vector of class membership probabilities for each output, or 0 if you only want a single class label for each output. If you're not discrete, it doesn't matter.
  • fun int getNumParams() : the number of outputs expected from the model(s). For discrete or continuous learning. For example, to drive the frequency and volume of a SinOsc independently, you would use 2 parameters.
  • fun void setup() : This function is called just once, when the system is setting up. Treat this like a constructor. In other words, do any work here that should be done before sound is made, like sporking shreds that should be running the whole time your synth is running (e.g., for smoothing parameters).
  • fun void setParams(float params[]) : sets your synthesis parameters according to the contents of params[], which will be of length equal to the number of parameters if you are doing continuous or discrete without distribution, or length equal to (# parameters) x (# classes) if you are doing discrete with the probability distribution.
    • In other words, this is the most important part of your synth! It describes how you're using the outputs of the model.
    • Note: You probably want to do some error checking here. Ideally, params[] will be of the appropriate size, but you should check. And ideally the values in this array will be what you'd expect, but you should never assume this is the case, particularly when using a neural network for learning a continuous value. Your "frequency" parameter value might come in as -1, 0, or 500,000 for example: it's your call how to use (or ignore) these types of values.

Your synth has a few other functions that need to be present, but which you shouldn't need to edit by default.

  • sound() and silent() turn the sound on and off. In the examples, I use a main envelope called "e," and I patch all my objects into e instead of into the dac. sound() ramps up e, and silent() ramps down to 0. If you want to do something more complex, for example turning off expensive processing when your synth is silent, go ahead and edit these functions.
  • Other functions: you really shouldn't edit. Just start from the skeleton, keep them at the bottom of your code, and ignore them.

When you've implemented your class, make sure you load it in the ChucK configuration panel in the Wekinator GUI.

Implementing your own synthesis class outside ChucK

Editing your synth

Your synth should listen for the OSC message /OSCSynth/params containing the parameters as floats.

In order to set up Wekinator to send parameter values to your synth, edit the ChucK configuration upon opening the Wekinator. In the Synthesis (Output) tab, select the option to control an OSC synth module, and configure this module using the "Configure" button, adding the appropriate number and types of parameters to control your synthesis module.

Your synth should listen for messages at port 12000 by default. Alternatively, you can change the Wekinator to send to a different port using this same configuration pane in Wekinator. You can also optionally send the parameters to a remote host (i.e., another computer) in that GUI, provided that host is on the same local area network.

If your synth contains its own GUI or other mechanism for changing parameter values independently of the Wekinator GUI (e.g., such as hooking your Max/MSP synth up to some max sliders), you'll want to communicate the parameter changes back to the Wekinator by sending the OSC message "/realValue" followed by the parameters as floats, to port 6448.

Play-along learning with an external synth

You can use a ScorePlayer written in ChucK to control your OSC synth. Your ScorePlayer looks the same regardless of the type of synth.

Or you can use a playalong score created from within the Wekinator GUI.

Or, if you're using an OSC synth, you can control that synth directly through some other process (e.g., some SuperCollider code), and communicate your synth's parameter values to the Wekinator through the /realValue message (see instructions elsewhere on this wiki, plus the downloaded examples) while recording data.

Implementing your own ScorePlayer in ChucK

This is a lot like implementing your own synth class. Start by copying one of the standard examples in the score_players/ directory, and look at the "TODO" statements to see what you need to edit.

Your score must know the number of parameters required by your synth, but otherwise it is not tied to your particular synth. A score player can be compatible with multiple synths; a synth can be compatible with multiple score players.

Your new file will have to implement the ScorePlayer class, along with some required functions.

  • fun void setup(SynthClass s, OscSend x): do any additional setup here.
  • fun void playScore(): in the simplest case, this contains a while(isPlaying) loop that sends new parameters to the SynthClass object over time, as long as isPlaying is true.
  • fun void sendMessage(): optionally, show the user a text message describing what is happening in the score and what the current parameter values are.

There are a few other functions in the skeleton code that you can leave as-is (no need to edit).
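The playScore() loop described above can be sketched like this (Python for illustration; send_params stands in for the SynthClass/OSC call in the ChucK skeleton, and the two-parameter score is made up):

```python
def play_score(send_params, is_playing, score):
    """Minimal score-player loop: while playing, step through the
    score and push each parameter frame to the synth.

    send_params: callable taking a list of floats (stands in for the
        SynthClass object in the ChucK skeleton).
    is_playing: zero-argument callable mirroring the isPlaying flag.
    score: iterable of parameter frames, e.g. [[440.0, 0.2], ...].
    """
    for frame in score:
        if not is_playing():
            break
        send_params(frame)
        # in ChucK you would advance time here, e.g. 100::ms => now

sent = []
play_score(sent.append, lambda: True, [[440.0, 0.2], [660.0, 0.4]])
print(sent)  # [[440.0, 0.2], [660.0, 0.4]]
```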

Sample Code / Projects

Max/MSP feature extraction & synthesis

First, download the OpenSoundControl object and install it into the Max search path.

A sample synthesis patch that receives 9 continuous parameters from Wekinator and uses them to control sound synthesis in Max is located in the Wekinator distribution, in OtherExamples/blotar_synth.maxpat. This patch requires the blotar object, which is distributed as part of PeRColate.

A simple feature extractor is located in the Wekinator distribution, in OtherExamples/SimpleMaxFeatureExtractor.maxpat.

Two more useful feature extractors use Tristan Jehan's analyzer~ object; they live in OtherExamples/tjanalyzer_bark_extractor.maxpat and tjanalyzer_feature_extractor.maxpat. Both require the analyzer~ object to be downloaded and installed.

Processing animations

(See the Webcam and Kinect sections below for examples of using Processing to write feature extractors.)

The Wekinator distribution includes 3 simple animation examples that can be controlled by Wekinator, in processing/example_animations:

  • processing_continuous_hue: Controls hue as a single continuous parameter. Range is 0-255.
  • processing_discrete_hue: Controls hue as a single discrete parameter, where 0=red, 1=green, 2=blue
  • processing_continuous_hue_and_position: Controls hue and x- and y-positions
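The parameter mappings used by these animation examples can be illustrated with a small Python sketch (the function names are made up for illustration; the actual examples are Processing sketches):

```python
def continuous_hue(param):
    """Clamp a continuous model output into the 0-255 hue range used
    by processing_continuous_hue."""
    return min(max(param, 0.0), 255.0)

def discrete_hue(class_index):
    """Map a discrete class to a color, as in processing_discrete_hue:
    0=red, 1=green, 2=blue."""
    return {0: "red", 1: "green", 2: "blue"}[class_index]

print(continuous_hue(300.0))  # 255.0
print(discrete_hue(2))        # blue
```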

Kinect

These examples all use Processing to extract features from Kinect (with or without skeleton tracking) and pass the features to Wekinator. You don't have to use Processing by any means, but it's nice in that you can easily make a feature extractor that gives the user some visual feedback.


Example using depth camera only without skeleton tracking:

A Processing patch that uses the OpenKinect library is available for download. This simple patch computes the average depth in each cell of a 5x5 grid in the Kinect's visual field, plus the depth, x, and y position of the closest point (28 features total). See the comments in the patch for details. You may have to run the Processing patch in 32-bit mode (Processing -> Preferences -> Launch programs in 32-bit mode).
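The 28-feature layout described above (25 grid averages plus the depth, x, and y of the closest point) can be sketched as follows; the actual patch is Processing code, and this illustrative Python version assumes the depth image is a list of rows of depth values:

```python
def kinect_features(depth):
    """Compute 25 average-depth features over a 5x5 grid, plus the
    depth, x, and y of the closest (smallest-depth) pixel:
    28 features total."""
    h, w = len(depth), len(depth[0])
    feats = []
    for gy in range(5):
        for gx in range(5):
            cell = [depth[y][x]
                    for y in range(gy * h // 5, (gy + 1) * h // 5)
                    for x in range(gx * w // 5, (gx + 1) * w // 5)]
            feats.append(sum(cell) / len(cell))
    # closest point: minimum depth value, with its x and y position
    best = min((depth[y][x], x, y) for y in range(h) for x in range(w))
    feats.extend(best)
    return feats

# A flat 10x10 depth image with one near point at x=3, y=2:
img = [[100.0] * 10 for _ in range(10)]
img[2][3] = 5.0
feats = kinect_features(img)
print(len(feats))  # 28
print(feats[25:])  # [5.0, 3, 2]
```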

Example using skeleton tracking:

First, install OpenNI/NITE (see the next subsection) and Sensebloom/OSCeleton.

Launch Wekinator with 69 OSC features (an x, y, and z feature for each of 23 joints).
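The 69-feature vector is just the 23 joints' x, y, and z coordinates flattened in a fixed joint order; a hypothetical Python sketch of the layout:

```python
def skeleton_features(joints):
    """Flatten 23 (x, y, z) joint positions into a 69-element vector,
    in a fixed joint order, matching the 69-feature OSC setup."""
    assert len(joints) == 23
    return [coord for joint in joints for coord in joint]

joints = [(0.1 * i, 0.2 * i, 0.3 * i) for i in range(23)]
feats = skeleton_features(joints)
print(len(feats))  # 69
```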

Run the osceleton program from the Terminal. For example:

cd /<i>directory_openNI-lives_in</i>/openNI/Sensebloom-OSCeleton-<i>some_numbers_letters_aasdfjaghelir</i>

You may want to run with the -w flag to see the Kinect input, especially if you're having problems:

./osceleton -w 

Then, assume the "Kinect calibration / airport security" pose.

You should see something like this in Terminal:

New User 1
Calibration started for user 1
Calibration complete, start tracking user 1

Once you are being tracked, run the Processing sketch (adapted from Stickmanetic). This sketch will display the tracked skeleton on screen and forward the joint positions to Wekinator.

Note on installing Kinect skeleton tracking:

The best installation instructions I know of are linked here. Install this, then test by running the Stickmanetic sketch. To run the original (non-Wekinated) Stickmanetic, I had to make the changes to the sketch described here so that Processing would not give me errors.

On Windows

Using the Microsoft SDK:

I have not yet configured Kinect to work on Windows 7 with the official Microsoft SDK. It should be easy to write some .NET code that passes 3D joint positions (and other info) to Wekinator as a single OSC message; see "Implementing your own feature extractor outside ChucK" above.


Webcam

A simple Processing patch is available for download. Use it as a "Custom OSC" feature extractor with 9 features, each describing the average hue of one cell of the webcam image.

You can also take a look at the "built-in" Processing webcam features for examples of how Processing can be used to extract features from a webcam. You can run these as "Custom OSC" feature extractors, instead of "built-in" extractors, by changing the Processing code to set the OSC send port to 6448 and the feature string message name to /oscCustomFeaturesNames.


The Processing webcam examples (edge tracker, color tracker) that come with the Wekinator installation may not work on Windows, due to some Processing/QuickTime unfriendliness.

The GSVideo library enables you to use the webcam on Windows without QuickTime.

Download the replacement Color Tracker feature extractor and run it instead of the built-in color tracker (still selecting "Webcam: Color tracking" from the Wekinator GUI).


Gametrak

The GameTrak is a nice, inexpensive hardware controller that we've used in PLOrk. Turn it into a HID device using a simple hardware hack, then plug in the GameTrak before launching Wekinator. In Wekinator, check the "HID device" feature box, hit "Configure," then pull the strings. If all has gone well, Wekinator should detect 6 axes. You're all set!

Troubleshooting

  • I can't establish an OSC connection from the GUI! Help!
    • Wekinator requires that only one instance of the GUI and one instance of the ChucK code be running at a time. Make sure miniAudicle isn't running. Try restarting both ChucK and the GUI and reconnecting.
  • My game controller isn't recognized! Help!
    • Make sure you plug in the controller before running your chuck code (or starting miniAudicle).
    • Every now and then a controller will fail to play nicely with ChucK. First, note that the controller configuration works only for controllers that act as HID joysticks or mice. If this describes your controller, check that ChucK recognizes it (outside the Wekinator) by running this code in miniAudicle or from the command line. If that code works, you've discovered a Wekinator bug, and you should let us know. Otherwise, you may want to investigate alternative HID input environments (e.g., Max/MSP may recognize your HID), and follow the instructions above on implementing your own feature extractor outside ChucK.
    • The Wiimote, in particular, is easiest to use when connected via a third-party application (e.g., try the aka.wiiremote external). If you want to use a Wiimote and need help, email Rebecca for some example code.
  • Neural net training is taking a long time! Help!
    • Training time will increase if you are using a lot of features and/or asking the network to learn a more complex function. Try decreasing the learning rate (e.g., to 0.03 or lower). (This requires that you check the checkbox with the option to view the NN training GUI.) Also keep an eye on the epoch error rate. If it's relatively low, but the training goes on, you can stop the training early by hitting "stop" and then "accept."
  • I get errors when I try to run the ChucK code. Help!
    • If you've written your own synth or feature extraction code, you can do basic debugging yourself by running ONLY that code in miniAudicle (it'll catch syntax errors). You can also write some test scripts for your classes, like I've included in the tests/ directory.
    • If this is a general error that happens even when you don't use custom ChucK feature extractors or synths, you may be experiencing a ChucK issue. This happens for certain Windows users. To get around this:
      • First, set up Wekinator like you normally would: edit the ChucK configuration within the Wekinator GUI, and when you're done, hit "Export to .ck file." Save the file somewhere you can find it, then hit "OK" to exit the ChucK configuration dialog.
      • Then, instead of running ChucK from the Wekinator GUI, open a terminal, navigate to the directory where you saved the configuration file, and run it -- e.g., by typing "chuck" followed by the name of your saved .ck file. (Note that chuck will have to be in your path for this to work. Alternatively, you can use a fully qualified path, like "C:\Folder yadayahda... \chuck\chuck.exe".)
      • If all goes well, you should advance to the next GUI pane in Wekinator.
  • My OSC "synth" (or other parameter-receiving module) can't receive parameter value updates from Wekinator. Help!
    • Make sure your synth is listening on port 12000 (or the alternative port you've specified in the Wekinator GUI when configuring your OSC synth).
    • Make sure your synth is listening for the same number of parameters as the Wekinator is sending, and for float parameters instead of ints or some other data type. Note that if you're using a probability distribution for any discrete parameters, the Wekinator will actually send "n" values for that parameter, where n=number of classes.
    • Turn off any anti-virus or firewall software that may be blocking OSC communication.
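When a discrete parameter is sent as a probability distribution, your synth receives n floats for that parameter and must pick a class itself. A hedged Python sketch (the layout, with one such parameter at the end of the message, is an assumption for illustration):

```python
def decode_discrete(params, n_classes):
    """Split an incoming parameter list whose last n_classes values
    form a probability distribution over one discrete parameter, and
    return the remaining parameters plus the most probable class."""
    continuous, probs = params[:-n_classes], params[-n_classes:]
    winner = max(range(n_classes), key=lambda i: probs[i])
    return continuous, winner

# One continuous value plus a 3-class distribution:
cont, cls = decode_discrete([440.0, 0.1, 0.7, 0.2], 3)
print(cont, cls)  # [440.0] 1
```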