Taps ISMIR

TITLES:

  • The smartest sound editor ever built
  • (Feature-based Sound Design Framework/Workbench/System/null)
  • (Feature-aware TAPESTREA: An Integrated/Comprehensive/Smart/Interactive Approach to Sound Design Workbench)
  • (TAPESTREA: Augmenting Interactive Sound Design with Feature-based Audio Analysis)
  • Interactive Content Retrieval for Intelligent/Template-aware Sound Design
  • Interactive Sound Design by Example
  • FAT-APE-STREAT: Sound Design by Querying
  • Sound Design-by-Querying and by-Example
  • Finding New Examples to Sound Design By
  • Extending Sound Scene Modeling By Example with Examples
  • Integrating Sound Scene Modeling and Query-by-example
  • Sound Scene Modeling by Example with Integrated Audio Retrieval
  • Facilitating Sound Design using Query-by-example
  • Enriching/Extending/Expanding Sound Scene Modeling By Examples using Audio Information Retrieval
  • Enhancing the Palette: Querying in the Service of Interactive Sound Design
  • Expanding the Palette: Audio Information Retrieval for Intelligent Sound Design
  • Expanding the Palette: Audio Information Retrieval for Intelligent Data-driven Sound Design
  • Enhancing the Palette: Audio Information Retrieval for TAPESTREA
  • Expanding the Palette: Audio Information Retrieval for Sound Scene Modeling by Example
  • Enhancing the Palette: Template-based Retrieval for Intelligent Sound Design
  • Enhancing the Palette: Using Audio Information Retrieval to Expand the Transformative Power of TAPESTREA

AUTHORS (order ok?): Ananya Misra, Matt Hoffman, Perry R. Cook, Ge Wang {amisra,mdhoffma,prc,gewang}@cs.princeton.edu

ABSTRACT

We integrate music information retrieval technologies with TAPESTREA techniques to facilitate and enhance sound design, providing a new kind of "intelligent" sound design workbench.

I. Introduction + Motivation

Motivation

  • Large corpus of unstructured and largely unlabeled audio (sound effects, field recordings, soundtracks from movies and TV, etc.)
  • leverage audio analysis in interactive sound design (via TAPS)
  • and vice versa: use interactive sound design (TAPS) to aid audio analysis

Goals

  • To aid sound designers in creating varied and interesting scenes (standard TAPS stuff)
  • To enable a human operator to quickly identify similar sounds in a large collection of sounds (see the sketch after this list)
  • Can also be useful for forensic audio applications and watermarking
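
A minimal sketch of what the "quickly identify similar sounds" goal could look like operationally, assuming every sound in the collection has already been reduced to a fixed-length feature vector. FeatureDB, the file names, and the feature values are made up for illustration and are not part of TAPESTREA or Marsyas:

 import numpy as np
 
 class FeatureDB:
     """Toy in-memory store of per-sound feature vectors (illustrative only)."""
     def __init__(self):
         self.names = []      # sound file names
         self.vectors = []    # one fixed-length feature vector per sound
 
     def add(self, name, vector):
         self.names.append(name)
         self.vectors.append(np.asarray(vector, dtype=float))
 
     def query_similar(self, query_vector, k=5):
         """Return the k sounds whose stored vectors are closest (Euclidean) to the query."""
         q = np.asarray(query_vector, dtype=float)
         dists = np.array([np.linalg.norm(v - q) for v in self.vectors])
         order = np.argsort(dists)[:k]
         return [(self.names[i], float(dists[i])) for i in order]
 
 # usage: query with the feature vector of a sound the designer already likes
 db = FeatureDB()
 db.add("birds_morning.wav", [0.12, 2400.0, 0.80])
 db.add("traffic_loop.wav",  [0.45,  600.0, 0.15])
 db.add("birds_evening.wav", [0.10, 2200.0, 0.75])
 print(db.query_similar([0.11, 2300.0, 0.78], k=2))

Brute-force Euclidean distance is enough to make the point here; a real collection would want normalized features and an index.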

II. Previous Work

Related Work

  • see references
  • Marsyas, Taps (sine+noise, transient, wavelet), feature-based synthesis
  • related systems generally fall into one of two categories: (1) "intelligent" audio editors, which typically extract musical information, or (2) sonic browsers for search and retrieval.

TAPESTREA

  • Sound-scene modeling by example / Content-aware TAPESTREA analysis interface (see the sketch after this list)
  • Re-composing natural sounds
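
A minimal sketch of the per-frame spectral peak picking at the heart of a sine+noise analysis (in the spirit of Serra's deterministic-plus-stochastic decomposition cited in the references). It assumes a mono numpy signal x at sample rate sr and makes no claim to match the actual TAPESTREA implementation:

 import numpy as np
 
 def spectral_peaks(x, sr, frame_size=2048, hop=512, n_peaks=10):
     """Rough per-frame spectral peak picking: the 'deterministic' half of a
     sine+noise analysis. Returns (frame time in seconds, [(freq_hz, magnitude), ...])."""
     window = np.hanning(frame_size)
     out = []
     for start in range(0, len(x) - frame_size + 1, hop):
         mag = np.abs(np.fft.rfft(x[start:start + frame_size] * window))
         # local maxima, strongest first
         locs = [i for i in range(1, len(mag) - 1)
                 if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
         locs.sort(key=lambda i: mag[i], reverse=True)
         out.append((start / sr,
                     [(i * sr / frame_size, float(mag[i])) for i in locs[:n_peaks]]))
     return out

Whatever the tracked peaks do not capture falls to the stochastic/noise residue; TAPS additionally separates transients and models textures with wavelet trees, which this sketch does not attempt.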

III. Feature-based Sound Design

Architecture

(big figure?)

Interactive template-based similarity search (database)

querying/marking recorded sounds for template discovery
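
One concrete reading of the template-based similarity search above: summarize a marked region with a few frame-level features and use the resulting vector as the query. The feature choices here (RMS, spectral centroid, zero-crossing rate) are illustrative assumptions, not necessarily what TAPESTREA or Marsyas compute:

 import numpy as np
 
 def region_features(x, sr, frame_size=2048, hop=512):
     """Summarize a marked region (assumed at least one frame long) as a small
     vector: mean RMS energy, mean spectral centroid (Hz), mean zero-crossing rate."""
     window = np.hanning(frame_size)
     freqs = np.fft.rfftfreq(frame_size, d=1.0 / sr)
     rms, centroid, zcr = [], [], []
     for start in range(0, len(x) - frame_size + 1, hop):
         frame = x[start:start + frame_size]
         rms.append(np.sqrt(np.mean(frame ** 2)))
         mag = np.abs(np.fft.rfft(frame * window))
         centroid.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
         zcr.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
     return np.array([np.mean(rms), np.mean(centroid), np.mean(zcr)])

A query by marked region is then db.query_similar(region_features(x, sr)) against the toy FeatureDB sketched under Goals; a real system would at minimum normalize the feature dimensions before measuring distance.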

IV. Results

V. Conclusion and Future Work

VI. References

Bregman, A. Auditory Scene Analysis. MIT Press, Cambridge, 1990.

   * NOT what taps does.

Chafe, C., B. Mont-Reynaud, and L. Rush. (1982). "Towards an intelligent editor of digital audio: Recognition of musical constructs," Computer Music Journal 6(1).

   * first paper to deal with transcription without explicitly identifying notes

Dubnov, S., Z. Bar-Joseph, R. El-Yaniv, D. Lischinski, and M. Werman. (2002). "Synthesizing sound textures through wavelet tree learning," IEEE Computer Graphics and Applications 22(4).

Fernstrom, M. and E. Brazil. (2001). "Sonic Browsing: an auditory tool for multimedia asset management," In Proceedings of the International Conference on Auditory Display.

   * deals more with musical structures and notes

Foote, J. (1999). "An overview of audio information retrieval," ACM Multimedia Systems, 7(1): 2-10.

Jolliffe, I. T. (1986). Principal Component Analysis. Springer-Verlag, New York.

Kang, H. and B. Shneiderman. (2000). "Visualization Methods for Personal Photo Collections: Browsing and Searching in the PhotoFinder," In Proceedings of the International Conference on Multimedia and Expo, New York, IEEE.

Kashino, K. and H. Tanaka. (1993). "A sound source separation system with the ability of automatic tone modeling," International Computer Music Conference.

   * uses clustering techniques to identify sound sources

Misra, A., P. Cook, and G. Wang. (2006). "Musical Tapestry: Re-composing Natural Sounds," International Computer Music Conference. Submitted.

Misra, A., P. Cook, and G. Wang. (2006). "TAPESTREA: Sound Scene Modeling By Example," International Conference on Digital Audio Effects. Submitted.

Serra, X. (1989). "A System for Sound Analysis Transformation Synthesis based on a Deterministic plus Stochastic Decomposition," PhD thesis, Stanford University.

Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, 3rd edition.

Tzanetakis, G. and P. Cook. (2000). "MARSYAS: A Framework for Audio Analysis," Organized Sound, Cambridge University Press, 4(3).

Tzanetakis, G. and P. Cook. (2001). "MARSYAS3D: A prototype audio browser-editor using a large scale immersive visual and audio display," In Proceedings of the International Conference on Auditory Display.