Taps ISMIR

TITLES:

  • The smartest sound editor ever built
  • (Feature-based Sound Design Framework/Workbench/System/null)
  • (Feature-aware TAPESTREA: An Integrated/Comprehensive/Smart/Interactive Approach to Sound Design Workbench)
  • (TAPESTREA: Augmenting Interactive Sound Design with Feature-based Audio Analysis)
  • Interactive Content Retrieval for Intelligent/Template-aware Sound Design
  • Interactive Sound Design by Example
  • FAT-APE-STREAT: Sound Design by Querying
  • Sound Design-by-Querying and by-Example
  • Finding New Examples to Sound Design By
  • Extending Sound Scene Modeling By Example with Examples
  • Integrating Sound Scene Modeling and Query-by-example
  • Sound Scene Modeling by Example with Integrated Audio Retrieval
  • Facilitating Sound Design using Query-by-example
  • Enriching/Extending/Expanding Sound Scene Modeling By Examples using Audio Information Retrieval
  • Enhancing the Palette: Querying in the Service of Interactive Sound Design
  • Expanding the Palette: Audio Information Retrieval for Intelligent Sound Design
  • Expanding the Palette: Audio Information Retrieval for Intelligent Data-driven Sound Design
  • Enhancing the Palette: Audio Information Retrieval for TAPESTREA
  • Expanding the Palette: Audio Information Retrieval for Sound Scene Modeling by Example
  • Enhancing the Palette: Template-based Retrieval for Intelligent Sound Design
  • Enhancing the Palette: Using Audio Information Retrieval to Expand the Transformative Power of TAPESTREA


AUTHORS (order ok?):

Ananya Misra, Matt Hoffman, Perry R. Cook, Ge Wang

{amisra,mdhoffma,prc,gewang}@cs.princeton.edu

(no. down with order.)

http://soundlab.cs.princeton.edu/apetreats/ismir2006/teaser2.jpg

ABSTRACT

We integrate music information retrieval technologies with TAPESTREA techniques to facilitate and enhance sound design, providing a new class of "intelligent" sound design workbenches.

I. Introduction + Motivation

Motivation

Sound designers who work with environmental or natural sounds have, as a starting point, a large selection of existing audio samples, including sound effects, field recordings, and soundtracks from movies and television. The TAPESTREA system [cite] facilitates the reuse of existing recordings by offering a new framework for interactively extracting desired components of sounds, transforming these individually, and flexibly resynthesizing them to create new sounds. However, the corpus of existing audio remains unstructured and largely unlabeled, making it difficult to locate desired sounds without minute knowledge of the available database. This paper explores ways to leverage audio analysis at multiple levels in interactive sound design, via TAPESTREA. It also paves the way for TAPESTREA, in turn, to aid audio analysis.


Goals

  • To aid sound designers in creating varied and interesting scenes (standard TAPS stuff)
  • To enable a human operator to quickly identify similar sounds in a large collection of sounds (a minimal sketch of this follows the list below).
  • Can also be useful for forensic audio applications and watermarking.
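
The second goal is essentially query-by-example retrieval. Below is a minimal Python sketch of that idea, assuming sounds are already loaded as mono float arrays; the two spectral features and the function names are illustrative choices, not the actual TAPESTREA/Marsyas feature set.

import numpy as np

def features(signal, sr=44100, n_fft=1024):
    """Mean spectral centroid and 85% rolloff over short frames."""
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    cents, rolls = [], []
    for i in range(0, len(signal) - n_fft, n_fft):
        mag = np.abs(np.fft.rfft(signal[i:i + n_fft] * np.hanning(n_fft)))
        if mag.sum() == 0:
            continue  # skip silent frames
        cents.append((freqs * mag).sum() / mag.sum())       # spectral centroid
        cum = np.cumsum(mag)
        rolls.append(freqs[np.searchsorted(cum, 0.85 * cum[-1])])  # rolloff
    return np.array([np.mean(cents), np.mean(rolls)])

def rank_by_similarity(query, collection):
    """Indices of collection sounds, nearest to the query first."""
    q = features(query)
    return np.argsort([np.linalg.norm(q - features(s)) for s in collection])

In practice one would use a richer feature set and normalize each dimension before computing distances; the point is only that a single feature vector per sound turns "quickly identify similar sounds" into a nearest-neighbor query.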

II. Previous Work

Related Work

  • see references
  • Marsyas, Taps (sine+noise, transient, wavelet), feature-based synthesis (a minimal peak-picking sketch follows this list)
  • Related systems generally fall into one of two categories: (1) "intelligent" audio editors, which generally extract musical information, or (2) sonic browsers for search and retrieval.
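
To ground the sine+noise item above: the deterministic part of such an analysis rests on spectral peak picking, sketched below in Python in the spirit of the Serra decomposition cited in the references. This is an assumption-level illustration; TAPESTREA's actual analysis adds transient extraction and stochastic modeling on top of this.

import numpy as np

def spectral_peaks(frame, sr=44100, n_peaks=5):
    """Pick the n_peaks strongest local maxima in one frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    # a bin is a peak if it exceeds both of its neighbors
    peaks = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
    strongest = peaks[np.argsort(mag[peaks])[::-1][:n_peaks]]
    return [(freqs[i], mag[i]) for i in np.sort(strongest)]

The picked peaks become the sinusoidal (deterministic) part; subtracting their resynthesis from the frame leaves the noise residue.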

TAPESTREA

  • Sound-scene modeling by example / Content-aware TAPESTREA analysis interface
  • Re-composing natural sounds

III. Feature-based Sound Design

Architecture

http://soundlab.cs.princeton.edu/apetreats/ismir2006/architecture2.jpg

Interactive template-based similarity search (database)

Querying/marking recorded sounds for template discovery
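
A rough Python sketch of the interaction above, reusing a feature extractor like the one sketched under Goals; Template, mark_region, and nearest_templates are hypothetical names for illustration, not TAPESTREA's API.

import numpy as np

class Template:
    """A named sound template stored with its precomputed feature vector."""
    def __init__(self, name, feature_vec):
        self.name = name
        self.features = np.asarray(feature_vec)

def mark_region(recording, start_s, end_s, sr=44100):
    """Cut the user-marked region out of a mono recording."""
    return recording[int(start_s * sr):int(end_s * sr)]

def nearest_templates(region_features, database, k=3):
    """Rank stored templates by Euclidean distance to the marked region."""
    ranked = sorted(database,
                    key=lambda t: np.linalg.norm(t.features - region_features))
    return [t.name for t in ranked[:k]]

The same machinery can work in both directions: marked regions are matched against the template database, and extracted templates can seed new queries over unlabeled recordings.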

IV. Results

V. Conclusion and Future Work

VI. References

Bregman, A. Auditory Scene Analysis. MIT Press, Cambridge, 1990.

   * NOT what taps does.

Chafe, C., B. Mont-Reynaud, and L. Rush. (1982). "Towards an intelligent editor of digital audio: Recognition of musical constructs," Computer Music Journal 6(1).

   * First paper to deal with transcription without explicitly identifying notes

Dubnov, S., Z. Bar-Joseph, R. El-Yaniv, D. Lischinski, and M. Werman (2002). "Synthesizing sound textures through wavelet tree learning," IEEE Computer Graphics and Applications 22(4).

Fernstrom, M. and E. Brazil. (2001). "Sonic Browsing: an auditory tool for multimedia asset management," In Proceedings of the International Conference on Auditory Display.

   * deals more with musical structures and notes

Foote, J. (1999). "An overview of audio information retrieval," ACM Multimedia Systems 7(1): 2-10.

Jolliffe, I. (1986). Principal Component Analysis. Springer-Verlag, New York.

Kang, H. and B. Shneiderman. (2000). "Visualization Methods for Personal Photo Collections: Browsing and Searching in the PhotoFinder," In Proceedings of the International Conference on Multimedia and Expo, New York, IEEE.

Kashino, K. and H. Tanaka. (1993). "A sound source separation system with the ability of automatic tone modeling," International Computer Music Conference.

   * uses of clustering techniques for identifying sound sources

Misra, A., P. Cook, and G. Wang. (2006). "Musical Tapestry: Re-composing Natural Sounds," International Computer Music Conference. Submitted.

Misra, A., P. Cook, and G. Wang. (2006). "TAPESTREA: Sound Scene Modeling By Example," International Conference on Digital Audio Effects. Submitted.

Serra, X. (1989). "A System for Sound Analysis Transformation Synthesis based on a Deterministic plus Stochastic Decomposition," PhD thesis, Stanford University.

Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, 3rd edition.

Tzanetakis, G. and P. Cook. (2000). "MARSYAS: A Framework for Audio Analysis," Organised Sound, Cambridge University Press, 4(3).

Tzanetakis, G. and P. Cook. (2001). "MARSYAS3D: A prototype audio browser-editor using a large scale immersive visual and audio display," In Proceedings of the International Conference on Auditory Display.