Designing, Playing, And Performing With A Vision-Based Mouth Interface

Michael J. Lyons, Michael Haehnel & Nobuji Tetsutani
The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
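The core pipeline the abstract describes, extracting a shape parameter from the mouth opening and emitting it as a MIDI control change, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the normalized `openness` parameter, and the choice of controller number are all assumptions.

```python
# Hypothetical sketch of the gesture-to-MIDI mapping described in the
# abstract: a normalized mouth-opening parameter (0.0 to 1.0) is
# quantized to a 7-bit MIDI control-change value (0 to 127).
# All names here are illustrative, not taken from the paper.

def shape_to_cc(openness: float) -> int:
    """Clamp a normalized mouth-opening value and scale it to 0..127."""
    clamped = max(0.0, min(1.0, openness))
    return round(clamped * 127)

def cc_message(channel: int, controller: int, value: int) -> bytes:
    """Build a raw 3-byte MIDI control-change message (status byte 0xB0)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

if __name__ == "__main__":
    # Mouth half open on channel 1, sent to controller 1 (mod wheel).
    value = shape_to_cc(0.5)
    print(value)
    print(list(cc_message(0, 1, value)))
```

In a full system, `openness` would come from the vision stage each video frame (for example, the ratio of mouth height to width), and the resulting bytes would be written to a MIDI output port.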