Ju et al., 2008 - Google Patents
Expressive facial gestures from motion capture data
- Document ID: 14064016522788266154
- Authors: Ju E; Lee J
- Publication year: 2008
- Publication venue: Computer Graphics Forum
Snippet
Human facial gestures often exhibit such natural stochastic variations as how often the eyes blink, how often the eyebrows and the nose twitch, and how the head moves while speaking. The stochastic movements of facial features are key ingredients for generating convincing …
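The snippet frames the stochastic timing of blinks and twitches as a key ingredient of convincing facial animation. As a loose illustration only (not the paper's actual model), blink onset times can be sketched as a homogeneous Poisson process, drawing exponentially distributed inter-blink intervals; the ~0.3 Hz rate is an assumed ballpark (roughly 18 blinks per minute), not a figure from the paper:

```python
import random

def sample_blink_times(duration_s: float, rate_hz: float = 0.3,
                       seed: int = 0) -> list[float]:
    """Sample blink onset times over `duration_s` seconds.

    Inter-blink intervals are drawn from an exponential distribution,
    i.e. a homogeneous Poisson process -- a simple stand-in for the
    kind of stochastic timing variation the abstract describes.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_hz)  # next inter-blink gap
        if t >= duration_s:
            return times
        times.append(t)

blinks = sample_blink_times(60.0)
print(f"{len(blinks)} blinks in one minute")
```

A richer model would modulate the rate with speech and emotional state, which is closer to the kind of data-driven variation the paper extracts from motion capture.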
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T13/00—Animation
        - G06T13/20—3D [Three Dimensional] animation
          - G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
          - G06T13/205—3D [Three Dimensional] animation driven by audio data
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
          - G10L21/10—Transformation of speech into a non-audible representation, transforming into visible information
            - G10L2021/105—Synthesis of the lips movements from speech, e.g. for talking heads
        - G10L21/003—Changing voice quality, e.g. pitch or formants
          - G10L21/007—Changing voice quality, characterised by the process used
            - G10L21/013—Adapting to target pitch
Similar Documents
Publication | Title
---|---
Chuang et al. | Mood swings: expressive speech animation
Mattheyses et al. | Audiovisual speech synthesis: An overview of the state-of-the-art
Xu et al. | A practical and configurable lip sync method for games
Taylor et al. | Dynamic units of visual speech
Ezzat et al. | Trainable videorealistic speech animation
Deng et al. | Computer facial animation: A survey
Deng et al. | Expressive facial animation synthesis by learning speech coarticulation and expression spaces
Pham et al. | End-to-end learning for 3d facial animation from speech
US20020024519A1 (en) | System and method for producing three-dimensional moving picture authoring tool supporting synthesis of motion, facial expression, lip synchronizing and lip synchronized voice of three-dimensional character
US20120130717A1 (en) | Real-time Animation for an Expressive Avatar
King et al. | Creating speech-synchronized animation
EP1203352A1 (en) | Method of animating a synthesised model of a human face driven by an acoustic signal
Ju et al. | Expressive facial gestures from motion capture data
Chang et al. | Transferable videorealistic speech animation
CN113077537A (en) | Video generation method, storage medium and equipment
Theobald et al. | Near-videorealistic synthetic talking faces: Implementation and evaluation
Ma et al. | Accurate automatic visible speech synthesis of arbitrary 3D models based on concatenation of diviseme motion capture data
Li et al. | A survey of computer facial animation techniques
Filntisis et al. | Video-realistic expressive audio-visual speech synthesis for the Greek language
Chen et al. | Expressive Speech-driven Facial Animation with controllable emotions
CN116828129B (en) | Ultra-clear 2D digital person generation method and system
Tang et al. | Real-time conversion from a single 2D face image to a 3D text-driven emotive audio-visual avatar
CN117152285A (en) | Virtual person generating method, device, equipment and medium based on audio control
CN115311731B (en) | Expression generation method and device for sign language digital person
Edge et al. | Expressive visual speech using geometric muscle functions