Conversational Gestures Synthesizer is an online method for automatically generating and synthesizing gesture animations whose intensity and style are driven by live spoken speech and by a specified conversational attitude. Body gestures are adapted so that their strength plausibly matches the strength of the speech, while the gesturing style matches the current conversational attitude.
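As a rough illustration of the speech-to-strength relation, the sketch below maps a simple loudness estimate (frame RMS energy) to a gesture-strength multiplier. This is a minimal, hypothetical example: the class, method names, and the linear mapping are assumptions for illustration, not the project's actual API.

```csharp
using System;

public static class GestureStrength
{
    // Root-mean-square energy of one audio frame, a common loudness proxy.
    public static double FrameRms(double[] samples)
    {
        double sum = 0.0;
        foreach (double s in samples) sum += s * s;
        return Math.Sqrt(sum / samples.Length);
    }

    // Linearly rescale RMS into a [minScale, maxScale] gesture multiplier,
    // clamped so quiet speech still produces subtle motion. The calibration
    // values quietRms/loudRms are hypothetical tuning parameters.
    public static double StrengthScale(double rms, double quietRms, double loudRms,
                                       double minScale = 0.3, double maxScale = 1.5)
    {
        double t = (rms - quietRms) / (loudRms - quietRms);
        t = Math.Clamp(t, 0.0, 1.0);
        return minScale + t * (maxScale - minScale);
    }
}
```

A driving animation system could multiply joint rotations or gesture amplitudes by this scale each frame, so louder speech yields visibly stronger gestures.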
The method is data-driven and uses pre-recorded mocap motions to generate new ones. The pipeline consists of three stages: preprocessing, online generation, and postprocessing. The preprocessing stage, the only offline stage of the three, segments all the mocap motions and builds a motion-graph structure. The online generation stage takes the speech input, extracts prosody features from it, and uses the constructed motion graph to select appropriate motion segments, while the postprocessing stage concatenates the selected segments and produces the final animation.
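The motion-graph idea behind the online stage can be sketched as follows: nodes are mocap segments, edges connect segments that can be concatenated smoothly, and selection walks the graph preferring the neighbour whose annotated intensity is closest to the intensity extracted from the live speech. The types and the greedy selection rule below are assumptions for illustration, not the project's actual implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A node in the motion graph: one mocap segment plus the segments
// it can transition to without a visible pose discontinuity.
public class MotionSegment
{
    public string Name;
    public double Intensity;   // annotated gesture strength of this segment
    public List<MotionSegment> Transitions = new List<MotionSegment>();
}

public static class MotionGraph
{
    // Greedily pick the reachable segment whose annotated intensity
    // best matches the intensity derived from the current speech prosody.
    public static MotionSegment SelectNext(MotionSegment current, double targetIntensity)
    {
        return current.Transitions
            .OrderBy(s => Math.Abs(s.Intensity - targetIntensity))
            .First();
    }
}
```

In the real pipeline the transition edges would be created offline during preprocessing (by comparing boundary poses of the segments), and the postprocessing stage would blend across each chosen edge when concatenating the segments.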
This algorithm can be used in any real-time application, such as games, virtual worlds, or simulations, to add automatic gesture animations to any virtual character. It is written in C#, but it is designed to be game-engine agnostic. A Unity3D wrapper and a custom editor were written to render and display the results.
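One common way to achieve this kind of engine agnosticism is to have the core library talk only to a small output interface, with each engine supplying its own implementation; the interface and classes below are a hypothetical sketch of that pattern, not the project's actual API.

```csharp
// The core synthesizer depends only on this interface and never
// references UnityEngine types directly.
public interface ISkeletonOutput
{
    // Apply a joint rotation, expressed engine-independently as a
    // joint name plus a quaternion given as four floats.
    void SetJointRotation(string joint, float x, float y, float z, float w);
}

// A Unity wrapper would implement ISkeletonOutput by mapping joint
// names to Transform components and assigning their rotations. This
// stand-in implementation just counts calls, which is handy for testing
// the core without any engine present.
public class CountingSkeleton : ISkeletonOutput
{
    public int Calls;
    public void SetJointRotation(string joint, float x, float y, float z, float w)
    {
        Calls++;
    }
}
```

With this split, the same core assembly can drive Unity3D, another engine, or a headless test harness, since only the thin wrapper changes per target.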