Biomechanical modeling of articulators such as the tongue has been pioneered by a number of scientists, including Reiner Wilhelms-Tricarico; Yohan Payan and Jean-Michel Gerard; and Jianwu Dang and Kiyoshi Honda.
The ArtiSynth project, headed by Sidney Fels at the University of British Columbia, is a 3D biomechanical modeling toolkit for the human vocal tract and upper airway. The Directions Into Velocities of Articulators (DIVA) model, a feedforward control approach that takes the neural computations underlying speech production into consideration, was developed by Frank H. Guenther. A geometrically based 3D articulatory speech synthesizer has been developed by Peter Birkholz (VocalTractLab).
Recent progress in speech production imaging, articulatory control modeling, and tongue biomechanics modeling has led to changes in the way articulatory synthesis is performed. Examples include the Haskins CASY model (Configurable Articulatory Synthesizer), designed by Philip Rubin, Mark Tiede, and Louis Goldstein, which matches midsagittal vocal tracts to actual magnetic resonance imaging (MRI) data and uses MRI data to construct a 3D model of the vocal tract. CASY's predecessor, the Haskins synthesizer known as ASY, was a computational model of speech production based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues. Another popular model that has been frequently used is that of Shinji Maeda, which uses a factor-based approach to control tongue shape. A full 3D articulatory synthesis model has been described by Olov Engwall.
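The factor-based idea behind Maeda's model can be pictured as a mean vocal tract shape plus a weighted sum of basis deformations, one per articulatory "factor." The sketch below is a toy illustration only: the basis vectors, factor names, and numbers are invented here, whereas Maeda's actual factors were derived statistically from cineradiographic data.

```python
# Toy sketch of a factor-based articulatory model in the spirit of
# Maeda's: a tract shape is a mean shape plus weighted basis
# deformations ("factors"). All values below are illustrative, not data.

mean_shape = [1.0, 1.2, 1.5, 1.4, 1.1]        # sampled tract contour
factors = {
    "jaw":    [ 0.2,  0.2,  0.1,  0.0,  0.0],  # hypothetical jaw factor
    "body":   [-0.1,  0.1,  0.3,  0.1, -0.1],  # hypothetical tongue-body factor
    "dorsum": [ 0.0, -0.2,  0.0,  0.2,  0.0],  # hypothetical dorsum factor
}

def tongue_shape(weights):
    """Combine factor weights into a single tract shape."""
    shape = list(mean_shape)
    for name, w in weights.items():
        basis = factors[name]
        for i in range(len(shape)):
            shape[i] += w * basis[i]
    return shape

# With all weights zero, the shape is just the mean shape;
# each factor then deforms it independently.
neutral = tongue_shape({})
jaw_open = tongue_shape({"jaw": 1.0})
```

The appeal of this parameterization is that a handful of weights, rather than the full contour, become the control variables for synthesis.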
Gerbert of Aurillac (d. 1003), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294) are all said to have built speaking heads (Wheatstone 1837). However, historically confirmed speech synthesis begins with Wolfgang von Kempelen (1734–1804), who published an account of his research in 1791 (see also Dudley & Tarnoczy 1950). The first electrical vocal tract analogs were static, like those of Dunn (1950), Ken Stevens and colleagues (1953), and Gunnar Fant (1960). Rosen (1958) built a dynamic vocal tract (DAVO), which Dennis (1963) later attempted to control by computer. Further hardware vocal-tract analogs were described in the late 1960s, among them that of Baxter and Strong (1969). Kelly and Lochbaum (1962) made the first computer simulation; later digital computer simulations have been made, e.g. by Nakata and Mitsuoka (1965), Matsui (1968), and Paul Mermelstein (1971), and an analog computer simulation was also reported in 1968. The first software articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein.
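The kind of computer simulation Kelly and Lochbaum introduced models the vocal tract as a chain of lossless tube sections, with sound represented as forward- and backward-traveling waves that scatter at the junctions between sections. The following is a minimal sketch of that scattering-ladder idea; the area function, boundary reflection coefficients, and sign conventions here are illustrative assumptions, not a reconstruction of their 1962 program.

```python
# Minimal Kelly-Lochbaum-style lossless tube simulation (sketch).
# Each tube section holds one sample of a right-going (fwd) and a
# left-going (bwd) wave; waves scatter at junctions according to the
# reflection coefficient implied by the area ratio.

def kelly_lochbaum(areas, excitation, glottal_refl=0.99, lip_refl=-0.85):
    n = len(areas)
    # Reflection coefficient at each internal junction (area ratio).
    k = [(areas[i] - areas[i + 1]) / (areas[i] + areas[i + 1])
         for i in range(n - 1)]
    fwd = [0.0] * n  # right-going wave per section (one-sample delay)
    bwd = [0.0] * n  # left-going wave per section
    output = []
    for x in excitation:
        new_fwd = [0.0] * n
        new_bwd = [0.0] * n
        # Glottis end: source plus partial reflection of returning wave.
        new_fwd[0] = x + glottal_refl * bwd[0]
        # Scattering at each internal junction.
        for i in range(n - 1):
            f, b = fwd[i], bwd[i + 1]
            new_fwd[i + 1] = (1 + k[i]) * f - k[i] * b
            new_bwd[i] = k[i] * f + (1 - k[i]) * b
        # Lip end: partial reflection; the remainder radiates as output.
        new_bwd[n - 1] = lip_refl * fwd[n - 1]
        output.append((1 + lip_refl) * fwd[n - 1])
        fwd, bwd = new_fwd, new_bwd
    return output

# Impulse response of a crude 8-section area function (made-up values).
areas = [2.6, 1.8, 1.2, 0.9, 1.4, 2.2, 3.0, 3.4]
impulse = [1.0] + [0.0] * 255
response = kelly_lochbaum(areas, impulse)
```

Because each section contributes one sample of delay, the first nonzero output sample appears only after the impulse has traversed all the sections; the resonances of such a ladder correspond to the formants of the chosen area function.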
There is a long history of attempts to build mechanical "talking heads." On the software side: I'm sure there used to be an old shareware app called "Sing!" that does exactly what you were looking for, but I can't find any reference to it any more. Useful Software appears to have something very similar: click on SimpleSong. It's Classic, so I haven't tried it. As far as I can remember, these apps work by using MacInTalk commands to modify how the text-to-speech engine reads the text: you can modify intonation, phonetics, speed, and pitch. I found the original Inside Macintosh pages on embedded speech modifiers, which I *believe* is what the various apps use to change the text. A quick play with those commands shows it's easy to change overall pitch and speed, but I can't seem to change pitch through a sentence. You can also modify the speech generation programmatically, so maybe the singing programs did that. Something more advanced looks to be VocalWriter. Oh, and you can use phonemic control to get exact pronunciation and emphasis.
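One way a singing app could have worked is by interleaving embedded speech commands with the words themselves. The sketch below builds such an annotated string: the command names (`rate`, `pbas`) come from Apple's embedded speech command documentation, but the `speech_cmd` and `sing` helpers and all numeric values are invented here for illustration, and the result would still need a Mac speech synthesizer to actually hear.

```python
# Sketch: annotating text with embedded speech commands so that the
# baseline pitch changes word by word. Command names are from Apple's
# speech synthesis docs; helpers and values are illustrative only.

def speech_cmd(name, value=""):
    """Format one embedded command, e.g. [[pbas 46]]."""
    return f"[[{name} {value}]]" if value != "" else f"[[{name}]]"

def sing(words_and_pitches, rate=90):
    """Set a slow speaking rate, then change the pitch base (pbas)
    before each word -- one plausible way to fake singing."""
    parts = [speech_cmd("rate", rate)]
    for word, pitch in words_and_pitches:
        parts.append(speech_cmd("pbas", pitch))
        parts.append(word)
    return " ".join(parts)

utterance = sing([("Do", 48), ("re", 50), ("mi", 52), ("fa", 53)])
print(utterance)
# On a Mac, a string like this could be handed to the synthesizer
# (e.g. via the classic Speech Manager, or the later `say` command).
```

This per-word approach sidesteps the problem noted above: even if a single command can't sweep pitch through a sentence, reissuing `pbas` before every word approximates a melody.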