I'm hoping to get David connected to the engineering staff at Tobii Dynavox, who have developed the most realistic-sounding speaking voice I've ever heard. It seems to me that singing voices would be a logical progression. I don't know if David was successful in contacting their R&D engineers.
Anyone who's familiar with voice synthesis won't have trouble finding out about Sinsy; there's a ton of information on the internet. Basically, it uses HMM (Hidden Markov Model) synthesis, adapting the technique from spoken to sung speech.
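To make that HMM idea a bit more concrete, here's a deliberately simplified toy sketch in Python. This is my own illustration, not Sinsy's actual code: every name and number in it is invented. The gist is that each phoneme is a small left-to-right HMM emitting acoustic frames, and the "singing" part comes from locking the pitch track to the note from the score instead of a spoken intonation contour.

```python
# Toy illustration of HMM-style parametric singing synthesis
# (NOT Sinsy's actual code; all values are made up).
import numpy as np

rng = np.random.default_rng(0)

def sample_phoneme_frames(n_states=3, feat_dim=4, self_loop=0.8):
    """Sample a frame sequence from a small left-to-right Gaussian HMM."""
    means = rng.normal(size=(n_states, feat_dim))   # per-state emission means
    stds = np.full((n_states, feat_dim), 0.1)       # per-state emission spreads
    frames, state = [], 0
    while state < n_states:
        frames.append(rng.normal(means[state], stds[state]))
        if rng.random() > self_loop:                 # leave the current state
            state += 1
    return np.array(frames)

def synthesize(score):
    """score: list of (phoneme, midi_note). Returns spectral frames and F0."""
    spec, f0 = [], []
    for phoneme, midi_note in score:
        frames = sample_phoneme_frames()
        spec.append(frames)
        # The singing twist: pin F0 to the score's note for the phoneme's
        # duration, rather than sampling a spoken-style pitch contour.
        hz = 440.0 * 2 ** ((midi_note - 69) / 12)
        f0.append(np.full(len(frames), hz))
    return np.vstack(spec), np.concatenate(f0)

spec, f0 = synthesize([("a", 60), ("i", 62), ("u", 64)])
print(spec.shape, f0[:5])   # "spectral" frames plus a note-locked F0 track
```

A real system would feed parameters like these into a vocoder to produce audio; the sketch stops at the parameter stage, which is where the spoken-versus-sung difference lives.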
The source for Sinsy has been released on SourceForge. Unfortunately, building it is beyond my skill, and they apparently included only a single Japanese voice.
Another interesting technology is a voice bank (I've forgotten the name) where they take snippets of volunteers' speech, concatenate them, and process the results through a Melodyne-type processor. That idea shows great potential.
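Here's a rough sketch of that concatenate-then-pitch-correct idea, again my own toy example rather than the actual voice bank's pipeline (whose name I don't know). Real snippets would come from a volunteer's recordings; here they're synthetic stand-ins so the sketch runs on its own, and librosa's pitch_shift stands in for the Melodyne-type retuning. The helper names (fake_snippet, to_note, concatenate) are mine.

```python
# Toy concatenate-and-retune sketch (not the real voice-bank pipeline).
# Requires numpy and librosa.
import numpy as np
import librosa

SR = 22050

def fake_snippet(freq, dur=0.25):
    """Stand-in for a recorded phoneme snippet: a short decaying tone."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return (np.sin(2 * np.pi * freq * t) * np.exp(-3 * t)).astype(np.float32)

def to_note(snippet, snippet_midi, target_midi):
    """Melodyne-style move of a snippet onto the target note (duration kept)."""
    return librosa.effects.pitch_shift(
        snippet, sr=SR, n_steps=target_midi - snippet_midi)

def concatenate(snippets, fade=256):
    """Join snippets with a short crossfade to hide the splice points."""
    out = snippets[0]
    ramp = np.linspace(0, 1, fade, dtype=np.float32)
    for s in snippets[1:]:
        out[-fade:] = out[-fade:] * (1 - ramp) + s[:fade] * ramp
        out = np.concatenate([out, s[fade:]])
    return out

# "Recorded" snippets, all near A3 (MIDI 57), retuned onto a short melody.
bank = [fake_snippet(220.0) for _ in range(4)]
melody = [57, 60, 64, 62]
phrase = concatenate([to_note(s, 57, n) for s, n in zip(bank, melody)])
print(phrase.shape)  # one continuous, pitch-corrected phrase
```

The interesting design point is that all of the vocal character comes from the recorded snippets; the processing only moves pitch and smooths the joins, which is presumably why the results sound like the original speaker.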
I've heard demos that splice together segments of actual speech phonemes with formant-synthesized speech. The results take on much of the character of the original speaker.
Edit: I see from a post a couple of weeks ago on the Sinsy SourceForge discussion board that they're planning on releasing the English version in the near future. Part of the issue seems to be that the English version was done in an ad-hoc manner.
They suggest that if you can't wait that long, you could do it yourself by modifying the source code.