For computing lip sync from audio with a transcript: a way to prioritize one source or the other when they conflict.
When using an audio track together with a transcript, wherever the two disagree, the lip sync currently produces nothing at all. So your puppet just sits there with its mouth closed during the sequences where the transcript and the audio conflict.
What would be immensely helpful is a way to tell the lip sync process how to handle a conflict: generate the visemes from just the audio, from just the transcript, or from neither, like it is now. That way there is at least something to work with in those cases rather than nothing.
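To make the request concrete, here is a minimal sketch of the kind of per-conflict fallback being asked for. Everything in it is hypothetical: the `ConflictPolicy` names, the `resolve_visemes` function, and the idea of merging two per-frame viseme streams are illustrations, not any real lip sync tool's API.

```python
from enum import Enum

class ConflictPolicy(Enum):
    USE_AUDIO = "audio"            # on conflict, keep the audio-derived viseme
    USE_TRANSCRIPT = "transcript"  # on conflict, keep the transcript-derived viseme
    NONE = "none"                  # current behavior: emit no viseme (mouth closed)

def resolve_visemes(audio_visemes, transcript_visemes, policy):
    """Merge two per-frame viseme streams. Where the sources agree,
    keep the shared viseme; where they conflict, apply the policy
    instead of silently dropping the frame."""
    merged = []
    for a, t in zip(audio_visemes, transcript_visemes):
        if a == t:
            merged.append(a)
        elif policy is ConflictPolicy.USE_AUDIO:
            merged.append(a)
        elif policy is ConflictPolicy.USE_TRANSCRIPT:
            merged.append(t)
        else:
            merged.append(None)  # mouth stays closed for this frame
    return merged

# Example: the middle frame conflicts ("M" from audio vs "B" from transcript).
audio = ["AA", "M", "F"]
transcript = ["AA", "B", "F"]
print(resolve_visemes(audio, transcript, ConflictPolicy.USE_AUDIO))   # ['AA', 'M', 'F']
print(resolve_visemes(audio, transcript, ConflictPolicy.NONE))        # ['AA', None, 'F']
```

The point of the sketch is just that a single user-selectable setting would be enough: the conflict detection already exists (that is why the mouth closes), so only the fallback choice is missing.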