The company behind this incredible tool is Synchronicity Labs, a next-gen AI-powered lip-sync startup. Here's how they did it:

  1. First, they used GPT-4 plus prompt engineering for translation
  2. Then, ElevenLabs for voice cloning and text-to-speech in other languages, preserving the original speaker's voice
  3. Finally, they used the wav2lip-2 API to lip-sync the video to the translated audio in HD [SOTA model + API in beta by Synchronicity Labs]
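The three steps above can be sketched roughly like this. This is a hedged sketch, not their actual code: the OpenAI and ElevenLabs endpoints are the publicly documented ones, but the lipsync endpoint is a placeholder (the wav2lip-2 API was in beta and its URL/payload are not public here), and the model name, voice ID, and prompt wording are all assumptions.

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
# Placeholder: the real wav2lip-2 / lipsync endpoint was in private beta.
LIPSYNC_URL = "https://api.example.com/v1/lipsync"


def _post_json(url: str, headers: dict, payload: dict) -> bytes:
    """Minimal JSON POST helper using only the stdlib."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", **headers},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.read()


def build_translation_prompt(transcript: str, target_lang: str) -> str:
    """Step 1 helper: an example translation prompt for GPT-4 (wording assumed)."""
    return (
        f"Translate this transcript into {target_lang}. "
        f"Keep the tone, register, and rough sentence timing intact:\n\n{transcript}"
    )


def translate(transcript: str, target_lang: str, openai_key: str) -> str:
    """Step 1: GPT-4 + prompt engineering for translation."""
    raw = _post_json(
        OPENAI_URL,
        {"Authorization": f"Bearer {openai_key}"},
        {
            "model": "gpt-4",
            "messages": [
                {"role": "user",
                 "content": build_translation_prompt(transcript, target_lang)},
            ],
        },
    )
    return json.loads(raw)["choices"][0]["message"]["content"]


def synthesize_speech(text: str, voice_id: str, elevenlabs_key: str) -> bytes:
    """Step 2: ElevenLabs TTS with a voice cloned from the original speaker."""
    return _post_json(  # returns raw audio bytes
        ELEVENLABS_TTS_URL.format(voice_id=voice_id),
        {"xi-api-key": elevenlabs_key},
        {"text": text},
    )


def lipsync(video_url: str, audio_url: str, api_key: str) -> dict:
    """Step 3: re-sync the speaker's lips to the translated audio (placeholder API)."""
    raw = _post_json(
        LIPSYNC_URL,
        {"Authorization": f"Bearer {api_key}"},
        {"video_url": video_url, "audio_url": audio_url},
    )
    return json.loads(raw)
```

Chaining the three calls (translate → synthesize → lipsync) gives the full dubbing flow; each stage's output feeds the next.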
