Today I got fooled by an AI video

Mind-blowing videos, but it looks like the review was made with clips provided by the company, not made by the reviewer, and the AI isn’t available for mere mortals to use yet. I’m impressed that the model was created from only 18 thousand hours of training video.

https://www.datacamp.com/blog/omnihuman

But your mentioning this reminded me that there was a “lip sync” tab at Kling AI when I was trying it yesterday. I didn’t look into it at the time and hadn’t thought about it again. But I checked, and it doesn’t directly convert still images into lip-synced videos; instead it works on text-to-video or image-to-video clips created in an earlier step, or on clips that you upload. It seems to be limited to 5-to-10-second lip-sync clips. You can either upload a sound clip or generate audio with text-to-speech. I searched through my archive of image-to-video clips to find a suitable candidate and picked one generated by Luma Dream Machine (from a still image created with SDXL). At first I looked around the web for an appropriate sound clip but then chose to go with a text-to-speech tongue-twister. I actually started this reply last night, but the video sat in Kling’s queue for several hours overnight.

The result isn’t as impressive as the OmniHuman clips, but it’s available right now, and pretty cheap too. Generating a 5-second image-to-video clip costs 20 credits, but lip-syncing a 5-second clip costs only 5. I would have experimented more already if I hadn’t spent hours waiting for the first clip.