Okay, this is an image that I generated in Bing/DALL-E 3 and outpainted in Stable Diffusion. I used that image as an input for Runway ML Gen-2 and ran it for 8 seconds (no prompt, image only). I then sped it up 4x and added the reverse section in an offline video editor. I converted the video to an animated GIF (because I haven't bothered setting up a new YouTube channel after losing the old one).
This is pretty impressive, especially how it accurately picks out both Yoda in the foreground and the creature in the background (supposed to be a rancor) as active objects.
(ETA: hid the GIF because it could get obnoxious repeating infinitely.)