What will be our defense when CGI is indistinguishable from live recordings?

You can apply this exact argument to emails or any other text document. I can trivially type up an email I want to claim someone else wrote and try to use it in court. But no one will believe me if the only record I have is a printout from my own computer. Everyone understands that typing text is trivially faked.

However, if the email came in via Gmail, and Google’s servers say it came from Defendant@gmail.com, and Google’s logs show the email was written and submitted from an IP address associated with the defendant’s house, at a time when the defendant’s phone was recorded as being home (per Google’s servers), now it’s pretty clear and convincing evidence.

So that’s what will have to be done with video. Once CGI is so good that anyone can fake any video, of course, video won’t be reliable any longer on its own. But if the camera was streaming to a cloud managed by trustworthy third-party servers, that’s another story. Soon, in order for security camera footage to be accepted as valid evidence in court, it will need to have been sent over a network to a third party.

As a side note, this problem is identical to one we have had for over a century: proving who invented something first. Generally, an inventor must keep a notebook, and if they made a major discovery, they would need to give that notebook to a trusted third party who would stamp it with the date and time it was received and then store it securely. For example, a professor might hand it to their university’s patent office to be secured in its vault.

Since these notebooks are mostly just handwritten notes, without the step of putting them in the custody of a third party at the time the notes were made, you can’t really prove that the inventor didn’t hear about the invention being announced by someone else and then just write the same thing in their notebook.

So that’s how to secure cameras. At least for now. Eventually, real-time CGI manipulation might be possible, and at that point the only trustworthy cameras would be cameras owned by third parties.

Progress is more likely to follow a logarithmic curve than any other, with big leaps in the beginning and smaller tweaks as we get down to smaller and smaller details. We’ve come from block graphics to my OP link in just 25 years. I know it started before then, but Doom was effectively block graphics in '93. The video effects available on today’s common gaming computers couldn’t have been matched with millions of dollars of hardware and software just a decade ago.

There will be incremental steps before we get to fully rendered, VR-style scenes. For example, taking an existing recording of someone being handed a magazine and editing it so the magazine becomes an envelope of money or a package of heroin. The editor just has to ensure that the final video matches the artifacting and other digital fingerprints of the original recording, or is at least consistent with whatever camera they want to emulate. If the camera operator recorded the event with ill intent, the original will never have been made public anywhere to refute the edit.

Another option is to take a video of the target saying something and then make slight edits to their face to lip-sync whatever you put into the soundtrack.

Remember that the end result won’t be intended to be a studio- or sound-stage-quality recording. There will be enough trash to cover a lot of fudged details.

How did the law survive the existence of lookalikes? Forget CGI of Obama or Trump, just hire a really good lookalike and then make a video of them in a real hotel room getting a real golden shower, no fully CGI video needed at all…

As already noted, the quality of CGI seems a bit unimportant; we are already at the point where we can produce videos that seem perfectly real. Videos that are real, in fact, save for the one or two small but vital details that got modified.

Yes, I agree that the technology will advance at a rapidly increasing rate.

But the video material most often used in criminal and civil cases bears little resemblance to what people imagine it to be. Perhaps surprisingly, the example you use of a magazine being changed to an envelope of money is a good example of this. Unless the video system has been specifically designed and installed for the purpose of identifying what is being handed from one person to another, it is likely that the magazine will only be seen as a small group of pixels. (Just take a look at typical video of a bank robbery. The whole purpose of these systems is to provide facial identification, and they do a good job. But can you easily see what is in the hand of someone standing toward the back of the bank lobby? Many, many surveillance systems in stores and bars still record at no more than 352x240 pixels.) It is actually fairly easy to manually change the pixel values to make an object of this size appear larger, smaller, darker, lighter, or whatever. It doesn’t take any sophisticated software.
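The point about manually changing pixel values can be sketched in a few lines. This is a hypothetical toy, not anyone's actual forensic tooling: it treats a low-resolution frame as a plain grid of grayscale values, where "editing" a small object is just rewriting a handful of numbers, no sophisticated software involved.

```python
# Toy sketch: a low-res surveillance frame as a grid of 0-255 grayscale
# values. Darkening (or lightening) a small region -- say, the few pixels
# that are "the magazine" -- is just arithmetic on those values.
def darken_region(frame, top, left, height, width, factor=0.5):
    """Scale the brightness of a height-by-width region in place."""
    for r in range(top, top + height):
        for c in range(left, left + width):
            frame[r][c] = int(frame[r][c] * factor)
    return frame
```

At 352x240, the object being handed over might only be a 5x8 patch, so a change like this is a minute or two of hand-editing.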

But…it is also surprisingly easy to detect these changes. There is also the whole question of authentication, as mentioned in other posts. The first two questions I usually get are “Can you form an opinion on whether this is authentic?” and “Can you form an opinion on whether this is reliable?” To answer either one, I have to know how it was made. And remember, in a court case or deposition it’s sometimes just as effective to form/state an opinion that a piece of evidence CAN’T be authenticated as it is to say that it is NOT authentic.

Of course, opposing counsel will always have an expert who is ready to state the opposite, but that’s how expert witnesses eat and pay their mortgage.

I deal with video, & GPS, & GPS video (Garmin Virb). There are places GPS doesn’t work: in a building, in a (commercial) airplane, underground. There are places cell phones don’t connect to a network: in a (commercial) airplane, on the top floors of a skyscraper, in a canyon/gorge, underground. There are also tons of cameras that don’t connect to anything: actual cameras that can shoot video, action cams like GoPros (they can piggyback on your phone to live stream, but don’t do that natively, & using that feature eats into battery life).

What if I happened to see a certain famous person, say an orange billionaire, on the NYC subway? I pull out my camera to film him & whadayaknow, he reaches out just a little in the rush-hour crowd & grabs that pretty girl by the p***y. Just because it wasn’t written to a server or GPS-tracked doesn’t mean it wasn’t really caught by my camera.

Well. The clearest way to establish that the video is real would be the same as for witness testimony now. If you’re the only witness to a bit of pussy grabbing, apparently, nobody will believe you. But if several people with different phones capture it from different angles…eh, maybe. For now. The more data that was gathered, the harder it is to fake without making a mistake. Higher-resolution cameras, IR cameras (the new iPhone detects IR), etc.

You know, today’s special effects are probably mostly up to the task of faking a video if the quality is one of those gas station security cameras that use worn VHS tapes and record 4 camera feeds to the same tape…

Really all you would need would be three people: your victim/grabee, the photographer, and the video editor. You probably wouldn’t want the victim or photographer to be the editor, because if they were good enough to fake up the photo/video, they’re probably well enough known to automatically be suspect if tied to the video.

Here is the same type of thing, but much higher quality video result.

You’re right, it doesn’t. But it might mean that you have a harder time convincing people that it was. If your camera has a real-time clock and can sign its videos, then you can still at least prove that the video was produced by your camera at a certain time. It could (if we wanted the signing to work this way) also tell you where the camera was the last time it could connect to GPS, so there’d be some bounds on where the video was taken.
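A sketch of what such a signed manifest might contain. Everything here is an assumption about a hypothetical camera: the factory-provisioned device key, the field names, and the use of HMAC, which stands in for the asymmetric device signature a real design would want.

```python
import hashlib
import hmac
import json

# Hypothetical: a secret key burned into the camera at manufacture.
DEVICE_KEY = b"factory-provisioned-secret"

def sign_recording(video_bytes: bytes, utc_time: str, last_gps_fix: tuple) -> dict:
    """Bind the video's hash to a clock reading and the last available GPS fix."""
    manifest = {
        "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "utc_time": utc_time,          # from the camera's real-time clock
        "last_gps_fix": last_gps_fix,  # (lat, lon) when GPS was last reachable
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_recording(video_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the video still matches its signed hash."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["video_sha256"] == hashlib.sha256(video_bytes).hexdigest())
```

Note the limits this illustrates: the manifest can prove *this camera, this time, near this last fix*, but only to the extent you trust the key never left the camera.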

If you didn’t have a camera at all, and you could just tell people what you saw, then you’d have an even lower standard of evidence.

The thing about EXIF data on photos and video is that there is no function for non-repudiation. It’s the digital equivalent of a Post-It note stuck to a photo and can be removed or changed with no evidence of tampering.
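To see how little tamper evidence there is, here is a minimal sketch that cuts the EXIF-bearing APP1 segment straight out of a JPEG byte stream. The image data is untouched and nothing records that the metadata was ever there. (The segment walk is simplified for illustration and assumes a well-formed marker sequence, not every real-world file.)

```python
# EXIF lives in a JPEG's APP1 marker segment (0xFFE1). Removing it is just
# splicing bytes -- the "Post-It note" peels off cleanly.
def strip_exif(jpeg: bytes) -> bytes:
    out = bytearray(jpeg[:2])              # keep SOI marker (FF D8)
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]                # entropy-coded data: copy the rest
            break
        marker = jpeg[i + 1]
        if marker == 0xD9:                 # EOI: end of image
            out += jpeg[i:i + 2]
            break
        seg_len = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + seg_len]
        if marker != 0xE1:                 # drop APP1 (EXIF), keep everything else
            out += segment
        i += 2 + seg_len
    return bytes(out)
```

The same splice works in reverse: a forger can just as easily insert an APP1 segment with whatever date, GPS coordinates, or camera model they like.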

Yes, and I want to clarify that when I talk about digital signing, I’m not talking about existing EXIF.

I’m talking about a yet-to-be-developed, cryptographically secure store of that information, which is ideally removable (you can share a photo or video without sharing the metadata) but not modifiable (you can’t share a photo or video with false information).

Nothing needs to be developed. The same procedure for digitally signing an email works equally well for any data structure.

Reputable news organizations? An oxymoron if I’ve ever heard one.

Off the top of my head, in this thread, I’ve mentioned several features that I’m pretty sure are not compatible with how emails are signed. I’m sure someone who’s thought about this for longer than ten minutes will figure out some more.

I’m pretty sure that it’s completely compatible with how emails are signed. The image, or email and header, are hashed. The hash is encrypted with the sender’s private key. The encrypted hash, sender’s public key, and unencrypted content are packaged and transmitted. The receiver can view the content without needing the key, or if they need confirmation of authenticity the hash can be decrypted with the included key and compared to the hash of the content received.
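The procedure just described can be sketched with textbook RSA. The key numbers below are the classic tiny teaching values, utterly insecure and purely illustrative; a real scheme would use a large modulus and pad the hash rather than reduce it mod n.

```python
import hashlib

# Toy RSA parameters (textbook teaching values -- do not use for anything real).
p, q = 61, 53
n = p * q      # 3233: public modulus
e = 17         # public exponent
d = 2753       # private exponent (e * d = 1 mod lcm(p-1, q-1))

def content_hash(data: bytes) -> int:
    # Hash the content; reduce mod n only so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # "Encrypt" the hash with the sender's private key.
    return pow(content_hash(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # "Decrypt" with the public key and compare to a fresh hash of the content.
    return pow(signature, e, n) == content_hash(data)
```

Exactly as the post says, anyone can view the content without the key; the signature only matters when someone needs to confirm authenticity, and the same mechanics apply whether the payload is an email body or an image file.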

I had been going to let this thread sink, but I just found a couple of examples of renderings using Unreal Engine 4. Keep in mind that these can be done in real time with a higher end home gaming system (<$3000).

Snappers Advanced Facial Rig for Maya and Unreal Engine

Unreal Engine 4 - The Best Looking Characters Ever!

Hopefully not old enough to be considered a zombie…

Apparently there’s a term for faked videos of people… Wait for it… Deep Fakes.

And DARPA looks to be taking them very seriously.

These New Tricks Can Outsmart Deepfake Videos—for Now