This person does not exist

It is probably similar in principle (but more sophisticated in execution) to this. (I used to have a small executable of something like this where you could use any image that you want, but I’m not seeing it now.)
(Also, a really practical spinoff from this kinda thing.)

Apparently black people who don’t exist don’t exist. I must have flipped through 50 pictures, and I didn’t see one who looked black or sub-Saharan African. There were one or two who looked like they might be north African.

Cool, thanks.

Are the pictures generated on the fly when you click the link? Or is it a random selection from a library of previously generated images?

That does not seem similar. Both programs involve pictures, but the approaches are otherwise completely different. Moreover, they are not even solving the same problem: one is an attempt to optimize a collection of triangles so that they resemble a given input image (e.g., the Mona Lisa), while the other attempts to learn and imitate the class of images it sees (so, trained on, say, cars, it will output imaginary cars of its own design).

In your link, the program uses a genetic algorithm to evolve chromosome data that is used to draw a picture. Each individual is evaluated on how closely it reproduces the sample image. In the Nvidia paper, a collection of 70,000 images (not one image) is used to train a neural network (there does not seem to be any evolutionary programming involved). The one thing the two have in common is that performance gets evaluated, but here the performance is based on how well (in both quality and diversity) the network reproduces an entire class of images (not a single image), and quality also takes into account the presence and absence of “features”, not merely a pixel-to-pixel comparison.
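
To make the contrast concrete, here’s a minimal, purely illustrative Python sketch of the two kinds of loops. Everything in it (the render() stand-in, the tiny linear “networks”, the image sizes) is made up for illustration; neither project actually works at this scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Genetic algorithm (the triangle program): match ONE target image ---
target = rng.random((8, 8))            # stand-in for the Mona Lisa

def render(chromosome):
    # Stand-in for "draw these triangles"; here the genes simply ARE pixels.
    return chromosome.reshape(8, 8)

def fitness(chromosome):
    # Pure pixel-to-pixel comparison against the single sample image.
    return -np.mean((render(chromosome) - target) ** 2)

population = [rng.random(64) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Next generation: mutated copies of the fittest (crossover omitted).
    population = [p + rng.normal(0, 0.02, 64) for p in parents for _ in range(5)]

# --- GAN (the Nvidia approach, schematically): imitate a CLASS of images ---
dataset = rng.random((1000, 64))       # stand-in for the 70,000 training faces

def discriminator(x, w):
    return 1.0 / (1.0 + np.exp(-x @ w))   # "how real does this image look?"

w = rng.normal(0, 0.1, 64)             # discriminator weights
v = rng.normal(0, 0.1, (16, 64))       # generator weights
lr = 0.01
for step in range(500):
    real = dataset[rng.integers(0, 1000, size=32)]
    z = rng.normal(size=(32, 16))
    fake = np.tanh(z @ v)              # generator: latent noise -> image
    # Discriminator ascends log D(real) + log(1 - D(fake)): it judges whether
    # an image looks like a member of the whole class, not like one target.
    w += lr * (real.T @ (1 - discriminator(real, w))
               - fake.T @ discriminator(fake, w)) / 32
    # Generator ascends log D(fake): it is rewarded for fooling the judge,
    # never for matching any particular training image pixel-for-pixel.
    grad_fake = (1 - discriminator(fake, w))[:, None] * w[None, :]
    v += lr * (z.T @ (grad_fake * (1 - fake ** 2))) / 32
```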

The linked article in the OP says it produces a new picture every two seconds.

It seems to allow me to reload a bit faster than that, so I would guess it keeps at least a small cache to serve when the next picture isn’t ready. Though it’s possible it actually puts in a slight delay on loading if one isn’t ready.
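
If that’s how it works, the serving logic could be as simple as something like this sketch (pure speculation on my part; generate_image() below is just a stub standing in for the actual GAN sampler):

```python
import collections
import threading
import time

def generate_image():
    # Stub standing in for "sample one face from the GAN" (~2 s each).
    time.sleep(2)
    return b"...jpeg bytes..."

cache = collections.deque(maxlen=5)    # the last few generated images

def generate_forever():
    # Background worker: keep producing images at its own pace.
    while True:
        cache.append(generate_image())

def handle_request():
    while not cache:                    # only blocks before the first image
        time.sleep(0.1)
    return cache[-1]                    # fast reloads re-serve a recent image

threading.Thread(target=generate_forever, daemon=True).start()
```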

I call ‘Bullshit’.

I went through about thirty iterations, and while I didn’t find anything wildly out of line like the above, there was a certain subtle…oddness to at least half the pictures. For example, one person was wearing eyeglasses that were slightly different between the left and right halves. Others had seemingly unlikely proportions among the upper, middle, and lower portions of their faces, and I noticed a tendency toward androgyny as mentioned by a couple of other posters.

Still, interesting. Thanks to the OP for posting that.

Seems right in line with Deepfakes to me. (Video is more complicated, but they also discard more unrealistic results.)

I looked at some of those images and there were elements that reminded me of Fred Savage.

This sort of thing, I think, serves better to illustrate the failings of AI than its successes (impressive as those are). The errors are due to the AI fundamentally not knowing what it is that it’s producing; to it, it’s just a set of pixels sufficiently close to the reference data, not something like ‘a face’ or ‘a cat’. It needs an additional model of dos and don’ts for cats and faces: it needs to somehow know that cats don’t have giraffe-like necks, for which it needs to know what a neck is; that is, it needs some model of the components of a cat, what assemblage of parts makes a cat, and what those parts themselves are. I think this is really going to be the next big challenge in AI.

I can’t help but be struck by the similarity to what psychologists call ‘System I’ versus ‘System II’ thinking. System I is the sort of fast, heuristic recognition engine that seems to work like a neural network; System II then comes in to create an explanatory hypothesis, build a model of what’s perceived, and check whether that model is plausible, fits known models, and so on. So it seems like AIs lack the System II thinking needed to ‘weed out’ stuff that System I recognizes as similar enough to some reference class, but which contains disqualifying elements that only become obvious upon checking the semantics of the images.

Most of the women are wearing unmatched earrings!

Unless this is some hip new fad about which I was not informed…

The second and fifth images I got were of black people.

I don’t know why, but neural network generated faces always creep the hell out of me, especially since they’re getting more and more realistic.