Sure is, and it’s producing promising results in fields such as speech recognition, natural language processing, and image/video content classification.
The new approach is often called ‘deep learning’, which (vast oversimplification ahead) uses hierarchical stacked layers of neural-network-like structures. For example, at the lowest level you might have something that’s only looking at groups of 4 pixels and has been trained to recognise high contrast; above that, a layer that looks at the outputs of the layer below and is trained to recognise ‘edges’; and so on, until you get up to layers that can recognise specific features of an image (faces, dogs, buildings), or other things such as painting styles, ‘mood’, or whatever.
The interesting thing about this process is that it need not have any cutoff limits in it. To a human, there’s a certain point at which a random arrangement of dots starts looking like a face; any more random and it just doesn’t look like a face at all. To a trained deep learning network, the arrangement of dots can be given a precise score for how ‘face-like’ it is, and this scoring is continuous right down to zero.
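To make that concrete, here’s a toy sketch in PyTorch (the layer sizes and the ‘face-likeness’ framing are my own invention, and the network is untrained, so the number it prints is meaningless but precise):

    import torch
    import torch.nn as nn

    # A toy 'face-likeness' scorer: each layer looks at the outputs of the
    # layer below, building up from tiny pixel patches towards larger features.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=2),   # lowest layer: 2x2 pixel patches
        nn.ReLU(),
        nn.Conv2d(8, 16, kernel_size=3),  # combines those into 'edge'-ish features
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 1),
        nn.Sigmoid(),                     # continuous score in (0, 1), no cutoff
    )

    # Even pure noise gets a precise score; there is no point at which the
    # network 'refuses' to see a face, unlike human perception.
    noise = torch.rand(1, 1, 64, 64)
    print(model(noise).item())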
So something like Deep Dream (the famous Google experiment that changes everything into dog faces) works by first having been trained to recognise dog faces. When you point it at any image, it can mathematically determine which bits look more ‘dog-facey’ than others (even where no such resemblance can be perceived by humans). It then makes a set of random changes to those areas of the image and re-assesses them; the highest-scoring examples are retained and the process is repeated, with the result that it ‘evolves’ dog faces out of the noise.
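A rough sketch of that evolve-from-noise loop as described (the doginess scorer below is a hypothetical stand-in for a trained network, and the step sizes are made up):

    import torch

    # Hypothetical stand-in scorer; imagine a network trained on dog faces here.
    def doginess(img):
        return img.mean().item()

    img = torch.rand(1, 1, 64, 64)           # start from noise
    for step in range(100):
        # make a set of random changes to the image and re-assess them
        candidates = [img + 0.01 * torch.randn_like(img) for _ in range(16)]
        img = max(candidates, key=doginess)   # retain the highest-scoring one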
For further viewing, there’s some good, very accessible content explaining deep learning on the Computerphile YouTube channel.
It might be more accurate to say that humans can’t consciously recognize image features below that threshold. Given the resemblance of Google’s output to dreams, it’s possible that this is how human dreams actually work: by amplifying tiny false “signals” in the noise.
Maybe, but I suspect even our unconscious brain processes may have some sort of ‘minimum usefulness’ threshold; deep learning algorithms only have that if we explicitly add it. A deep learning algorithm will just give you a precise percentage answer as to how a raven is like a writing desk.
Slight nitpick: the changes aren’t random. You calculate the gradient of the ‘doginess’ function with respect to the input pixels, given fixed weights, and then take a small step in the direction of more ‘doginess’.
Contrast this with the training process, which calculates the gradient of the loss function with respect to the weights, given fixed inputs, and then takes a step in the direction that minimizes the loss. This is why it is often described as running the network in reverse.
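In code, the two steps differ mainly in which tensor the gradient is taken with respect to. A minimal sketch, using a tiny stand-in network rather than a real trained recogniser:

    import torch
    import torch.nn as nn

    # Tiny stand-in for a trained 'doginess' network.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))

    # Deep-Dream-style step: gradient of the score w.r.t. the PIXELS, weights fixed.
    img = torch.rand(1, 3, 32, 32, requires_grad=True)
    model(img).sum().backward()
    with torch.no_grad():
        img += 0.1 * img.grad                 # small step towards more doginess

    # Training step, for contrast: gradient of the loss w.r.t. the WEIGHTS, inputs fixed.
    inputs = torch.rand(4, 3, 32, 32)         # a fixed batch
    labels = torch.rand(4, 1)                 # hypothetical targets
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    opt.zero_grad()                           # clear gradients left by the dream step
    loss = nn.functional.mse_loss(model(inputs), labels)
    loss.backward()
    opt.step()                                # step that minimizes the loss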
Ah, OK. I thought it was doing it by mutation, re-evaluation and selection. (There are some systems that do work that way, and others using adversarial pairs of networks, where one tries to create images that the other accepts as real, and the other tries not to be fooled by an image that is fake.)
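For anyone curious, a bare-bones sketch of that adversarial setup (both networks and the ‘images’ here are tiny stand-ins, not a real GAN architecture):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # makes fakes
    D = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())                   # real-vs-fake judge
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real = torch.rand(8, 32)                  # pretend these are real images
    fake = G(torch.randn(8, 16))

    # The judge tries not to be fooled by fakes...
    d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ...while the generator tries to make images the judge accepts as real.
    g_loss = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()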
I can’t stop thinking of the “Dog Show” sketch by John Finnemore, where he proposes awarding Best in Show based on “which dog looks most like a dog”, and it devolves from there…