Yes. Everyone using deep learning for autonomous driving (whether they’re targeting L2 or L5) has built sophisticated driving simulation systems. These are basically high-quality driving games, but with an emphasis on absolute realism, and the data they generate is used to train shipping products.
There is the danger, of course, that the net will fit itself to peculiarities of the simulation, which is why real data is also important. But massive amounts of the training happen on synthetic data, and each new “interesting” incident that shows up in the real datasets gets added to the synthetic one, re-rendered under varying lighting, weather, and other conditions.
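As a sketch of what the front end of that re-rendering step might look like: each recorded incident gets paired with many randomized condition sets. The parameter names and ranges below are made up for illustration, not taken from any real simulator.

```python
import random

def randomize_conditions(incident_id, n_variants, seed=0):
    """Generate randomized rendering conditions for one recorded incident.

    All parameters here are hypothetical placeholders for whatever a real
    simulator would actually expose (sun angle, weather, exposure, ...).
    """
    rng = random.Random(seed)  # fixed seed so variants are reproducible
    variants = []
    for _ in range(n_variants):
        variants.append({
            "incident": incident_id,
            "sun_elevation_deg": rng.uniform(-10.0, 80.0),  # night to noon
            "weather": rng.choice(["clear", "rain", "fog", "snow"]),
            "road_wetness": rng.uniform(0.0, 1.0),
            "camera_exposure_ev": rng.uniform(-2.0, 2.0),
        })
    return variants

variants = randomize_conditions("cut-in_0042", n_variants=100)
print(len(variants))  # 100
```

One real incident becomes a hundred training scenes this way, which is the whole point: the expensive part (capturing the interesting event) is paid once.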
Everyone is also using data augmentation, which is arguably a subset of synthetic data. Take some real data and transform it in various ways: mirror it, rotate it, change the color balance, add noise, etc. It can only go so far, but when collecting data (or just labeling it) is expensive, it can make your dataset go much further.
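A minimal sketch of those transforms, assuming images are H x W x 3 float arrays with values in [0, 1]:

```python
import numpy as np

def augment(img, rng):
    """Apply classic augmentations: mirror, rotate, color shift, noise."""
    out = img
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                   # horizontal mirror
    k = rng.integers(0, 4)
    out = np.rot90(out, k=k)                    # rotate by 0/90/180/270 deg
    gains = rng.uniform(0.9, 1.1, size=3)       # per-channel color balance
    out = out * gains
    out = out + rng.normal(0.0, 0.02, out.shape)  # additive Gaussian noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))   # stand-in for a real labeled image
aug = augment(img, rng)
print(aug.shape)  # (32, 32, 3)
```

Each call yields a new training example from the same labeled original; in practice you'd apply this on the fly during training so the net rarely sees the exact same pixels twice.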
Also in this category are generative adversarial networks (GANs). Essentially, you train two networks: one that consumes the data, and another that tries to produce data that “fools” the first. Photoshop has been shipping this for a couple of years now in its various “neural” filters. You’ve probably already seen a zillion photographs that have been altered (to remove unwanted portions, to extend beyond the borders, etc.) using these features.
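Here is a toy sketch of that adversarial loop on a one-dimensional problem, with everything scaled down to illustration size: a one-parameter-family generator G(z) = a*z + c chases samples from N(4, 1), a logistic discriminator D(x) = sigmoid(w*x + b) tries to tell real from fake, and the gradients are worked out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = 0.1, 0.0   # discriminator parameters
a, c = 1.0, 0.0   # generator parameters
lr = 0.05

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for step in range(3000):
    real = rng.normal(4.0, 1.0, 64)   # "real" data: samples from N(4, 1)
    z = rng.normal(0.0, 1.0, 64)      # generator's noise input
    fake = a * z + c

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    sr, sf = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - sr) * real - sf * fake)
    b += lr * np.mean((1 - sr) - sf)

    # Generator: gradient ascent on log D(fake) -- try to fool the critic
    sf = sigmoid(w * (a * z + c) + b)
    g = (1 - sf) * w                  # d log D(fake) / d fake
    a += lr * np.mean(g * z)
    c += lr * np.mean(g)

print(c)  # the generator's output mean drifts toward the data mean of 4
```

This is the same two-player game the big image GANs play; the only difference is that both players there are deep networks instead of two-parameter functions.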
7 years ago, Go was in the category of “yes, computers have solved chess, but Go requires special human intelligence that we are many years from emulating”. 5 years ago, AlphaGo completely dominated humans.
At any rate, I agree that humans are currently much better than AI systems at learning from limited data, but this is improving all the time. And it may not matter: enormous datasets are available. Arguably, the human ability to extrapolate from limited data may even prove a negative, as we can see from the various forms of faulty generalization we’re prone to. Humans don’t have a choice, because our available dataset is very limited. An AI with far more data to work with can afford to be more careful about generalization.
No doubt, but neural nets still have a lot of life left, and it would not shock me if they were never really supplanted. New topologies, new training systems, etc., of course. But still with a simple weighted-sum-and-nonlinear-transform at the core (just as computers still have a simple transistor at their heart).
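That core operation really is just a few lines; here it is with ReLU chosen as an example nonlinearity (any nonlinear function would illustrate the point):

```python
import numpy as np

def neuron(x, w, b):
    """One 'neuron': weighted sum of inputs, then a nonlinearity (ReLU)."""
    return np.maximum(0.0, np.dot(w, x) + b)

x = np.array([1.0, 2.0, 3.0])    # inputs
w = np.array([0.5, -0.25, 0.1])  # learned weights
b = 0.05                         # learned bias
print(round(neuron(x, w, b), 2))  # 0.35
```

Everything else in a modern net (topology, attention, normalization, training tricks) is arrangement and repetition of this unit, which is why the transistor analogy holds up.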