If there’s one word that applies to the entire history of AI development, it’s “surprise”. In the early days of AI, initial results suggested that we were just a few refinements away from human-like intelligence and even AGI, complete with perfect natural language understanding and translation sensitive to context, register, and idiom. Surprise! We weren’t, which led skeptics to predict that we never would be. Surprise! Now we are, but it took revolutionary new approaches to get there. Before generative language models were actually built, even at small scale, no one predicted their amazing power. Now we’re being surprised by how fast they’re evolving. And that’s only one new AI paradigm; there are others, like IBM’s DeepQA.
Do you really think anyone is in a position to predict where this is going, what limits it may or may not reach, or the consequences of the many ways it might be deployed? I sure don’t. And I can’t see anything useful coming out of six months of navel-gazing.
To give an analogy: in the early days of ARPANET, the predecessor of the public internet, the emphasis was on information sharing for military and research purposes. Everyone was focused on developing protocols for email and file transfer, like SMTP and FTP. The “world wide web” wasn’t even on anyone’s radar. When predictions of a public “information superhighway” circulated, it was envisioned in conventional terms, as something far more rigid and hierarchical than what actually emerged. No one imagined the socially transformative influence of the Web itself, of tools like blogs and social networks, or of the “wiki” concept of encyclopedic knowledge sharing. Predicting the societal impact of new technology is hard, and we rarely get it right. So entrusting the future of AI development and deployment to a roadmap drawn up by a committee of Elon Musk-style self-appointed prognosticators is likely to be worse than useless.