Those are all relatively straightforward pattern-recognition tasks. For sure, they are beyond what previous “expert systems” could feasibly do, and thus “impressive” to someone who doesn’t understand that generative models can only ‘comprehend’ concepts within their training set, but it isn’t creative problem-solving that balances the complexity of various competing goals and requirements in the real world.
After observing a year of people making prognostications regarding what LLMs, diffusion models, and other generative machine learning systems will do, I’ve come to a few conclusions. First, people tend to overestimate how much ‘AI’ is realistically going to be able to do in fields other than their own. I’ve seen predictions that generative AI will eliminate whole occupations such as lawyer, medical specialist, educator, mathematician, astronomer, et cetera, under the assumption that these fields primarily involve work that is repetitive/iterative tasking or doing a bunch of calculations. In fact, while advanced expert systems driven by generative AI will become one of the main tools in these fields for evaluation and research, no current generative AI system, or any extension of it, is able to fill the crucial aspects of these roles: applying a diverse array of real-world experience, including interaction with human clients/users, to solve complex problems. Someone above mentioned that statisticians would be one of the occupations essentially taken over by AI, which is kind of absurd, because statisticians (at least, good ones) don’t just build models and crank calculations; they critically assess the validity of their assumptions and the applicability of a particular distribution or method. Generative AI might replace ‘data scientists’ but not professional statisticians, at least not any time soon.
Second, the business purpose behind LLMs is to create a human-machine interface that ‘tricks’ the human side into interpreting the machine side as a thinking, conscious intelligence. It does so by mirroring human language patterns, thereby feeding into people’s natural tendency to anthropomorphize and apply a ‘theory of mind’ to things that interact with them. It is conceptually no different than a pet owner ‘reading’ human verbal responses into the mannerisms of their dog, even though it is pretty clear that dogs do not understand human grammar. And while LLMs are sophisticated enough that they can even ‘fool’ some machine learning professionals into believing that actual consciousness is somehow occurring, there is zero objective evidence or any reason to believe that anything akin to human cognition is occurring inside the model. These models are statistically aping the language use in the data sets they are trained on, which is why companies training these models are so circumspect about what they allow their production-level machines to ingest.
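To make the “statistically aping language” point concrete, here is a minimal sketch in Python, a toy illustration of my own and emphatically not how any production LLM is actually built: a bigram Markov chain that learns nothing except the frequency with which one word follows another in its training text, yet can still emit superficially fluent strings. Scale that basic idea up by many orders of magnitude and you get output that looks conversational without any comprehension underneath.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    followers = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=12):
    """Emit text by repeatedly sampling a follower of the last word.

    No grammar, no meaning -- just the frequency statistics of the
    training text, which is the entire trick at toy scale.
    """
    out = [start]
    for _ in range(length - 1):
        choices = followers.get(out[-1])
        if not choices:  # dead end: this word never appeared mid-text
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = ("the model predicts the next word and the next word "
          "follows the last word the model saw in training")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Nothing in that process involves understanding; it is purely a probability table over word sequences, and the apparent fluency is an artifact of the training text’s own statistics.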
Third, the fact that LLMs and other models are prone to ‘hallucinations’ (i.e. making shit up), producing confidently false results, using intellectual property without recognition of the creator’s contributions, et cetera, isn’t going to stop businesses large and small (but especially large corporations, who view human workers as fungible ‘resources’ anyway) from trying to use them to supplant human workers, resulting in further degradation of creative merit, well-honed skills, and expertise in products and services. We’ve already seen how television and movie production companies want to use generative AI to ‘create’ output and then just have a human script doctor hone it, even though anyone who has tried to generate dialogue from a chatbot can attest to just how derivative and plebeian the results are (even when they do make sense and don’t contain basic conceptual errors about how the world works). People have tried to use chatbots to write legal opinions, only to see them confidently invent citations and produce authoritative-sounding legalese that is actually utter nonsense. The very idea that a generative AI would be placed in a position to perform safety-critical analysis or produce something of vital importance, such as a fiscal analysis or medical diagnosis, without thorough expert human review is frankly frightening, even as it is already occurring and producing exactly the kind of garbage result you would expect. Shitty screenplays and children’s novels that are just a jumble-fuck of existing ideas and trends are one thing; made-up fiscal trends driving market investment, or identification of non-existent maladies (because the purpose of a diagnostic bot is to find disease, and if it finds nothing, what is the point of its existence?), are quite another.
To the question of the OP, the answer is not to treat college as a vocational training institution (as too many people do) but to use it to acquire a diverse array of knowledge and skills that are more broadly applicable than one currently popular occupation or line of research, and more specifically, to master the art of how to research and learn new information and ideas. People with a broad base of knowledge and the ability to apply it to novel situations will always be of value, particularly in a world where ‘AI’ bots dominating narrowly defined occupations churn out an enshittified simulacrum of human thought and reasoning.
Stranger