Unsure of what to study (AI-concerns on obsolescence)

Those are all relatively straightforward pattern recognition tasks. For sure they are beyond what previous “expert systems” could feasibly do and thus “impressive” to someone who doesn’t understand the limitations of generative models, which can only ‘comprehend’ concepts within their training set, but it isn’t creative problem-solving that balances the complexity of various competing goals and requirements in the real world.

After observing a year of people making prognostications about what LLMs, diffusion models, and other generative machine learning systems will be able to do, I’ve come to a few conclusions: first, that people tend to overestimate how much ‘AI’ is realistically going to be able to do in fields other than their own. I’ve seen predictions that generative AI will eliminate whole occupations such as lawyer, medical specialist, educator, mathematician, astronomer, et cetera, under the assumption that these fields primarily involve repetitive/iterative tasks or doing a bunch of calculations. In fact, while advanced expert systems driven by generative AI will become one of the main tools in these fields for evaluation and research, no current generative AI system, or extension of one, is able to fill the crucial aspects of these roles: applying a diverse array of real-world experience, including interaction with human clients/users, to solve complex problems. Someone above mentioned that statisticians would be one of the occupations essentially taken over by AI, which is kind of absurd because statisticians (at least, good ones) don’t just build models and crank calculations, but critically assess the validity of their assumptions and the applicability of a particular distribution or method. Generative AI might replace ‘data scientists’ but not professional statisticians, at least not any time soon.

Second, the business purpose behind LLMs is to create a human-machine interface that ‘tricks’ the human side into perceiving the machine side as a thinking, conscious intelligence. It does so by mirroring human language, thereby feeding into the natural tendency of people to anthropomorphize and apply a ‘theory of mind’ to things that interact with them. It is conceptually no different from a pet owner ‘reading’ human verbal responses into the mannerisms of their dog, even though it is pretty clear that dogs do not understand human grammar, and while LLMs are sophisticated enough that they can even ‘fool’ some machine learning professionals into believing that actual consciousness is somehow occurring, there is zero objective evidence or any reason to believe that anything akin to human cognition is occurring inside the model. These models are statistically aping the language use in the data sets they are trained on, which is why companies training these models are so circumspect about what they allow their production-level machines to integrate.

Third, the fact that LLMs and other models are prone to ‘hallucinations’ (i.e. making shit up), producing confidently false results, using intellectual property without recognition of the contributions of the creator, et cetera, isn’t going to stop businesses large and small (but especially large corporations who view human workers as just fungible ‘resources’ anyway) from trying to apply them to supplant and replace human workers, resulting in further degradation of creative merit, well-honed skills, and expertise in products and services. We’ve already seen how television and movie production companies want to use generative AI to ‘create’ output and then just have a human script doctor hone it, even though anyone who has tried to generate dialogue from a chatbot can attest to just how derivative and plebeian the results are (even when they do make sense and do not contain basic conceptual errors about how the world works). People have tried to use chatbots to write legal opinions only to see them confidently create citations and produce authoritative-sounding legalese which is actually utter nonsense. The very idea that a generative AI would be placed in a position to perform safety-critical analysis or produce something of vital importance such as a fiscal analysis or medical diagnosis without thorough expert human review is frankly frightening, even as it is already occurring and producing exactly the kind of garbage result you would expect. Shitty screenplays and children’s novels that are just a jumble-fuck of existing ideas and trends are one thing; made-up fiscal trends driving market investment or identifying non-existent maladies (because the purpose of a diagnostic bot is to find disease, and if it finds nothing, what is the point of its existence?) are quite another.

To the question of the o.p., the answer is not to treat college as a vocational training institution (as too many people do) but to use it to learn a diverse array of knowledge and skills that are more broadly applicable than to one currently popular occupation or line of research, and more specifically, to master the art of how to research and learn new information and ideas. People with a broad area of knowledge and the ability to apply it to novel situations will always be of value, particularly in a world where ‘AI’ bots dominating narrowly defined occupations gin out an enshittified simulacrum of human thought and reasoning.

Stranger

thx for your perspective - always appreciated !

if you don’t mind: how do you feel about AI “coding”?

  • what can and what cannot be done with regards to coding by AI?
  • how good (as in “efficient”) is the code?
  • outlook 1/3/5 years ahead?

again, thx (and welcome back)

I’m probably not the right person to ask for a definitive opinion on the future of “AI coding” or “prompt engineering” or whatever. Despite my pretensions as a competent Python application developer and aspirations to someday master Scala and maybe Rust (so I can stick them next to C/C++ and Fortran on the part of my resume practically no one reads), I’m not a software engineer or programmer by ostensible trade, nor have I used any generative coding tools myself. The Python code I have seen from these tools is…perfectly cromulent and certainly more PEP 8 compliant than what most Python programmers write, but not terribly sophisticated. It is about what you would expect of someone who did a Udemy or O’Reilly comprehensive Python course and started cranking out small applications. I think trying to get it to build a more complex enterprise application or do something highly technical like a sophisticated filtering algorithm would quickly run into difficulties in just trying to describe what is wanted of the code.

I have no idea how efficient the code is since all I have seen are small and not particularly computationally intensive applications, and while I expect the sophistication of these tools to improve substantially I wouldn’t even try to estimate a timeline. I expect that some hybrid of generative tool and expert application interface/user experience developer will eventually come to dominate user interface design because nothing sucks worse for a programmer than fussing with buttons and fields and making scrolling pages behave. And this kind of basic ‘gruntwork’ is exactly what AI tools should be used to automate, just as word processors readily handle a lot of the business of formatting, spell-checking, correcting/updating, managing bibliographies, et cetera that a typist circa 1985 would be doing by hand. The ideal use case for AI tools is to minimize the busywork and allow experts to focus on creative and user relations aspects, and for highly technical research positions to focus attention on critical publications, handle the processing and initial interpretation of ‘Big Data’, do the bulk of writing on proposals, et cetera.

I would not, as a college student today, focus on trying to be an “AI coder” or “prompt engineer” or whatever any more than I would advise a comp sci student to focus their effort on learning COBOL or Pascal, as the specific skills and demands of the future will almost certainly be very different than what is being done today in interfacing with these tools. I would suggest learning some STEAM fundamentals (science, technology, engineering, arts, maths) to the extent they are interested, and also developing actual in-person social connections vice “social media” and other transitory online tools of the current era. I have nothing to show for how much time I burned on IRC, Usenet, BITNET LISTSERVs, et cetera, but I still keep in touch with a few SPS colleagues, and I use my physics and engineering fundamentals daily, even in areas of everyday life; and frankly I wish I had spent more time expanding my ‘creative’ interests.

Ultimately, college is a place to learn who you are (at least a bit), what interests you intellectually, and hopefully make some lifelong friendships and maybe some connections to aid you in your professional development. It is a place to learn new ideas and fields that you didn’t even know existed in high school. I wish I had known going in that biology was far more than just ‘button-counting’ and ‘stamp-collecting’ and had delved more into it, both because I find it fascinating and because there is such a diversity of opportunity and technical development compared to the ‘physical’ sciences that it is perhaps the field most ripe for career opportunities. And if STEAM fields are not your thing, there are still plenty of useful nettles to grasp, provided one does not spend the entire experience inebriated and in various compromising positions. I think the prognostications that AI will eliminate whole professions are overblown, or if true, will impact virtually every profession, so you might as well engage in what interests you rather than trying to shape a curriculum to anticipate future occupational developments regardless of your inclinations. It is far more important to develop genuine interests and enthusiasm, and to create connections (including internships and co-ops) that could turn those into actual vocational opportunities, than to chase whatever job trends are currently fashionable.

Stranger

Professional coding is not my field either, but I know hacker types for whom it is, and they do use GitHub Copilot, simply because it is efficient if you need to write a lot of code. (In the sense that it autocompletes a bunch of stuff and you end up writing more lines of working code in a given amount of time.)

I am a professional software developer. ChatGPT is accelerating coding dramatically. Where it really accelerates you is when you have to do something easy, but using tools you are unfamiliar with. Or, doing something straightforward that is tedious and takes time.

In my last job, my last task was to take an internal system and convert it to OAuth2 authentication so external users could get at it. As a professional, you can get an ask like this even though you may never have used OAuth in your life and have no idea how it is set up and used. So off to the web to research, study examples, etc. Then you build some code, and find out it doesn’t run because you missed an include or didn’t understand something about the interface. So debug, test, try again…
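To make that concrete, the hand-rolled piece you eventually end up with is usually something like the rough sketch below: a check that the incoming bearer token is still valid before the old internal endpoint does its thing. This is purely illustrative; the framework (Flask), the introspection endpoint, and the credential names are stand-ins, not the actual system I worked on.

```python
# Illustrative sketch only: validate a bearer token against the provider's
# token introspection endpoint (RFC 7662) before serving the old internal
# endpoint. The URL, client credentials, and route are invented examples.
import requests
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
INTROSPECTION_URL = "https://auth.example.com/oauth2/introspect"  # assumed
CLIENT_ID = "internal-system"   # assumed
CLIENT_SECRET = "change-me"     # assumed

def token_is_active(token: str) -> bool:
    """Ask the authorization server whether the presented token is valid."""
    resp = requests.post(
        INTROSPECTION_URL,
        data={"token": token},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("active", False)

@app.route("/report")
def report():
    # Reject requests without a valid "Authorization: Bearer <token>" header.
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer ") or not token_is_active(auth[len("Bearer "):]):
        abort(401)
    return jsonify({"status": "ok"})  # stand-in for the original internal logic
```

Getting even that far means first learning that token introspection exists, which library speaks it, and what your authorization server expects, and that is where the research time goes.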

What can make it even more tedious is that some tasks could use multiple available libraries, so you have to evaluate them, figure out the right one to use, check the licensing of it to make sure it can be used in commercial code, etc.

Then you get to write unit tests to test the code you’ve written, add the documentation, add your code to the build, modify the build scripts to include the libraries, etc. What should be a morning job can stretch into a couple of days if things go wrong or things get confusing.

Today, I would just point my legacy login code at Copilot and say, “add OAuth2 to this code, build the unit tests needed to test it, and add it to the project.” Check the code and the unit tests to make sure they test the correct things and that they pass, run an integration test to make sure the code behaves well with others, and you’re done.
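By “check the code and the unit tests” I mean nothing more elaborate than making sure the generated tests actually exercise the auth path, something along these lines. This is a sketch only, assuming the earlier illustration is saved as app.py; the fixture and names are assumptions, not real project code.

```python
# Sketch of the kind of generated tests worth verifying: does the endpoint
# reject a missing token and accept a valid one? token_is_active is patched
# so the test never calls a live authorization server.
import pytest
from unittest.mock import patch

import app  # the hypothetical module from the sketch above

@pytest.fixture
def client():
    app.app.config["TESTING"] = True
    return app.app.test_client()

def test_rejects_missing_token(client):
    assert client.get("/report").status_code == 401

def test_accepts_valid_token(client):
    with patch("app.token_is_active", return_value=True):
        resp = client.get("/report", headers={"Authorization": "Bearer abc"})
        assert resp.status_code == 200
```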

No, AIs might not be writing a million lines of navigation code for a spaceship yet, or writing a transactional, performant multi-user program, but then 99% of programmers don’t do that either. They glue code libraries together to make unique applications. AIs can do that perfectly well.

I was also a UI lead. AI is accelerating this even faster. You can now take a napkin sketch of a UI concept, take a picture of it with your phone, and tell ChatGPT to build it in code. It will. You can then take a snapshot of your company’s application and say, “format it to match this standard”. And it will do that too. Days of effort in seconds.
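I obviously can’t paste generated output here, but the kind of skeleton that falls out of a napkin-sketched login form is roughly the following. This is hand-written in Tkinter purely for illustration; the widget names and layout are made up, not anyone’s actual generated code.

```python
# Illustrative login-form skeleton of the sort a napkin sketch turns into.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.title("Login")

frame = ttk.Frame(root, padding=12)
frame.grid()

ttk.Label(frame, text="Username").grid(row=0, column=0, sticky="w")
username = ttk.Entry(frame, width=30)
username.grid(row=0, column=1)

ttk.Label(frame, text="Password").grid(row=1, column=0, sticky="w")
password = ttk.Entry(frame, width=30, show="*")
password.grid(row=1, column=1)

remember = tk.BooleanVar()
ttk.Checkbutton(frame, text="Remember me", variable=remember).grid(
    row=2, column=1, sticky="w")

ttk.Button(frame, text="Sign in", command=root.destroy).grid(
    row=3, column=1, sticky="e")

root.mainloop()
```

None of it is hard; it is just the fiddly part that used to eat the day.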

More importantly, it unlocks your creativity, because once you have a general design concept you can say, “Make ten variations of this,” and then you can test them. You can put controls on a page, create a use case, and ask ChatGPT to execute it on multiple UIs and calculate the mousing distance for each. You can try many different design concepts against users quickly and easily. And once you have one page done, you could give ChatGPT napkin sketches for the rest and say, “Build them like the first page, and link them together into one application.”
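The mousing-distance comparison is just geometry once you have each variant’s control coordinates, something like this rough sketch (the control names, coordinates, and use case are invented for illustration):

```python
# Sketch: sum the straight-line pointer travel between consecutive controls
# visited by a use case, for each candidate layout.
from math import hypot

use_case = ["username", "password", "remember_me", "submit"]

layouts = {
    "variant_a": {"username": (100, 80), "password": (100, 120),
                  "remember_me": (100, 160), "submit": (100, 200)},
    "variant_b": {"username": (60, 60), "password": (260, 60),
                  "remember_me": (60, 200), "submit": (260, 200)},
}

def mousing_distance(controls: dict, steps: list) -> float:
    """Total pointer travel for visiting the controls in the given order."""
    points = [controls[name] for name in steps]
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

for name, controls in layouts.items():
    print(f"{name}: {mousing_distance(controls, use_case):.1f} px")
```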

Then when you are done with that, you can say, “Now write a version of this in Swift and make an iPhone app skeleton out of it.”

This just scratches the surface. AIs will be running our tests, examining legacy code for security issues and bugs, doing customer support, etc.

Today, new programmers often have to bring a large list of technologies they have learned to be competitive in a job. So colleges spend a lot of time just exposing their students to a lot of stuff they can put on their resumes. With AI, learning and using new tech becomes much easier, so a focus on the ‘science’ of computer science would be a good move. Learn to be the person who makes those complex libraries, not the one who uses a few lines of boilerplate to link them together.