Talk me off the ledge: Will AI destroy my industry?

The real danger of “artificial intelligence” (even pretty limited ‘generative’ AI) is that the convenience will encourage people to pick the cheap and easy option rather than do the work to develop their own skills, knowledge, and ‘voice’. Nothing that ChatGPT, Bing, or any of these other engines produce (and they are just machine synthesis engines, not actual autodidactic intellects) is in any way original, although because they lack the cognitive constraints that most people have, the output often appears superficially novel; more importantly, it can be produced in a fraction of the time it takes a writer to write an essay or an artist to produce an image. Since much of what is produced as commercial art is derivative anyway, the output looks and feels equivalent (at least to the unstudied eye) to human-produced content, but it really just serves to erode interest in developing the skills and knowledge base to do this work, in a manner analogous to how the electronic calculator and then computer algebra systems caused a decline in basic arithmetic and complex mathematical manipulation among all but an obsessive minority.

When we do actually produce a genuine artificial general intelligence (AGI) that is capable of original work, the output will probably not even make sense to us except in a very trivial manner of direct application. That is to say, if an AGI developed into the equivalent of Federico Fellini or Martin Scorsese, it would produce the analogue of a ‘film’ that wouldn’t even make sense to human viewers, and of course we will not have any insight into how it functions ‘under the hood’, even at a granular level. Fortunately, despite all of the hype, I don’t think current ‘chatbots’ are anywhere close to this kind of breakthrough, and they are probably a dead end on the road to genuine AGI. Which is not to say that they won’t have a dramatic impact upon intellectual workers, or that people currently producing art and other mind-labor content aren’t going to have to learn to work with and utilize these systems, because they are being adopted writ large as fast as people can figure out an application and an appropriate training set, regardless of whether this is really a good idea or whether the systems are as reliable as advertised.

It seems like a lot of people aren’t aware of it, but there is a vast amount of online content that is automagically generated, and has been for about the past decade. Given a few basic prompts and a selection of images, you can put up an automatically generated YouTube video on any topic that is…well, annoying and repetitive, but at least as good as or better than most of the content uploaded by actual human contributors. The only thing special about ChatGPT and other chatbots is their near-real-time interactivity, made possible by the enormous amount of computing power that can draw from a vast array of ‘trained’ data and the connections made between different elements via a complex neural network whose weights are set through extensive training. This is emergent and in a sense represents a breakthrough of sorts, but really one of complexity rather than actual capability. Whether you want to argue that a chatbot ‘understands’ what is asked of it is something of a semantic argument, but they are now sophisticated enough to respond to a pretty general prompt with a product that approaches being useful. And as you note, marketing blurbs and the like contain very little semantic information anyway; they’re basically a way of packing some small amount of useful data into an aesthetically pleasing package of words, images, and music, depending on the format. It is just about the perfect application for a generative AI.

Stranger