AI is wonderful and will make your life better! (not)

Except they don’t “[lack] any discernible bias”; in fact, LLMs show such an almost inevitable propensity for bias that the companies training them for use as chatbots are constantly having to tweak them to prevent them from exhibiting offensive, outrageous, and inflammatory responses, and from providing information that amplifies the prejudices of their users:

It’s not, although reducing what the human brain does to ‘token processing’ is a common way of trying to make that comparison. In fact, a brain (human or otherwise) takes in a constant stream of sensory information, decomposes the various types and sources of data across a wide array of purpose-specific regions, and synthesizes its perception of the world through models informed by lived experience, models which are ‘filled in’ through (sometimes erroneous) anticipatory cues and constantly refined by further experience. Brains don’t use backpropagation to learn, neurons are not simple virtual input-output functions, and processing occurs in more than just the transformations within individual neurons. In fact, while artificial neural networks (ANNs) are based upon primitive theories of how the brain functions and are useful as heuristic architectures, they are actually a very poor representation of how neurons in an organic brain work, which you might expect given that the effective ‘clock speed’ of a mammalian brain is orders of magnitude slower than that of a modern digital computer and yet it can still outperform an LLM in many ways.
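To make the contrast concrete, here is a minimal sketch (plain Python, purely illustrative, with made-up numbers and a toy task) of the kind of ‘neuron’ an ANN actually uses: a weighted sum of inputs pushed through a fixed nonlinearity, with weights nudged by backpropagation of an error gradient:

```python
import math

def sigmoid(z):
    # Fixed nonlinearity; the only "response" an artificial neuron has.
    return 1.0 / (1.0 + math.exp(-z))

# One artificial "neuron": y = sigmoid(w1*x1 + w2*x2 + b)
w, b, lr = [0.1, -0.2], 0.0, 0.5     # weights, bias, learning rate (made up)
x, target = [1.0, 0.0], 1.0          # toy input and desired output

for _ in range(100):
    y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    # Backpropagation for a single neuron: chain rule through the sigmoid.
    grad = (y - target) * y * (1.0 - y)
    w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
    b -= lr * grad

print(round(y, 3))  # creeps toward 1.0; that is all the "learning" there is
```

The point of the sketch is what is missing: no spiking dynamics, no chemistry, no structural change, just gradient arithmetic on a few floating point numbers.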

The ‘advancements’, aside from improvements in the efficiency of transformer networks, are essentially just training on ever more data and applying more reinforcement, which raises the question of how a human with a 25 watt brain and a couple of decades of experience (at least a third of which is spent in standby mode) can be far more accurate and sensible than an LLM that has consumed many libraries’ worth of text (and, for multimodal models, images) and millions of kilowatt-hours of energy. Of course, LLMs are not optimized for solving physics problems, so the fact that they can pick up math and mechanics from mostly text might be a little astonishing to some, although what it really shows is the level of logic built into the structure and metasemantics of language, not some innate capability for actually performing conceptual reasoning.
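To put rough numbers on that comparison (a back-of-envelope calculation: the 25 watts and two decades come from the paragraph above; the LLM figure just takes “millions of kilowatt-hours” at its low end of one million):

```python
# Back-of-envelope energy comparison using the figures cited above.
brain_watts = 25
years = 20
brain_kwh = brain_watts * years * 365 * 24 / 1000   # Wh -> kWh, ~4,380 kWh

llm_kwh = 1_000_000   # low end of "millions of kilowatt-hours" of training

print(f"Brain over {years} years: ~{brain_kwh:,.0f} kWh")
print(f"Training energy ratio: ~{llm_kwh / brain_kwh:,.0f}x the brain")  # ~228x
```

Even at that charitable low end, training consumed a couple of hundred times the energy a human brain uses in twenty years of living.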

Let’s be clear that the vast majority of people using chatbots are “naive users” who have essentially no understanding of how an LLM functions, even to the limited extent that actual experts understand the basic functionality. They do not have the technical knowledge, perspicacity, or frankly the interest to understand what is going on within the ‘black box’ of a chatbot, and many are using it to get information they have neither the knowledge nor the interest to fact-check. Even as Sam Altman et al acknowledge that LLMs are not ‘totally reliable’, they still advocate for their use, because at this point their entire pitch is that the public is using these tools en masse, so they must bring some value, right? More importantly, the responses that LLMs give, with perfect spelling and grammar, an authoritative voice, often precise figures, implicit knowledge presented as factoids dropped into responses, and citations if you ask for them, give the appearance of a very knowledgeable agent, one which most people imagine to be drawing from a database of all human knowledge stored somewhere in the secret netherworld of the Internet. That it is actually just chunking prompts into tokens and producing results based upon the statistical adequacy of an ANN trained on masses of textual data doesn’t occur to the vast majority of users, and they wouldn’t understand how that works even if you explained it with diagrams. Users, which is to say people, inherently trust someone or something that sounds really good, especially if they don’t understand what the answer should be well enough to sanity-check it, even though (as in the example above) a knowledgeable person can immediately intuit an error in the response.
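For anyone curious what ‘chunking prompts and producing results based upon statistical adequacy’ cashes out to, here is a deliberately naive sketch of next-token sampling; the token table and its probabilities are invented for illustration, whereas a real LLM computes such a distribution with a trained transformer over a vocabulary of tens of thousands of subword tokens:

```python
import random

# Invented toy statistics: probability of the next token given the
# previous two. A real model learns these relationships from training
# data rather than reading them from a hand-written table.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(prompt, steps):
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-2:])        # a two-token "context window"
        dist = next_token_probs.get(context)
        if dist is None:
            break                           # no statistics for this context
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the cat", steps=3))  # e.g. "the cat sat on the"
```

Note what is nowhere in that loop: a model of a cat, a fact-check, or any notion of truth. Fluency is the only thing being scored, which is exactly why the output sounds authoritative whether it is right or not.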

You can define this as a “people problem” if you like, but it is borne out of executives seeing other, more ‘tech savvy’ companies implement ‘AI’ in the form of chatbots, code generators, and so forth, and, out of a ‘fear of missing out’, rushing to implement it as well in the name of an ‘efficiency’ that they don’t actually understand, haven’t developed useful metrics to measure, and aren’t critically evaluating or training employees to apply (or not) as suitable. It isn’t that ‘AI’ is “a panacea for incompetent employees”, or that it solves some particular problem execs didn’t realize they had, other than reducing headcount and payroll expenses (not to mention the pesky employees with all of their issues and demands to be treated with decency). It is literally execs being told, and repeating to each other, that this is the future and that a failure to enthusiastically embrace ‘AI’ technology they don’t even understand will be their corporate death knell, leaving employees to walk the line between somehow showing that they are using this technology in a useful way and still getting real work done, under expectations of somehow multiplying their output. (I recently heard of a CEO lecturing his workers in an ‘all hands’ meeting that he expected a “3X, 5X, 10X increase in productivity” on some fanciful basis completely ungrounded from any reality about what his employees actually do.)

I’m not opposed to the development or application of the various technologies under the broad label of ‘artificial intelligence’; I took classes in complex adaptive systems and the philosophy of self-learning systems, I’ve used machine learning methods for going on 15 years for various data analysis and design optimization purposes, and I recognize that there are many types of real world problems for which some kind of deep learning and synthesis approach is the only practical way forward. LLMs are actually really interesting in the theoretical sense for how they have essentially proven out many speculations in computational linguistics that were virtually impossible to demonstrate except by implementing a complex language manipulation system, and as a natural language interface they have a valid use case, provided that their use can be suitably constrained to the intended domain. But I also think that such applications need to pass some rigorous method of validation before they are put into ‘production’ use, and certainly before they are foisted upon an uncritical public who are not equipped to be anything but credulous about the ridiculous and unsubstantiated claims made for ‘AI’.

I am for sure tired of hearing the bombast about AI and how these systems have a ‘spark of consciousness’ with zero evidence other than the ‘feeling’ of people working on them who are predisposed to want (or fear) that development, and I am really fucking tired of being insulted, lambasted, portrayed as an ‘idiot’ who doesn’t understand how chatbots work, threatened if I speak up with criticisms and credible doubts about absurdly unsubstantiated claims, and dismissed as a ‘luddite’ or ‘technophobe’ when I point out very obvious flaws and demonstrable falsehoods in the claims enthusiasts make about the sagacity of LLM-based AI tools, or about how we’re just on the cusp of artificial general intelligence when these tools still make the most foolish of mistakes and ‘hallucinate’ nonsense if you start exceeding their context window. I’m also pretty aghast at the lengths that AI makers, and those flogging their wares for them, go to in covering up all of the harm that they actually do in the service of making a handful of people ‘wealthy’ via overblown speculation and promises of a post-scarcity utopia (for some, at least) built on the backs of everyone else, all foisted onto us by companies competing with each other to run ever faster toward a cliff of disappointment when it turns out that chatbots are mostly good for novelty, and the main use case for LLMs in general is to replace phone agents or to let you talk to your phone without long pauses.

Stranger