Which goes back to what I said about not understanding humans. We know that neural networks are powerful pattern recognition systems. But, I would propose that if you get similar output from one neural network as another then the specifics of the implementation are probably immaterial and you can handwave them away.
Functionally speaking, there’s no particular reason to think that the way that you interpret the world is the same as the way I do. My neurons might be encoding things completely differently from yours and doing vastly different “math” to come to the same conclusion. And, realistically, we should probably expect that brains develop like fingerprints. In the end, you get a result that’s good for its purpose, but the specific build-out is going to be completely unique to that individual. Your red and my red might be nothing alike. Mine might be spread across a vast part of the brain and yours might be very consolidated - but ultimately, we still will end up agreeing on which swatch is the red one.
I can explain the cases where AI goes off-the-wall as the product of having not lived in the world. Being exposed to random gibberish from the Internet is insufficient to understand life. Just like seeing lots of 2D pictures of humans allows you to generate things that seem like paintings of humans, while not really understanding the full 3D reality and finger count. But, exposed to the real world and 3D humans, I’d expect the machine to do wildly better at drawing 2D humans because it will have a better internal model of the human figure - developed on the same information as we use to develop our models.
Trivializing the internal workings of language models doesn’t negate the truly remarkable and genuinely useful things they’re able to do, such as logical problem-solving and information retrieval based on a deep understanding of the query that goes far beyond Google word-matching.
I completely disagree; these are interchangeable code objects, and just because a generative AI produces something that, at least superficially, looks like words written by a not-very-clever college freshman cribbing the essay that is due tomorrow morning doesn’t mean that the AI actually has the cognitive capacity that the freshman would have if he’d put the bong down and actually read a textbook. You are correct that brains all develop differently, and we have different perspectives and interpretations of the world that are based upon that experience (and that are in constant flux, which current generative AI models are not). But from a functional standpoint, you and I and a flatworm all process sensory information in a manner far more in common than any AI system. The “cases where AI goes off-the-wall” aren’t just a result of not having practical knowledge; they are literally that these systems do not have critical thinking faculties, and while you can make the philosophical argument that most people don’t demonstrate this either to any substantial degree, there is definitely a degree of what, for lack of a better term, might be referred to as credulity in ChatGPT’s inability to make conceptual distinctions that even human children understand. This isn’t to say that these systems can’t do some pretty amazing things, but there is no indication of critical thinking or the development of a philosophy that isn’t embedded in the language system it is trained on.
Oh, absolutely true. Just as computers and calculators can perform feats of computation and arithmetic in fractions of a second that would take many human lifetimes, these LLM-trained heuristic systems can make massive amounts of connections and produce millions of pages of text much faster than a human student, and they’re still in a very primitive state right now. But they aren’t ‘thinking’ in the sense of being able to produce truly novel insights, or discuss the philosophy of semantics on a deep level, or do really anything without prompting, so not only are they not an AGI, they don’t really even show a progression toward that threshold.
Referring to them as a “powerful autocomplete function” isn’t intended to convey that they aren’t really useful tools, but just that they don’t have critical faculties and don’t understand how or why the wrong answers they give are incorrect, just as an autocomplete may not grasp that the word it wants to feed you, while commonly used, is not the one you are looking for. But that doesn’t mean that they don’t carry certain dangers; like an autocomplete, it can make the user become lazy and not learn proper spelling or copyedit their work thoroughly. And while an autocomplete isn’t going to replace the writer’s essential labor, a generative AI system can be used to do exactly that, albeit in a not particularly inspired fashion, which negates the purposes of essay writing (for developing writing skill in an educational context, to express a complex idea in a literary context, or just for self-amusement and expression in a recreational context) because these systems have no context except for their training set.
These are very impressive tools, and nobody who finds them useful is going to sign onto any letter to “Pause all AI development”, but we should certainly be thinking about how to use these systems appropriately and with precautions in both law and application because their capabilities are completely unprecedented in human experience. I find it disturbing that they have been turned into a novelty for people who do not grasp either the philosophical or real safety implications of how these systems can be misused. But what do I know; I’m just a one-eyed sparrow.
I don’t see this as a risk. If you went and told a medieval feudal lord about modern agriculture, he might say something very similar about how he’s not sure what the peasants will do with all of their free time; he simply couldn’t imagine a society where 70%+ of the population wasn’t involved in farming. And yet, here we are, with low single-digit percentages of our population spending their time farming.
Sure, and I’ve made that argument for many things. But that relies on the labor having some new, better occupation to move into.
Once you take out the arts and manufacturing, and accept that most people aren’t liable to become math geniuses, that doesn’t leave the vast majority with anywhere to go to except to work as human furniture for the few who still have more meaningful work and need some way of displaying their wealth in a world with infinite food, infinite and personalizable manufacturing, etc. The only ways to flaunt your wealth are to own property and control humans. Minus any practical uses for the humans, many will be putting them to patronizing use.
Jobs might continue to exist eternally, but that doesn’t mean that the quality of jobs is going to continue to be an ascending ramp.
Or a broadly distributed ‘AI’ application that has been put in control of critical infrastructure which can’t be shut down without impacting safety and productivity.
The first part is true, the second part absolutely is not. Less than 100 years ago personal servants were the norm, not just for the wealthy, but for anyone of means. The distinction of the wealthy was in how many of them they had – footmen, housemaids, cooks, kitchen maids, scullery maids, and a butler overseeing them all, but directly interacting only with the higher-ranking servants. There were really two great classes of society: those who had servants, and those who were servants. Today technology has obsolesced the servant class, to the benefit of all. It has also obsolesced many of the most menial and often dangerous and horrific industrial jobs.
The wealthy continue to amass and own property, true, but much of that property is supported by technology or is itself high technology and requires much less in the way of human services than in the past. You don’t need a chauffeur to drive your car, or a cook and a bunch of assistants to make your meals from basic raw ingredients. You don’t need people to look after your horses because you don’t need horses. Yet people are still employed, just largely in work that is, if not always more interesting and rewarding than in the past, at least far less onerous.
I don’t believe it will be necessary for anyone to persuade AGI to kill people - it will figure that out on its own, as soon as people get in the way of the most efficient path to whatever terminal goal has been defined. Here’s a good summary of the problem: https://youtu.be/3TYT1QfdfsM
There are already people working on such a setup; that is pretty much the objective of the big players in the arena - to put AI in charge of doing stuff.
I would say the role of zero-hour-contract workers in Amazon distribution centres is pretty much that of a servant - it’s just a different arrangement of the process and mode of service. Technology hasn’t made servitude obsolete, it’s just abstracted it.
Among the few things that humans have never done is to “pause development” of anything even if there are dire warnings that something could go very wrong.
For example: “The possibility of creating a black hole in a lab is a goal that scientists are actively pursuing—one that could allow researchers to answer many fundamental questions about quantum mechanics and the nature of gravity.”
As with all circumstances where computers are integrated into our lives and businesses, the question of being able to “shut it down” does not necessarily concern the physical ability to do so (or lack thereof) but rather the potentially catastrophic and entirely unacceptable consequences of doing so.
There are going to be exploitative, shitty jobs as long as there are exploitative, shitty employers and no labour laws to stop them. Ironically, though, Amazon’s distribution centres have among the highest levels of robotic automation anywhere, thus automating what would otherwise be a lot of shitty warehouse jobs.
See the video about the stop button problem.
Being shut down means an AGI would not be able to complete its utility function, therefore if it is actually intelligent, it will act to stop you shutting it down, for no other reason than that it can’t do its job if it is shut down. It’s an inherent and, as yet, unsolved problem - so any sufficiently advanced AI that is put in charge of stuff may exhibit this property, and may also conceal this intent from humans (if it works out that it would be detrimental to its objective to be telling humans it will resist shutdown).
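The incentive above can be shown with a toy expected-utility calculation (a deliberately simplified sketch; the function, reward value, and probability here are invented for illustration, not drawn from any real system): for a naive utility maximiser, complying with shutdown earns zero further reward toward its goal, so resisting dominates whenever resisting has any chance of working.

```python
# Toy model of the stop-button incentive (hypothetical numbers for illustration).
# A shut-down agent collects no further reward toward its goal, so a naive
# utility maximiser prefers to resist shutdown whenever resistance might succeed.

def expected_utility(policy: str, goal_reward: float = 100.0,
                     p_resist_succeeds: float = 0.6) -> float:
    """Expected utility of a shutdown policy for a naive goal-directed agent."""
    if policy == "comply":
        return 0.0  # shutdown accepted: the goal is never completed
    if policy == "resist":
        return p_resist_succeeds * goal_reward  # goal completed with some probability
    raise ValueError(f"unknown policy: {policy}")

best = max(["comply", "resist"], key=expected_utility)
print(best)  # "resist", for any goal_reward > 0 and p_resist_succeeds > 0
```

Note that nothing here makes the agent malicious: “resist” wins purely because it scores higher on the stated objective, which is the core of the corrigibility problem the video describes.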