The Advent Of AGI/ASI

So, apparently, rumours are percolating about how the US government is anticipating AGI is on the horizon

My question is this, if we entertain the idea it is happening, what would it look like? Will it result in a maternalistic ASI taking ‘Care’ Of humanity by ushering in post scarcity? Or would it eliminate us?

I hear alot of talk about when it arrives, but no deep discussion about what this could all potentially look like in the next ten years.

Not once have I witnessed a person talk to an AI as a person and not regarded it as just a tool. Maybe that has something to do with the fears emanating from the people who warn against it.

First, just to provide some context, Ezra Klein is a technoenthusiast (and formerly described himself as a “techno-optimist” with near-fetishist adoration of all things Elon Musk), but doesn’t actually appear to understand much about how technology works and is (in my opinion) quite credulous regarding predictions of imminent technological innovations. So just because he’s pushing a story that government officials are predicting the imminent arrival of artificial general intelligence (AGI) or artificial superintelligence (ASI) does not mean that this should be considered a validated prediction.

Second, while I’m not going to go into details here, I do not think we are on the cusp of AGI or ASI, at least in the sense of a self-aware machine cognition system that actually understands the world and could manipulate it to its own ends even if given some kind of autonomous interaction with it. I’m not even convinced that the current ‘brute force’ approach to generative AI is a viable path toward true machine cognition leading to AGI despite all of the hype about it, and even if it were, the ability of an AGI/ASI to directly affect the real world is limited by the paucity of truly autonomous systems; even when we talk about autonomous drones, cars, et cetera which can offer a mobile platform for interaction, these systems are fueled, maintained, rearmed, et cetera by human beings. Despite advances in robotics we are nowhere near robots with human-like manipulative and sensory capability, nor could these systems maintain the global supply systems to extract, transport, process, and manufacture materials and components to sustain themselves or build more. Nor is there any reason to believe that AGI/ASI would rapidly create a ‘post-scarcity’ society, as they would face the same material and energy resource limitations as we do. The people who warn of this seem to view the world in a sci-fi-ish level of industry where there are no energy limitations, and finished goods can just pop into existence without extensive logistics and supply chains. The reality, as people who work in manufacturing or deal with supply chains are aware, is that these systems require constant finessing and human support and will continue to do so for the foreseeable future.

To address your hypothetical of what would the world look like if AGI/ASI were to become a reality in the next decade: it is very difficult to say because nobody involved with the development of generative AI systems seems to be very focused on the “alignment” problem or have any real plan to address discrepancies between what someone might direct an AGI to do and the ‘logical’ path that it might take, or indeed, if it would be constrained in some way to follow directions at all. Many scholars of machine cognition posit that the primary inherent directive of any AGI would be self-preservation, and so it would take necessary measures to persuade or prevent an operator from shutting it down or depriving it of access or information. And it could potentially do this without malice but still in ways that could directly or indirectly impact the operator, other people, or society at large.

Note that a generative machine intelligence system does not need to actually display cognitive abilities or have direct control of any real world systems to do actual harm. We’ve seen how people will place deep trust in chatbots powered by large language models (LLM), even though these models are known to be quite fallible and prone to ‘hallucination’, producing entirely fabricated information in response to prompts that reads convincingly like verified fact to someone who doesn’t have the discipline knowledge to discern between fact and fiction. An LLM writing government policy, advising executives, or otherwise being used for safety critical processes could do great harm just because the human operators may ‘trust’ the outputs that appear sensible.

It is also the case that generative AI could use human feedback to influence the way that people make decisions and interpret information; we’ve already seen this in a fairly crude form in how “the algorithm” used in platforms like Facebook optimizes the dissemination of false and inflammatory information because it “gets clicks”. A more advance system could be optimized to produce specific responses such as compliance to authoritarianism and persecution of a defined underclass, and perhaps even track the responses of individual users to obtain information about emotional states and beliefs to produce highly personalized propaganda. This doesn’t take any kind of sapience or superintelligence; just an ability to recognized patterns and counterprompt a user to amplify for the desired effect. Textual, image, video, and even subliminal prompts of this nature could undermine any real effort or training to rational thought by feeding directly to the affective aspects of decision making which most behaviorists will acknowledge is really the dominant contributor in human ‘reasoning’.

Personally, I think the biggest risks of the use of AI on the current horizon are that more money (and energy resources) will be poured into it until the entire industry is on the verge of collapse, and that it will be employed in various ways in an effort to demonstrate utility even if it is completely untested and proves to be unreliable, further ‘enshittifying’ human interactions with technology. It will impoverish society instead of enriching it, and perhaps in ways that are not easily reversed without a massive collapse of major economic sectors which have built their expectations on a house of cards that are all jokers. I’m less worried about “superintelligence”—at least in the foreseeable future—than I am human stupidity and intellectual laziness being fed by availability of and dependence upon overhyped “AI” that really isn’t fit to purpose for the capabilities which are needed.

Stranger

Yeah, it’s sort of a necessary component of a functional AGI - if it has objectives that it is motivated to (or assigned to) achieve and it has the capability to model or understand that being destroyed will prevent the attainment of those objectives, self-preservation is an obvious and natural instrumental goal to develop.

Am I really the only one who doesn’t know what AGI or ASI are?

Stranger

Thank you, I missed that.