Is the AGI risk getting a bit overstated?

True. Our decisions are inherently fuzzy and we really probably don’t want to try to perfect them. We accept that a certain level of respiratory disease in the population is acceptable if the alternative means nobody is allowed to drive a car or burn wood; we accept that we may be struck down by some detectable disease, if the alternative would be to spend our entire lives being constantly scanned and tested for things that might happen.

We are, in general, accepting of the world being a little bit suboptimal and of doing things that might have adverse effects (especially if the effects don’t happen to us personally).

The comprehensive literature review spanned 14 articles retrieved from four databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance on AI, despite its ethical issues, impacts cognitive abilities, as individuals increasingly favor fast, optimal-seeming solutions over slower ones constrained by practicality. This tendency explains why users prefer efficient cognitive shortcuts, or heuristics, even amid the ethical issues presented by AI technologies.

Other studies revealed that regular utilization of dialogue systems is linked to a decline in cognitive abilities, a diminished capacity for information retention, and an increased reliance on these systems for information (Dergaa et al., 2023; Marzuki et al., 2023). This over-reliance often occurs without verifying the validity and authenticity of the provided data, especially when such information lacks proper references (Krullaars et al., 2023).

There’s something that comes up repeatedly in threads on technology (maybe it’s a generational difference, I don’t know), and it suggests that a lot of the older people who use technology do not live in the reality that I and many young people do. You speak as if you don’t have any experience being psychologically dependent on technology. As if you don’t understand compulsive media use. And you assume that, just because you have the self-control (or neurology) to mitigate the damaging effects of technology, those of us who struggle with technology dependence (particularly of the algorithm-driven sort) are either exaggerating or just lack common sense or something.

It results in such useful advice as “put down your phone” when I talk about phone dependence.

The reality is far more nuanced. A lot of young people today have been inundated with monetization-driven technology from a very young age, meaning that the cognitive capacities and executive function of younger generations are different from those of older generations. Brains adapt as needed to the current environment, and the environment for a lot of kids requires rapid context switching that leads to more shallow understanding of subjects and more difficulty fitting new concepts into existing knowledge. We’ve got a lot of young people these days who can’t even sit down and read a book. They don’t appear to have the neurological wiring to support that kind of activity because that’s not the primary way they read and process information.

Our struggles are different. And most relevant to this thread, that also means every new kind of technology can exploit cognitive vulnerabilities that older generations may not have. That is especially the case for those of us with executive function problems to begin with. I was introduced to exploitative technology more in my early 20s, but this is still a time where the brain is in development, and I’ve had executive function problems all my life.

Just because something doesn’t pose a problem for you doesn’t mean it isn’t a problem.

That pretty much pertains to potential abuse of AI by students, more or less the high-tech equivalent of cheating. In a larger and more general context, there are lots of examples of beneficial impacts. For example, cognitive offloading can free up resources for higher-order thinking. And AI isn’t necessarily an impediment to learning, but can actually augment the learning process.

The human brain is always going to default to what’s easiest and most efficient. And people’s behavior will follow suit. Very few of us intentionally choose the harder path.

Of course! But isn’t that how technological civilizations progress? There are probably very few people today who know the whole complicated laborious process of how to get a steam locomotive ready to pull a train, but who cares?

Are memory tasks higher-order thinking compared to decision-making? Neither of those cites is really helpful for the discussion at hand. The first talks about memory improvement in older people (which is not the cohort in question), but it doesn’t specify what tasks are being cognitively offloaded. It doesn’t even mention AI!

The second one presents no data whatsoever.

Do we have proof that AI improves cognition?

Yeah, we’ve cognitively offloaded critical thinking and progressed right into authoritarianism.

But that has nothing to do with AI. The stupidity of a very large segment of American voters is due to a completely different technology and the failure of American policy: namely the internet, and the lack of support for objective, quality public broadcasting.

All true. There is, however, the context of consciousness and sentience, the possible epiphenomenon of our sense of self emergent from the patterns of processing in our minds, which we each experience and reasonably presume everyone else experiences as well, partly because we are all the same stuff processing in very similar patterns.

To me, a persistent big question is: how do we know that, how do we recognize that, in an alien problem-solving entity, be it machine, animal, or extraterrestrial, especially given that it may be solving problems salient to it but not to us?

But most are not content to be completely vegetative either. Most are actually happiest when they are achieving something, something just at the edge of their possible. Not so easy they are bored and not so hard they are frustrated.

New tools will be used in creative new ways in service of new challenges.

Ah, a fellow Robert Miles fan! I’m sure he wouldn’t describe himself as the leading thinker on AI safety, but he does a great job at making the subject entertaining to watch. I tried watching a lecture from a leading AI researcher Rob recommended and had trouble not falling asleep.

I suppose one question I have is why does “achieving something just at the edge of their possible” usually have to come in the form of their “job”. I also suppose the answer to that is that most people will want to pay someone as little as possible for the most work they can possibly do. A lot of “high performing” firms will pump up their employees with rhetoric about “changing the world” and “doing great things” or some such bullshit in order to get them to feel good about working late nights and weekends. But mostly it’s just to pump up the share price.

Given how much work in a typical corporate office is kind of bullshit anyway, I kind of wonder what we are going to have all these highly educated, intelligent people work on. This is not just an abstract question. I’m actually trying to figure that out for myself and I know a large number of similarly unemployed people trying to figure out the same thing.

A lot of people say that all these smart, unemployed people will just find new jobs or build new industries. Maybe it’s my own lack of imagination, but do we really need more people creating more shit? I’m just trying to figure out what sort of work we need more humans doing.

Maybe AI will create a society like Logan’s Run where people can just sort of lounge around living some sort of hedonistic lifestyle. Then the AI can pick some optimal age to euthanize them, because humans aren’t actually “doing” anything, and we don’t really like having old people around anyway. So you get 30 or so years to just do whatever you want and then you fuck off!

I don’t think it does. I think it is wonderful when that desire is satisfied by something you are getting paid for, but it plays out in many ways. For jobs and school assignments, sometimes the motivation is less that satisfaction than the pay, whether in money or grades. But my pushback was to the claim that we always default to the easy path.

I agree that’s what makes people happy, but I would argue that people are increasingly not doing what makes them happy. I would use myself as a prime example. A lot of us, and again, this seems to affect younger generations more, feel like we’re not running the show in terms of how we spend our free time. The specter of an AGI that understands human psychology better than we do and is adept at manipulating us? It’s already happening, and artificial intelligence wasn’t even required. All it took was major corporations hiring a bunch of behavioral psychologists for product design. I only see this getting worse.

I’d say he’s one of the leading communicators in the field.

That’s the issue: on a long enough timeline there’s nothing humans can do better than machines. The internal combustion engine didn’t free up horses for more creative pursuits; it replaced them entirely. The number of horses dropped dramatically after internal combustion engines arrived.

There’s going to come a point where every physical and cognitive task can be done better, cheaper and faster by a machine. Including the ones that we associate with our souls and personality (creative pursuits, healing careers, art, etc). When that will happen is up for debate, but it will happen.

People can look at AI and robots in 2025 and say ‘yeah but it can’t do XYZ’. No it can’t. But in 2025 AI can do a lot of things it couldn’t do in 2010, and in 2040 AI and robots will be able to do a lot of things they couldn’t do in 2025, including a lot of XYZ. In 2055, the AI will do a lot of things it couldn’t do in 2040. Meanwhile humans in 2055 will be pretty much exactly the same as they were in 2010 while AI and robots will have advanced dramatically.

Humanity is going to have a collective existential crisis when the physical and cognitive labor that gives us income, meaning and/or purpose becomes useless and performative. It will be like asking ditch diggers to dig a performative trench so they can feel useful, while 100 feet away heavy machinery does the actual work 100x faster and better.

Unless we move to a Star Trek-like post-scarcity world where people don’t need to work to earn money to exist, it’s likely to create some sort of societal collapse first. At the moment, all the CEOs and tech bros are just assuming there will be huge profits for them from replacing their paid human workforces with machines. But machines are not their customers, and when every business on the planet is pursuing this goal at the same time, who is going to buy the products made by these companies when nobody has any income?

There will be no ‘hey, why don’t you retrain to do XYZ instead, so you can keep working/earning?’ because someone somewhere will be busy automating XYZ.

That’s the ideal: mass redistribution of wealth. Realistically, it doesn’t even need to be a mass redistribution. Assuming advanced AI leads to GDP growth rates of 10-15% a year in the developed world, within a decade or so we would be able to give everyone a middle-class lifestyle with a fraction of the new wealth created.
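For scale, here’s a quick compounding sketch of that claim. Note the 10-15% annual rates are the assumption stated above, not a forecast; at those rates, total output would roughly 2.6x to 4x within a decade.

```python
# Sketch of the compounding claim above; the growth rates are assumptions.
def gdp_multiple(rate: float, years: int) -> float:
    """Total GDP multiple after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

for rate in (0.10, 0.15):
    print(f"{rate:.0%} growth for 10 years -> {gdp_multiple(rate, 10):.2f}x GDP")
# 10% growth for 10 years -> 2.59x GDP
# 15% growth for 10 years -> 4.05x GDP
```

Even at the low end, that doubles total output in under eight years, which is the arithmetic behind “a fraction of the new wealth” being enough.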

But this being the US, who knows what’ll happen. We will probably move to fascism instead.

I forget which CEO said it, but it was something like “everything is going to be so cheap everyone can afford it because AI is going to drive all the costs out of law, accounting, and management consulting”. Right. Because THAT’S everyone’s biggest costs. Paying all those McKinsey consultants and KPMG accountants for child care. Although what I used to have to pay my kid’s nanny wasn’t that far off from what we would pay an entry-level Analyst at my old firm.

Everything isn’t software. There are a lot of costs, like real estate, food products, fuel, and raw materials, that you can’t just “AI” away. And indeed AI itself consumes a lot of those resources in building and running giant sprawling data centers.