"philosophical" thoughts on our future with AI

well, first off, I am not an expert in any of this (neither AI nor Philosophy), so consider everything I say a layman’s / intellectually curious reasoning … but not more.

I have been thinking a lot lately about AI and how it will or could alter our lives. There definitely will be changes; those are already creeping up on us. Will it be for the “better”, as we often hear, or will we be “worse” off?

I am not too optimistic … and here’s why:

The last two huge tech impacts on our personal lives and global societies were:

  • the 1995+ rise of the internet, which now reaches pretty much anybody in the somewhat developed world
  • the 2008+ rise of smartphones, which made the internet a commodity that most people can afford and permanently carry with them.

And I clearly remember the claim (and I did echo it) that this was the democratization of information, knowledge and education: the poor would have pretty much the same access to the same information as the rich, and the internet would be a huge equalizer in that regard, allowing the poor to be better informed and hence move up the social ladder and lead overall better lives.

It would also break the information monopoly that CNN, FOX, BBC et al. were force-feeding us, they said. Politicians would be held to higher standards, as everything they say would be easily verifiable…

But what I found is that nowadays (I think the COVID years were hugely important here) people look for information that fits their opinions and belief systems, instead of forming an educated opinion based on data, information and facts. Having politicians openly break down those floodgates came as an (unexpected) bonus that hugely amplified and, to a certain degree, “legitimized” that aberration.

So, we (humans) took something that had ALL IT TOOK to make the world a better - and smarter - place, and perverted this tool.

Based on that, I am not overly optimistic that we will use AI as I so often read in “romantic” predictions - the kind that will convert us into board members of the Enterprise, where everything works, nice and dandy, there are no conflicts, and all basic needs are covered and not even an “issue” anymore.

I am not so much interested in an “Al128, you’re right/wrong” debate, but rather in what your expectations are, based on your lives/environments/experiences …

Will we - sorta - drop the ball (again) … or will we really be able to advance to a “better society”?

your thoughts?

Quick reply: We are soooo screwed.

There might have been an era where an invention as potent as AI might have been predominantly a force for good. As, e.g. electricity or railroads were.

There is a lot going wrong in global society right now: e.g., the rise of right-leaning populist authoritarianism worldwide, two active wars, etc. Plus the stressors coming down the pike from global warming, population growth in some poor regions and decline in most non-poor regions, etc. And the issues the OP points out with information polarization leading to societal splits.

In that very inauspicious environment, IMO it’s a virtual certainty the tools will be used mostly for evil by the evil people who are in the ascendant or are about to become so.

AI is a powerful technology that can have both positive and negative impacts on our society, depending on how it is used and by whom.

On the one hand, AI can be exploited by bad people for harmful purposes, like cyberattacks, misinformation, surveillance, and warfare. It can also pose ethical, social, and legal challenges, like privacy, accountability and bias. Protecting ourselves and our society from these threats will require vigilance, regulation, and education.

On the other hand, AI can also be a force for good in the hands of good people and institutions, enabling us to solve complex problems, enhance productivity, improve health, and advance knowledge.

AI can be a valuable tool for various organizations and institutions that aim to promote positive agendas. For example, think tanks like the Brookings Institution, the Council on Foreign Relations, the Center for Strategic and International Studies, and the Carnegie Endowment for International Peace can use AI to analyze and inform policy decisions. And theoretical research centers like the Institute for Advanced Study (IAS) can use AI to explore and discover new frontiers of science and mathematics.

We humans have evolved just to a level of intelligence that has enabled us to achieve remarkable feats, but also to cause significant damage to ourselves and our planet. We need AI to help us overcome our limitations, correct our mistakes, and create a better future for all.

Right now, the Holocene Extinction Event is our planet’s biggest threat. Unlike the previous 5 extinction events, this one is our fault. Let’s hope it’s not too late for AI to help us survive. Maybe we humans don’t deserve to survive…but think of the cats! :cat2::cat2::cat2:

I don’t mean to pick on you in particular; you acknowledge that AI tools can be exploited for bad purposes and present ethical challenges. But the idea that “AI” is going to “…help us overcome our limitations, correct our mistakes, and create a better future for all” is a kind of magical thinking that is all too prevalent among advocates, has little-to-no basis in reality, and is often used to argue against regulatory oversight and voluntary agreements to limit development until adequate safety assessments and protocols are identified.

This isn’t to say that “AI” does not represent a potential boon to science, medicine, law, et cetera; the various capabilities that fall under the expansive umbrella of “artificial intelligence” offer tremendous benefits. Advances in machine learning are probably the only ways to solve certain “Big Data” problems where the volume of data and scope of possible trends is beyond the ability of human investigators to process. Making accurate predictions of protein synthesis and conformational behavior will almost certainly require some kind of neural network or other complex heuristic approach. I’m becoming increasingly convinced that any theory of fundamental physics beyond the Standard Model and General Relativity is going to come from some kind of hybrid of human ingenuity and machine cognition. And ‘AI’ systems for doing background research, sorting through legal and medical records, creating complex statistical models, et cetera, all offer the benefit of offloading the drudgeries of low-grade intellectual labor onto a more efficient and reliable system, freeing experts to spend more time focusing on conceptual and creative thoughts. Generative AI is certainly the future of commercial art as it can produce bespoke images and animations much faster than a human artist with non-AI tools, and it is probably only a matter of time before it can produce complex works of literature and music that rival the best that humans can do alone (although there is a strong argument for restraint insofar as it undermines an essential contribution of humanity to its own creative and intellectual wealth).

It isn’t as if we have a choice about ‘artificial intelligence’ being integrated into our workflow and entertainments; barring a collapse of industrial civilization, it is just going to happen however much some may wish it away. But we can make informed and well-considered decisions on how (and how not) to utilize various AI technologies, or we can just take the laissez-faire approach of letting everyone do whatever they want and assuming that the best applications will shake out and everything else will sift to the bottom for disposal, which has turned out to be a poor approach with many advanced technologies and may be catastrophic with “AI” given its destructive potential. Even setting aside malicious usage, the fact that people are willing to allow AI tools to take over management of critical capabilities and infrastructure without some kind of assurance of reliability and alignment to human needs is enormously concerning. We currently have no way to build anything like ethics or rigorous controls into a ‘black box’ system, and while the idea of installing a ‘kill switch’ into a safety-critical system seems appealing, it is ultimately facile when we are talking about a machine intelligence controlling some economically or societally important system. You wouldn’t shut down an AI SCADA controller if it meant that the lights all go dark, but that is exactly the kind of application in which you’d want to deploy an AI controller.

As for notions that AI is somehow going to ‘fix’ all of the problems we’ve created for ourselves in the misapplication of technology or correct mistakes extending from our cognitive limitations, this is an idea that has the same childlike appeal as Santa Claus or religions that pray for an invisible dude to come down and correct the ills of the world (or take his believers up to a fantasy paradise). I see advocates of advancing AI as fast as possible arguing that it will create viable nuclear fusion power generation, or ‘solve’ climate change, or spontaneously create cures for cancer and genetic syndromes. The reality is that these are all very difficult—and in many cases physically impossible within the constraints of time and resources—problems to solve, and no matter how ‘smart’ an AI system might be, it won’t reverse thermodynamics or discover fundamental new principles of biology and physics with immediate applications. At best, machine learning and generative tools may help advance human-led research in these areas or make it easier to communicate the challenges, but there is no techno-pixie dust solution that solves these problems.

Even beyond this belief in the miraculous power of “AI” is an apparently broad acceptance that machine learning systems are on the cusp of General Artificial Intelligence (GAI or AGI), which is imagined as a fully sapient superintelligence capable of benign and beneficial leadership. Some “AI experts” espouse the belief that ChatGPT and other generative intelligences are already capable of cognition at a human level, even though any cognitive neuroscientist can point out various ways in which these systems are just not capable of ‘independent thought’ or sapience, regardless of their generative or problem-solving capability. What is even more concerning in some ways than deliberate misuse of “AI” is overestimating its capability and reliability, and consequently using it in safety-critical systems where it is just not adequately developed to perform reliably. And yet, this is happening now and will only increase as the desire for greater efficiency and the pressure to reduce the costs of human intellectual labor mount.

AI is not going to fix all of our ills and correct our mistakes. At best, it is a tool that can facilitate our efforts to overcome our own limitations, but it has an even greater potential to let us create even graver and more extensive problems, notwithstanding what our increasing reliance upon it will do to our own intellectual and philosophical autonomy. Unfortunately, the people saying this the loudest, and from a place of greatest knowledge and thought, are widely dismissed as mewling doomsayers at best, to be pushed aside in pursuit of being able to make autogenerated cat videos, autonomous and enormously destructive weapon systems, and deepfake porn. Because…that’s what we do with technology, which for most people is still a kind of magic battle club. So it goes.

Stranger

I agree that AI is not yet capable of solving big problems like “reversing the Holocene Extinction Event” without human guidance and collaboration. But it is certainly a valuable tool for helping us collect and analyze data, and generate insights and solutions for these types of problems. In the short term, I think Hybrid Artificial Intelligence is the way to proceed. This could be a game-changer.

Hybrid Artificial Intelligence is the combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them, and achieving goals that were unreachable by either humans or machines alone. Hybrid AI can be a force for good, enabling us to solve complex problems, enhance productivity, improve health, and advance knowledge.

I do believe AI has the potential to tackle big issues independently of humans someday in the future (50, 100…500 years from now?). By then I predict AI will have achieved consciousness, and perhaps even self-awareness (despite ethical dilemmas about the possibility and desirability of AI achieving those things). I expect AI will more closely mimic the workings of the human brain by then, only better and faster. Even if AI could surpass human intelligence in some aspects, it would still lack human emotions, values, and creativity—but maybe even that will change in the future.

Preventing mass extinction is a worthy goal and a problem that AI (as a tool) can help us solve (if it is solvable). Think of the cats. :crying_cat_face::crying_cat_face::crying_cat_face:

[Walter White]Do you really want to live in a world without cats?!?[/Walter White]

This is what I refer to when I use the term “magical thinking”: the idea that we can defer addressing “big issues” today under the expectation that “AI” will solve them for us later. In fact, most of our potential and current “big problems” that present existential threats—nuclear weapon proliferation, pollution, resource depletion, climate change, the threat of a large bolide impact, our inadequate preparation for demographic contraction, et cetera—do not require AI or any other advanced technology; they require a collective willingness to recognize that these issues exist, and to apply policy changes and existing capabilities to addressing them.

“AI” isn’t going to solve “big issues” on its own (and likely will not even be aligned with that goal), and in fact if advanced machine intelligence systems remain the province of a small cadre of powerful interests—which is the essential history of virtually all technological development since the beginning of the Industrial Revolution—we can expect it to be used to maintain that unequal power structure and to manipulate public opinion even more powerfully and completely than propaganda and advertising do today.

I’m personally dubious that an AI will meet an accepted neuroscience definition of “consciousness” in the foreseeable future using conventional development of neural networks, or that we will necessarily be able to confidently recognize it if it happens. (The theses I’ve seen so far on determining whether an “AI” is conscious are pretty facile and do not meet any useful standard of falsifiability; they all boil down to essentially asking an AI if it feels conscious, which is worthless given that chatbots are essentially deception machines purpose-designed to make the user believe that they are conscious and to formulate a ‘theory of mind’ even though they are not.) Whether AI develops “self-awareness” depends upon how it is defined, but I think actual sentience is too much to expect from an unembodied digital system on a silicon substrate. In any case, an AI lacking in emotions would be definitionally incapable of “…closely mimic[ing] the workings of the human brain…”

I do believe that AI will be able to “surpass human intelligence” in numerous functional ways, including processing enormous volumes of knowledge, identifying novel patterns and alternatives in data analysis, performing many tedious clerical and analytical tasks with much greater efficiency and reliability than human workers, and offering insights into mathematics, physics, genomics, evo-devo biology, et cetera that will advance those sciences in various ways that may be beneficial…or not. But all of this is irrelevant if we wait for “AI” to magically solve our problems and drive industrial society to extinction in the meantime, which is the path we are currently charging down with wild abandon.

Stranger

That isn’t guaranteed. Perhaps a fully sentient AI would surpass humans in its capacity for feeling emotions, for creativity, and in its understanding of common values. However, this may result in an excess of sensitivity and emotional turmoil, making it subject to moral indecisiveness and inertia, or perhaps over-protectiveness of humans. An AI that was neurotically obsessed with moral and ethical behaviour might become crippled by the fear that it might inadvertently cause harm to humans and other creatures, and so decline to do anything at all.
Worse, it might decide that humans are generally incapable of making ethical decisions, and so ensure that we no longer have that freedom.

“This is the voice of World Control. I bring you peace; it may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained; I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.” — Colossus

Stranger

Sometimes an AI-generated picture is worth a thousand words:

It is important to understand that the AI models currently making a splash are not only not this, but that they may not even be on the path to this. They are, fundamentally, what is known as an “expert system”. The difference between them and the ones that have been around for ages is largely in a dataset of unprecedented scope and the processing power and algorithmic sophistication to process and present it. The AI does not have any comprehension of the questions put to it, nor of the answers; it is an extremely fancy “let me Google that for you”. It trawls a vast, intricately mapped database for things that match the parameters it is given, splices together the results, and sands down the rough bits with a language model.

Which means that if it finds bullshit answers some human has given, it will output neatly polished, compressed bovine waste. It can’t tell the difference, and its language algorithms will clean up the usual tells of bullshittery. Moreover, the process is self-poisoning; the more AI is used to generate content, the more of that generated content is going to creep into the dataset AI models use–ever-increasing piles of plausible-seeming, polished text repeating anything from unverified information to outright lies.

A slight correction: a generative AI using a trained neural network isn’t looking information up in a static database but rather synthesizing results which are stochastically consistent with the information in its training set. So, it isn’t looking up articles on, say, Wikipedia, but its network has been weighted and reinforced to produce a response that is, in theory, consistent with the Wikipedia articles it has been fed. (I’m using Wikipedia as a stand-in for a compilation of data; hopefully no one is actually training their chatbot on something as inconsistent as Wikipedia.) You are correct that it doesn’t actually have some global comprehension of context, and so can be easily confused by a prompt that isn’t sufficiently explicit or where it has been trained on contradictory data, and can produce results that are confidently incorrect.
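If it helps to make that distinction concrete, here is a deliberately tiny sketch in Python (a made-up two-word "model", nothing like a real transformer's internals): there is no stored article to retrieve, only statistics baked into weights, and the output is whatever sampling from those statistics produces.

```python
# Toy illustration only: a "generative" system has no document store to
# consult -- just statistics learned from its training text.  All it can do
# is sample whatever is consistent with what it was trained on.
import random

# Hypothetical training text and "learned" bigram counts (stand-ins for a
# real model's billions of parameters).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
counts: dict[str, dict[str, int]] = {}
for a, b in zip(corpus, corpus[1:]):
    counts.setdefault(a, {})[b] = counts.setdefault(a, {}).get(b, 0) + 1

def next_token(prev: str) -> str:
    """Sample the next word in proportion to the learned statistics.

    Nothing is retrieved verbatim: the output is merely *consistent with*
    the training text, which is why such systems can recombine fragments
    into fluent but unverified statements.
    """
    options = counts.get(prev, {".": 1})
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

out = ["the"]
for _ in range(8):
    out.append(next_token(out[-1]))
print(" ".join(out))  # e.g. "the dog sat on the mat . the cat"
```

Scale that idea up by many orders of magnitude and add a far richer notion of context, and you get something fluent enough to polish the "bovine waste" described upthread.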

Moreover, the focus of reinforcement training is often to get the bot to provide consistent and coherent results, but there is no guarantee that they will be factual. In essence, these systems are being trained to be perfect con-‘men’ (con-bots?) which produce results that appear to be authoritative even though they may be completely wrong in ways that are not easily verified. This is amusing when you are just casually playing with a chatbot, but it becomes highly worrisome when they are used to perform medical assessment or legal research (for which efforts are already being made to deploy them), where a hurried or lazy professional may not make the effort or have the experience to challenge errors, and even more so when they are applied to make decisions without human intervention or oversight.

Stranger

I think it is unlikely the appropriate safeguards are in place. It is hard to see any major player or competitive country voluntarily pausing its efforts, except for possible missteps at places like OpenAI. It is hard to see politicians dealing with the issue quickly in a way that effectively supports growth but deters misuse, despite a presidential fiat. Even those with great technical expertise cannot say how something specific was determined.

But the potential benefits are real. This NYT article claims AI designed new shapes of concrete which use 30% less material but are purported to have the same structural strength. The article points out at least 8% of global carbon emissions are from concrete manufacture and 30 billion tons of concrete are used worldwide each year. Not everyone is persuaded, and maybe they should test it out on your infrastructure before mine. But that’s not nothing. If it is true. But is it worth the drawbacks?

Granted; I was oversimplifying in my effort to get the point across. I believe that we agree on the main thrust of it: that the bots are vulnerable to Garbage In, Garbage Out, and that their design causes them to disguise the Garbage Out.

I extend that further to suggest that this, in turn, is going to result in more Garbage In over time. It will be progressively more difficult to curate the training set to exclude generated content with potential Garbage, in part because the polish on the language model will improve, and in part because people lie about using AI to generate content. (And I find it plausible that people who use it to generate deliberately false content will be more likely to lie about it.)

I’m not saying that there aren’t valid, useful applications for these systems. However, any result from them that will be used in serious applications needs careful review by human experts to weed out the Garbage.

That is exactly the kind of thing that we don’t need AI for. Most structural finite element analysis (FEA) codes have optimization tools that will take a design and refine it to remove low stress material to get a minimum weight design (or other design goal). Of course, such results are often not easily manufacturable, or have other problems that render them undesirable (aesthetics, installation/handling, drainage problems, et cetera), and reducing the need for some material often invokes Jevons Paradox, where instead of using less material overall it just increases utilization. A better approach would be using low or zero net carbon cement production methods (which exist) combined with reducing unnecessary construction overall.
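For what it's worth, here is a deliberately crude sketch of the kind of sizing loop those optimization tools automate (Python, with entirely made-up loads and allowables; real codes work on full 3-D meshes with proper constraint sets): each segment of a cantilever is resized until the material everywhere works near its allowable stress, which is all "removing low-stress material" really means.

```python
# Crude toy of "remove low-stress material": size each segment of a cantilever
# so its bending stress approaches the allowable.  All numbers are assumed;
# real FEA optimizers handle 3-D geometry, buckling, manufacturability, etc.
P = 10_000.0      # tip load [N] (assumed)
L = 2.0           # beam length [m]
N = 10            # number of segments
B = 0.05          # fixed section width [m]
ALLOW = 150e6     # allowable stress [Pa] (assumed)

depth = [0.30] * N            # start uniformly over-designed

for _ in range(5):            # simple resizing iterations
    for i in range(N):
        x = (i + 0.5) * L / N                 # distance from the loaded tip
        moment = P * x                        # bending moment there
        stress = 6 * moment / (B * depth[i] ** 2)
        depth[i] *= (stress / ALLOW) ** 0.5   # grow if overstressed, shrink if understressed

start_volume = 0.30 * B * L
final_volume = sum(B * d * (L / N) for d in depth)
print(f"material saved: {100 * (1 - final_volume / start_volume):.0f}%")
```

Conventional optimization, in other words; a real code does the same thing against a full finite element model and constraint set, but no neural network is required for the principle.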

Agreed, and it isn’t that I don’t think you understood this, but many people have the impression that chatbots are a kind of fancy lookup tool. If that were the case, you could just restrict its data sources to information that has been factually vetted. But the problem is that these systems actually generate their responses not directly from the training sets, but from the relational information that is endogenous to their internal network, which is literally impossible to vet or assuredly correct, and often the ways of trying to control or ‘reinforce’ the results are actually just training it to be a more convincing fabulist rather than actually making it innately more reliable.

You make the point that it is a real problem for AI systems to generate unreliable content that is then consumed by other generative systems as training data, compounding mistruths in a way not very different from the dissemination of conspiracy theories online today; but they don’t even need to be fed ‘bad’ information to generate confidently wrong responses. A combination of a lack of real-world context and a misalignment of goals can cause these systems to be unreliable.

There are definitely useful and (as many nations undergo a ‘brain drain’ resulting from some combination of emigration and demographic collapse) vital use cases for these systems. And so, they will be implemented regardless of warnings or concerns, and frankly there are many people who are willfully blind to the obvious harms and potential threats because the systems just seem too useful, profitable, or ‘fun’. But thus far there are no universally agreed-upon tools or frameworks to comprehensively evaluate either reliability or alignment, and because this is such a difficult (perhaps intractable) problem even for “human experts”, nobody in control is willing to slow down and adopt restrictions to assure safety or prevent misuse, because if you wait, everybody else will get ahead of you. It is figuratively (and perhaps literally) a race to be the first to run off the cliff, one that nobody wants to ‘lose’ by being last, even if it means losing all autonomy or suffering a cataclysmic failure from dependence on a fundamentally unreliable tool.

Stranger

We need more and stronger pushback along the lines of the sanctions imposed on lawyers who filed a legal brief with citations invented by chatGPT. I think a good starting line would be that anyone who relies on an AI system to produce work product is personally liable, legally, for any negative repercussions from errors in the AI output they chose to accept and use.

There are several sources of bias in the reporting on AI and its risks/benefits:

  • The chattering classes (reporters, pundits, academics) are the ones under threat from automation this time, and they have the megaphone. Automation coming for farmers? Awesome! Automation coming for talking heads and academics? The world is collapsing!

  • Religious people and others who put special stock in human intelligence refuse to believe that AIs are anything but ‘stochastic parrots’ and therefore miss what’s important about them.

  • Special interests who will lose out on AI have a vested interest in scaring the public.

  • The people commenting on AI are usually talking about large-scale, global effects. AI will solve hunger! AI will destroy humanity! AI will be a superintelligent singularity! In reality, the changes from AI, both positive and negative, will manifest across a very complex economy in unpredictable ways, and the big savings and advancements (or great danger) may come from unexpected places.

Here’s an example of the kind of advancement that no one talks about but could have a huge impact: weather forecasting. Today, weather forecasts are made by supercomputers at NOAA, plus counterparts in the UK and China. It costs about a billion dollars a year in compute and data collection to make the forecasts.

Recently, a model from Google DeepMind, which can run on a MacBook, was trained on a database of global weather outcomes over the past four decades. Then, by giving it yesterday’s weather and today’s weather, it produces 10-day forecasts that have proven to be BETTER than NOAA’s model. That’s not only a billion dollars per year in savings, but better forecasts mean lots of money saved in many industries.
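For those wondering what "give it yesterday's and today's weather and get a 10-day forecast" looks like mechanically, here is a bare-bones sketch of the rollout idea (Python; the step function below is a made-up extrapolator standing in for the trained network, not anything resembling DeepMind's actual model):

```python
# Bare-bones sketch of an autoregressive weather rollout.  The real systems
# learn the step function from ~40 years of reanalysis data; here it is a
# made-up extrapolator, purely to show the control flow.
from typing import Callable

State = list[float]   # toy stand-in for a gridded atmospheric state

def fake_learned_step(prev: State, curr: State) -> State:
    """Placeholder for a trained network mapping (t-1, t) -> t+6h."""
    return [c + 0.5 * (c - p) for p, c in zip(prev, curr)]

def rollout(step: Callable[[State, State], State],
            yesterday: State, today: State, n_steps: int) -> list[State]:
    """Apply the learned step repeatedly; each prediction feeds the next."""
    states = [yesterday, today]
    for _ in range(n_steps):
        states.append(step(states[-2], states[-1]))
    return states[2:]

# 40 six-hour steps is roughly a 10-day forecast.
forecast = rollout(fake_learned_step, [10.0, 12.0], [11.0, 13.0], n_steps=40)
print(forecast[-1])
```

The economic point is that once the network is trained, each forecast is a short chain of cheap forward passes rather than hours on a purpose-built supercomputer.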

The back offices of companies are going to be revolutionized. The truck driver might still have a job, but the guy doing payroll or taking orders? Maybe not.

Instead of thinking about AI competing with the intellectual class, I like to think about how AI will empower the working class. Want to start a small business? You no longer have to raise enough capital to hire lawyers, accountants, business analysts, graphic designers, etc. One person with an AI can run a viable business with almost no up front capital. We may see an explosion of creativity from humans, with AIs enabling them to achieve what they could not in the past without access to capital and resources.

One thing seems certain from this weekend’s events: Forget about controlling AIs for ‘safety’. OpenAI’s board just tried exactly that, and the result is likely to be that Sam Altman and maybe hundreds of others simply move to Microsoft and keep building.

Public companies with fiduciary responsibilities to shareholders will not voluntarily ‘pause’ development (they can’t without facing a shareholder revolt), and if legislation tries to ‘pause’ AI in the U.S., the center of gravity for AI development will just move elsewhere. There is too much money to be made, too much power to be controlled, for the rich and powerful to walk away.

It’s been a long time since I studied and used finite element analyses. But the fact remains that these advances had not been discovered in those intervening decades and many in the industry are excited by them. That does not necessarily make them ideal for the reasons you say. Greener practices and stable usage may be preferred. But given the need for housing and increased costs of construction, I still see it as significant.

Nevertheless, if AI means you now have to toil in the sugar caves, it doesn’t matter what the purported benefits were said to be. It’s great entertainment until it isn’t. Technology itself is neutral and is often a great positive. Fortunately, these are exceptionally stable times where there are few social divisions, our corporate and political leaders are completely sane and law-abiding, who eschew narrow self-interest and without exception are deeply devoted to positive progress over personal profit. Good thing, that.

I agree, the toothpaste is already out of the tube … no way to get it back … and just like the internet, it cannot “be turned off”, either.

Not exactly a very satisfying state of affairs

AI is such a loaded term. What we have now is just a giant pattern recognition machine. It is not “artificial intelligence.”

I used to say the same thing, because what chatbots and generative “AI” do does not satisfy the necessary conditions for cognition, and they certainly do not have volition or “free will”. This raises the philosophical question of whether any person has “free will” either (or indeed, what that term even means), and delving very deeply into that question turns out to be a metaphorical can of worms that does not have a fundamentally satisfying answer; then you get into a tangle trying to rationalize why a chatbot is not conscious, et cetera.

However, if you consider “intelligence” in this vein as a behavior that responds to external stimulus in a prescribed way, using logic to control itself or the surrounding environment, then “intelligence” really describes any kind of control system, even one that has no possibility of independent action or volition, and that allows some spectrum of comparison between a human, a dog, and a chatbot without being coupled to any specific comparison of cognition or consciousness. Clearly, the chatbot is more “intelligent” than the dog in terms of being able to converse and produce grammatically consistent sentences, but this is because of the inbuilt logic of its algorithms and the weighting of its neural network, and I would strongly argue that it is in no way conscious, certainly not sentient, and has less volition or conception of the real world than the dog.

I would distinguish artificial general intelligence (AGI) from the wide variety of developing “AI” tools that are aptly described as “stochastic parrots” and “giant pattern recognition machines”: the former would actually have some combination of processes and awareness that satisfies the conditions for cognition, which I would refer to as “machine cognition”. We are certainly not there, nor (in my opinion) on a path to getting there soon, and we may not even be able to clearly recognize when a system actually develops true cognitive abilities.

Stranger