Doing a pre-generative-AI Save Game

I’m not sure in which forum to post this.

With the emergence of generative AI in text writing, code writing and visual art, there are already cases where it’s not possible to tell what was generated by a model vs by a human; it seems obvious that over the next few years, it will become more and more difficult to tell this difference.

Many people, myself included, consider that this will devalue human-generated art (including code), will have unpredictable effects on the world economy and geopolitics, and will eventually make most humans more stupid and/or less happy and/or poorer.

Let’s assume that there is value in preserving the human-vs-AI distinction.

We’re seeing calls for AI-generated text and art to be clearly labeled as such, via some sort of watermark, so that humans (high school teachers, decision makers, consumers) can keep making the distinction and preserve the value of human work.

I don’t think that can be done realistically, because many people have a vested interest in making the AI’s work pass for their own. This can be as easy as having ChatGPT write a text on a subject, then rephrasing parts of it and typing it in a new document. And, of course, we’re surely heading for a world where most content will be computer-generated.

But would it be possible, instead, to somehow label the parts that were done by humans?

For instance, could we take and preserve a snapshot (a Save Game) reflecting the corpus of human knowledge and creativity as of, say, 2021? How would you suggest doing this in practice?

ETA: Yes, I know some wiseperson will ask ChatGPT about this, and then paste its response here.

It’s possible now to do a search and see if some document was published on the internet prior to 2020 and you can be reasonably sure that if it’s of decent quality, it was probably created by a human. If you don’t find it online or published on dead trees, clearly and credibly marked as to its date of origin, all bets are off. Going forward, it’s going to be very hard to know how much automated help a person had creating that text. I don’t think any set of “watermarks” will last long with so many people having a strong interest in defeating them.

Unfortunately, even if you are 100% absolutely sure a document was written by a human, that in no way implies it is not bullshit. In many cases, quite the contrary. There is no alternative to using your brain, and your labels (NB if some document is labelled as authored by a human, why would you not assume the chances that that is a blatant lie are sky-high?) will not help.

That’s basically what I’m asking. Is there a technical solution to at least this problem?

Can we make a culture-wide NFT? Make a printout of Wikipedia engraved in titanium encased in sapphire crystal? Certify post-2021 works only if they are written in blood?
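For what it's worth, the closest existing technical answer is probably trusted timestamping: hash every document in the snapshot, combine the per-document hashes into a single digest, and publish that one digest somewhere hard to forge retroactively (a newspaper archive, a public ledger, your titanium plate). That proves a document existed by the snapshot date, not that a human wrote it, but it's exactly the "Save Game" property. A minimal Python sketch (the function name and sample documents are illustrative, not part of any real archive):

```python
import hashlib

def corpus_fingerprint(documents):
    """Hash each document, sort the digests so ordering doesn't matter,
    then hash their concatenation into one corpus-wide digest."""
    digests = sorted(hashlib.sha256(d).hexdigest() for d in documents)
    return hashlib.sha256("".join(digests).encode("ascii")).hexdigest()

# Two illustrative "pre-AI" documents (placeholders, not real corpus data):
snapshot = [b"A human-written essay, archived 2021.", b"Another pre-AI document."]
# A single 64-character hex digest you could publish somewhere tamper-evident:
print(corpus_fingerprint(snapshot))
```

To prove membership later you'd also need the archived list of per-document digests; a Merkle tree (as used in Certificate Transparency logs) makes those proofs compact, but the flat version above shows the idea.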

We do not have a choice, we will have to allow the system to adapt.

I don’t believe that AI will devalue human writing and visual art, although it will likely change it, and arguably for the better. Just focusing for the moment on text generation, offhand I see writing falling into three general categories:

  1. Writing about and synthesizing existing information, meaning to summarize, integrate information from multiple sources, and draw conclusions. A student essay or a policy institute report is an example of this kind of writing.

  2. Reporting new information. News writing and scientific papers on new discoveries would be examples.

  3. Creative writing. This would be fiction and poetry.

In any foreseeable future, AI will be able to do some (not all) of item (1). It won’t do (2) because that requires humans out in the world observing and discovering. And I’d argue that it won’t do (3) except in the weak sense of imitating existing distinctive works, but not in any genuinely creative sense and certainly not in the sense of creating new styles. We may have to revisit this when we have sentient AI, but meanwhile true creativity will remain the province of humans.

This sounds to me like a winning proposition that should be embraced rather than feared. We can use AI as a significant helper in deeply mining information and helping us write more informative reports than we could likely do on our own and helping us gain new insights, while at the same time even doing some of the writing grunt work for us. The main problem I see with this is the opportunity for students to cheat instead of learning the important skills of reading comprehension and good writing. That’s something we’re going to have to solve. Otherwise I see this capability as a boon to humankind.

I made an analogy in the other thread that’s worth repeating. In the early days of painting, the art was entirely devoted to realistic portrayals of whatever the subject was – landscapes, objects, portraits of people, whatever. The skill of the artist was largely judged by the realism of the artwork. Then photography came along. It changed everything, but didn’t destroy art. Art moved on to phases like Impressionism, Cubism, and abstraction. Meanwhile pictures of whatever interested them, including their own portraits, became available to everyone. Photography inspired progressive advancements for those engaged in the visual arts, and was democratizing for the rest of us. I think it’s an apt analogy for what AI will do for writing.

Sure, there are potential downsides to AI, but it seems to me that a lot of what you are saying is just mood affiliation. I tend to be an optimist about AI, and all I can see are possibilities.

For instance, a lot of people seem to believe that AI will somehow increase the gap between rich and poor, because the rich will own the AIs and the companies and no longer need the workers. But there’s another way to frame this: AIs are going to free the workers from needing the corporations and the rich will lose power and control.

Think about starting a small business. Today you really can’t unless you can afford a lawyer, a tax accountant, numerous inspectors and professionals to advise you on regulations and code, and much more. This costs money. It sets the bar high on entrepreneurship. It freezes out the poor from the capitalist economy.

In the near future, everyone who needs it is going to have an army of white-collar labor at their disposal to take the place of the middle managers, the gatekeepers, the bosses. AI will allow us to self-organize, find our own markets, make our products known, write our documentation and code, and do all the other stuff that now puts control of corporations in the hands of the rich.

The effect on productivity will be immense. Imagine software companies the size of Microsoft being run by 100 engineers and a lot of AI. Small teams are much more productive than large ones, but large-scale projects accrue people and become less efficient. With AI, two people with a vision may be able to run a billion-dollar company. And start with nothing.

Then there’s education. Education is in dire straits, and AI might save it. Those poor inner-city kids who want to learn but are stuck in horrific schools in violent neighborhoods can learn anything they want at home, with the best tutor ever. And so can kids in villages in Africa. A future charity might be one that raises money to buy AI tokens for Africa.

Injecting AI into the market should make it more efficient and raise profits. Analysis will be better, project management better, discovery is going to become a lot faster.

I expect the rate of invention to skyrocket as creative people around the world will now be more effective in researching, brainstorming, designing and testing new things.

Science is about to accelerate. Wait until we turn AIs loose on the terabytes of data that will be coming from the Vera Rubin Observatory. SETI already ran an AI over data it had analyzed and rejected, and it found 8 candidate signals. How much are we going to learn when astronomers really figure out how to apply AIs to all the data we have already collected and the explosion of new data yet to come?

AI could destroy us. Or it could usher in the 5th industrial revolution. Make your bets.

So, you have no understanding of education?

Stranger

I’ve got plenty, thanks. If you prefer, look at it another way: Poor kids lack a lot of resources for education that other kids get. AIs can provide a lot of those resources. I wouldn’t expect kids in Africa to sit and learn on their own, but a school in Africa empowered by AI can provide the resources of a good school to a small village, including expert tutoring in almost any subject. Combine that with satellite internet, and even people in poor villages can start companies and compete in the global marketplace. This is already happening with services like Fiverr, but it will be expanded to all kinds of ventures.

Here’s another positive take: AI is basically going to shift a lot of energy consumption from physical manipulation to information creation. AIs use gobs of energy, but they also save energy by reducing the amount of physical labor it takes to do things. And data centers are much easier to power than huge industrial infrastructures. They can be located in areas of abundant natural energy, as we’re already doing in Iceland and other places with cheap power. We could build remote data centers running on nuclear power, sited far from populations. AI might help us with global warming in many other ways as well. A lot of processes are going to become more efficient.

No, I don’t think you do have an understanding of the purpose of education, particularly childhood education. You have the technocratic view that education is a process to generate workers with some threshold of intellectual capability, and while that is a benefit of comprehensive education, the actual purpose is as much socialization: instilling a sense of ethics, empathy, and civic responsibility in students to prepare them to become not just members of a workforce but productive members of the society in which they live. Using ‘artificial intelligence’ to tutor students in particular topics to make better use of a teacher’s divided time is one thing, but the notion of replacing human teachers entirely with “AI”—even if it could be trusted to reliably provide useful and truthful information, which is in question—is far from the egalitarian vision you describe. It would actually create even greater stratification between the haves (those who can afford human tutors or who have access to mentors) and the have-nots, as well as devaluing the teaching profession as a whole. It is just another example of trying to use machine cognition as a wholly inadequate replacement for human intellectual labor without consideration for the broader impact.

In general, the utopic view of how much AI is going to “free the workers from needing the corporations“, “allow us to self-organize, find our own markets, make our products known, write our documentation and code, and all the other stuff that put control of corporations in the hands of the rich“, give everyone “an army of white-collar labor at their disposal to take the place of the middle managers, the gatekeepers, the bosses”, et cetera, is only vaguely conceived and ill-considered, to say nothing of the impact that this presumably rapid displacement of white-collar intellectual workers would have on the consumer side of the economy as their jobs and even whole professions are subsumed. The notion that all of these workers are just going to become overnight entrepreneurs running their own hypercorporations driven almost entirely by ‘bots (providing what goods and services, and to whom? how will this ‘productivity’ be realized in net economic value? what becomes of those who don’t have access to AI systems or the skills to apply them effectively?) is so farcical that it would be merely risible were it not being seriously propagated by proponents in the tech industry and by powerful legislative lobbying interests.

We don’t have to guess at how this will play out because we already have the example of the Internet; in the ‘Nineties the technocratic crowd was assuring us that access to information free of the traditional venues of media outlets, universities, libraries, et cetera would make us all more free and democratic, as well as open up a vast marketplace of intellectual and economic opportunities. To the extent that this has come to pass, it completely failed to anticipate how that ‘marketplace’ would come to be dominated intellectually by conspiranoia, false narratives about ‘pedophile Democrats’, and social media platforms specifically engineered to engender outrage and divisiveness, and on the economic front by a few major players who dominate most online commerce, have fed globalization, and have undermined actual entrepreneurial and local small businesses, to the point that you either play on one of those major platforms like Amazon or AliBaba, or you resign yourself to the constant hustle of trying to find and promote some particular niche. The world-spanning Internet has been a boon in many ways, but it is not without significant problems that enthusiasts try to avoid discussing and that legislators don’t even understand well enough to attempt to regulate, even if it were within their very limited abilities to do so.

It is clear that these ‘AI’ capabilities are already being used in an effort to replace creative and integrative labor with a synthesis of data and promotion. And therein the danger lies; not that these systems are going to take over and launch a spree of murderbots upon us all, but rather that major interests will adopt them without really understanding the limitations or applying well-considered restraint, and then we will spontaneously discover the actual deficiencies of these systems. And it is clear that a lot of people promoting their broad adoption, including the respondent here, have no actual understanding of what these ‘artificial intelligence’ systems do or how they function. Chatbots like ChatGPT are not knowledge creators or even knowledge aggregators; they are instead consumers of data of which they have no real contextual comprehension, and they use statistical predictors to assemble responses to prompts that pull off the trick of mostly appearing to understand without actually bringing any genuine novelty or inspiration. Indeed, they often fail in such comically ludicrous ways that it is clear they are less comprehending of the world around them than a three-year-old, which should be expected because their entire ‘world’, such as it is, is just the data that is fed to them. They don’t follow any system of ethics or civic responsibility because they don’t have one. When we do develop actual superintelligent systems that perform real acts of cognition instead of statistical trickery powered by enormous computational capacity, then I’ll be at least more impressed…and more concerned about just how much intellectual autonomy we will hand over as a society for the illusory ‘freedom’ of not having to think too hard about anything.

This is not to say that ‘AI’ (or as I prefer it, ‘machine cognition’ and ‘adaptive analytics’, depending on the application) cannot offer significant benefits. Certainly when dealing with ‘Big Data’ problems, using tools that can process and distill down far more data than any team of people could absorb in a lifetime is imperative, and in many ways these tools offer the ability to offload the drudgery of clerical-type tasks and allow knowledge workers to use a greater degree of their intellectual potential in a useful manner. But so many of the supposed applications are the equivalent of a college student using ChatGPT to write an essay to turn in as their own work, without consideration that the point isn’t the product but the process; having a ‘bot do all of the ‘hard’ work of researching, congealing an argument, and parsing it into a well-reasoned position deprives them of the essential opportunity to develop those creative and rational capabilities themselves. A forklift can easily lift several thousand pounds in one move, which provides exactly no fitness benefit to the operator who does so in lieu of doing Olympic squats or swinging kettlebells, and having ‘bots do the work of thinking or teaching leaves people free to…stare at a screen, scrolling through ‘tweets’ which were selected by an algorithm (another ‘artificial intelligence’ of dubious merit) to make sure that they keep scrolling endlessly.

Color me unimpressed with the assertions that ‘AI’ will benefit us all in some idealistic fashion, or level the economic playing field, or eliminate inequality in education or other social venues. It hasn’t happened, it isn’t happening now, and there is no clear reason to expect this to suddenly become a reality in the foreseeable future, especially given how much the near-term capabilities of language models and similar data-based algorithmic prediction machines are being overestimated.

Stranger

I believe we are on the verge of a major paradigm shift in the functioning and hierarchical structure of human society, due to the continued development of AI and robotics.

AI will do many, if not most, jobs better than humans, including artistic creation. I foresee much anxiety and many mistakes made as we make the transition into AI control, but ultimately, I believe human civilization will be better off with AI taking over more and more jobs.

I believe we will be better off simply because AI will do a superior job in the human jobs it replaces. It may not be far superior in the beginning, but it certainly will be down the road a bit.

Superior “brain-power” will be most beneficial to society in the most critical jobs. For example, I will feel more secure putting the problem of climate change into the hands of advanced AI. We’re heading toward a human-caused extinction event; we need something smarter than humans to get us out of it.

I’ll feel safer putting the justice system into the hands of advanced AI, assuming it’s programmed to be completely free of bias. I’ll be fine with advanced AI robo-judges and robo-attorneys (as long as their “asshole chip” is crippled :smiley:).

I’ll feel safer on the streets with advanced robo-cops programmed not to be trigger-happy.

I’ll feel my health is in better hands with advanced medical AI that can instantaneously access and analyze the complete list of human pathologies to make accurate diagnoses and weigh all available treatments to formulate the best treatment plans.

I’ll feel safer on the roadways with vehicles driven by advanced versions of automotive AI.

I look forward to the artistic creations of advanced AI. I anticipate a day when Beethoven’s 9th Symphony is considered amateurish by comparison.

I do predict a big problem with people losing many jobs to AI in the near future. However, I also believe things will eventually shake out for the best after the “growing pains” transition. In the meantime, using early version AI as an assistant, rather than a replacement is the least disruptive way to proceed.

Ultimately, after the transition, I believe general organoid intelligence (OI) will be the name of the game, and many of us will opt for a hybrid brain/OI mind. Call us cyborgs if you must—that is a pretty cool name.

So, the future Bobs, Teds, Carols, and Alices will be able to take pride in the jobs they do. They just won’t be completely human. I’m OK with that.

Are you crazy? You want to GIVE the AI Overlords a reason to collect human blood? You’re just hastening their inevitable betrayal.

I, for one, welcome our new weather control overlords.