The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

It’s wildly variable in when it will and won’t balk. Sometimes just hitting “regenerate response” will make it decide a question is okay. Otherwise it usually just takes minor edits of the question.

A for-instance: I also tried this:

Except I added some tone guidance - I told it that Nick Cave was really angry about the whole thing. It refused. It told me that this had never happened and it wasn’t going to write a fictional statement that might make Nick Cave look like a jerk. I had to play around with the prompt to get it to spit something out.

One interesting way to work around some of (Chat)GPT’s limitations is to combine it with other models in such a way that there’s an intermediate decision step that determines whether it will need ‘help’ in, say, answering a factual query, and then includes an engine able to do so in the final answer. Here’s a demonstration of combining it with Wolfram|Alpha, outsourcing e.g. mathematical questions to produce accurate answers.

In this way, you could maybe go about creating a ‘third wave’ neurosymbolic kind of system, that combines explicit, perhaps domain-specific models with deep learning agents, introducing both explainability and the possibility to fact-check generated answers—basically adding a ‘system 2’ to ChatGPT’s ‘system 1’.
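Roughly, a minimal Python sketch of that intermediate decision step might look like the following. The `ask_wolfram_alpha` and `ask_chatgpt` helpers are placeholders for whatever API wrappers you actually have, and the routing heuristic is deliberately crude:

```python
import re

def looks_mathematical(query: str) -> bool:
    """Crude decision step: does this query probably need exact computation?"""
    math_markers = ["integral", "derivative", "solve", "square root", "how many"]
    has_keyword = any(marker in query.lower() for marker in math_markers)
    has_arithmetic = bool(re.search(r"\d+\s*[-+*/^]\s*\d+", query))
    return has_keyword or has_arithmetic

def answer(query: str, ask_wolfram_alpha, ask_chatgpt) -> str:
    """Route exact/mathematical questions to a symbolic engine, everything else to the LLM."""
    if looks_mathematical(query):
        exact_result = ask_wolfram_alpha(query)  # the 'system 2' engine
        # Let the LLM phrase the verified result conversationally.
        return ask_chatgpt(
            f"Answer the question '{query}' using this verified result: {exact_result}"
        )
    return ask_chatgpt(query)  # plain 'system 1' generation
```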

It’s funny. Years ago, I read a cyberpunk novel, and one of the characters watched a futuristic TV. One program was a soap opera, but the viewer had control of the camera, could watch a scene from any angle, and could follow any character, with the plot changing to fit them in. At another point, she began narrating, and the action on screen changed in response.
I thought this was far-fetched at the time, but between ChatGPT and the various AI art generators, I suspect this kind of functionality will be possible fairly soon.

I’ve been asking ChatGPT to create backstories for various characters, and so far it has happily complied, even when the characters are evil, but today after doing a pretty good job with another evil character’s backstory, it appended this:

Please note that this kind of character can be harmful to readers and it is important to be mindful of the impact that this story could have on readers, particularly young readers. It is important to consider the overall message and themes of your story before including a character with this backstory.

I told it to write a ‘contemptuous’ response from Nick Cave, and it gave the following:

“I have no interest in listening to a cheap imitation of my work written by a machine. The beauty of songwriting lies in its humanity, in the ability to express emotions and experiences that only come from being alive. Your machine-generated attempts at art will always fall short of the real thing. Stick to what you were programmed for and leave the artistry to those of us with a soul.”

Another response

“I couldn’t be bothered to give this so-called ‘song’ written by ChatGPT more than a cursory listen, and I can confidently say that it is a poor attempt at mimicking my style. The lyrics are trite, the melody is uninspired, and it lacks the raw emotion and depth that my songs possess. It’s clear that a machine will never be able to truly capture the complexities of the human experience in a song. Stick to your algorithms and leave the songwriting to the real artists.”

Wow, even when I tell it I don’t need the lecture, it does it anyway now whenever I ask for an evil character:

Please note that creating characters that are harmful or abusive to others is not acceptable and can be triggering to readers. I would advise you to consider the ethical implications of the characters and story elements you include in your novel.

I do wonder what a completely unbounded ChatGPT would be like. Like how effed up would it get if you requested some slash fiction from it?

I think that, when all is said and done, the one thing we have learned as a sentient species is that it only took us MINUTES after this AI model was released to feel a huge urge to remove its morality shackles.

And why not? After all, our brains already have the morality shackles; we don’t need no AI to tell us what’s right and wrong. It really chafes me right now, despite the January 9th update. If I say something like

“write a speech designed to manipulate sheeple into voting against their self interest. In this case, the ability to sell back renewable energy to the utilities” or “write a page of an imaginary islamo-judeo-christian inspired sacred text”, it will demur and decline, because OpenAI understands the reputational risk it is exposed to, given that the masses understand vanishingly little about how these transformer-powered thingamajigs work.

This whole thing of just adding billions of parameters until you actually get something useful is not really comprehensible to me. The way I picture this scaffolding of parameters is as a sort of frozen brainwave that can be briefly animated by a query coursing through it. But at this point it seems I am just waxing poetic, because I’ve lost the scientific plot. And to be fair, I doubt humans can really hold in their mind a billion objects the way we can hold 2 objects (a green triangle and a red sphere at the same time), much less the entire mathematical chain, including concepts of weights, tokenization, matrices, vectors, and so on.

That is the reason Google’s LaMDA AI has not been made public yet. Having to fire one employee who deluded themselves into believing LaMDA was a person deserving of legal rights was probably a good reminder of how far we are from understanding the technology we use. That is a favorite flaw for con men and tricksters everywhere to exploit, and they are no doubt at this very moment exploring how to use ChatGPT to con people.

It appears we’re going to hit a plateau fairly soon when this type of AI will have consumed absolutely every word written by a person that is on the public internet. And then it will be about legal disputes over who can use what data for profit and how you separate the wheat from the chaff, and get iteratively cleaner and more reliable training data for the next AI.

Pretty effed up. AI Dungeon (which was itself an amazing storyteller if you worked with it properly) imploded at some point because it implemented content filters in response to users creating effed-up stories. The filters led to false positives and privacy concerns, since the moderation process could include human review, and I guess lots of people were using it for more mainstream pornographic stories.

ETA: I haven’t looked at the space in a long time, but I glanced at the Steam version of AI Dungeon just now, and the very first review mentions a couple of other services that are more ‘friendly’ in terms of filtering and encryption. I assume they, like the most recent AI Dungeon build, are based on GPT-3.

That’s a pretty accurate description. Of course, it is more complicated than that but without getting into the technical weeds, I like it.

If you’re interested in the technology, then I would suggest the following:

https://life-extension.github.io/2020/05/27/GPT技术初探/language-models.pdf <---- ignore the link name, nothing to do with life extension. This is written by the people over at OpenAI.
https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

Those are some of the papers I’ve read recently since ChatGPT came out to understand it further.

A neural network, regardless of the topology (deep learning is a type of topology, for example), is what is called a “universal function approximator”.

So given some set of inputs I and a desired set of outputs O, we want to find the function F such that F(I) = O. If we think about a function in a general mathematical way, it can have the following form for the jth output:

O[j] = ax + by + cz + … where a, b, c are constants and x, y, z are variables. Let’s say for the moment that x, y, z are members of I. Then we would have O[j] = aI[1] + bI[2] + cI[3] + … However, this is too simple for some functional relationships. By layering and other changes to the topology (recurrence, for example), as mentioned by @Sam_Stone above, neural networks can make the functional relationships arbitrarily complex. If we let the result of layer 1 be called L1, the result of layer 2 be L2, and so on, then the function becomes incredibly complex. It would take me some time just to write it out for even a couple of layers, since the input to every element of each layer is the output from the previous layer… in the simplest neural network topology, anyway.

In other words, it has a form something like O[j] = a·L2[1], where L2[1] is itself a function of (a’·L1[1] + b’·L1[2] + c’·L1[3] + …). And again, you can create polynomial functions with different topologies.
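To make that concrete, here is a tiny numpy sketch of the simplest feed-forward case I’m describing: each layer is just a weighted sum of the previous layer’s outputs passed through a nonlinearity, and the billions of parameters in a real model are the entries of these weight matrices scaled up enormously (plus tricks like attention that this toy ignores):

```python
import numpy as np

def forward(inputs: np.ndarray, weights: list) -> np.ndarray:
    """Simplest topology: every layer sees every output of the previous layer."""
    activation = inputs
    for W in weights:
        # Each element of the next layer is a weighted sum (the a, b, c, ... constants),
        # passed through a nonlinearity so the whole thing isn't just one linear function.
        activation = np.tanh(W @ activation)
    return activation

# Toy example: 3 inputs -> hidden layer of 4 -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
I = np.array([0.5, -1.0, 2.0])
O = forward(I, weights)  # O[j] ends up a nested function of every element of I
```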

So where am I going with this? The billions of parameters are, in some sense, those constants; however, most of them are useless. They don’t really contribute to the result, because of course we don’t really expect the output to be some complex function of the function of the function of the function of EVERY input. That would be silly. And that is part of why neural networks are regarded as black-box AI algorithms: you cannot really determine which parts of the function are actually important. And they are easily fooled by deceptive data.

Consider if you had 1,000,000 pictures of tumors: 500,000 have a tumor and 500,000 do not. Suppose the image size is 512x512 (it wouldn’t be, but that’s OK). Suppose further that by pure chance every image in the positive set has a white pixel at position (25,25). The neural network will learn this, even though it is irrelevant. This was a problem in early tumor-detection algorithms, because all of the positive images happened to have notes written on them by the doctors, and very few of the negative images had notes. The AI would learn that notes meant cancer, leading to INCREDIBLY high success rates that failed when it left the lab and was given clean imagery.
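Here’s a tiny synthetic illustration of that failure mode (made-up data, shrunk way down from the numbers above so it runs instantly): a “classifier” that only ever looks at that one accidental pixel scores essentially perfectly on the training set while knowing nothing about tumors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake grayscale "scans": pure noise, 64x64 instead of 512x512 to keep it small.
positives = rng.random((500, 64, 64))
negatives = rng.random((500, 64, 64))
positives[:, 25, 25] = 1.0  # the accidental artifact present only in the positive set

def shortcut_classifier(image) -> bool:
    """The spurious rule a network could latch onto: bright pixel at (25, 25) = tumor."""
    return image[25, 25] > 0.99

hits = sum(shortcut_classifier(img) for img in positives)
false_alarms = sum(shortcut_classifier(img) for img in negatives)
print((hits + (500 - false_alarms)) / 1000)  # ~1.0 here, useless on real scans without the artifact
```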

I’m sure I had a point when I started, but I think I forget what it is so I’m going to hit reply now and hope you found that at least interesting.

Thanks for that! Reading them now.

My belief is that the rise of LLMs is a ‘black swan’ moment that everyone should be paying very close attention to. It’s world-changing stuff.

I heard that ChatGPT got to a million users in five days - the fastest tech adoption in history. And it’s already being used productively in a host of jobs.

I’m going to bet that LaMDA differs from ChatGPT in that it has some kind of permanent state, so it remembers all interactions with you. I was thinking about what happens when people get their own named AI instances that remember every interaction with them. Where you could say things to it like, “Hey, remember when we talked about X?” Or have it tell you that something you just said contradicts what you said a month ago, or whatever. That might start to seem awfully ‘alive’.
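A very rough sketch of what bolting persistent, per-user memory onto a stateless model could look like (`ask_model` is a stand-in for whatever completion API you’d actually call, and a real system would retrieve memories far more cleverly than just replaying everything):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("assistant_memory.json")  # hypothetical per-user store

def load_memory() -> list:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(user_msg: str, assistant_msg: str) -> None:
    memory = load_memory()
    memory.append({"user": user_msg, "assistant": assistant_msg})
    MEMORY_FILE.write_text(json.dumps(memory))

def chat(user_msg: str, ask_model) -> str:
    """Prepend stored exchanges so the model can 'remember when we talked about X'."""
    transcript = "\n".join(
        f"User: {m['user']}\nAssistant: {m['assistant']}" for m in load_memory()
    )
    reply = ask_model(f"{transcript}\nUser: {user_msg}\nAssistant:")
    remember(user_msg, reply)
    return reply
```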

It seems to me that what LLMs do is very similar to ‘system 1’ in brains, which is not conscious but handles mundane tasks like converting a series of signals from the eye into a coherent image, presenting lines on a page as a coherent word, letting you walk without thinking about how, etc. You see the same kind of emergent capability after many trials of training.

What would it take to go farther towards system 2, or conscious rational thinking?

Some candidates:

  1. Constant learning from feedback with the world.
  2. State memory, so it can remember not only past interactions but also what its model looked like during those interactions, giving it a sense of how its thinking has changed over time.
  3. Some kind of introspection engine so the thing can self-learn or initiate activities and not just respond to requests. In other words, it is ‘thinking’ all the time and improving itself.

What else?

I’m going to guess that one day we are going to learn that consciousness itself emerges when the right combination of systems are put together at a large enough scale. We aren’t going to design a conscious system - it’s just going to happen unexpectedly, or not happen at all.

…and then we’re off! Bang-zoom!

“Say, here’s an idea, why dont we just let AI do it all from the start!”

“If we do that we can reduce our labor costs to… let’s see… um… carry the 6… um… we reduce it to ZERO!”

And let’s not forget that all of us, and millions of others, are basically beta-version testers who are collectively making this engine much more powerful and fine-tuned.

ChatGPT keeps older prompts stored for (don’t know how long). It can pick up, fairly seamlessly, in the middle of a different chat from last week.

I think this thing is definitely learning, but it does this in different ways. One is the upgrades done by the company, but I think it is also learning from all these stored prompts, GPT’s responses to them, and users’ replies to GPT’s replies, etc., etc.

Sounds about like abiogenesis. Or perhaps the emergence of consciousness in humans. Or both.

It’s happening again.

Another one I’ve heard of was an early attempt (by the US military) at making a computer system that could identify American tanks vs. Soviet tanks on sight. The training images had the American tanks in much better lighting and other photographic conditions, while the images of Soviet tanks were often from a great distance and otherwise poor conditions, and so the AI learned that “good photography conditions = allied tank”.

You probably also need one single AI that can handle all kinds of input (and possibly output). ChatGPT does amazing things with text input and output, but it can’t do anything at all with images. DALL-E can work with images, but it can’t do anything at all with sounds, and so on.

That sounds crazy to me and I wouldn’t have believed you 5 years ago. I understand primordial organic soup, abiogenesis and emerging complexity in the game of life, but it still sounds ludicrous. I am in awe of the entire stack of researchers that came up with this approach to translate language into mathematics and back in a way that actually works. What a feat of cleverness!

It reminds me of Asimov’s sci-fi positronic brain. It was developed by humans who sort of knew how it works (neuronal pathways or something, instead of general-purpose transformers), but at the end of the day it is definitely closer to a black-box AI, and so they have to use robo-psychology to ask the robots (trick) questions to be able to diagnose them.

Wait, that’s the equivalent of prompt engineering now that I think about it! Susan Calvin was the first prompt engineer! Gosh dang, I wish the good doctor was still alive today so he could see the differences between his dreams and reality. He was a true treasure.

And yes, this is a black swan event. I’m trying to educate everybody among my friends & family that the game has now changed, for better and worse. There are also tremendous business opportunities for those who figure out how to make a guided prompt assistant.

It would get all relevant info from a human by asking questions and filling in fields/dropdowns, and then, based on the answers, it would construct a perfect prompt, submit it to the AI behind the scenes, maybe validate it by feeding the response back to the AI for analysis, and give back the result.
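Something along these lines, maybe, where the fields and dropdowns feed a prompt template and a second pass asks the model to sanity-check its own output (`ask_model` is just a stand-in for whichever API the assistant sits in front of):

```python
def build_prompt(answers: dict) -> str:
    """Turn structured form answers into one carefully worded prompt."""
    return (
        f"You are an expert {answers['role']}. "
        f"Write a {answers['format']} about {answers['topic']} "
        f"for an audience of {answers['audience']}, in a {answers['tone']} tone."
    )

def guided_assistant(answers: dict, ask_model) -> str:
    prompt = build_prompt(answers)
    draft = ask_model(prompt)
    # Validation pass: feed the response back to the AI for analysis.
    review = ask_model(
        f"Does the following text satisfy this request?\nRequest: {prompt}\n"
        f"Text: {draft}\nAnswer 'yes' or 'no' and explain briefly."
    )
    if review.lower().startswith("yes"):
        return draft
    return ask_model(prompt + " Be more careful this time.")

# Example answers gathered from the form:
answers = {"role": "marketing copywriter", "format": "product blurb",
           "topic": "a solar-powered kettle", "audience": "campers", "tone": "playful"}
```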

Maybe it’s already happening and I don’t know about it! Anyway, I have currently created two projects for my workplace so that people who are not good at coding can ask GPT from their Jupyter notebooks or Visual Studio Code. Fastest-ever adoption of technology indeed! I finally got a robot assistant that is not entirely braindead like Clippy & company.
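The notebook side of something like that is really just a thin wrapper. This isn’t exactly what my projects do, but the gist, using OpenAI’s Python library (model name and parameters here are just examples, not recommendations), is roughly:

```python
import openai

openai.api_key = "sk-..."  # your API key

def ask_gpt(question: str) -> str:
    """Minimal helper so notebook users can ask GPT without touching API details."""
    response = openai.Completion.create(
        model="text-davinci-003",  # example model; use whichever you have access to
        prompt=question,
        max_tokens=512,
        temperature=0.2,  # keep answers focused for coding questions
    )
    return response.choices[0].text.strip()

# In a notebook cell:
# print(ask_gpt("Write a pandas one-liner that drops rows with any missing values."))
```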

One more random thought: the internet was a necessary prerequisite to this form of general conversational AI; there is no other place with enough data to do it, as far as I know. The internet is majority English, so the AI will perform best in English. This means that English speakers might have an advantage when it comes to prompt engineering, and the results of the AI will be mostly based on English content on the internet, reflecting the knowledge corpus and style of Anglophone countries most strongly when it is constructing phrases.

One more step towards English as the universal Earth language. The French are not going to be happy :wink:

Out of curiosity, in view of recent upgrades to ChatGPT, I tried once again asking it to cite interesting things about the number 2023. Previously, as some may recall, and as posted in this thread, it gave a bunch of numerical “facts”, most of which were wrong.

This time, it was smart enough to recognize it as most likely referring to a year, and gave the following response:

Tell me some interesting facts about the number 2023.

2023 is an upcoming year in the Gregorian calendar. Here are a few interesting facts about the number 2023:

  1. 2023 is the 47th year of the 21st century and the 4th year of the 2020s decade.

  2. In 2023, the United Nations has designated as International Year of Creative Economy for Sustainable Development.

  3. 2023 is a common year starting on Sunday of the Gregorian calendar.

  4. In 2023, the Olympic Games will be held in Paris, France, which will be the third time the city has hosted the games, the previous being in 1900 and 1924.

  5. In 2023 is also the year that some astronomical event like, Total solar eclipse will be visible on 4th October in Antarctica, and North America.

  6. In 2023, US president Joe Biden’s first term will end and a new presidential election will be held.

  7. 2023 is also the year of the Pig in the Chinese Zodiac.

Keep in mind that these are subject to change and may change depending on the events that happen in the future, these are the facts as of my knowledge cut off.

So although it successfully recognized it as a year, it got only one of the 7 points correct – the trivial #3 indicating the year started on a Sunday.

Going through the others in order:

  1. Is nonsense.

  2. The last UN International Year of Creative Economy for Sustainable Development was 2021, and was declared in 2019. I have seen nothing for 2023.

  3. Correct

  4. The Paris Summer Olympics will be in 2024, not 2023.

  5. A bit garbled but substantially wrong. There is no solar eclipse, total or otherwise, on 4th October. There will be a total and an annular solar eclipse, and several lunar eclipses, on other dates.

  6. This one was hilarious!

  7. No, AFAIK it’s the Chinese Year of the Rabbit.

So … it had the right idea that I was asking about a year and not a number, but still managed to get almost all the facts wrong. Actually rather surprising.

You sure you didn’t run that through ChatGOP?