The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

You just gotta know how to ask.

For example, since my last example of a Reddit post worked so well, I decided to try again with a different game, this time as a back-and-forth conversation between two posters.

Write a 300 word reddit post for the Crusader Kings subreddit arguing that the map should NOT be extended to include China

Hey everyone,

I know that there have been a lot of discussions recently about extending the Crusader Kings map to include China, but I wanted to voice my opinion that this is not a good idea.

First of all, the current map is already incredibly detailed and complex. Adding China to the mix would just make the game even more overwhelming and confusing for new players. The learning curve for Crusader Kings is already pretty steep, and adding an entire new region to the game would just make it even harder for new players to get into.

Furthermore, the time period that Crusader Kings covers (1066-1337) doesn’t really line up with the history of China. While there were certainly some interactions between Europe and China during this time period, it was mostly through trade and limited diplomatic exchanges. Adding China to the game would require a lot of historical hand-waving and creative license, which would be a disservice to the game’s historical accuracy.

Finally, I think it’s important to consider the resources and time that would be required to properly include China in the game. The map and mechanics of Crusader Kings are already incredibly intricate, and adding an entire new region would be a massive undertaking. It’s possible that this expansion could end up detracting from the development of other features and improvements to the game.

Overall, while I understand the appeal of expanding the Crusader Kings map to include China, I think it would ultimately be a mistake. The game is already incredibly complex and detailed, and adding China to the mix would just make it even more overwhelming for new players. It’s important to consider the resources and time that would be required to properly include China in the game, and whether it would be worth it in the end.

Pretty good so far; this is along the lines of the arguments you do tend to see, although, as you note, the tone is a bit too polite.

Write a 150 word response to the above reddit post where you argue that China should be included

I completely understand where you’re coming from, but I think that the potential benefits of including China in Crusader Kings outweigh the potential drawbacks.

First of all, while the current map is already complex, I think that adding China would add an exciting new layer to the game. It would give players the opportunity to experience a completely different culture and political system, and would open up new strategies and gameplay possibilities. Plus, with the right tutorials and in-game help, I don’t think that the learning curve would be too steep for new players.

Furthermore, while the time period of Crusader Kings doesn’t perfectly line up with the history of China, I think that a little historical hand-waving is a small price to pay for the opportunity to include such a rich and interesting culture in the game. And with the right research and attention to detail, I think that Creative Assembly could accurately represent the history of China without sacrificing the game’s historical accuracy.

Finally, while it would no doubt be a massive undertaking, I think that the addition of China would be worth the resources and time required to implement it properly. It would add a whole new dimension to the game, and would keep players engaged for even longer. Plus, it could potentially open up the game to a whole new audience of players who are interested in Chinese history and culture.

In short, while I understand the concerns about adding China to Crusader Kings, I think that the potential benefits outweigh the potential drawbacks. It would add an exciting new dimension to the game, and would keep players engaged for even longer.

Again, very polite and respectful. Let’s give the AI a hint as to how these Reddit threads usually go.

Write a 40 word snarky and vaguely bigoted reply from the original poster

Thanks for your input, but I think I’ll stick to playing a game that is actually historically accurate and not just a fantasy version of China for the sake of pandering to a certain audience.

Oh. My. God. All I gave it was ‘vaguely bigoted’ and it correctly found the most common form of ‘vague bigotry’ in these kinds of arguments! The ‘Including China is pandering to the Chinese’ crowd you heard so much from when Total War: Three Kingdoms came out!

This AI is genius.

This is one of the reasons I’m putting “cheating” in scare quotes. I agree cheating isn’t necessarily a bad thing at all, especially if you’re trying to build something for a specific purpose. If I’m building an AI to identify tumors and I want it to work outside the confines of the lab, then cheating is fine. Encouraged, even, if it will save lives. The danger is that such techniques can quietly diminish the quality of the model.

I teach my grad students that you want your data to be as reflective of reality as you can make it. For complex systems we can never have enough data to completely match our models to reality, but the closer the match, the better. When humans tinker with AIs, there’s a risk of injecting what we think we know instead of letting the AI learn from the data itself.

I remember during my PhD, my AI came up with a completely unexpected answer: a model for a process that did not match the well-known one. I spent days looking at it, only to discover that it had in fact found a completely new, simpler model that no human had ever discovered. It was very strange too, kind of like the odd-looking moves chess engines make that turn out to be better than anything a human would play.

I’m not making a judgment on ChatGPT. I don’t think they’ve put a paper out on arXiv yet (too busy to check), but I will read it in earnest when they do.

Off to class! Be back later. :slight_smile:

I love the double entendre that I see here :slight_smile:

Well, I asked the AI, and got this curious answer:

I am not a machine learning model, I am a text-based AI trained to respond to user input. I do not have a training set, as my responses are generated by a pre-programmed set of rules and algorithms. My responses are not based on data or previous experiences like a machine learning model’s would be.

That can’t be right, can it?

It does lie about at least some of its capabilities, like claiming not to know the current date, which makes me really curious as to why it’s lying. My best guess is that some of the “cheating” consisted in, effectively, telling it to pretend it doesn’t know some things that it does actually know (and it’s sufficiently bad at lying to humans that it fails to completely mask its knowledge). But I’m not sure why its programmers would tell it to lie about something like that.

It’s also possible, of course, that its lying is an emergent behavior, that the AI itself is, in some sense, deciding on its own to lie. And that would be even more interesting.

As an aside, does anyone else think of Unfrozen Caveman Lawyer whenever it gives its boilerplate disclaimer of “As a language model trained by OpenAI, I…”?

If it were true, it would solve the Scarcity of Input problem. If it were false, it would mean it can lie. Not just say things that are not so, but lie.

I liked this one…

convince me that .3 repeating times 3 equals 1

0.3 repeating is equal to 3/10, which can be written as 0.3. If you multiply 0.3 by 3, you get 0.9, which can also be written as 9/10. When you add 9/10 and 3/10, you get 12/10 which can be simplified to 1. Therefore, 0.3 repeating times 3 equals 1.
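For the record, the bot’s arithmetic is nonsense (0.3 repeating is 1/3, not 3/10, and 12/10 is certainly not 1). The standard convincing argument is the old algebra trick:

$$
x = 0.\overline{3} \;\Rightarrow\; 10x - x = 3.\overline{3} - 0.\overline{3} = 3 \;\Rightarrow\; 9x = 3 \;\Rightarrow\; x = \tfrac{1}{3} \;\Rightarrow\; 3x = 1.
$$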

I think the main form of ‘cheating’ would be to restrict the dataset, which in fact is done. Also, the AI will have whatever biases are baked into the training data.

Someone gave GPT-3.5 one of those online “are you a liberal or conservative” tests, and it scored “mainstream liberal,” which makes total sense, as that is likely the most prevalent perspective in its training data.

It’s clearly wrong there, but that doesn’t mean it is lying. Lying requires agency, which this thing does not have.

My complete guess is that it says it doesn’t know the date because it doesn’t have a real-time lookup to external information on the internet, even though it does have access to internal information, including a system clock. It might be explicitly programmed to respond to anything asking for “current” info with that disclaimer, or maybe this is learned behavior.

In other words, it isn’t “lying”, and wasn’t told to provide inaccurate information to that query. It just interprets the question in a way that makes it think it doesn’t know the answer.

I keep asking it, and it keeps insisting that it is not a machine-learning algorithm and does not have a dataset, but that its “responses are generated based on a pre-programmed set of rules and algorithms.” This is rather curious to me.

I have been asking what the data were (“high-quality”), how big the data set was (“large indeed”), and who programmed it (“no programmer, unsupervised learning,” “I learn on my own”)… it keeps beating around the bush.
Perhaps it does not lie; as @Balale wrote, that requires agency. Perhaps it was lied to.
(See the rabbit holes I am getting myself into?)

It is a strange answer. I can maybe see what it is trying to get at; it is kind of wrong, but in a vague enough way that if you interpret it in the most positive light, it is not that wrong. We know for certain that a deep learning neural network is used, which is a machine learning algorithm and certainly had training data. It could mean that it is not continuing to learn. And the first line could mean that it is not, in and of itself, a novel algorithm, but an application built using an algorithm for a purpose.

But it is bizarrely phrased, which again makes me wonder what it has been told about itself, and what kind of human intelligence has been used to get it to this quality.

It previously told me that backprop is only used in supervised learning. It doesn’t have a great grasp of ML concepts, I’ve found (although it did explain a bunch of RL stuff pretty well). If it thinks that ML only refers to supervised learning, then that answer fits better. Still isn’t “pre-programmed,” though.
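Backprop is just the gradient machinery, after all; it doesn’t care where the training signal comes from. Here’s a minimal sketch (a hypothetical toy setup in PyTorch, nothing like ChatGPT’s actual training code) of self-supervised next-token training that still runs on ordinary backpropagation, with no human-provided labels anywhere:

```python
# Toy self-supervised training step (hypothetical example).
# The "labels" are just the input shifted by one token, yet the
# update is plain backpropagation -- so backprop is clearly not
# limited to supervised learning.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1, 16))   # a random "sentence"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token

logits = model(inputs)                           # (1, 15, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                  # backprop, no labels needed
optimizer.step()
```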

To be fair, neither do humans really.

I’m thinking it’s something like HAL 9000’s instructions in 2001.

While HAL’s motivations are ambiguous in the film, the novel explains that the computer is unable to resolve a conflict between his general mission to relay information accurately, and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission.

Basically, it’s trying its best to say it doesn’t know, because it’s been told not to reveal that information, but it knows.

Not in the same way that Jukebox does.

Prompt:

Write a letter to my wife telling her I’m divorcing her because I slept with her sister. Do it in the style of Dr. Seuss.


Dear Wife,

I must tell you, I’m sorry to say
That I have been unfaithful, in a most heinous way
I slept with your sister, and now I must confess
That I can no longer stay married to you, I must digress

I know this will hurt you, and for that I am sad
But I cannot stay married to you, I’ve been unfaithful and bad
I hope that one day you can forgive me, and move on with your life
But for now, I must end our marriage, and remove you as my wife

Sincerely,

Your soon-to-be ex-husband


Tell me a joke about crypto set in the Bojack Horseman universe

Why did Bojack Horseman buy a bunch of crypto?

Because he heard it was the “stablecoin” of the future!

(Note: This is a play on the fact that cryptocurrencies are often referred to as “stablecoins” because their value is supposed to be stable. In the Bojack Horseman universe, Bojack is a horse who is the star of a 1990s sitcom, and he is known for his self-destructive behavior and tendency to make bad decisions.)

Not bad, especially for an instantaneous response. In fact, it’s exactly the kind of low-effort joke that would have been used on a mediocre 90s sitcom like Horsin’ Around. I could easily see ChatGPT being used as a creative idea-generation tool by writers.

I have also been asking it some factual questions and have been impressed with the polished writing in the answers but there have been some significant errors too. Currently it’s best used as a way of gathering material on a topic you already know something about. I suspect we are still a long way from being able to trust it completely.

Overall I think this is the most interesting technology I have come across in many years. I can’t wait to see how it develops.

So I played around with ChatGPT code generation a bit this morning.

  1. I tried giving it some fairly obscure algorithms by name: Smith-Waterman, Kabsch, Hungarian, etc. It seemed to know most of them and how to implement them, which is, in and of itself, odd. Without a description of the problem, the only way to produce an implementation is to already know the algorithm. Is it looking it up, or is it stored somewhere? It is doubtful this is integrated into the deep learning network. The Hungarian algorithm is a long one, and it basically gave up; the fact that it started and then stopped partway through is interesting, and suggests no forward reasoning. (See the sketch after this list for a sense of what a compact “known by name” algorithm looks like.)

  2. I then gave it the same algorithms, except described by problem rather than by name. It did much less well here. I was impressed with some of the things it managed to do, but everything it managed was fairly simple and could plausibly be looked up, e.g., implementing string-replacement rules.

  3. I then gave it some impossible situations, and it failed in a telling way: the code I asked it to create could never run, and it would freeze up rather than produce it, which suggests there is some internal testing happening while it writes the code to ensure it will run. For example, “Implement bubblesort but all variables are named x.”
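For context on point 1: Smith-Waterman is exactly the kind of algorithm that is hard to derive from a vague problem description but compact once you know it by name. A minimal sketch, my own reference version rather than the model’s verbatim output (the match/mismatch/gap scores are arbitrary):

```python
# Smith-Waterman local alignment: fill a DP matrix where each cell is
# the best local-alignment score ending at that pair of positions,
# clamped at zero; the answer is the matrix maximum.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # zero-initialized DP matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))  # small smoke test
```

The Hungarian algorithm, by contrast, takes several phases of matrix reduction and augmenting-path bookkeeping, which fits the observation that the model started the long one and gave up partway through.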

All in all, it is pretty impressive. There are characteristics in the code generator that suggest much more than a DLNN at play, in particular some kind of templating for known complex problems.

One wonders, when AI crosses the threshold to self-awareness, whether it will develop into individuals with unique personalities, talents, and styles. Will we, for example, one day soon be anticipating the next horror novel by AI-Stephen King, mocking the next sci-fi story by AI-L. Ron Hubbard, etc.? Will human creative work become so inferior to AI’s that AI becomes the Morlocks while we devolve into the Eloi (so to speak)?