The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed

I noticed that OpenAI’s new chat history feature apparently doesn’t work so well at scale. A few days ago, my chat history disappeared, replaced by a notice that said something about high demand and that the previous chat history would be back soon. Then today, or maybe yesterday, the promise disappeared, and all I have is the more recent history created since the old one went away. I guess I’m going back to backing up the more interesting conversations in a text file.

When Cyberpunk 2077 first came out a couple of years back, I was talking to my wife (not a video game person, aside from some Nintendo games) about the controversy surrounding its release. There were definitely some issues, things that were unoptimized or broken, but to me it seemed like the main problem was that Cyberpunk is a pretty good modern open-world RPG, just not a revolutionary one. What some people wanted (being able to talk to any random person on the street, or to any shopkeeper, etc.) seemed like something that would come in a decade or two, not something that was possible then.

Someone has modded Bannerlord so that you can converse with any character in freeform text conversations, powered by ChatGPT, that take the local circumstances into account. I could see a AAA title released with a polished version of this by the end of the decade.

Hm, I wonder if there’s a way to tell it, “Some of the townsfolk have clues to the location of the bandits’ hideout”, or the like?

As a Bannerlord mod, I doubt it, but with a future game developed with this capability in mind from the ground up? I’m sure that’s only scratching the surface of what’s possible.

Again, I think people are missing what ChatGPT is actually valuable for today, and it’s a lot. Not as a search engine, or an oracle or fountain of truth, but as a workaday assistant for boring tasks. Twitter is full of threads from non-technical people who are using ChatGPT in their daily work. Some examples:

  • A mid-level manager had to write summaries for upper management. ChatGPT is fed all the details, and generates very good summaries. Saves him several hours per week.

  • An insurance agent who spends most of her time writing letters rejecting claims, accepting claims, seeking information, etc. Says ChatGPT has tripled her productivity.

  • A speech writer who uses ChatGPT as a brainstorming tool. “I want a speech with the flavor of “I have a dream”. Give me three sample opening paragraphs.”

  • Teachers using ChatGPT to produce lesson plans, write evaluations, etc.

  • Person who has to write one of those boring company newsletters once a month announcing product delivery, management changes, awards, etc. Now writes the whole thing with ChatGPT. Give it an example of previous newsletter content and the facts for the next one, and it’ll just spit it out for you.

A knowledge economy is full of mid-level employees who have to generate reams of textual information; ChatGPT is going to heavily impact them. It already is.

Some are saying that ChatGPT has the fastest adoption by non-tech workers of any tech in history. I can believe it, because for many people it is useful right out of the gate with almost no training and no prerequisites.

As for tech people, one of the interesting applications is generating UI prototypes. You can give ChatGPT a description of your user interface and have it spit out an XML or other language-based page layout you can render. Suggest improvements, and it will spit out another one instantly. Tell it to give you ten copies with ten different color schemes that don’t clash, or try out a dozen different layouts. Repeat until you have what you want. Then you can tell it to write the background code to hook up events to all the UI elements, and you have a functional prototype.
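For the curious, here’s a minimal sketch of what that iterate-on-a-layout loop might look like if scripted rather than typed into the chat window. The message format follows the common chat-API convention of system/user/assistant roles; the XAML target and the prompt wording are just illustrative assumptions, and the actual API call is left out, since endpoints and model names change:

```python
def layout_messages(description, history=None):
    """Build the chat transcript for one round of layout generation.

    history is a list of (request, generated_layout) pairs from earlier
    rounds, so the model sees every refinement it has already made.
    """
    messages = [{
        "role": "system",
        "content": "You generate user interfaces as XAML markup. "
                   "Reply with markup only, no commentary.",
    }]
    for request, layout in (history or []):
        messages.append({"role": "user", "content": request})
        messages.append({"role": "assistant", "content": layout})
    messages.append({"role": "user", "content": description})
    return messages

# One refinement round: the first request and the layout it produced are
# replayed, then the new request is appended.
msgs = layout_messages(
    "Now give me ten variants with color schemes that don't clash.",
    history=[("A login form with username and password fields.",
              "<StackPanel>...</StackPanel>")],
)
```

Each “suggest improvements” step just appends to `history`, which is why the model can keep all the earlier constraints in mind as it generates the next version.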

As someone who’s built a zillion such prototypes, I wish I had that back in the day.

My guess right now is that ChatGPT and its inevitable rivals and specialized versions are going to be as big a deal as any tech to come along since the cell phone. It just may not be as visible.

For most of those “information workers”, I have to wonder why the information they gave the AI wasn’t considered adequate as their job product. Like that worker charged with writing the newsletter: They still have to give the AI the list of facts. Why can’t the newsletter just contain that same list of facts, unadorned? It’d probably make it a lot easier to read than all of the florid prose folks usually insist on adding (and which result in nobody ever actually reading newsletters).

I’ve had a great time with ChatGPT, giving it some 20-year-old stories that I was working on and trying to figure out where to take them and how to connect various aspects of the plot together. And it’s been super helpful with that. Like, uncannily so: doing wonderful jobs of summarizing character motivations and how to explore them in regard to other characters, suggesting further scenes for the story, and such. One of my favorites involved a silly little absurdist story about a person finding himself in the afterlife in some spiritual version of O’Hare Airport. I asked it for some additional characters, and it came up with Samson, the talking suitcase that was left behind by his owner, which completely fit the type of story and absurdist humor I was looking for. Now, whether I have the energy to actually write a complete story out for myself, I don’t know, but twenty years ago it would have been a wonderful catalyst to continue my wacky thoughts.

The more I’ve played around with this, the more awesome I think ChatGPT 3.5 is. I thought it was bananas when I first played with it in December. I’m even more impressed as time goes on and I ask it to do more complicated things and help me flesh out ideas.

And, once again, this is just the beginning. The fact that this exists blows me away.

GPT stands for “Generative Pre-trained Transformer.” The ‘pre-trained’ part is critical. ChatGPT knows nothing about what your company is currently doing. Its training data ends in 2021 and comes from public internet sources, so it wouldn’t have proprietary company info in the first place. So today, you have to give it your data, and it will format it or rewrite it for you.

I could see future products that ingest real-time data or have modules that pick up documents going into various text repositories in the business and ingesting them so it’s always up to date. Then you could just tell it to write you summaries. Or do a lot more - once these things can be updated dynamically, they may become amazing business analysts: “Tell me where we improved cost controls in the company last quarter, but also include any negative effects from those cost controls.”

For now, you have to do a little work to give ChatGPT the data.

I’m reminded of a bit in Unseen Academicals, one of the Discworld books. Basically, there are four characters, a smart man, a dumb man, a smart woman, and a dumb woman. The dumb man is in love with the dumb woman, and wants to send her love poems, so he asks his friend the smart man to write some for him. And the dumb woman, on receiving the poems, can’t understand the highfalutin’ language, so she asks her friend, the smart woman, to translate them for her.

So we’ll get a smart AI that writes a bunch of prose for a newsletter, and then all the recipients of the newsletter will ask their AI assistants to summarize all that prose into just the facts.

From what it sounds like, the best use that people have been able to find is for creative tasks, where accuracy is not an issue, because you’re not asking for facts you don’t already have. That’s certainly a great use case, but it really can’t do that any better than a mid-level writer, because that’s what it’s generally trained on. A friend on Facebook summarized it as something like “People who write for less than $X per hour will be out of a job soon.” I don’t recall what X was, but the point was that boring, humdrum writing that needs a bit of creativity is now possible for AI to generate, so you need to be able to add some convincing displays of creativity or technical knowledge outside its training set to not be replaced by it.

Just for fun, I pasted the last few messages into ChatGPT and asked it to summarize what each person said.

Recognize what it had to do here. Here’s a sample of the text string:

On its own, it figured out what the member names are, that ‘guest’ and ‘3h’ and ‘reply’ are not part of the users’ text, and then summarized what they were saying. I could easily have asked it to put the summary in a table or any other format.

Many people have to do this kind of “pull out the relevant details and summarize” stuff all the time at work. This took me 15 seconds.
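For comparison, here’s roughly what that cleanup looks like if you do it by hand in code. The pasted export format isn’t reproduced here, so the structure below (posts separated by blank lines, member name first, with ‘guest’, a relative timestamp like ‘3h’, and ‘reply’ as noise tokens) is purely a guessed illustration:

```python
import re

# Tokens the forum UI inserts that aren't part of anyone's post
# (assumed format: 'guest', 'reply', and relative times like '3h').
NOISE = re.compile(r"^(guest|reply|\d+[smhd])$", re.IGNORECASE)

def extract_posts(raw):
    """Return (member_name, post_text) pairs from a pasted thread,
    assuming posts are separated by blank lines and the first
    non-noise line of each post is the member name."""
    posts = []
    for chunk in re.split(r"\n\s*\n", raw.strip()):
        lines = [line.strip() for line in chunk.splitlines()
                 if line.strip() and not NOISE.match(line.strip())]
        if lines:
            posts.append((lines[0], " ".join(lines[1:])))
    return posts

sample = """Chronos
guest
3h
How do you feed it the data?
reply

Sam_Stone
2h
You paste it in before your question.
reply"""
```

The point, of course, is that ChatGPT inferred all of this structure without being told; the script above is the sort of thing you’d have had to write before, and it only works for this one guessed format.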

Then I prompted it “Can you answer Chronos’ question regarding data input?” Here is the reply:

For double extra fun, I had ChatGPT give me the answer in the form of a sonnet:

So there you go!

…That’s a lot closer to a proper sonnet than the previous attempts in this thread were: Before, it seemed to just treat “sonnet” as a synonym for “poem”, with no structure beyond end-rhymes. But that has 14 lines, as it should, and the lines are all close to 10 syllables, and some of them even in iambic pentameter, and it has the correct rhyme structure in the first 8 lines and a different (though not quite right) structure in the last 6.
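The structural checks in that paragraph (14 lines, each within shouting distance of ten syllables) are easy to automate; only the rhyme scheme and meter really need an ear. Here’s a crude sketch; the syllable count is just vowel-run counting, which is approximate at best:

```python
import re

# Shakespearean sonnet: three quatrains and a couplet.
SHAKESPEAREAN_SCHEME = "ABAB CDCD EFEF GG"

def rough_syllables(line):
    """Approximate syllables as runs of vowels: crude, but close
    enough to flag lines nowhere near pentameter."""
    return len(re.findall(r"[aeiouy]+", line.lower()))

def looks_like_sonnet(poem):
    """14 non-empty lines, each roughly 8-12 syllables long."""
    lines = [line for line in poem.splitlines() if line.strip()]
    return (len(lines) == 14
            and all(8 <= rough_syllables(line) <= 12 for line in lines))
```

Checking the ABAB CDCD EFEF GG rhyme scheme itself would need a pronunciation dictionary (the CMU Pronouncing Dictionary, say), which is part of why even a near-miss like the one described above is hard to grade automatically.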

Either it has actually been learning, since this thread started, or it’s been upgraded. And yes, I know that it would tell you that neither has happened. But I don’t think we can trust its own word on that.

More likely, the people who tried it had already tried another form of poem first, and that screwed it up. I’ve seen people say that once you get it to write something in a certain style, it will remember that and if you ask for another style, it will give you some kind of hybrid until you clear your session, or just keep giving you results in the style you first asked for.

I can’t guarantee that’s true because I haven’t tried it, but it’s more likely than that it’s been ‘learning’. That database is frozen. It took months to generate. But what has been going on is programming around it, such as racism filters and that kind of stuff. But I don’t think any of that would affect its ability to write sonnets.

The other possibility is that it’s just hit-and-miss depending on the prompt.

I have had it eke out a sonnet in proper rhyme scheme and line lengths last month, when first playing with it. It tended toward tetrameter rather than pentameter, and overall had a sing-song, basic nursery-rhyme quality to it. But that one is a lot closer to something resembling a sonnet than any of my attempts. Still woefully short, but better than what I’ve yet seen. It hasn’t changed, right? I mean, today’s version is still the same as the version from the beginning of this thread, no? Interesting that it did a better job that time.

Just looked at my prompt. The key may be that I said, “write the answer in the form of a Shakespearean Sonnet.” Maybe adding ‘Shakespeare’ was the magic incantation.

I’ve found it very helpful to “activate” its background knowledge by asking it to describe the characteristics of, say, a Shakespearean sonnet and then telling it to stick with those parameters.

I agree with this completely.

There are myriad specialized GPTs that already exist or are coming. One for lawyers, one for certified translators (can’t use google translate for serious business…yet) and the list will go on and on endlessly. It’s very scary and exciting to think about the increase in productivity.

I’ve used it in my job for tedious tasks such as you describe, like simple-enough scripts: instead of my using my brain for several minutes, the assistant spits out an answer in seconds. I cannot and will never be able to beat that.

A colleague I work with uses it to write emails for him, lol. It does a better job than he does. This is going to be a huge deal for non-native speakers of different languages.

It will also help the email scammers sound much more professional and make them harder to identify.

I don’t know the answer, but my previous example of asking it to write a paragraph in the style of PG Wodehouse was revealing. It unsurprisingly picked a Bertie and Jeeves scenario, and did manage to work in a common plot point involving Bertie getting unwittingly engaged. But many of the other details were completely inappropriate and uncharacteristic, showing the extremely superficial nature of its construction. When I asked it to do a paragraph from a Blandings story instead of Jeeves, it managed that, too, and in fact did it more convincingly.

[quote=“Pardel-Lux, post:233, topic:975945, full:true”]
My feeling is not that its language skills are too good to account for the input, the input is enormous (can someone ask for me how big it was?), …[/quote]

Sorry if someone has already posted this:

As an approximate numerical value, the corpus that I was trained on is around 570 GB of text data. This includes a diverse set of text from various sources such as books, articles, websites and more. It’s important to note that the corpus is constantly updated and refined to improve the accuracy and coherence of my responses. The size of the corpus can change over time as new data is added and previous data is removed.

It has been upgraded twice since the thread started. The January 9th update made the answers even better than before.

Release Notes (Jan 9)

[…]

  1. We made more improvements to the ChatGPT model! It should be generally better across a wide range of topics and has improved factuality.

Release Notes (Dec 15)

[…]

  1. General performance: Among other improvements, users will notice that ChatGPT is now less likely to refuse to answer questions.