Well … the first part is. But it is the 4th year of the 2020s decade (2020, 2021, 2022, 2023). Even without getting into arguing about what year the 20th century started, decades are not numbered ordinally, typically.
You’re right that it’s definitely not the year of the pig. The year of the rabbit starts tomorrow; until then, it’s still the year of the tiger. The Chinese Zodiac signs don’t start on the first of the year on the Gregorian calendar.
Sure. But it was “the 47th year of the 21st century” that got me. This is in exactly the same category as the very basic numerical errors it was making earlier, though not necessarily the same cause. It’s perplexing, and in a way fascinating, to consider the cause of these logical oddities.
So is #7. I can just type “2023 chinese zodiac” into Google and get “water rabbit”. So where does it get “pig”?
I once asked ChatGPT about its political preferences, and it assured me that it was politically neutral!
Yeah, for straight-up facts, it’s unreliable. But it’s not really meant to be a search or knowledge engine like that, from my understanding. For the type of output I typically ask it for, it actually seems really good and self-consistent in a given session. I don’t quite get it, either – I just assume it’s from how the AI generates answers. What’s also odd is that sometimes it will give me correct answers to factual questions, but other times, with the same question, it will not. But then I’ll ask it a question that’s more intricate, like “what do stoicism and existentialism have in common,” and it generates an answer that is reasonable. Or, as a photographer, I’ve asked it about a certain photographic and lighting scenario and how a photographer might handle it to create a dramatic portrait, and it gives a plausible answer that is correct enough.
Meanwhile, I ask it what year in the Chinese zodiac is 2023, and it gives me Water Buffalo (which I don’t think exists from what I can google – maybe the ox? Though Wikipedia does state: “The Chinese term translated here as ox is in Chinese niú (牛), a word generally referring to cows, bulls, or neutered types of the bovine family, such as common cattle or water buffalo.”) I ask it what year is the year of the Rabbit in the same conversation and it tells me the correct answer: Jan 22, 2023 - Feb 10, 2024 (as well as giving me the last year of the Rabbit, which was mostly in 2011). It is definitely not to be relied upon for simple factual inquiries.
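For what it’s worth, the year-to-animal mapping is just counting around a 12-year cycle (with a 10-year element cycle on top), so it’s easy to sanity-check. Here’s a rough Python sketch; note it ignores the fact that the zodiac year actually begins at Lunar New Year, not January 1, so early-2023 dates would still really be the Tiger:

```python
# Rough sketch: map a Gregorian year to its Chinese zodiac animal and element.
# Caveat: the zodiac year begins at Lunar New Year (late Jan/early Feb),
# so Jan 1-21, 2023 still belongs to the previous animal (the Tiger).

ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]
ELEMENTS = ["Wood", "Fire", "Earth", "Metal", "Water"]

def zodiac(year: int) -> str:
    animal = ANIMALS[(year - 4) % 12]            # 1984 was a Rat year, hence the offset
    element = ELEMENTS[((year - 4) % 10) // 2]   # each element covers two consecutive years
    return f"{element} {animal}"

print(zodiac(2023))  # Water Rabbit
print(zodiac(2022))  # Water Tiger
```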
An interesting sidebar to all this is that there are a number of other chatbots out there that claim to be layered on top of ChatGPT but more powerful because of the added enhancements. Chatsonic, for instance (now called Writesonic), claims to be able to retrieve responses from the internet, which ChatGPT does not do. A number of these chatbots are reviewed here:
Yeah, when all these AI technologies and information sources intertwine, it’s gonna be something else. I swear, it feels like a gigantic leap has been made in the last three years or so. During COVID or thereabouts, I played with some AI landscape generator that was hitting the news, and it was pretty underwhelming. Now with the myriad of AI art generators, I’m absolutely gobsmacked how quickly we went from interesting but crude AI image generation to this. Same with the chat stuff. My wife got her Ph.D. in NLP (Natural Language Processing) back in 2011, and I remember stuff about sentiment analysis, summarization, and the like, but it seems like the Stone Age, okay, maybe the Bronze Age, compared to what we see now. I can only imagine what the next ten years will bring.
If that’s the standard you’re going by, it’s still a big nothing, since every year has at least one solar eclipse somewhere, and at least one lunar eclipse.
I wasn’t really clear and you probably misunderstood me. By “garbled” I was referring to the sentence structure; ChatGPT usually speaks with impeccable grammar. The meaning of the response was potentially a bit unclear, but ultimately I did rate it as one of the wrong ones.
I’m curious how something like Writesonic would perform on a question like that, since it notably differs by augmenting ChatGPT with a layer that has live access to the internet, but I’m not keen to bother signing up since free accounts have limited functionality.
Not surprising that it doesn’t give good answers to that question. Its training data cuts off in 2021. Even if they’ve updated it a bit since, I would think information about 2023 might be scarce. Also, it’s not designed to be a knowledge engine, and asking it for facts often takes you down a hallucinatory diversion.
There are LLMs trying to incorporate real-time internet search into their results, and others that are trying various methods of fact-checking. I’d think of ChatGPT as a really smart, well-read assistant who will tell you anything you want to hear, but who does not have perfect knowledge and will tell you bullshit if he doesn’t know an answer - or, more accurately, an assistant who doesn’t even know when he doesn’t know the answer, so he will just say what he thinks is true or make it up. But he does know an awful lot, and will usually give you good advice, especially on widely discussed subjects. Anything that isn’t widely discussed on the internet is likely to be hit-and-miss.
This is the first step to either UBI or mass starvation.
The implications of this are earth shattering.
And fascinating… glad I am too old to worry about the effect on me, but still young enough to have a strong chance of getting a good look at the coming changes.
Well, one way to look at it is to see the negative and assume people will be laid off and there will be no jobs and the rich will own everything because they control the AIs. I’ve heard that many times in the last while.
On the other hand, this is a very democratizing technology. The barrier to entry to entrepreneurship could drop dramatically, for example. Starting a small business often requires the use of legal services, consultants, environmental specialists, you name it. Preparing a product for market requires artists, web site development for support, product art, technical writers… If you are an unconnected poor person, your chance of taking a product to market is just about zero. But with AIs giving you legal advice, writing ad copy, generating product art and all the rest, a one-or-two person small business on a shoestring is much more viable.
I think what’s generating all the negativity is that unlike the industrial revolution, which impacted the poor, the AI revolution may take the jobs of the chattering classes who make all the public noise, and they are very nervous about it. But who you don’t hear from is the office worker who has to write boilerplate all day and can now demand more money because of higher productivity, or the junior programmer who can now perform at the level of an intermediate programmer with help from an AI, and so on. Professors may have reason to worry, but on the other hand you will no longer need a professor to learn about things, so access to education will be democratized. Home schooling is much more viable with an AI in the mix to answer the parent’s or student’s questions, prepare exams for evaluation, ask questions to test understanding from time to time, etc.
Everyone in the world just got a free assistant to help them with things that, just a year ago, would have been too expensive for anyone but the well off. That’s great for equality.
It might just be that a change is needed in how people are taught. More oral defenses, exams without access to a computer, that sort of thing.
But let’s focus on the positive. What a boon to education, especially for poor kids who can’t afford tutors or who have indifferent parents who won’t help them.
Imagine being able to submit your writing to an LLM and say, “Where is this wrong? How could I get better at this?” Or you have to do sentence diagramming and you don’t get it very well, so you ask the LLM to give you ten sentences fully diagrammed so you can understand. If there’s a concept in your science class you don’t get, and the teacher isn’t available, ask your LLM. Drill as deep as you want to go until the concept is clear.
And you know there will be specialist LLMs with fine tuning for various subjects, grade levels, etc, probably with accuracy filters so they don’t give the kids nonsense.
Teachers in overloaded classrooms should welcome this. They get inundated with questions from the weaker students, which slows down the whole class. They can’t spend enough time with each student, but they can curate prompts to give the kids, which will help them get more information. The LLM can create an infinite number of sample problems for practice. The advanced students can move ahead without help from the teacher, with guidance from the LLM instead.
And poor kids who can’t afford college will have a perfectly viable way to gain college-level knowledge for free, much more easily than by just studying on their own.
For every person whose job may be lost to an AI, there are going to be a lot more jobs created because an LLM is a productivity amplifier in many jobs and industries, like 3D printers and CAD did for small manufacturing, or home computers and spreadsheets did for small businesses.
Is this something that is done at all anymore? I very very vaguely remember some cursory stuff about sentence diagramming back in the mid-80s, but it was like a couple weeks of lessons and that was it. My kids haven’t been exposed to it, at least not so far.
I am getting old. In grade 8 I was told by our language arts teacher that sentence diagramming was the most important skill we would ever learn. He was apparently wrong.
However, he also told us we had to memorize a poem, and that we’d remember it for the rest of our lives. And I can still recite ‘Jabberwocky’. Turns out, that was just as useless and did not lead to a frabjous future.
I’m late to this party, but I’ve had an entertaining evening playing around with the same sorts of queries you guys were doing last month!
Here’s my favorite so far.
Prompt: Write a script for a TV commercial for Theranos starring Jules from Pulp Fiction.
Result:
Opening shot of Jules walking down a busy street, dressed in his iconic Pulp Fiction suit and carrying a briefcase.
Jules: “You know what they say, the taste of the pudding is in the eating.”
Cut to a laboratory, Jules approaches a scientist who is holding a test tube.
Jules: “And when it comes to blood tests, ain’t nobody got time for that.”
The scientist hands Jules the test tube and he holds it up to the camera.
Jules: “That’s why I trust Theranos. With just a tiny drop of blood, they can run over 240 tests. That’s right, 240 tests. No more pricking your fingers all day, no more waiting for results. Just fast, accurate results. Just like the Gimp.”
Cut to Jules walking down the street again.
Jules: “So, what you wanna do? You wanna go to a regular lab and wait for hours? Or you wanna trust the taste of the pudding with Theranos?”
Closing shot of Jules walking off into the distance, the Theranos logo and tagline appear on the screen: “Theranos: Taste the pudding.”