I’m not sure if you meant 3 kHz to 3 GHz (which would be a million-times improvement), but in 1980 the 6502 microprocessor, for example, was running at 1 MHz, so more like 1,000 times faster (just in terms of clock speed; many more times in terms of overall speed). Even ENIAC was running at 100 kHz.
I like the idea of a computer running at 3Hz – maybe moving abacus beads or something.
Whoops, I’ve made a huge mistake. You are correct: I meant MHz, and it is indeed 1,000 times, not 1,000,000. Shame on me for not noticing an error of three orders of magnitude.
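For the record, a quick sanity check of the ratios being tossed around (round numbers only; a sketch, assuming a ~3 GHz modern clock):

```python
# Rough orders of magnitude for the clock speeds discussed above.
print(f"3 kHz -> 3 GHz: {3e9 / 3e3:,.0f}x")  # 1,000,000x (the misreading)
print(f"3 MHz -> 3 GHz: {3e9 / 3e6:,.0f}x")  # 1,000x (the intended claim)
print(f"1 MHz -> 3 GHz: {3e9 / 1e6:,.0f}x")  # 3,000x (a 1980 6502 vs. today)
```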
Last year I was working on a proposal and one of our consultants used ChatGPT to create a lot of the responses. I still had to go through and revise a bunch of nonsense that sounded kind of legit (and replace it with my own nonsense that sounded kind of legit).
I looked in my LGP-21 manual, a computer from 1962, a transistorized version of the LGP-30 from 1956. Though it doesn’t give a clock rate, the fastest instructions (Bring, Add, etc.) take 7.26 msec (that’s milli, not micro). Multiplies and divides take 58.11 msec. But the memory of this thing was a rotating disk, not core, so the timing depended heavily on how far the operand sat on the disk from the instruction. Jumps took 1.59 msec. I doubt anyone ever ran benchmarks on this thing, but it looks like it would be well under 0.1 MIPS.
So that’s a better starting point than 1980, when we actually had microprocessors.
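A back-of-envelope check of that “well under 0.1 MIPS” estimate, using the quoted instruction times (a rough sketch; the real rate would depend on the instruction mix and disk latency):

```python
# LGP-21 instruction times quoted above, in milliseconds.
bring_add_ms = 7.26  # Bring, Add, etc.
jump_ms = 1.59       # Jumps (the fastest time quoted)

best_ips = 1000 / jump_ms          # ~629 instructions per second
typical_ips = 1000 / bring_add_ms  # ~138 instructions per second

print(f"best case: {best_ips:.0f} IPS = {best_ips / 1e6:.6f} MIPS")
print(f"typical:   {typical_ips:.0f} IPS = {typical_ips / 1e6:.6f} MIPS")
# Both land around 0.0001 to 0.0006 MIPS: not merely under 0.1 MIPS,
# but roughly three orders of magnitude under it.
```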
Maybe, but it seems there is a long way to go. AI may be killing itself right now.
Interest in artificial intelligence continues to surge, as Google searches over the past 12 months are at 92% of their all-time peak, but recent research suggests AI’s success could be its downfall. Amid the growth of AI content online, a group of researchers at Cambridge and Oxford universities set out to see what happens when generative AI tools query content produced by AI. What they found was alarming.
The short version: it seems to be the copy-of-a-copy-of-a-copy problem from photocopiers.
When most of the content online is AI-generated, the next bit of content gets generated from AI-generated content, and so on. This does not end well, and we need to figure out a solution for that.
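A toy illustration of why it doesn’t end well; this is a hand-wavy sketch with a Gaussian standing in for a generative model, not what the Cambridge/Oxford researchers actually did:

```python
import random
import statistics

# Each "generation" fits a model (here, just a Gaussian) to samples
# drawn from the previous generation's model. Tail values get
# under-sampled, so the fitted distribution narrows over time:
# the statistical version of photocopying a photocopy.
random.seed(0)
mu, sigma = 0.0, 1.0
n = 10  # a small sample per generation exaggerates the effect

for gen in range(51):
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # refit to the new "data"
```

Run it and sigma shrinks toward zero across the generations: the model keeps the middle and loses the tails, which is roughly the failure mode the researchers report for language models.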
I dunno if AI in general is overhyped, but I know WAY too many people have decided to actually trust ChatGPT to answer questions and write things for them. Things it is… not good at.
I thought I’d ask it a few questions.
Who are the all time leaders in career home runs by a Toronto Blue Jay?
ChatGPT replied:
As of the end of the 2023 MLB season, the all-time leaders in career home runs for the Toronto Blue Jays are:
1 - José Bautista - 288 home runs
2 - Edwin Encarnación - 239 home runs
3 - Carlos Delgado - 204 home runs
4 - Vladimir Guerrero Jr. - 124 home runs (note: this number will continue to grow as he is an active player)
5 - John Olerud - 200 home runs
This is wildly wrong.
What are the lyrics to Lisa Loeb’s “Stay”?
I won’t reprint its entire answer, but it was almost completely wrong.
What are the lyrics to The Tragically Hip’s “Nautical Disaster”?
It just repeated the line “I had a good life before I went down” 21 times. That isn’t a line in the song at all.
How many elements are there in ISO 9001?
It said 10. That’s an understandable answer, but wrong. There are eleven, but they’re numbered 0 to 10.
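The classic fencepost trap, for anyone counting along:

```python
print(len(range(0, 11)))  # clauses numbered 0 through 10: eleven of them
```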
What are the ten largest cities in Canada, by population?
Absolutely blew this one. It SEEMED to start off wanting to do this by true city-limits population, but then it provided them all out of order, started using metro populations, and at the end called “Kitchener-Waterloo-Cambridge” one city.
I mean, these are questions where the exact answer can be found in list form in a million places; no real intelligence is needed at all, and it got them ALL wrong.
Try it on another service. Bing’s Copilot gets the all-time home run leaders right. Google’s Gemini gets it right, too. Claude misses number five, giving George Bell instead of Joe Carter. I no longer have access to the newest ChatGPT, but I’m curious whether it does better. I didn’t try the other questions, but you have to get a sense of all the AIs to fairly evaluate AI in general.
Many AIs will not reprint lyrics due to copyright, and should say so, but some just hallucinate them for some odd reason. Claude, for example, refuses outright. Copilot gives a few lines and then a link to where you can find the full lyrics.
Where AI has potential is in more limited tasks like determining the quaternary structure of folded proteins, designing drugs, etc.
Language is an important part of intelligence. My experiments with GPT have generally yielded mediocre answers, with an occasional follow-up prompt needed, but few massive mistakes. While it is amazing that it can give a halfway decent answer, I haven’t yet seen any remarkable insights.
For things like driving, AI really needs to make the right “decision” way over 99% of the time. Sure, it already exists as a limited thing in specific small areas with good weather. Yes, it will continue to improve. For me to use it, I would want months of excellent performance in inclement weather, with clear liability. I actually enjoy driving most of the time; I think many people do. But if you aren’t part of the solution, you might be part of the precipitate.
AI companies are starting to see challenges to the gospel of ‘more is better’, with new models apparently performing below expectations for OpenAI, Google DeepMind and Anthropic:
OpenAI isn’t alone in hitting stumbling blocks recently. After years of pushing out increasingly sophisticated AI products at a breakneck pace, three of the leading AI companies are now seeing diminishing returns from their costly efforts to build newer models. At Alphabet Inc.’s Google (GOOG, GOOGL), an upcoming iteration of its Gemini software is not living up to internal expectations, according to three people with knowledge of the matter. Anthropic, meanwhile, has seen the timetable slip for the release of its long-awaited Claude model called 3.5 Opus.
Expectations are also being walked back regarding AGI via scaling:
“The AGI bubble is bursting a little bit,” said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face. It’s become clear, she said, that “different training approaches” may be needed to make AI models work really well on a variety of tasks […]
Rather, apparently ‘agents’ are now the next big thing:
Like Google and Anthropic, OpenAI is now shifting attention from the size of these models to newer use cases, including a crop of AI tools called agents that can book flights or send emails on a user’s behalf. “We will have better and better models,” Altman wrote on Reddit. “But I think the thing that will feel like the next giant breakthrough will be agents.”
Which seems very reminiscent of the language used when Alexa, Siri, etc. were first introduced…
AI is transparently terrible at that sort of thing, and I am appalled that people rely on Google AI and such to answer questions, because it’s horrible at it.
@Dr_Paprika is totally right; where AI is helping and will help us is with stuff people AREN’T good at, like designing a compound for a new drug product. But most everyday use of AI right now is for things a person can do way, way better, like finding factual answers or writing an article about sports. It’s bizarre.
And you can’t shut the stupid thing off, either. It will give you the worse-than-useless AI answer, whether you want it or not.
I’m beginning to get outright hostile to AI; not because I hate the concept or the technology, but because of the insistence on forcing it on everyone, whether they want it or not and whether or not it can do the job they’re tasking it with.
I assume AI is overhyped, but it’s possible that I simply haven’t run into a version of it that does anything I want done.
The most impressive experience I’ve had with AI was chatting with a bot at a site designed to persuade people away from conspiracy theories. All the replies to me were on-topic and relevant.
It seems to work best when confined to a narrow band, I’ve noticed. Which I suppose is what one would expect.
AI success: Google’s DeepMind extends accurate weather forecasting from 10 days out to 15 days. Forecasts are created in minutes instead of hours, and it beats existing forecasts 97.2% of the time. A year ago they had reliable 10-day forecasts; now they’ve blown past pre-existing barriers.
I’m impressed. Time to update priors, I say. This could save lives by predicting extreme weather events with greater accuracy, and I hope it makes bundles of profit for Google. Their parent has invested billions in R&D that hasn’t monetized all that well; a big payout might keep the suits in line.
Gifted NYT article, for those who want greater detail: