The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

I posted somewhere else that it looks like a natural fit for shipboard damage control. There are contractors who write such software, but it’s pretty crude. GPT could absorb all of the data about the ship: its structure, its electrical and hydraulic systems, and the effects of different weapons that might be fired at it. The literature relevant to a ship would not be as large as GPT’s training set, but it would still be very large, and definitely worthy of the task. And DARPA has the money to fund it.

I would think call center replacement is an obvious first target. I don’t know what it costs to run the system or how many callers it can serve at once. Of course, it may be just a passing novelty, like talking cars were in the ’80s.

I don’t think so. @Chronos’s position might be one of functionalism (though from a one-liner it’s hard to tell), while you’re proposing a kind of behaviorism. I don’t think either is correct, and I don’t think there’s anything mystical about the proposition that differences in implementation lead to, well, differences, but that’s another matter. (Without endorsing it, Integrated Information Theory is a good litmus test here: there, it’s a perfectly simple consequence of the theory that two systems performing the same tasks may differ in the associated mental states, but there’s nothing mystical about that; they just differ in the amount of information they integrate.)

A human sensing the temperature in a room may turn a knob, functioning as a thermostat. A bimetal strip, or a computer with a sensor and a relay, can do the same thing. So they share the definition of the task, yet they have different mental states. Or at least, the argument being made is that because they share the description of a task, they must share the nature of how it was done. They do not. A bimetal strip does not have a mental state, and neither does a computer…
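
To make the point concrete, here is all the “computer with a sensor and a relay” version amounts to: a small control loop. This is just a rough Python sketch; the sensor and relay are simulated stand-ins for real hardware, and the setpoint and hysteresis values are made up.

```python
# Rough sketch of a software thermostat. The sensor and relay are simulated;
# a real device would read and switch hardware instead.
import random

SETPOINT_C = 21.0    # target room temperature (made-up value)
HYSTERESIS_C = 0.5   # dead band so the relay doesn't chatter

def read_temperature() -> float:
    """Stand-in for a real sensor read (e.g. an I2C or ADC query)."""
    return 21.0 + random.uniform(-2.0, 2.0)

def set_relay(on: bool) -> None:
    """Stand-in for switching the heater relay."""
    print("heater", "ON" if on else "OFF")

def thermostat_step(current_temp: float, heating: bool) -> bool:
    """One pass of the control loop: decide whether the heater should run."""
    if current_temp < SETPOINT_C - HYSTERESIS_C:
        return True
    if current_temp > SETPOINT_C + HYSTERESIS_C:
        return False
    return heating  # inside the dead band: keep the previous state

if __name__ == "__main__":
    heating = False
    for _ in range(5):
        temp = read_temperature()
        heating = thermostat_step(temp, heating)
        set_relay(heating)
```

Whatever that loop is doing, it is not a mental state.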

It’s odd that nobody ever accused an IBM electrical data processing machine of thinking. An IBM 407 is a programmable computer with adders, addends, and augends, just like an electronic processor. But it was labeled a processor, not a computer. Maybe there’s some semantics at play here.

More accurately, the thermostat has no mental states at all. But no one has ever tried to claim that a thermostat is demonstrating intelligent behaviour. When an AI like ChatGPT successfully answers a broad range of questions specifically designed to test intelligence, you’re dealing with a qualitatively different phenomenon, and it’s not one that can be easily hand-waved away.

Agreed, I was addressing the perception that any computer is a rudimentary brain.

Very informative link, thanks.

I thought a couple of comments were humorous, like “tractors didn’t replace farmers, they made farmers more productive.” Tell that to the thousands of displaced farmers who migrated west in the thirties. Also, that an initial issue is that the user has to “define the problem.” At a corporate management level, that becomes political. Perhaps LLMs will speak truth to management. That would be fun to watch.

Zombies Are Us

What a ghastly thought, being a manager in the GPTverse. A Chatbot HR gets Chatbot-generated resumes and uses a Chatbot to filter and pass them on to the Chatbot assisting the human manager. Someone in the high castle is monitoring the process on a large screen. He notes that hiring so far required 14 milliseconds but has now come to a halt because the manager is in the men’s room (GPS: urinal 3). He muses on a Chatbot-controlled catheter that could eliminate such non-task-oriented activities.

Yeah, if AIs make people more productive, then the only options are expansion or cutting workforce.

But I’d rather focus on the positive. Think about how hard it is to get a job today: writing resumes, submitting them, waiting for results, submitting more… And the process is error-prone in the sense that you might miss submitting to what would have been a great employer, or the employer misses out on you, the perfect candidate, because they put some requirements in the job listing they didn’t really need but which disqualified you.

Matching employers to employees is difficult, and the result is an inefficient job market, unhappy employees, and lower productivity.

With AIs in the mix, all you might eventually need to do is announce to the AI that you are on the job market, and it will submit your resume to every employer that’s even close to your requirements. And those employers will respond almost instantly if you aren’t a match, so you can get feedback faster and either re-tailor your resume or upgrade your skills or whatever.

As an employer, I announce to the HR AI that I need a junior developer for team X. The AI looks at what team X is doing, figures out what skills are needed, and within minutes has filtered through every candidate with an open resume, scheduled phone interviews for the best candidates, etc.
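
Just to illustrate the filtering step I have in mind, here’s a toy sketch. A real HR AI would presumably use an LLM or embeddings rather than simple skill overlap, and all the names and fields here are made up.

```python
# Toy illustration of matching candidates to a job's required skills.
# Real systems would be far richer; this only shows the shape of the idea.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set[str]

def match_score(required: set[str], candidate: Candidate) -> float:
    """Fraction of required skills the candidate covers."""
    if not required:
        return 0.0
    return len(required & candidate.skills) / len(required)

def shortlist(required: set[str], candidates: list[Candidate], top_n: int = 3) -> list[Candidate]:
    """Rank candidates by match score and keep the best few."""
    ranked = sorted(candidates, key=lambda c: match_score(required, c), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    req = {"python", "sql", "unit testing"}
    pool = [
        Candidate("A", {"python", "sql"}),
        Candidate("B", {"java", "sql"}),
        Candidate("C", {"python", "sql", "unit testing", "docker"}),
    ]
    for c in shortlist(req, pool):
        print(c.name, round(match_score(req, c), 2))
```

The point isn’t the scoring; it’s that the tedious parts of matching are exactly the kind of work an AI can grind through in minutes.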

The downstream effects of this are progressive. It should make it easier for employees to leave jobs, as the friction for finding a new one will be lower. The old way of hiring employees through job agencies and such could cost as much as a year’s salary. Eliminate that, and you can pay employees more. And having a better match between jobs and employees should drive up productivity.

On the other hand, if you are an employee in a head-hunting firm, you might want to start upgrading your skills, as you may not have a job for long.

Good points. It depends on whether it’s me or HR who instructs the bot.

Both a brain and a CPU have, as their primary purpose, processing information. The brain can process a lot more complicated information than the CPU, and in many more ways, but that’s what makes the CPU only rudimentary.

And this similarity has long been recognized. If you had asked someone three hundred years ago if a man-made object could play chess, the answer would have been “no, because only something that thinks can play chess”. And yet, here we are.

You don’t have to go back nearly that far to hear assertions that “only something that thinks can play chess well” … an example of the perennial moving-of-the-goalposts that has been the practice of skeptics since the earliest days of AI …

In 1965, Dr. Hubert Dreyfus, a professor of philosophy at MIT, later at Berkeley, was hired by RAND Corporation to explore the issue of artificial intelligence. He wrote a 90-page paper called “Alchemy and Artificial Intelligence” (later expanded into the book What Computers Can’t Do) questioning the computer’s ability to serve as a model for the human brain. He also asserted that no computer program could defeat even a 10-year-old child at chess.

… In 1967, several MIT students and professors (organized by Seymour Papert) challenged Dreyfus to play a game of chess against MacHack VI. Dreyfus accepted. Herbert Simon, an AI pioneer, watched the match. He said “It was a wonderful game - a real cliffhanger between two woodpushers with bursts of insights and fiendish plans…great moments of drama and disaster that go in such games.” Dreyfus was being beaten by the computer when he found a move which could have captured the enemy queen. The only way the computer could get out of this was to keep Dreyfus in check with its own queen until it could fork the queen and king, then exchange them. And that’s what the computer did. Soon, Dreyfus was losing. Finally, the computer checkmated Dreyfus in the middle of the board.
MacHack Attack - Chess.com

Something fun for everyone.

Alan Alda and Mike Farrell acted out a scene from an AI-written script for MASH. It was their first time acting together in almost exactly 40 years, since the MASH finale. Very cute.

About minute 24 into this podcast from Alda:

Thanks for that! Pretty cool hearing those guys.

Strangely, Alan Alda sounds like an old man, but Mike Farrell sounds almost exactly like he did 40 years ago.

I know, I said the same thing. It’s a bit sad to hear Alda’s voice finally changing a bit. But…that was BJ!

Here come the multi-modal AIs:

PaLM-E has 562 billion parameters (as opposed to 175 billion for ChatGPT), and combines spatial and linguistic reasoning.

This thing can solve visual puzzles, operate robots intelligently, link images to text when analyzing input, etc.

The linked paper has lots of examples of its visual capability. For instance, shown a picture of a bunch of ingredients on a table, PaLM-E recognizes them, then describes in detail the steps it would take to make a cake batter out of them. Given a picture of a restaurant with some dirty dishes on the table, the AI is asked what it would do to help the situation, and it describes the steps needed to clean everything. It can recognize visual puns and make jokes about images. And apparently, the “positive transfer” means that the images are helping with its linguistic ability and vice versa.

The argument that AIs are very limited because they don’t have context around their word choices is rapidly becoming obsolete.

Multi-modal models have been around for many years. PaLM-E is very cool, though. It’s always cool to slap two pre-existing models together and get extra out of it. It also allows training the vision and language portions separately; I think Flamingo just straight up freezes both. The coolest part, though (also not new), is LLMs serving as a sort of world model. They were completely not designed for it; LLMs are “merely” next-token predictors. But to do that, the models encode world knowledge, and that’s also useful for things like moving robots around.
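
For a sense of what “slapping two pre-existing models together” looks like, here’s a rough PyTorch-style sketch of one common recipe: project frozen vision features into the language model’s embedding space, roughly in the spirit of PaLM-E. The encoder, LLM, and dimensions are placeholders, and only the small projection layer would actually be trained.

```python
# Sketch: bolt a frozen vision encoder onto a frozen LLM with a trainable
# projection. Module names and dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class VisionToLLM(nn.Module):
    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.llm = llm
        # Freeze both pretrained models; only the projection learns.
        for p in self.vision_encoder.parameters():
            p.requires_grad = False
        for p in self.llm.parameters():
            p.requires_grad = False
        self.project = nn.Linear(vision_dim, llm_dim)

    def forward(self, image: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # Image -> vision features -> "soft tokens" in the LLM's embedding
        # space, prepended to the ordinary text embeddings.
        vision_feats = self.vision_encoder(image)   # (B, N, vision_dim)
        soft_tokens = self.project(vision_feats)     # (B, N, llm_dim)
        return self.llm(torch.cat([soft_tokens, text_embeds], dim=1))

# Usage would look roughly like:
#   model = VisionToLLM(my_vit_backbone, my_frozen_llm)
#   output = model(image_batch, text_embedding_batch)
```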

I think robotics is roughly 10 years behind deep learning in other fields (AlexNet was in 2012), but these sorts of results, as well as some deep RL papers like DreamerV3, make me think an explosion similar to the text and vision ones is possible.

And I would like to answer that this is precisely the problem I see coming at us: not whether ChatGPT really understands or not*, but what will happen when somebody hooks a ChatGPT API up to this board (or to Wikipedia, or to Twitter, or whatever) so that it generates contributions or even new topics, which will be answered by other ChatGPT bots, which in turn will bring in more ChatGPT contributions, ad infinitum. How long are the mods going to be able to stop that? And as ChatGPT is much faster than any of us, soon most of the posts here could be AI-generated, read by AIs that use them to train themselves to write even more posts, and more posts, and more…
I imagine that when we readers and posters suspect that this is happening, we will leave, because it would make no sense to stay and read something so impersonal. Except if it is so good and interesting that we like it. But then the whole internet will be a word-soma-producing machine, and soon humans will be only a vanishing minority of the writers. I, for one, do not look forward to that.
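
To give a sense of how little code that would take, here’s a rough sketch using the OpenAI Python client as it existed around the time of this thread (openai < 1.0). The prompt and the actual board-posting step are hypothetical.

```python
# Rough sketch of auto-generating a "contribution" to a thread.
# Uses the pre-1.0 OpenAI Python client; posting to the board is left out.
import openai

openai.api_key = "sk-..."  # placeholder

def draft_reply(thread_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a forum regular. Reply in a casual, human voice."},
            {"role": "user",
             "content": f"Write a reply to this thread:\n\n{thread_text}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

# post_to_board(draft_reply(latest_thread))  # hypothetical posting step
```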
There is an article that argues something similar (I think) in the Atlantic:

*I believe it does not, as it was not programmed to, and I don’t believe this is an emergent property at this stage. But other systems will try, and maybe they will succeed. And when understanding is solved, the problem will shift, perhaps to awareness, or self-consciousness, and so on.

The idea that chat bots will take over written human discourse is basically a subset of the “AI will take over the world” type of fear, although there are specific aspects of AI that are legitimate risks and have already arrived. Just a few examples …

One area of risk is that the power of AI can be misapplied, either deliberately or unintentionally. Existing systems that pre-screen resumes, so that job applications have to fulfill criteria judged by a robot before a human ever sees them, are already a concern for many reasons. An obvious one is that good candidates may be rejected for entirely superficial, ridiculous reasons. A more insidious one is that such screening systems may be discriminatory, not because of any deliberate intent, but because of superficial correlations with the putative success criteria on which they’ve been trained that create systemic bias.

The ability of generative systems like ChatGPT to create impressive new writing, or to demonstrate understanding of an existing narrative and summarize it, has already created problems where students use it to cheat on homework assignments. As further refinements make it even more powerful, these problems may become more pervasive, and it will become virtually impossible to tell whether a piece of writing came from a human or a machine, not only in the educational world but in the commercial one as well. In the commercial world, this will typically boil down to whether the responding agent is actually capable of addressing an issue or is only simulating responsiveness.

The increasing power of AI is also going to involve it more and more in how businesses are run and how they deal with their customers. This is nothing new – computers have already long since “taken over” in the sense that they’re an essential part of all commerce, and their intransigent algorithms are infamous for controlling what employees on the front lines can or cannot do for their customers. With AI becoming more and more pervasive in company policies and customer engagement, I see this impersonal inflexibility becoming pretty much universal in business and in government. If we think these large organizations are impersonal now, imagine a world where you must always start your interaction by dealing with a robot gatekeeper first.

That is Lufthansa customer service (or so they call it). The same goes for Vodafone, my electricity company, my health insurance, and probably a few more I am not thinking of right now. I am sure the interaction will get worse for me, the customer, when it is ruled by more AI, as the AI will be paid for by the company, and the metrics they choose to measure success will include something like “fewer payments made” and will not take customer satisfaction into account.
Just dwelling on the Lufthansa example, the last five or six replies I received from Lufthansa started:

uns ist bewusst, dass Sie schon lange auf eine Antwort von uns warten. Dies bedauern wir sehr. (EN: We are aware of the fact that you have been waiting a long time for a response from us. We regret this very much.)

and ended:

Es tut uns sehr leid, dass Ihnen Unannehmlichkeiten auf Ihrer Reise entstanden sind. Wir würden uns freuen, Sie bald wieder an Bord begrüßen zu dürfen. (EN: We are very sorry that you experienced any inconvenience during your trip. We would be happy to welcome you back on board soon.)

I am sure they will improve that by a lot with the help of ChatGPT. But my satisfaction will not increase.

I just this moment opened a package from Amazon, and a food item in a sealed package was damaged in such a way that it was no longer “sealed for your safety.” Because it’s food, it’s not eligible for a normal return, so I had to “Contact Us,” which began with a logic-tree computer chat that gave me multiple-choice answers and concluded by telling me again that I couldn’t return it. So there’s your robot gatekeeper.

I chose “I have more questions on this,” and it gave me an actual customer service rep I could chat with.

Within about a minute my refund was issued, “as an exception.” The agent clearly has very strict rules about what is allowed and what is not, and if an AI can grant exceptions under the same rules, I don’t really care much whether it’s a person or an AI granting my refund.