The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed

Now you’re getting it! So is everything else, apart from elementary particles, or perhaps the fields that form them (or perhaps there are even deeper levels for us to discover someday).

Fair enough. And how will you move the goalposts further to account for PaLM-E, which has access not only to relationships between words but also to vision, sound, and touch, and to the relationships between those modalities and words?

I haven’t done any calculus stuff with it so I don’t know where those examples might be found. But this is an example of what I was referring to by its ability to generalize problems and develop generic equations:

Me: Can you now write out a general formula for the time T it would take for M machines to produce D devices at a production rate of P devices per minute?

Yes, I can write a general formula for the time T it would take for M machines to produce D devices at a production rate of P devices per minute.

The formula is:

T = D / (M * P)

where:

T is the time it will take for M machines to produce D devices
M is the number of machines
D is the number of devices to be produced
P is the production rate per machine per minute (in devices per minute)
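The formula above is easy to sanity-check with a few lines of Python (the function name is mine, not from the thread):

```python
def production_time(devices, machines, rate_per_machine):
    """Minutes for `machines` machines to produce `devices` devices,
    each machine producing `rate_per_machine` devices per minute."""
    return devices / (machines * rate_per_machine)

# Classic puzzle instance: 100 machines making 100 devices, each machine
# producing 1 device per 5 minutes (rate = 0.2 devices/minute).
print(production_time(100, 100, 1 / 5))  # → 5.0
```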

The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed - #1318 by wolfpup

There’s another example here of it developing a system of equations to solve a problem. As it turns out, the problem can be solved more easily without resorting to laying out equations, but that’s beside the point. The equations were correct, and its answer was correct. And as I said elsewhere, there are many humans who would have failed to solve those problems. In the case of the fish puzzle, they might not only have failed to see the simple shortcut, but might have failed to get the answer at all within a set time. It just seems to me manifestly absurd to insist that there is no real reasoning going on here.

Ok? You’re saying this like it’s somehow in conflict with what I said, but I can’t see how?

If there’s just relationships, no goalposts need to be moved, and the proof I gave in the other thread applies unchanged.

Even if not, I’ve given a general argument against the possibility of conscious AI before.

Before I dive too deeply–do you believe there’s something “extra” in the human sensory apparatus that is not just classical data?

Depends on what you mean by ‘extra’, and what you consider to be the boundaries of the human sensory apparatus. ‘Extra’ in the sense of ‘extra-physical’: no, there’s none of that there, I’m a thoroughgoing materialist. Also, the sense organs transmit nothing but classical data in the sense of neuron spike-trains to the brain.

Somewhere in the brain, however, there is a self-referential process that evolves a pattern to be in accord with the ‘fitness-landscape’ set up by the data transmitted through the senses. There, we have something ‘extra’ in the sense of non-computational, namely, direct access to its non-structural properties. Although of course I don’t think of this as ‘extra’—the non-computable and hence, non-structural seems pretty ubiquitous, it’s only that our models are essentially structural that makes it seem otherwise.

…this doesn’t make any sense. What is a “non-structural property”? Can you give an example of one? How does the brain have “direct access” to “non-structural” properties?

I’m no neuroscientist, but I’ve done a decent amount of reading on computers, neuroscience, and philosophy, and I don’t believe I have ever encountered those terms in this context before; a Google search doesn’t help.

How do we know it’s not an intrinsic one?

I am honestly amazed that you regard this as evidence for genuine concept formation, because you followed up that original post (in which you described ChatGPT as a “not very bright kid” that “needed lots of help” when it got “suddenly stupid”) with this post, where you found it in “full moron mode”: you gave it an identical machines/devices/production/time question and it produced exactly the same error you’d tutored it not to make! You had to remind it of the formula you’d nudged it toward, in the same session, before it would apply it to exactly the same problem. This strikes me as a very good reason to be skeptical that there is real reasoning going on here.

ETA: I’ve just had a go with the same sort of question. I got a tissue of plausible nonsense that looked “mathy” but wasn’t actually maths. Including direct contradictions. What ChatGPT does is amazing, but it’s not reasoning.

For the reason I already gave – that similar systems, like Watson, have shown that candidate responses can be subjected to confidence rating and ranked and rejected accordingly. Confidence can be based on many factors, such as a fixed internal scheme premised on the number and quality of corroborating sources, or dynamically acquired through a type of supervised learning called relevance training.
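Mechanically, confidence-based ranking of candidate answers can be sketched in a few lines of Python (the scores, weights, and threshold here are invented for illustration; Watson’s actual scheme was far more elaborate):

```python
def confidence(evidence_scores, weights):
    """Weighted average of per-source evidence scores (each in [0, 1])."""
    return sum(w * s for w, s in zip(weights, evidence_scores)) / sum(weights)

def rank_candidates(candidates, weights, threshold=0.5):
    """Rank candidate answers by confidence; reject those below threshold."""
    scored = [(confidence(scores, answer_weights := weights), answer)
              for answer, scores in candidates.items()]
    return sorted([p for p in scored if p[0] >= threshold], reverse=True)

# Two candidate answers, each scored by two hypothetical evidence sources;
# the first source is weighted twice as heavily as the second.
candidates = {"Toronto": [0.2, 0.3], "Chicago": [0.9, 0.8]}
print(rank_candidates(candidates, weights=[2.0, 1.0]))
```

Here “Toronto” falls below the threshold and is rejected, while “Chicago” survives with a confidence around 0.87.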

There are two problems with your conclusion. The first is that, as it’s currently structured, ChatGPT somewhat randomizes its best-token predictions. This is useful for natural language generation but less than helpful when solving problems in logic or math, and it results in the phenomenon we’ve already seen where ChatGPT will sometimes fail to get the right answer on the exact same problem that it successfully solved earlier. This does not imply a lack of reasoning skills, just a different failure mode than we see in humans.
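The randomization in question is (roughly) temperature sampling over the model’s next-token distribution; a toy sketch, with entirely made-up logit values:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Softmax over logits scaled by temperature, then random sampling.
    As temperature → 0 this approaches greedy (deterministic) decoding."""
    scaled = [v / temperature for v in logits.values()]
    m = max(scaled)                                  # for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits), weights=probs, k=1)[0]

# Hypothetical next-token logits after "2 + 2 = ":
logits = {"4": 3.0, "5": 1.5, "four": 0.5}
print(sample_token(logits, temperature=0.8))  # usually "4", but not always
```

This is why the same prompt can produce a correct answer one time and a wrong one the next: the decoder does not always take the single most likely token.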

The second issue is somewhat related, and is just simply the fact that the failure to solve some particular logic puzzle does not by itself imply lack of reasoning skills; after all, humans consistently fail some questions on IQ tests while getting many others correct. All that is required to show that ChatGPT possesses some level of reasoning skills is to show evidence of capabilities for generalization and abstract reasoning on some substantial number of non-trivial logic puzzles.

Re the OP: the next page is that GPT-type systems should be broadly applied with positive results. What I have been able to find on GPT applications so far is long on hope and short on examples. Mostly it’s text and front ends for search engines, with speculation about filtering data and acting as an adviser or assistant. OK, maybe, but where’s the $125 billion market? Definitely not as a personal financial advisor.

This may be a fad like bubble memories, tunnel diodes, fuzzy logic, genetic programming and neural nets, a useful technology that finds a niche but doesn’t displace much of anything.

Has anybody seen an actual example of deployment of the technology?

Why are you asking for deployment examples when the technology is obviously still in the research stage?

Its role as an intelligent advisor is potentially an immense market, touching not only on highly context-sensitive information retrieval but also interpretation, analysis, and writing in general. I recently had a very long and illuminating conversation with ChatGPT about the future of generative language models that would be hard to distinguish from a conversation with a knowledgeable AI researcher – and that was with ChatGPT 3.5 which is still quite primitive in the big picture. Its ability to summarize and synthesize information into a well-written composition can be an immensely useful starting point for writers. The possibilities are endless. We’re just not ready for commercial deployment yet.

OpenAI claims it has deployed 300 systems to paying customers.

I’m not clear on where you’re going with this line of argument. ChatGPT is in the research stage. I would surmise that virtually all of these deployments are to developers who are working with the API, and not to end users. In fact here’s a good example discussion about why ChatGPT isn’t yet ready to become an intelligent customer service agent.

Anything that’s not relational; basically, the same as intrinsic properties, or ‘structure-transcending’ properties, as Strawson puts it.

Well, on my model, everything you ever directly experience is an example—the intrinsic properties are the ‘raw stuff’ of experience, which are shaped by a self-referential process into a model of the world.

This is getting into the depths of my model. If you want to discuss that, there’s a thread for it; it’s off-topic here. But basically, there’s a self-referential process modeled on von Neumann’s design of a self-reproducing automaton, which adapts itself to data from the environment. But this process runs into undecidable questions regarding the proof-capacities of (a modified version of) itself. These can’t be solved on a structural, i.e. theoretical, level; thus, the non-structural properties decide among alternatives that are unresolvable computationally. My paper goes into the details of this, but it gets a bit mathy. Here’s a popular level summary.

‘Direct access’, or ‘direct acquaintance’, I basically take from Bertrand Russell: “We have acquaintance with anything of which we are directly aware, without the intermediary of any process of inference or any knowledge of truths.” (From The Analysis of Matter.) Although Russell (presumably) intended to argue for a direct acquaintance with concrete (intensionally given) structures, while I believe it is through acquaintance with non-structural properties that any such structure is known (as being the stuff fulfilling the given relations).

You may be aware that many programmers absolutely have been using, e.g., GitHub Copilot for productivity reasons since it came out, and could not get their work done without it. Not quite a fad.

GitHub supposedly says that over 40% of the code being checked in is now AI-generated.

There are some 80 plugins for ChatGPT, including services like Expedia, OpenTable, Kayak, Wolfram Alpha, etc. Many of these are now in beta.

ChatGPT is already in use in corporations around the world. Some jobs now require prompt-engineering knowledge. A great use for these tools is to author Excel spreadsheets, create PowerPoint presentations from documents, create visualizations, summarize emails and meeting transcripts, etc.

To that end, Microsoft has incorporated ChatGPT into its Office 365 applications. The list of things it can do is pretty impressive, and not affected by its limitations (hallucinations, etc.).

There are browser plugins in use already which will do things like summarize a web page or blog, automatically translate stuff, or allow you to select a text block and have ChatGPT describe it, summarize it, whatever.

Interesting link. I suspect you are right that the current customers are developers. That seems to be what’s hitting the media. But claims like:

Create a Virtual Assistant

ChatGPT can be used to create virtual assistants that can handle day-to-day tasks for businesses, such as scheduling appointments, sending emails, and managing social media accounts. This could be a great way to streamline the workflow, automate repetitive tasks, and save time for busy professionals so they can focus on more important jobs such as innovation and research.

Those repetitive tasks are called ‘work’, especially in innovation and research. I don’t see GPTs doing ‘work’. And any customer that gets sloughed off to a bot is headed for your competitor. Companies have PR guys with VP titles just to make customers feel important. Of course, maybe there will be VP bots that convey status, like a direct line to my personal VP bot that sends emails with gold borders. Could fill a need. After all, the world does run on bullshit.

I just posted a dozen ways GPTs are currently doing ‘work’. I’d say over 40% of GitHub check-ins qualifies as ‘work’.

My hotel in Vegas has an AI assistant. You can call it and say, “I need two tickets to a show. Can you get them for me?” And it will. They call it a ‘digital concierge’. We’ve used it, and it works well. I’m not sure what model they are using.

My wife uses ChatGPT for work. It’s good for things like formatting a spreadsheet for you, creating complex cell formulas, etc. You can just give it a table of data and say, “I need a spreadsheet that breaks this down [insert way you want data formatted], plus I need a line graph with a regression line,” and ChatGPT will give you everything you need. Or, you can take a table of data in text format and ChatGPT will convert it to CSV for importing into other apps. If you need to print math formulas, ChatGPT can create the LaTeX-formatted formulas from descriptions.
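For the simple text-table-to-CSV case, the underlying transformation looks like this in plain Python (a sketch assuming whitespace-separated columns with no embedded spaces; the sample data is invented):

```python
import csv
import io

def table_to_csv(text):
    """Convert a whitespace-separated text table to CSV (simple cases only)."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in text.strip().splitlines():
        writer.writerow(line.split())
    return out.getvalue()

table = """name qty price
widget 3 1.50
gadget 7 2.25"""
print(table_to_csv(table))
```

The advantage of asking a chatbot instead is that it copes with messier input than a fixed script like this one can.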

If someone sends her a contract revision, she can drop it into ChatGPT along with the original and ask it to summarize the differences. When Microsoft Office 365 is updated at her work, she’ll be able to say things like, “Listen to this meeting. Create a transcript, identify the action items and who signed up for them, and then send a copy of the summary to everyone in the meeting plus my manager distribution list. Write the preamble as me, in my style. Show me the message before sending it.”

ChatGPT is already becoming a huge time saver and analytics tool for those who know how to use it. It can do a million different repetitive tasks that qualify as ‘work’.

If you are a white-collar worker, you should either already be using ChatGPT for mundane things, or you should be learning how to use it for the time when you will have to use an AI to keep up with everyone else.

A huge application for GPTs will be as dedicated question answerers on specific topics. For example, a chatbot fine-tuned on the ISO 9001 manual along with a company’s training manuals could answer just about every question employees need to ask to make sure they are in compliance. Onboarding a new employee with a chatbot trained on all the company manuals will be less labor-intensive and will allow the employee to ask questions at any time. University course lists, product troubleshooting, etc. All those ‘AI’ expert systems out there that try to walk you through a problem are mostly garbage, but they are going to get a whole lot better.
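As a toy illustration of the retrieval half of such a question answerer: score each manual section by word overlap with the question and return the best match. The manual text here is invented, and a real system would use embeddings or fine-tuning rather than raw word overlap:

```python
import string

def tokens(text):
    """Lowercase, strip punctuation, split into a set of words."""
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def best_section(question, sections):
    """Return the manual section sharing the most words with the question."""
    q = tokens(question)
    return max(sections, key=lambda s: len(q & tokens(s)))

manual = [  # invented stand-ins for company manual sections
    "Calibration records must be retained for three years.",
    "Nonconforming product shall be segregated and labeled.",
    "Internal audits are scheduled quarterly by the quality manager.",
]
print(best_section("How long do we keep calibration records?", manual))
```

Once the relevant section is retrieved, a language model would draft the actual answer from it; this retrieve-then-generate split is the standard way to ground a chatbot in company documents.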

Home schooling just got a major boost. You can tell ChatGPT to create curricula, lesson plans, handouts, problem sets, exams, etc. It’ll grade them, too. And the kids can talk to it for help. Basically every home-schooling parent now has a teaching assistant and an admin and free teaching materials.

All of this qualifies as ‘work’ being done right now by AIs.

Thanks, that’s what I was looking for. Formatting a spreadsheet makes sense. I’d be leery of turning it loose on a contract. It does get creative.