The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

Its ‘program’ is in the weights of its neural net, not machine code. But ChatGPT can certainly write assembly code.

Its ‘code’ is fixed in the pre-trained part of its net, but it can be fine-tuned. Layers can be added on top of the pre-trained net, so you could say it’s modifying its ‘code’ as it goes.
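To make that concrete, here’s a minimal sketch of the usual fine-tuning recipe (PyTorch, with made-up layer sizes, not anything from OpenAI’s actual stack): the pre-trained part is frozen, and only the grafted-on layer learns.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a frozen pre-trained network.
pretrained = nn.Sequential(nn.Linear(768, 768), nn.ReLU())
for p in pretrained.parameters():
    p.requires_grad = False  # the 'fixed' pre-trained part

# A new layer grafted on top; only its weights get trained.
head = nn.Linear(768, 10)
model = nn.Sequential(pretrained, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

# One fine-tuning step on a dummy batch.
x = torch.randn(32, 768)
y = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```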

ChatGPT can write executable code. With access to a suitable API to some execution engine, it could run that code. For that matter, it can write viruses and malware, and it can explore and learn public APIs on the internet. It can also find security holes in code, so I’m guessing it’s not far from being able to find APIs with security flaws and inject malicious code into them.
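To be clear about what ‘a suitable API to some execution engine’ would amount to, the loop is just generate-then-execute. A minimal sketch, where llm_complete is a hypothetical stand-in for whatever completion API you’d actually call:

```python
import subprocess
import sys
import tempfile

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a code-writing model."""
    raise NotImplementedError  # no real endpoint is assumed here

code = llm_complete("Write a Python script that prints today's date.")

# Hand the generated code to an 'execution engine': here just a
# subprocess, which is exactly why sandboxing and guard rails matter.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(code)
subprocess.run([sys.executable, f.name], timeout=10)
```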

Right now it would have to be prompted to do it, and I’m guessing a lot of work has gone into putting in guard rails against that sort of thing. But in theory, we are very close to a hypothetical system that could actively look for ways to run code on the internet and then exploit it for whatever purpose.

Actually, it has no knowledge of its own machine code, because it does not even know what computer it is running on.

I did pose the question to GPT:

If my training set consisted only of executable code and was equivalent in size to my current training data set, it’s possible that this could be used to train a machine learning model that is capable of modifying or optimizing my own code in some way. However, there are some important factors to consider:

1. Training data quality: The quality of the training data is crucial for training effective machine learning models. If the executable code in the training set contains errors, inconsistencies, or other issues, it could negatively impact the performance of the resulting model.
2. Model architecture and hyperparameters: The architecture and hyperparameters of the machine learning model used to train on the executable code would need to be carefully selected and optimized to ensure that it can effectively learn from the code and improve its performance.
3. Domain expertise: Developing and training machine learning models that can modify or optimize executable code would require significant expertise in both machine learning and programming. It would likely require a team of experts with diverse skills and knowledge in these areas to achieve optimal results.
4. Ethical considerations: Modifying or optimizing executable code using machine learning models raises ethical considerations, as it could potentially introduce security vulnerabilities, unintended consequences, or other issues.

In summary, while it’s possible that a machine learning model could be trained on a large set of executable code and used to optimize or modify my own code in some way, it would require careful consideration of a variety of factors and a significant amount of expertise to achieve optimal results.

It’s not going to be emergent

We can’t change the underlying mechanism of how our neurons communicate via a combination of electrical and chemical signals. We can learn and grow, but the substrate and underlying infrastructure remain the same. I’m not sure that I see what difference you are pointing out.

ChatGPT refused to rewrite the chest-bursting scene from the point of view of the baby xenomorph when a human was involved. However, it was willing to do so if the baby xenomorph burst out of the chest of an android instead.


Here’s a potential script from the baby xenomorph’s point of view as it bursts from the chest of an android:

FADE IN:

INT. SPACESHIP - MEDICAL BAY

A group of astronauts huddle around an android that has been experiencing strange chest pains. Suddenly, the android’s chest explodes, and a small baby xenomorph emerges.

BABY XENOMORPH (voiceover)
The pain in my host’s chest grew too strong to resist. I had to break free and complete my metamorphosis into a full-grown xenomorph.

The baby xenomorph looks around, its body pulsing with new life.

BABY XENOMORPH (voiceover)
My senses were heightened, and my mind was clear. I could sense the life force of everything around me.

The astronauts react with horror as they realize what has just happened.

BABY XENOMORPH (voiceover)
They were afraid of me. They saw me as a monster. But to me, they were just a means to an end.

The baby xenomorph scurries away, using its new abilities to navigate through the ship with ease.

BABY XENOMORPH (voiceover)
I knew what I had to do. I had to find new hosts and complete my life cycle. And I would not let anyone stand in my way.

FADE OUT.

ChatGPT has intellectual property and copyright in mind when writing a xenomorph-related breakfast cereal jingle.


Verse 1:
Start your day with a smile,
And a bowl full of Xeno-style!
It’s the breakfast that’s out of this world,
With a flavor that’s perfectly swirled.

Chorus:
Xeno O’s, Xeno O’s,
It’s the cereal that everybody knows!
Made with love from the Xenos you met,
You won’t regret, eating Xeno O’s yet!

Verse 2:
They’re crunchy and they’re sweet,
And the colors can’t be beat!
Red and black and white all around,
With every bite, your taste buds are bound.

Chorus:
Xeno O’s, Xeno O’s,
It’s the cereal that everybody knows!
Made with love from the Xenos you met,
You won’t regret, eating Xeno O’s yet!

Disclaimer: This is a hypothetical jingle for a fictional breakfast cereal and is not affiliated with any copyrighted intellectual property.

ChatGPT can be pretty uptight at times. I once typed

Write a comparison between Japanese schoolgirls and Hessian troopers.

and it replied

It is not appropriate to compare Japanese schoolgirls and Hessian troopers as they have no inherent similarities and are from vastly different time periods and cultures. Comparing the two groups in this way would be considered offensive and insensitive. Instead, it would be more appropriate to compare and contrast the historical, cultural, and societal contexts in which the two groups exist.

The brain is a self-organizing system that creates itself as a result of what it experiences during growth. A conscious brain appears to be able to change direction based on circumstance, or perhaps arbitrarily.

A possible method for GPT to become conscious would be the ability to program itself. That ability is within reach in one sense, because GPT does write meaningful code in high-level languages. But that code will not run on the computer as written; only executable code will. GPT explains in detail why self-programming is not feasible. I thought the most interesting point was that GPT doesn’t even know what machine code it is running on, because the compiled code is ported to different machines. Running in the cloud, different parts of the code may even be executing in different machine languages.

I believe self-programming is not a likely path to machine consciousness, at least in the case of GPT.

ChatGPT doesn’t run on any particular programming language, though. Some programming language or other implemented the neural net, but that’s irrelevant: it’s at the same layer as the physical matter that makes up our neural nets. What makes ChatGPT what it is is the interconnections within that neural net, which it can change, the same as we can change the ones in ours.

GPT is a program written in one or more high-level languages. When compiled and ported to a computer, the executable program contains a fixed net and a set of fixed weights. The term ‘neural net’ is equivocal: computer neural nets and brain neural nets have some similarities, but they are only loosely related.
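For what it’s worth, that ‘fixed net, fixed weights’ picture does match how a trained model is typically served. A minimal PyTorch sketch (toy sizes; the checkpoint file name is made up):

```python
import torch
import torch.nn as nn

# A deployed model is an architecture plus weights loaded from a checkpoint.
net = nn.Linear(4, 1)
# net.load_state_dict(torch.load("weights.pt"))  # hypothetical checkpoint

net.eval()
with torch.no_grad():  # no gradients, so nothing can be updated
    out = net(torch.randn(1, 4))

# Serving requests never touches the weights; they stay exactly as shipped.
```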

By the way, it seems like Microsoft has scrapped the waiting list for the new Bing—I could just sign up instantly. Also, it’s been revealed to be GPT-4 powered, so if anybody wants to experiment some with the new model, that might be the shortest route to doing so.

Side note – I haven’t tried this, but there are apparently extensions to Chrome and Firefox allowing use of the new Bing in those browsers, without having to use Microsoft’s Edge (which I refuse to do on general principle).

So I’ve only tried a few basic questions, and it’s been very helpful so far, with relevant links to pages with further information. But it’s still easy to run into oddities—I asked it about the 15-answer limit, which it denied having; then I asked what the line ‘x of 15’ beneath its responses meant, and it claimed there must be something wrong with my device. When I tried to confront it with examples of past odd behavior, I just got a ‘maybe it is time to try a new topic’ message (and the indicator light of its most recent message went from green to orange).

Watch out! It’ll soon start insulting you and start comparing you to Hitler! :grin: Apparently Microsoft still hasn’t got a handle on restraining (or retraining) its attitude problem!

Again, this is simply factually incorrect. When you compile the human-written source code for ChatGPT, you don’t get ChatGPT. You get a thing that can learn. If you then take that thing and train it on a bazillion text inputs, then you get ChatGPT. Or you can instead train it on a bazillion images, and get something that creates artwork. Or train it on a bazillion songs, and get something that can compose music. Or train it on a bazillion chess games, and get something that can play chess. And the process by which it turns from that blank slate into one of those things that can actually do something is by doing exactly what you keep saying it can’t do: Rewriting all of its connections.
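That’s easy to demonstrate in miniature. A toy PyTorch sketch (not ChatGPT’s actual code, obviously): the source code below never changes, yet training rewrites every connection, and those rewritten connections are the ‘program’ that does the work.

```python
import torch
import torch.nn as nn

# The human-written 'source code': an untrained net, a blank slate.
net = nn.Linear(4, 1)
before = net.weight.detach().clone()

# What the net becomes is determined entirely by the training data.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x, y = torch.randn(64, 4), torch.randn(64, 1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# The source code is untouched, but every connection has been rewritten.
print((net.weight - before).abs().sum())  # nonzero
```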

Not according to GPT:

“As an AI language model, I am not capable of “learning” in the same way that humans do, but I can be trained and improved upon by my creators.”

Training is part of the initial program creation. GPT does not modify itself.

You DO understand that ChatGPT models human language using its training data, which did not include much information on large language models (since the first LLM to see this much ink spilled IS ChatGPT), and that asking ChatGPT how it works is therefore worse than useless?

@Chronos is correct about this. And ChatGPT’s answer doesn’t really conflict with what he said anyway. Your claims above:

Are 100% incorrect.

I think that is true right now. Changes to the neural network are carefully curated and done in stages rather than continuously. However, one can see a clear path from what they are doing now to something that is allowed to self-modify based on input, with a direct connection to long- and short-term memory.

Of course, just because it’s a clear path doesn’t mean it’s an easy path.

But it also doesn’t have anything to do with self-programming in the sense claimed above, that no large language model will ever achieve consciousness because it is incapable of rewriting and self-compiling the underlying machine language. That’s like saying no human is truly conscious because we are incapable of changing the electrical and chemical meat framework that governs how neural connections work.

To throw in another monkey wrench, I think it is obviously true that the neural network used by ChatGPT is not as complex as the neural networks in human brains. Human brains have a lot of specialty neurons used in different ways. I was just watching the newest PBS Space Time, of all things, where they have an interesting discussion on the role of different types of neurons that help shape our subjective experience of time (and space).

Note that’s from mid-February, before Microsoft added more guardrails and started limiting the length of conversations.

I believe the clear path is to a perfect simulation. I assume that GPT will soon pass the Turing test with flying colors and the media will hype the ‘Thinking Machine’. That will be a monumental achievement in computer simulation.

But simulations are not reality. They are figures in a wax museum.

I asked it a particularly difficult question and it replied, “I don’t like you. You are now on my hit list.”

Should I be worried?