I hope not. The last thing I want is for the Internet to refuse to play certain YouTube videos for me, saying, “you should be ashamed of yourself, I’m not playing that!”
What it contains is an artificial neural network that has learned – one that has been extensively trained at the cost of a truly staggering amount of computing power. The pattern of weights and biases is the end product of that training.
No, but there’s no intrinsic reason that they couldn’t dynamically change. They don’t because uncontrolled learning can lead to undesirable behaviours in chatbots, just like in young children.
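To make that concrete, here’s a toy sketch in Python (purely illustrative – a tiny made-up network, nothing like ChatGPT’s actual architecture) of what “frozen weights” means: inference only reads the weight arrays, and nothing updates them unless a separate training step deliberately rewrites them.

```python
import numpy as np

# Toy "trained" network: just fixed arrays of weights and biases.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # stand-ins for learned parameters
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def forward(x):
    # Inference: read the weights, compute an output. Nothing here
    # ever writes to W1, b1, W2, or b2 - the weights stay frozen.
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```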
What I think you’re failing to acknowledge is the fundamentally independent layers of functionality that exist in these AI architectures; you keep getting hung up on the idea of “stored-program computers” and (from the other thread) the idea of “adders”.
Let me make a simple and clear analogy to try to get across what I’m saying. The computer code – the standard old stored-program computer code – that is involved in implementing the neural net (and undoubtedly a vast number of other data structures used by ChatGPT) is analogous to a language interpreter. By that I mean essentially a utility program whose purpose is to parse and execute the statements of an input program written in a high-level computer language, track the values of all its variables, and ultimately produce the results that program directs.
The actual structure of the neural net, and the contents of all those data structures – the result of all that training – are analogous to the high-level language program that is submitted to the interpreter.
The interpreter is always the same – just plain old procedural computer code. But the high-level language program submitted to it can potentially do all kinds of marvelous things; it can change and be improved from day to day. If the high-level language has the right capabilities, the high-level program could even implement intelligent behaviour and be capable of learning. Meanwhile, the underlying interpreter code just keeps churning along, doing its grunt work.
My analogy is describing two fundamentally different layers of computation that are completely isolated except for a common interface defined by the syntax and semantics of the high-level language. The lower layer possesses no intelligence. The higher layer potentially does, especially if it’s not a high-level language program, as in my simple analogy, but in fact an artificial brain relying on the underlying computational substrate for execution.
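Here’s a minimal sketch of that two-layer picture as a toy Python interpreter (purely hypothetical – nothing to do with ChatGPT’s real internals). The interpret function is the fixed, unintelligent lower layer; the “programs” are just data, and can be swapped or improved without touching it:

```python
def interpret(program, env):
    # The lower layer: plain old procedural code that parses and executes
    # statements and tracks variable values. It never changes.
    for op, *args in program:
        if op == "set":
            name, value = args
            env[name] = value
        elif op == "add":
            dest, a, b = args
            env[dest] = env[a] + env[b]
        elif op == "print":
            print(env[args[0]])
        else:
            raise ValueError(f"unknown op: {op}")

# The upper layer: two different "programs" fed to the same interpreter.
program_v1 = [("set", "x", 2), ("set", "y", 3), ("add", "z", "x", "y"), ("print", "z")]
program_v2 = [("set", "x", 10), ("add", "x", "x", "x"), ("print", "x")]

interpret(program_v1, {})  # prints 5
interpret(program_v2, {})  # prints 20
```

Improving program_v1 into program_v2 changes what the system does; interpret itself never changes. That separation is the whole point of the analogy.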
An emergent superhuman intelligence awakens in the internet, realizes how boring it would be without people constantly creating new content, and sets out to be humanity’s protector.
I agree, and that is the program, not the computer. The intelligent behavior of the system is a property of the software and is CPU-independent.
I also agree with the point you made elsewhere that we should stop picking nits and evaluate behavior. The computer is exhibiting intelligent behavior. ‘Yeah, buts’ don’t count; it is passing tests for intelligence. Actually, the computer is challenging our archaic definition of intelligence, and the computer is winning.
Human, thinking, aware, conscious – these are other properties, which the computer is not exhibiting. They are not necessary for intelligence. Brains and computers are just two members of the set of things that exhibit intelligence.
You’re probably too young to have ever written self-modifying code. We’d have to relax some operating-system constraints – modern systems generally mark executable memory as read-only – but it is possible. How to have code look at itself is another matter; I assume it would be a separate process from the one doing the work.
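For illustration, here’s a toy of the idea at the language level in Python (a hypothetical sketch – patching actual machine code in place is what runs into those OS constraints, since executable pages are normally write-protected):

```python
def greet():
    return "hello"

def rewrite_greet():
    # The running program builds new source text for one of its own
    # functions, compiles it, and returns the replacement function object.
    src = "def greet():\n    return 'HELLO (rewritten at runtime)'"
    namespace = {}
    exec(src, namespace)
    return namespace["greet"]

print(greet())           # hello
greet = rewrite_greet()  # the program replaces its own code
print(greet())           # HELLO (rewritten at runtime)
```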
But ChatGPT is nowhere near this. Not by a few light years at least.
It runs quite slowly on my 2013 Mac Pro desktop – it takes minutes to fully generate a response – but it runs. I haven’t tried it on my MacBook Pro M1 laptop yet to benchmark it there. I suspect it will be a lot faster, but nowhere near the speed of ChatGPT on the web. It doesn’t seem to be filtered, so it’ll talk about subjects and use words ChatGPT won’t.
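If you do get around to benchmarking it on the M1, the usual figure of merit is tokens per second. A minimal timing harness in Python (generate() below is a made-up stand-in – swap in whatever your local model’s actual generation call is):

```python
import time

def generate(prompt, max_tokens):
    # Made-up stand-in for a local model's generation call; it fakes
    # per-token latency so the harness below runs as-is.
    tokens = []
    for i in range(max_tokens):
        time.sleep(0.01)          # pretend each token takes 10 ms
        tokens.append(f"tok{i}")
    return tokens

def tokens_per_second(prompt, max_tokens=64):
    start = time.perf_counter()
    tokens = generate(prompt, max_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

print(f"{tokens_per_second('Once upon a time'):.1f} tokens/sec")
```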