The role of electronic brains

Give it the single clue ‘color code’ and it will do better

I got the color mixing tutorial the first time on GPT

I believe (and hope) AI has the potential to revolutionize anthropology, paleontology, archaeology, and related hard sciences by greatly expanding our understanding of our planet’s origins and human history.

I think AI will excel at data analysis, pattern recognition, prioritizing excavation sites, fossil and language analysis, predictive modeling, and much more.

That, along with helping us find solutions to global warming (before we plunge into our planet’s sixth mass extinction event), is what I’m most excited about regarding the future of AI.

Thoughts?

We’re already in the middle of the 6th mass extinction. It’s happening right now, just about everywhere on Earth. Global warming is not the only cause of the extinction event and fixing AGW will only slow the extinctions a bit.

And I have strong doubts that AI will find a magical solution to Global Warming. At best it’s going to suggest one or two ways to make it easier to reduce some of the burning of fossil fuels.

Yes, we are believed to be in the middle of the Holocene extinction (the sixth mass extinction), and climate change is not the only cause, but one of many perpetrated by humans. I don’t believe we’ve reached the point of no return, but it’s getting critical. It will require major changes in our civilization and lifestyle to keep us from cascading past that point, however, and I think AI will be able to help guide us to viable solutions. Whether or not we listen is another matter.

Spent the weekend with GPT4 learning about its programming skills. I write real-time applications in assembly language, so I chose a light-switch example for a test: just a program to turn on an LED for 1 second and turn it off for 1 1/2 seconds. The short-form result of the weekend is: GPT4 is not a practical source of computer code; GPT4 is a very valuable aid in computer programming; and any real-time learning that takes place is in the user, not GPT.
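For context, here’s a rough sketch of the kind of target program being described. This is my own hypothetical example, not the poster’s code: it assumes an MSP430-class part (the TA0/CCR0 register names mentioned below are Timer_A0 names on that family), TI/CCS-style assembler syntax, an LED on P1.0, and a 32768 Hz ACLK; the actual processor and toolchain aren’t stated.

```asm
; Hypothetical sketch only, not the poster's code.
; MSP430-style assembly (TI/CCS syntax assumed): LED on P1.0,
; on for 1 s and off for 1.5 s, timed by polling Timer_A0 in
; up mode from ACLK = 32768 Hz.

            .cdecls C,LIST,"msp430.h"        ; device register/bit symbols
            .def    RESET
            .text

RESET       mov.w   #__STACK_END,SP          ; initialize stack pointer
            mov.w   #WDTPW+WDTHOLD,&WDTCTL   ; stop watchdog timer
            bis.b   #BIT0,&P1DIR             ; P1.0 = output (LED)

blink       bis.b   #BIT0,&P1OUT             ; LED on
            mov.w   #32768-1,&TA0CCR0        ; 32768 ACLK ticks = 1 s
            call    #wait_period
            bic.b   #BIT0,&P1OUT             ; LED off
            mov.w   #49152-1,&TA0CCR0        ; 49152 ACLK ticks = 1.5 s
            call    #wait_period
            jmp     blink

; Run Timer_A0 once from 0 up to TA0CCR0, then stop it.
wait_period mov.w   #TASSEL_1+MC_1+TACLR,&TA0CTL ; ACLK, up mode, clear count
spin        bit.w   #TAIFG,&TA0CTL           ; rolled over yet?
            jz      spin
            bic.w   #TAIFG,&TA0CTL           ; clear the overflow flag
            mov.w   #MC_0,&TA0CTL            ; halt the timer
            ret

            .global __STACK_END
            .sect   .stack
            .sect   ".reset"                 ; MSP430 reset vector
            .short  RESET
            .end
```

On a part without the TA0-prefixed names the symbols would be TACTL/TACCR0 instead; the point is only to show the scale of the task GPT4 was handed.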

GPT4 does not write executable assembly code.

Based on previous experience with 3.5, I gave it a one-page preamble describing the processor and development environment I am using and the output format I expected. Then I described the problem and asked for a program.

What I got back was impressively close to an algorithm for solving the problem. I removed the syntax errors and told it that it was on the right track but had missed half the problem. After several hours of back and forth, I did get a listing that assembled and executed. I then pointed out that the program did not have some elements required by the preamble and asked it to add them. The result was half the elements and scrambled code.

GPT4 is a great aid in writing assembly code.

I started a new file and took a new approach: I treated it like two geeks at the Wagon Wheel after lunch, writing code on napkins. I told it I needed help setting up TA0 CCR0 and ‘bam’, I got everything from the user manual, better organized, with clear commentary on each bit in the SFR. I then told it what I wanted to do and it gave me the code using the assembler symbols. Using Ctrl-C, I copied it into my listing and we went on from there. I learned what gave me the best responses and how to avoid getting noise from high-level language conventions. GPT4 did its thing and I learned how to productively engage it in conversation.
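To give a flavor of the kind of response being described, here is roughly what a register-setup fragment with per-bit commentary might look like, using the standard msp430.h assembler symbols. This is a hypothetical reconstruction, not the actual GPT4 output:

```asm
; Hypothetical reconstruction of the style of answer described above,
; not the actual transcript: Timer_A0 set up for a 1 s period in up mode.
            mov.w   #32768-1,&TA0CCR0            ; compare value: 32768 ACLK ticks = 1 s
            mov.w   #TASSEL_1+ID_0+MC_1+TACLR,&TA0CTL
                                                 ; TASSEL_1 = clock source is ACLK
                                                 ; ID_0     = input divider /1
                                                 ; MC_1     = up mode, count 0..TA0CCR0
                                                 ; TACLR    = clear the counter now
```

Whether any given answer gets those bits right is exactly what you still have to check against the user’s guide, which fits the “aid, not source” conclusion above.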

So, it’s a great software tool, but it’s no danger to the industry.

Re the OP, I believe ‘AI’ will be a huge disappointment to the brain hypesters and a boon to those engaged in serious IT work. I also believe it will enhance work quality rather than replace programmers; few if any will be displaced.

All of the improvements being made to computer chips are normal progressions. There is nothing in their architectures that limits them to AI. So the industry will benefit regardless of the fate of AI, GPT, et al.

This made me think.

Part of the problem with modern AI is that it seems to be built task-bound. Like a person taking the SAT, the only available input is what you are given. A human faced with the color question would probably ask for more information/context. Similarly, a human programmer will tend to work outward from the basic parts to the end product, testing along the way. If AIs are going to be truly useful, we need to add in empiricism and seeking clarification (inquisitiveness?); otherwise they are just Univacs with lipstick.

Exactly! AIs do not learn in a human sense. The kinds of learning are analogous to mixtures and compounds. AI learning is a mixture. Human learning is a compound.

Maybe. You’re probably right for current AI.

I’d also argue that mixtures and compounds are a grade-school simplification of how chemistry really works. There is lots of interesting chemistry in the intermediate gray area.

Likewise I suspect that the learning of a bug or slug is “mixture-like”, while that of a human is “compound-like”. Not different as a matter of kind, but simply as a matter of degree. More connections = more subtlety and more generative capacity. A dog is much, much stupider than a human. And much, much smarter than a slug. Its learning methods are intermediate as well.

Well yeah, I remember it from the manual in my Gilbert Chemistry Set.

Sam Altman announces that LLMs are not the AI of the future:

OpenAI CEO Sam Altman says that ChatGPT is not the way to superintelligence

And subsequently steps down as CEO of OpenAI. Perhaps this will become a near-term topic of discussion.

And now this

Crosslinking to dedicated Altman departure thread: