The role of electronic brains

True, it does have an engaging output format. But the content is descriptive information common in the literature searched. And it is useful in education and data searches. It may even be unique and revealing. But it is still a search engine.

What does it do with this?

My following comments are directed at the authors of the cited article, not at @Crane, whom I thank for the cite.

Hey there, you friggin “geniuses” (morons more like) at Forbes: your three criteria apply to every software project from a payroll system to a parking-garage kiosk. And substantially every enterprise IT system fails utterly on all three criteria. Short-sighted management does not value clean data, they do not value responsible implementation, and they sure as hell don’t partner with IT on a strategic level. Instead, IT is a cost to be minimized and a problem to be ignored. Even if your company is ostensibly in the IT business.

You (Forbes) have delivered pure pabulum for executives and executive wannabes to feel better about the next project they’re going to doom from the start by their intellectually lazy pursuit of short-term cash flow only.

Harrumph!!1!

When I get access to GPT-4V I’d be happy to try it. But I’ve seen it handle much more complex images than that.

Have a look at this:

https://www.reddit.com/r/ChatGPT/comments/16vgrh6/gpt4v_shows_understanding_of_electronics/

Simple circuit. It’d be interesting to see what it does.

Oh, so true. Management types wouldn’t know clean data if it bit them. They’ll back responsible implementation so long as it doesn’t exceed their unrealistic deadlines and budgets - then corners start getting cut. Ditto for the “strategic partnership,” which works fine until IT asks for a budget that will actually get the job done, even when the C-suite people aren’t with it enough to know the work needs to be done.
A perfect example was Sony Pictures, which starved IT and thus security - a great idea until the North Koreans broke into their computers and published their private emails.

Per your link: GPT comments on the function of diode D1 but fails to notice that the diode is in backwards. D1 is also in the wrong place: it should be on the power input pin, before the regulator.

Couldn’t this also be said of humans? We can sift through zillions of bits of data in our brains and winnow out the best answer in response to a query.

Yes, all brains are search engines, but not all search engines are brains. Forgive the cliché, but it fits. GPT mechanically replicates some human activity. We should regard it in that context - an awesome software achievement.

Do you believe AI will someday gain consciousness? Become self-aware?

We still lack a clear understanding of how these phenomena arise in animals, let alone how they might arise in machines. I think it is possible that AI will eventually reach this level of intelligence, but it may require some form of organic integration. If it does, I believe the form it takes will be completely alien to us, considering we don’t share a common ancestor, but it’s likely to be superior to (deeper than) ours.

I do not believe that it is reasonable to assume that consciousness and self-awareness are per se dependent on intelligence. We see evidence of consciousness in the most basic of animals, which suggests that it is not directly tied to reason.

Programming the equivalent of self-awareness is actually fairly trivial and can be done in basic logic (it does not require elaborate neural-net designs). There is no test that can tell us whether programmed self-awareness is self-awareness in the sense we understand it, any more than you can confirm that I am truly conscious and self-aware rather than a fair simulation.

Some of us like to say that consciousness/self-awareness is an emergent property of sufficiently developed intelligence, while I take the position that abstract intelligence, at least biologically, is the emergent decoration.
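
As a toy illustration of the “basic logic” claim (my own sketch, and only the weak, operational sense of self-awareness, nothing deeper): an agent that distinguishes self from non-self and can report on its own state needs nothing but ordinary branching logic.

```python
# Toy "self-aware" agent in plain logic -- no neural net required.
# "Self-awareness" here means only: distinguishing self from non-self
# and reporting on one's own internal state.
class Agent:
    def __init__(self, name):
        self.name = name
        self.state = {"energy": 100, "location": "origin"}

    def is_self(self, entity):
        # Self/non-self discrimination via a simple identity check.
        return entity is self

    def introspect(self):
        # A report about the agent's own internal state.
        return f"I am {self.name}; my state is {self.state}"

me = Agent("unit-1")
print(me.is_self(me))          # True
print(me.is_self(Agent("x")))  # False
print(me.introspect())
```

Whether that counts as real self-awareness is exactly the untestable question above.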

I believe the Forbes article is a cautionary tale that agrees with you and LSL.

It is not its job to do so:

keep in mind ChatGPT is a language model … so its job is to verbalize “things” … it doesn’t claim to be an electronics specialist.

Things will get interesting if we get into multi-modal models … (e.g. an electronics AI or a medical-image-interpretation AI integrating with an LLM) … then it should take notice of that … (and possibly propose better solutions, if possible)

but AFAIU - once there are microprocessors in the schematic, all bets are off, as neither you nor the AI can see the program that drives the microprocessor, and hence there is no way to know what it does

kinda like looking at the motherboard of a PC and not knowing whether you’re playing Tetris or using Excel.

Some researchers argue that even insects have a basic form of consciousness, based on their brain structures and behaviors. Self-awareness, on the other hand, is a higher-order mental state that involves the ability to reflect on one’s own thoughts, feelings, and actions. Only a few species are believed to be self-aware.

There’s no consensus on whether AI can or will become self-aware. Some experts suggest that AI could potentially develop self-awareness if it has the capacity to imagine the future outcomes of its actions and model itself as an agent in the world. This would require advanced cognitive abilities that are not yet available in current AI systems. I believe it will emerge someday, but not anytime soon.

Actually we are not. GPT is good at searching and orderly presentation, but it has no facility for analysis. It doesn’t understand, on any level, what it is doing. It doesn’t really make mistakes or become delusional. It’s just a neat search engine interface statistically slinging verbiage. Criticizing it is like flogging the dog for pissing on the rug. It’s just doing computer stuff.

I described the circuit above and asked GPT-3.5 for the output waveform corresponding to a parabolic input at the capacitor. Here’s the result:

I can follow where all of this originates. It is just a neatly arranged word salad describing transistor amplifiers. The statements have the words in the right order, but they don’t make sense, and the equations contain the wrong terms. The jumbled words attribute grounded-emitter characteristics to the grounded-base configuration. Actually, it’s not attributing anything; it’s just statistically evaluating the terms in transistor literature.

I’m surprised that it misses the point so completely. In (1) it calculates Ic as being equal to Ie but leaves out the contribution of current from the capacitor. The capacitor produces a current that is proportional to the rate of change of voltage at its input. A parabola’s rate of change increases at a constant rate, so the capacitor produces a current that increases at a constant rate. The waveform at the collector would therefore be a linear ramp, a sawtooth if repeating. It will not be a reproduction of the parabolic waveform, and it will not be multiplied by beta.
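
To make the arithmetic explicit, here is a quick symbolic check (my own sketch; k and C are illustrative symbols for the parabola’s coefficient and the capacitance, and the only physics assumed is the ideal-capacitor relation i = C·dV/dt):

```python
# Symbolic check: a parabolic voltage across an ideal capacitor
# yields a linearly increasing current, i = C * dV/dt.
import sympy as sp

t, k, C = sp.symbols('t k C', positive=True)
V_in = k * t**2               # parabolic input voltage
i_cap = C * sp.diff(V_in, t)  # capacitor current
print(i_cap)                  # 2*C*k*t -- a linear ramp, not a parabola
```

Differentiating the parabola once is the whole story: the capacitor current, and hence the collector waveform, is a straight-line ramp.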

In your linked example GPT describes each block with verbiage from the corresponding data sheets - sufficient for a data sheet or instruction manual. But it misses the fact that the circuit will not work: the wrong-facing diode blocks the regulator output from reaching the system. Turn the diode around and it may work, but the system will fail if a negative voltage is applied to the power input, because the voltage regulator will be destroyed.

GPT is an amazing software accomplishment but it’s not a brain. It is not thinking. It is not even evaluating or analyzing. It is just word fondling.

I’ll drink to that!

BTW: The schematic is for a data acquisition system with USB link to a computer.

A thing that many (you probably as well) misunderstand about ChatGPT is:

It is not a search engine, and it is not for solving problems … it’s a state-of-the-art AI system whose job is to form meaningful sentences.

You (generic you) must stop confusing a communication’s form with its content … it is NOT meant to create content, it is meant to communicate existing content.

Think: asking a friend to read a given book on a given topic (the importance of drones in warfare) and give you the one-minute CliffsNotes version.

You wouldn’t scold your friend for erroneous info in the book; he is just condensing the info provided from 500 pages to one … THAT is what ChatGPT does …

This article has a different take on the present AI/GPT fad. Because the output of GPT depends on the prompt, the author proposes that GPT is just a conversational-language approach to computer programming. A computer running an LLM is not a ProbSol. It is the same old computer, which can now be programmed by anyone using just conversation.

I applied this approach to my design-evaluation problem stated above. In the prompt I defined the operation of each component, just as I would define terms in a program. Then I defined the parameter I wanted evaluated. The result:

“The collector current (IC) waveform for a parabolic voltage input to the capacitor in the grounded-base configuration of a transistor with infinite β and zero VBE will have a linear shape. This shape corresponds to the linear increase in the rate of change of voltage (dV/dt) of the parabolic input signal, which itself increases quadratically with time. As time progresses, IC increases linearly with time, following the trend of the rate of change of voltage, resulting in a simple linear waveform.”
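
For anyone who wants to try the same “prompt as program” pattern against the API rather than the chat window, a minimal sketch using the OpenAI Python SDK might look like the following (the component definitions are abbreviated stand-ins for my full prompt, and the model name is just illustrative):

```python
# "Prompt as program": the system message holds the definitions (the
# "program"); the user message states the parameter to evaluate (the "call").
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

definitions = (
    "Definitions: the transistor is in the grounded-base configuration "
    "with infinite beta and zero VBE; the capacitor injects a current "
    "i = C * dV/dt into the emitter."
)
question = (
    "For a parabolic voltage input at the capacitor, describe the "
    "collector-current waveform."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": definitions},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```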

So, in the context of the OP, I believe the near-term future of computers will be to eliminate programming, and programmers, as we know them. GPT brings computer solutions ever closer to the problem by eliminating the need for unrelated programming-language skills.

Didn’t you post elsewhere (I forget if it was upthread or in a different thread) that that ChatGPT answer is flawed in various ways?

The LLM branch of AI is not going to be what everyone thinks AI is supposed to be. It won’t discover new physical laws or design new molecules to solve problems. It won’t even start the AI revolution.

Okay, tell you what: you tell me something that would indicate ChatGPT is more than just a ‘search engine’. Give me a test we can do. Because it has clearly gone WAY beyond just ‘next word prediction’ or ‘just a sophisticated search engine’. So what will it take to convince you?

BTW, GPT-4V is officially multi-modal now, so it’s not even a language model any more. Its language is fully integrated with images, video, and audio.

Perhaps you could look through this and see if there are any capabilities that make you go, “Hmm.”

The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)