Fess up... Have you used AI deliberately?

Dude, have you forgotten all of the threads where you specifically talk about how you have used AI for work?

Yes, but that was last year. I’m at a very different job now. I’m working at a university, and if I were to enter my work into AI, it would be so cumbersome (explaining what I need the AI to do for me) that it would consume more energy than the AI would save me. And I’d be violating privacy laws by entering student info into it.

Not here. I use it a ton for personal stuff (mainly goofing off with AI image renders) but couldn’t come up with a good way to incorporate it into my work day. Another company we contract for some work might be getting replaced by one that does the same tasks all via AI so I might be using it second-hand before too long.

Ditto. I’m a retired data analyst, so using it for work isn’t an issue; but I’ve used ChatGPT on a couple of occasions to collate and present results I could have found myself using Google. Like “plan a 3-day road trip from A to B, tell me where to stop, and what sights to see.”

Or: “Show me the 5 highest rated french door refrigerators, with prices”. (Its data is a few years out of date, which needs to be taken into account.)

The way it collates the results into perfectly readable English is marvelous.

My wife uses ChatGPT all the time for composing work emails (English is not her first language). She absolutely loves it.

It is amazing technology.

Hey, Amazon has “ask Rufus”

Is that AI?

I like Rufus.
I like shopping. We’re friends now.

OK, first off, there is no “intelligence.” It is extremely powerful “pattern recognition” that has basically been fed the entire digitized English-language written word, and pretty much “all” machine transcripts of digitized video.

Secondly, it’s a tool that can be useful. I have it look up in real time all sorts of products and technical terms when I’m on a concall. I’ll have it rewrite a paragraph and it nails a few better word choices or rewords a sentence in a superior way. A trick I just learned is “write a FAQ on this specific paper.”
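If you’d rather script that trick than paste into a chat window, it’s only a few lines against something like the OpenAI Python SDK. Rough sketch only; the model name, prompts, and file name below are placeholders, not anything official:

```python
# Rough sketch of the "rewrite this paragraph" / "write a FAQ" trick,
# assuming the OpenAI Python SDK (pip install openai).
# The model name, prompts, and file name are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

paragraph = "Our solution leverages synergies to empower stakeholders going forward."

# Ask for a tighter rewrite of a single paragraph.
rewrite = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": "You are a concise technical editor."},
        {"role": "user", "content": f"Rewrite this paragraph with better word choices:\n\n{paragraph}"},
    ],
)
print(rewrite.choices[0].message.content)

# Same idea for the FAQ trick: hand it the paper text and ask for a FAQ.
with open("paper.txt", encoding="utf-8") as f:
    paper_text = f.read()

faq = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Write a FAQ on this specific paper:\n\n{paper_text}"}],
)
print(faq.choices[0].message.content)
```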

And it is a tool. Learn to use the tool or die. Think about how the spreadsheet changed corporate America. Bookkeepers went the way of the dodo. The mergers-and-acquisitions industry it enabled radically reshaped the corporate world.

Third, garbage in, garbage out. It’s like having a smart college intern on call. Give the intern a little direction, and they come back in a week and maybe have some useful bits. Or tell it to “give me 20 steps in the enterprise sales process” and it might be a really good first draft. That said, some really smart folks in my company used it to write a white paper on manufacturing. Trouble is, those smart folks have never worked in a factory and speak English as a second language. That white paper was embarrassingly bad, and anyone with factory experience would have deleted it after the first paragraph or three. The biggest issue was that it used terminology that probably sounded good to the uneducated but would never be used in manufacturing. Nor did it describe factory processes accurately. These large language models have zero real experience, and they are not a substitute for a subject matter expert.

Non-native speakers benefit greatly from having the large language model "rewrite this in concise business English for a Fortune 500 executive audience." I learned Mandarin as an adult, and while I’m pretty dang fluent, my written Chinese is childish. I run it through the large language model and it improves drastically.

So there are limitations. One needs real skills and experience. It can improve bad writing, but it can’t turn shit into Shinola. :rofl: (and I didn’t run this post through a large language model before posting).

Take it up with John McCarthy, who came up with the industry term in 1956.

In 1956, two years after the death of Turing, John McCarthy, a professor at Dartmouth College, organized a summer workshop to clarify and develop ideas about thinking machines — choosing the name “artificial intelligence” for the project. The Dartmouth conference, widely considered to be the founding moment of Artificial Intelligence (AI) as a field of research, aimed to find “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans and improve themselves.”

https://st.llnl.gov/news/look-back/birth-artificial-intelligence-ai-research

True dat! But I find it helpful to point out that the term “artificial intelligence” is a misnomer. The large language models are immensely powerful grammar checkers, but not authors.

I think every generation in the past 100 years or so has weaker writing skills than the one before (or, better phrased, a smaller percentage of each generation has good writing skills). Someone who can actually write (or code) is going to be able to capitalize on the tool that is a large language model far better than those who lack the skills. To say it another way, the skills to do original work will likely be even more in demand in the future.

Sounds very handy! What is the app?

It’s part of a continuous glucose monitoring app from Levels.

What we have is not what I consider “AI”. It is an alternative scheme, as compared to deterministic coding, but really it is just a sophisticated combination of pseudo-neural process mapping and fuzzy logic that works rather impressively and can be trained to learn stuff. It is just a computer that can perform perceptive and analytical tasks in ways similar to humans.

If it was “AI”, it would be proactive. As it stands, it does nothing without being asked to. Which in a very real way is rather fortunate.

I believe that at some point in the next five to ten years, the traditional kernel in most computers will be replaced by a neural mapping/learning logic system that will be able to communicate more directly with users, and probably even identify individuals by their behavioral cues, along with biometrics. The neural-kernel computer will acquire the UI that the user wants, implement the features the user needs, and construct tools (applications) that are most efficient for the user.

Deterministic programming will be authored by the neural-kernel along strict design guidelines, so it will run somewhat faster and more efficiently because it will not be burdened by the need for memory protection. The greater software industry will wither away because each user will be able to design the working environment they need and obtain powerful functionality built by the neural-kernel.

As to “AI”, I am a Luddite who eschews any of those eavesdropping devices and has no particular need of what LLMs have to offer.

I regularly use ChatGPT when composing lengthy work emails. I simply instruct it to “Tidy this up,” paste the email content, and sometimes iterate the process with the output. Finally, I incorporate any clearer phrasing suggested by ChatGPT.
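If you do that a lot, the same loop scripts nicely too. A rough sketch, assuming the OpenAI Python SDK; the model name and the sample draft are just placeholders:

```python
# Sketch of the "Tidy this up" loop, assuming the OpenAI Python SDK
# (pip install openai). Model name and sample draft are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

draft = "Hi team, I wanted to reach out regarding the thing what we discussed on the call yesterday..."

text = draft
for _ in range(2):  # iterate the process with the output, as described above
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "Tidy this up:\n\n" + text}],
    )
    text = result.choices[0].message.content

print(text)  # incorporate whichever clearer phrasing you like
```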

No and I will actively avoid products that have an “AI” component.

That’s why it’s artificial.

I wonder if people get this pedantic when someone discusses the AI in a first person shooter or grand strategy video game.

Heck yes.

I’ve needed Excel spreadsheets to do things beyond my limited knowledge of Excel, and I can ask AI in plain English for a formula that does what I want.

I also dumped some customer feedback data into it and got it to spit out a short summary of the key points, which was accurate enough for my purposes. However, a new company policy announced a few days later banned that use - no feeding it data.
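For what it’s worth, the summary step itself was nothing fancy; something along these lines, assuming the OpenAI Python SDK (the model name and file name are placeholders), and obviously only if your data policy allows it:

```python
# Minimal sketch of "dump feedback in, get key points out",
# assuming the OpenAI Python SDK (pip install openai).
# Model name and file name are placeholders; mind your employer's
# data policy before sending real customer feedback anywhere.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("feedback.csv", encoding="utf-8") as f:
    feedback = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {
            "role": "user",
            "content": "Summarize the key points in this customer feedback as a "
                       "short bulleted list, grouping similar comments together:\n\n"
                       + feedback,
        }
    ],
)
print(response.choices[0].message.content)
```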

Maybe they aren’t aware that the term has been used in game programming for decades.

I was thinking more along the lines of wondering if they insist that “smart phones” aren’t actually smart.

Hostility to innovation is nothing new. It goes back at least to the steam engine and later, the first computers, and it was as misplaced then as it is now. The misapplications of AI today are no different than poor implementations of early computer technology, which could rightfully be seen as both displacing jobs and imposing rigidly inflexible uniformity on how customers were treated.

Yet the jobs they displaced were jobs of mind-numbing drudgery, and today well-implemented computer applications offer customers literally a whole world of different choices for their every need. With computers running everything everywhere today, unemployment is no worse than in the past, and many more jobs today are more creative and cerebral than the jobs of the past, which involved, say, computing the total of a customer’s order on a desk calculator over and over again all day long.

The “energy use” argument is also misplaced. It doubtless comes from the impression that some of the enormous data centers that have been built do use a large amount of energy. But that has nothing to do with AI. Those mega-centers are there to support huge business and e-commerce operations that are the lifeblood of the economy. A perfectly viable AI application can literally run on your PC.

When it comes to a large-scale AI like GPT-4, by far the most compute-intensive part is the initial training, not the ongoing operation. And even a supercomputer like the one built for GPT or IBM’s Watson is, at most, a “computer room”, not a massive data center.

If the concern is about thousands of companies deploying large-scale AI, then one needs to consider two big mitigating factors. One, exponential growth and miniaturization of computer power: the computer power in my nice little desktop PC that’s so shy I can’t even hear the fan would have required a large building and its own power substation a few decades ago. And two, those massive data centers may appear formidable because of the concentration of computers all in one place, but they’re simply replacing the work that would have to be done somewhere else anyway, but doing it far more efficiently.