Future of General Questions in the AI age

<wasn’t sure whether to put this in ATMB, but I was thinking of an open discussion, not aimed at mods, so went with GD>

ISTM that it’s already the case that some questions in GQ could be answered well by AI. I am not criticizing anyone for starting such threads; I’m just wondering how you guys see it going forward.

OT1H, this is a discussion forum, and hearing different answers, maybe correcting others slightly, can be valuable. Plus, of course, there are often interesting spin-off questions and opinions.

OTOH, people tend not to ask questions that are trivially googleable, lest they get the dreaded “Let me Google that for you”. We might be close to the time when people start to say something similar (as soon as someone comes up with a verbing of ChatGPT) for open-ended questions.

Considering how poor many of the AI answers still are, I would say wait a few years to have this conversation.

If you don’t mind getting answers that are incorrect, counterfactual, or fundamentally baseless.

People post questions in Factual Questions all the time that could be easily researched with a simple internet search, presumably because they don’t realize there is a readily referenced answer, or because they are looking for a more nuanced or expanded explanation than can be found in Wikipedia. When the questioner cannot distinguish a correct answer from an erroneous or counterfactual one, asking a chatbot carries the risk of incorrectly ‘verbing’ the prompt or just getting a nonsense response from a language-prediction confabulation machine that is purpose-designed to provide authoritative-sounding answers even if they are complete bullshit.
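To illustrate what ‘language prediction’ means here, a minimal toy sketch using the small open GPT-2 model (my illustration only; commercial chatbots are vastly larger and more tuned, but the underlying next-token mechanism is the same):

```python
# Toy demo: a language model only ranks plausible next tokens; nothing
# in the machinery checks whether any continuation is actually true.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A premise with no true answer: nobody has walked on Mars.
prompt = "The first person to walk on Mars was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

# The model still confidently ranks candidate continuations.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(i))!r}  p={float(p):.3f}")
```

Sampling from rankings like these, one token at a time, is essentially all the ‘answering’ there is.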

Stranger

IMO, the veracity of ‘let me Google that for you’ answers was generally a bit better than ‘I asked AI and it said X’ answers are at present. I am frequently dismayed that people seem to think AI is reliable at answering factual questions; in my experience, it is not. Twice this week I have had people shove utterly wrong AI-generated ‘facts’ in my face. This is a normal week.

Huh, maybe it is a topic for a few years’ time, then. It’s just that IME I’ve generally gotten either accurate answers (yes, after confirming) or “I don’t know” (though I wish it didn’t hide “I don’t know” in flowery crap).
If your mileage says they are junk for now, then to be continued in 2028…

I don’t think I have ever seen an LLM actually say “I don’t know” or equivalent.

It seems they are hardwired to generate some kind of response, hallucinating one if necessary.

Lots of times for me, here’s an example from yesterday:

I’ll grant they are getting better in the case of asking them to locate something specific, like a song.
The interface with web search has definitely improved.

For things which are less strictly factual, however, hallucinations still seem to be common.

They seem good for searches (note I don’t say answers) based on questions that are extremely shallow but incredibly wide. For example, if I’m trying to find a song where I only know a bit of the lyrics, or an obscure TV show or movie where I might only remember a single scene, it does a decent job of picking those bits out of a huge web of possible information.

But it generates so many false positives that I have to dig through many of its results to the underlying links to get a useful final answer. So it’s great as a research assistant but does a lousy job of providing a final answer, while doing so with extreme confidence.
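In rough code, the workflow I end up with looks something like this (ask_llm and fetch_page are hypothetical stand-ins, not a real API; the data is canned so the sketch runs on its own):

```python
# Sketch of the "research assistant, not final answer" loop described above.
# ask_llm() and fetch_page() are hypothetical stand-ins for whatever chatbot
# and HTTP client you actually use.

def ask_llm(question: str) -> list[dict]:
    # Stand-in: a real call would return the model's candidate answers,
    # each with the URL it claims as a source.
    return [
        {"answer": "Song A", "url": "https://example.com/a"},
        {"answer": "Song B", "url": "https://example.com/b"},
    ]

def fetch_page(url: str) -> str:
    # Stand-in for downloading the cited page.
    return "lyrics and credits for Song A" if url.endswith("/a") else ""

def research(question: str) -> list[dict]:
    """Treat the LLM as a lead generator; keep only leads the source backs up."""
    confirmed = []
    for lead in ask_llm(question):
        page = fetch_page(lead["url"])
        # Crude filter for false positives: does the cited page even
        # mention the answer? A human still judges whatever survives.
        if lead["answer"].lower() in page.lower():
            confirmed.append(lead)
    return confirmed

print(research("Song with a half-remembered lyric"))
# -> [{'answer': 'Song A', 'url': 'https://example.com/a'}]
```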

And once you get into a factual question that has nuance… oh boy.

Much like @What_Exit, I think that we’re years away from having an AI as useful as FQ is here, where most of our posters are great at digging into such nuance or evaluating answers. I also wonder if we’ll ever get useful FQ from AI in the long run - there’s just too much money or influence to be made by slanting your glorified search engine (see Musk’s efforts with Grok) and by identifying any possible bias (or lack thereof) in your training (see ALSO Musk’s efforts with Grok).

Now, if you asked me about the value of an AI search vs. a general (non-specialty) response from various options such as Reddit or Quora, it’s much closer than what we have here: a mix of great, okay, and flat-out wrong, all delivered with great confidence!

[Just for the record, again: what people are currently describing as AI, i.e. advanced search engines based on advanced pattern matching and a huge database, isn’t what I consider AI, but I fear that’s a losing battle.]

This; I don’t trust “AI” even slightly, and there are plenty of other people with the same opinion.

Yes. The fundamental paradox of the current generation of “AI” tools is this: you must have at least some reasonable familiarity with a topic to distinguish a correct-ish answer from algorithmically generated bullshit, or be willing to do a bunch of research afterward to verify (or refute) the answer. And if either of those is true, why do you need the AI?

Yeah. It’s worse than asking some random person, since at least the person is more likely to say “I don’t know” if they are ignorant on the subject than give you a nonsense answer with complete confidence. LLM “AI” doesn’t actually know or understand anything, so it has no ability to tell when it’s wrong.

No, this is not the case. AI is dumb and will answer questions wrongly but with confidence. OK, a lot like some dopers in that regard, I guess, but not better.

Perhaps the one situation I find LLMs useful at the moment is when I want to identify something (song, book, whatever) where I remember disconnected snippets but not the author/title etc.

An LLM is sometimes surprisingly good at identifying such things from very limited hints.
In those cases, if it finds the answer, I snap my fingers and say ‘Oh, yes… of course!’

But then I know the answer so it’s not bullshit…

I’ve been lied to a lot asking earlier generations of GPT those kinds of questions. Thankfully, I knew enough that I recognized the hallucinations as they were presented.

But it’s gotten to the point these last couple of years that the LLM is able to accurately do the identification, so progress, I suppose.

I do not know if AI is getting better or if I am missing something, but a lot of companies are going in big on AI to do things their workers do now. One would assume that if it were as bad as you say it is, they would not let their business be run, to any extent, by an AI.

“Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.,” Ford Motor Chief Executive Jim Farley said in an interview last week with author Walter Isaacson at the Aspen Ideas Festival. “AI will leave a lot of white-collar people behind.”

At JPMorgan Chase, Marianne Lake, CEO of the bank’s massive consumer and community business, told investors in May that she could see its operations head count falling by 10% in the coming years as the company uses new AI tools.

< snip >

Amazon CEO Andy Jassy wrote in a note to employees in June that he expected the company’s overall corporate workforce to be smaller in the coming years because of the “once-in-a-lifetime” AI technology.

It’s hard to count the number of media organizations relying in part or almost completely on AI-generated content. Sports Illustrated was recently caught doing it. The New York Times has an AI initiative to see what they can do with it. Quartz recently fired all their writers to use AI instead. The Atlantic seems to be exploring it.

I am not saying these are welcome changes. Quite the opposite. Nevertheless, companies are going in big for it, which would be really weird if AI were so fundamentally unreliable.

Of course they would; they hate having employees at all, you have to pay employees. The ideal corporation is a single CEO with no human employees, just machines feeding him all the profits.

That isn’t the question. The question is, if AI is as useless as some cynics make it out to be, why are large organizations expressing serious interest in it, and indeed even using it right now? And from a personal anecdote POV, how come it’s been so useful in giving me important information and useful insights about complex topics?

This is mostly my experience, too, especially with the latest free version of ChatGPT which I believe is 4o. It may be that people with negative views are basing them on experience with older versions – the tech is advancing remarkably fast. I am not by any means saying that ChatGPT is infallible – far from it! But saying it’s right far more often than wrong, especially in tasks of information retrieval and content analysis, is not the same as claiming infallibility. It’s also grossly inconsistent with saying that it “parrots gibberish”.

Yes I think people are rather understating the ability of LLMs here. They didn’t go from zero to hundreds of millions of unique visitors a day by usually being inaccurate. (As much as Apple wishes that were so)

I don’t want to sound like I’m evangelizing though.

The aim of this thread was not to push AI, just to ponder what happens if/when the AI equivalent of “just Google it” catches on.

Exactly this. My problem is that every time I’m awed by something that it does, it does something even more impressive. And I’m not a modern version of Joseph Weizenbaum’s secretary, who was awed by Eliza, a completely trivial program that did nothing more than literally parrot sentences or sentence fragments that incorporated part of a user’s previous input. I’m talking about the ability to have truly substantive and genuinely useful discussions about complex subjects.