AI is wonderful and will make your life better! (not)

Ah, yes, “it’s ‘only’ a next-token predictor”. I genuinely hope that one day the light bulb turns on and you understand that your intuition of what LLMs can and cannot do when operating at very large scales is not reliable. I acknowledge that LLMs have inherent limitations and they alone are not the path forward to AGI, but what they can already do is impressive and useful, such as, in the context of this conversation, summarizing a 160-page academic paper and providing a critical analysis of it with citations, and doing it in less than 2 seconds.

In any case, the only way I know of to try to determine whether GPT’s response to my prompt was influenced by prior conversations is to ask it. It may be a highly imperfect method but its value is not zero. Its response was much longer than what I posted but I found its analysis of my prompting and conversational style to be quite accurate.

Furthermore, I’ve kept most of my conversations with ChatGPT so I’ve been able to quickly scan them. Very few are about AI, and those are mostly questions about the internals of GPT, and I’ve never used trigger terms like “brain damage”, “reduced intelligence”, or “inevitably harms learning”. The term “cognitive decline” was used for the first time within the prompt about that paper, but not prior to that.

Everyone will get a somewhat different response to the same prompt, even the same person with the same history, because there’s a certain amount of randomization each time.

The “what people say it means” was in the first part of GPT’s response to my question, which was:

Below is a careful evaluation of the study itself, its methodology, and its credibility. I’ll separate what the paper actually shows from what people sometimes claim it shows.

You can see it in the screenshot I posted.

I agree that there’s a certain negativity in the prompt, but I worded it that way precisely because you were citing that study in support of your repeatedly stated belief about AI causing long-term cognitive decline.

Aaand we’re back to the AI project existing. Contrary to what you might expect, I plan to do what I’m told at work, and while there are certain bad ideas about ways to use AI in the grants arena right now, there are things I haven’t tried, and I’m willing to try them. I’m even prepared to speak positively of them if they work.

More generally, you have to understand my situation. We are in a moment of unprecedented, targeted attacks by the US government on nonprofit organizations. We are being painted as terrorists and subject to constant threats via Executive Orders. This has resulted in, among other things, devastating funding losses. I carry a lot on my shoulders, because what I do impacts whether people at my agency get to have jobs, whether survivors of domestic violence have heating and air conditioning, whether kids who get raped receive trauma-informed medical examinations.

I know, rationally, it’s not all on me, but psychologically I’m having a hard time getting past the idea that this is somehow my fault. It is in this context that I developed the impression that the board thinks we’re not getting grants not because of the ultra-competitive environment or changes in the US tax code that disincentivized corporate giving, but because there’s something deficient in the way I write them. (I have been successfully writing grants for fifteen years.) I have no idea how else to take this sudden need to have AI analyze my work. Considering I’ve never even spoken with anyone on the board*** and some of their suggestions do not seem remotely connected to the reality of grant-writing, I have feelings about it.

All of which is to say, I’m currently on an emotional roller coaster from hell about my current job. It’s become clearer to me over the past several weeks that my emotions are getting in the way of accurately perceiving reality at work. I don’t know what’s true anymore. I had a meeting last week with HR where I seriously entertained the idea that I was getting fired, and it mostly consisted of my boss and the HR guy speaking very positively about my work. One minute I’m convinced everything I do is completely meaningless and the next I’m on top of the world. One minute I think AI is going to put me out of a career, and the next minute that feels like an overreaction.

Oh yeah, and another complication is I have to take intermittent FMLA to deal with the needs of my disabled son, which makes me feel even worse about work.

What you perceive as “whining” is really something closer to an emotional breakdown.

I’m working on it. Like, really. Because it has become a real concern.

***I had one meeting with two board members about the AI project, during which I was largely talked at; I’m not sure why I was even involved. They seemed fundamentally uninterested in anything I said.

it’s absolutely sucking up to him.

well, you know this but wolfpup does not think it applies to him (or he can see through it, whatever he thinks)

brilliantly said, and this is true for the rest of your body as well as your mind of course

I would say it is THE deal

I am so sorry you have been so stressed in this way, it’s really shitty.

I hope you can do whatever you need to do to take care of yourself (you know, put your own O2 mask on first so you can take care of your family!) whatever happens at your job.

all over the USA, at least, hardworking, smart, well educated people are being shit on by the monster that is this fascist regime. and they are getting paid by our tax dollars.

it’s amazing we haven’t all burst into flame from the righteous anger.

Thank you. I think this is one reason I’m so aggressively stuck on this AI thing. It feels so profoundly existentially threatening right now. Not that I don’t think my reasoning about it is sound, just that the intensity of the feeling is probably out of proportion. Although my husband hates AI more than I do and he’s dealing with different issues. His job is pretty damned secure because there will always be people who want face to face therapy. And with the world being what it is, he’s never more in demand. He’s got a wait list that stops at two years. His specialization is anxiety disorders, particularly OCD, as well as tic disorders. Americans are nothing if not anxious.

I empathize with your concerns about AI, I really do. Some of them are valid and I don’t mean to be disrespectful in the areas where we don’t agree. As I’ve repeatedly said, I think that AI will have profound and mostly negative impacts on large segments of the job market, and will contribute to even greater wealth disparity which is already a huge problem not only in America but globally.

Those are quantitatively measurable things. But quite frankly I find it difficult to accept the loosey-goosey theories that AI will make us all more stupid. I think a far more likely outcome is that it will reshape our cognitive skills, weakening some mental skills while strengthening others as we adapt to the inevitable new world order, as we always have through the centuries.

well, it actually is, just like climate change and The War. Yes, anyone not anxious should question how well informed they are. Meanwhile, go outside if you can. watch We Rate Dogs, maybe. whatever helps.

WHY won’t it let me embed?? We Rate Dogs

The technology being so new, we are in the infancy of research on this subject. I worry we’re not going to have good baseline measures to compare AI to because it’s already happening. But there definitely will be more research in the future.

As for the cite you offered, I think it’s very important for us to determine how students can best learn with AI, because AI is happening. If only from a harm reduction approach we need to know how to keep people as engaged as possible in their learning. We need both a comprehensive understanding of how AI may negatively impact cognition as well as a comprehensive understanding of how to mitigate those impacts.

The main reason I tend to avoid AI actually has very little to do with my personal feelings about it. It’s because I have a history of compulsive Internet behavior and if social media is crack cocaine, AI looks like fentanyl to me. Talk to a chatbot about my feelings in the midst of social isolation and personal upheaval? You’d never hear from me again.

Fun stuff. (Not)

I’ve got mixed feelings about AI. I find it really helpful for my work, as in it has made it so much easier. I understand its limitations and have found workarounds, I verify things and I discard stuff that isn’t useful.

My kids are getting into AI. We’ve got a trip planned to the States and my 15-year-old son is able to find a lot of information that I could never have found when I was his age.

And then AI helps kids learn how to be killers.

If you don’t mind, @Spice_Weasel, we’ll be glad to help. What AI do you have available to you, and which version?

WTF? She’s stressed about AI in her job, personal life, and the world. She said she has concerns that an AI chatbot is not a good idea with her personality.

And your solution is…you’ll help her use it? Really, just WTF.

Yes, that was my response. Given that her job may hinge upon her actually using the tool, why not help her?

I can’t do anything about the other issues she mentioned, but if she doesn’t want to get undercut by somebody who is willing to use AI as her CEO wants, well, we’re here to help.

It’s just who I am. When I found out how bad franchising was for franchisees, I didn’t just complain about it on a seldom-viewed internet board, I wrote a book, started a podcast, appeared on radio and TV spots, opened a consulting business, gave webinars, created a YouTube channel, started a Skool.com classroom, and more, all to get the message out and help people avoid the franchise trap. I’ve saved people millions of dollars and well over a hundred years of stress (combined).

Some people do things, others do not. If she may want help, my inclination… always… is to ask her if she wants help.

Unfortunately, that is probably a higher level than the process that some people use to come up with an answer.

Some people are problem fixers. I like problem fixers.

So this is a custom tool and I don’t know anything about it yet. I just talked to my CEO today and they met the guy who is working on it. He asked for six weeks. I’m guessing it will take much longer than that. But when I have more details, I will share. I’m probably going to be attending more meetings about this.

There’s nothing to do, really, but lean in.

I still reserve the right to whine about it, though. I’ve gotta let off steam somewhere.

Do you know the name of the system? Gravity? Donor AI? Or is it something custom built? If your team is building out an already-existing platform (like Gravity), look at their website and… in the next meeting about this… have some questions ready to go, make yourself look involved (even if you don’t GAF about the answers).

It is important to get your minimum daily requirement of Vitamin Chocolate.

I’ll ask about it. Thanks!

ETA: it’s custom built but I imagine they start with a template?

Looks like we cross-posted, so I missed your reply about this being a custom tool. Hell, you should just review other AI-centric non-profit platforms, jot down some notes, and then ask your guy questions like “Will this tool analyze giving patterns to identify donors who could ‘bunch’ their donations—giving three years’ worth of gifts in one year to surpass the new itemization thresholds?” and “Will this tool be able to flag individual donors aged 70½+ who can utilize Qualified Charitable Distributions?”, stuff like that.

Input from the eventual end-users is always (well, to me it was always) appreciated. And questions like the above allow you to guide the development of the project so that it will be more useful to you once it’s unveiled.

I appreciate this. And I apologize for my rude response. It was wrong.

If you trust that it did a good job and didn’t just make stuff up.

So at first, we spend a lot of time verifying that it did it right. We stop practicing how to do the work and only know how to verify the work. Then we get even more used to it being “right” most of the time, so we verify less. And less. Less practice at doing the work, and less practice at verifying the work.

Then Anthropic’s (or whoever’s) investors start demanding a return on their substantial investment, they make it clear that they expect the company to stop giving away the product at a massive loss, and the price for these LLMs goes up to $2000 a month.

Do you pay it?

LLMs are pretty reliable at the task of summarizing articles. It’s when they’re asked to source new information that they can go wrong and occasionally make things up. In the context of the paper I asked it about, that would have been the part where I asked it to critique the paper and describe its weaknesses. However, I posted a summary of its critique and it looked sound to me, and nobody else seemed to take issue with it.

A study I linked upthread suggests otherwise. It shows that in the task of essay-writing, interactive collaboration with an LLM generally produces a higher quality product than writing entirely on one’s own. I suspect that in the future the skills traditionally needed to do knowledge work on one’s own will be gradually transformed into a slightly different set of skills needed to produce a better product through AI collaboration.

It’s estimated that ChatGPT has around 900 million weekly users. Why do you think they’d need to charge $2000 a month?