How I use it
Personally speaking, ChatGPT has got to be one of the most helpful things (software, chatbot, “person?”, whatever you want to call it) I’ve ever used.
I have the app installed on my laptop and use the shortcut key (opt-space) to ask it questions many times a day, every day. Same on my phone, where I either type short questions or chat with it in voice mode.
I know that it’s “only” a statistical model and sometimes prone to hallucinations. In my experience, it’s been maybe 70-80% accurate, which is pretty good, and honestly on par with (or better than) what Google would normally give me anyway, or what I could come up with on my own or by chatting with other, non-expert humans.
I never use it as the sole source of truth, only for initial research or a rough draft of something, unless it’s a field I’m personally already knowledgeable in (where I can double-check its work myself).
Some specific examples
Coding
Professionally, as a computer programmer, it’s been a huge time saver because it can write mundane, boring code so well. I’ve been a computer nerd for three decades and a professional programmer for two, and it’s honestly already a better coder than me and only getting better every year. It’s especially good at writing small bits of modular code, documenting it, and testing it – the kind of tedious business logic that any mid-level programmer could easily write, but takes forever because it always has to be customized to every specific project’s needs.
It writes more readable code than 90% of the fellow developers I’ve worked with. And code is one of those domains where the output is easily testable (and I actually do read/review it manually), so hallucinations aren’t as big an issue. Their newest models also verify their own output internally, re-prompting themselves until the result passes.
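To illustrate the kind of thing I mean (a made-up example, not from any real project): a small, self-contained helper, documented and paired with quick sanity checks, exactly the sort of tedious-but-customized code it drafts well.

```python
def normalize_phone(raw: str, default_country_code: str = "1") -> str:
    """Normalize a phone number to a digits-only, +-prefixed form.

    Strips punctuation and whitespace; prepends the default country
    code when the number has only 10 digits.
    """
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 10:
        digits = default_country_code + digits
    return "+" + digits


# Quick sanity checks, the kind the model also drafts alongside the code:
assert normalize_phone("(555) 867-5309") == "+15558675309"
assert normalize_phone("+1 555 867 5309") == "+15558675309"
```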
Overall, it’s been so helpful that I’m really scared for the juniors in my field (it’s way better than most of them, both at coding and communicating). And also scared for myself; it won’t be long before it completely replaces me. Already this field has seen hundreds of thousands of layoffs and not much rehiring, and AI is only just getting started.
My only remaining value, really, is being the human interface between a client and the product. For now, I can still better understand the client’s business (and emotional) needs, find the best way to implement them using our product, and then churn out the mundane code that gets it done – with ChatGPT doing about 30-40% of that last part. But the AI is also getting better every year at understanding human communication and business idiosyncrasies. Once it gets better at reading human emotion/mood/tone/body language and at ingesting business docs (code, documentation, etc.), it’s over for me (and large swaths of society). I don’t think that future is very far away…
Summarizing science
I have an environmental science B.S., but always found reading primary lit difficult and exhausting (I mean, who doesn’t). Even secondary sources can be overwhelming, given their sheer volume and the amount of SEO spam on the web these days. ChatGPT is an incredible tool for doing initial literature reviews, giving broad summaries of a field or a sub-field with mostly-reasonable factoids.
Once I get the general overview of something, then I use Google Scholar to find specific relevant papers (which, for now, are still mostly human-written). I collect those papers and send them to Google’s NotebookLM (which takes PDFs and uses AI to summarize them into written reports, podcasts, and interactive chatbots) for a deeper look. Then I manually (using what’s left of my brain) synthesize that into a written report and fact-check everything the AIs have done.
It’s still a huge, huge time saver vs doing it the old-fashioned way. I most recently used it to write a nature talk as a natural history museum volunteer, and it turned what would’ve been several days of reading and research into just a few hours. I learned a lot from it (but still fact-checked everything).
Of course, this whole cycle is something that is itself becoming rapidly AI-fied. Researchers are also using AI to help write their papers, the search engines are getting better at using AI to find and summarize those papers, etc. Not being a working scientist, I don’t know how this will directly affect their work, but I’m certainly afraid for all the secondary jobs that work alongside scientists (journalists, writers, lawyers, lobbyists, etc.).
Fun stuff
It’s also quite a lot of fun to just hang out with. On road trips, sometimes we use it for things like:
- Can you come up with 10 veterinary trivia questions? (My partner’s a vet tech.) It did well, but the questions were way too easy.
- Can you come up with 10 extra-hard veterinary dermatology trivia questions? It did very well with this, and my partner was thoroughly impressed.
- Let’s roleplay. Pretend like you’re a sentient toaster trying to communicate with a person. You can only answer in on-off clicks. It was scarily good at this, “panicking” with a series of rapid click-click-clicks once the scene got intense.
- Let’s play 20 questions. (It was decently good at guessing, but merely OK at coming up with something to be guessed. The non-AI apps are still better.)
- Can you tell us some jokes about ________? It is terrible. I mean, my dad jokes are already atrocious, but this is… next-level bad. I guess humor is, for now, something that is still difficult to encode in an LLM. Comedians are safe for the time being.
- Can you tell me a long story about (some completely absurd, improbable topic). It was also amusingly good at this, like a child’s imagination, and a far more interesting storytelling partner than most human adults, who’ve lost the ability to think outside the box.