What practical tasks are AIs already good at today?

Oh, I see. Totally agreed there.

I make no attempt to predict where they’ll be in the future, however near… just trying to catch up on all the uses today that I don’t yet know about.

Just the other day, for example, I asked it to make a Chinese-English-Italian comparison app for technical jargon phrases like “codebase” and “version control”, with pronunciations in each. It did a pretty decent job at it (much better than my native-speaker family could, actually) and helped me field a client call later that day.

That’s really all this thread was meant to be, a “What’s new with AIs this week?” kinda thing. They’re evolving so fast it’s hard for my measly human cognition to keep up.

(side note)

Like many Dopers, I’m past the peak of my career, and not so involved in discovering or understanding the day-to-day latest-and-greatest anymore. (And I was never any sort of expert in anything, anyway.)

Had the LLM revolution happened 10 or 20 years ago, maybe some of us here would’ve been smart and active enough to have been on the bleeding edge of them, actively helping to research or develop them. Alas, the Dope is past its heyday now, and the few coding-techie-ML type nerds we have left here seem to be mostly just occasionally reporting in and documenting other people’s work, rather than sharing their own. (Or maybe they’re just lurking and actively developing their own super clever app or Skynet, who knows!)

It’s actually a welcome contrast to other forums like https://news.ycombinator.com/, where AI developments are discussed so frequently (many times a day) that it’s really hard to keep up. For example, Google announced yet another breakthrough today, but I honestly have no idea what it means: Google releases Gemma 4 open models | Hacker News. It’s all just jargon in a language I don’t speak (math and stats).

OK, speaking of Gemma 4 (Google’s latest free & open model), I asked Claude to explain all this to me (how to install and use local models on my own computer). Long story short, the smallest version of it runs insanely fast, and I now have a free universal translator on my computer that runs entirely offline:

https://fightingignorance.org/1e25ee58-4829-4f9b-b55f-e23a1e2b43ae-roger-ollama-run-gemma4e2b-ollama-ollama-run-gemma4e2b-80-28-april-02-09-39-17-pm.mp4

I asked it to translate the sentence “The quick brown fox jumped over the lazy dog, but then landed awkwardly on one foot. I took it to the vet, who told me he fought in World War 2 but didn’t know anything about fox feet.” I cross-checked a few languages with other translators and LLMs and they all seemed pretty good, but the “vet” pun did not survive any of the translations. (Same thing happened with the expensive cloud-hosted pro thinking models I tried, though.) It also went into an infinite loop upon trying to speak Klingon.

Downloading and running it was easy. You just get ollama for your operating system, then in the terminal run ollama run gemma4:e2b. It downloads the model (a few gigs), and then you have a local chatbot. (There are bigger, even more powerful variants of that model too, but they were much too slow on my computer, an M2 MacBook Pro.)
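For anyone curious, the whole setup boils down to a couple of terminal commands. A sketch, assuming Linux (on macOS you just download the Ollama app from ollama.com instead); the model tag is the same one mentioned above:

```shell
# Install ollama (the Linux one-liner from ollama's site)
curl -fsSL https://ollama.com/install.sh | sh

# Download the model (a few gigs) and drop into an interactive chat
ollama run gemma4:e2b

# Optional: serve it over a local HTTP API so other apps can talk to it
ollama serve
```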

For a free, local model, really not too bad! (And this is mere table stakes for it; it is a thinking/reasoning model capable of audio and vision too…all for free, on your own computer or phone! You can connect it to Claude Code or other IDEs and agentic orchestrators too for local AI coding workflows.)

Interesting. I’m headed to Ukraine in August, is this something I can run on my android phone?

Maybe, if it has enough RAM (maybe 5+ GB?): litert-community/gemma-4-E2B-it-litert-lm · Hugging Face

I haven’t tried yet, but it should technically be possible. The model was just released earlier today, though, so if you wait a few days there will probably be better tutorials (and maybe smaller, distilled versions that can run on more phones). Androids are just small, weak Linux computers, so there’s nothing inherently stopping this from working on Android, but the practical workflow might take some know-how and tinkering.

In the meantime, though, Google Translate (the app) already lets you download languages for offline translations. Russian is on the list. I think it (now) uses an LLM internally anyway so you might want to give that a try for now and see if it’s good enough in offline mode?

I’ll do that. I linked your post to a Gemini chat, and it told me that this is possible if my phone has 8 gigs of RAM.

PS They are working on integrating Gemma4 directly into Pixel phones:

From Google announces Gemma 4 open AI models, switches to Apache 2.0 license - Ars Technica

The release of E2B and E4B also shows where Google is heading with its smartphone AI efforts. Google Pixels and a few other phones run local AI models known as Gemini Nano. That’s how these Android phones can detect phone and text scams, summarize notes, or create phone call summaries without sending your data to the cloud. A Google representative notes that Gemini Nano has always been derived from Gemma models, but that’s especially true of the next-gen update to Gemini Nano 4.

This is the first time Google has confirmed that there will be an updated version of its minimal smartphone-based AI model. The current Gemini Nano 3 running on Pixel phones is based on Gemma 3n, but Google confirmed to Ars Technica that the next-gen Nano 4 will have 2B and 4B variants based on Gemma 4 E2B and E4B.

This weekend I might try to vibecode an Android universal translator app using that model — I think all the pieces are there and free now, just a matter of prompting and waiting. Crazy times to live in. Would’ve been pure sci-fi even 5 years ago.

AI is very good at interpreting natural language that would be much more naturally expressed as a boolean expression. I needed to find when certain information was missing in a spreadsheet, so I gave ChatGPT this prompt:

Check if text in each cell in B5:I12,B15:I22 also exists in column K. If the text in the cell does exist in column K, and the row in column M matching in column K has a value, then color the original cell green. If column M is empty, then color the original cell red, if the text does not exist in column K, then do not color the original cell.

It hurt me to just write that, I can’t imagine having to read and interpret it.

Of course, being AI, it can’t quite get everything completely correct. It says

You can handle all of that logic with a single custom conditional formatting rule.

It then proceeds to give me two rules. The rules work, and do exactly what I wanted.

Geeze lol, that’s the kind of thing that’s more naturally expressed in code or pseudocode than in English, e.g.

(Claude’s attempt to make it bullet points)
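For comparison, here’s roughly what that logic looks like as actual code (a Python sketch with made-up stand-ins for the spreadsheet columns, not anything ChatGPT actually produced):

```python
# A rough Python rendering of the spreadsheet prompt's rules.
# column_k / column_m below are hypothetical stand-ins for the real sheet.

def classify_cell(text, column_k, column_m):
    """Return the color a cell should get, mirroring the prompt's logic.

    column_k: the set of texts present in column K
    column_m: maps a column-K text to its column-M value (None/"" if empty)
    """
    if text not in column_k:
        return None           # text not in column K: leave the cell uncolored
    if column_m.get(text):    # matching row in column M has a value
        return "green"
    return "red"              # matching row in column M is empty

column_k = {"alpha", "beta", "gamma"}
column_m = {"alpha": "done", "beta": None, "gamma": ""}

print(classify_cell("alpha", column_k, column_m))  # green
print(classify_cell("beta", column_k, column_m))   # red
print(classify_cell("delta", column_k, column_m))  # None (not in column K)
```

Three lines of branching instead of that one breathless English sentence.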

I wonder if there is any human language that could express such conditionals more eloquently.

AI says maybe Lojban, a constructed logical language:

ganai le textu cu zvati le kolno K gi
  ganai le kolno M cu kunti gi
    skari fo lo xunre
  gi
    skari fo lo crino
gi
  na galfi le skari

Translates roughly to

ganai  [if]   the text exists-in column K  gi [then]
  ganai  [if]   column M is empty  gi [then]
    color it red
  gi [else]
    color it green
gi [else]
  don't change the color

Kinda looks like Python for aliens.

I uploaded my company’s 2025 financial statements into both Claude and Gemini and asked them to write a financial analysis of what they saw, with suggestions on how to reach $X in revenue by 2028. Claude did a much better job on the presentation side, even though their recommendations were largely the same.

Have you tried it in NotebookLM, Google’s presentation/report-focused AI product?

Yep, but figuring out how to formally express it gets me 80% of the way toward writing the conditional to do the highlighting. That formal-thinking step was what I was trying to avoid. What impressed me is that I was able to just spew some glurge into the prompt, and it parsed it correctly and presented a working solution. It just described the solution wrong…

I’ve tried doing stuff on Notebook and found it amazingly kludgy. Just didn’t work at all for what I wanted.

Oh? That’s too bad. They added executive summaries and video PowerPoints a while back, which I’ve found to be good at summarizing long confusing documents in particular.

The interface is kludgy though for sure.

@JohnT A friend just let me know about this: Google AI Edge Gallery.

It’s Google’s own nicely-packaged app with a good GUI that lets you run Gemma4 and other local models right on your Android. It supports text, image, and voice already. You can also download other models from HuggingFace and use those if you prefer.

On my Pixel 10 Pro, the E2B variant is very fast, the E4B is a little slower but still usable. These run locally right on the Android phone itself, even in offline mode.

Out of the box, anyone can run these on the phone’s CPU. To use the ML-accelerator TPU chip, you first have to opt in to an AICore beta (just a few clicks, but it’s in early preview, so they have you click a few checkboxes first).

Early test: translations work very well. With the AI accelerator mode on, the responses are instant.

Image OCR and transcription, not so well. E2B completely hallucinated the result. E4B did better, but could not finish the whole thing; both stopped after a few sentences. (E2B/E4B and beyond refer to the model sizes… bigger is better but slower.)

A minor point to add to this:

Google’s AICore preview will download Nano 4, which is their privately trained Gemma 4. It also includes some fine-tuning for certain use cases. If you don’t opt in to the preview, you’ll get Nano 3 (or Nano 2 or Nano 1, depending on hardware). This is also mentioned in your quote above, so apologies if this is known.

I thought Google had demoed voice translation with nano3 at Google I/O, but I don’t know if it ever shipped.

My friend told me he uses AI to help him with online dating conversations. He uses it as an advisor, not as a chat generator. He says it’s been transformative!

Part of me thinks it’s absolutely disgraceful, but a small part of me thinks it’s fair game. Chatting on these apps is a game that bears no relation to reality.

Good for him! Honestly, I think the world would be a better place if everyone got dating and communications lessons, from AI or otherwise.