So... How does one actually use AI?

You all should spend more time using Claude instead of ChatGPT or Gemini. My experience is that it’s more “thoughtful” or something, better answers. It’s also a fantastic programmer.

I’ve had a few philosophical-theological discussions with ChatGPT, Claude and Grok - atheism, proofs of the existence of God, theodicy, that sort of thing. ChatGPT’s answers and statements about those were no less thoughtful than Claude’s.

I’ve just added it and will play around with it. I’m sad to learn it natively supports Python debugging but not R…I knew I should’ve learned Python.

I mean using it for a specific purpose – summarizing a document or set of slides, creating a set of slides, providing programming advice or code. I don’t really spend time chatting idly with them, so maybe ChatGPT is better for that.

Exactly my sentiments! GPT has been very, very helpful in a lot of stuff in everyday life. I have no interest in videos about how stupid it is regarding an upside-down cup – hur hur!! ChatGPT has been genuinely helpful on practical matters, and very informative on more abstract cosmic matters. It’s a fantastic resource. But keep in mind that, like any information resource, it can be wrong sometimes.

AI for me is a way to let my experienced software-engineer mind make amazing things without any prior knowledge of the technologies involved.

I used ChatGPT 5.2 over the past week and a half to do two key things:

  1. Help me write an app for the Flipper Zero (hacker’s Swiss army knife gadget) that displays a nice clock with large digits as it sits on my nightstand.
    You write code for these things in C and use their own special libraries, two areas that I am not well versed in.
  2. Help me set up a full home lab: an Oracle 26ai database on an Oracle Linux install on Proxmox on a random Dell computer, talking to a local (non-cloud) Ollama LLM as well as Gemini and finally OpenAI. That let me do a demo where I indexed the transcripts of over 60 of my YouTube videos; now I can ask questions of the database and it answers based on my videos, offering quotes.
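For anyone curious what the retrieval side of that demo boils down to, here’s a toy sketch in pure Python. This is nothing like the actual Oracle 26ai + Ollama pipeline described above (which uses real embeddings), and the transcript snippets are made up; it just illustrates the idea of scoring indexed transcript chunks against a question and returning the best-matching quote.

```python
# Toy illustration of "ask a question, get a matching transcript quote".
# Stands in for a real embedding index; uses bag-of-words cosine similarity.
import math
import re
from collections import Counter

def vectorize(text):
    """Turn text into a bag-of-words Counter."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_quote(question, snippets):
    """Return the transcript snippet most similar to the question."""
    q = vectorize(question)
    return max(snippets, key=lambda s: cosine(q, vectorize(s)))

if __name__ == "__main__":
    transcripts = [  # hypothetical video transcript chunks
        "Today we install Proxmox on a spare Dell and create our first VM.",
        "This video covers backing up the Oracle database with RMAN.",
        "Here we wire Ollama up to a local model for offline chat.",
    ]
    print(best_quote("how do I install Proxmox?", transcripts))
```

A real setup would swap `vectorize` for an embedding model and store the vectors in the database, but the query-then-quote loop has the same shape.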

This was all done with me guiding and doing QA on the work. It was as if I were leading a team of four specialists: a network guy doing one thing, a storage guy doing another, and so on.

It’s a force multiplier for geeks!

“Nothing of substance here”?

I like AI for drudgery tasks.

For example, I use an email service to send our newsletter to subscribers (not spam, only those who signed up). I collect statistics the next day. The service, in their zeal to keep upgrading things, now appends “new_Window_icon” to every URL I copy. Soooo annoying.

I can pop that list of URLs into an AI, tell it to remove “new_Window_icon” and poof…clean URL list. Takes seconds…maybe not even that long. Sucks to have to do that step but light years better than having to do it by hand.
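That kind of cleanup is also a few lines of Python if you’d rather script it than paste into an AI. A minimal sketch (the example URLs are made up, and the suffix string is the one from the post above):

```python
# Strip the junk suffix the email service appends to every copied URL.
# "new_Window_icon" is the literal string from the example above;
# the URLs here are hypothetical.

SUFFIX = "new_Window_icon"

def clean_urls(urls):
    """Remove the unwanted suffix (and stray whitespace) from each URL."""
    return [u.strip().removesuffix(SUFFIX) for u in urls]

if __name__ == "__main__":
    raw = [
        "https://example.com/article-1new_Window_icon",
        "https://example.com/article-2new_Window_icon",
    ]
    for url in clean_urls(raw):
        print(url)
```

(`str.removesuffix` needs Python 3.9 or newer; on older versions you’d slice the string instead.)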

I also see it as a super-charged Google search. It can help a lot there too. Just be sure to ask it for citations to its answers so you can check its work.

Another benefit of AI is challenging one’s own assumptions. I do this with ChatGPT, using it as a verbal wall to bounce my assumptions off of. For instance, I asked ChatGPT last night, “It is my opinion that TSMC was foolish to build foundries in Germany and Arizona and that this is a big waste of money; please give me 10 reasons why my assumption may be incorrect.” And it indeed listed ten reasons.

Have you ever tried taking its answer and feeding it back into the AI as a prompt? I read about this prompt some weeks ago (I forget where so no link):

Act like gravity for my idea. Your job is to pull it back to reality. Attack the weakest points in my reasoning, challenge my assumptions, and expose what I might be missing. Be tough, specific, and do not sugarcoat your feedback. [Insert your idea].

Then re-insert what it gave you before and see what happens. Sometimes (not always) the results can be interesting.

All right. I grudgingly admit that’s pretty cool.

CEO probably used an AI LLM to write it. They are fabulous at generating corporate speak. Maybe a summarising AI task should include translating corporate speak back to plain speaking.

For the OP: little to no hope. There are people who claim to have used AI coding assistants to write systems that can parse input given only a formal specification of the input structure. But as noted above, they make lots of mistakes and need expert guidance and oversight to get to the point of being useful.

Reverse engineering a binary format description is at the black belt end of proper hacking.

The overarching problem is that no AI at the moment usefully reasons. They can do a good job of faking it. Unless there is a solid amount of representative information out there matching the paradigm you seek, and available to train the AI, there is nothing in the AI that can get traction.

Could you train one to parse a binary representation of a schematic? Perhaps. But you would need a prodigious amount of representative data, and you would need to provide a known-good readable representation of the schematic described by each binary file. They won’t just work out the structure ab initio. One of the early successes of modern AI was language translation: fed enough text in two languages that meant the same thing, systems could learn to correlate and construct sentences with the nuances of context of use. The trouble with binary representations is that the context of any individual part can be essentially arbitrary.

And that’s where we’re going full circle. Probably CEO fed an LLM a few bullet points and asked it to turn these into a lengthy e-mail. Then the recipients take this e-mail, feed it into an LLM and ask it to summarise it in a few bullet points.

My experience is that it is wrong an alarming amount of the time, up to and including inventing things out of thin air. That is, when you know the subject. Yet everything, spot-on and bullshit alike, is presented in the same smooth-talking expert tone.

When someone uses AI to learn about something they don’t know about, I have to wonder how much misconception and pure bullshit is being fed there.

My experience is that it is right an alarming amount of the time. It certainly will hallucinate and make stuff up. Not usually, but just often enough to make us pay attention. I would never rely on it for things like legal or medical advice.

Hallucination is a real thing with LLMs, but I believe it’s not as big a problem as people make it out to be. In casual chats, the consequences of hallucinations are negligible. And if accuracy is of the essence, you can mitigate the risk by telling the model to give you verifiable sources for its claims. You will then have to check the sources, of course, but that’s much less work than starting from scratch.

You can also ask one AI to check another AI’s work. That doesn’t give you complete certainty, but it increases the chances of spotting an error. To give an example: I’m currently in the process of renting out an apartment that I own, and German law is very detailed in prescribing what costs can and cannot be factored into regulated rents. I asked three LLMs (ChatGPT, Claude, Grok) to come up with a calculation based on an invoice that I uploaded. Claude and Grok were broadly aligned; ChatGPT was way off. I looked into the invoice and found a calculation error that ChatGPT had made.

Assuming that:

  1. You know these files are schematics because you have seen graphic representations
  2. You have graphic representations because the binaries are convertible to a human-readable format
  3. You’re not particularly invested in the binaries; in fact, they sound like an obstacle to understanding. What you want is a better-documented, editable format.

Then:

  1. Export the binaries into whatever human-readable format you’re actually consuming them in today. (PDF, or even the caveman approach of taking a screenshot GIF of what you’re viewing on your screen)
  2. Feed that to Claude. Use the latest Opus model if you can. Ask whether it can map the binary to the converted image. If it can, try the approach on other binaries.
  3. Consider the output; if it appears nonsensical, coach Claude using your own knowledge of the schema and insights about where it might be misunderstanding.
  4. Maybe have Claude formalize a new specific format and have it theorize how you could write a converter in Python or something to process the rest of the files, and then actually generate the converter.
  5. Work through a similar process with the Python outputs, refining that Python script as you go.

Not sure how far you want to go with this; the first four steps might be sufficient for your needs.
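For what it’s worth, the converter in step 4 usually starts as a struct-based reader for whatever record layout Claude hypothesizes. A minimal Python sketch, using an entirely made-up format (a 4-byte magic, then length-prefixed records), just to show the shape such a script takes:

```python
# Skeleton for the kind of converter step 4 describes. The format here
# is hypothetical: a 4-byte magic "SCHM", then records of
# (uint16 type, uint16 length, payload). A real schematic format would
# come from whatever structure Claude actually works out.
import struct

MAGIC = b"SCHM"

def parse_records(data):
    """Yield (record_type, payload) pairs from the hypothetical binary."""
    if data[:4] != MAGIC:
        raise ValueError("not a recognized file")
    offset = 4
    records = []
    while offset < len(data):
        rtype, length = struct.unpack_from("<HH", data, offset)
        offset += 4
        records.append((rtype, data[offset:offset + length]))
        offset += length
    return records

if __name__ == "__main__":
    # Build a tiny sample file in memory: one record holding "R1 1k".
    blob = MAGIC + struct.pack("<HH", 1, 5) + b"R1 1k"
    for rtype, payload in parse_records(blob):
        print(rtype, payload)
```

The value of having Claude generate this for you is that it can iterate on the record layout as you feed back which files parse cleanly and which don’t.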

For the ASCII PCB stuff, this should be a lot easier and more straightforward. You might just be able to present it to Claude, tell it as much as you know about it, maybe give it some sample exports to make sense of it, and then just tell Claude to write a converter with documentation.

AI use is mostly about understanding how to decompose your problem and critique the answer effectively. Present your critique to the AI to help refine its answers. If the situation will yield to that approach, and you can give the AI some hints as to how to decode the inputs, you can probably find a fairly easy path forward.

AI is excellent at doing stuff you are not good at. In my case, art. The publisher sent cover suggestions for a book my daughter and wife are writing, and they were all crap. In about five minutes, using ChatGPT, I produced a better one, and refined it almost immediately when my daughter had a suggestion. And I’ve seen a few independently published books with AI illustrations as bonus content.
And there is a great thread about AI generated pictures.

No, but it’s a starting point. To cite an example I’ve mentioned several times already: you do not want to use AI to answer a medical question like “what do these symptoms mean, and what should I do about it?” except perhaps for an initial take before talking to an actual physician. But it’s perfectly fine, harmless, and probably very useful to ask a question like “I’m going to see a doctor about these symptoms. What are the best and most useful questions I can ask him/her?” Worst case, the answer is harmless bullshit; much more likely, it helps you gain very useful information from your doctor and demonstrates that you’re an informed patient.

At my company, our lab generates technical reports that are often 80-100 pages long. They can be a real pain to review, especially since these reports are somewhat new and our technicians are still learning how to generate them, meaning there can be lots of little mistakes. A coworker recently came into my office and proclaimed that we can use AI to review these reports in a fraction of the time it takes a human to do the job. And that is true, AI is very quick.

He ran a couple of reports through, and AI did indeed find three or four problems. He was ecstatic. Then I did a manual review of the same reports and showed him the ten or so mistakes that AI didn’t catch at all. His smile turned to a frown. AI may very well be able to perform this task in the future, but we will not be letting it review any of these reports in the meantime.