So... How does one actually use AI?

I’m very anti-AI. I have yet to see any results that make me go “Wow, I need this!” But - I’m not a total Luddite. If AI could do something useful for me, I would consider using it.

To that end, I have some old schematics which were done in a currently unsupported program. The file format is binary, and not publicly documented. Are any of the AI tools able to take these files and convert them into a newer, well-documented format, or is that asking too much? I’m willing to spend some small amount of money to do this, but not very much, and I don’t have a huge number of documents to train AI on. Is this currently achievable?

I also have some old PCB layouts where the file format is ASCII, but is also not publicly documented. I would like to convert those, too.

AI only works if you’re good enough to spot when it’s made a mistake. Otherwise it could slip in lots of errors without your realizing it. It’s a fantastic time-saver, money-saver, energy-saver in 90-99% of situations, but it’s those small issues you must watch out for. I wouldn’t use AI for something that was absolutely important or must be error-free. For me, even when doing a lot of document revision, I still manually proofread everything it’s done.

What’s being called Artificial Intelligence (AI) nowadays is really just an expanded version of predictive input. Years ago we had applications that watched what you were typing and tried to ‘help’ you by predicting what you would type next. Mostly they were so inaccurate & intrusive that people considered them a dang nuisance and turned them off (if possible).

The new AI still basically tries to predict what word(s) should come next, based on your input or your directions to it. Technically it’s called a Large Language Model (LLM) because it now has billions of online language samples to analyze, and so can do a much better job producing what you want to say (based on copying what many, many others have said). But it has no ability to recognize the accuracy of what it copies – so if many online sources give the same wrong info, it just accepts that. And it knows the rules of grammar & linguistics, so it can parrot all this in a reasonable-sounding manner – which is why it can sometimes produce completely wrong answers that seem so correct.
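
The “predict the next word” idea described above can be illustrated with a toy version: a bigram model that always picks the most frequent follower of each word seen in a tiny made-up corpus. Real LLMs use enormous contexts and learned neural weights rather than raw counts, but the training objective is the same kind of next-token prediction (everything in this sketch is invented for illustration):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always predict the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    # Most frequent word observed after `word` in the corpus
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it followed "the" most often
```

Note that the model has no idea whether “the cat ate the fish” is true; it only reproduces correlations in its corpus, which is exactly the failure mode described above when many sources repeat the same wrong info.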

For the OP’s problem, I don’t think AI would help much. Only if there were a lot of online examples of people converting schematics into your newer format would AI be able to reproduce that for you.

I got AI to summarise the CEO’s weekly email today. It did a pretty good job.

I have a similar hesitation towards using AI, but your envisaged use case seems pretty harmless to me, especially if you take into account Velocity’s recommendation to carefully check for errors.

I just asked what some AI say themselves. Kimi (a Chinese AI) says it can’t do it itself but recommends specific programs/tools. ChatGPT boasts about being able to do the job itself, but asks which ASCII format you have. My suggestion is to give ChatGPT a try with the files you have, unless they are sensitive (national security/privacy wise). I always presume any AI will keep your data and use it for its own purposes.

I’m a tech writer being told to use the shiny new AI tools, and AI autocomplete for text goes from amazing to nonsense at the drop of a virtual hat. It will correctly predict the need for “complete the following:” before a numbered procedure for about ten edits, then decide to add tonnes of fluff text around it on the eleventh.

I’ve found it much more useful as a powerful find-and-replace tool. Instead of having to build up a complex find-and-replace instruction using regex etc., I can just type one sentence with the instruction and the tool will do it.

Or it can find parts of a document that match very specific but weird requirements – e.g., a bullet list that follows a third-level heading, but only if there isn’t text between the heading and the list, there is a blank line, and the content is not in a task. This is a real example from the other day, and it saved me about an hour of work.
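
For a sense of what that kind of query would cost by hand, here is a rough Python regex for a simplified version of the example above (heading and list syntax assumed to be Markdown, and the “not in a task” condition omitted – the sample text is invented):

```python
import re

# Match a third-level heading followed by exactly one blank line and then a
# bullet list, with no other text in between (Markdown syntax assumed).
pattern = re.compile(
    r"^### [^\n]+\n"          # a "### " heading line
    r"\n"                     # exactly one blank line, no intervening text
    r"(?:[-*] [^\n]+\n?)+",   # one or more bullet-list lines
    re.MULTILINE,
)

doc = """### Setup

- install the tool
- run the installer

### Usage
Some text first.
- this list should not match
"""

matches = pattern.findall(doc)
print(len(matches))  # 1: only the list under "Setup" qualifies
```

Writing (and debugging) patterns like this for every ad-hoc query is exactly the hour of work the natural-language instruction replaces.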

Testing it for content generation though and it’s mostly useless beyond creating a very rough and ready draft document.

For the ASCII formatted one, try going to Claude and uploading the file. Then, ask it if it can convert it or whatever you’re trying to do. Maybe it can convert it to Mermaid or HTML or something.

For the weird binary format, I don’t know.

The question in the title doesn’t really match the OP, but to answer the title question, you can talk to it in natural language and ask what it can do. If the results aren’t what you expect, explain why and tell it to try again. I use AI all the time now for little programming tasks, and if there are errors, I just tell it what the errors are and it fixes them for me.

I kind of feel you are asking in the wrong format.

You are asking a bunch of humans to tell you how to talk to AI.

Maybe you need to ask AI how to talk to AI.

Either you get an answer or the AI goes into an infinite loop… which, having taken down a mainframe at age 13, appeals to the child hacker in me.

`10 PRINT "OH NO";`
`20 GOTO 10`

I can’t resist posting a link to this Reddit thread in response to this.

It’s true, of course, that in technical terms, what LLMs do is predict the expected next word in a text based on correlations observed in the vast data used to train the model. But I think it’s wrong and smug to conclude from this that LLMs are essentially just a glorified autocomplete. The things that modern LLMs (especially the pay-to-use tiers) are capable of accomplishing are astonishing, and we haven’t yet seen a fraction of the impact they will have on job markets. And that’s just LLMs, to say nothing of autonomous agentic AIs. Ultimately, as long as AIs get a complex job done fast (and they do), it doesn’t matter whether, under the hood, they’re “just predicting tokens”.

Man, do they ever. Nine months ago, we were barely using them and now they’re an essential part of our workflow. They can look at a document with hundreds of pages and come up with a really insightful summary in seconds.

I’m starting to think that all intelligence is just predicting tokens at this point, given how good the AIs have gotten so quickly.

To set a mental model of what a modern LLM (such as ChatGPT) can and can’t do, it helps to think of it as: a person who has read all the relevant parts of the Internet, including many books it shouldn’t legally have access to. It remembers this content (not exactly all of it, but certainly more than you or I could), and can write and interact on any of these subjects while imitating the style of humans.

There are several things you can do here:

1 Use AI as a search engine. Ask the AI if it can tell you how to convert this, or whether someone else has already done the work and published their knowledge / attempts / converter on the internet.

2 You don’t say whether you still have that old program or can still run it. AI might tell you how to run the program using compatibility options, or walk you through setting up a virtual machine in which the program will run.

3 If you, in any way, have a picture of the finished schematics, AI might (I really don’t know) be able to turn that picture into a description or file format that a modern program can use.

4 You could try using a coding agent (GitHub Copilot, Claude, etc.) and instruct it to write a program that will automatically try to figure out the file format, maybe given some clues as to the components found in the schematics. Definitely an involved process.

5 You could ask AI to guide you through deciphering the file format yourself. Start with asking what tools you need and how to understand the file header.
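
As a concrete starting point for option 5, a first pass at an unknown binary file usually means dumping the header bytes and looking for a magic string, version field, or record count. A minimal sketch in Python (the offsets and the `SCHM` magic in the usage example are made up; a real schematic file would dictate them):

```python
import struct

def inspect_header(path, n=64):
    """Return the first n bytes of a file as hex/ASCII dump lines,
    plus bytes 4-7 read as an integer both ways (a cheap endianness guess)."""
    with open(path, "rb") as f:
        data = f.read(n)
    lines = []
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        asciipart = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:04x}  {hexpart:<47}  {asciipart}")
    # Many formats open with a magic string, then a version or record count;
    # decoding bytes 4-7 with both byte orders is a quick first probe.
    if len(data) >= 8:
        le = struct.unpack("<I", data[4:8])[0]
        be = struct.unpack(">I", data[4:8])[0]
        lines.append(f"bytes 4-7 as uint32: little-endian={le}, big-endian={be}")
    return lines
```

Scanning the ASCII column for readable strings (component names, a format tag, version numbers) is usually the fastest clue to how the records are laid out, and gives the AI something concrete to reason about in option 4.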

Here are some of the best uses for AI I’ve found.

It is far better than a search engine if you need to learn something. It can be very educational if you have a back and forth dialogue about a subject you want to learn more about, because you can ask it to explain concepts you are confused about. It will also bring up concepts you didn’t even know existed, and you only learn about them for the first time by having a dialogue with the AI.

Also, NotebookLM is an amazing program. You find a PDF of a book or article (on Anna’s Archive, etc.), upload it, and then ask it questions. This is extremely helpful when you have technical training manuals: you can create a notebook in NotebookLM, add all the manuals, and then use it for tech support. If you are a student, you can upload a PDF of your textbook and ask the notebook about things you’re confused by. It’s like having a virtual tutor who understands the book.

Those are the main uses I’ve gotten out of it.

It’s a fantastic search engine. You can ask it natural language questions like “Find me the last five Minolta cameras released and rank them by their metering capability, then autofocus capability, then weight. Display the results in a concise table.” And it’ll chug through and produce a list of cameras ranked by the things you’re looking for.

For reviewing, it’s nice to be able to give it a list of contractual must-haves, tell it to compare the contract to the list, and have it find obvious omissions.

(results of prompt above on Google Gemini)

| Rank | Camera Model | Release Year | Metering (Segments) | AF Points | Weight (Body Only) |
|------|--------------|--------------|---------------------|-----------|--------------------|
| 1 | Maxxum 7 | 2000 | 14-Segment Honeycomb | 9 Points | ~550g |
| 2 | Maxxum 5 | 2001 | 14-Segment Honeycomb | 7 Points | ~335g |
| 3 | Maxxum 70 | 2004 | 14-Segment Honeycomb | 9 Points | ~375g |
| 4 | Maxxum 50 | 2004 | 14-Segment Honeycomb | 7 Points | ~335g |
| 5 | Maxxum 4 | 2002 | 2-Segment | 3 Points | ~319g |

To add another use case: I’m terrible at DIY, but lately I’ve had a couple of things to fix in my home. Changing closet locks, mending a broken freezer lid without replacing the entire lid, that sort of thing. Nothing spectacular, but enough to make an amateur like me unsure how not to get into a mess. I took photos of the issues and uploaded them to ChatGPT with a description of the problem. I got detailed advice on what to do: a list of the items I need to get, the names under which they’re sold in hardware stores, what to look out for to make sure it fits, and step-by-step instructions. In the pre-LLM days (i.e., four years ago or so) I would have spent hours trying to google this information. Now I get it instantly with a photo and a short description. Today’s LLMs are one of those things which, until very recently, would have been considered sci-fi. Now they’re here, available to anyone, for free, at any time.

I’ve been a bit skeptical about using AI or LLMs. But I have found it helpful for translating software from one language to another.
My job has decided to move away from SAS and replace it with R. I have used AI to translate sections of code from SAS to R, and it does a pretty good job. It is helpful because R has pretty poor documentation and tends to use commands with odd names. I took the translation into R and had to work with it to get the code right, but it saved a lot of time.

I primarily use ChatGPT as an aid to writing tabletop role-playing game adventures.

I give it an outline (which I’ve written) for an adventure, and it will generate ideas for encounters, opponents, etc. Its ideas aren’t always perfect, and I tweak them a lot, but it gives me a good springboard, and helps me to develop encounters and challenges that I might not have initially thought of on my own.

I also use it to generate names for NPCs, locations, etc. I’ll ask it, “give me a list of 30 possible names for [character],” and it’ll crank those out, too. For work I’ve been doing on an ongoing fantasy campaign, it’s gradually figured out which kind of names fit my vision for the setting I’ve developed (based on me telling it which names I’ve chosen).

Yes, there’s probably some truth to that. I’ve always wondered how pre-1980 writers could create a good novel by typing one character at a time on a typewriter, left to right, top to bottom. Maybe now we know.

We’re adjusting the meaning we’ve been assigning to “intelligence” if an entity can seem smarter than us while we know it’s just a machine. And, conversely, many of us are trying to find a definition of intelligence that includes just the parts “we” do that it doesn’t do.

I use AI all the time, and for one of my main use cases it’s been a huge accelerator. At work I handle a lot of proposal development. Over the past 15 years I’ve written dozens if not hundreds of RFP/RFI responses, process docs, reports, blog posts, white papers, webinars, case studies… on top of that, basically all the thought-leadership stuff my work requires. So for most client questions, I’ve answered some version of it before, often with professional writers helping polish the final result.

The problem is it used to take forever to find the right material. Question X might really be the same as question Y in a different industry, but I wouldn’t recognize that right away. I’d spend hours digging through docs and even longer figuring out which responses actually landed well versus which just scraped by.

So I dumped all that material into my AI of choice.

Now when I meet with a client, or sales sends me a call transcript, I load it in and start asking things like: “What problems did this client express?” or “Where have we solved this successfully before?” If they send a 60-question RFI, I upload it and let the AI take a first pass. I still review and edit everything, but it’s such a massive jumpstart I honestly don’t know how I worked without it.

This is interesting to me because I find ChatGPT to be an incredible search engine for seeking and organizing, but Google Gemini has been hit or miss for me (particularly its default implementation in Google search).

With respect to analyzing content, it’s amazing at summarizing PDFs, but if you load a scientific study into it, it’s incredibly sensitive to your prompt: load in a shit paper with limited prompting, and it will frame it uncritically or even positively, and will infer causality on its own. I’ve been working with ChatGPT on this for a while, trying to get it to apply a critical (but not overly critical) lens to those kinds of summaries. I think the main issue here is that this level of analysis doesn’t exist much in its corpus.