xkcd thread

A farad is a unit of capacitance, which is to say the storage of static electric charge. Until ~20 years ago, a one-farad capacitor was the size of a washing machine, not something handheld. The small bite-sized ones in e.g. old-fashioned CRT TVs would have capacitances measured in picofarads, not whole farads. A 100 microfarad capacitor was the size of a small soda can.

One farad of charge is plenty enough to kill a human instantly. And if you touch both contacts, that charge will all come out … instantly.

“This sentence has a volume of one decibel.”
“What?”

“Sorry. This sentence has a volume of one bel.”
“What?”

I asked ChatGPT a few questions about what it takes to make a 1-farad capacitor that small, and it talked about how the voltage of something in that size range would not make it all that dangerous. Was ChatGPT wrong?

A capacitor is basically a small battery that doesn’t hold much but can quickly be recharged. This isn’t really right, but it’s enough for this.
If you short a 1 F capacitor by touching the ends together, you’ll get a flash and probably melted wires, like dropping a spanner across a lead-acid battery. Touching it with your fingers… it would probably make you throw it across the room.

Big caps like that were used in car stereo systems in the nineties. I think @LSLGuy’s timescale is off by a decade, perhaps, but I know they did shrink fast. A look now suggests you can get 1F caps you could reasonably fit to a circuit board.

Circa 1990 I saw a 1 F capacitor in the Digi-Key catalog, and ordered it just for fun. It was about the size of two stacked quarters. Seems to me it had some sort of limitation though - max voltage?

Brian
Just checked - there is a 13500 F (!) cap available ($240)

A capacitor doesn’t carry an amount of charge. It carries an amount of charge per volt. The total energy held in a capacitor is 1/2 Q*V (half of charge times voltage), and since Q = C*V, that works out to 1/2 C*V^2: for any given capacitor, the energy it contains will be proportional to its voltage squared. So to put a lot of energy in a capacitor, you need a high capacitance, and you need a high voltage.

Modern supercapacitors have the high capacitance part covered. But they’re only capable of holding a fairly small voltage: Try to overcharge them, and you’ll just end up letting out the magic blue smoke. So the 1-F caps you can hold in the palm of your hand really aren’t all that dangerous.
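To put some numbers on that, here's a back-of-envelope comparison using E = 1/2 C*V^2. The component values are illustrative, not specific products: a palm-sized supercap rated for a few volts versus an old-style high-voltage cap.

```python
# Energy stored in a capacitor: E = 1/2 * C * V^2
def cap_energy_joules(capacitance_farads, voltage_volts):
    return 0.5 * capacitance_farads * voltage_volts ** 2

# A typical supercapacitor: huge capacitance, tiny voltage rating.
supercap = cap_energy_joules(1.0, 2.7)       # 1 F at 2.7 V -> ~3.6 J
# An old-style high-voltage capacitor: tiny capacitance, big voltage.
hv_cap = cap_energy_joules(100e-6, 400.0)    # 100 uF at 400 V -> 8 J

print(f"1 F supercap at 2.7 V: {supercap:.2f} J")
print(f"100 uF cap at 400 V:   {hv_cap:.2f} J")
```

Counterintuitively, the tiny 100 µF high-voltage cap holds more energy than the 1 F supercap, because the voltage term is squared.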

That said, the technology is advancing very rapidly, and there’s still a lot of room for improvement. In a few decades, batteries as we know them might become obsolete, replaced by supercapacitors that are both high capacitance and high voltage.

And this shows my age. When I took my power systems class (when I thought I was going to be an electrical engineer), that was how big a 1F capacitor was. I had no idea that they had gotten so small.

So my first thought upon reading the comic this morning was, “How the hell is he holding that in his hand?”

I’m evidently showing my age too. I knew supercapacitors existed, but little about their practical realities. And I futzed up the timeline; they’re older than I thought.

Thanks to everyone else for the clarifications / corrections.

If ChatGPT was right, it was a coincidence. LLMs are not designed to return the correct answer; they don’t know what “correct” means. LLMs are designed to return statistically plausible answers.

But then, so are humans.

Well, yes, I know that. Although so far, when I ask it factual questions that I could research on my own with enough time, it’s correct well over half of the time when I do fact-check it. Anyway, there was a mismatch between that comic and what it told me. Thus I’m asking here. :slight_smile:

I dispute that humans are designed.

Humans are capable of knowing what a correct answer is; many don’t, but they still have the capacity.
Asking ChatGPT instead of just googling for human-created answers is IMHO a bad idea.

For low voltages that’s a bit of an exaggeration. Circa 1985 I was a member of a model railroad club in Silicon Valley. The switch machines used to throw the turnouts are solenoids (i.e. inductors), and as anybody who’s taken basic electronics knows, in a DC circuit inductors resist a change in current. For example, when you close a switch to send 12 VDC to a switch machine, it takes a small fraction of a second for the current to rise from zero, and that meant the turnout might not move all the way over.

OTOH, capacitors in a DC circuit sit there with all that energy stored in them and love to dump a generous amount of current instantly when there’s a change in voltage. So an EE named Fyfe designed a circuit where a capacitor would be used to ensure the turnout snapped over with enthusiasm. The idea was to use one big capacitor on the 12 V supply that goes to all the switch machines, the size depending on how many of them you wanted to throw simultaneously.
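To put rough numbers on the sluggishness: the current in an RL circuit ramps as i(t) = (V/R)(1 - e^(-t/τ)) with τ = L/R. The coil values below are invented for illustration; I have no idea what the club's actual switch machines measured.

```python
import math

# A solenoid switch machine is an inductor: fed from a plain 12 V DC
# supply, its current only ramps up as i(t) = (V/R)*(1 - exp(-t/tau)),
# with tau = L/R. A big capacitor across the supply can dump current
# immediately instead, which is the point of the Fyfe circuit.

def inductor_current(v_supply, resistance, inductance, t):
    """Current (amps) through an RL circuit t seconds after switch-on."""
    tau = inductance / resistance
    return (v_supply / resistance) * (1.0 - math.exp(-t / tau))

# Hypothetical coil: 12 V supply, 4 ohm winding, 50 mH inductance.
tau = 0.05 / 4.0                  # 12.5 ms time constant
i_at_tau = inductor_current(12.0, 4.0, 0.05, tau)
i_final = 12.0 / 4.0              # steady-state current: 3 A

# After one time constant the current has only reached ~63% of its
# final value, which is why a bare supply gives a half-hearted throw.
print(f"{i_at_tau:.2f} A of {i_final:.1f} A after {tau * 1000:.1f} ms")
```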

For route selection the railroad used NX boards (eNtrance eXit) where on a complicated bit of trackwork you’d push a button at the end where you were coming in and another button at the end where you wanted to leave and the board would figure out which turnouts had to be in which position to set up the route. On this layout there were several complicated bits and upwards of a dozen switch machines per route so we needed a lot of capacitance.

One evening I walked in and another member said, “C’mere. We just got done installing the new Fyfe circuit.” Under the railroad was a rack with shoulder to shoulder electrolytic capacitors the size of a soda can. I’ve forgotten their capacity or the number but the rack was about two feet wide by three and a half tall.

I peered at one capacitor, did the math and said in wonder, “That’s a whole Farad.”
“Yup.”
“I’ve never seen a whole Farad before.”
“Me, neither.”

Needless to say, guards were installed to keep the unwary from tangling with it.

I think that the “human ability to know what a correct answer is” is just a very highly-developed sense of saying what we’re expected to say.

I’m not sure I agree. Humans, or at least some humans, have a concept of “truth/reality”; LLMs do not.

I’d say it’s a question with a two part answer.

As @Chronos says, humans and AIs both first think of what they’re expected to say. That’s Part 1.

Here’s Part 2:
AIs then blurt their answer out.

Then, as @Frodo says, humans, at least the good “thoughtful” ones, stop and think to verify their data and conclusions with external sources before blurting it out. AIs don’t (yet) know of the existence of that second step, much less the value of doing it.

I don’t want to turn this into a big hijack, but you’re a couple years out of date with the bit about ChatGPT simply returning probabilities. AI assistants now use retrieval augmented generation (RAG) to supplement the language model with trusted sources in real time.
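For anyone curious what RAG means mechanically, here's a toy sketch of the pipeline: retrieve relevant text, then stuff it into the prompt before generating. The document store and keyword-overlap "search" below are stand-ins of my own invention; real systems use embedding-based vector search and a live model API.

```python
# Minimal retrieval-augmented generation sketch. The "retrieval" is
# naive word overlap and the model call is stubbed out; the point is
# the shape of the pipeline: retrieve -> build prompt -> generate.

DOCUMENTS = [
    "Supercapacitors reach farads of capacitance but only a few volts.",
    "Capacitor energy is one half C times V squared.",
    "Electrolytic capacitors from the 1980s were far bulkier.",
]

def retrieve(query, docs, k=2):
    """Rank documents by crude word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def answer(query):
    context = "\n".join(retrieve(query, DOCUMENTS))
    # A real assistant would now send this prompt to a language model,
    # grounding its statistically plausible output in trusted sources.
    prompt = f"Using only these sources:\n{context}\n\nQuestion: {query}"
    return prompt

print(answer("How much energy does a supercapacitor hold?"))
```

The model still generates probabilistically, but it does so conditioned on retrieved sources, which is why the answers can be checked against something.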

It obviously can still result in incorrect info, especially when it really shouldn’t be trusting a particular source. But it’s simply wrong to say it’s all probabilities.

Thanks for the update. That field is moving a lot faster than I’ve been willing to keep up with.

I’m beginning to think the range of things on which I can safely opine is shrinking rapidly. I’m hoping that’s more a matter of my laziness than of creeping stupidity / senility. I guess we’ll all get to find out over the coming years. :yikes:

I don’t think anyone is actually keeping up. There are ridiculous changes in the field month to month.