Direction of flow of current vs electrons in a wire

I just read in a physics textbook that current flows in the opposite direction of electrons, and, as it seems with every textbook I’ve ever read, it doesn’t explain why. So I did a little googling, and on places like Quora it was said that it was just a convention established by Ben Franklin and that it doesn’t really matter. But isn’t it more than a convention? If charge travels in a wire at close to light speed, and I had a wire a light-year long, then, barring practical resistance issues, wouldn’t it take a year for me to feel a shock at the positive terminal? If so, given that the speed of light matters for satellite communications, aren’t there applications where picosecond differences might matter? Thanks.

And come to think of it, if the direction does matter, it should matter in the construction of circuits regardless of the speed, no? As in the relative placement of capacitors, resistors, etc.?

It really is just a convention, and you really can blame it on Ben Franklin. Nobody knew about electron flow back then, so he took a guess and had a 50/50 chance of getting it right.

Think of electrons all going around in a hoop. You push an electron in one direction, and it makes a hole in the other direction. The electrons are going to be pushed in the one direction, and will be pulled into the hole in the other direction, so the electrons move in one direction and the holes move in the other.

The electrons also don’t move all the way around the hoop for you to get shocked. Think of it more like a hula-hoop filled with marbles. If you push on the marbles in one direction, the “push” goes all the way around the hoop and a marble pops into the hole that you made by pushing one marble to the side. But it’s not the same marble that you pushed. Similarly in a wire, the charge goes all the way around the wire even though the individual electrons don’t move that far.

Also, the speed of electricity through wire is less than the speed of light.

The placement of components in a circuit is more related to things like the inductance and capacitance of the wires, as well as the speed of signals going through the circuit traces. If you are using capacitors as charge storage, for example to keep the voltage supply of an integrated circuit constant, you need to keep the capacitor very close to the IC so that the inductance of the circuit traces is small. Otherwise, you can get ringing in the voltage due to the charge bouncing back and forth between the capacitance of the capacitor and the inductance of the circuit traces.

Digital computer buses with parallel data lines and such need to have their circuit traces fairly close to the same length, so that as the voltages rise (limited by the inductance and capacitance of the circuit traces more than by the speed of electricity through the copper) they all rise and fall at approximately the same rate. Otherwise you can end up with conditions where part of the bus has its data ready while other parts are still transitioning. How fast you can run your computer bus depends on how much inductance and capacitance slow down the signal transitions, so shorter signal traces mean a faster data bus.
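To put a rough number on that ringing, here’s a quick Python sketch of the resonant frequency of the LC loop formed by a decoupling capacitor and its trace inductance. The component values are invented purely for illustration; real trace inductance depends on the board geometry.

```python
import math

# Illustrative only: the "ringing" between a decoupling capacitor and
# its circuit-trace inductance happens near the LC resonant frequency.
# These values are made up for the example.
C = 100e-9       # 100 nF decoupling capacitor (assumed)
L_short = 2e-9   # ~2 nH: capacitor placed right next to the IC (assumed)
L_long = 20e-9   # ~20 nH: capacitor far away, longer traces (assumed)

def ringing_freq_hz(L, C):
    """Resonant frequency of an ideal LC loop: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

print(f"short traces: {ringing_freq_hz(L_short, C)/1e6:.1f} MHz")  # ~11.3 MHz
print(f"long traces:  {ringing_freq_hz(L_long, C)/1e6:.1f} MHz")   # ~3.6 MHz
```

The exact frequencies aren’t the point; the point is that the trace inductance, not just the capacitor, sets the behavior, which is why placement matters.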

I knew the marble analogy, but that energy still has to propagate, and so it would still take a little less than a light year no? If not, then what is traveling at less than light speed?

The charge propagation down the wire is typically something like 70 percent of the speed of light (it actually can vary from about 50 to 90 percent or so depending on conditions). The analogy there is how quickly the “marble movement” goes through the hula-hoop.

Using your light-year long wire example, it doesn’t matter if you push an electron in or pull an electron out. The change in charge still has to propagate down the wire. It doesn’t matter if it’s positive or negative, or which direction you are defining as positive and negative.

In engineering, we use what we call “conventional current flow” which is Ben’s definition of positive and negative. Physics folks use electron flow, so their definitions are backwards from ours. As long as you know the context and the definitions being used, the equations work out the same.

Although our electrified world almost entirely uses electrons moving in a conductive material, you can also have electricity with positive charges moving. It’s part of what happens in some electrochemical batteries for instance, and in the movement of K+ and Na+ ions in your nerves.

And as engineer_comp_geek has already pointed out, if you’re going to send a signal down a long wire, it doesn’t matter if you apply a negative potential to your end of the wire (push electrons in) or a positive one (pull electrons out); you are still going to have a signal propagating down the wire in one specific direction. The direction of the current doesn’t matter.

So you’re saying it would take .7 light years to get shocked? Now I’m watching the lecture on circuits and he’s talking about using capacitors as time delays, and he’s tracing the current from positive to negative. But the delay would occur (Charge would accumulate) on the negative side of the capacitor wouldn’t it? I can’t imagine how you can analyze or build circuits properly if you’re getting the flow direction backwards.

Delay will occur on both sides of a capacitor. If I have an empty capacitor hooked up to a couple of long wires, and connect a battery to one end, the battery will start pushing electrons down one wire and pulling them out of the other wire. The effects of that pushing and pulling will propagate down both wires at the same speed, and eventually (after a time corresponding to the length of the wire) result in an excess of electrons on one side of the capacitor and a deficit of them on the other side.

If you wanted to charge a capacitor unevenly, you could do that by using wires of different lengths on the two terminals, so the shorter side would charge first (at least, until the longer side eventually caught up). But which side charged first would depend not at all on which was positive or negative, and only on which wire was longer.

Strictly speaking, incidentally, the signals do travel down the wire at the speed of light. It’s just that it’s the speed of light in that material, not c (the fundamental constant that happens to be the speed of light in a vacuum). Depending on the sort of wire and whether it’s AC or DC, the relevant material could be the metal of the wire itself, or whatever’s around the wire.

Oh, and I don’t like saying that Franklin “got it wrong”. You could just as well point out that Franklin assigned the positive charge to the more massive particle, and the negative to the less massive one, and “positive is more” sure sounds “right” to me. Or, for that matter, he could have used completely different terms, like “east” and “west”.

“Velocity Factor” is the term, and is often given for cables meant for data/communication.

Because the circuits only care (for the most part) about whether there is flow or not, not the direction of the flow. If you hold two wires connected to a car battery, one in each hand (don’t do this!), it doesn’t matter whether the electrons are flowing into your right hand and out your left or vice versa; you still get zapped.

Chronos already answered part of this, but I want to single it out for a little more detail.

The time delay a capacitor introduces is not that no signal gets transmitted until later; it’s that the capacitor reduces the initial strength of the signal, so that it only passes whatever threshold you are looking at at the other end later.

Say you have a 5V source and the thing you want to happen occurs at 3V. A capacitor will slow down the rise from 0V to 5V so that you don’t cross that 3V threshold the moment the signal reaches the place where the thing should happen.
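To make the 5V/3V example concrete, here’s a quick sketch of how long the threshold crossing is delayed. The R and C values are arbitrary, chosen just so the arithmetic has numbers in it.

```python
import math

# Sketch of the RC "delay" idea: a 5 V source charging a capacitor
# through a resistor, with a downstream threshold at 3 V.
V_supply = 5.0      # volts
V_threshold = 3.0   # volts
R = 10e3            # 10 kilohm resistor (assumed value)
C = 1e-6            # 1 microfarad capacitor (assumed value)

# Solve V_supply * (1 - exp(-t/RC)) = V_threshold for t:
#   t = RC * ln(V_supply / (V_supply - V_threshold))
t_cross = R * C * math.log(V_supply / (V_supply - V_threshold))
print(f"threshold crossed after {t_cross*1e3:.2f} ms")  # ~9.16 ms
```

Change R or C and the delay scales proportionally, which is exactly why RC pairs get used as crude timing elements.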

I could explain in more detail if you show us the specific example you are looking at. I don’t want to muddy the waters by introducing one that is significantly different.

Actually you won’t get zapped.

The human body has an odd, non-linear response to electricity. At low voltages, the electricity can’t punch through your skin, so your body’s “resistance” is very high. If you grab both leads of a multimeter, your resistance will measure several million ohms. At higher “touch voltages”, the electricity punches through your skin much more easily, and your effective resistance drops down to maybe a thousand ohms or so.

The admittedly over-simplified version of the human body that they use for standards testing and such is typically a resistor in series with a resistor and capacitor in parallel, but the values of those components aren’t constant and vary according to the amount of voltage applied to the human body and other factors. The human body is most definitely not a simple resistor.
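Here’s a toy illustration of why the touch voltage matters so much. The resistance numbers are invented for the example; real body-impedance figures (and the standards models built on them) are considerably more complicated.

```python
# Rough illustration of the nonlinear "body resistance" point:
# the same Ohm's-law arithmetic gives wildly different currents
# depending on whether the skin has been punched through.
# All resistance values here are made up for illustration.
def body_current_ma(volts, body_ohms):
    """Current through the body in milliamps, treating it (crudely) as a resistor."""
    return volts / body_ohms * 1000.0

# Low voltage, skin intact: effective resistance is very high.
print(f"{body_current_ma(12, 1_000_000):.3f} mA")  # ~0.012 mA: imperceptible

# Higher voltage, skin breakdown: effective resistance drops enormously.
print(f"{body_current_ma(120, 1_000):.1f} mA")     # far into dangerous territory
```

The takeaway matches the post above: at 12 volts the current is tiny not because 12 volts is intrinsically gentle, but because your skin’s resistance stays high at that voltage.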

At 12 volts, you’re not going to get enough current to flow to even feel anything. You can safely grab both terminals of a car battery. It won’t hurt you. Once you get up around 40 to 50 volts or so, then the electricity punches through your skin much more easily and becomes significantly more dangerous.

It used to be much more apparent in electronics that the electrons were moving, and not some positive charge carrier. Much more asymmetric, that is. This was because we used vacuum tubes, which threw the electrons off of a cathode and then made some use of the way they flew to the anode.

Perhaps there’s no better example than a cathode ray tube, such as a picture tube in a television or an oscilloscope. These have an electron gun at the back, and then some steering mechanism to spray the electron beam one way or another. In televisions the steering mechanism was electromagnetic with a multi coil yoke near the back. In oscilloscopes the steering mechanism was electrostatic with plates built into the tube close to the beam.

You can look at all of this and get a decent understanding about what was going on – but only with the understanding that it is electrons, the negative charge carrier, that compose the electricity.

I remember having a discussion with an old-school ‘current-flow’ person. I asked him, “So how does the current work in a vacuum tube diode? The current leaps from the anode to get to the cathode 'cause it’s cold and wants to be closer to the filament where it’s warmer?”

“Well, um…”

An older physics teacher colleague of mine claimed to have been using this for an interesting physics/psychology experiment that he discontinued due to, as far as he was concerned, an overabundance of caution:

He’d bring a car battery to class and ask if anyone was willing to touch the two posts. Usually no takers. Then he’d do it himself, and a few were willing to follow his example. Then he’d take a long metal bar, rest one end on one post and drop the other down on the other, resulting in a brief shower of sparks. Then he’d ask again if anyone wanted to touch the posts and all of a sudden they were hesitant again …

A light-year is a distance. It’s the distance that it takes light to travel in one year. If your wire is one light-year long, and the charge is traveling at 0.7 times the speed of light, it will actually take about 1.4 years for you to get shocked.
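The arithmetic behind that 1.4-year figure fits in a few lines. The 0.7 is the velocity factor mentioned upthread; real cables run roughly 0.5 to 0.9.

```python
# Back-of-the-envelope check of the light-year-wire numbers:
# the signal travels at some fraction of c (the "velocity factor"),
# so the delay is distance / (velocity_factor * c).
LIGHT_YEAR_M = 9.4607e15   # meters in one light-year
C_M_PER_S = 2.9979e8       # speed of light in vacuum, m/s
velocity_factor = 0.7      # typical-ish value from the thread

delay_s = LIGHT_YEAR_M / (velocity_factor * C_M_PER_S)
delay_years = delay_s / (365.25 * 24 * 3600)
print(f"{delay_years:.2f} years")  # ~1.43 years
```

Since a light-year is by definition the distance light covers in a year, the answer is just 1/0.7 years; the code only makes that explicit.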

You seem to have a bit of a misunderstanding of how capacitors work. Charge builds up on both plates. You get negative charge on one plate and positive charge on the other plate.

I’ve been designing circuits professionally for over 30 years now so somehow I’ve managed. :wink:

In circuit design, we assume “conventional current flow”, and while we technically know that this is backwards from the way that the physics folks define it, we don’t really care much. For example, since you mention using a capacitor as a delaying element, we know that the voltage on the capacitor is going to be V(1 - e^(-t/RC)) (man, that looks ugly… does anyone know how to make Discourse do superscripts and subscripts?). Do we care that the electrons are actually flowing in the opposite way from what we have defined? No. Our formula works just fine.

We also know that if you multiply the resistance and capacitance together, that is called the RC time constant, and our voltage is going to rise to about 63 percent of the supply voltage within 1 RC time constant, and that the capacitor will be fully charged (the voltage on the capacitor will be basically V, our supply voltage, or technically just close enough that it doesn’t matter) after 5 RC time constants. So with some simple multiplication, we know exactly how long of a delay we will get in the rising voltage from our simple RC circuit.

All of these calculations work just fine even with our definition of conventional current flow being backwards from the physics definition of current flow. That’s how we design circuits despite our definition being “backwards”.
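Those rules of thumb (63 percent at one time constant, essentially fully charged at five) can be checked in a few lines of Python. The R and C values are arbitrary example values.

```python
import math

# Quick check of the RC rules of thumb: V(t) = V_supply * (1 - e^(-t/RC))
R = 1e3     # 1 kilohm (arbitrary example value)
C = 10e-6   # 10 microfarads (arbitrary example value)
tau = R * C  # the RC time constant: 10 ms here

def v_cap(t, v_supply=5.0):
    """Capacitor voltage at time t while charging from 0 V toward v_supply."""
    return v_supply * (1.0 - math.exp(-t / tau))

print(f"at 1*RC: {v_cap(tau)/5.0*100:.1f}% of supply")    # ~63.2%
print(f"at 5*RC: {v_cap(5*tau)/5.0*100:.1f}% of supply")  # ~99.3%
```

Note that the percentages depend only on t/RC, not on the particular R, C, or supply voltage, which is why the rules of thumb are so handy.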

By the way, most of the capacitors we use these days can work with either positive or negative voltages. We call these “unpolarized” capacitors. All of the little chip capacitors on circuit boards these days don’t really care which side gets the positive charge and which side gets the negative charge. These capacitors are made by basically taking two metal plates and sticking a piece of ceramic in between them.

Electrolytic capacitors do care about which side gets the positive charge and which side gets the negative charge, so we call these polarized capacitors. Instead of being made out of two metal plates, these are typically made out of metal foil with electrolytic goop smeared on one side, and the whole thing is rolled up like a fruit roll-up. These are the larger capacitors that look like little tin cans on circuit boards. Apply a negative voltage to an electrolytic capacitor and sometimes they will explode and blow the metal top off of the can.

So sometimes which side gets the positive charge and which side gets the negative charge does matter, but this is taken into consideration when we assign our arbitrary reference voltages and which way we define current to flow.

Heh. You have to be careful doing that, or you’ll end up arc-welding the metal bar to the posts. Then it’s a bit of a race between the metal bar melting through and breaking the circuit or the electrolyte in the battery boiling due to the excessive current and making the battery explode.

When I was in EE school, I salvaged a capacitor that was roughly the size of a soda can out of the trash. Just to satisfy my own curiosity, I charged it up with a bench power supply then shorted the terminals with a screwdriver. I tried to just touch the screwdriver quickly to the terminals then lift it back up out of the way, but ended up picking up the entire capacitor since the screwdriver had been arc-welded to the terminals of the capacitor. The tip of the screwdriver had also been blown off.

First post… then I guess I have to wait 24 hours. Fantastic that I stumbled into this one on my first look! I’m a retired electrical power systems engineer, and I sort of remember my classes from long ago. I was also a bit baffled by positive current flow, but realized I needed to just accept it and go on. … I have held both terminals of a car battery more than once, but would not do what an electrician I heard of did (friend of a friend, but the source was not joking), which was to always stick his finger into a light socket to make sure the power was off. Until he did it standing in a puddle, and that was the end of that.

Fair enough. I’m a digital HW engineer. I don’t deal with anything over 3.3 volts, it scares me. :slight_smile:

The point remains, though. Replace the person with a lightbulb. (but not an LED!) You complete a circuit and current flows, the bulb lights up. Doesn’t matter which end the electrons go in and out.

Negative current going one direction is the same as positive current going the opposite direction. The two are completely identical from the perspective of the circuit. It’s actually a good thing Ben Franklin guessed wrong, or I feel like a lot more people would mistakenly believe that electrons go zipping through copper at the speed of light.

Electrons moving through a metal lattice is only one kind of current. There are other kinds of current, where positive charges flow instead. The whole point of “current” as a concept, and schematic circuit analysis in general, is to abstract away all that messy subatomic chemistry stuff and focus on the quantities that matter to the engineer or electrician. Sure, physicists need more nuanced concepts like current density and electron clouds, but a guy designing a circuit just needs to know that if a coulomb of charge passes a particular point every second, that’s one ampere of current. And it doesn’t matter to him whether it’s electrons going one way, or holes going the other way, or various ions moving back and forth through an electrolyte.

If you’re familiar with computers, think about a guy designing some email application. He doesn’t care if the messages go through coaxial cable, an RS-232 serial interface, fiber optics, or even smoke signals, really. All that stuff has been abstracted away so he doesn’t have to worry about it. Sure, if you’re designing a new physical layer protocol, you have to be aware of physical layer details, but otherwise you don’t. The same goes for circuits. Physicists and materials scientists need to know what’s going on at the subatomic level in their wires, but the engineer or electrician just needs to know that current flows from higher voltage to lower voltage when they connect the circuit.