How do they execute wear leveling systems in Flash cards?

Flash cards can only handle 100,000 or so writes to a given address, but are divided into sectors that are used according to an internal wear leveling system so that all sectors age similarly on average.
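To make the idea concrete, here's a minimal sketch of dynamic wear leveling: a logical sector is remapped on every write to whichever free physical sector has the lowest erase count. All the names and numbers are made up for illustration; real controllers are far more involved.

```python
# Hypothetical sketch of dynamic wear leveling. A logical sector is
# remapped on each write to the least-worn free physical sector, so
# repeated writes to one logical address spread across the whole card.

class WearLevelingFTL:
    def __init__(self, num_physical):
        self.erase_counts = [0] * num_physical
        self.mapping = {}                      # logical -> physical sector
        self.free = set(range(num_physical))

    def write(self, logical, data):
        # Pick the least-worn free physical sector (ties broken by index).
        target = min(self.free, key=lambda p: (self.erase_counts[p], p))
        self.free.remove(target)
        old = self.mapping.get(logical)
        if old is not None:
            # The stale copy's sector is erased and returned to the pool.
            self.erase_counts[old] += 1
            self.free.add(old)
        self.mapping[logical] = target
        # (The actual data write to `target` is omitted in this sketch.)

ftl = WearLevelingFTL(num_physical=8)
for _ in range(1000):
    ftl.write(0, b"same logical sector every time")
# Wear is spread: no single physical sector absorbs all 1000 writes.
print(max(ftl.erase_counts) - min(ftl.erase_counts))  # -> 1
```

With no remapping, one physical sector would have taken all 1000 writes; here the erase counts across the eight sectors differ by at most one.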

How does this system work, if it can’t keep rewriting its own records in the same place? Doesn’t the card need a little space with a much longer service life in which to do the recordkeeping required to do the load leveling for the rest of the space?

No idea, but given how few times a flash card is actually written to in real life compared with its theoretical limits, I'd guess the number of cards replaced because this limit was exceeded (and where that was actually the diagnosed problem) is infinitesimally small. I don't think it would make economic sense to engineer in special protections for this wear issue beyond some sort of read-write distribution algorithm.

The algorithms are proprietary. I doubt any of them are described completely in any resource accessible online.

Whatever state information it needs to retain can also be moved around the card to avoid writing to the same area too much.

I think that’s sort of the OP’s point. If the file table (or whatever) is constantly moved around the card, how do you know where to look for it? There has to be something that’s in an easily predictable place that will tell you where to look for more information. And every time you move the file table (or whatever), you’ll have to rewrite that first bootstrapping pointer. Which means that that address will get rewritten all the time.

I’m not sure what they actually do, but I have a few ideas.

The most obvious one is that the basic filesystem pointer isn't kept on the rewritable flash itself while running, but in some form of volatile memory. That way you reduce the number of rewrites to once per reset/power cycle. 100,000 writes is pretty conservative for flash; lots of flash chips are specced at orders of magnitude more, and other components will wear out long before you can put a device through 100,000 power cycles. You can then rewrite the file table with each write and just keep track of where the most recent copy was. You'd still need some way of recovering the filesystem after a catastrophic loss, but either a journaling system or some known way of crawling the filesystem for the most recent data would do fine.
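The "crawl the filesystem for the most recent data" idea can be sketched like this: each copy of the file table is written with an increasing sequence number, and on mount you scan every slot and keep the copy with the highest one. The slot layout and names here are assumptions, not any real card's on-flash format.

```python
# Sketch of sequence-number recovery: on mount, scan all file-table
# slots and pick the copy with the highest sequence number. BLANK
# models an erased (never-written) slot.

BLANK = None

def mount(slots):
    """Return (slot_index, table) of the newest file-table copy."""
    best = None
    for i, entry in enumerate(slots):
        if entry is BLANK:
            continue
        seq, table = entry
        if best is None or seq > best[0]:
            best = (seq, i, table)
    if best is None:
        raise IOError("no valid file table found")
    return best[1], best[2]

# Simulate a card where the table was rewritten into rotating slots,
# then power was lost; slot 2 holds the newest copy (seq 7).
slots = [(5, "table-v5"), (6, "table-v6"), (7, "table-v7"), BLANK]
print(mount(slots))  # -> (2, 'table-v7')
```

Because the newest copy is found by scanning rather than by a fixed pointer, no single "pointer address" gets rewritten on every update.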

Just off the top of my head I can think of a few ways to do it. But if you really care, search Google Scholar (http://scholar.google.com/) and pull up papers on this.

Looks like it may survey a few techniques.

Here is one way.
In general, flash is erased in blocks of, say, 1 kbyte, and then written in smaller chunks of 8 to 32 bits. In, say, the first block you could keep a pointer to the leveling information. When the information is updated, you write the new pointer at the next address. When looking up the address of the leveling information, you use the last non-blank entry in the first block. Now the first block needs to be erased only once every few hundred writes.
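The scheme above can be sketched in a few lines. Flash erases to all 1s and can only be programmed (not un-programmed) in place, so updates append to the next blank slot and a read takes the last non-blank entry; the block is erased only when the chain fills up. The constants are illustrative.

```python
# Sketch of the pointer-chain scheme: the first erase block holds a
# chain of pointer records. Updates append to the first blank slot;
# the current pointer is the last non-blank entry.

ERASED = 0xFFFF   # flash erases to all 1s
SLOTS = 256       # e.g. a small block holding 256 16-bit entries

block = [ERASED] * SLOTS

def update_pointer(block, new_addr):
    """Append new_addr to the first blank slot; erase the block if full."""
    for i, slot in enumerate(block):
        if slot == ERASED:
            block[i] = new_addr
            return False           # no erase needed this time
    # Block full: one erase cycle, then start the chain over.
    for i in range(len(block)):
        block[i] = ERASED
    block[0] = new_addr
    return True                    # one erase consumed

def read_pointer(block):
    """Return the last non-blank entry (the current pointer)."""
    current = None
    for slot in block:
        if slot == ERASED:
            break
        current = slot
    return current

erases = sum(update_pointer(block, addr) for addr in range(1000))
print(erases, read_pointer(block))  # -> 3 999
```

1000 pointer updates cost only 3 erase cycles of the first block, which is the point of the scheme: the bootstrapping block wears roughly 256 times slower than the data it points at.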

Wiki on wear leveling

Sandisk white paper on wear leveling

Dynamic & static wear leveling

Lots here on flash algorithms & data structures at $10 a pop