Generally, you can rewrite a DVD-RW about 1,000 times. But is the disc/burner smart enough to keep using different sectors as you burn and erase, or does it always start at the beginning?
Let’s say I copy 1 gig of data onto a 4.7-gig disc, then erase the data so the disc is blank. Is there information on the now-blank disc that says if another gig is copied, it should be burned onto a completely different area than the first one? If not, it seems the disc would start giving errors even if just a 1-meg file were burned and deleted 1,000 times.
I don’t know that there’s any “intelligence” on rewritable discs, such as a field that tells the drive how many times a segment or block of segments has been written.
I believe the whole system depends, as with most rewritable media, on a write-verify process. Write the data; verify it’s readable; move on. If there’s a read error, the drive might use simple logic to try again until it decides the sector is bad, mark it so in the index, and write the data somewhere else.
If you have data verification turned on when you write discs, it should always produce a fully readable disc even if only a quarter of the surface is good for use. (I’m also pretty sure the verification you can switch on and off is a secondary check of all the data written, which has nothing to do with the immediate per-block or per-sector write verification most systems do.)
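Roughly, the per-sector loop I have in mind looks something like the sketch below. This is pseudo-Python, not any real drive or burning-library API; names like write_sector, read_sector, find_spare_sector and mark_bad are placeholders for whatever the firmware actually does.

```python
# Hypothetical sketch of a write/verify/relocate loop. All drive and
# bad_sector_index method names are assumptions, not a real API.

MAX_RETRIES = 3

def write_with_verify(drive, sector, data, bad_sector_index):
    """Write one sector, read it back, and relocate if it won't verify."""
    target = sector
    while True:
        for _ in range(MAX_RETRIES):
            drive.write_sector(target, data)
            if drive.read_sector(target) == data:   # immediate verification
                return target                       # data ended up here
        # Sector wouldn't verify after several tries: mark it bad, move on.
        bad_sector_index.mark_bad(target)
        target = drive.find_spare_sector()
        if target is None:
            raise IOError("no usable sectors left")
```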
ETA: the logic used to pick which sectors to write data to varies with the drive and OS. Some are stupid and start writing at the first available sector, even if it holds only a fraction of the amount to be written, and keep filling holes in sequence until all the data is written in fragmented chunks. Others will search for a single location large enough for the whole write, or for the fewest possible fragments. On hard drives, some OSes will move other data around to defragment on the fly and make space for a new contiguous write (as well as do that continually in the background, so that data is always unfragmented).
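To make the contrast concrete, here is a rough sketch of the two allocation styles, not modeled on any particular OS. I’m assuming the free space is tracked as (start, length) runs of free sectors.

```python
# Illustrative only: "fill holes in order" vs. "insist on one contiguous hole".

def first_fit(free_extents, size):
    """'Stupid' allocator: fill holes in sequence, fragmenting the file."""
    chunks, remaining = [], size
    for start, length in free_extents:
        if remaining == 0:
            break
        take = min(length, remaining)
        chunks.append((start, take))
        remaining -= take
    return chunks if remaining == 0 else None    # None = not enough space

def contiguous_fit(free_extents, size):
    """Pickier allocator: only accept a single hole big enough for the file."""
    candidates = [(start, length) for start, length in free_extents
                  if length >= size]
    if not candidates:
        return None
    start, _ = min(candidates, key=lambda e: e[1])   # smallest adequate hole
    return [(start, size)]
```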
If you write a 1GB file and delete it, I don’t think most drives would avoid that space in favor of “fresher” space, and we’re back to the write/verify process to keep things honest.
I think it all depends on the OS and the file type used. If any of these media, even the semi-intelligent ones like USB drives, manage the data to that level, it’s nothing I can attest to.
DVDs aren’t intelligent at all.
Depending on the software used to write the disk, and the format of the disk, bad blocks will either be re-allocated, or the write will fail.
My money is on the write failing most of the time.
Flash memory wears out over time; that is, you can only erase & write a particular block in the Flash memory a finite number of times before that block “breaks” and stops functioning. The number of writes is variable even within a single Flash part, so software that deals with Flash must allow for the unreliability. Even when a newly manufactured Flash part leaves the factory, in most cases there are already bad blocks, and the number of bad blocks will increase over time in an unpredictable way.
There are file systems written specifically for Flash memory, like jffs, yaffs and ubifs. These file systems have the logic to understand that single blocks shouldn’t be written too frequently (writes are spread out across different blocks, which is called “wear leveling”) and that a write to a single block may unexpectedly fail. On the other hand, disk-based filesystems like FAT, NTFS, ext3, etc. do not have such logic, and assume it’s perfectly fine to write the same block many times, even if other blocks on the device aren’t being used.
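A toy illustration of those two ideas, picking the least-worn block and tolerating a block that dies, might look like this. This isn’t how jffs/yaffs/ubifs are actually structured; the classes and the failure model are made up for the example.

```python
# Toy model: blocks wear out after a finite, unpredictable number of erase
# cycles, and the writer spreads writes across the least-worn good blocks.

import random

class FlashDevice:
    def __init__(self, num_blocks, rated_cycles=1000):
        self.erase_counts = [0] * num_blocks
        self.rated_cycles = rated_cycles
        self.bad = set()

    def erase_and_write(self, block, data):
        self.erase_counts[block] += 1
        # Blocks die unpredictably, roughly around their rated cycle count.
        if random.random() < self.erase_counts[block] / (self.rated_cycles * 2):
            self.bad.add(block)
            return False
        return True

def wear_leveled_write(dev, data):
    """Pick the least-worn good block; retry elsewhere if the write fails."""
    while True:
        candidates = [b for b in range(len(dev.erase_counts))
                      if b not in dev.bad]
        if not candidates:
            raise IOError("all blocks worn out")
        block = min(candidates, key=lambda b: dev.erase_counts[b])
        if dev.erase_and_write(block, data):
            return block
```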
Flash-based filesystems are often used on Linux-based embedded devices (I worked on the one that’s used by Roku). However, just to confuse matters, USB sticks which use Flash memory normally use a standard disk-based filesystem like NTFS, not a Flash filesystem. How do they get away with that? The stick actually has embedded intelligence that does the wear leveling internally. So if you write to sector 15 on a USB stick, and then later write something else to sector 15, it’s probably internally written to a different block. There is a tiny computer inside the stick that says, “OK, he wants to write to sector 15, let’s find a block that hasn’t been written recently and call that sector 15.”
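The core of that “tiny computer” is a flash translation layer: a logical-to-physical map, so that “sector 15” points at a different physical block each time it’s rewritten. Here’s a very simplified sketch of the idea; real stick firmware is far more involved, and flash.program/flash.read are assumed low-level calls, not anything standard.

```python
# Hypothetical flash translation layer: logical sectors are remapped to
# whichever free physical block has been worn the least.

class FlashTranslationLayer:
    def __init__(self, num_physical_blocks):
        self.mapping = {}                       # logical sector -> physical block
        self.erase_counts = [0] * num_physical_blocks
        self.free = set(range(num_physical_blocks))

    def write(self, logical_sector, data, flash):
        if not self.free:
            raise IOError("no free blocks")
        # Choose the least-worn free physical block for this write.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        flash.program(target, data)             # assumed low-level write call
        self.free.discard(target)
        self.erase_counts[target] += 1
        # The block that used to back this sector returns to the free pool
        # (a real FTL would erase it, typically in the background).
        old = self.mapping.get(logical_sector)
        if old is not None:
            self.free.add(old)
        self.mapping[logical_sector] = target   # "call that block sector 15"

    def read(self, logical_sector, flash):
        return flash.read(self.mapping[logical_sector])
```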
Which is all lead-in to the answer to the OP’s question about DVDs. Plain DVDs have limitations that ordinary disks and even Flash don’t have: data can only be written to a given block once, and sometimes you can only write the whole disc in one shot; you can’t write part of the disc at one time and then add more data later. DVDs normally use the UDF filesystem, which comes in different variants. The “VAT” (Virtual Allocation Table) version of UDF allows “packet writing”, which means blocks can be written incrementally; you don’t have to write the whole disc all at once. There is also another variant called “spared”, which adds a “sparing table” so that if a block really does completely fail, a spare alternate block can be used in its place. The spared version can only be used on rewritable media like DVD-RW. Not all software and DVD hardware supports the VAT and spared variants, but if your system supports spared UDF then presumably your disc will be getting some wear leveling when you write to it.
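A very loose sketch of what a sparing table accomplishes on rewritable media: packets that fail on rewrite get redirected to spare blocks set aside at format time. The names below are illustrative, not the actual on-disc UDF structures or any real drive API.

```python
# Illustrative sparing-table logic (not the real UDF data structures).

class SparingTable:
    def __init__(self, spare_blocks):
        self.remap = {}                   # original packet -> spare block
        self.spares = list(spare_blocks)  # pool reserved when the disc was formatted

    def resolve(self, packet):
        """Translate a requested packet to the block actually used."""
        return self.remap.get(packet, packet)

    def spare_out(self, packet):
        """Called when a packet fails verification on a rewrite."""
        if not self.spares:
            raise IOError("sparing area exhausted")
        self.remap[packet] = self.spares.pop(0)
        return self.remap[packet]

def rewrite_packet(drive, table, packet, data):
    target = table.resolve(packet)
    drive.write_packet(target, data)          # assumed drive call
    if drive.read_packet(target) != data:     # verify the rewrite
        target = table.spare_out(packet)      # fall back to a spare block
        drive.write_packet(target, data)
```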
This is it.
The question is comparable to: “How smart is a chalkboard?” After all, it’s erased and written over many times, too.
But all the intelligence of how to do that, and where on the chalkboard to start writing, is contained in the person doing the writing, not the chalkboard itself.
I’ve never even found an optical disk writer that will handle marking a sector as bad and moving on. If I don’t keep the surface as clean and scratch-free as possible, the burn just fails. No attempt to move the data around at all.
So I guess, if there is any of that available, it’s in a filesystem.
Generally, running (or re-running) a format on the disk will identify bad sectors and mark them as unusable for the future. But to identify them, it has to write something to each sector, then try reading it back to spot failures.
A lot of the disk-writing software writes the whole file, then afterward reads (‘verifies’) the whole file. If a sector fails at that point, it’s too late to just skip that one and go on to the next. (This is done because writing one sector, reading it to check, then going on to the next sector is much slower than writing the whole file at once.)
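The tradeoff between the two approaches looks roughly like this. Again a sketch, not real burning-software code: the per-sector version can skip a bad sector and keep going, while the batch version avoids bouncing between writing and reading but can only report failures after the whole file is already burned.

```python
# Contrast of verify-after-write vs. verify-per-sector (illustrative only).

def burn_then_verify(drive, sectors):
    """Write everything first, verify afterward; too late to relocate."""
    for i, data in enumerate(sectors):
        drive.write_sector(i, data)
    failures = [i for i, data in enumerate(sectors)
                if drive.read_sector(i) != data]
    return failures                      # non-empty list means a failed burn

def burn_verify_each(drive, sectors, spare_finder):
    """Write and verify one sector at a time, relocating as needed (slower)."""
    layout = {}
    for i, data in enumerate(sectors):
        target = i
        drive.write_sector(target, data)
        while drive.read_sector(target) != data:
            target = spare_finder()      # hypothetical: hands back a spare sector
            drive.write_sector(target, data)
        layout[i] = target
    return layout
```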
I can’t dig up a cite, but I’m pretty sure newer/better optical disk writers will do at least some sector verification/remapping rather than just failing at the end of the write. I have seen write-pattern analyses showing files scattered into multiple blocks interleaved with other data, which seems like QED to me. Maybe older stuff was more linear/fault-intolerant?
I should hope so!
This problem’s been around long enough that the manufacturers should have improved their writing software.
On the other hand, most such software is given away free with the hardware, so there’s no great monetary incentive to spend money writing better software. And most PC magazines have disappeared, taking hardware reviews with them. What’s left is online ‘reviews’, and mostly they just echo specs, emphasizing write speed above all.