Question about my computer Recycle Bin

Yes, once again, I am having a problem figuring out something my computer is doing. Stop laughing; you’ll be old too some day.

This time it’s my Recycle Bin. I have been using some of the time I have on hand to sort through computer files. And this has involved dumping a lot of files into my Recycle Bin and clearing them out.

But on several occasions I have noticed something I can’t figure out. I will have a bunch of files in my Recycle Bin. I will click the button to Empty the bin. The files will all supposedly be deleted and the bin will report that it’s empty. But the next day, when I open the Recycle Bin, a bunch of the files that were supposedly deleted the day before are back in the bin.

Not an urgent problem. I just re-empty the bin to delete them a second time. But why is this happening?

I’ve seen this happen only when I have vast numbers of files (total file count, not size). I’m really not certain of the mechanism that causes it.

However, if you are certain you want the files removed, you can bypass the recycle bin by holding SHIFT during deletion.

That’s probably it.

At least one reason this can happen: if any process running on the computer has a file open, that file cannot be deleted. Note that “deleting” a file, in modern systems, doesn’t actually delete anything; it just moves the file to the recycle folder. The real deletion happens when files are removed from the recycle folder, and that’s the point at which an open file can block it.

I think there may be further cases where the operating system gets a little confused and thinks a file is still open even though the process that had it open is long gone. I suspect this can happen if the process terminates (abnormally, or perhaps even normally) while a file is still open. It’s possible the operating system might not always clean up fully when that happens, leaving loose ends around. (Unix and Linux users, for example, may be familiar with “zombie” processes that refuse to disappear even after they are dead.)

Rebooting a system typically cleans up most of that kind of stuff, although I have seen occasional exceptions even to that.

Are those files exactly the same as the ones deleted (identical size, name, timestamp, etc.)? I’m wondering if you have some background or periodic process that uses temporary files, and if it finds its workfiles unavailable, recreates them.

You could test this by leaving the (now empty) recycle bin display open, in the foreground where you can keep an eye on it. Do nothing else on the computer for a while (overnight?) and see if the display shows new files popping up.

It is also worth mentioning that “deleting” files from the recycle bin does not actually erase them. Think of it all as a paper filing system: the file name that you see on the screen is really the address of the file. If it were a paper file in a cabinet, it might point you to cabinet 5, drawer 2, “something meaningful”. What deleting does is take the address out of the index while leaving the file, “something meaningful”, still in the drawer.

Your computer may, eventually, use the space for something else at some point. If the files were something illegal (and I am not for a moment suggesting that this is the case) and some law enforcement agency really wanted to, they could find those abandoned files and look at them, even though your computer ‘forgot’ where they are.

If you really want to delete files completely, there are free programs that will do it by overwriting them with something else.
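For anyone curious what those programs do under the hood, here is a minimal Python sketch of the overwrite-then-delete idea. The function name `shred` and the single-pass default are my own choices for illustration; real tools have to handle SSD wear leveling and file-system quirks that this toy does not.

```python
import os
import tempfile

def shred(path, passes=1):
    """Overwrite a file's contents in place, then delete it.
    Illustrative only: on SSDs (wear leveling) and on journaling or
    copy-on-write file systems, stale copies of the blocks may survive
    elsewhere, which is why dedicated tools do considerably more."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace the old bytes with noise
            f.flush()
            os.fsync(f.fileno())       # push the overwrite down to the device
    os.remove(path)

# Demo on a throwaway file:
fd, victim = tempfile.mkstemp()
os.write(fd, b"sensitive data")
os.close(fd)
shred(victim)
removed = not os.path.exists(victim)
```

As the next reply notes, this buys you security, not space: the clusters were going to be reusable either way.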

Which won’t gain you any more free storage space; that’s strictly a security measure.

I don’t understand that. Once the address is lost, the occupied space becomes available again. Yes, it still has (random/rubbish) data on it, but your computer will just overwrite that whenever it needs to.

Unless I am wrong?

I object to insinuations that beautiful properly-designed operating systems like Unix have anything in common with Micro$oft Windo$e. :smiley:

Unix does not forget to close files when a process abends (terminates abnormally). If no links remain, the file will be unlinked and its space reclaimed.

And ‘Zombie’ processes don’t hang around because of OS negligence. They have, by definition, already been destroyed except for their exit status which is deliberately preserved in case it’s requested by the parent process. When that parent exits, the Zombie will also disappear.

This is Windows 10? Thus the file system is NTFS? On an SSD or spinning rust?

The drive may be approaching the long goodbye. Or the file system is corrupted. Investigate the SMART status of the drive and run chkdsk.

I’m pretty sure Windows won’t let you delete a file if any program has it open. I remember that much from Windows XP. This seems to more-or-less confirm my memories.

(You can delete a file “out from under” a Unix process, but the disk space won’t be cleared until the last process which has it open closes it.)
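That Unix behavior is easy to demonstrate from Python on any POSIX system (a sketch; the file is a throwaway created with `tempfile`):

```python
import os
import tempfile

# Create a small file, open it, then delete it while the descriptor is open.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"still readable")
tmp.close()
path = tmp.name

fd = os.open(path, os.O_RDONLY)
os.unlink(path)                      # the name disappears immediately...
name_gone = not os.path.exists(path)
data = os.read(fd, 100)              # ...but the open descriptor still sees the data
os.close(fd)                         # only now is the space actually reclaimed
```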

I seriously doubt this. OSes have to keep track of which processes are running and which ones aren’t, and if they get that wrong, you’ll have a dead system in very short order.

(The scheduler would be Mighty Unhappy if the process table was suddenly at variance with reality, for one thing.)

I’m finding it hard to believe an OS kernel wouldn’t immediately free the kernel data structures associated with a process when that process exits.

A zombie process isn’t a process: It’s an unclaimed death certificate. When a process exits in a Unix-like OS, it leaves a record of its passing including a number which can (sometimes) be inspected to determine whether it died “of natural causes” or not. The OS keeps records of this stuff because the still-living process which launched that now-deceased one might want to know whether its children are encountering fatal errors. If the parent process never claims that info, the OS keeps it around until that parent process dies, at which point everything is cleaned up by process number 1, init, the ultimate ancestor of all processes.
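You can watch the death-certificate mechanics happen from Python (a sketch assuming a Linux `/proc` file system; the exit code 7 is arbitrary):

```python
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(7)          # child dies immediately with exit code 7
else:
    time.sleep(0.5)      # give the child time to exit
    # The child is now a zombie: gone, except for its exit record.
    # Field 3 of /proc/<pid>/stat is the state letter; 'Z' means zombie.
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().rsplit(")", 1)[1].split()[0]
    _, status = os.waitpid(pid, 0)   # the parent claims the death certificate
    exit_code = os.waitstatus_to_exitcode(status)
```

Once `waitpid` returns, the zombie entry is gone and the exit code has been delivered to the parent, exactly as described above.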

The only thing I can think of that you might be talking about is uninterruptible sleep. A long-term uninterruptible sleep might be caused by a process waiting on a disk which simply isn’t responding, possibly because it’s mounted from another system via NFS and the LAN suddenly went flaky. A process waiting on a defunct drive is still “there”; it’s just catatonic, because the OS has put it in suspended animation until the disk it’s waiting on comes through with whatever the process asked for. Those processes typically can’t be killed with anything short of a reboot.

Of course, I can’t quite see how any of this has any relevance whatsoever to the OP.

If you delete a file, then remove the reference from the Recycle Bin, the data space is available for reuse, even though the original data is probably still sitting somewhere, orphaned. Smart programs can find these orphans and recover the data.

If security is a concern and you want to avoid recovery, you can overwrite the actual data with zeros or garbage data. This action does not make any more data space available because it already was.

Does that make sense?

Dude. Have you ever used Windows? This was one of the things that drove me craziest about it. It would never let me delete a damn file or folder because “files are still in use,” when they clearly weren’t. And as Senegoid says, rebooting usually cleared it up.

Having used Windows, all versions, for years, I can confirm this is a common problem. The “files in use” message is one of the most useless ones ever; it doesn’t show which files or which programs are involved.

And some programs (video editors, for example) spawn auxiliary utilities (background shadow-file creation, audio analysis, etc.) which may or may not close when the main app does. The user is not told about these, and it takes some computer expertise to find them. These “extra” programs are often the culprit behind “file open” problems.

It does not, because that is not how the Recycle Bin works.

First, the Recycle Bin is only used for user-deleted files from Windows Explorer. Files deleted through the various Windows APIs, as would be done by software, do not go through the Recycle Bin.

Second, the Recycle Bin, “under the hood”, is just another folder on the file system. Deleted files are simply moved to it using exactly the same mechanisms as any other file move. The files are deleted from there using typical Windows API calls. There is no “reference” to be orphaned. There is only the one instance of the file at all times.

“Emptying the Recycle Bin” is supposed to delete a file for real by reducing its hard-link count. When that count reaches zero, the physical space is marked as free and can potentially be overwritten. In NTFS, all this metadata should be in the Master File Table.
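The link-count mechanics are observable from Python on any POSIX file system (the names `a.txt`/`b.txt` are just for the demo; NTFS exposes the same idea through its own APIs):

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")
with open(a, "w") as f:
    f.write("shared data")

os.link(a, b)                              # a second hard link to the same file
links_after_link = os.stat(a).st_nlink     # both names point at one file
os.unlink(a)                               # removes one name; the file survives
with open(b) as f:
    data = f.read()
links_after_unlink = os.stat(b).st_nlink   # unlinking b too would free the space
```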

The “deleted” files are not moved. Only the reference (the directory entry) is altered. This entry is a “pointer,” a common computer term. Data is rarely moved, only the pointer to where it resides is modified.

You can observe this easily. Try copying a large block or blocks of data, say 500 GB, from one disk drive to another on the same computer. This requires the physical data to be transferred, and it will take some time.

Then try to “copy” the same data from one location to another on the same drive. This should be nearly instantaneous, as only the directory pointers need to be changed, which is a comparatively small amount of data. (That only works if the OS is smart enough to know this. Recent Windows versions (from 7?) know this; older ones did not (DOS?).)

I’m not sure how Linux handles this, but the concept has been in Computer Science for decades (I learned about it ca. 1978).

You are correct that only one instance of the file exists, but you might misunderstand how the reference (directory pointers) works and how files are flagged in the directory. Before the Recycle Bin was developed, DOS/Windows would delete a file by changing a single byte: the first byte of the file’s directory entry was overwritten with a marker (0xE5 in FAT) to flag the entry as deleted.

This is only a gut feeling (working around such problems is what I do now; diagnosing such problems isn’t in my job description), but I think such bugs are often the cause of OS system crashes. Windows is not known to be stable. Although old bugs have been fixed or minimized (MS often takes 20 years to do it :rolleyes: ), the increased complexity of newer programs keeps the bug list pretty full, and OS crashes have not gone away, just gotten harder to find. Yes, the task scheduler can get mighty unhappy sometimes.

NTFS allows more than one directory entry to link to the same file. Once a file is completely deleted, some housekeeping needs to be done, including updating the $Bitmap, which records which clusters are in use.

I believe you are wrong about copying files: NTFS does not support instantly snapshotting large files and will have to actually copy all the data when you make a copy of your 500 GB file. If you create a link with MKLINK, though, no data blocks need to be copied.

Having used Windows since version 2.0 I can confirm that this is definitely the case and always has been. But sometimes it’s even worse than that. I have a small free program that I got somewhere called FDEL (“Force Delete”) which traces the process holding a file open, closes it, and then deletes the file. But sometimes it doesn’t work – sometimes there seems to be a phantom file handle just sitting there with no associated process. It’s remarkable how often I can’t dismount a removable drive because Windows reports that a process still has file(s) open yet there is absolutely nothing else running that could possibly have a file open on the device (and I have auto-indexing disabled on all my external devices).
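There’s no portable stdlib way to do what FDEL does on Windows, but on Linux the equivalent bookkeeping is visible under /proc, which is how tools like lsof work. A sketch (the function name `holders` is mine, and it only sees processes you have permission to inspect; the Windows analogue would be something like Sysinternals’ handle.exe):

```python
import glob
import os
import tempfile

def holders(target):
    """Return the PIDs of processes (that we can inspect) holding `target` open.
    Linux-only sketch: walks the /proc/<pid>/fd symlinks."""
    target = os.path.realpath(target)
    pids = set()
    for link in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            if os.path.realpath(link) == target:
                # path components: ['', 'proc', '<pid>', 'fd', '<n>']
                pids.add(int(link.split("/")[2]))
        except OSError:
            continue  # the process exited, or we lack permission
    return sorted(pids)

# Demo: this very process holds the temp file open, so it must show up.
demo = tempfile.NamedTemporaryFile()
found = holders(demo.name)
```

A phantom handle with no living owner, as described above, is exactly the case where even this kind of enumeration comes up empty.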

I presume that what Musicat is referring to is moving (not copying) a file to another location on the same logical device (same drive letter). That process is identical to renaming the file – it just makes a change to the file table, and is essentially instantaneous regardless of file size. In fact on the old PDP-10 timesharing system the RENAME command was the method used to move a file from one folder (directory) to another on the same account. But actually copying a file creates another physical instance of it which is of course a different matter, as is moving a file across to a different logical or physical device.
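The move-versus-copy distinction is easy to verify on a POSIX system by checking the inode number, which identifies the underlying file (a sketch; NTFS has an analogous internal file ID):

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "src.bin")
with open(src, "wb") as f:
    f.write(b"x" * 4096)

inode = os.stat(src).st_ino
moved = os.path.join(d, "moved.bin")
os.rename(src, moved)             # same volume: only directory metadata changes
same_file = os.stat(moved).st_ino == inode   # identical underlying file

copied = os.path.join(d, "copied.bin")
shutil.copy(moved, copied)        # a true copy: new file, data rewritten
new_file = os.stat(copied).st_ino != inode
```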