Recently, when running chkdsk on Windows 2000 (either through the Windows interface or the pre-boot text one), it’s been pausing abnormally long on phase 2, which is “verifying indexes” or something like that. It sits at 0% for about a minute, then goes from 1 to 100% at what seems like a normal pace. The other phases (verifying files, and verifying security descriptors, I believe) work fine.
The strange thing is that this is happening on three separate drives on two separate controllers, so I’m not sure where the hardware failure could be. Could the indexes somehow be damaged? How would I find out about this, or fix the problem?
Windows 2000 SP4, one 74GB WD Raptor SATA drive on (IIRC) the nForce3 SATA controller, a 250GB WD “Special Edition” on the same controller, and another 250GB P-ATA WD Special Edition drive. No recent hardware changes, and nothing I can think of software-wise that would affect it. I did, however, crash twice when trying to defrag my main drive the other day - a few days after the problems started. The third defrag attempt worked.
I’ve seen this in Windows NT when CHKDSK needed too much memory and then started paging, but NT’s CHKDSK had a hard limit on the memory it could use. The CHKDSK in Windows 2000 should use all available memory. Is this a shared drive?
And you shouldn’t need to defragment NTFS partitions anyway.
I find the need to defrag my main home machine’s applications drive occasionally - at least the inbuilt Windows defrag reports that it could do with one. Is it misreporting, or is it just that I’m a bit of a compulsive installer/uninstaller?
Quite possibly. It looks like one of my drives is getting ready to die, although I’m not sure of that. The 250GB SATA drive has recently started to act up - one time I booted and the partitions on the drive didn’t show up in Windows Explorer. Upon rebooting, my system wouldn’t get past the POST screen; unplugging that drive let the system boot. However, I shut the system down, plugged the drive back in, and it worked fine (and is working now).
I’m not sure if it’s the drive, or if the controller is going bad, or what.
But… I did try running a chkdsk (on both OSes) while nothing but my 74GB system/boot drive was active, and it still had the same issue.
To complicate things, I fiddled around inside my case to make sure all the connections were properly made, and since then I’m having some issues that don’t seem to be directly hard-drive related - video glitches in games, programs crashing, the system crashing. It may be unrelated, in that I bumped something while checking the connections that’s now causing separate problems, complicating things… or it could all point to one root problem, like a bad RAM stick, causing symptoms that I’m misinterpreting as hard drive problems.
It’s definitely not using all the memory available - I had 750+ MB of memory free during one of the Windows chkdsks. As for NTFS fragmentation - I’ve read that it keeps itself partially defragmented, but that it still benefits from the occasional manual defragmentation. Certainly defrag programs will tell me how fragmented the drive is, and specifically which files have how many fragments.
Oh - I forgot to mention this, but one of the times I crashed during a defrag I got a blue screen that said KERNEL_STACK_INPAGE_ERROR. Which I gather means it couldn’t read from the page file. But the page file is on the 74GB boot/system drive, not the one that seems to be failing. Hmm.
In a related question, does anyone know of any memory stress-testing programs besides memtest86? I don’t have a floppy drive and don’t know if I have a blank CD around here… I was considering trying to use my MuVo MP3 player, which is basically a flash drive with embedded software to boot from, but I’m not sure if the firmware is stored separately from the USB flash storage, and whether formatting it for booting would wreck the thing. So I was hoping there’s a good alternative usable in Windows.
Try unplugging and re-plugging everything - all the drive data cables, power cables, add-in cards (video, network, etc) and the RAM. It just takes one mote of dust or tin whisker to spoil the fun.
Also make sure the CPU heatsink is reasonably dust-free and that all fans (especially the CPU fan) are running. Don’t forget the ones that are probably on the video card.
This will probably do nothing for the chkdsk times, but hopefully will clear up the glitches and random crashes.
Good call - I did this last night after it crashed. I was too lazy to go get a can of compressed air, so I used a blow dryer. Cleaned all the contacts with alcohol and a Q-tip, and replaced everything. My northbridge chipset fan is damaged and barely works, but I didn’t have the tiny screwdriver necessary to fix it.
So it turns out it was probably bad RAM. I ran some freeware memory test utility. I don’t know how good it is, but it gave me lots of errors on the first run through - so I removed one RAM stick and got no errors, then swapped sticks and got tons of errors again. Looks like the Mushkin high-performance stick died.
Turns out an earlier mistake is going to benefit me. I got a third RAM stick when I decided that 1.5GB would be enough, but with the dual-channel memory controller setup you can’t use three sticks - only one, two, or four. Rather than buy a fourth, I just said screw it and stuck with two. So the other one got put aside all this time… to come to the rescue now.
What worries me, though, is that if this RAM has been going bad for a while (the error the memtest repeatedly gave was that blocks did not copy properly), then it may have damaged my data. For instance, when I was defragging the other day and it was crashing - all that data was being copied through the bad RAM stick, and hence subject to corruption. I was using diskperfect - any idea if that does CRC checks to verify the copied data?
Strangely, though, even with the new RAM, my original problem is still there - the long-ass phase 2. Could my file indexes have become damaged by data passing through the bad RAM stick? Is there some way I can fix this by rebuilding the indexes?
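For what it’s worth, chkdsk itself has switches that bear on the index question - this is just a sketch, assuming D: is the affected volume (the /I and /C switches are NTFS-only, and /F needs exclusive access to the volume, so for the boot drive it’ll schedule itself for the next reboot):

```
rem Read-only check: reports (but doesn't fix) index errors
chkdsk D:

rem Fix errors found, including bad index entries
chkdsk D: /f

rem Do a less vigorous check of index entries (NTFS only) -
rem handy as a timing comparison for the slow phase 2
chkdsk D: /i
```

If a plain `chkdsk D: /f` comes back clean, that would at least suggest the indexes themselves aren’t damaged and the slowness is coming from somewhere else.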
The MS TechNet article on NTFS is rather lacking, because it doesn’t go down to sector level.
While NTFS does benefit from defragmentation, it is also resilient to fragmentation: a few extents make little difference in performance. In practice, this stops being true when the disk starts to get full (90%+). Sorry, but I can’t find a decent online cite. When NTFS writes a file, it will try to place it in what it thinks is one contiguous chunk. And just because a drive looks like it needs defragmenting doesn’t mean that it actually does: the drive’s geometry may not be what Windows thinks it is, and some drives dynamically remap bad sectors. And let’s not start on RAIDed drives.
Defragmenting a drive is also dangerous: you can lose files quite easily if, say, there’s a power glitch mid-move. The very best way of defragmenting is to copy all your files to another drive.