windows - ntfs bad clusters (chkdsk /r)

  • georg

    A chkdsk /r on a Windows XP NTFS system partition revealed bad clusters in a file. chkdsk reported "bad clusters in file xyz have been replaced" (translated). The filesystem is stored on a plain SATA disk (no RAID).


    If the error occurred during a read, NTFS returns a read error to the calling program, and the data is lost.

    Ok, I guess the file is corrupted... really?


    When an unreadable sector is located, NTFS will add the cluster containing that sector to its list of bad clusters and, if the cluster was in use, allocate a new cluster to do the job of the old. If a fault tolerant disk driver is being used, data is recovered and written to the newly allocated cluster. Otherwise, the new cluster is filled with a pattern of 0xFF bytes.

    What is meant by a "fault tolerant" disk driver? A RAID system? Is there any way to determine whether chkdsk restored the file without data loss, or do I have to resort to a hex editor and search the file for a 4 kB block filled with 0xFF? I am pretty sure the file is corrupted, and I can easily restore it from backup, but I would like to know whether there is a definitive answer.
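    Short of a sector-level tool, one way to check is to scan the file for cluster-sized, cluster-aligned runs of 0xFF, which is the fill pattern chkdsk's documentation describes. A minimal sketch in Python, assuming a 4 KiB cluster size (verify yours with `fsutil fsinfo ntfsinfo C:`); note a legitimate run of 0xFF bytes would produce a false positive, so a hit is suggestive, not definitive:

    ```python
    import sys

    CLUSTER_SIZE = 4096                 # assumed NTFS cluster size; check with fsutil
    FILL = b"\xff" * CLUSTER_SIZE

    def find_ff_clusters(path):
        """Yield byte offsets of cluster-aligned, cluster-sized runs of 0xFF."""
        with open(path, "rb") as f:
            offset = 0
            while True:
                chunk = f.read(CLUSTER_SIZE)
                if not chunk:
                    break
                if chunk == FILL:
                    yield offset
                offset += len(chunk)

    if __name__ == "__main__" and len(sys.argv) > 1:
        hits = list(find_ff_clusters(sys.argv[1]))
        if hits:
            print("possible replaced clusters at offsets:", hits)
        else:
            print("no cluster-sized 0xFF runs found")
    ```

    Comparing the file against a known-good backup copy, if one exists, remains the only conclusive test.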

  • Answers

    Related Question

    filesystems - Bad NTFS performance
  • JesperE

    Why is it that NTFS performance is so lousy compared to, for example, Linux/ext3? Most often I see this when checking out (large) source trees from Subversion. Checkout takes around 10-15 minutes on NTFS, while the corresponding checkout on Linux (on almost identical hardware) is an order of magnitude faster (1-1.5 minutes).

    Maybe this is specific to handling lots of small files and NTFS is better when it comes to large files, but why should that be? Wouldn't improving NTFS performance for small files be hugely beneficial for Windows performance in general?

    EDIT: This is not meant as an "NTFS sucks compared to ext3" inflammatory question; I'm genuinely interested in why NTFS performs badly in certain cases. Is it just bad design (which I doubt), or are there other issues that come into play?

  • Related Answers
  • dlamblin

    NTFS has this thing called a Master File Table. It sounds really cool when you read about it.

    You can see that ext3 performs alright up to about 95% disk use, while the existence of the MFT means that NTFS doesn't really want you to use more than 90% of your disk. But I'll assume that's not your problem, and that your problem is with the many operations on many small files.

    One of the differences here is what happens when you create a small file. If a file is smaller than a block size, it is not written to its own block but rather stored in the MFT. This is nice if the file stays exactly the way it was when created. In practice, though, it means that when svn touches a file to create it, then adds to it, removes from it, or modifies it by not enough to move it to its own block, the operation is pretty slow. Also, just reading lots of small files puts some stress on the MFT where they all reside, with multiples per block. Why would it do this? It's preemptively avoiding fragmentation and using the blocks more effectively, and in general that's a good thing.
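    A rough way to observe the pattern described above is to time many create/append/delete cycles on tiny files, which approximates what svn does during a checkout. Absolute numbers depend entirely on the machine, filesystem, and cache state, so this is only an illustrative sketch, not a rigorous benchmark:

    ```python
    import os
    import tempfile
    import time

    def churn_small_files(directory, count=2000, size=64):
        """Create, append to, and delete many tiny files; return elapsed seconds."""
        start = time.perf_counter()
        for i in range(count):
            path = os.path.join(directory, f"f{i}.tmp")
            with open(path, "wb") as f:
                f.write(b"x" * size)   # small enough to be MFT-resident on NTFS
            with open(path, "ab") as f:
                f.write(b"y" * size)   # growth may force a move out of the MFT
            os.remove(path)
        return time.perf_counter() - start

    if __name__ == "__main__":
        with tempfile.TemporaryDirectory() as d:
            print(f"churned 2000 small files in {churn_small_files(d):.2f} s")
    ```

    Running this on an NTFS volume and on ext3 with otherwise similar hardware should make the per-file overhead gap visible directly.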

    In ext2 and 3, by contrast, file blocks for every file are stored next to the directory metadata of the directory they're in (when possible, if your disk is unfragmented and you have about 20% free space). This means that as svn opens directories, a number of blocks get cached basically for free in that 16 MB cache on your drive, and then again in the kernel's cache. Those files might include the .svn file and the revision files for your last update. This is handy since those are likely some of the files svn looks at next. NTFS doesn't get to do this; though large parts of the MFT should be cached in the system, they might not be the parts you will want next.

  • Joey

    Well, your particular problem is because

    1. Subversion itself comes from the UNIX world; the Windows version therefore assumes similar performance characteristics.
    2. NTFS performance really isn't great with gazillions of small files.

    What you are seeing is simply an artifact of something designed for a particular operating system, with performance assumptions made on that operating system. This usually breaks down badly when taken to other systems. Another example would be forking vs. threading: on UNIX-likes, the traditional way of parallelizing something is just to spawn another process. On Windows, where processes take at least five times longer to start, this is a really bad idea.
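    The process-vs-thread startup gap is easy to measure directly. A minimal sketch; the exact ratio varies widely by OS and hardware, and on Windows the `__main__` guard is required because `multiprocessing` spawns fresh interpreters:

    ```python
    import time
    from multiprocessing import Process
    from threading import Thread

    def noop():
        """Worker that does nothing, so only startup cost is measured."""
        pass

    def time_spawn(factory, n=50):
        """Time starting and joining n workers built by factory (Thread or Process)."""
        start = time.perf_counter()
        workers = [factory(target=noop) for _ in range(n)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"threads:   {time_spawn(Thread):.3f} s")
        print(f"processes: {time_spawn(Process):.3f} s")
    ```

    On Windows the process column is typically far larger than the thread column, which is exactly why fork-heavy UNIX tools translate poorly.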

    In general, you can't take the performance characteristics of a particular OS for granted on another one with a vastly different architecture. Also don't forget that NTFS has many file system features that were absent from the UNIX file systems widely in use at that point, such as journaling and ACLs. Those things come at a cost.

    Some day, when I have lots of free time, I was planning to write an SVN filesystem module that takes advantage of features you have on NTFS, such as transaction support (which should eliminate the "touching millions of small files" issue) and alternate data streams (which should eliminate the need for the separate .svn directory). It'd be a nice thing to have, but I doubt the SVN devs will get around to implementing such things in the foreseeable future.

    Side note: A single update on a large SVN repository I am using took around 250,000 file operations. Some tiny voice tells me that this is really a lot for 24 files that changed ...

  • Kenneth Cochran

    Here's Microsoft's info on how NTFS works. It may be overkill for what you're looking for but studying it may shed some light on what scenarios NTFS has problems with.