
Efficient NTFS Partitions

The way you set up and use an NTFS partition can have a great deal of effect on performance. Two new, but differently set up, partitions can yield drastically different performance. In addition, response time can degrade over time, even if you keep the partition defragmented. Here are the main factors and what you can do about them.

The Partition Itself

Partitions should be created as NTFS partitions, not converted from FAT. On a newly formatted NTFS partition, the Master File Table (MFT) is created at the beginning of the disk, and about 12½% of the disk is reserved for the MFT. When you convert a FAT partition to NTFS, however, the MFT is placed wherever there is free space, and it almost always ends up badly fragmented. See the 5 January 1998 article MFT Fragmentation for details.
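
To get a feel for how much space that reservation represents, here is a small illustrative calculation. The 12½% (one-eighth) figure comes from the article above; the function name and the partition sizes are just examples.

```python
# Illustrative arithmetic only: on a freshly formatted NTFS partition,
# roughly 12.5% (1/8) of the disk is reserved as the MFT zone.
def mft_zone_bytes(partition_bytes, fraction=0.125):
    """Approximate size of the reserved MFT zone, in bytes."""
    return int(partition_bytes * fraction)

GB = 1024 ** 3
for size_gb in (1, 2, 5):
    zone_mb = mft_zone_bytes(size_gb * GB) / (1024 ** 2)
    print(f"{size_gb} GB partition -> ~{zone_mb:.0f} MB MFT zone")
# prints:
# 1 GB partition -> ~128 MB MFT zone
# 2 GB partition -> ~256 MB MFT zone
# 5 GB partition -> ~640 MB MFT zone
```

A converted partition gets no such contiguous reservation, which is why its MFT fragments so readily.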

Large partitions should be avoided. They take much longer to back up and restore, data corruption becomes more likely because there is so much more writing going on, and access to the disk becomes slower (it takes longer to find, read and write files). Of course, there are valid reasons for very large partitions. If you have a 5-gigabyte database or you work with large video files, you'll need big partitions. Just don't make them big if you can avoid it. One to two gigabytes is about right.

It's also a good idea to have specialized partitions: System, Applications, Data, Temp, etc. This will increase safety and performance. See the 8 July 1997 article Configuring Your Partitions for details. (Note: That article recommended FAT for the system partition, but you may prefer NTFS, especially if you are security conscious, but also because of the NTFS self-repair capabilities.)

Directories

It is nice to have deep, multi-branched directory trees. We like the logical organization, keeping separate types of files neatly sorted. However, deep trees can really slow things down, and the sequence in which you create directories can make a big difference. Fortunately, they are easy to clean up. Here are the details:

Under NTFS, each directory is a file just like any other, but with a special internal structure called a "B+ tree". Though fairly complicated, for our purposes it's enough to say that it is a very good structure for a directory, but weak at handling changes. That is, the more changes you make, the more complicated things get internally and the longer it takes to locate your file. Since files in the directory file are listed alphabetically by file name, adding new files (or directories) can require changes in the middle of the tree structure. Many such changes can make the structure quite complex, and more complexity means less speed.

Files are located by searching through the directories. If you are looking for a file in a tree that is ten levels deep, you have to locate ten directories before you get to the one that points to the file itself. That takes much longer than finding a file that is located only three levels down. And if the directories have been changed a lot, so that their internal structure has become complex, finding files can become very slow.
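
The cost of depth can be sketched as simple counting: each path component below the root is one more directory that must be searched before the file is reached. The function and paths below are hypothetical illustrations, not NTFS internals.

```python
# A sketch of component-by-component path resolution: every directory
# level costs at least one directory lookup, so deeper trees mean more
# work before the file itself is found.
def lookup_cost(path):
    """Count the directory lookups needed to resolve a relative path."""
    components = [c for c in path.split("\\") if c]
    # every component except the file itself is a directory to search
    return len(components) - 1

print(lookup_cost(r"data\summary.txt"))                              # prints 1
print(lookup_cost(r"data\reports\2024\q1\archive\old\summary.txt"))  # prints 6
```

Real lookups cost more still when each of those directory files has grown internally complex, as described above.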

Directories tend to grow; they rarely shrink. Sometimes when you add a new file or directory, it can be fitted into the space left by a deleted entry, but often it uses new space. The directory grows and can fragment, slowing down access even more.

Long file names can cause directories and the MFT to fragment. The way file names are stored, each character requires two bytes. For computer efficiency, the DOS 8.3 format is best. On the other hand, for human efficiency, 20- to 30-character names are much better. Of course, there are exceptions, such as files on a CD-ROM or an archive partition, where they won't be rewritten, but in general, don't go over thirty characters.
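
The two-bytes-per-character cost is easy to see with a little arithmetic. The function name and file names below are examples, not anything NTFS-specific.

```python
# Illustrative arithmetic: NTFS stores file names in Unicode,
# at two bytes per character.
def name_bytes(filename):
    """Bytes needed to store a file name at two bytes per character."""
    return 2 * len(filename)

print(name_bytes("REPORT~1.TXT"))  # prints 24 -- an 8.3-style name
print(name_bytes("Quarterly sales report for January 1998.txt"))
```

A long name by itself is small, but thousands of them swell the directory files and MFT records that have to hold them.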

Diskeeper 4.0 can defragment directories, which is a great help, but this will not handle the internal complexity. To clean that up and restore the directory to its initial perfect state, copy the directory (with the copy under the same parent directory as the original), giving it a new name; then delete the original; then rename the copy to the original name. This should be done periodically (perhaps once or twice a year) if you frequently create and delete files, or whenever you delete a large number of files from a single directory. Since this changes the location of the directory file, it's a good idea to make a list of all of the directories you want to clean up and do them all at once. Then use Diskeeper to do a boot-time consolidation afterwards. This will move the directories together and defragment them.
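
The copy-delete-rename procedure can be sketched with Python's standard library; the function name, the ".rebuild" suffix and the example path are our own choices. Note that a plain copy like this will not preserve NTFS permissions, and no files in the directory should be open while it runs.

```python
# A sketch of the directory rebuild described above: copying the tree
# creates a brand-new directory file with its entries added in order,
# resetting the internal structure to a compact, freshly built state.
import os
import shutil

def rebuild_directory(path):
    """Copy a directory, delete the original, rename the copy back."""
    temp = path + ".rebuild"          # copy lives under the same parent
    shutil.copytree(path, temp)       # fresh directory file, built in order
    shutil.rmtree(path)               # discard the old, complex directory file
    os.rename(temp, path)             # restore the original name

# Example (hypothetical path):
# rebuild_directory(r"D:\projects\old_builds")
```

As the article says, follow a batch of these rebuilds with a Diskeeper boot-time consolidation to move and defragment the new directory files.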

Compression

The value of compression has, in our opinion, mostly disappeared since hard disk prices crashed. It's fine for archives and such, where fragmentation and performance issues aren't very important, but on your active partitions it can really slow you down.

When a file is compressed, it is compressed in units of 16 clusters. For each unit, the MFT record contains the Logical Cluster Number (LCN) and the number of clusters actually used, plus an entry containing an LCN of -1 and the number of clusters the unit needs when decompressed. What you have is, in effect, a file fragmented into 8-cluster fragments (on average)! If the file is large enough, there will be too many compressed units to record in one MFT record, so one or more additional MFT records will have to be used. If you compress an entire partition with a large number of files, the MFT may fill its pre-allocated space and overflow, in fragments, into the rest of the disk.
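
A quick back-of-the-envelope calculation shows how many 16-cluster compression units a large file breaks into. The function name is ours, and the 4096-byte cluster size is an assumption (it is the size recommended later in this article); the mapping-entry overhead per unit is as described above.

```python
# Illustrative arithmetic: NTFS compresses files in units of 16
# clusters, and each unit behaves like a separate fragment (about
# 8 clusters long on average).
import math

def compression_units(file_bytes, cluster_bytes=4096):
    """Number of 16-cluster compression units a file occupies."""
    clusters = math.ceil(file_bytes / cluster_bytes)
    return math.ceil(clusters / 16)

MB = 1024 ** 2
print(compression_units(271 * MB))  # prints 4336
```

With thousands of units each needing its own run entries in the MFT, it is easy to see why a 271-megabyte file can spill into hundreds of extra MFT records.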

When you decompress a file, each unit is decompressed and written to the disk; they may or may not be written contiguously. But the extended MFT records, allocated during the compression of the file, will still be in use by that file. You can copy a single, formerly compressed, file to another partition, delete the formerly compressed file, defragment the partition, and copy the file back onto the partition. That will reverse most of the compression/decompression side effects for that file. But now there are excess MFT records in the MFT that serve no purpose. (In a test done at Executive Software, compressing a 271-megabyte file created 467 excess MFT records!) The only way to completely remove all of the side effects of compression is to back up or copy all of the data to another partition, reformat, and restore the data.

This can be simplified for annual maintenance. The procedure involves moving your partitions to different physical locations, which does not matter except for the boot partition. Do not use this method for the boot partition! If you create all of your partitions the same size, you can start by reformatting your Temp partition and copying one of the other partitions to it. Then reformat the partition you copied, and copy another partition to it. Continue in this manner until you have done all your partitions, then change the partition letters and names so the data is on the correct partitions. Reboot and you're done.

Cluster Size

In the 20 October 1997 article Cluster Sizes, we described the pros and cons of NTFS cluster sizes. New data regarding the MFT and its internal functions leads us to recommend 4096 bytes as the best cluster size, especially if you will have a very large number of files or will be using compression. Never use less than 1024 bytes, as this will allow MFT records to fragment, and never exceed 4096 bytes, as compression and Diskeeper will not work.

 

If you have any comments about this article or any requests for new technical articles, e-mail

 

Executive Software Europe