Re: NTFS uses least used clusters ? (Cluster durability/lifetime ?)
It's not possible, in the general case, to avoid fragmentation. Even for a
single user or writer, fragmentation can't generally be minimized: when a
file is created, there is usually no hint of how big it's going to grow.
Even though ZwCreateFile can accept an initial allocation size argument,
CreateFile doesn't pass one. Thus no particular strategy for assigning
initial placement to multiple simultaneously open files can guarantee that
a heavily used disk won't become fragmented.
The whole fragmentation issue becomes a non-issue when you have a randomly
accessed database which occupies most of the volume.
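For what it's worth, the recycling scheme Skybuck describes below (freed
clusters appended to the back of a free list, new allocations taken from the
front) is easy to model. Here's a minimal Python sketch of that hypothesized
mechanism - not NTFS's actual allocator, which tracks free clusters in the
$Bitmap metadata file, just an illustration of how a FIFO free list would
cycle clusters instead of immediately reusing them:

```python
from collections import deque

class FifoClusterAllocator:
    """Model of the hypothesized scheme: freed clusters go to the
    back of a queue, allocations come from the front, so clusters
    are cycled rather than immediately reused."""

    def __init__(self, total_clusters):
        self.free = deque(range(total_clusters))  # all clusters start free

    def allocate(self, count):
        # Take clusters from the front of the free list.
        return [self.free.popleft() for _ in range(count)]

    def release(self, clusters):
        # Append freed clusters to the back, delaying their reuse.
        self.free.extend(clusters)

alloc = FifoClusterAllocator(total_clusters=8)
a = alloc.allocate(3)   # takes clusters [0, 1, 2]
alloc.release(a)        # 0..2 go to the BACK of the queue
b = alloc.allocate(3)   # takes clusters [3, 4, 5], not 0..2 again
```

Under this model a file that is repeatedly deleted and recreated would sweep
across the whole volume before touching the same clusters twice - the
wear-spreading effect the original poster asks about. Note, though, that such
a scheme does nothing for fragmentation one way or the other.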
"Tony Sperling" <tony.sperling@dbREMOVEmail.dk> wrote in message
news:eaD8kfnhIHA.1208@TK2MSFTNGP05.phx.gbl...
> This has to be a misconception turned 'myth'. The used/unused clusters are
> magnetic particles that are actually kept alive by use - if not periodically
> revived by rewrites, they will fade.
>
> The HD head arrangements are worn out by use, and fragmentation increases
> movement!
>
> If I remember correctly, NTFS is designed to use the smallest free space
> available for writing new data to disk. Microsoft has actually fostered its
> own 'myth' in saying the filesystem isn't likely to fragment as much as FAT.
> In reality NTFS is happier fragmenting than not, but its design is such that
> it doesn't care (performance-wise) whether it is fragmented or not, until it
> becomes nearly full - then it grinds to a halt. There are, however,
> filesystems around that really don't fragment as much, and therefore also
> don't lose performance as a result. But NTFS doesn't care!
>
> NTFS, primarily, is a SAFE filesystem, and it is miles ahead of FAT. It may
> not be the best, but the 'best' is really always determined by the user's
> personal needs!
>
>
> Tony. . .
>
>
> "Skybuck Flying" <spam@hotmail.com> wrote in message
> news:848d1$47db3243$541983fa$23029@cache3.tilbu1.nb.home.nl...
>> Hello,
>>
>> Somebody believes NTFS works as follows:
>>
>> When NTFS needs to write new data to the disk it finds the clusters which
>> have been least used.
>>
>> This would ensure longer disk life.
>>
>> If NTFS simply re-used the same clusters over and over again, this would
>> lead to early drive failure (???).
>>
>> Is there any truth in this, or is this an internet/usenet myth? Me
>> wonders...
>>
>> (Somebody said it does so via a list of clusters.)
>>
>> (Freed clusters would be added to the back of the list)
>> (Needed clusters would be removed from the front of the list)
>>
>> Thus this would automatically cycle the clusters somewhat.
>>
>> Sounds plausible.
>>
>> Bye,
>> Skybuck.
>>
>>
>
>