NTFS - System files...
Posted on January 4, 2005, 7:21 pm
(MFT, MFT2, Log......)
Am I correct in saying that the physical position of these files on the
disk is undefined?
Thus, if I format a disk with data on it, the only data 'actually'
destroyed is that which happens to lie in the 'random' locations to which
the system files are written?
By destroyed, I mean destroyed to a layman (myself included) - I
understand that they can still recover files in the lab that have been...
Re: NTFS - System files...
:(MFT, MFT2, Log......)
:Am I correct in saying that the physical position, of these files, on the
:disk is undefined?
Not really. The algorithm that is used to choose the locations is
consistent, so if you "format" using the same cluster size and
filesystem size, it will put them into the same place.
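That determinism is easy to check: the NTFS boot sector records the starting
cluster of $MFT and $MFTMirr at fixed offsets, so two formats with the same
parameters yield the same byte offsets. A minimal Python sketch (my own
illustration, not from the poster; it assumes you have the raw 512-byte boot
sector of a hypothetical disk image):

```python
import struct

def mft_locations(boot_sector: bytes):
    """Return the byte offsets of $MFT and $MFTMirr, given an NTFS
    boot sector. Field offsets follow the published NTFS BPB layout:
    0x0B bytes/sector, 0x0D sectors/cluster, 0x30 and 0x38 the
    starting clusters of $MFT and $MFTMirr (little-endian)."""
    bytes_per_sector = struct.unpack_from("<H", boot_sector, 0x0B)[0]
    sectors_per_cluster = boot_sector[0x0D]
    cluster_size = bytes_per_sector * sectors_per_cluster
    mft_cluster = struct.unpack_from("<Q", boot_sector, 0x30)[0]
    mirror_cluster = struct.unpack_from("<Q", boot_sector, 0x38)[0]
    return mft_cluster * cluster_size, mirror_cluster * cluster_size
```

Since every field here is a function of the format parameters alone, identical
parameters place the system files at identical locations.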
:Thus, if I format a disk with data on it, the only data 'actually'
:destroyed is that, that happens to be in the 'random' locations in which
:the system files are written too?
That is a different question that I do not know the answer to.
If it were Unix, then the answer would be that on most versions with
most filesystem types, the sections that would be written would
be the initial headers (possibly including a fixed 'inode' area that
contains all directory information about all files -except- for the
name of the files); possible backup copies of the 'superblock'
(so the basic information about the filesystem can be recovered if
the original copy becomes unreadable); and possibly file/directory
allocation sections sprinkled throughout the filesystem [common
on more modern filesystems.]
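The layout described above can be sketched numerically. Assuming an
ext2-like design with purely illustrative parameters (not any real mkfs's
defaults), this Python fragment computes which block ranges a format would
actually overwrite, and what small fraction of the disk that is:

```python
def mkfs_written_regions(fs_blocks, blocks_per_group=32768,
                         inode_table_blocks=512):
    """Sketch of an ext2-like format: each block group gets a
    superblock copy (so the filesystem is recoverable if the first
    copy dies) plus a fixed inode table holding per-file metadata.
    Everything else is left untouched by the format."""
    regions = []
    for group_start in range(0, fs_blocks, blocks_per_group):
        # superblock (or backup copy) in the group's first block
        regions.append((group_start, group_start + 1))
        # fixed inode area: all directory/file metadata except names
        regions.append((group_start + 1,
                        group_start + 1 + inode_table_blocks))
    written = sum(end - start for start, end in regions)
    return regions, written / fs_blocks
```

On a 65536-block toy filesystem this touches under 2% of the blocks, which
is why most old file contents survive a format.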
In other words, on Unix, pretty much all directory information would
get zapped, and some file contents might get zapped as well [some
filesystems allow small files to be saved into the same block as the
file header], but Unix systems seldom bother to go through and
zero out all the data blocks at the time you create the filesystem,
preferring to -effectively- zero them at the time they are allocated
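That zero-at-allocation behaviour can be sketched with a toy model (my own
Python illustration, not any real filesystem's code): stale data stays on the
raw disk after a "format", but each block is zeroed the moment the filesystem
allocates it, so no file ever observes the previous contents:

```python
class LazyZeroDisk:
    """Toy model of zero-on-allocate. 'Formatting' touches nothing;
    a block's stale contents are wiped only when it is first handed
    to a file, and unallocated blocks are never readable through
    the filesystem interface."""

    def __init__(self, raw_blocks):
        self.raw = raw_blocks      # leftover data from before the format
        self.allocated = set()

    def allocate(self, block_no):
        # zero the stale contents at allocation time, not format time
        self.raw[block_no] = b"\x00" * len(self.raw[block_no])
        self.allocated.add(block_no)

    def read_via_fs(self, block_no):
        if block_no not in self.allocated:
            raise PermissionError("unallocated blocks are not exposed")
        return self.raw[block_no]
```

Reading a raw device node still shows the old bytes, which is exactly the
privileged-versus-unprivileged distinction discussed next.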
to a file. Early interpretations of the Orange Book C4 security rules had
vendors zeroing the filesystem at the time it was created, but
one vendor (SGI) realized that the rules only required that any
block available to an -unprivileged- process must be zero'd by the
time the process sees it [so that the unprivileged process is not
able to read the previous contents of the disk], and that there was
no violation of the C4 rules if -privileged- processes were able to
examine data that had previously been on the disk.
Whose posting was this .signature Google'd from?