Re: Question on shredding a terabyte drive


 



On Wednesday 02 September 2009 22:39:24 you wrote:
> On 02Sep2009 22:17, Marko Vojinovic <vvmarko@xxxxxxxxx> wrote:
> | On Wednesday 02 September 2009 21:32:32 Dean S. Messing wrote:
> | > I have a terabyte SATA drive that I need to securely wipe clean.
> |
> | I have always wondered about this, why not just do a rm -rf *  on the
> | drive, then put one big file on it (some divx movie or such), and copy it
> | over and over under different names until the drive space gets exhausted
> | completely? This can easily be scripted, and I believe it would work as
> | fast as possible for a drive of given capacity.
>
> Copying /dev/zero is a fast way to get an arbitrary amount of data (my
> standard anecdote involves emptying it, which I did once on an ancient
> system). It will be faster than copying a real file since the "read"
> part is free.

You are right, zeroing is faster, of course. I mentioned a divx movie just to 
make the written data more random than all-zeroes, which might be more secure, 
but the end result is the same, I guess. :-)
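For reference, a minimal sketch of the zeroing approach. On a real drive the 
target would be a device node such as /dev/sdX (a placeholder name, and the 
command would destroy everything on it), so a scratch file stands in here to 
keep the sketch safe to run:

```shell
# Sketch: overwrite "old" data with zeros via dd.
# In real use: dd if=/dev/zero of=/dev/sdX bs=1M  (hypothetical device name!)
target=$(mktemp)

# Simulate old data, then overwrite it with a zero pass.
dd if=/dev/urandom of="$target" bs=1K count=4 2>/dev/null
dd if=/dev/zero    of="$target" bs=1K count=4 conv=notrunc 2>/dev/null

# Verify: the file should now compare equal to 4 KiB of zero bytes.
if cmp -s <(head -c 4096 /dev/zero) "$target"; then
    echo "all zeros"
fi
rm -f "$target"
```

On a real device you would drop the count= limit and let dd run until it hits 
the end of the disk.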

> HOWEVER:
>
> The purpose of shred is to rewrite the data many times with random data,
> since it is technically possibly to read "old" patterns from the drive
> with the right (expensive and special) hardware.

This is the part that puzzles me. Consider the following thought experiment. 
Suppose I have all that state-of-the-art, expensive and special equipment at 
my disposal, and unlimited free time. I fill the drive with data1, zero it 
out, then fill it with data2. Are you saying that I can use the equipment to 
recover the old layer of data1 (all or some part of it)? Then I could zero 
the drive again and fill it with data3. Can I use the equipment to recover 
both the data1 and data2 layers which have been deleted? Suppose I repeat the 
process arbitrarily many times. At some point the data1 layer would have to 
be lost completely, since otherwise it would mean that there is a way to read 
and write an infinite amount of data on the drive, which is impossible.

So the question is: if you suppose I have in my possession a yet-to-be-
invented-most-expensive-CIA-NSA-dream-about-it-machine for data recovery, how 
many times should a typical drive be zeroed over and over in order to destroy 
that first layer of sensitive data beyond any chance of recovery, even in 
principle?

Given that I know so little about modern hard drives, I can only guess, but I 
guess the number of such rewrite cycles is ridiculously small, like 3 or 
maybe 4 at most. It would take a serious scientific study to convince me that 
it needs 5 passes.

So what's all the fuss and hype about wiping drives, then? Create a script to 
zero out (or randomize) the drive four times, let it run for a week, and be 
done with it. It would take some extremely serious arguments to convince me 
that this would not be completely effective on any drive.
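The multi-pass loop described above can be sketched like this. Again /dev/sdX 
is a hypothetical device name, so a scratch file stands in to make the sketch 
safe to run:

```shell
# Multi-pass overwrite sketch. In real use TARGET=/dev/sdX (placeholder),
# count= is dropped, and dd runs to end-of-device on every pass.
TARGET=$(mktemp)
PASSES=4

for i in $(seq 1 "$PASSES"); do
    # Alternate a random pass and a zero pass over the same region.
    dd if=/dev/urandom of="$TARGET" bs=1K count=64 conv=notrunc 2>/dev/null
    dd if=/dev/zero    of="$TARGET" bs=1K count=64 conv=notrunc 2>/dev/null
done

echo "completed $PASSES passes"
rm -f "$TARGET"
```

GNU coreutils' shred does essentially this in one command: `shred -n 4 -z 
/dev/sdX` makes four random passes followed by a final pass of zeros.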

Best, :-)
Marko

-- 
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines
