Dean S. Messing wrote:
Thanks to all for the replies.
I'll answer most of the comments here.
0) The disk is unmounted.
1) The drive is (was) a backup drive holding a great deal of
   sensitive corporate laboratory research data and algorithms. The
   monetary loss if the data were stolen would be significant,
   though it's hard to put a $$ value on it. More importantly, I'm
   following corporate policy.
2) The drive is under extended warranty, so I'm sending it back for
   a new one. The power supply in the enclosure is bad; the actual
   drive is still good, but they want the whole thing back for a
   replacement. Sanding off the oxide and then melting the drive
   probably won't go over well with the manufacturer.
3) Writing zeros is not a good idea if the data is valuable. The
   faint residual magnetic-orientation information left from
   previously written data is not _that_ hard to recover with $5000
   worth of equipment, or so I've read. Multiple passes of random
   patterns are needed to make recovery costly.
   Tony Nelson's remark about newer drives having overlapping data
   tracks is interesting, and I don't know what current research
   says about the effect of that on recovery, but Gutmann's
   (slightly old) 1996 paper:
<http://www.cs.auckland.ac.nz/%7Epgut001/pubs/secure_del.html>
says in Section 2:
When all the above factors are combined it turns out that each
track contains an image of everything ever written to it, but
that the contribution from each "layer" gets progressively
smaller the further back it was made. Intelligence organisations
have a lot of expertise in recovering these palimpsestuous
images.
   Which is why 25 passes meet DoD (and my corporate) standards.
   (The sketch just after this list shows the sort of command I'm
   running.)
4) I don't know whether the fact that the process is running at 100%
   of one CPU means it is compute bound. Looking at the disk I/O
   meter in gkrellm I see bursts of writes followed by intervals of
   no transfer. I know that magnetic reorientation requires some
   time to "set", and that may be why the delays are there. Or it
   may be compute bound.
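In case it's useful, the multi-pass overwrite I'm running looks
something like this (the device name is purely illustrative --
substitute the real one, and the device must stay unmounted):

    # 25 passes of random data over the entire device;
    # -v prints progress for each pass
    shred -v -n 25 /dev/sdb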
Run "top" and you may find that the shred process is in a "D" state a
lot of the time. That means it's in an uninterruptible I/O wait,
waiting on the drive to complete some operation. Time spent in "D"
state pushes the load average up even though the process isn't
actually burning CPU.
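Something along these lines will show the state field directly
(assuming a single shred process; "R" means running on a CPU, "D"
means uninterruptible I/O wait):

    # one-shot look at shred's scheduling state and CPU share
    ps -o pid,stat,pcpu,comm -p $(pidof shred)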
Thanks for all the interesting comments on my question. At this point
I think I'll just let the thing run for the five days.
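For what it's worth, five days is about what a back-of-envelope
estimate gives. Assuming (numbers purely for illustration) a 500 GB
drive sustaining 30 MB/s:

    time = passes x capacity / write rate
         = 25 x (500,000 MB / 30 MB/s)
        ~= 25 x 4.6 hours per pass
        ~= 4.8 days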
Dean
--
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer ricks@xxxxxxxx -
- AIM/Skype: therps2 ICQ: 22643734 Yahoo: origrps2 -
- -
- Give me ambiguity or give me something else! -
----------------------------------------------------------------------