Michael D. Setzer II wrote:
> On 7 Oct 2010 at 17:36, Bill Davidsen wrote:
>
> From: Bill Davidsen <davidsen@xxxxxxx>
> Subject: Re: rsync or dd to clone a hard drive?
> Date sent: Thu, 07 Oct 2010 17:36:08 -0400
>
>> Kwan Lowe wrote:
>>> On Wed, Oct 6, 2010 at 12:23 PM, Patrick Bartek <bartek047@xxxxxxxxx> wrote:
>>>> I would use dd to clone (or back up) an entire hard drive. Easier. You
>>>> can even pipe it through gzip to get a compressed image file.
>>>
>>> I do exactly that... dd piped through gzip, then pushed through an SSH
>>> session to a remote host where it's extracted and written on the fly to
>>> another LV. I use it to move KVM LVs from machine to machine. Works
>>> great, but there's not much error checking. If it's critical I do an
>>> md5sum on the LVs at the end, then compare.
>
> Cut part.
>
> Some comments on compression. With my G4L project, which backs up and
> restores disks and partitions using dd and compression, I've also seen
> differences in speed and CPU load. I've made lzop the default compression
> option due to this.
>
> In a small partition test these were the time results:
> 10 seconds with no compression
> 3 seconds with lzop compression
> 6 seconds with gzip compression
> 18 seconds with bzip2 compression
>
> With full disk images the same machines take the following:
> 50 minutes with lzop
> 100 minutes with gzip
>
> Interestingly, restoring both the lzop and gzip images takes only about 40
> minutes, so the compression step imposes more load than decompression.
>
> The general difference between the compression ratios is about 10%. Also,
> lzop only seems to load the CPU at about 30%, rather than the 80-90% of
> gzip.
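[Kwan's dd-through-gzip workflow, with the md5sum check he mentions for critical data, can be sketched locally. This uses a scratch file in place of a real partition; all paths and file names here are illustrative, not from the thread:]

```shell
# A scratch file stands in for a real device such as /dev/sdX (hypothetical).
work=$(mktemp -d)
dd if=/dev/urandom of="$work/src.img" bs=1M count=4 2>/dev/null

# Back up: dd piped through gzip. Per the G4L figures above, substituting
# lzop trades roughly 10% of compression ratio for far lower CPU load.
dd if="$work/src.img" bs=1M 2>/dev/null | gzip -c > "$work/backup.img.gz"

# Restore, then compare md5sums, since dd and gzip alone do no
# end-to-end verification of the written image.
gzip -dc "$work/backup.img.gz" | dd of="$work/restore.img" bs=1M 2>/dev/null
md5sum "$work/src.img" "$work/restore.img"
```

[For the remote case in the thread, the gzip output would be piped through ssh and decompressed on the far side instead of written locally.]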
> There are lots of things that affect the speed, so this might vary based
> on the overall system and network.
> Good Luck.

A lot depends on the speed of the network and the CPU. If I were in Europe
using a 128k ISDN connection, I might well run gzip at a low level, because
while lzop is fast, its compression suffers. I might even use lzma with the
-1 or -2 option, since that can be faster than bzip2 and produces better
compression. And for backing up sparse files on a file basis, the -S
(sparse) option of tar can make a big difference.

  tar -cSf - *.img | ssh foo "tar xSf -"

might reduce the bytes transferred better than compression, depending on
file content.

As you said, "lots of things affect the speed." Pick the compression which
produces output slightly faster than the network and you get close to
optimal. Use of a buffering program can keep the network loaded; things
like bcopy or ptbuf are examples.

-- 
Bill Davidsen <davidsen@xxxxxxx>
"We have more to fear from the bungling of the incompetent than from the
machinations of the wicked." - from Slashdot

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
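[Bill's point about tar -S can be demonstrated locally: a mostly-empty sparse image archives to almost nothing before any compression is involved, while the same image without -S is stored at full apparent size. A minimal sketch with illustrative file names:]

```shell
# Create a 100 MB apparent-size image that occupies almost no disk blocks.
work=$(mktemp -d)
truncate -s 100M "$work/disk.img"

# With -S, tar records only the data regions of the file; without it,
# the holes are read back as literal zeros and stored in full.
tar -cSf "$work/sparse.tar" -C "$work" disk.img
tar -cf  "$work/plain.tar"  -C "$work" disk.img

ls -l "$work/sparse.tar" "$work/plain.tar"
```

[The same -S flag must be given on the extracting side, as in Bill's tar xSf, so the holes are recreated rather than written out as zeros.]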