Re: rsync or dd to clone a hard drive?

On 8 Oct 2010 at 16:21, Bill Davidsen wrote:

To:             	users@xxxxxxxxxxxxxxxxxxxxxxx
From:           	Bill Davidsen <davidsen@xxxxxxx>
Subject:        	Re: rsync or dd to clone a hard drive?
Date sent:      	Fri, 08 Oct 2010 16:21:18 -0400

> Michael D. Setzer II wrote:
> > On 7 Oct 2010 at 17:36, Bill Davidsen wrote:
> >
> > To:             	users@xxxxxxxxxxxxxxxxxxxxxxx
> > From:           	Bill Davidsen<davidsen@xxxxxxx>
> > Subject:        	Re: rsync or dd to clone a hard drive?
> > Date sent:      	Thu, 07 Oct 2010 17:36:08 -0400
> >
> >> Kwan Lowe wrote:
> >>> On Wed, Oct 6, 2010 at 12:23 PM, Patrick Bartek<bartek047@xxxxxxxxx>   wrote:
> >>>> I would use dd to clone (or back up) an entire hard drive.  Easier.  You can even pipe it through gzip to get a compressed image file.
> >>>
> >>> I do exactly that... dd piped through gzip then push through an SSH
> >>> session to a remote where it's extracted and written on the fly to
> >>> another LV.  I use it to move kvm LVs from machine to machine.  Works
> >>> great, but there's not much error checking.  If it's critical I do an
> >>> md5sum on the LVs at the end then compare.
> >>
> >
> > Cut part.
> >
> > Some comments on compression, based on my G4L project, which backs up
> > and restores disks and partitions using dd and compression.  I've also seen
> > differences in speed and CPU load, and I've made lzop the default
> > compression option because of this.
> >
> > In a small partition test, these were the timing results:
> > 10 seconds with no compression
> >   3 seconds with lzop compression
> >   6 seconds with gzip compression
> > 18 seconds with bzip2 compression
> >
> > With full disk images, the same machines take the following:
> > 50 minutes with lzop
> > 100 minutes with gzip
> >
> > Interestingly, restoring both the lzop and gzip images takes only about 40
> > minutes, so compressing puts more load on the CPU than
> > decompressing does.
> >
> > The general difference in compressed size between them is about 10%. Also, lzop
> > only seems to load the CPU to about 30%, rather than the 80-90% of gzip.
> >
> > There are lots of things that affect the speed, so this might vary based on the
> > overall system and network.
> > Good Luck.
> >
> A lot depends on the speed of the network and the CPU. If I were in Europe using 
> a 128k ISDN connection, I might well run gzip at a low level, because while lzop 
> is fast, the compression suffers. I might even use lzma with the -1 or -2 
> option, since that can be faster than bzip2, and produces better compression. 
> And for backing up sparse files on a file basis, the -S (sparse) option of tar 
> can make a big difference.

I wouldn't suggest attempting disk or partition images over 128K or 
most broadband connections. On a 100 Mbit network, it takes about 50 
minutes to image an 80GB disk down to about 14GB in my case.  Making 
a 9GB image file of a 40GB partition using the ntfsclone option takes about 12 
minutes. I would suggest making an image to a local USB drive or other local 
machine; the file could then be sent using any transfer option.
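For what it's worth, the dd-through-gzip imaging with an md5sum check, as discussed earlier in the thread, can be sketched like this. Since imaging a real device needs root, the sketch uses an ordinary file as a stand-in for a device such as /dev/sda or an LV:

```shell
# Stand-in "device": in real use this would be e.g. /dev/sda or an LV.
src=$(mktemp)
head -c 1048576 /dev/urandom > "$src"

# Image the device through gzip to a compressed image file.
dd if="$src" bs=1M 2>/dev/null | gzip -c > disk.img.gz

# Restore the image to a second stand-in device.
dst=$(mktemp)
gzip -dc disk.img.gz | dd of="$dst" bs=1M 2>/dev/null

# Verify, as suggested in the thread: the checksums must match.
md5sum "$src" "$dst"
rm -f "$src" "$dst" disk.img.gz
```

To push the image to another machine instead of a local file, the gzip output can be piped into ssh, e.g. `... | ssh host "gzip -dc | dd of=/dev/..."`, as Kwan described.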

>    tar -cSf - *.img | ssh foo "tar xSf -"
> might reduce the bytes transferred better than compression, depending on file 
> content.

That is a file-level backup, whereas G4L's main purpose is to make full disk or 
partition images.

> 
> As you said, "lots of things affect the speed." Pick the compression which 
> produces output slightly faster than the network and you get close to optimal. 
> Use of a buffering program can keep the network loaded, things like bcopy or 
> ptbuf are examples.
> 
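Bill's rule of thumb (pick the compression level that just outruns the link) is easy to explore empirically. A quick sketch comparing gzip levels on sample data; time each pipe with `time` on your own hardware to see the speed side of the trade-off:

```shell
# Highly compressible sample standing in for a partition image stream.
data=$(mktemp)
head -c 1048576 /dev/zero > "$data"

# Lower levels trade ratio for speed; -9 trades speed for ratio.
fast=$(gzip -1 -c "$data" | wc -c)
small=$(gzip -9 -c "$data" | wc -c)
echo "gzip -1: $fast bytes; gzip -9: $small bytes"
rm -f "$data"
```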

One thing I didn't mention in earlier messages is that I add G4L to the regular 
GRUB menu on my Fedora machines, so I can run it directly from GRUB.
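For anyone wanting to do the same, the idea is just a boot entry pointing at the g4l kernel and ramdisk. A sketch of such a stanza for GRUB legacy's /boot/grub/grub.conf; the paths, filenames, and ramdisk_size value here are illustrative, not necessarily the exact names g4l ships:

```
title G4L Disk Imaging
	root (hd0,0)
	kernel /g4l/bzImage ramdisk_size=75000
	initrd /g4l/ramdisk.lzma
```

Adjust root to whichever partition holds /boot on the machine.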

> -- 
> Bill Davidsen <davidsen@xxxxxxx>
>    "We have more to fear from the bungling of the incompetent than from
> the machinations of the wicked."  - from Slashdot
> 


+----------------------------------------------------------+
  Michael D. Setzer II -  Computer Science Instructor      
  Guam Community College  Computer Center                  
  mailto:mikes@xxxxxxxxxxxxxxxx                            
  mailto:msetzerii@xxxxxxxxx
  http://www.guam.net/home/mikes
  Guam - Where America's Day Begins                        
  G4L Disk Imaging Project maintainer 
  http://sourceforge.net/projects/g4l/
+----------------------------------------------------------+

http://setiathome.berkeley.edu (Original)
Number of Seti Units Returned:  19,471
Processing time:  32 years, 290 days, 12 hours, 58 minutes
(Total Hours: 287,489)

BOINC@HOME CREDITS
SETI        10009860.871197   |   EINSTEIN     4670910.500851
ROSETTA      2304322.602610   |   ABC          2794658.432901

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines

