Re: optimising NFS Server for e.g. remote Backup

On Thu, 2006-02-16 at 08:25, John Summerfield wrote:
> >>
> >>I have some RHEL4 servers and want to set up a central backup system
> >>with a "big" hard disk. I found a very nice shell script that does
> >>incremental backups using tar.
> >>
> > 
> > 
> >>Any ideas what could be wrong? The backups are around 10 GB now,
> >>and may grow to 100 GB in the future.
> > 
> > 
> > With that amount of data to back up, don't use tar; use rsync instead.
> > Rsync is much faster, as it only copies the parts of files that have
> > changed. An excellent rsync-based backup system to try is BackupPC
> > <backuppc.sourceforge.net>. It does full and incremental backups
> > using rsync.
> 
> Some time ago, I tried backing up our server over the Internet using 
> rsync, and even after discussion with the developers and updating to the 
> latest version, it was way too slow.
> 
> It took enormous amounts of RAM and swap (fortunately, despite making
> swap several times the size of RAM, it didn't induce thrashing), and
> never completed in a reasonable time.

I also recommend the backuppc program, but rsync loads the
entire directory structure into RAM at both ends before starting,
which can be a problem if you have a huge number of files
or a relatively small amount of RAM.  This is true whether
you use rsync standalone or as the transfer method for backuppc.
One way to work around the issue is to split the run into
several smaller parts.  Doing one filesystem at a time is
easy, as sketched below; breaking up individual subdirectories
may be harder.
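
A minimal sketch of the per-filesystem approach (the destination
host, path, and list of filesystems are assumptions; adjust them
for your own setup):

  #!/bin/sh
  # Back up each filesystem in its own rsync run so neither end
  # has to hold one huge file list in RAM at once.
  # DEST and the filesystem list are examples only.
  DEST="backuphost:/backups/$(hostname)"

  for fs in /home /var /usr/local; do
      # -a       archive mode (permissions, owners, times, symlinks)
      # -x       stay on this filesystem; don't cross mount points
      # --delete remove files from the copy that are gone locally
      rsync -ax --delete "$fs/" "$DEST$fs/"
  done

Each run still builds a complete file list, but only for its own
filesystem, so peak memory use on both ends is bounded by the
largest piece rather than the whole tree.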

-- 
  Les Mikesell
   lesmikesell@xxxxxxxxx


