On Sun, 2005-09-18 at 09:50, Jeff Vian wrote:
> > > > I'm taxed with moving a 200GB filesystem from one server to
> > > > another.  If it wasn't so big I'd think nothing of it and use
> > > > the old 'tar | rsh' trick, but I'm a bit concerned since it's
> > > > over 200GB and 730,000 files.
> > > >
> > > > Is it safe to use tar, or should I use dump or a script and
> > > > '/bin/cp' or what?
> > >
> > > I'd probably use rsync(1).
> >
> > What about dd?  Would it help too?
>
> Recommending dd for moving a filesystem between machines does not
> seem like a good choice to me.
>
> dd makes a bit-for-bit copy of everything.  It will be exactly the
> size of the source and may include things you don't want.  For
> example, using dd to copy an entire 80GB drive to a 160GB drive would
> likely leave you unable to access the rest of the 160GB of space,
> since dd copies not just the data but the drive geometry and the
> partition table as well.  Similar, slightly different problems may
> occur if you copied just a partition.

Yes, it is possible with dd, but it is generally not the best choice.
You'd have to duplicate the existing filesystem, then resize it.  The
only reason I can think of that would make it worth the extra trouble
is if you had millions of hardlinks (as a backuppc archive might),
which would make the copy slow for tar or rsync to handle.

> To copy a filesystem to another filesystem I would recommend rsync
> or scp.

200 gigs is not a problem for tar, but rsync has the advantage of
being able to restart.  If the drive is active and you want to
minimize downtime, do one rsync while everything is still running to
move the bulk of the data, then stop everything that might change
files (depending on how critical it is to be perfect vs. how much
downtime you can afford) and make another run with the --delete
option, which will go quickly, then cut the service over.
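To make that concrete, a rough sketch of the two-pass approach (the
hostname and paths here are invented, and you may want different
options; -a preserves ownership, permissions and timestamps, and -H
preserves hardlinks):

  # pass 1: bulk copy while the filesystem is still in use
  rsync -aH /data/ newhost:/data/

  # stop whatever writes to /data, then make a quick catch-up pass;
  # --delete removes files that went away on the source since pass 1
  rsync -aH --delete /data/ newhost:/data/

Since rsync only transfers what has changed, the second pass is fast,
and if either pass dies partway through you can simply rerun it.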
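If you do go with the 'tar | rsh' trick the original poster mentioned,
something along these lines should still work, with ssh in place of
rsh (again, the hostname and paths are made up; -p on the extracting
side preserves permissions):

  # run on the source machine; streams /data into /data on the new host
  cd /data && tar -cf - . | ssh newhost 'cd /data && tar -xpf -'

The catch is that if it dies partway through you start over from the
beginning, which is exactly where rsync's restartability pays off.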
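And for completeness, roughly what the dd route would involve, just to
show the extra trouble (the device names are made up, and this assumes
an ext2/ext3 filesystem so resize2fs can grow the copy afterward):

  # on the source machine: unmount first so the copy is consistent
  umount /dev/sda2
  dd if=/dev/sda2 bs=1M | ssh newhost 'dd of=/dev/sdb2 bs=1M'

  # on the new host, if its partition is larger than the source:
  e2fsck -f /dev/sdb2
  resize2fs /dev/sdb2

That copies the free space along with the data and keeps the
filesystem offline for the whole transfer, so for 200GB of ordinary
files it is hard to justify.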
--
  Les Mikesell
   lesmikesell@xxxxxxxxx