Aleksandar Milivojevic wrote:
Claude Jones wrote:
Contrary to the subject line, BackupPC does NOT back up the entire
server, at least in WindowsLand. Any file that is locked, which
includes things like your Outlook .pst file and many operating system
files, will NOT be backed up. I am too new to Linux to know how this
issue plays out in the Linux world.
In the Unix world, there are no mandatory file locks, only advisory
locks. If an application chooses to ignore the lock, nothing prevents
it from accessing the file (including removing a file that some other
process keeps open).
So, a backup application under Unix *can* back up a locked file if it
chooses to ignore the locks (or doesn't check for them). However, if
the file was being kept open for writing by some other application, the
content of the file might be corrupted/unusable after a restore is
performed (not to be confused with a corrupted backup or file; the
backup and the file will be OK, only the content might get corrupted).
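A quick illustration of just how advisory the locks are (a made-up
example using Linux's flock(1); /data/important.db is a hypothetical
path, and the lock only binds processes that also bother to call
flock):

    # terminal 1: hold an exclusive advisory lock on the file for a minute
    $ flock -x /data/important.db -c 'sleep 60'

    # terminal 2: a copy that never asks about the lock succeeds anyway
    $ cp /data/important.db /backup/important.db

The second command happily reads whatever bytes are on disk at that
moment, which is exactly how the content can end up unusable.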
There are a couple of solutions to the problem:
- stop the applications that keep the file open, then do the backup
- similar to the above: if under LVM, make a snapshot (an instant
operation), restart the applications, back up the snapshot, delete the
snapshot
- similar to the previous (but redundancy is lost while the backup
runs): if on software RAID1, flush dirty pages to disk, detach one
submirror, do the backup, then reattach the mirror. (Solaris has a nice
feature that Linux lacks: Solaris allows a submirror to be offlined
instead of detached, in which case no resync of the entire mirror is
needed, only the changed blocks are synced; plus the handy lockfs
command prevents new dirty pages from being created between the sync
and the offline (or detach) operation, so you are guaranteed to have
correct data on the offlined (or detached) submirror.)
- use the application's own backup module to do safe backups. Most
databases come with some kind of dump tool, some come with their own
backup tools and/or modules for backup systems such as Legato NetWorker
or Veritas NetBackup, and some can create snapshots. This is safe to
use on a live database as long as the database supports transactions
and/or locking *and* the applications that access the database use them
correctly (in other words, they don't leave the database in an
inconsistent state between two transactions or while tables are
unlocked). If your database can't be brought down even for a
millisecond, this might be the only option for making a backup of it
(a rough sketch follows this list).
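For instance, with a transactional database the dump tools can be run
against the live instance; the database name and paths below are made
up, and your database's own documentation is the authority here:

    # MySQL/InnoDB: --single-transaction takes a consistent dump of a live DB
    $ mysqldump --single-transaction shopdb > /backup/shopdb-$(date +%Y%m%d).sql

    # PostgreSQL: pg_dump works from its own consistent snapshot of the data
    $ pg_dump shopdb > /backup/shopdb-$(date +%Y%m%d).sql

The resulting dump is then just an ordinary file that any backup tool
(BackupPC included) can pick up.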
Aleksandar has the right idea. Speaking as the manager of a pretty big
storage farm (45 terabytes and growing by about 300GB/week), I can tell
you that backing up a filesystem with active, open, writable files is
problematic at best.
Block-level backup is far and away the fastest (think of "dd"ing a
filesystem directly to backup media), but the backed-up copy of a file
that is open for writing will be whatever happened to be on disk at the
exact moment it was backed up. Such a file will most likely not be what
you think it should be, and on restore the filesystem may even deem it
corrupted, since the directory entry describing the file will have been
backed up at a different time than the file data itself and the two
probably won't be in sync. To avoid that problem, the filesystem must
be "quiescent", meaning that you must stop all writing activity to the
filesystem in question while the backup occurs--and that usually isn't
an option. Tools to do block-level backup are things such as "dd",
"dump", etc.
As Aleksandar says, block-level backup can also be accomplished by
using RAID-1 and treating the secondary drive(s) as the backup media.
You "fail" the secondary drive(s), pull them out, replace them, and ask
the RAID to rebuild the secondaries. This is very, very clunky and I
do not recommend doing it--but it is an option.
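On Linux software RAID that dance looks roughly like this (device names
are hypothetical, and note the full mirror resync on re-add that
Solaris's offline feature avoids):

    # flush dirty pages, then pull one half of the mirror out of md0
    $ sync
    $ mdadm /dev/md0 --fail /dev/sdb1
    $ mdadm /dev/md0 --remove /dev/sdb1

    # ...back up /dev/sdb1 (or physically swap the drive)...

    # put it back; the whole mirror then resyncs
    $ mdadm /dev/md0 --add /dev/sdb1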
File-level backup is a second mechanism. The benefits are that the
files are "complete" (well, the directory entry and data should match
up) and the filesystem can remain active and on-line. Most Linux/Unix
file-level backup programs (e.g. Veritas, amanda, etc.) will ignore
advisory write locks and back up writable files. The downside is that
it's much, much slower (by orders of magnitude) compared to block-level
and it flogs the filesystem pretty badly. You must also make sure that
any application that manages raw disk (e.g. the datastore for Oracle or
Informix) creates some form of file-level dump of its raw data BEFORE
the backup occurs. File-level backup only backs up files--it won't
deal with raw partitions that are not managed by the filesystem.
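In shell terms the ordering matters more than the particular tools (the
export command here is a stand-in for whatever your database vendor
provides, and the paths are invented):

    # 1. have the owner of the raw partition serialize it to a real file first
    $ your_db_export_tool > /exports/db-full.dmp

    # 2. only then run the file-level backup; plain tar shown, amanda etc. work too
    $ tar -czf /backup/data-$(date +%Y%m%d).tar.gz /data /exports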
The best way to get a good, solid, file-level backup of everything
WITHOUT essentially entering single-user mode is to use some form of
snapshotting and do a file-level backup of the snapshot. As Aleksandar
states, LVM can do the snapshotting, and you can use something along
the lines of amanda to back up the snapshots. Note that the backup,
again, will only reflect things as they were at the time of the
snapshot. You must still get applications that handle raw data to put
their data into some form of file for the backup to pick up.
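With LVM the whole cycle is only a handful of commands; a minimal
sketch, assuming a volume group "vg0", a logical volume "data", and
enough free extents in the VG to hold the snapshot's changed blocks:

    # freeze a point-in-time image of /dev/vg0/data (sizes/names are made up)
    $ lvcreate --size 5G --snapshot --name databack /dev/vg0/data
    $ mkdir -p /mnt/snap
    $ mount -o ro /dev/vg0/databack /mnt/snap

    # file-level backup of the frozen image while the real volume stays live
    $ tar -czf /backup/data-$(date +%Y%m%d).tar.gz -C /mnt/snap .

    # throw the snapshot away when done
    $ umount /mnt/snap
    $ lvremove -f /dev/vg0/databack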
The upsides of this are:
1) The filesystem stays on-line and available.
2) The relatively slow speed of the file-level backup is of less
consequence since the backup is done "in the background".
3) The backup is performed on an inactive filesystem so filesystem
flogging shouldn't affect running applications beyond the amount of CPU
the backup actually uses.
The downsides are:
1) By its very nature, the backup will be slightly out-of-date.
2) Snapshots suck up storage (disk). If you have enough disk, then go
for it.
We use a snapshot/file-level backup mechanism ourselves and it works
well. Granted, we use expensive hardware and software to do it, but it
does work.
----------------------------------------------------------------------
- Rick Stevens, Senior Systems Engineer rstevens@xxxxxxxxxxxxxxx -
- VitalStream, Inc. http://www.vitalstream.com -
- -
- To iterate is human, to recurse, divine. -
----------------------------------------------------------------------