40 files changed, 2274 insertions, 1784 deletions
diff --git a/AUTHORS b/AUTHORS
@@ -27,3 +27,10 @@ Matthew Palmer -- mysql enhancements
 romain.tartiere@healthgrid.org -- ldap fixes
 Adam Monsen - spec file updates
 Matthew Palmer <mpalmer@debian.org> -- halt loglevel feature
+dan@garthwaite.org -- reportspace bugfix
+Tuomas Jormola <tj@solitudo.net> -- "when = manual" option
+Ian Beckwith <ianb@erislabs.net> -- dup bandwidthlimit fix
+Olivier Berger <oberger@ouvaton.org> -- dup debug output bugfix, reportinfo option
+stefan <s.freudenberg@jpberlin.de> -- dup support for Amazon S3 buckets
+maniacmartin <martin@maniacmartin.com> -- rdiff confusing error message fix
+
diff --git a/ChangeLog b/ChangeLog
@@ -1,3 +1,74 @@
+version 0.9.7 -- UNRELEASED
+    backupninja changes
+        . fix bug in reportspace, thanks Dan Garthwaite
+        . do not assume English locale when using date (Closes: #465837)
+        . add 'when = manual' option, that can be used in the global config
+          file or in a given backup action file. Thanks Tuomas Jormola for the
+          preliminary patch (Closes: #511299)
+        . new reportinfo option: when set, messages sent by handlers with
+          "info" are included in the report e-mail (Closes: #563734)
+        . reportspace: skip non-directories and places that don't exist on the
+          local filesystem (Closes: #536049)
+        . set BACKUPNINJA_DEBUG when invoked with -d (Closes: #537266)
+    lib changes
+        easydialog:
+        . Allow form fields input to grow up to 100 chars (Closes: #562249)
+    handler changes
+        ldap:
+        . Use gzip's --rsyncable option.
+        . Use bash pipefail option when needed so that failed dumps are
+          reported as such.
+        maildir:
+        . fix location of deleted_on file
+        . add missing destid_file options to ssh connections
+        . added sshoptions variable for arbitrary ssh options
+        . updated example file to include destid_file, destport and sshoptions
+        mysql:
+        . Options passed to mysqldump are now customizable with the new
+          sqldumpoptions configuration variable. Thanks to Chris Lamb for his
+          preliminary patch (Closes: #502966)
+        . Hide 'mysqladmin ping' output, to prevent confusing the user in
+          case mysqld is running but the authentication fails, which apparently
+          does not prevent mysqldump from working.
+        . Fix the error message displayed when mysqld is not running:
+          mysqladmin ping indeed returns 0 when authentication fails.
+        . Use gzip's --rsyncable option.
+        . Quote output filenames to support shell meta-characters in
+          database names.
+        . Use bash pipefail option when needed so that failed dumps are
+          reported as such.
+        pgsql:
+        . Use gzip's --rsyncable option.
+        . Quote output filenames to support shell meta-characters in
+          database names.
+        . Use bash pipefail option when needed so that failed dumps are
+          reported as such.
+        rdiff:
+        . Fix include/exclude paths with spaces (Closes: #398435)
+        . Fix confusing error if rdiff-backup cannot be found on remote server.
+        sys:
+        . New luksheaders option (default=disabled) to backup the Luks header
+          of every Luks device.
+        . New lvm option (default=disabled) to backup LVM metadata for every
+          detected volume group.
+        . Backup dmsetup info as well, for easier restoring of Luks headers.
+        dup:
+        . Fixed bandwidthlimit syntax error. Thanks to Ian Beckwith for
+          the patch.
+        . Send duplicity output to debug line by line instead of as a whole at
+          one time (Closes: #536360)
+        . Report duplicity output as "info" so that it can be included in the
+          report e-mail when reportinfo is on (Closes: #563734)
+        . Fix include/exclude paths with spaces
+        . Support backups to Amazon S3 buckets, thanks to stefan for the patch.
+    helper changes
+        dup:
+        . Do not propose to exclude /home/*/.gnupg twice anymore
+          (Closes: #563044)
+    autotools
+        . Added the stat command to the automagically replaced ones, hoping it
+          will help supporting *BSD some day.
+
 version 0.9.6 -- July 21, 2008
     backupninja changes
         . fix bug in cstream definition, thanks Jamie McClelland
diff --git a/debian/changelog b/debian/changelog
@@ -1,3 +1,12 @@
+backupninja (0.9.7-1) UNRELEASED
+
+  * mysql: Quote output filenames to support shell meta-characters in
+    database names. This change was not heavily tested, please
+    report any breakage.
+  * pgsql: Quote output filenames to support shell meta-characters in
+    database names. This change was not heavily tested, please
+    report any breakage.
+
 backupninja (0.9.4-1) UNRELEASED
 
   * duplicity: Old (pre-0.9.4) example.dup file used to give false
diff --git a/README b/README
@@ -182,6 +182,16 @@
 blank by hitting return. The included helper program "ninjahelper"
 will walk you through creating an rdiff-backup configuration, and
 will set up the ssh keys for you.
+
+
+Amazon Simple Storage Service (S3)
+==================================
+
+Duplicity can store backups in Amazon S3 buckets, taking care of encryption.
+Since it performs incremental backups, it minimizes the number of requests per
+operation, thereby reducing costs. The boto Python interface to Amazon Web
+Services is needed to use duplicity with S3 (Debian package: python-boto).
+
+
 INSTALLATION
 ============
diff --git a/TODO b/TODO
@@ -5,17 +5,14 @@ you are working on it!
 . Fix all bugs reported on the Debian BTS:
   http://bugs.debian.org/cgi-bin/pkgreport.cgi?which=pkg&data=backupninja&archive=no
 
-. Fix all bugs reported on our Trac:
-  https://code.autistici.org/trac/backupninja/report/3
+. Fix all bugs reported on our Redmine instance:
+  https://labs.riseup.net/code/projects/backupninja/issues
 
 . Make ninjahelper allow you to pick what type of backup you want (instead
   of just assuming you want local-to-remote, or push backups. Some people
   want local-to-local, or remote-to-local, or pull backups). This has been
   reported for the duplicity handler as Debian bug #346040.
 
-. Make it so backupninja can be run as a regular user, instead of requiring
-  root
-
 . Allow vsnames "all" in the mysql handler.
 
 . Factorize the rdiff.helper's connection-related functions into a lib, so
@@ -25,5 +22,3 @@ you are working on it!
 . Add an exclude option to database handlers so you can configure it to
   backup all databases, except for the excluded ones
-
-. 
Consolidate the 'rsnap' and 'rub' handlers into one rsync snapshot handler diff --git a/backupninja.spec.in b/backupninja.spec.in index 917e4de..8a62023 100644 --- a/backupninja.spec.in +++ b/backupninja.spec.in @@ -7,7 +7,7 @@ Version: %{version} Release: 1 License: GPL Group: Applications/System -URL: http://dev.riseup.net/backupninja/ +URL: https://labs.riseup.net/code/projects/show/backupninja Source: %{name}-%{version}.tar.gz Requires: bash, gawk, rdiff-backup, gzip Provides: %{name} diff --git a/configure.in b/configure.in index 26e95be..174399d 100644 --- a/configure.in +++ b/configure.in @@ -36,6 +36,13 @@ if test x$MKTEMPT = "xno"; then AC_MSG_ERROR([mktemp is required]) fi +AC_PATH_PROGS(STAT, stat, "no") +if test x$STAT = "xno"; then + AC_MSG_ERROR([stat is required]) +else + export STAT +fi + AC_CHECK_PROG(ac_cv_have_rpm, rpm, "yes", "no") if test "x$ac_cv_have_rpm" = "xyes"; then rpm --define '_topdir /tmp' > /dev/null 2>&1 diff --git a/etc/backupninja.conf.in b/etc/backupninja.conf.in index f7668a5..de1fbf3 100644 --- a/etc/backupninja.conf.in +++ b/etc/backupninja.conf.in @@ -21,6 +21,10 @@ reportemail = root # even if all modules reported success. (default = yes) reportsuccess = yes +# if set to 'yes', info messages from handlers will be +# sent into the email (default = no) +reportinfo = no + # if set to 'yes', a report email will be generated # even if there was no error. (default = yes) reportwarning = yes diff --git a/examples/example.dup b/examples/example.dup index 830a47d..ea4d66e 100644 --- a/examples/example.dup +++ b/examples/example.dup @@ -1,16 +1,34 @@ +## This is an example duplicity configuration file. +## +## Here you can find all the possible duplicity options, details of +## what the options provide and possible settings. The defaults are set +## as the commented out option, uncomment and change when +## necessary. Options which are uncommented in this example do not have +## defaults, and the settings provided are recommended. + +## passed directly to duplicity, e.g. to increase verbosity set this to: +## options = --verbosity 8 +## when using the Amazon S3 backend to create buckets in Europe: +## options = --s3-european-buckets --s3-use-new-style +## +## Default: +# options = -# passed directly to duplicity -#options = --verbosity 8 - -# default is 0, but set to 19 if you want to lower the priority. -nicelevel = 19 +## default is 0, but set to something like 19 if you want to lower the priority. +## +## Default: +# nicelevel = 0 -# default is yes. set to no to skip the test if the remote host is alive -#testconnect = no +## test the connection? set to no to skip the test if the remote host is alive +## +## Default: +# testconnect = yes -# temporary directory used by duplicity -# (default = /tmp or /usr/tmp, depending on the system) -#tmpdir = /var/tmp/duplicity +## temporary directory used by duplicity, set to some other location if your /tmp is small +## default is either /tmp or /usr/tmp, depending on the system +## +## Default: +# tmpdir = /tmp ###################################################### ## gpg section @@ -35,23 +53,36 @@ nicelevel = 19 [gpg] -# when set to yes, encryptkey variable must be set below; if you want to use -# two different keys for encryption and signing, you must also set the signkey -# variable below. -# default is no, for backwards compatibility with backupninja <= 0.5. -sign = yes - -# ID of the GnuPG public key used for data encryption. -# if not set, symmetric encryption is used, and data signing is not possible. 
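Note: the [gpg] section of example.dup above expects the key named in encryptkey (and signkey) to be present in root's keyring, since the dup handler runs duplicity as root. A minimal sketch for checking that before enabling public-key encryption, using the example key ID 04D9EA79 from the file as a stand-in:

   # sketch only -- replace the example ID with your own key
   keyid=04D9EA79
   gpg --list-keys "$keyid" >/dev/null 2>&1 || echo "no public key $keyid in root's keyring (needed for encryptkey)"
   gpg --list-secret-keys "$keyid" >/dev/null 2>&1 || echo "no secret key $keyid in root's keyring (needed for signkey)"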
-encryptkey = 04D9EA79 - -# ID of the GnuPG private key used for data signing. -# if not set, encryptkey will be used. -#signkey = 04D9EA79 +## when set to yes, encryptkey variable must be set below; if you want to use +## two different keys for encryption and signing, you must also set the signkey +## variable below. +## default is set to no, for backwards compatibility with backupninja <= 0.5. +## +## Default: +# sign = no -# password -# NB: neither quote this, nor should it contain any quotes -password = a_very_complicated_passphrase +## ID of the GnuPG public key used for data encryption. +## if not set, symmetric encryption is used, and data signing is not possible. +## an example setting would be: +## encryptkey = 04D9EA79 +## +## Default: +# encryptkey = + +## ID of the GnuPG private key used for data signing. +## if not set, encryptkey will be used, an example setting would be: +## signkey = 04D9EA79 +## +## Default: +# signkey = + +## password +## NB: neither quote this, nor should it contain any quotes, +## an example setting would be: +## password = a_very_complicated_passphrase +## +## Default: +# password = ###################################################### ## source section @@ -59,23 +90,23 @@ password = a_very_complicated_passphrase [source] -# A few notes about includes and excludes: -# 1. include, exclude and vsinclude statements support globbing with '*' -# 2. Symlinks are not dereferenced. Moreover, an include line whose path -# contains, at any level, a symlink to a directory, will only have the -# symlink backed-up, not the target directory's content. Yes, you have to -# dereference yourself the symlinks, or to use 'mount --bind' instead. -# Example: let's say /home is a symlink to /mnt/crypt/home ; the following -# line will only backup a "/home" symlink ; neither /home/user nor -# /home/user/Mail will be backed-up : -# include = /home/user/Mail -# A workaround is to 'mount --bind /mnt/crypt/home /home' ; another one is to -# write : -# include = /mnt/crypt/home/user/Mail -# 3. All the excludes come after all the includes. The order is not otherwise -# taken into account. - -# files to include in the backup +## A few notes about includes and excludes: +## 1. include, exclude and vsinclude statements support globbing with '*' +## 2. Symlinks are not dereferenced. Moreover, an include line whose path +## contains, at any level, a symlink to a directory, will only have the +## symlink backed-up, not the target directory's content. Yes, you have to +## dereference yourself the symlinks, or to use 'mount --bind' instead. +## Example: let's say /home is a symlink to /mnt/crypt/home ; the following +## line will only backup a "/home" symlink ; neither /home/user nor +## /home/user/Mail will be backed-up : +## include = /home/user/Mail +## A workaround is to 'mount --bind /mnt/crypt/home /home' ; another one is to +## write : +## include = /mnt/crypt/home/user/Mail +## 3. All the excludes come after all the includes. The order is not otherwise +## taken into account. + +## files to include in the backup include = /var/spool/cron/crontabs include = /var/backups include = /etc @@ -86,20 +117,20 @@ include = /usr/local/sbin include = /var/lib/dpkg/status include = /var/lib/dpkg/status-old -# If vservers = yes in /etc/backupninja.conf then the following variables can -# be used: -# vsnames = all | <vserver1> <vserver2> ... (default = all) -# vsinclude = <path> -# vsinclude = <path> -# ... 
-# Any path specified in vsinclude is added to the include list for each vserver -# listed in vsnames (or all if vsnames = all, which is the default). -# -# For example, vsinclude = /home will backup the /home directory in every -# vserver listed in vsnames. If you have 'vsnames = foo bar baz', this -# vsinclude will add to the include list /vservers/foo/home, /vservers/bar/home -# and /vservers/baz/home. -# Vservers paths are derived from $VROOTDIR. +## If vservers = yes in /etc/backupninja.conf then the following variables can +## be used: +## vsnames = all | <vserver1> <vserver2> ... (default = all) +## vsinclude = <path> +## vsinclude = <path> +## ... +## Any path specified in vsinclude is added to the include list for each vserver +## listed in vsnames (or all if vsnames = all, which is the default). +## +## For example, vsinclude = /home will backup the /home directory in every +## vserver listed in vsnames. If you have 'vsnames = foo bar baz', this +## vsinclude will add to the include list /vservers/foo/home, /vservers/bar/home +## and /vservers/baz/home. +## Vservers paths are derived from $VROOTDIR. # files to exclude from the backup exclude = /home/*/.gnupg @@ -110,38 +141,77 @@ exclude = /home/*/.gnupg [dest] -# perform an incremental backup? (default = yes) -# if incremental = no, perform a full backup in order to start a new backup set -#incremental = yes - -# how many days of data to keep ; default is 60 days. -# (you can also use the time format of duplicity) -# 'keep = yes' means : do not delete old data, the remote host will take care of this -#keep = 60 -#keep = yes - -# full destination URL, in duplicity format; if set, desturl overrides -# sshoptions, destdir, desthost and destuser; it also disables testconnect and -# bandwithlimit. For details, see duplicity manpage, section "URL FORMAT". -#desturl = file:///usr/local/backup -#desturl = rsync://user@other.host//var/backup/bla - -# bandwith limit, in kbit/s ; default is 0, i.e. no limit -#bandwidthlimit = 128 +## perform an incremental backup? (default = yes) +## if incremental = no, perform a full backup in order to start a new backup set +## +## Default: +# incremental = yes -# passed directly to ssh, scp (and sftp in duplicity >=0.4.2) -# warning: sftp does not support all scp options, especially -i; as -# a workaround, you can use "-o <SSHOPTION>" -sshoptions = -o IdentityFile=/root/.ssh/id_dsa_duplicity +## how many days of data to keep ; default is 60 days. +## (you can also use the time format of duplicity) +## 'keep = yes' means : do not delete old data, the remote host will take care of this +## +## Default: +# keep = 60 + +## full destination URL, in duplicity format; if set, desturl overrides +## sshoptions, destdir, desthost and destuser; it also disables testconnect and +## bandwithlimit. For details, see duplicity manpage, section "URL FORMAT", some +## examples include: +## desturl = file:///usr/local/backup +## desturl = rsync://user@other.host//var/backup/bla +## desturl = s3+http:// +## the default value of this configuration option is not set: +## +## Default: +# desturl = -# put the backups under this directory -destdir = /backups +## Amazon Web Services Access Key ID and Secret Access Key, needed for backups +## to S3 buckets. +## awsaccesskeyid = YOUR_AWS_ACCESS_KEY_ID +## awssecretaccesskey = YOUR_AWS_SECRET_KEY +## +## Default: +# awsaccesskeyid = +# awssecretaccesskey = -# the machine which will receive the backups -desthost = backuphost +## bandwith limit, in kbit/s ; default is 0, i.e. 
no limit an example +## setting would be: +## bandwidthlimit = 128 +## +## Default: +# bandwidthlimit = 0 + +## passed directly to ssh, scp (and sftp in duplicity >=0.4.2) +## warning: sftp does not support all scp options, especially -i; as +## a workaround, you can use "-o <SSHOPTION>" +## an example setting would be: +## sshoptions = -o IdentityFile=/root/.ssh/id_dsa_duplicity +## +## Default: +# sshoptions = + +## put the backups under this directory, this must be set! +## an example setting would be: +## destdir = /backups +## +## Default: +# destdir = + +## the machine which will receive the backups, this must be set! +## an example setting would be: +## desthost = backuphost +## +## Default: +# desthost = + +## make the files owned by this user +## note: you must be able to ssh backupuser@backhost +## without specifying a password (if type = remote). +## an example setting would be: +## destuser = backupuser +## +## Default: +# destuser = -# make the files owned by this user -# note: you must be able to ssh backupuser@backhost -# without specifying a password (if type = remote). -destuser = backupuser diff --git a/examples/example.maildir b/examples/example.maildir index 8898a78..eba5429 100644 --- a/examples/example.maildir +++ b/examples/example.maildir @@ -50,6 +50,16 @@ desthost = kakapo-pn # arbitrary bash commands on the desthost. destuser = backer +# For alternate ports from the default 22, specify here +destport = 4444 + +# If you need to specify an alternate ssh public key authentication file +# do that here. Default: /root/.ssh/id_rsa +destid_file = /home/backupkeys/.ssh/maildirbackup_id_rsa + +# If you need to specify other ssh configuration options, do that here +sshoptions = "-C -6" + # remove any maildirs from backup which might have been deleted remove = yes diff --git a/examples/example.rdiff b/examples/example.rdiff index 3767f9b..08e8869 100644 --- a/examples/example.rdiff +++ b/examples/example.rdiff @@ -1,16 +1,33 @@ ## ## This is an example rdiff-backup configuration file. -## The defaults are useful in most cases, just make sure -## to configure the destination host and user. +## +## Here you can find all the possible rdiff-backup options, details of +## what the options provide and possible settings. The defaults are set +## as the commented out option, uncomment and change when +## necessary. Options which are uncommented in this example do not have +## defaults, and the settings provided are recommended. +## +## The defaults are useful in most cases, just make sure to configure the +## destination host and user. ## ## passed directly to rdiff-backup -# options = --force +## an example setting would be: +## options = --force +## +## Default: +# options = ## default is 0, but set to 19 if you want to lower the priority. -# nicelevel = 19 +## an example setting would be: +## nicelevel = 19 +## +## Default +# nicelevel = 0 ## default is yes. set to no to skip the test if the remote host is alive +## +## Default: # testconnect = no ## default is not to limit bandwidth. @@ -18,7 +35,11 @@ ## number to set a limit that will never be exceeded, or a positive number ## to set a target average bandwidth use. cstream is required. See cstream's ## -t option for more information. 62500 bytes = 500 Kb (.5 Mb) -# bwlimit = 62500 +## an example setting would be: +## bwlimit = 62500 +## +## Default: +# bwlimit = 0 ## should backupninja ignore the version differences between source and remote ## rdiff-backup? 
(default: no) @@ -28,6 +49,8 @@ ## An example usage could be the remote side has its authorized_keys configured ## with command="rdiff-backup --server" to allow for restricted yet automated ## password-less backups +## +## Default: # ignore_version = no ###################################################### @@ -36,39 +59,42 @@ [source] -# an optional subdirectory below 'directory' (see [dest]) +## an optional subdirectory below 'directory' (see [dest]) label = thishostname -# type can be "local" or "remote" +## type can be "local" or "remote" type = local -# only use if '[source] type = remote' -#host = srchost -#user = srcuser - -# how many days of data to keep -# (you can also use the time format of rdiff-backup, e.g. 6D5h) -# (to keep everything, set this to yes) -#keep = yes -keep = 60 - -# A few notes about includes and excludes: -# 1. include, exclude and vsinclude statements support globbing with '*' -# 2. Symlinks are not dereferenced. Moreover, an include line whose path -# contains, at any level, a symlink to a directory, will only have the -# symlink backed-up, not the target directory's content. Yes, you have to -# dereference yourself the symlinks, or to use 'mount --bind' instead. -# Example: let's say /home is a symlink to /mnt/crypt/home ; the following -# line will only backup a "/home" symlink ; neither /home/user nor -# /home/user/Mail will be backed-up : -# include = /home/user/Mail -# A workaround is to 'mount --bind /mnt/crypt/home /home' ; another one is to -# write : -# include = /mnt/crypt/home/user/Mail -# 3. All the excludes come after all the includes. The order is not otherwise -# taken into account. - -# files to include in the backup +## only use if '[source] type = remote' +# host = srchost +# user = srcuser + +## how many days of data to keep +## (you can also use the time format of rdiff-backup, e.g. 6D5h) +## (to keep everything, set this to yes) +## an example setting would be: +##keep = yes +## +## Default: +# keep = 60 + +## A few notes about includes and excludes: +## 1. include, exclude and vsinclude statements support globbing with '*' +## 2. Symlinks are not dereferenced. Moreover, an include line whose path +## contains, at any level, a symlink to a directory, will only have the +## symlink backed-up, not the target directory's content. Yes, you have to +## dereference yourself the symlinks, or to use 'mount --bind' instead. +## Example: let's say /home is a symlink to /mnt/crypt/home ; the following +## line will only backup a "/home" symlink ; neither /home/user nor +## /home/user/Mail will be backed-up : +## include = /home/user/Mail +## A workaround is to 'mount --bind /mnt/crypt/home /home' ; another one is to +## write : +## include = /mnt/crypt/home/user/Mail +## 3. All the excludes come after all the includes. The order is not otherwise +## taken into account. + +## files to include in the backup include = /var/spool/cron/crontabs include = /var/backups include = /etc @@ -79,23 +105,23 @@ include = /usr/local/sbin include = /var/lib/dpkg/status include = /var/lib/dpkg/status-old -# If vservers = yes in /etc/backupninja.conf then the following variables can -# be used: -# vsnames = all | <vserver1> <vserver2> ... (default = all) -# vsinclude = <path> -# vsinclude = <path> -# ... -# Any path specified in vsinclude is added to the include list for each vserver -# listed in vsnames (or all if vsnames = all, which is the default). -# -# For example, vsinclude = /home will backup the /home directory in every -# vserver listed in vsnames. 
If you have 'vsnames = foo bar baz', this -# vsinclude will add to the include list /vservers/foo/home, /vservers/bar/home -# and /vservers/baz/home. -# Vservers paths are derived from $VROOTDIR. - -# files to exclude from the backup -#exclude = /home/*/.gnupg +## If vservers = yes in /etc/backupninja.conf then the following variables can +## be used: +## vsnames = all | <vserver1> <vserver2> ... (default = all) +## vsinclude = <path> +## vsinclude = <path> +## ... +## Any path specified in vsinclude is added to the include list for each vserver +## listed in vsnames (or all if vsnames = all, which is the default). +## +## For example, vsinclude = /home will backup the /home directory in every +## vserver listed in vsnames. If you have 'vsnames = foo bar baz', this +## vsinclude will add to the include list /vservers/foo/home, /vservers/bar/home +## and /vservers/baz/home. +## Vservers paths are derived from $VROOTDIR. + +## files to exclude from the backup +exclude = /home/*/.gnupg ###################################################### ## destination section @@ -103,18 +129,35 @@ include = /var/lib/dpkg/status-old [dest] -# type can be "local" or "remote" -type = remote - -# put the backups under this directory -directory = /backups - -# the machine which will receive the backups. -# only use if "[dest] type = remote" -host = backuphost +## type can be "local" or "remote", this must be set! +## an example configuration would be: +## type = remote +## +## Default: +# type = + +## put the backups under this directory, this must be set! +## an example setting would be: +## directory = /backups +## +## Default: +# directory = + +## the machine which will receive the backups. +## only use if "[dest] type = remote" +## an example setting would be: +## host = backuphost +## +## Default +# host = + +## make the files owned by this user. you must be able to +## `su -c "ssh backupuser@backhost"` without specifying a password. +## only use if "[dest] type = remote" +## an example setting would be: +## user = backupuser +## +## Default: +# user = -# make the files owned by this user. you must be able to -# `su -c "ssh backupuser@backhost"` without specifying a password. -# only use if "[dest] type = remote" -user = backupuser diff --git a/handlers/dup.helper.in b/handlers/dup.helper.in index c1fbdd5..ae48e4c 100644 --- a/handlers/dup.helper.in +++ b/handlers/dup.helper.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS dup:incremental_encrypted_remote_filesystem_backup" @@ -13,11 +14,11 @@ do_dup_host_includes() { [ -z "$dup_includes" ] && dup_includes="$dup_default_includes" for i in $dup_includes; do formItem include "$i" - done - formItem include "" - formItem include "" - formItem include "" - formDisplay + done + formItem include "" + formItem include "" + formItem include "" + formDisplay [ $? = 0 ] || return 1 dup_includes="$REPLY" done @@ -37,10 +38,10 @@ do_dup_vserver() { [ -z "$dup_vsincludes" ] && dup_vsincludes="$dup_default_includes" for i in $dup_vsincludes; do formItem include "$i" - done - formItem include "" - formItem include "" - formItem include "" + done + formItem include "" + formItem include "" + formItem include "" formDisplay [ $? = 0 ] || return 1 dup_vsincludes="$REPLY" @@ -69,26 +70,26 @@ do_dup_src() { [ $? = 0 ] || return 1 case $host_or_vservers in 'host') - do_dup_host_includes - [ $? = 0 ] || return 1 - ;; + do_dup_host_includes + [ $? 
= 0 ] || return 1 + ;; 'vservers') - do_dup_vserver - [ $? = 0 ] || return 1 - ;; + do_dup_vserver + [ $? = 0 ] || return 1 + ;; 'both') - do_dup_host_includes - [ $? = 0 ] || return 1 - do_dup_vserver - [ $? = 0 ] || return 1 - ;; + do_dup_host_includes + [ $? = 0 ] || return 1 + do_dup_vserver + [ $? = 0 ] || return 1 + ;; *) - return 1 - ;; + return 1 + ;; esac do_dup_excludes [ $? = 0 ] || return 1 - + _src_done="(DONE)" setDefault dest } @@ -102,13 +103,13 @@ do_dup_dest() { REPLY= while [ -z "$REPLY" -o -z "$dup_destdir" -o -z "$dup_desthost" -o -z "$dup_destuser" ]; do formBegin "$dup_title - destination: first three items are compulsory" - formItem "desthost" "$dup_desthost" - formItem "destuser" "$dup_destuser" - formItem "destdir" "$dup_destdir" - formItem "keep" "$dup_keep" + formItem "desthost" "$dup_desthost" + formItem "destuser" "$dup_destuser" + formItem "destdir" "$dup_destdir" + formItem "keep" "$dup_keep" formItem "incremental" "$dup_incremental" - formItem "bandwidthlimit" "$dup_bandwidth" - formItem "sshoptions" "$dup_sshoptions" + formItem "bandwidthlimit" "$dup_bandwidth" + formItem "sshoptions" "$dup_sshoptions" formDisplay [ $? = 0 ] || return 1 @@ -117,7 +118,7 @@ do_dup_dest() { IFS=$':' thereply=($replyconverted) IFS=$' \t\n' - + dup_desthost=${thereply[0]} dup_destuser=${thereply[1]} dup_destdir=${thereply[2]} @@ -165,9 +166,9 @@ do_dup_gpg_signkey() { # signkey ? REPLY= while [ -z "$REPLY" -o -z "$dup_gpg_signkey" ]; do - inputBox "$dup_title - GnuPG" "Enter the ID of the private GnuPG key to be used to sign the backups:" "$dup_gpg_signkey" - [ $? = 0 ] || return 1 - dup_gpg_signkey="$REPLY" + inputBox "$dup_title - GnuPG" "Enter the ID of the private GnuPG key to be used to sign the backups:" "$dup_gpg_signkey" + [ $? = 0 ] || return 1 + dup_gpg_signkey="$REPLY" done fi } @@ -183,7 +184,7 @@ do_dup_gpg_passphrase() { } do_dup_gpg() { - + # symmetric or public key encryption ? booleanBox "$dup_title - GnuPG" "Use public key encryption? Otherwise, symmetric encryption will be used, and data signing will be impossible." "$dup_gpg_asymmetric_encryption" if [ $? = 0 ]; then @@ -197,7 +198,7 @@ do_dup_gpg() { do_dup_gpg_encryptkey ; [ $? = 0 ] || return 1 do_dup_gpg_sign ; [ $? = 0 ] || return 1 if [ "$dup_gpg_sign" == yes ]; then - do_dup_gpg_signkey ; [ $? = 0 ] || return 1 + do_dup_gpg_signkey ; [ $? = 0 ] || return 1 fi else dup_gpg_sign=no @@ -334,7 +335,7 @@ EOF if [ "$host_or_vservers" == host -o "$host_or_vservers" == both ]; then set -o noglob for i in $dup_includes; do - echo "include = $i" >> $next_filename + echo "include = $i" >> $next_filename done set +o noglob fi @@ -402,6 +403,12 @@ keep = $dup_keep # bandwithlimit. For details, see duplicity manpage, section "URL FORMAT". #desturl = file:///usr/local/backup #desturl = rsync://user@other.host//var/backup/bla +#desturl = s3+http://your_bucket + +# Amazon Web Services Access Key ID and Secret Access Key, needed for backups +# to S3 buckets. +#awsaccesskeyid = YOUR_AWS_ACCESS_KEY_ID +#awssecretaccesskey = YOUR_AWS_SECRET_KEY # bandwith limit, in kbit/s ; default is 0, i.e. 
no limit #bandwidthlimit = 128 @@ -432,49 +439,49 @@ EOF dup_main_menu() { - while true; do - srcitem="choose files to include & exclude $_src_done" - destitem="configure backup destination $_dest_done" - gpgitem="configure GnuPG encryption/signing $_gpg_done" - conitem="set up ssh keys and test remote connection $_con_done" - advitem="edit advanced settings $_adv_done" - # TODO: add the following to the menu when do_dup_conn is written - # conn "$conitem" \ - menuBox "$dup_title" "choose a step:" \ - src "$srcitem" \ - dest "$destitem" \ - gpg "$gpgitem" \ - adv "$advitem" \ - finish "finish and create config file" - [ $? = 0 ] || return 1 - result="$REPLY" - - case "$result" in - "src") do_dup_src;; - "dest") do_dup_dest;; - "gpg") do_dup_gpg;; - # TODO: enable the following when do_dup_conn is written - # "conn") do_dup_conn;; - "adv") do_dup_adv;; - "finish") - if [[ "$_dest_done$_gpg_done$_src_done" != "(DONE)(DONE)(DONE)" ]]; then - # TODO: replace the previous test by the following when do_dup_conn is written - # if [[ "$_con_done$_dest_done$_gpg_done$_src_done" != "(DONE)(DONE)(DONE)(DONE)" ]]; then - msgBox "$dup_title" "You cannot create the configuration file until the four first steps are completed." - else - do_dup_finish - break - fi - ;; - esac - - done + while true; do + srcitem="choose files to include & exclude $_src_done" + destitem="configure backup destination $_dest_done" + gpgitem="configure GnuPG encryption/signing $_gpg_done" + conitem="set up ssh keys and test remote connection $_con_done" + advitem="edit advanced settings $_adv_done" + # TODO: add the following to the menu when do_dup_conn is written + # conn "$conitem" \ + menuBox "$dup_title" "choose a step:" \ + src "$srcitem" \ + dest "$destitem" \ + gpg "$gpgitem" \ + adv "$advitem" \ + finish "finish and create config file" + [ $? = 0 ] || return 1 + result="$REPLY" + + case "$result" in + "src") do_dup_src;; + "dest") do_dup_dest;; + "gpg") do_dup_gpg;; + # TODO: enable the following when do_dup_conn is written + # "conn") do_dup_conn;; + "adv") do_dup_adv;; + "finish") + if [[ "$_dest_done$_gpg_done$_src_done" != "(DONE)(DONE)(DONE)" ]]; then + # TODO: replace the previous test by the following when do_dup_conn is written + # if [[ "$_con_done$_dest_done$_gpg_done$_src_done" != "(DONE)(DONE)(DONE)(DONE)" ]]; then + msgBox "$dup_title" "You cannot create the configuration file until the four first steps are completed." 
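Note: the S3 settings added to the generated dup config above (desturl = s3+http://..., awsaccesskeyid, awssecretaccesskey) are consumed by handlers/dup.in further down, where they come down to duplicity's boto backend reading two environment variables. A rough sketch of that flow, with placeholder credentials and bucket name:

   # rough sketch with placeholder values -- the real logic lives in handlers/dup.in
   desturl="s3+http://your_bucket"
   awsaccesskeyid="YOUR_AWS_ACCESS_KEY_ID"
   awssecretaccesskey="YOUR_AWS_SECRET_KEY"
   if [ "`echo $desturl | awk -F ':' '{print $1}'`" == "s3+http" ]; then
      export AWS_ACCESS_KEY_ID="$awsaccesskeyid"
      export AWS_SECRET_ACCESS_KEY="$awssecretaccesskey"
   fi
   # boto picks the credentials up from the environment
   duplicity --s3-use-new-style /var/backups "$desturl"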
+ else + do_dup_finish + break + fi + ;; + esac + + done } ### Main function dup_wizard() { - + require_packages duplicity # Global variables @@ -507,7 +514,7 @@ dup_wizard() { # Global variables whose '*' shall not be expanded set -o noglob dup_default_includes="/var/spool/cron/crontabs /var/backups /etc /root /home /usr/local/*bin /var/lib/dpkg/status*" - dup_default_excludes="/home/*/.gnupg /home/*/.gnupg /home/*/.local/share/Trash /home/*/.Trash /home/*/.thumbnails /home/*/.beagle /home/*/.aMule /home/*/gtk-gnutella-downloads" + dup_default_excludes="/home/*/.gnupg /home/*/.local/share/Trash /home/*/.Trash /home/*/.thumbnails /home/*/.beagle /home/*/.aMule /home/*/gtk-gnutella-downloads" set +o noglob dup_main_menu diff --git a/handlers/dup.in b/handlers/dup.in index aed6030..ffae48c 100644 --- a/handlers/dup.in +++ b/handlers/dup.in @@ -26,6 +26,8 @@ setsection dest getconf incremental yes getconf keep 60 getconf desturl +getconf awsaccesskeyid +getconf awssecretaccesskey getconf sshoptions getconf bandwidthlimit 0 getconf desthost @@ -38,6 +40,9 @@ destdir=${destdir%/} [ -n "$desturl" -o -n "$destdir" ] || fatal "The destination directory (destdir) must be set when desturl is not used." [ -n "$include" -o -n "$vsinclude" ] || fatal "No source includes specified" [ -n "$password" ] || fatal "The password option must be set." +if [ "`echo $desturl | @AWK@ -F ':' '{print $1}'`" == "s3+http" ]; then + [ -n "$awsaccesskeyid" -a -n "$awssecretaccesskey" ] || fatal "AWS access keys must be set for S3 backups." +fi ### VServers # If vservers are configured, check that the ones listed in $vsnames do exist. @@ -106,8 +111,8 @@ duplicity_sub="`echo $duplicity_version | @AWK@ -F '.' '{print $3}'`" # --sftp-command ourselves scpoptions="$sshoptions" -if [ "$bandwidthlimit" =! 0 ]; then - [ -z "$testurl" ] || warning 'The bandwidthlimit option is not used when desturl is set.' +if [ "$bandwidthlimit" != 0 ]; then + [ -z "$desturl" ] || warning 'The bandwidthlimit option is not used when desturl is set.' scpoptions="$scpoptions -l $bandwidthlimit" fi @@ -189,26 +194,35 @@ fi set -o noglob # excludes +SAVEIFS=$IFS +IFS=$(echo -en "\n\b") for i in $exclude; do str="${i//__star__/*}" execstr_source="${execstr_source} --exclude '$str'" done +IFS=$SAVEIFS # includes +SAVEIFS=$IFS +IFS=$(echo -en "\n\b") for i in $include; do [ "$i" != "/" ] || fatal "Sorry, you cannot use 'include = /'" str="${i//__star__/*}" execstr_source="${execstr_source} --include '$str'" done +IFS=$SAVEIFS # vsincludes if [ $usevserver = yes ]; then for vserver in $vsnames; do + SAVEIFS=$IFS + IFS=$(echo -en "\n\b") for vi in $vsinclude; do str="${vi//__star__/*}" str="$VROOTDIR/$vserver$str" execstr_source="${execstr_source} --include '$str'" done + IFS=$SAVEIFS done fi @@ -218,6 +232,12 @@ set +o noglob execstr_source=${execstr_source//\\*/\\\\\\*} +### If desturl is an S3 URL export the AWS environment variables +if [ "`echo $desturl | @AWK@ -F ':' '{print $1}'`" == "s3+http" ]; then + export AWS_ACCESS_KEY_ID="$awsaccesskeyid" + export AWS_SECRET_ACCESS_KEY="$awssecretaccesskey" +fi + ### Cleanup commands (duplicity >= 0.4.4) # cleanup @@ -263,18 +283,23 @@ fi ### Backup command debug "$precmd duplicity $execstr_command $execstr_options $execstr_source --exclude '**' / $execstr_serverpart" if [ ! 
$test ]; then + outputfile=`maketemp backupout` export PASSPHRASE=$password output=`nice -n $nicelevel \ su -c \ - "$precmd duplicity $execstr_command $execstr_options $execstr_source --exclude '**' / $execstr_serverpart 2>&1"` + "$precmd duplicity $execstr_command $execstr_options $execstr_source --exclude '**' / $execstr_serverpart >$outputfile 2>&1"` exit_code=$? + debug $output + cat $outputfile | (while read output ; do + info $output + done + ) if [ $exit_code -eq 0 ]; then - debug $output info "Duplicity finished successfully." else - debug $output fatal "Duplicity failed." fi + rm $outputfile fi return 0 diff --git a/handlers/ldap.helper.in b/handlers/ldap.helper.in index 9251826..4154cc6 100644 --- a/handlers/ldap.helper.in +++ b/handlers/ldap.helper.in @@ -1,9 +1,10 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS ldap:ldap_database_backup" ldap_create_file() { -while true; do + while true; do checkBox "ldap action wizard" "check options (slapcat OR ldapsearch)" \ "slapcat" "export ldif using slapcat" yes \ "ldapsearch" "export ldif using ldapsearch" no \ @@ -22,21 +23,21 @@ while true; do result="$REPLY" for opt in $result; do case $opt in - '"compress"') compress="compress = yes";; - '"slapcat"') - method="method = slapcat" - [ "$_RESTART" == "yes" ] && restart="restart = yes" - ;; - '"ldapsearch"') - method="method = ldapsearch" - inputBox "ldap action wizard" "ldapsearch requires authentication. Specify here what password file to use. It must have the password with no trailing return and it should not be world readable." - [ $? = 1 ] && return - passwordfile="passwordfile = $REPLY" - inputBox "ldap action wizard" "ldapsearch requires authentication. Specify here what DN to bind as:" - [ $? = 1 ] && return - binddn="binddn = $REPLY" - require_packages ldap-utils - ;; + '"compress"') compress="compress = yes";; + '"slapcat"') + method="method = slapcat" + [ "$_RESTART" == "yes" ] && restart="restart = yes" + ;; + '"ldapsearch"') + method="method = ldapsearch" + inputBox "ldap action wizard" "ldapsearch requires authentication. Specify here what password file to use. It must have the password with no trailing return and it should not be world readable." + [ $? = 1 ] && return + passwordfile="passwordfile = $REPLY" + inputBox "ldap action wizard" "ldapsearch requires authentication. Specify here what DN to bind as:" + [ $? = 1 ] && return + binddn="binddn = $REPLY" + require_packages ldap-utils + ;; '"ssl"') ssl="ssl = yes";; '"tls"') tls="tls = yes";; esac @@ -54,9 +55,9 @@ $tls # conf = /etc/ldap/slapd.conf # databases = all EOF - chmod 600 $next_filename - return -done + chmod 600 $next_filename + return + done } ldap_wizard() { @@ -71,7 +72,7 @@ ldap_wizard() { elif [ "$backend" == "ldbm" ]; then ldbm=yes fi - done + done if [ "$bdb" == "yes" -o "$hdb" == "yes" ]; then if [ "$ldbm" == "no" ]; then @@ -80,11 +81,11 @@ ldap_wizard() { ldap_create_file fi elif [ "$ldbm" == "yes" ]; then - msgBox "ldap action wizard" "It looks like the backend in your slapd.conf is set to LDBM. Because of this, you will have less options (because it is not safe to use slapcat while slapd is running LDBM)." - _RESTART=yes - ldap_create_file + msgBox "ldap action wizard" "It looks like the backend in your slapd.conf is set to LDBM. Because of this, you will have less options (because it is not safe to use slapcat while slapd is running LDBM)." 
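Note: the ldap wizard above picks a dump method based on the backends it finds in slapd.conf, because slapcat is only safe to run against an LDBM database while slapd is stopped. A simplified sketch of that check, assuming the usual /etc/ldap/slapd.conf location:

   # simplified sketch -- ldap.helper.in drives this through its dialog wizard
   conf=/etc/ldap/slapd.conf
   if awk 'tolower($1) == "database" {print $2}' "$conf" | grep -qw ldbm; then
      echo "LDBM backend found: stop slapd before running slapcat"
   fi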
+ _RESTART=yes + ldap_create_file else - msgBox "ldap action wizard" "I couldn't find any supported backend in your slapd.conf. Bailing out." - return + msgBox "ldap action wizard" "I couldn't find any supported backend in your slapd.conf. Bailing out." + return fi } diff --git a/handlers/ldap.in b/handlers/ldap.in index 8ff1ccf..fda24d0 100644 --- a/handlers/ldap.in +++ b/handlers/ldap.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # openldap backup handler script for backupninja # @@ -35,25 +36,25 @@ dbsuffixes=(`@AWK@ 'BEGIN {OFS=":"} /[:space:]*^database[:space:]*\w*/ {db=$2}; if [ "$ldif" == "yes" ]; then dumpdir="$backupdir" [ -d $dumpdir ] || mkdir -p $dumpdir - + if [ "$databases" == 'all' ]; then dbcount=`grep '^database' $conf | wc -l` let "dbcount = dbcount - 1" databases=`seq 0 $dbcount`; - fi - + fi + for db in $databases; do if [ `expr index "$db" "="` == "0" ]; then - # db is a number, get the suffix. + # db is a number, get the suffix. dbsuffix=${dbsuffixes[$db]/*:/} else dbsuffix=$db fi - # some databases don't have suffix (like monitor), skip these + # some databases don't have suffix (like monitor), skip these if [ "$dbsuffix" == "" ]; then continue; fi - + if [ "$method" == "slapcat" ]; then execstr="$SLAPCAT -f $conf -b $dbsuffix" else @@ -74,23 +75,23 @@ if [ "$ldif" == "yes" ]; then debug "Shutting down ldap server..." /etc/init.d/slapd stop fi - - ext= - if [ "$compress" == "yes" ]; then - ext=".gz" - fi + + ext= + if [ "$compress" == "yes" ]; then + ext=".gz" + fi touch $dumpdir/$dbsuffix.ldif$ext if [ ! -f $dumpdir/$dbsuffix.ldif$ext ]; then fatal "Couldn't create ldif dump file: $dumpdir/$dbsuffix.ldif$ext" fi - + if [ "$compress" == "yes" ]; then - execstr="$execstr | $GZIP > $dumpdir/$dbsuffix.ldif.gz" + execstr="$execstr | $GZIP --rsyncable > $dumpdir/$dbsuffix.ldif.gz" else execstr="$execstr > $dumpdir/$dbsuffix.ldif" fi debug "$execstr" - output=`su root -c "$execstr" 2>&1` + output=`su root -c "set -o pipefail ; $execstr" 2>&1` code=$? if [ "$code" == "0" ]; then debug $output @@ -99,13 +100,13 @@ if [ "$ldif" == "yes" ]; then warning $output warning "Failed ldif export of $dbsuffix" fi - + if [ "$restart" == "yes" ]; then debug "Starting ldap server..." /etc/init.d/slapd start fi fi - done + done fi return 0 diff --git a/handlers/maildir.in b/handlers/maildir.in index 3514153..44959aa 100644 --- a/handlers/maildir.in +++ b/handlers/maildir.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: ############################################################### # @@ -6,17 +7,17 @@ # to a remote server. It is designed to be run with low overhead # in terms of cpu and bandwidth so it runs pretty slow. # Hardlinking is used to save storage space. -# +# # This handler expects that your maildir directory structure is # either one of the following: -# -# 1. /$srcdir/[a-zA-Z0-9]/$user for example: +# +# 1. /$srcdir/[a-zA-Z0-9]/$user for example: # /var/maildir/a/anarchist # /var/maildir/a/arthur # ... # /var/maildir/Z/Zaphod # /var/maildir/Z/Zebra -# +# # 2. or the following: # /var/maildir/domain.org/user1 # /var/maildir/domain.org/user2 @@ -25,7 +26,7 @@ # /var/maildir/anotherdomain.org/user2 # ... 
# -# if the configuration is setup to have keepdaily at 3, +# if the configuration is setup to have keepdaily at 3, # keepweekly is 2, and keepmonthly is 1, then each user's # maildir backup snapshot directory will contain these files: # daily.1 @@ -41,15 +42,15 @@ # We handle each maildir individually because it becomes very # unweldy to hardlink and rsync many hundreds of thousands # of files at once. It is much faster to take on smaller -# chunks at a time. +# chunks at a time. # -# For the backup rotation to work, destuser must be able to run +# For the backup rotation to work, destuser must be able to run # arbitrary bash commands on the desthost. # # Any maildir which is deleted from the source will be moved to -# "deleted" directory in the destination. It is up to you to +# "deleted" directory in the destination. It is up to you to # periodically remove this directory or old maildirs in it. -# +# ############################################################## getconf rotate yes @@ -68,6 +69,7 @@ getconf desthost getconf destport 22 getconf destuser getconf destid_file /root/.ssh/id_rsa +getconf sshoptions getconf multiconnection notset @@ -84,40 +86,40 @@ if [ $test ]; then testflags="--dry-run -v" fi -rsyncflags="$testflags -e 'ssh -p $destport -i $destid_file' -r -v --ignore-existing --delete --size-only --bwlimit=$speedlimit" +rsyncflags="$testflags -e 'ssh -p $destport -i $destid_file $sshoptions' -r -v --ignore-existing --delete --size-only --bwlimit=$speedlimit" excludes="--exclude '.Trash/\*' --exclude '.Mistakes/\*' --exclude '.Spam/\*'" ################################################################## ### FUNCTIONS function do_user() { - local user=$1 - local btype=$2 - local userdir=${3%/} - local source="$srcdir/$userdir/$user/" - local target="$destdir/$userdir/$user/$btype.1" - if [ ! -d $source ]; then - warning "maildir $source not found" - return - fi - - debug "syncing" - ret=`$RSYNC -e "ssh -p $destport -i $destid_file" -r \ - --links --ignore-existing --delete --size-only --bwlimit=$speedlimit \ - --exclude '.Trash/*' --exclude '.Mistakes/*' --exclude '.Spam/*' \ - $source $destuser@$desthost:$target \ - 2>&1` - ret=$? - # ignore 0 (success) and 24 (file vanished before it could be copied) - if [ $ret != 0 -a $ret != 24 ]; then - warning "rsync $user failed" - warning " returned: $ret" - let "failedcount = failedcount + 1" - if [ $failedcount -gt 100 ]; then - fatal "100 rsync errors -- something is not working right. bailing out." - fi - fi - ssh -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file "date +%c%n%s > $target/created" + local user=$1 + local btype=$2 + local userdir=${3%/} + local source="$srcdir/$userdir/$user/" + local target="$destdir/$userdir/$user/$btype.1" + if [ ! -d $source ]; then + warning "maildir $source not found" + return + fi + + debug "syncing" + ret=`$RSYNC -e "ssh -p $destport -i $destid_file $sshoptions" -r \ + --links --ignore-existing --delete --size-only --bwlimit=$speedlimit \ + --exclude '.Trash/*' --exclude '.Mistakes/*' --exclude '.Spam/*' \ + $source $destuser@$desthost:$target \ + 2>&1` + ret=$? + # ignore 0 (success) and 24 (file vanished before it could be copied) + if [ $ret != 0 -a $ret != 24 ]; then + warning "rsync $user failed" + warning " returned: $ret" + let "failedcount = failedcount + 1" + if [ $failedcount -gt 100 ]; then + fatal "100 rsync errors -- something is not working right. bailing out." 
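Note: the exit-code test above is the important convention in do_user: rsync returns 24 when source files vanish during the transfer, which is routine for live maildirs, so only other non-zero codes count toward the 100-failure limit. A stripped-down illustration with placeholder host, user and paths:

   # stripped-down illustration -- host, user and paths are placeholders
   rsync -r --ignore-existing --delete --size-only \
      /var/maildir/a/alice/ backupuser@backuphost:/backups/a/alice/daily.1
   ret=$?
   if [ $ret != 0 -a $ret != 24 ]; then
      echo "rsync failed with exit code $ret" >&2
   fi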
+ fi + fi + ssh -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file $sshoptions "date +%c%n%s > $target/created" } # remove any maildirs from backup which might have been deleted @@ -125,6 +127,7 @@ function do_user() { # (actually, it just moved them to the directory "deleted") function do_remove() { +<<<<<<< HEAD local tmp1=`maketemp maildir-tmp-file` local tmp2=`maketemp maildir-tmp-file` @@ -142,99 +145,118 @@ function do_remove() { done rm $tmp1 rm $tmp2 +======= + local tmp1=`maketemp maildir-tmp-file` + local tmp2=`maketemp maildir-tmp-file` + + ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost mkdir -p "$destdir/deleted" + cd "$srcdir" + for userdir in `ls -d1 */`; do + ls -1 "$srcdir/$userdir" | sort > $tmp1 + ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost ls -1 "$destdir/$userdir" | sort > $tmp2 + for deluser in `join -v 2 $tmp1 $tmp2`; do + [ "$deluser" != "" ] || continue + info "removing $destuser@$desthost:$destdir/$userdir$deluser/" + ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost mv "$destdir/$userdir$deluser/" "$destdir/deleted" + ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost "date +%c%n%s > '$destdir/deleted/$deluser/deleted_on'" + done + done + rm $tmp1 + rm $tmp2 +>>>>>>> master } function do_rotate() { - [ "$rotate" == "yes" ] || return; - local user=$1 - local userdir=${2%/} - local backuproot="$destdir/$userdir/$user" + [ "$rotate" == "yes" ] || return; + local user=$1 + local userdir=${2%/} + local backuproot="$destdir/$userdir/$user" ( - ssh -T -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file <<EOF + ssh -T -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file $sshoptions <<EOF ##### BEGIN REMOTE SCRIPT ##### - seconds_daily=86400 - seconds_weekly=604800 - seconds_monthly=2628000 - keepdaily=$keepdaily - keepweekly=$keepweekly - keepmonthly=$keepmonthly - now=\`date +%s\` - - if [ ! -d "$backuproot" ]; then - echo "Debug: skipping rotate of $user. $backuproot doesn't exist." - exit - fi - for rottype in daily weekly monthly; do - seconds=\$((seconds_\${rottype})) - - dir="$backuproot/\$rottype" - if [ ! -d \$dir.1 ]; then - echo "Debug: \$dir.1 does not exist, skipping." - continue 1 - elif [ ! -f \$dir.1/created ]; then - echo "Warning: \$dir.1/created does not exist. This backup may be only partially completed. Skipping rotation." - continue 1 - fi - - # Rotate the current list of backups, if we can. - oldest=\`find $backuproot -maxdepth 1 -type d -name \$rottype'.*' | @SED@ 's/^.*\.//' | sort -n | tail -1\` - #echo "Debug: oldest \$oldest" - [ "\$oldest" == "" ] && oldest=0 - for (( i=\$oldest; i > 0; i-- )); do - if [ -d \$dir.\$i ]; then - if [ -f \$dir.\$i/created ]; then - created=\`tail -1 \$dir.\$i/created\` - else - created=0 - fi - cutoff_time=\$(( now - (seconds*(i-1)) )) - if [ ! \$created -gt \$cutoff_time ]; then - next=\$(( i + 1 )) - if [ ! -d \$dir.\$next ]; then - echo "Debug: \$rottype.\$i --> \$rottype.\$next" - mv \$dir.\$i \$dir.\$next - date +%c%n%s > \$dir.\$next/rotated - else - echo "Debug: skipping rotation of \$dir.\$i because \$dir.\$next already exists." - fi - else - echo "Debug: skipping rotation of \$dir.\$i because it was created" \$(( (now-created)/86400)) "days ago ("\$(( (now-cutoff_time)/86400))" needed)." - fi - fi - done - done - - max=\$((keepdaily+1)) - if [ \( \$keepweekly -gt 0 -a -d $backuproot/daily.\$max \) -a ! 
-d $backuproot/weekly.1 ]; then - echo "Debug: daily.\$max --> weekly.1" - mv $backuproot/daily.\$max $backuproot/weekly.1 - date +%c%n%s > $backuproot/weekly.1/rotated - fi - - max=\$((keepweekly+1)) - if [ \( \$keepmonthly -gt 0 -a -d $backuproot/weekly.\$max \) -a ! -d $backuproot/monthly.1 ]; then - echo "Debug: weekly.\$max --> monthly.1" - mv $backuproot/weekly.\$max $backuproot/monthly.1 - date +%c%n%s > $backuproot/monthly.1/rotated - fi - - for rottype in daily weekly monthly; do - max=\$((keep\${rottype}+1)) - dir="$backuproot/\$rottype" - oldest=\`find $backuproot -maxdepth 1 -type d -name \$rottype'.*' | @SED@ 's/^.*\.//' | sort -n | tail -1\` - [ "\$oldest" == "" ] && oldest=0 - # if we've rotated the last backup off the stack, remove it. - for (( i=\$oldest; i >= \$max; i-- )); do - if [ -d \$dir.\$i ]; then - if [ -d $backuproot/rotate.tmp ]; then - echo "Debug: removing rotate.tmp" - rm -rf $backuproot/rotate.tmp - fi - echo "Debug: moving \$rottype.\$i to rotate.tmp" - mv \$dir.\$i $backuproot/rotate.tmp - fi - done - done + seconds_daily=86400 + seconds_weekly=604800 + seconds_monthly=2628000 + keepdaily=$keepdaily + keepweekly=$keepweekly + keepmonthly=$keepmonthly + now=\`date +%s\` + + if [ ! -d "$backuproot" ]; then + echo "Debug: skipping rotate of $user. $backuproot doesn't exist." + exit + fi + for rottype in daily weekly monthly; do + seconds=\$((seconds_\${rottype})) + + dir="$backuproot/\$rottype" + if [ ! -d \$dir.1 ]; then + echo "Debug: \$dir.1 does not exist, skipping." + continue 1 + elif [ ! -f \$dir.1/created ]; then + echo "Warning: \$dir.1/created does not exist. This backup may be only partially completed. Skipping rotation." + continue 1 + fi + + # Rotate the current list of backups, if we can. + oldest=\`find $backuproot -maxdepth 1 -type d -name \$rottype'.*' | @SED@ 's/^.*\.//' | sort -n | tail -1\` + #echo "Debug: oldest \$oldest" + [ "\$oldest" == "" ] && oldest=0 + for (( i=\$oldest; i > 0; i-- )); do + if [ -d \$dir.\$i ]; then + if [ -f \$dir.\$i/created ]; then + created=\`tail -1 \$dir.\$i/created\` + else + created=0 + fi + cutoff_time=\$(( now - (seconds*(i-1)) )) + if [ ! \$created -gt \$cutoff_time ]; then + next=\$(( i + 1 )) + if [ ! -d \$dir.\$next ]; then + echo "Debug: \$rottype.\$i --> \$rottype.\$next" + mv \$dir.\$i \$dir.\$next + date +%c%n%s > \$dir.\$next/rotated + else + echo "Debug: skipping rotation of \$dir.\$i because \$dir.\$next already exists." + fi + else + echo "Debug: skipping rotation of \$dir.\$i because it was created" \$(( (now-created)/86400)) "days ago ("\$(( (now-cutoff_time)/86400))" needed)." + fi + fi + done + done + + max=\$((keepdaily+1)) + if [ \( \$keepweekly -gt 0 -a -d $backuproot/daily.\$max \) -a ! -d $backuproot/weekly.1 ]; then + echo "Debug: daily.\$max --> weekly.1" + mv $backuproot/daily.\$max $backuproot/weekly.1 + date +%c%n%s > $backuproot/weekly.1/rotated + fi + + max=\$((keepweekly+1)) + if [ \( \$keepmonthly -gt 0 -a -d $backuproot/weekly.\$max \) -a ! -d $backuproot/monthly.1 ]; then + echo "Debug: weekly.\$max --> monthly.1" + mv $backuproot/weekly.\$max $backuproot/monthly.1 + date +%c%n%s > $backuproot/monthly.1/rotated + fi + + for rottype in daily weekly monthly; do + max=\$((keep\${rottype}+1)) + dir="$backuproot/\$rottype" + oldest=\`find $backuproot -maxdepth 1 -type d -name \$rottype'.*' | @SED@ 's/^.*\.//' | sort -n | tail -1\` + [ "\$oldest" == "" ] && oldest=0 + # if we've rotated the last backup off the stack, remove it. 
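Note: the rotation logic above only promotes daily.i to daily.(i+1) once the snapshot is at least (i-1) periods old, as recorded in its created timestamp. A small worked example of that cutoff arithmetic with invented values:

   # worked example of the cutoff test -- timestamps invented for illustration
   seconds_daily=86400
   i=2
   now=`date +%s`
   created=$(( now - 3*86400 ))                  # pretend daily.2 was created 3 days ago
   cutoff_time=$(( now - seconds_daily*(i-1) ))  # older than 1 day means it may rotate
   if [ ! $created -gt $cutoff_time ]; then
      echo "daily.$i is due to rotate to daily.$((i+1))"
   fi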
+ for (( i=\$oldest; i >= \$max; i-- )); do + if [ -d \$dir.\$i ]; then + if [ -d $backuproot/rotate.tmp ]; then + echo "Debug: removing rotate.tmp" + rm -rf $backuproot/rotate.tmp + fi + echo "Debug: moving \$rottype.\$i to rotate.tmp" + mv \$dir.\$i $backuproot/rotate.tmp + fi + done + done ####### END REMOTE SCRIPT ####### EOF ) | (while read a; do passthru $a; done) @@ -243,128 +265,127 @@ EOF function setup_remote_dirs() { - local user=$1 - local backuptype=$2 - local userdir=${3%/} - local dir="$destdir/$userdir/$user/$backuptype" - local tmpdir="$destdir/$userdir/$user/rotate.tmp" + local user=$1 + local backuptype=$2 + local userdir=${3%/} + local dir="$destdir/$userdir/$user/$backuptype" + local tmpdir="$destdir/$userdir/$user/rotate.tmp" ( - ssh -T -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file <<EOF - if [ ! -d $destdir ]; then - echo "Fatal: Destination directory $destdir does not exist on host $desthost." - exit 1 - elif [ -d $dir.1 ]; then - if [ -f $dir.1/created ]; then - echo "Warning: $dir.1 already exists. Overwriting contents." - else - echo "Warning: we seem to be resuming a partially written $dir.1" - fi - else - if [ -d $tmpdir ]; then - mv $tmpdir $dir.1 - if [ \$? == 1 ]; then - echo "Fatal: could mv $destdir/rotate.tmp $dir.1 on host $desthost" - exit 1 - fi - else - mkdir --parents $dir.1 - if [ \$? == 1 ]; then - echo "Fatal: could not create directory $dir.1 on host $desthost" - exit 1 - fi - fi - if [ -d $dir.2 ]; then - echo "Debug: update links $backuptype.2 --> $backuptype.1" - cp -alf $dir.2/. $dir.1 - #if [ \$? == 1 ]; then - # echo "Fatal: could not create hard links to $dir.1 on host $desthost" - # exit 1 - #fi - fi - fi - [ -f $dir.1/created ] && rm $dir.1/created - [ -f $dir.1/rotated ] && rm $dir.1/rotated - exit 0 + ssh -T -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file $sshoptions <<EOF + if [ ! -d $destdir ]; then + echo "Fatal: Destination directory $destdir does not exist on host $desthost." + exit 1 + elif [ -d $dir.1 ]; then + if [ -f $dir.1/created ]; then + echo "Warning: $dir.1 already exists. Overwriting contents." + else + echo "Warning: we seem to be resuming a partially written $dir.1" + fi + else + if [ -d $tmpdir ]; then + mv $tmpdir $dir.1 + if [ \$? == 1 ]; then + echo "Fatal: could mv $destdir/rotate.tmp $dir.1 on host $desthost" + exit 1 + fi + else + mkdir --parents $dir.1 + if [ \$? == 1 ]; then + echo "Fatal: could not create directory $dir.1 on host $desthost" + exit 1 + fi + fi + if [ -d $dir.2 ]; then + echo "Debug: update links $backuptype.2 --> $backuptype.1" + cp -alf $dir.2/. $dir.1 + #if [ \$? == 1 ]; then + # echo "Fatal: could not create hard links to $dir.1 on host $desthost" + # exit 1 + #fi + fi + fi + [ -f $dir.1/created ] && rm $dir.1/created + [ -f $dir.1/rotated ] && rm $dir.1/rotated + exit 0 EOF ) | (while read a; do passthru $a; done) - if [ $? == 1 ]; then exit; fi + if [ $? 
== 1 ]; then exit; fi } function start_mux() { - if [ "$multiconnection" == "yes" ]; then - debug "Starting dummy ssh connection" - ssh -p $destport -i $destid_file $destuser@$desthost sleep 1d & - sleep 1 - fi + if [ "$multiconnection" == "yes" ]; then + debug "Starting dummy ssh connection" + ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost sleep 1d & + sleep 1 + fi } function end_mux() { - if [ "$multiconnection" == "yes" ]; then - debug "Stopping dummy ssh connection" - ssh -p $destport -i $destid_file $destuser@$desthost pkill sleep - fi + if [ "$multiconnection" == "yes" ]; then + debug "Stopping dummy ssh connection" + ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost pkill sleep + fi } ### ################################################################## # see if we can login -debug "ssh -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file 'echo -n 1'" +debug "ssh -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file $sshoptions 'echo -n 1'" if [ ! $test ]; then - result=`ssh -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file 'echo -n 1' 2>&1` - if [ "$result" != "1" ]; then - fatal "Can't connect to $desthost as $destuser using $destid_file." - fi + result=`ssh -o PasswordAuthentication=no $desthost -l $destuser -i $destid_file $sshoptions 'echo -n 1' 2>&1` + if [ "$result" != "1" ]; then + fatal "Can't connect to $desthost as $destuser using $destid_file." + fi fi end_mux start_mux ## SANITY CHECKS ## -status=`ssh -p $destport -i $destid_file $destuser@$desthost "[ -d \"$destdir\" ] && echo 'ok'"` +status=`ssh -p $destport -i $destid_file $sshoptions $destuser@$desthost "[ -d \"$destdir\" ] && echo 'ok'"` if [ "$status" != "ok" ]; then - end_mux - fatal "Destination directory $destdir doesn't exist!" - exit + end_mux + fatal "Destination directory $destdir doesn't exist!" + exit fi ### REMOVE OLD MAILDIRS ### if [ "$remove" == "yes" ]; then - do_remove + do_remove fi ### MAKE BACKUPS ### if [ "$backup" == "yes" ]; then - if [ $keepdaily -gt 0 ]; then btype=daily - elif [ $keepweekly -gt 0 ]; then btype=weekly - elif [ $keepmonthly -gt 0 ]; then btype=monthly - else fatal "keeping no backups"; fi - - if [ "$testuser" != "" ]; then - cd "$srcdir/${user:0:1}" - do_rotate $testuser - setup_remote_dirs $testuser $btype - do_user $testuser $btype - else - [ -d "$srcdir" ] || fatal "directory $srcdir not found." - cd "$srcdir" - for userdir in `ls -d1 */`; do - [ -d "$srcdir/$userdir" ] || fatal "directory $srcdir/$userdir not found." - cd "$srcdir/$userdir" - debug $userdir - for user in `ls -1`; do - [ "$user" != "" ] || continue - debug "$user $userdir" - do_rotate $user $userdir - setup_remote_dirs $user $btype $userdir - do_user $user $btype $userdir - done - done - fi + if [ $keepdaily -gt 0 ]; then btype=daily + elif [ $keepweekly -gt 0 ]; then btype=weekly + elif [ $keepmonthly -gt 0 ]; then btype=monthly + else fatal "keeping no backups"; fi + + if [ "$testuser" != "" ]; then + cd "$srcdir/${user:0:1}" + do_rotate $testuser + setup_remote_dirs $testuser $btype + do_user $testuser $btype + else + [ -d "$srcdir" ] || fatal "directory $srcdir not found." + cd "$srcdir" + for userdir in `ls -d1 */`; do + [ -d "$srcdir/$userdir" ] || fatal "directory $srcdir/$userdir not found." 
+ cd "$srcdir/$userdir" + debug $userdir + for user in `ls -1`; do + [ "$user" != "" ] || continue + debug "$user $userdir" + do_rotate $user $userdir + setup_remote_dirs $user $btype $userdir + do_user $user $btype $userdir + done + done + fi fi end_mux - diff --git a/handlers/makecd.helper.in b/handlers/makecd.helper.in index f83b541..ad0c4aa 100644 --- a/handlers/makecd.helper.in +++ b/handlers/makecd.helper.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS makecd:makecd_backup" wizardname="makecd action wizard" @@ -7,59 +8,59 @@ declare -a makecd_excludes makecd_wizard() { - inputBox "$wizardname" "specify a burner type cd or dvd:" - [ $? = 1 ] && return - burnertype="burnertype = $REPLY" - - booleanBox "$wizardname" "Make iso image only? or burn" - if [ $? = 0 ]; then - isoonly="isoonly = yes" - else - isoonly="isoonly = no" - fi - - # backupdir - inputBox "$wizardname" "Directory where to store the backups:" - [ $? = 1 ] && return - backupdir="backupdir = $REPLY" - - inputBox "$wizardname" "what name to give to the image file?" - [ $? = 1 ] && return - imagefile="imagefile = $REPLY" - - inputBox "$wizardname" "specify a burner device:" - [ $? = 1 ] && return - device="device = $REPLY" - - # target - root of system to be included - inputBox "$wizardname" "root of filesystem for burn:" - [ $? = 1 ] && return - target="target = $REPLY" - - - # excludes - - formBegin "$wizardname: excludes" - for ((i=0; i < ${#makecd_excludes[@]} ; i++)); do - formItem exclude ${makecd_excludes[$i]} - done - formItem exclude - formItem exclude - formItem exclude - formItem exclude - formItem exclude - formItem exclude - formItem exclude - formItem exclude - formItem exclude - formDisplay - [ $? = 1 ] && return; - - unset makecd_excludes - makecd_excludes=($REPLY) - - get_next_filename $configdirectory/20.makecd - cat >> $next_filename <<EOF + inputBox "$wizardname" "specify a burner type cd or dvd:" + [ $? = 1 ] && return + burnertype="burnertype = $REPLY" + + booleanBox "$wizardname" "Make iso image only? or burn" + if [ $? = 0 ]; then + isoonly="isoonly = yes" + else + isoonly="isoonly = no" + fi + + # backupdir + inputBox "$wizardname" "Directory where to store the backups:" + [ $? = 1 ] && return + backupdir="backupdir = $REPLY" + + inputBox "$wizardname" "what name to give to the image file?" + [ $? = 1 ] && return + imagefile="imagefile = $REPLY" + + inputBox "$wizardname" "specify a burner device:" + [ $? = 1 ] && return + device="device = $REPLY" + + # target - root of system to be included + inputBox "$wizardname" "root of filesystem for burn:" + [ $? = 1 ] && return + target="target = $REPLY" + + + # excludes + + formBegin "$wizardname: excludes" + for ((i=0; i < ${#makecd_excludes[@]} ; i++)); do + formItem exclude ${makecd_excludes[$i]} + done + formItem exclude + formItem exclude + formItem exclude + formItem exclude + formItem exclude + formItem exclude + formItem exclude + formItem exclude + formItem exclude + formDisplay + [ $? = 1 ] && return; + + unset makecd_excludes + makecd_excludes=($REPLY) + + get_next_filename $configdirectory/20.makecd + cat >> $next_filename <<EOF # TYP is cd or dvd AS WELL AS the disk inside!! 
$burnertype @@ -88,10 +89,10 @@ $target # exclude = /dev EOF - for ((j=0; j < ${#makecd_excludes[@]} ; j++)); do - echo "exclude = ${makecd_excludes[$j]}" >> $next_filename - done - + for ((j=0; j < ${#makecd_excludes[@]} ; j++)); do + echo "exclude = ${makecd_excludes[$j]}" >> $next_filename + done + chmod 600 $next_filename } diff --git a/handlers/makecd.in b/handlers/makecd.in index 1a95d6d..d44bba3 100644 --- a/handlers/makecd.in +++ b/handlers/makecd.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # burncd handler script for backupninja # @@ -31,7 +32,7 @@ DVDINFO="/usr/bin/dvd+rw-mediainfo" [ -x "$CDRDAO" ] || debug 3 "echo executable $CDRDAO not present" if [ "$isoonly" == "no" ]; then - [ -e $device ] || fatal "No Burner device available" + [ -e $device ] || fatal "No Burner device available" fi outputfile="$backupdir/$imagefile" @@ -40,7 +41,7 @@ execstr="nice -n $nicelevel $MKISOFS --quiet -R -o $outputfile " str="" # excludes for i in $exclude; do - str=" -x ${i}$str" + str=" -x ${i}$str" done debug 0 "echo $str " @@ -50,39 +51,39 @@ debug 0 "echo $execstr " output=` $execstr 2>&1 ` code=$? if [ "$code" == "0" ]; then - debug $output - info "Successfully finished creation of iso" + debug $output + info "Successfully finished creation of iso" else - warning $output - warning "Failed to create iso" + warning $output + warning "Failed to create iso" fi if [ "$isoonly" == "no" ]; then - if [ "$burnertype" == "cd" ]; then - # burning iso to CD - $CDRECORD -v gracetime=2 dev=$device speed=8 -dao -data $outputfile - code=$? - if [ "$code" == "0" ]; then - debug $output - info "Successfully burned CD" - else - warning $output - warning "Failed to create CD" - fi - fi - if [ "$burnertype" == "dvd" ]; then - # burning iso dvd - $GROWISOFS -speed=2 -Z $device=$outputfile -use-the-force-luke=notray -use-the-force-luke=tty - code=$? - if [ "$code" == "0" ]; then - debug $output - info "Successfully burned DVD" - else - warning $output - warning "Failed to create DVD" - fi - fi + if [ "$burnertype" == "cd" ]; then + # burning iso to CD + $CDRECORD -v gracetime=2 dev=$device speed=8 -dao -data $outputfile + code=$? + if [ "$code" == "0" ]; then + debug $output + info "Successfully burned CD" + else + warning $output + warning "Failed to create CD" + fi + fi + if [ "$burnertype" == "dvd" ]; then + # burning iso dvd + $GROWISOFS -speed=2 -Z $device=$outputfile -use-the-force-luke=notray -use-the-force-luke=tty + code=$? 
+ if [ "$code" == "0" ]; then + debug $output + info "Successfully burned DVD" + else + warning $output + warning "Failed to create DVD" + fi + fi fi return 0 diff --git a/handlers/mysql.helper.in b/handlers/mysql.helper.in index 9622d41..d42bc7f 100644 --- a/handlers/mysql.helper.in +++ b/handlers/mysql.helper.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS mysql:mysql_database_backup" @@ -13,43 +14,43 @@ do_mysql_databases() { while [ -z "$REPLY" ]; do formBegin "$mysql_title: databases" formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" formDisplay [ $? = 0 ] || return 1 mysql_databases="databases = " for i in $REPLY; do - [ -n "$i" ] && mysql_databases="$mysql_databases $i" + [ -n "$i" ] && mysql_databases="$mysql_databases $i" done done } do_mysql_password() { - inputBox "$mysql_title" "specify a mysql user:" - [ $? = 1 ] && return - user=$REPLY - inputBox "$mysql_title" "specify the mysql user's password:" - [ $? = 1 ] && return - password=$REPLY - do_mysql_final "dbusername = $user\ndbpassword = $password" + inputBox "$mysql_title" "specify a mysql user:" + [ $? = 1 ] && return + user=$REPLY + inputBox "$mysql_title" "specify the mysql user's password:" + [ $? = 1 ] && return + password=$REPLY + do_mysql_final "dbusername = $user\ndbpassword = $password" } do_mysql_debian() { - _DISABLE_HOTCOPY=yes - do_mysql_final "configfile = /etc/mysql/debian.cnf" + _DISABLE_HOTCOPY=yes + do_mysql_final "configfile = /etc/mysql/debian.cnf" } do_mysql_user() { - inputBox "$mysql_title" "what system user does mysql backup use?" - [ $? = 1 ] && return - do_mysql_final "user = $REPLY" + inputBox "$mysql_title" "what system user does mysql backup use?" + [ $? = 1 ] && return + do_mysql_final "user = $REPLY" } do_mysql_final() { @@ -69,18 +70,18 @@ do_mysql_final() { hotcopy="hotcopy = no" fi - [ $status = 1 ] && return; + [ $status = 1 ] && return; result="$REPLY" compress="compress = no" for opt in $result; do case $opt in - '"sqldump"') sqldump="sqldump = yes";; - '"hotcopy"') hotcopy="hotcopy = yes";; - '"compress"') compress="compress = yes";; + '"sqldump"') sqldump="sqldump = yes";; + '"hotcopy"') hotcopy="hotcopy = yes";; + '"compress"') compress="compress = yes";; esac done get_next_filename $configdirectory/20.mysql - + cat >> $next_filename <<EOF ### backupninja MySQL config file ### @@ -109,7 +110,7 @@ EOF $mysql_backupdir # databases = <all | db1 db2 db3 > (default = all) -# which databases to backup. should either be the word 'all' or a +# which databases to backup. should either be the word 'all' or a # space separated list of database names. 
$mysql_databases @@ -117,43 +118,43 @@ EOF if [ $host_or_vservers == vservers ] then - cat >> $next_filename <<EOF + cat >> $next_filename <<EOF # # vsname = <vserver> (no default) -# vsname indicates which vserver to operate on, this is only used if +# vsname indicates which vserver to operate on, this is only used if # vserver is set to yes in /etc/backupninja.conf # NOTE: if you do not specify a vsname the host will be operated on -# alsoNOTE: if operating on a vserver, $VROOTDIR will be +# alsoNOTE: if operating on a vserver, $VROOTDIR will be # prepended to backupdir. EOF echo -e "$mysql_vsname\n" >> $next_filename fi echo -e $@ >> $next_filename - + chmod 600 $next_filename } mysql_wizard() { - + # Global variables mysql_title="MySQL action wizard" - + # backup the host system or a Vserver? choose_host_or_one_vserver "$mysql_title" [ $? = 0 ] || return 1 if [ $host_or_vservers == vservers ] then - do_mysql_vserver - [ $? = 0 ] || return 1 + do_mysql_vserver + [ $? = 0 ] || return 1 fi - + # backupdir if [ $host_or_vservers == vservers ] then - inputBox "$mysql_title" "Directory where to store the backups:`echo \"\n(Relative to chosen vserver's root directory)\"`" "/var/backups/mysql" + inputBox "$mysql_title" "Directory where to store the backups:`echo \"\n(Relative to chosen vserver's root directory)\"`" "/var/backups/mysql" else - inputBox "$mysql_title" "Directory where to store the backups" "/var/backups/mysql" + inputBox "$mysql_title" "Directory where to store the backups" "/var/backups/mysql" fi [ $? = 1 ] && return mysql_backupdir="backupdir = $REPLY" @@ -166,23 +167,23 @@ mysql_wizard() { do_mysql_databases [ $? = 0 ] || return 1 fi - + while true; do _DISABLE_HOTCOPY= menuBoxHelpFile "$mysql_title" "choose a mysql authentication method:" \ - user "change to a linux user first." \ - password "manually specify mysql user and password." \ - debian "use default mysql user debian-sys-maint." + user "change to a linux user first." \ + password "manually specify mysql user and password." \ + debian "use default mysql user debian-sys-maint." status=$? if [ $status = 2 ]; then - # show help. - helptmp="/tmp/backupninja.help.$$" - cat > $helptmp <<EOF + # show help. + helptmp="/tmp/backupninja.help.$$" + cat > $helptmp <<EOF To connect to mysql, backupninja must authenticate. There are three possible authentication methods: USER -With this method, you specify a system user. Backupninja will +With this method, you specify a system user. Backupninja will then become this user before running mysqldump or mysqlhotcopy. The result is that ~/.my.cnf is used for authentication. @@ -194,11 +195,11 @@ DEBIAN With this method, we use the debian-sys-maint user which is already defined in /etc/mysql/debian.cnf. If you are running debian, this is recommended, because no further configuration -is needed. The drawback is that this is incompatible with +is needed. The drawback is that this is incompatible with mysqlhotcopy: you must use mysqldump. 
EOF - dialog --textbox $helptmp 0 0 - rm $helptmp + dialog --textbox $helptmp 0 0 + rm $helptmp fi [ $status = 1 ] && return; @@ -208,5 +209,5 @@ EOF "password") do_mysql_password;return;; "debian") do_mysql_debian;return;; esac - done + done } diff --git a/handlers/mysql.in b/handlers/mysql.in index b304833..3488c51 100644 --- a/handlers/mysql.in +++ b/handlers/mysql.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # mysql handler script for backupninja # @@ -10,6 +11,7 @@ getconf nodata getconf dbhost localhost getconf hotcopy no getconf sqldump no +getconf sqldumpoptions "--lock-tables --complete-insert --add-drop-table --quick --quote-names" getconf compress yes getconf vsname @@ -46,9 +48,9 @@ fi ignore='' for i in $ignores $nodata; do - ignore="$ignore --ignore-table=$i" + ignore="$ignore --ignore-table=$i" done - + # create backup dirs, $vroot will be empty if no vsname was specified # and we will instead proceed to operate on the host [ -d $vroot$backupdir ] || mkdir -p $vroot$backupdir @@ -58,11 +60,11 @@ dumpdir="$backupdir/sqldump" if [ $usevserver = yes ] then - [ "$sqldump" == "no" -o -d $vroot$dumpdir ] || $VSERVER $vsname exec mkdir -p $dumpdir - [ "$hotcopy" == "no" -o -d $vroot$hotdir ] || $VSERVER $vsname exec mkdir -p $hotdir + [ "$sqldump" == "no" -o -d $vroot$dumpdir ] || $VSERVER $vsname exec mkdir -p $dumpdir + [ "$hotcopy" == "no" -o -d $vroot$hotdir ] || $VSERVER $vsname exec mkdir -p $hotdir else - [ "$sqldump" == "no" -o -d $dumpdir ] || mkdir -p $dumpdir - [ "$hotcopy" == "no" -o -d $hotdir ] || mkdir -p $hotdir + [ "$sqldump" == "no" -o -d $dumpdir ] || mkdir -p $dumpdir + [ "$hotcopy" == "no" -o -d $hotdir ] || mkdir -p $hotdir fi ####################################################################### @@ -86,35 +88,35 @@ defaultsfile="" if [ "$dbusername" != "" -a "$dbpassword" != "" ] then - if [ $usevserver = yes ] - then - home=`$VSERVER $vsname exec getent passwd "root" | @AWK@ -F: '{print $6}'` - else - home=`getent passwd "root" | @AWK@ -F: '{print $6}'` - fi + if [ $usevserver = yes ] + then + home=`$VSERVER $vsname exec getent passwd "root" | @AWK@ -F: '{print $6}'` + else + home=`getent passwd "root" | @AWK@ -F: '{print $6}'` + fi - [ -d $home ] || fatal "Can't find root's home directory ($home)." + [ -d $home ] || fatal "Can't find root's home directory ($home)." - mycnf="$home/.my.cnf" + mycnf="$home/.my.cnf" - if [ $usevserver = yes ] - then + if [ $usevserver = yes ] + then workcnf="$vroot$mycnf" - else + else workcnf="$mycnf" - fi + fi - if [ -f $workcnf ] - then + if [ -f $workcnf ] + then # rename temporarily tmpcnf="$workcnf.disable" debug "mv $workcnf $tmpcnf" mv $workcnf $tmpcnf - fi + fi - oldmask=`umask` - umask 077 - cat > $workcnf <<EOF + oldmask=`umask` + umask 077 + cat > $workcnf <<EOF # auto generated backupninja mysql conf [mysql] host=$dbhost @@ -136,37 +138,37 @@ host=$dbhost user=$dbusername password="$dbpassword" EOF - umask $oldmask - defaultsfile="--defaults-extra-file=$mycnf" + umask $oldmask + defaultsfile="--defaults-extra-file=$mycnf" fi # if a user is not set, use $configfile, otherwise use $mycnf if [ "$user" == "" ]; then - user=root; - defaultsfile="--defaults-extra-file=$configfile" + user=root; + defaultsfile="--defaults-extra-file=$configfile" else - userset=true; - if [ $usevserver = yes ] - then - userhome=`$VSERVER $vsname exec getent passwd "$user" | @AWK@ -F: '{print $6}'` - if [ $? 
-eq 2 ] - then - fatal "User $user not found in /etc/passwd" - fi - debug "User home set to: $vroot$userhome" - [ -f $vroot$userhome/.my.cnf ] || fatal "Can't find config file in $userhome/.my.cnf" - else - userhome=`getent passwd "$user" | @AWK@ -F: '{print $6}'` - if [ $? -eq 2 ] - then - fatal "User $user not found in /etc/passwd" - fi - debug "User home set to: $userhome" - [ -f $userhome/.my.cnf ] || fatal "Can't find config file in $userhome/.my.cnf" - fi + userset=true; + if [ $usevserver = yes ] + then + userhome=`$VSERVER $vsname exec getent passwd "$user" | @AWK@ -F: '{print $6}'` + if [ $? -eq 2 ] + then + fatal "User $user not found in /etc/passwd" + fi + debug "User home set to: $vroot$userhome" + [ -f $vroot$userhome/.my.cnf ] || fatal "Can't find config file in $userhome/.my.cnf" + else + userhome=`getent passwd "$user" | @AWK@ -F: '{print $6}'` + if [ $? -eq 2 ] + then + fatal "User $user not found in /etc/passwd" + fi + debug "User home set to: $userhome" + [ -f $userhome/.my.cnf ] || fatal "Can't find config file in $userhome/.my.cnf" + fi - defaultsfile="--defaults-extra-file=$userhome/.my.cnf" - debug "using $defaultsfile" + defaultsfile="--defaults-extra-file=$userhome/.my.cnf" + debug "using $defaultsfile" fi ####################################################################### @@ -174,55 +176,55 @@ fi if [ "$hotcopy" == "yes" ] then - info "Initializing hotcopy method" - if [ "$databases" == "all" ] - then - if [ $usevserver = yes ] - then - info "dbhost: $dbhost" - execstr="$VSERVER $vsname exec $MYSQLHOTCOPY -h $dbhost --quiet --allowold --regexp /.\*/./.\*/ $hotdir" - else - execstr="$MYSQLHOTCOPY --quiet --allowold --regexp /.\*/./.\*/ $hotdir" - fi - debug "su $user -c \"$execstr\"" - if [ ! $test ] - then - output=`su $user -c "$execstr" 2>&1` - code=$? - if [ "$code" == "0" ] - then - debug $output - info "Successfully finished hotcopy of all mysql databases" - else - warning $output - warning "Failed to hotcopy all mysql databases" - fi - fi - else - for db in $databases - do - if [ $usevserver = yes ] - then - execstr="$VSERVER $vsname exec $MYSQLHOTCOPY --allowold $db $hotdir" - else - execstr="$MYSQLHOTCOPY --allowold $db $hotdir" - fi - debug 'su $user -c \"$execstr\"' - if [ ! $test ] - then - output=`su $user -c "$execstr" 2>&1` - code=$? - if [ "$code" == "0" ] - then - debug $output - info "Successfully finished hotcopy of mysql database $db" - else - warning $output - warning "Failed to hotcopy mysql database $db" - fi - fi - done - fi + info "Initializing hotcopy method" + if [ "$databases" == "all" ] + then + if [ $usevserver = yes ] + then + info "dbhost: $dbhost" + execstr="$VSERVER $vsname exec $MYSQLHOTCOPY -h $dbhost --quiet --allowold --regexp /.\*/./.\*/ $hotdir" + else + execstr="$MYSQLHOTCOPY --quiet --allowold --regexp /.\*/./.\*/ $hotdir" + fi + debug "su $user -c \"$execstr\"" + if [ ! $test ] + then + output=`su $user -c "$execstr" 2>&1` + code=$? + if [ "$code" == "0" ] + then + debug $output + info "Successfully finished hotcopy of all mysql databases" + else + warning $output + warning "Failed to hotcopy all mysql databases" + fi + fi + else + for db in $databases + do + if [ $usevserver = yes ] + then + execstr="$VSERVER $vsname exec $MYSQLHOTCOPY --allowold $db $hotdir" + else + execstr="$MYSQLHOTCOPY --allowold $db $hotdir" + fi + debug 'su $user -c \"$execstr\"' + if [ ! $test ] + then + output=`su $user -c "$execstr" 2>&1` + code=$? 
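# [Illustrative sketch, not part of this patch] Why the handler writes a
# temporary ~/.my.cnf and passes it with --defaults-extra-file: credentials
# given on the command line would be visible in the process list, while an
# options file created with mode 0600 is not. User, password and paths here
# are placeholders.
umask 077
cat > /root/.my.cnf <<'CNF'
[mysqldump]
host=localhost
user=backupuser
password="s3cret"
CNF
mysqldump --defaults-extra-file=/root/.my.cnf --all-databases > /var/backups/mysql/all.sql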
+ if [ "$code" == "0" ] + then + debug $output + info "Successfully finished hotcopy of mysql database $db" + else + warning $output + warning "Failed to hotcopy mysql database $db" + fi + fi + done + fi fi ########################################################################## @@ -230,97 +232,97 @@ fi if [ "$sqldump" == "yes" ] then - info "Initializing SQL dump method" - if [ "$databases" == "all" ] - then - if [ $usevserver = yes ] - then - debug 'echo show databases | $VSERVER $vsname exec su $user -c \"$MYSQL $defaultsfile\" | grep -v Database' - databases=`echo 'show databases' | $VSERVER $vsname exec su $user -c "$MYSQL $defaultsfile" | grep -v Database` - if [ $? -ne 0 ] - then - fatal "Authentication problem, maybe user/password is wrong or mysqld is not running?" - fi - else - databases=$(su $user -c "$MYSQL $defaultsfile -N -B -e 'show databases'" | sed 's/|//g;/\+----/d') - if [ $? -ne 0 ] - then - fatal "Authentication problem, maybe user/password is wrong or mysqld is not running?" - fi - fi - fi + info "Initializing SQL dump method" + if [ "$databases" == "all" ] + then + if [ $usevserver = yes ] + then + debug 'set -o pipefail ; echo show databases | $VSERVER $vsname exec su $user -c \"$MYSQL $defaultsfile\" | grep -v Database' + databases=`set -o pipefail ; echo 'show databases' | $VSERVER $vsname exec su $user -c "$MYSQL $defaultsfile" | grep -v Database` + if [ $? -ne 0 ] + then + fatal "Authentication problem, maybe user/password is wrong or mysqld is not running?" + fi + else + databases=$(set -o pipefail ; su $user -c "$MYSQL $defaultsfile -N -B -e 'show databases'" | sed 's/|//g;/\+----/d') + if [ $? -ne 0 ] + then + fatal "Authentication problem, maybe user/password is wrong or mysqld is not running?" + fi + fi + fi - for db in $databases - do - DUMP_BASE="$MYSQLDUMP $defaultsfile --lock-tables --complete-insert --add-drop-table --quick --quote-names" + for db in $databases + do + DUMP_BASE="$MYSQLDUMP $defaultsfile $sqldumpoptions" - # Dumping structure and data - DUMP="$DUMP_BASE $ignore $db" + # Dumping structure and data + DUMP="$DUMP_BASE $ignore $db" - # If requested, dump only the table structure for this database - if echo "$nodata" | grep -E '(^|[[:space:]])'"$db\." >/dev/null - then - # Get the structure of the tables, without data - DUMP_STRUCT="$DUMP_BASE --no-data $db" - for qualified_table in $nodata - do - table=$( expr match "$qualified_table" "$db\.\([^\w]*\)" ) - DUMP_STRUCT="$DUMP_STRUCT $table" - done - DUMP="( $DUMP; $DUMP_STRUCT )" - fi - if [ $usevserver = yes ] - then - # Test to make sure mysqld is running, if it is not sqldump will not work - $VSERVER $vsname exec su $user -c "$MYSQLADMIN $defaultsfile ping" - if [ $? -ne 0 ]; then - fatal "Either you have an authentication problem, or mysqld doesn't appear to be running!" - fi - if [ "$compress" == "yes" ]; then - execstr="$VSERVER $vsname exec $DUMP | $GZIP > $vroot$dumpdir/${db}.sql.gz" - else - execstr="$VSERVER $vsname exec $DUMP -r $vroot$dumpdir/${db}.sql" - fi - else - # Test to make sure mysqld is running, if it is not sqldump will not work - su $user -c "$MYSQLADMIN $defaultsfile ping" - if [ $? -ne 0 ]; then - fatal "Either you have an authentication problem, or mysqld doesn't appear to be running!" - fi - if [ "$compress" == "yes" ]; then - execstr="$DUMP | $GZIP > $dumpdir/${db}.sql.gz" - else - execstr="$DUMP -r $dumpdir/${db}.sql" - fi - fi - debug "su $user -c \"$execstr\"" - if [ ! $test ] - then - output=`su $user -c "$execstr" 2>&1` - code=$? 
- if [ "$code" == "0" ] - then - debug $output - info "Successfully finished dump of mysql database $db" - else - warning $output - warning "Failed to dump mysql databases $db" - fi - fi - done + # If requested, dump only the table structure for this database + if echo "$nodata" | grep -E '(^|[[:space:]])'"$db\." >/dev/null + then + # Get the structure of the tables, without data + DUMP_STRUCT="$DUMP_BASE --no-data $db" + for qualified_table in $nodata + do + table=$( expr match "$qualified_table" "$db\.\([^\w]*\)" ) + DUMP_STRUCT="$DUMP_STRUCT $table" + done + DUMP="( $DUMP; $DUMP_STRUCT )" + fi + if [ $usevserver = yes ] + then + # Test to make sure mysqld is running, if it is not sqldump will not work + $VSERVER $vsname exec su $user -c "$MYSQLADMIN $defaultsfile ping 2>&1 >/dev/null" + if [ $? -ne 0 ]; then + fatal "mysqld doesn't appear to be running!" + fi + if [ "$compress" == "yes" ]; then + execstr="$VSERVER $vsname exec $DUMP | $GZIP --rsyncable > '$vroot$dumpdir/${db}.sql.gz'" + else + execstr="$VSERVER $vsname exec $DUMP -r '$vroot$dumpdir/${db}.sql'" + fi + else + # Test to make sure mysqld is running, if it is not sqldump will not work + su $user -c "$MYSQLADMIN $defaultsfile ping 2>&1 >/dev/null" + if [ $? -ne 0 ]; then + fatal "mysqld doesn't appear to be running!" + fi + if [ "$compress" == "yes" ]; then + execstr="$DUMP | $GZIP --rsyncable > '$dumpdir/${db}.sql.gz'" + else + execstr="$DUMP -r '$dumpdir/${db}.sql'" + fi + fi + debug "su $user -c \"$execstr\"" + if [ ! $test ] + then + output=`su $user -c "set -o pipefail ; $execstr" 2>&1` + code=$? + if [ "$code" == "0" ] + then + debug $output + info "Successfully finished dump of mysql database $db" + else + warning $output + warning "Failed to dump mysql databases $db" + fi + fi + done fi # clean up tmp config file if [ "$dbusername" != "" -a "$dbpassword" != "" ] then - ## clean up tmp config file - debug "rm $workcnf" - rm $workcnf - if [ -f "$tmpcnf" ] - then - debug "mv $tmpcnf $workcnf" - mv $tmpcnf $workcnf - fi + ## clean up tmp config file + debug "rm $workcnf" + rm $workcnf + if [ -f "$tmpcnf" ] + then + debug "mv $tmpcnf $workcnf" + mv $tmpcnf $workcnf + fi fi return 0 diff --git a/handlers/pgsql.helper.in b/handlers/pgsql.helper.in index 8024616..ff1cfd4 100644 --- a/handlers/pgsql.helper.in +++ b/handlers/pgsql.helper.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS pgsql:postgresql_database_backup" @@ -13,20 +14,20 @@ do_pgsql_databases() { while [ -z "$REPLY" ]; do formBegin "$pgsql_title: databases" formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" - formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" + formItem "Database:" formDisplay [ $? = 0 ] || return 1 pgsql_databases="databases = " for i in $REPLY; do - [ -n "$i" ] && pgsql_databases="$pgsql_databases $i" + [ -n "$i" ] && pgsql_databases="$pgsql_databases $i" done done } @@ -86,14 +87,14 @@ EOF $pgsql_backupdir # databases = < all | db1 db2 db3 > (default = all) -# which databases to backup. should either be the word 'all' or a +# which databases to backup. should either be the word 'all' or a # space separated list of database names. 
# Note: when using 'all', pg_dumpall is used instead of pg_dump, which means # that cluster-wide data (such as users and groups) are saved. $pgsql_databases # compress = < yes | no > (default = yes) -# if yes, compress the pg_dump/pg_dumpall output. +# if yes, compress the pg_dump/pg_dumpall output. $pgsql_compress ### You can also set the following variables in backupninja.conf: diff --git a/handlers/pgsql.in b/handlers/pgsql.in index f334bf2..77a73fe 100644 --- a/handlers/pgsql.in +++ b/handlers/pgsql.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # PostgreSQL handler script for backupninja # @@ -36,18 +37,18 @@ if [ $usevserver = yes ]; then debug "Examining vserver '$vsname'." if [ "$databases" == "all" ]; then [ -x "$vroot`$VSERVER $vsname exec which $PGSQLDUMPALL`" ] || \ - fatal "Can't find $PGSQLDUMPALL in vserver $vsname." + fatal "Can't find $PGSQLDUMPALL in vserver $vsname." else [ -x "$vroot`$VSERVER $vsname exec which $PGSQLDUMP`" ] || \ - fatal "Can't find $PGSQLDUMP in vserver $vsname." + fatal "Can't find $PGSQLDUMP in vserver $vsname." fi else if [ "$databases" == "all" ]; then [ -x "`which $PGSQLDUMPALL`" ] || \ - fatal "Can't find $PGSQLDUMPALL." + fatal "Can't find $PGSQLDUMPALL." else [ -x "`which $PGSQLDUMP`" ] || \ - fatal "Can't find $PGSQLDUMP." + fatal "Can't find $PGSQLDUMP." fi fi @@ -64,7 +65,7 @@ else pguid=`getent passwd $PGSQLUSER | @AWK@ -F: '{print $3}'` fi [ -n "$pguid" ] || \ - fatal "No user called $PGSQLUSER`[ $usevserver = no ] || echo \" on vserver $vsname\"`." + fatal "No user called $PGSQLUSER`[ $usevserver = no ] || echo \" on vserver $vsname\"`." debug "chown $pguid $vroot$backupdir" chown $pguid $vroot$backupdir debug "chmod 700 $vroot$backupdir" @@ -72,61 +73,61 @@ chmod 700 $vroot$backupdir # if $databases = all, use pg_dumpall if [ "$databases" == "all" ]; then - if [ $usevserver = yes ]; then - if [ "$compress" == "yes" ]; then - execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"$PGSQLDUMPALL | $GZIP > $backupdir/${vsname}.sql.gz\"" - else - execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"$PGSQLDUMPALL > $backupdir/${vsname}.sql\"" - fi - else - if [ "$compress" == "yes" ]; then - execstr="su - $PGSQLUSER -c \"$PGSQLDUMPALL | $GZIP > $backupdir/${localhost}-all.sql.gz\"" - else - execstr="su - $PGSQLUSER -c \"$PGSQLDUMPALL > $backupdir/${localhost}-all.sql\"" - fi - fi - debug "$execstr" - if [ ! $test ]; then - output=`eval $execstr 2>&1` - code=$? - if [ "$code" == "0" ]; then - debug $output - info "Successfully finished dump of pgsql cluster" - else - warning $output - warning "Failed to dump pgsql cluster" - fi - fi - + if [ $usevserver = yes ]; then + if [ "$compress" == "yes" ]; then + execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"set -o pipefail ; $PGSQLDUMPALL | $GZIP --rsyncable > '$backupdir/${vsname}.sql.gz'\"" + else + execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"$PGSQLDUMPALL > '$backupdir/${vsname}.sql'\"" + fi + else + if [ "$compress" == "yes" ]; then + execstr="su - $PGSQLUSER -c \"set -o pipefail ; $PGSQLDUMPALL | $GZIP --rsyncable > '$backupdir/${localhost}-all.sql.gz'\"" + else + execstr="su - $PGSQLUSER -c \"$PGSQLDUMPALL > '$backupdir/${localhost}-all.sql'\"" + fi + fi + debug "$execstr" + if [ ! $test ]; then + output=`eval $execstr 2>&1` + code=$? 
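# [Illustrative sketch, not part of this patch] The "databases = all" case
# uses pg_dumpall because per-database pg_dump skips cluster-wide objects
# such as roles; a manual equivalent, with placeholder names:
su - postgres -c "pg_dumpall --globals-only" > /var/backups/postgres/globals.sql
su - postgres -c "pg_dump exampledb" | gzip --rsyncable > /var/backups/postgres/exampledb.sql.gz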
+ if [ "$code" == "0" ]; then + debug $output + info "Successfully finished dump of pgsql cluster" + else + warning $output + warning "Failed to dump pgsql cluster" + fi + fi + # else use pg_dump on each specified database else - for db in $databases; do - if [ $usevserver = yes ]; then - if [ "$compress" == "yes" ]; then - execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"$PGSQLDUMP $db | $GZIP > $backupdir/${db}.sql.gz\"" - else - execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"$PGSQLDUMP $db | > $backupdir/${db}.sql\"" - fi - else - if [ "$compress" == "yes" ]; then - execstr="su - $PGSQLUSER -c \"$PGSQLDUMP $db | $GZIP > $backupdir/${db}.sql.gz\"" - else - execstr="su - $PGSQLUSER -c \"$PGSQLDUMP $db > $backupdir/${db}.sql\"" - fi - fi - debug "$execstr" - if [ ! $test ]; then - output=`eval $execstr 2>&1` - code=$? - if [ "$code" == "0" ]; then - debug $output - info "Successfully finished dump of pgsql database ${db}" - else - warning $output - warning "Failed to dump pgsql database ${db}" - fi - fi - done + for db in $databases; do + if [ $usevserver = yes ]; then + if [ "$compress" == "yes" ]; then + execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"set -o pipefail ; $PGSQLDUMP $db | $GZIP --rsyncable > '$backupdir/${db}.sql.gz'\"" + else + execstr="$VSERVER $vsname exec su - $PGSQLUSER -c \"$PGSQLDUMP $db | > '$backupdir/${db}.sql'\"" + fi + else + if [ "$compress" == "yes" ]; then + execstr="su - $PGSQLUSER -c \"set -o pipefail ; $PGSQLDUMP $db | $GZIP --rsyncable > '$backupdir/${db}.sql.gz'\"" + else + execstr="su - $PGSQLUSER -c \"$PGSQLDUMP $db > '$backupdir/${db}.sql'\"" + fi + fi + debug "$execstr" + if [ ! $test ]; then + output=`eval $execstr 2>&1` + code=$? + if [ "$code" == "0" ]; then + debug $output + info "Successfully finished dump of pgsql database ${db}" + else + warning $output + warning "Failed to dump pgsql database ${db}" + fi + fi + done fi return 0 diff --git a/handlers/rdiff.helper.in b/handlers/rdiff.helper.in index 1055280..83f2fb5 100644 --- a/handlers/rdiff.helper.in +++ b/handlers/rdiff.helper.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS rdiff:incremental_remote_filesystem_backup" @@ -18,9 +19,9 @@ do_rdiff_host_includes() { for ((i=0; i < ${#rdiff_includes[@]} ; i++)); do formItem include ${rdiff_includes[$i]} done - formItem include - formItem include - formItem include + formItem include + formItem include + formItem include formDisplay [ $? = 0 ] || return unset rdiff_includes @@ -41,7 +42,7 @@ do_rdiff_vserver() { while [ -z "$REPLY" ]; do formBegin "$rdiff_title - vsincludes (backup these directories from every vserver)" [ -z "$rdiff_vsincludes" ] && rdiff_vsincludes="$rdiff_default_includes" - + for i in $rdiff_vsincludes; do formItem include "$i" done @@ -52,19 +53,19 @@ do_rdiff_vserver() { [ $? = 0 ] || return 1 rdiff_vsincludes=($REPLY) done - + set +o noglob } do_rdiff_excludes() { set -o noglob - formBegin "$rdiff_title: excludes" + formBegin "$rdiff_title: excludes" for ((i=0; i < ${#rdiff_excludes[@]} ; i++)) do formItem exclude ${rdiff_excludes[$i]} done - formItem exclude - formItem exclude + formItem exclude + formItem exclude formDisplay [ $? 
= 0 ] || return unset rdiff_excludes @@ -102,10 +103,10 @@ do_rdiff_src() { do_rdiff_dest() { declare -a tmp_array - + set -o noglob REPLY= - while [ -z "$REPLY" -o -z "$rdiff_directory" -o -z "$rdiff_host" -o -z "$rdiff_user" ] + while [ -z "$REPLY" -o -z "$rdiff_directory" -o -z "$rdiff_host" -o -z "$rdiff_user" ] do formBegin "$rdiff_title - destination: last three items are required" formItem "keep" "$rdiff_keep" @@ -119,11 +120,11 @@ do_rdiff_dest() { rdiff_keep=${tmp_array[0]} rdiff_directory=${tmp_array[1]} rdiff_host=${tmp_array[2]} - rdiff_user=${tmp_array[3]} + rdiff_user=${tmp_array[3]} rdiff_type=${tmp_array[4]} done set +o noglob - + _dest_done="(DONE)" setDefault conn } @@ -155,7 +156,7 @@ do_rdiff_ssh_con() { echo "Done. hit return to continue" read fi - + ssh -o PreferredAuthentications=publickey $rdiff_host -l $rdiff_user "exit" 2> /dev/null if [ $? -ne 0 ]; then echo "Copying root's public ssh key to authorized_keys of $rdiff_user@$rdiff_host. When prompted, specify the password for user $rdiff_user@$rdiff_host." @@ -171,7 +172,7 @@ do_rdiff_ssh_con() { 1 ) msgBox "$rdiff_title: error" "Connected successfully to $rdiff_user@$rdiff_host, but unable to write. Check ownership and modes of ~$rdiff_user on $rdiff_host." ;; 255 ) msgBox "$rdiff_title: error" "Failed to connect to $rdiff_user@$rdiff_host. Check hostname, username, and password. Also, make sure sshd is running on the destination host." ;; * ) msgBox "$rdiff_title: error" "Unexpected error." ;; - esac + esac return else echo "Done. hit return to continue" @@ -195,20 +196,20 @@ do_rdiff_ssh_con() { else booleanBox "Remote directory does not exist" "The destination backup directory does not exist, do you want me to create it for you?" if [ $? = 0 ]; then - ssh $rdiff_user@$rdiff_host "mkdir -p ${rdiff_directory}" + ssh $rdiff_user@$rdiff_host "mkdir -p ${rdiff_directory}" result=$? case $result in 0) msgBox "$rdiff_title: success" "Creation of the remote destination directory was a success!";; - 1) msgBox "$rdiff_title: error" "Connected successfully to $rdiff_user@$rdiff_host, but was unable to create the destination directory, check the directory permissions." + 1) msgBox "$rdiff_title: error" "Connected successfully to $rdiff_user@$rdiff_host, but was unable to create the destination directory, check the directory permissions." remote_status=failed;; - 255) msgBox "$rdiff_title: error" "Failed to connect to $rdiff_user@$rdiff_host. Check hostname, username, and password. Also, make sure sshd is running on the destination host." + 255) msgBox "$rdiff_title: error" "Failed to connect to $rdiff_user@$rdiff_host. Check hostname, username, and password. Also, make sure sshd is running on the destination host." remote_status=failed;; - *) msgBox "$rdiff_title: error" "Unexpected error." + *) msgBox "$rdiff_title: error" "Unexpected error." remote_status=failed;; esac fi fi - + if [ "$remote_status" = "ok" ]; then do_rdiff_con fi @@ -231,12 +232,12 @@ do_rdiff_con() { echo "Hit return to continue." read case $result in - 0) msgBox "$rdiff_title: success" "Installation of rdiff-backup was a success!" + 0) msgBox "$rdiff_title: success" "Installation of rdiff-backup was a success!" do_rdiff_con;; 1) msgBox "$rdiff_title: error" "Connected successfully to $rdiff_user@$rdiff_host, but was unable to install the package for some reason.";; 255) msgBox "$rdiff_title: error" "Failed to connect to $rdiff_user@$rdiff_host. Check hostname, username, and password. 
Also, make sure sshd is running on the destination host.";; *) msgBox "$rdiff_title: error" "Unexpected error.";; - esac + esac return fi else @@ -256,9 +257,9 @@ do_rdiff_con() { fi fi else - echo "SUCCESS: Everything looks good!" - echo "Hit return to continue." - read + echo "SUCCESS: Everything looks good!" + echo "Hit return to continue." + read fi _con_done="(DONE)" @@ -298,11 +299,11 @@ EOF set -o noglob for ((i=0; i < ${#rdiff_includes[@]} ; i++)); do echo "include = ${rdiff_includes[$i]}" >> $next_filename - done + done set +o noglob fi - if [ "$host_or_vservers" == vservers -o "$host_or_vservers" == both ]; then + if [ "$host_or_vservers" == vservers -o "$host_or_vservers" == both ]; then cat >> $next_filename <<EOF # # If vservers = yes in /etc/backupninja.conf then the following variables can @@ -328,7 +329,7 @@ EOF done set +o noglob fi - + ## excludes ## set -o noglob for ((i=0; i < ${#rdiff_excludes[@]} ; i++)); do @@ -340,7 +341,7 @@ EOF ###################################################### ## destination section ## (where the files are copied to) - + [dest] type = remote directory = $rdiff_directory @@ -382,7 +383,7 @@ rdiff_main_menu() { } rdiff_wizard() { - + # Global variables rdiff_title="rdiff-backup action wizard" _src_done= @@ -401,7 +402,7 @@ rdiff_wizard() { rdiff_excludes=(/home/*/.gnupg /home/*/.local/share/Trash /home/*/.Trash /home/*/.thumbnails /home/*/.beagle /home/*/.aMule /home/*/gtk-gnutella-downloads) rdiff_vsincludes= set +o noglob - + rdiff_main_menu } diff --git a/handlers/rdiff.in b/handlers/rdiff.in index 0f93429..60386fa 100644 --- a/handlers/rdiff.in +++ b/handlers/rdiff.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # rdiff-backup handler script for backupninja # requires rdiff-backup @@ -7,68 +8,74 @@ ### FUNCTIONS ### function test_connection() { - # given a user and host, - # tests the connection. - # if user or host is missing, returns 0 - # (ie, assume it's a local connection). - if [ $# -lt 2 ]; then - debug "(local is assumed to be a good connection)" - return 0 - fi - local user=$1 - local host=$2 - debug "ssh $sshoptions -o PasswordAuthentication=no $host -l $user 'echo -n 1'" - local ret=`ssh $sshoptions -o PasswordAuthentication=no $host -l $user 'echo -n host is alive'` - if echo $ret | grep "host is alive"; then - debug "Connected to $host as $user successfully" - else - fatal "Can't connect to $host as $user." - fi + # given a user and host, + # tests the connection. + # if user or host is missing, returns 0 + # (ie, assume it's a local connection). + if [ $# -lt 2 ]; then + debug "(local is assumed to be a good connection)" + return 0 + fi + local user=$1 + local host=$2 + debug "ssh $sshoptions -o PasswordAuthentication=no $host -l $user 'echo -n 1'" + local ret=`ssh $sshoptions -o PasswordAuthentication=no $host -l $user 'echo -n host is alive'` + if echo $ret | grep "host is alive"; then + debug "Connected to $host as $user successfully" + else + fatal "Can't connect to $host as $user." + fi } function get_version() { - # given no arguments, returns the local version. - # given a user and host, returns the remote version. - # if user or host is missing, returns the local version. 
- if [ "$#" -lt 2 ]; then - debug "$RDIFFBACKUP -V" - echo `$RDIFFBACKUP -V` - else - local user=$1 - local host=$2 - debug "ssh $sshoptions $host -l $user '$RDIFFBACKUP -V'" - echo `ssh $sshoptions $host -l $user "$RDIFFBACKUP -V | grep rdiff-backup"` - fi + # given no arguments, returns the local version. + # given a user and host, returns the remote version. + # if user or host is missing, returns the local version. + local version + if [ "$#" -lt 2 ]; then + debug "$RDIFFBACKUP -V" + echo `$RDIFFBACKUP -V` + else + local user=$1 + local host=$2 + debug "ssh $sshoptions $host -l $user '$RDIFFBACKUP -V'" + version=`ssh $sshoptions $host -l $user "$RDIFFBACKUP -V"` + if [ $? = 127 ]; then + fatal "Unable to execute rdiff-backup on remote server. It probably isn't installed" + else + echo "$version" | grep rdiff-backup + fi + fi } function check_consistency() { - local section=$1 - local type=$2 - local user=$3 - local host=$4 - if [ "$type" == "local" ]; then - if [ "$user" != "" ]; then - warning "User should not be specified for local $section." - fi - if [ "$host" != "" ]; then - warning "Host should not be specified for local $section." - fi - fi - if [ "$type" == "remote" ]; then - if [ "$user" == "" ]; then - fatal "User must be specified for remote $section." - fi - if [ "host" == "" ]; then - fatal "Host must be specifed for remote $section." - fi - fi + local section=$1 + local type=$2 + local user=$3 + local host=$4 + if [ "$type" == "local" ]; then + if [ "$user" != "" ]; then + warning "User should not be specified for local $section." + fi + if [ "$host" != "" ]; then + warning "Host should not be specified for local $section." + fi + fi + if [ "$type" == "remote" ]; then + if [ "$user" == "" ]; then + fatal "User must be specified for remote $section." + fi + if [ "$host" == "" ]; then + fatal "Host must be specifed for remote $section." + fi + fi } function check_cstream() { - local cstream=$1 - if [ ! -x $cstream ]; then - fatal "Can't find your cstream binary (trying: $cstream). If you use bwlimit you must have cstream installed." - fi + local cstream=$1 + if [ ! -x $cstream ]; then + fatal "Can't find your cstream binary (trying: $cstream). If you use bwlimit you must have cstream installed." + fi } ### GET CONFIG ### @@ -102,7 +109,7 @@ getconf sshoptions check_consistency "destination" "$type" "$user" "$host" if [ -n "$sshoptions" ] && echo $options | grep -qv "remote-schema"; then - options="$options --remote-schema 'ssh -C $sshoptions %s rdiff-backup --server'" + options="$options --remote-schema 'ssh -C $sshoptions %s rdiff-backup --server'" fi ### CHECK CONFIG ### @@ -114,7 +121,7 @@ if [ $vservers_are_available = yes ]; then vsnames="$found_vservers" else if ! vservers_exist "$vsnames" ; then - fatal "At least one of the vservers listed in vsnames ($vsnames) does not exist." + fatal "At least one of the vservers listed in vsnames ($vsnames) does not exist." 
fi fi if [ -n "$vsinclude" ]; then @@ -128,40 +135,40 @@ fi # check the connection at the source and destination [ -n "$test" ] || test=0 if [ "$testconnect" = "yes" ] || [ "${test}" -eq 1 ]; then - test_connection $sourceuser $sourcehost - test_connection $destuser $desthost + test_connection $sourceuser $sourcehost + test_connection $destuser $desthost fi if [ "$ignore_version" != "yes" ]; then - # see that rdiff-backup has the same version at the source and destination - sourceversion=`get_version $sourceuser $sourcehost` - destversion=`get_version $destuser $desthost` - if [ "$sourceversion" != "$destversion" ]; then - fatal "rdiff-backup does not have the same version at the source and at the destination." - fi + # see that rdiff-backup has the same version at the source and destination + sourceversion=`get_version $sourceuser $sourcehost` + destversion=`get_version $destuser $desthost` + if [ "$sourceversion" != "$destversion" ]; then + fatal "rdiff-backup does not have the same version at the source and at the destination." + fi fi # source specific checks -case $sourcetype in - remote ) execstr_sourcepart="$sourceuser@$sourcehost::/" ;; - local ) execstr_sourcepart="/" ;; - * ) fatal "sourcetype '$sourcetype' is neither local nor remote" ;; +case $sourcetype in + remote ) execstr_sourcepart="$sourceuser@$sourcehost::/" ;; + local ) execstr_sourcepart="/" ;; + * ) fatal "sourcetype '$sourcetype' is neither local nor remote" ;; esac # destination specific checks [ "$destdir" != "" ] || fatal "Destination directory not set" -case $desttype in - remote ) execstr_destpart="$destuser@$desthost::$destdir/$label" ;; - local ) execstr_destpart="$destdir/$label" ;; - * ) fatal "desttype '$desttype' is neither local nor remote" ;; +case $desttype in + remote ) execstr_destpart="$destuser@$desthost::$destdir/$label" ;; + local ) execstr_destpart="$destdir/$label" ;; + * ) fatal "desttype '$desttype' is neither local nor remote" ;; esac - + ### REMOVE OLD BACKUPS ### if [ "$keep" != yes ]; then if [ "`echo $keep | tr -d 0-9`" == "" ]; then - # add D if no other date unit is specified + # add D if no other date unit is specified keep="${keep}D" fi @@ -175,27 +182,27 @@ if [ "$keep" != yes ]; then if [ $test = 0 ]; then output="`su -c "$removestr" 2>&1`" if [ $? = 0 ]; then - debug $output - info "Removing backups older than $keep days succeeded." + debug $output + info "Removing backups older than $keep days succeeded." else - warning $output - warning "Failed removing backups older than $keep." + warning $output + warning "Failed removing backups older than $keep." fi fi fi -# Add cstream +# Add cstream if [ ! -z $bwlimit ]; then - check_cstream $CSTREAM; - if [ "$desttype" = "remote" ]; then - RDIFFBACKUP="$RDIFFBACKUP --remote-schema 'cstream -t $bwlimit | ssh %s \''rdiff-backup --server\'''" - elif [ "$sourcetype" = "remote" ]; then - RDIFFBACKUP="$RDIFFBACKUP --remote-schema 'ssh %s \''rdiff-backup --server\'' | cstream -t $bwlimit'" - else - fatal "You specified a bandwidth limit but neither your source nor destination types are remote." - fi + check_cstream $CSTREAM; + if [ "$desttype" = "remote" ]; then + RDIFFBACKUP="$RDIFFBACKUP --remote-schema 'cstream -t $bwlimit | ssh %s \''rdiff-backup --server\'''" + elif [ "$sourcetype" = "remote" ]; then + RDIFFBACKUP="$RDIFFBACKUP --remote-schema 'ssh %s \''rdiff-backup --server\'' | cstream -t $bwlimit'" + else + fatal "You specified a bandwidth limit but neither your source nor destination types are remote." 
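# [Illustrative sketch, not part of this patch] How the bwlimit wiring above
# throttles a push to a remote destination: cstream -t caps the stream feeding
# ssh at the given number of bytes per second. Rate, host and paths are
# placeholders.
rdiff-backup \
   --remote-schema "cstream -t 512000 | ssh -C %s 'rdiff-backup --server'" \
   /etc alice@backup.example.org::/srv/rdiff/etc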
+ fi fi ### EXECUTE ### @@ -208,29 +215,38 @@ symlinks_warning="Maybe you have mixed symlinks and '*' in this statement, which # TODO: order the includes and excludes # excludes +SAVEIFS=$IFS +IFS=$(echo -en "\n\b") for i in $exclude; do str="${i//__star__/*}" execstr="${execstr}--exclude '$str' " done -# includes +IFS=$SAVEIFS +# includes +SAVEIFS=$IFS +IFS=$(echo -en "\n\b") for i in $include; do [ "$i" != "/" ] || fatal "Sorry, you cannot use 'include = /'" str="${i//__star__/*}" execstr="${execstr}--include '$str' " done +IFS=$SAVEIFS # vsinclude if [ $usevserver = yes ]; then for vserver in $vsnames; do + SAVEIFS=$IFS + IFS=$(echo -en "\n\b") for vi in $vsinclude; do - str="${vi//__star__/*}" - str="$VROOTDIR/$vserver$str" + str="${vi//__star__/*}" + str="$VROOTDIR/$vserver$str" if [ -n "$str" ]; then - execstr="${execstr}--include '$str' " + execstr="${execstr}--include '$str' " else warning "vsinclude statement '${vi//__star__/*}' will be ignored for VServer $vserver. $symlinks_warning" fi done + IFS=$SAVEIFS done fi @@ -238,20 +254,20 @@ set +o noglob # exclude everything else [ "$include" != "" -o "$vsinclude" != "" ] && execstr="${execstr}--exclude '/*' " - + # include client-part and server-part execstr="${execstr}$execstr_sourcepart $execstr_destpart" debug "$execstr" if [ $test = 0 ]; then - output=`nice -n $nicelevel su -c "$execstr" 2>&1` - if [ $? = 0 ]; then - debug $output - info "Successfully finished backing up source $label" - else - warning $output - warning "Failed backup up source $label" - fi -fi + output=`nice -n $nicelevel su -c "$execstr" 2>&1` + if [ $? = 0 ]; then + debug $output + info "Successfully finished backing up source $label" + else + warning $output + warning "Failed backup up source $label" + fi +fi return 0 diff --git a/handlers/rsync.in b/handlers/rsync.in index de746d5..8f638d7 100644 --- a/handlers/rsync.in +++ b/handlers/rsync.in @@ -1,13 +1,29 @@ +# -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # -# backupninja handler to do incremental backups using -# rsync and hardlinks, based on +# backupninja handler for incremental backups using rsync and hardlinks +# feedback: rhatto at riseup.net +# +# rsync handler is free software; you can redistribute it and/or modify it +# under the terms of the GNU General Public License as published by the Free +# Software Foundation; either version 2 of the License, or any later version. +# +# rsync handler is distributed in the hope that it will be useful, but WITHOUT +# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or +# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for +# more details. 
+# +# You should have received a copy of the GNU General Public License along with +# this program; if not, write to the Free Software Foundation, Inc., 59 Temple +# Place - Suite 330, Boston, MA 02111-1307, USA +# +# Inspiration +# ----------- # # - http://www.mikerubel.org/computers/rsync_snapshots/ # - rsnap handler by paulv at bikkel.org # - maildir handler from backupninja # -# feedback: rhatto at riseup.net | gpl -# # Config file options # ------------------- # @@ -292,7 +308,7 @@ function rotate_short { local keep="$2" local metadata="`dirname $folder`/metadata" - if [[ "$keep" < 4 ]]; then + if [[ "$keep" -lt 4 ]]; then error "Rotate: minimum of 4 rotations" exit 1 fi @@ -327,7 +343,7 @@ function rotate_short_remote { local metadata="`dirname $folder`/metadata" local keep="$2" - if [[ "$2" < 4 ]]; then + if [[ "$2" -lt 4 ]]; then error "Rotate: minimum of 4 rotations" exit 1 fi diff --git a/handlers/sh.in b/handlers/sh.in index f9f1926..b070f3b 100644 --- a/handlers/sh.in +++ b/handlers/sh.in @@ -1,7 +1,8 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # shell script handler for backupninja # runs the file /etc/backup.d/scriptname.sh -# +# [ $test ] || ( . $1 ) diff --git a/handlers/svn.in b/handlers/svn.in index d19b0b8..5e5531a 100644 --- a/handlers/svn.in +++ b/handlers/svn.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # this handler will backup subversion repostitories. # @@ -7,7 +8,7 @@ getconf src /var/lib/svn getconf dest /var/backups/svn getconf tmp /var/backups/svn.tmp getconf HOTBACKUP "/usr/bin/svnadmin hotcopy" -getconf vsname +getconf vsname error=0 @@ -35,43 +36,43 @@ fi cd $vroot$src for repo in `find . -name svnserve.conf` do - repo=`dirname $repo` - repo=`dirname $repo` + repo=`dirname $repo` + repo=`dirname $repo` - ret=`mkdir -p $vroot$tmp/$repo 2>&1` - code=$? - if [ "$ret" ]; then - debug "$ret" - fi - if [ $code != 0 ]; then - error "command failed mkdir -p $vroot$tmp/$repo" - fi + ret=`mkdir -p $vroot$tmp/$repo 2>&1` + code=$? + if [ "$ret" ]; then + debug "$ret" + fi + if [ $code != 0 ]; then + error "command failed mkdir -p $vroot$tmp/$repo" + fi - if [ $usevserver = yes ] - then - ret=`$VSERVER $vsname exec $HOTBACKUP $src/$repo $tmp/$repo 2>&1` - else - ret=`$HOTBACKUP $src/$repo $tmp/$repo 2>&1` - fi - code=$? - if [ "$ret" ]; then - debug "$ret" - fi - if [ $code != 0 ]; then - error "command failed -- $HOTBACKUP $vroot$src/$repo $vroot$tmp/$repo" - error=1 - fi + if [ $usevserver = yes ] + then + ret=`$VSERVER $vsname exec $HOTBACKUP $src/$repo $tmp/$repo 2>&1` + else + ret=`$HOTBACKUP $src/$repo $tmp/$repo 2>&1` + fi + code=$? 
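# [Illustrative sketch, not part of this patch] "svnadmin hotcopy" is used
# above because it takes a consistent copy of a live repository, which a plain
# recursive copy of the repository files does not guarantee; paths are
# placeholders.
svnadmin hotcopy /var/lib/svn/examplerepo /var/backups/svn.tmp/examplerepo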
+ if [ "$ret" ]; then + debug "$ret" + fi + if [ $code != 0 ]; then + error "command failed -- $HOTBACKUP $vroot$src/$repo $vroot$tmp/$repo" + error=1 + fi done if [ $error -eq 1 ]; then - echo "Error: because of earlier errors, we are leaving svn backups in $vroot$tmp instead of $vroot$dest" + echo "Error: because of earlier errors, we are leaving svn backups in $vroot$tmp instead of $vroot$dest" else - if [ -d $vroot$dest -a -d $vroot$tmp ]; then - rm -rf $vroot$dest - fi - if [ -d $vroot$tmp ]; then - mv $vroot$tmp $vroot$dest - fi + if [ -d $vroot$dest -a -d $vroot$tmp ]; then + rm -rf $vroot$dest + fi + if [ -d $vroot$tmp ]; then + mv $vroot$tmp $vroot$dest + fi fi exit 0 diff --git a/handlers/sys.helper.in b/handlers/sys.helper.in index d3d99a5..8a2fb07 100644 --- a/handlers/sys.helper.in +++ b/handlers/sys.helper.in @@ -1,26 +1,33 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS sys:general_hardware_and_system_info" sys_wizard() { require_packages hwinfo debconf-utils checkBox "new sys action" "check options" \ - "packages" "list of all installed packages." on \ - "partitions" "the partition table of all disks." on \ - "sfdisk" "use sfdisk to get partition information." on \ - "hardware" "detailed hardware information" on - [ $? = 1 ] && return; + "packages" "list of all installed packages." on \ + "partitions" "the partition table of all disks." on \ + "sfdisk" "use sfdisk to get partition information." on \ + "hardware" "detailed hardware information" on \ + "luksheaders" "Luks headers of all Luks partitions." off \ + "lvm" "LVM metadata for all volume groups." off + [ $? = 1 ] && return; result="$REPLY" packages="packages = no" partitions="partitions = no" sfdisk="dosfdisk = no" hardware="hardware = no" + luksheaders="luksheaders = no" + lvm="lvm = no" for opt in $result; do case $opt in - '"packages"') packages="packages = yes";; - '"partitions"') partitions="partitions = yes";; - '"sfdisk"') sfdisk="dosfdisk = yes";; - '"hardware"') hardware="hardware = yes";; + '"packages"') packages="packages = yes";; + '"partitions"') partitions="partitions = yes";; + '"sfdisk"') sfdisk="dosfdisk = yes";; + '"hardware"') hardware="hardware = yes";; + '"luksheaders"') luksheaders="luksheaders = yes";; + '"lvm"') lvm="lvm = yes";; esac done get_next_filename $configdirectory/10.sys @@ -29,10 +36,14 @@ $packages $partitions $sfdisk $hardware +$luksheaders +$lvm + # packagesfile = /var/backups/dpkg-selections.txt # selectionsfile = /var/backups/debconfsel.txt # partitionsfile = /var/backups/partitions.__star__.txt # hardwarefile = /var/backups/hardware.txt +# luksheadersfile = /var/backups/luksheader.__star__.bin # If vservers = yes in /etc/backupninja.conf then the following variables can # be used: diff --git a/handlers/sys.in b/handlers/sys.in index f293840..69751ed 100755 --- a/handlers/sys.in +++ b/handlers/sys.in @@ -1,7 +1,8 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # this handler will save various reports of vital system information. -# by default, all the reports are enabled and are saved in /var/backups. +# by default, all the reports are saved in /var/backups. # # (1) a capture of the debconf package selection states. This file # can be used to restore the answers to debconf questions for @@ -11,19 +12,27 @@ # (2) a list of all the packages installed and removed. 
# this file can be used to restore the state of installed packages # by running "dpkg --set-selections < dpkg-selections.txt and -# then run "apt-get -u dselect-upgrade". If you have the +# then run "apt-get -u dselect-upgrade". If you have the # debconf-set-selections file from (1), you should restore those first. -# -# (3) the partition table of all disks. +# +# (3) the partition table of all disks. # this partition table can be used to format another disk of -# the same size. this can be handy if using software raid and +# the same size. this can be handy if using software raid and # you have a disk go bad. just replace the disk and partition it # by running "sfdisk /dev/sdb < partitions.sdb.txt" # (MAKE SURE YOU PARTITION THE CORRECT DISK!!!) # -# (4) hardware information. +# (4) hardware information. # write to a text file the important things which hwinfo can gleen. # +# (5) the Luks header of every Luks block device, if option luksheaders +# is enabled. +# in case you (have to) scramble such a Luks header (for some time), +# and restore it later by running "dd if=luksheader.sda2.bin of=/dev/sda2" +# (MAKE SURE YOU PASS THE CORRECT DEVICE AS of= !!!) +# +# (6) LVM metadata for every detected volume group, if "lvm = yes" +# if [ -f /etc/debian_version ] then @@ -47,7 +56,7 @@ getconf dohwinfo yes if [ ! -d $parentdir ]; then mkdir -p $parentdir fi - + if [ $os = "debian" ] then getconf packagesfile $parentdir/dpkg-selections.txt @@ -57,9 +66,9 @@ then getconf debconfgetselections `which debconf-get-selections` elif [ $os = "redhat" ] then - getconf packagesfile $parentdir/rpmpackages.txt - getconf packagemgr `which rpm` - getconf packagemgroptions ' -qa ' + getconf packagesfile $parentdir/rpmpackages.txt + getconf packagemgr `which rpm` + getconf packagemgroptions ' -qa ' getconf SYSREPORT `which sysreport` getconf sysreport_options ' -norpm ' @@ -82,6 +91,15 @@ getconf HWINFO `which hwinfo` getconf sfdisk_options "" getconf hwinfo_options "" +getconf CRYPTSETUP `which cryptsetup` +getconf DD `which dd` +getconf luksheaders no +getconf luksheadersfile $parentdir/luksheader.__star__.bin + +getconf VGS `which vgs` +getconf VGCFGBACKUP `which vgcfgbackup` +getconf lvm no + getconf vsnames all # If vservers are configured, check that the ones listed in $vsnames are running. @@ -97,6 +115,30 @@ if [ $vservers_are_available = yes ]; then usevserver=yes fi +## SANITY CHECKS ######################### + +if [ "$luksheaders" == "yes" ]; then + if [ ! -x "$DD" ]; then + warning "can't find dd, skipping backup of Luks headers." + luksheaders="no" + fi + if [ ! -x "$CRYPTSETUP" ]; then + warning "can't find cryptsetup, skipping backup of Luks headers." + luksheaders="no" + fi +fi + +if [ "$lvm" == "yes" ]; then + if [ ! -x "$VGS" ]; then + warning "can't find vgs, skipping backup of LVM metadata" + lvm="no" + fi + if [ ! 
-x "$VGCFGBACKUP" ]; then + warning "can't find vgcfgbackup, skipping backup of LVM metadata" + lvm="no" + fi +fi + ## PACKAGES ############################## # @@ -122,7 +164,7 @@ if [ "$packages" == "yes" ]; then # don't expand * since it can be used in $packagemgroptions set -o noglob debug "$VSERVER $vserver exec $packagemgr $packagemgroptions > $VROOTDIR/$vserver$packagesfile" - $VSERVER $vserver exec $packagemgr $packagemgroptions > $VROOTDIR/$vserver$packagesfile || fatal "can not save $packagemgr info to $packagesfile" + $VSERVER $vserver exec $packagemgr $packagemgroptions > $VROOTDIR/$vserver$packagesfile || fatal "can not save $packagemgr info to $packagesfile" set +o noglob fi # is $debconfgetselections available inside $vserver ? @@ -143,9 +185,9 @@ if [ "$packages" == "yes" ]; then unset found done fi - + # We want to perform this on the host as well - if [ -z "$packagemgr" -o ! -x "$packagemgr" ]; then + if [ -z "$packagemgr" -o ! -x "$packagemgr" ]; then warning "can't find ${packagemgr}, skipping installed packages report." else # don't expand * since it can be used in $packagemgroptions @@ -194,7 +236,7 @@ catiffile () { done fi echo $DASHES >> $sysreportfile -} +} catifexec () { if [ -x $1 ]; then @@ -204,12 +246,12 @@ catifexec () { $* >> $sysreportfile 2>&1 || info "executing of $1 failed" fi } - + STATUS="Determining $os version:" catiffile $osversion -STATUS="Determinding your current hostname: " +STATUS="Determinding your current hostname: " catifexec "/bin/hostname" STATUS="Getting the date:" @@ -231,17 +273,17 @@ if [ $os = "redhat" ]; then catiffile "/bin/ls /etc/rc.d/rc*.d/" elif [ $os = "debian" ]; then - for level in 0 1 2 3 4 5 6 S; do - echo "Level: $level" >> $sysreportfile - for f in /etc/rc${level}.d/*; do - # Remove /etc/Knn or Snn from beginning - ff=$(echo $f | @SED@ 's_/etc/rc..d/[KS][0-9][0-9]__') - if [ $f != $ff ]; then - echo $ff >> $sysreportfile - fi - done - echo "" >> $sysreportfile - done + for level in 0 1 2 3 4 5 6 S; do + echo "Level: $level" >> $sysreportfile + for f in /etc/rc${level}.d/*; do + # Remove /etc/Knn or Snn from beginning + ff=$(echo $f | @SED@ 's_/etc/rc..d/[KS][0-9][0-9]__') + if [ $f != $ff ]; then + echo $ff >> $sysreportfile + fi + done + echo "" >> $sysreportfile + done fi STATUS="Getting bootloader information:" @@ -250,34 +292,34 @@ catifexec "/bin/ls" "-alR /boot" # This covers sparc, alpha, and intel (respectively) # updated for grub -mpg if [ -f /etc/silo.conf ]; then - STATUS="Collecting information about the boot process (silo):" - catiffile "/etc/silo.conf" + STATUS="Collecting information about the boot process (silo):" + catiffile "/etc/silo.conf" fi if [ -f /etc/milo.conf ]; then - STATUS="Collecting information about the boot process (milo):" - catiffile "/etc/milo.conf" + STATUS="Collecting information about the boot process (milo):" + catiffile "/etc/milo.conf" fi if [ -f /etc/lilo.conf ]; then - STATUS="Collecting information about the boot process (lilo):" - catiffile "/etc/lilo.conf" - catifexec "/sbin/lilo" "-q" + STATUS="Collecting information about the boot process (lilo):" + catiffile "/etc/lilo.conf" + catifexec "/sbin/lilo" "-q" fi if [ -d /boot/grub -a -f /boot/grub/grub.conf -a -f /boot/grub/device.map ]; then - STATUS="Collecting information about the boot process (grub.conf):" - catiffile "/boot/grub/grub.conf" - STATUS="Collecting information about the boot process (grub.map):" - catiffile "/boot/grub/device.map" + STATUS="Collecting information about the boot process (grub.conf):" + 
catiffile "/boot/grub/grub.conf" + STATUS="Collecting information about the boot process (grub.map):" + catiffile "/boot/grub/device.map" fi if [ -f /etc/cluster.conf -o -f /etc/cluster.xml ] ; then - STATUS="Gathering information on cluster setup" - # 2.1 AS - if [ -f /etc/cluster.conf ] ; then - catiffile "/etc/cluster.conf" - fi - # Taroon - if [ -f /etc/cluster.xml ] ; then - catiffile "/etc/cluster.xml" - fi + STATUS="Gathering information on cluster setup" + # 2.1 AS + if [ -f /etc/cluster.conf ] ; then + catiffile "/etc/cluster.conf" + fi + # Taroon + if [ -f /etc/cluster.xml ] ; then + catiffile "/etc/cluster.xml" + fi fi STATUS="Gathering sysctl information (sysctl -a):" @@ -308,10 +350,10 @@ STATUS="Getting kernel version:" catifexec "/bin/uname" "-a" STATUS="Checking module information:" catifexec "/sbin/lsmod" -for x in $(/sbin/lsmod | /bin/cut -f1 -d" " 2>/dev/null | /bin/grep -v Module 2>/dev/null +for x in $(/sbin/lsmod | /bin/cut -f1 -d" " 2>/dev/null | /bin/grep -v Module 2>/dev/null ) ; do - STATUS="Checking module information $x:" - catifexec "/sbin/modinfo" "$x" + STATUS="Checking module information $x:" + catifexec "/sbin/modinfo" "$x" done STATUS="Gathering information about your filesystems:" @@ -334,27 +376,27 @@ STATUS="Collecting information regarding kernel modules" VER=`uname -r` catiffile "/lib/modules/$VER/modules.dep" if [ -f /etc/conf.modules ]; then - STATUS="Collecting information regarding kernel modules (conf.modules)" - catiffile "/etc/conf.modules" + STATUS="Collecting information regarding kernel modules (conf.modules)" + catiffile "/etc/conf.modules" fi if [ -f /etc/modules.conf ]; then - STATUS="Collecting information regarding kernel modules (modules.conf)" - catiffile "/etc/modules.conf" + STATUS="Collecting information regarding kernel modules (modules.conf)" + catiffile "/etc/modules.conf" fi if [ -f /etc/modprobe.conf ]; then - STATUS="Collecting information regarding kernel modules (modeprobe.conf)" - catiffile "/etc/modprobe.conf" + STATUS="Collecting information regarding kernel modules (modeprobe.conf)" + catiffile "/etc/modprobe.conf" fi # dkms status if [ -x /usr/sbin/dkms ] ; then STATUS="Gathering current status of modules, versions and kernels (dkms):" - catifexec "/usr/sbin/dkms" "status" + catifexec "/usr/sbin/dkms" "status" fi if [ -f /etc/sysconfig/isdncard ] ; then - STATUS="Gathering information about ISDN:" - catiffile "/etc/sysconfig/isdncard" + STATUS="Gathering information about ISDN:" + catiffile "/etc/sysconfig/isdncard" fi STATUS="Collecting information from the proc directory:" @@ -437,18 +479,21 @@ if [ $os = "redhat" ]; then elif [ $os = "debian" ]; then catifexec "/sbin/vgdisplay" "-vv" fi - + +STATUS="Collecting device-mapper (dm) information:" +catifexec '/sbin/dmsetup' 'info' + STATUS="Collecting SCSI Tape information (/etc/stinit.def)" catiffile "/etc/stinit.def" if [ -x /sbin/lsusb ] ; then - STATUS="Collecting USB devices list (lsusb):" - catifexec "/sbin/lsusb" + STATUS="Collecting USB devices list (lsusb):" + catifexec "/sbin/lsusb" fi if [ -x /usr/bin/lshal ] ; then - STATUS="Collecting global devices list (lshal):" - catifexec "/usr/bin/lshal" + STATUS="Collecting global devices list (lshal):" + catifexec "/usr/bin/lshal" fi @@ -467,24 +512,24 @@ fi if [ "$partitions" == "yes" ]; then if [ "$dosfdisk" == "yes" ]; then - if [ ! -x "$SFDISK" ]; then - warning "can't find sfdisk, skipping sfdisk report." - partitions="no" - fi + if [ ! -x "$SFDISK" ]; then + warning "can't find sfdisk, skipping sfdisk report." 
+ partitions="no" + fi fi if [ "$dohwinfo" == "yes" ]; then - if [ ! -x "$HWINFO" ]; then - warning "can't find hwinfo, skipping partition report." - partitions="no" - fi + if [ ! -x "$HWINFO" ]; then + warning "can't find hwinfo, skipping partition report." + partitions="no" + fi fi fi if [ "$hardware" == "yes" ]; then - if [ ! -x "$HWINFO" ]; then - warning "can't find hwinfo, skipping hardware report." - hardware="no" - fi + if [ ! -x "$HWINFO" ]; then + warning "can't find hwinfo, skipping hardware report." + hardware="no" + fi fi ## HARDWARE ############################# @@ -492,48 +537,47 @@ fi # # here we use hwinfo to dump a table listing all the # information we can find on the hardware of this machine -# +# if [ "$hardware" == "yes" ]; then if [ "dohwinfo" == "yes" ]; then if [ -f $hardwarefile ]; then - rm $hardwarefile + rm $hardwarefile fi touch $hardwarefile echo -e "\n\n====================== summary ======================\n" >> $hardwarefile debug "$HWINFO --short --cpu --network --disk --pci >> $hardwarefile" $HWINFO --short --cpu --network --disk --pci >> $hardwarefile for flag in cpu network bios pci; do - echo -e "\n\n====================== $flag ======================\n" >> $hardwarefile - $HWINFO --$flag >> $hardwarefile + echo -e "\n\n====================== $flag ======================\n" >> $hardwarefile + $HWINFO --$flag >> $hardwarefile done fi fi - ## PARTITIONS ############################# -# here we use sfdisk to dump a listing of all the partitions. +# here we use sfdisk to dump a listing of all the partitions. # these files can be used to directly partition a disk of the same size. if [ "$partitions" == "yes" ]; then if [ "$dosfdisk" == "yes" ]; then devices=`LC_ALL=C $SFDISK -l 2>/dev/null | grep "^Disk /dev" | @AWK@ '{print $2}' | cut -d: -f1` - if [ "$devices" == "" ]; then - warning "No harddisks found" - fi - for dev in $devices; do - debug "$SFDISK will try to backup partition tables for device $dev" - [ -b $dev ] || continue - label=${dev#/dev/} - label=${label//\//-} - outputfile=${partitionsfile//__star__/$label} - debug "$SFDISK $sfdisk_options -d $dev > $outputfile 2>/dev/null" - $SFDISK $sfdisk_options -d $dev > $outputfile 2>/dev/null - if [ $? -ne 0 ]; then - warning "The partition table for $dev could not be saved." - fi - done + if [ "$devices" == "" ]; then + warning "No harddisks found" + fi + for dev in $devices; do + debug "$SFDISK will try to backup partition tables for device $dev" + [ -b $dev ] || continue + label=${dev#/dev/} + label=${label//\//-} + outputfile=${partitionsfile//__star__/$label} + debug "$SFDISK $sfdisk_options -d $dev > $outputfile 2>/dev/null" + $SFDISK $sfdisk_options -d $dev > $outputfile 2>/dev/null + if [ $? -ne 0 ]; then + warning "The partition table for $dev could not be saved." + fi + done fi if [ "$dohwinfo" == "yes" ]; then debug "Using $HWINFO to get all available disk information" @@ -541,3 +585,99 @@ if [ "$partitions" == "yes" ]; then $HWINFO --disk >> $hardwarefile fi fi + +if [ "$luksheaders" == "yes" ]; then + devices=`LC_ALL=C $SFDISK -l 2>/dev/null | grep "^Disk /dev" | @AWK@ '{print $2}' | cut -d: -f1` + [ -n "$devices" ] || warning "No block device found" + targetdevices="" + for dev in $devices; do + [ -b $dev ] || continue + debug "$CRYPTSETUP isLuks $dev" + $CRYPTSETUP isLuks $dev + [ $? 
-eq 0 ] && targetdevices="$targetdevices $dev" + done + for dev in $targetdevices; do + label=${dev#/dev/} + label=${label//\//-} + outputfile=${luksheadersfile//__star__/$label} + # the following sizes are expressed in terms of 512-byte sectors + debug "Let us find out the Luks header size for $dev" + debug "$CRYPTSETUP luksDump \"$dev\" | grep '^Payload offset:' | @AWK@ '{print $3}'" + headersize=`$CRYPTSETUP luksDump "$dev" | grep '^Payload offset:' | @AWK@ '{print $3}'` + if [ $? -ne 0 ]; then + warning "Could not compute the size of Luks header, skipping device $dev" + continue + elif [ -z "$headersize" -o -n "`echo \"$headersize\" | sed 's/[0-9]*//g'`" ]; then + warning "The computed size of Luks header is not an integer, skipping device $dev" + continue + fi + debug "Let us backup the Luks header of device $dev" + debug "$DD if=\"${dev}\" of=\"${outputfile}\" bs=512 count=\"${headersize}\"" + output=`$DD if="${dev}" of="${outputfile}" bs=512 count="${headersize}" 2>&1` + exit_code=$? + if [ $exit_code -eq 0 ]; then + debug $output + info "The Luks header of $dev was saved to $outputfile." + else + debug $output + fatal "The Luks header of $dev could not be saved." + fi + done +fi + +## LVM #################################### + +# returns 0 on success, 1 on error, 2 if not tried +# outputs error message if error, reason if not tried +function doLvmBackup () { + local lvmdir="$1" + if [ ! -d "$lvmdir" ]; then + if ! mkdir "$lvmdir"; then + echo "could not create $lvmdir" + return 2 + else + info "successfully created $lvmdir" + fi + fi + if [ ! -w "$lvmdir" ]; then + echo "can not write to directory $lvmdir" + return 2 + fi + debug "Let's try to gather the list of LVM volume groups" + debug "$VGS --options vg_name --noheadings | @SED@ 's/^[ ]*//' | @SED@ 's/[ ]*$//' | tr '\n' ' '" + vgs=`$VGS --options vg_name --noheadings | @SED@ 's/^[ ]*//' | @SED@ 's/[ ]*$//' | tr '\n' ' '` + debug "Let's try to backup LVM metadata for detected volume groups: $vgs" + debug "$VGCFGBACKUP --file \"${lvmdir}\"/\'%s\' $vgs" + output=`$VGCFGBACKUP --file "${lvmdir}"/'%s' $vgs` + exit_code=$? + debug $output + case $exit_code in + 0) + info "LVM metadata was saved to $lvmdir for volume groups: $vgs" + return 0 + ;; + *) + echo "LVM metadata could not be saved for at least one of these volume groups: $vgs" + return 1 + ;; + esac +} + +if [ "$lvm" == "yes" ]; then + output=`doLvmBackup "${parentdir}/lvm"` + exit_code=$? + case $exit_code in + 0) # success. info message has already been displayed + true + ;; + 1) # error + fatal "$output" + ;; + 2) # could not even try + fatal "LVM metadata backup was not tried: $output" + ;; + *) # should never happen + fatal "Unhandled error ($exit_code) while trying to backup LVM metadata, please report a bug" + ;; + esac +fi diff --git a/handlers/tar.helper.in b/handlers/tar.helper.in index cc9a89b..cdbe03a 100644 --- a/handlers/tar.helper.in +++ b/handlers/tar.helper.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: HELPERS="$HELPERS tar:tar_backup" @@ -14,7 +15,7 @@ tar_wizard() { inputBox "$tar_title" "\"Name\" of backups" "$backupname" [ $? = 1 ] && return tar_backupname="backupname = $REPLY" - backupname="$REPLY" + backupname="$REPLY" inputBox "$tar_title" "Directory where to store the backups" "/net/backups/$backupname" [ $? 
= 1 ] && return @@ -24,8 +25,8 @@ tar_wizard() { "none" "do not filter trough" off \ "compress" "filter trough compress" off \ "gzip" "filter trough gzip" off \ - "bzip" "filter trough bzip" on - [ $? = 1 ] && return; + "bzip" "filter trough bzip" on + [ $? = 1 ] && return; result="$REPLY" tar_compress="compress = $REPLY " @@ -47,14 +48,14 @@ tar_wizard() { [ $? = 0 ] || return 1 tar_includes="includes = " for i in $REPLY; do - [ -n "$i" ] && tar_includes="$tar_includes $i" + [ -n "$i" ] && tar_includes="$tar_includes $i" done done REPLY= while [ -z "$REPLY" ]; do formBegin "$tar_title: Excludes" - formItem "Exclude:" /tmp + formItem "Exclude:" /tmp formItem "Exclude:" /proc formItem "Exclude:" /sys formItem "Exclude:" /dev @@ -63,17 +64,17 @@ tar_wizard() { formItem "Exclude:" /misc formItem "Exclude:" /net formItem "Exclude:" /selinux - formItem "Exclude:" + formItem "Exclude:" formItem "Exclude:" formDisplay [ $? = 0 ] || return 1 tar_excludes="excludes = " for i in $REPLY; do - [ -n "$i" ] && tar_excludes="$tar_excludes $i" + [ -n "$i" ] && tar_excludes="$tar_excludes $i" done done -# Save the config +# Save the config get_next_filename $configdirectory/10.tar cat > $next_filename <<EOF $tar_when_run @@ -85,8 +86,8 @@ $tar_excludes # tar binary - have to be GNU tar #TAR=/bin/tar -#DATE /bin/date -#DATEFORMAT "%Y.%m.%d-%H%M" +#DATE /bin/date +#DATEFORMAT "%Y.%m.%d-%H%M" EOF chmod 600 $next_filename diff --git a/handlers/tar.in b/handlers/tar.in index 7f0d147..b4f8c58 100644 --- a/handlers/tar.in +++ b/handlers/tar.in @@ -1,17 +1,18 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # tar handler script for backupninja -getconf backupname `hostname --fqdn` -getconf backupdir /var/backups/`hostname --fqdn` -getconf compress bzip -getconf includes "/etc /home /usr/local" -getconf excludes "/tmp /proc /dev /sys /net /misc /media /srv /selinux" +getconf backupname `hostname --fqdn` +getconf backupdir /var/backups/`hostname --fqdn` +getconf compress bzip +getconf includes "/etc /home /usr/local" +getconf excludes "/tmp /proc /dev /sys /net /misc /media /srv /selinux" -getconf TAR `which tar` -getconf EXTENSION tar -getconf DATE `which date` -getconf DATEFORMAT "%Y.%m.%d-%H%M" +getconf TAR `which tar` +getconf EXTENSION tar +getconf DATE `which date` +getconf DATEFORMAT "%Y.%m.%d-%H%M" # See if vservers are configured if [ "$vservers" = "yes" ] @@ -65,7 +66,7 @@ done debug "Running backup: " $TAR -c -p -v $compress_option $exclude_options \ -f "$backupdir/$backupname-"`$DATE "+$DATEFORMAT"`".$EXTENSION" \ - $includes + $includes $TAR -c -p -v $compress_option $exclude_options \ -f "$backupdir/$backupname-"`$DATE "+$DATEFORMAT"`".$EXTENSION" \ @@ -73,7 +74,7 @@ $TAR -c -p -v $compress_option $exclude_options \ > "$backupdir/$backupname-"`$DATE "+$DATEFORMAT"`.list \ 2> "$backupdir/$backupname-"`$DATE "+$DATEFORMAT"`.err -[ $? -ne 0 ] && fatal "Tar backup failed" +[ $? -ne 0 ] && fatal "Tar backup failed" diff --git a/handlers/trac.in b/handlers/trac.in index a4b7bdf..018bffd 100644 --- a/handlers/trac.in +++ b/handlers/trac.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # this handler will backup trac environments (based on the svn handler) # @@ -12,39 +13,38 @@ getconf tmp /var/backups/trac.tmp cd $src for repo in `find . -name VERSION` do - - repo=`dirname $repo` - if [ "$repo" == "." 
] - then - repo="" - fi - - # Just make the parent directory for $tmp/$repo - parentdir=`dirname $tmp/$repo` - ret=`mkdir -p $parentdir 2>&1` - code=$? - if [ "$ret" ]; then - debug "$ret" - fi - if [ $code != 0 ]; then - error "command failed mkdir -p $parentdir" - fi - - ret=`trac-admin $src/$repo hotcopy $tmp/$repo 2>&1` - code=$? - if [ "$ret" ]; then - debug "$ret" - fi - if [ $code != 0 ]; then - error "command failed -- trac-admin $src/$repo hotcopy $tmp/$repo" - fi + repo=`dirname $repo` + if [ "$repo" == "." ] + then + repo="" + fi + + # Just make the parent directory for $tmp/$repo + parentdir=`dirname $tmp/$repo` + ret=`mkdir -p $parentdir 2>&1` + code=$? + if [ "$ret" ]; then + debug "$ret" + fi + if [ $code != 0 ]; then + error "command failed mkdir -p $parentdir" + fi + + ret=`trac-admin $src/$repo hotcopy $tmp/$repo 2>&1` + code=$? + if [ "$ret" ]; then + debug "$ret" + fi + if [ $code != 0 ]; then + error "command failed -- trac-admin $src/$repo hotcopy $tmp/$repo" + fi done if [ -d $dest -a -d $tmp ]; then - rm -rf $dest + rm -rf $dest fi if [ -d $tmp ]; then - mv $tmp $dest + mv $tmp $dest fi exit 0 diff --git a/handlers/wget b/handlers/wget index 51054ea..ebb391e 100644 --- a/handlers/wget +++ b/handlers/wget @@ -1,3 +1,5 @@ +# -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # backupninja handler to do incremental backups using # wget and hardlinks, based on rsync handler @@ -77,84 +79,82 @@ getconf folder function rotate { - if [[ "$2" < 4 ]]; then - error "Rotate: minimum of 4 rotations" - exit 1 - fi + if [[ "$2" < 4 ]]; then + error "Rotate: minimum of 4 rotations" + exit 1 + fi - if [ -d $1.$2 ]; then - $nice $mv /$1.$2 /$1.tmp - fi + if [ -d $1.$2 ]; then + $nice $mv /$1.$2 /$1.tmp + fi - for ((n=`echo "$2 - 1" | bc`; n >= 0; n--)); do - if [ -d $1.$n ]; then - dest=`echo "$n + 1" | bc` - $nice $mv /$1.$n /$1.$dest - $touch /$1.$dest - fi - done + for ((n=`echo "$2 - 1" | bc`; n >= 0; n--)); do + if [ -d $1.$n ]; then + dest=`echo "$n + 1" | bc` + $nice $mv /$1.$n /$1.$dest + $touch /$1.$dest + fi + done - if [ -d $1.tmp ]; then - $nice $mv /$1.tmp /$1.0 - fi + if [ -d $1.tmp ]; then + $nice $mv /$1.tmp /$1.0 + fi - if [ -d $1.1 ]; then - $nice $cp -alf /$1.1/. /$1.0 - fi + if [ -d $1.1 ]; then + $nice $cp -alf /$1.1/. /$1.0 + fi } function move_files { - - ref=$tmp/makesnapshot-mymv-$$; - $touch -r $1 $ref; - $mv $1 $2; - $touch -r $ref $2; - $rm $ref; - + ref=$tmp/makesnapshot-mymv-$$; + $touch -r $1 $ref; + $mv $1 $2; + $touch -r $ref $2; + $rm $ref; } backupdir="$mountpoint/$backupdir" # does $backupdir exists? -if [ ! -d "$backupdir" ]; then - error "Backupdir $backupdir does not exist" - exit 1 +if [ ! -d "$backupdir" ]; then + error "Backupdir $backupdir does not exist" + exit 1 fi # setup number of increments if [ -z "$days" ]; then - keep="4" + keep="4" else - keep="`echo $days - 1 | bc -l`" + keep="`echo $days - 1 | bc -l`" fi # lockfile setup if [ ! -z "$lockfile" ]; then - $touch $lockfile || warning "Could not create lockfile $lockfile" + $touch $lockfile || warning "Could not create lockfile $lockfile" fi # nicelevel setup -if [ ! -z "$nicelevel" ]; then - nice="nice -n $nicelevel" -else - nice="" +if [ ! 
-z "$nicelevel" ]; then + nice="nice -n $nicelevel" +else + nice="" fi # set mv procedure if [ $enable_mv_timestamp_bug == "yes" ]; then - mv=move_files + mv=move_files fi # set excludes for path in $exclude; do - EXCLUDES="$EXCLUDES --exclude=$path" + EXCLUDES="$EXCLUDES --exclude=$path" done echo "Starting backup at `date`" >> $log @@ -162,19 +162,19 @@ echo "Starting backup at `date`" >> $log # mount backup destination folder as read-write if [ "$read_only" == "1" ] || [ "$read_only" == "yes" ]; then - if [ -d "$mountpoint" ]; then - mount -o remount,rw $mountpoint - if (($?)); then - error "Could not mount $mountpoint" - exit 1 - fi - fi + if [ -d "$mountpoint" ]; then + mount -o remount,rw $mountpoint + if (($?)); then + error "Could not mount $mountpoint" + exit 1 + fi + fi fi # the backup procedure if [ ! -d "$backupdir/$folder/$folder.0" ]; then - mkdir -p $backupdir/$folder/$folder.0 + mkdir -p $backupdir/$folder/$folder.0 fi info "Rotating $backupdir/$folder/$folder..." @@ -183,7 +183,7 @@ rotate $backupdir/$folder/$folder $keep info "Wget'ing $SECTION on $backupdir/$folder/$folder.0..." if [ ! -z "$badnwidth" ]; then - limit_rate="--limit-rate=$badnwidth""k" + limit_rate="--limit-rate=$badnwidth""k" fi cd $backupdir/$folder/$folder.0 @@ -195,25 +195,25 @@ $touch $backupdir/$folder/$folder.0 # remount backup destination as read-only if [ "$read_only" == "1" ] || [ "$read_only" == "yes" ]; then - mount -o remount,ro $mountpoint + mount -o remount,ro $mountpoint fi # check partition for errors if [ "$fscheck" == "1" ] || [ "$fscheck" == "yes" ]; then - umount $mountpoint - if (($?)); then - warning "Could not umount $mountpoint to run fsck" - else - $nice $fsck -v -y $partition >> $log - mount $mountpoint - fi + umount $mountpoint + if (($?)); then + warning "Could not umount $mountpoint to run fsck" + else + $nice $fsck -v -y $partition >> $log + mount $mountpoint + fi fi # removes the lockfile if [ ! -z "$lockfile" ]; then - $rm $lockfile || warning "Could not remove lockfile $lockfile" + $rm $lockfile || warning "Could not remove lockfile $lockfile" fi echo "Finnishing backup at `date`" >> $log diff --git a/lib/easydialog.in b/lib/easydialog.in index 056c76c..6b41e1b 100644 --- a/lib/easydialog.in +++ b/lib/easydialog.in @@ -1,5 +1,6 @@ #!@BASH@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # copyright 2002 lmoore@tump.com under the terms of the GNU LGPL. # additions 2005 collective@riseup.net @@ -20,43 +21,43 @@ DIALOG=dialog HELP= setApplicationTitle() { - BACKTITLE=$* + BACKTITLE=$* } setHelp() { - HELP="$@" + HELP="$@" } setDimension() { - WIDTH=$1 - HEIGHT=$2 + WIDTH=$1 + HEIGHT=$2 } booleanBox() { - $DIALOG --backtitle "$BACKTITLE" --title "$1" \ - `[ "$3" == no ] && echo '--defaultno'` --yesno "$2" $HEIGHT $WIDTH + $DIALOG --backtitle "$BACKTITLE" --title "$1" \ + `[ "$3" == no ] && echo '--defaultno'` --yesno "$2" $HEIGHT $WIDTH } msgBox() { - $DIALOG --backtitle "$BACKTITLE" --title "$1" \ - --msgbox "$2" $HEIGHT $WIDTH + $DIALOG --backtitle "$BACKTITLE" --title "$1" \ + --msgbox "$2" $HEIGHT $WIDTH } gaugeBox() { - $DIALOG --backtitle "$BACKTITLE" --title "$1" \ - --gauge "$2" $HEIGHT $WIDTH 0 + $DIALOG --backtitle "$BACKTITLE" --title "$1" \ + --gauge "$2" $HEIGHT $WIDTH 0 } inputBox() { - local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 - trap "rm -f $temp" 0 - REPLY= - $DIALOG --backtitle "$BACKTITLE" --title "$1" \ - --inputbox "$2" $HEIGHT $WIDTH "$3" 2> $temp - local status=$? 
- [ $status = 0 ] && REPLY=$(cat $temp) - rm -f $temp - return $status + local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 + trap "rm -f $temp" 0 + REPLY= + $DIALOG --backtitle "$BACKTITLE" --title "$1" \ + --inputbox "$2" $HEIGHT $WIDTH "$3" 2> $temp + local status=$? + [ $status = 0 ] && REPLY=$(cat $temp) + rm -f $temp + return $status } # Xdialog and {dialog,whiptail} use different mechanism to "qoute" the @@ -67,7 +68,7 @@ inputBox() { # mechanism to the standard double-quoting one. it receives two # arguements, the file that has the data and the box type. _listReplyHook() { - cat $1 + cat $1 } # this is the base implementation of all the list based boxes, it works @@ -75,79 +76,79 @@ _listReplyHook() { # this function with an extra argument specifying the actual box that # needs to be rendered. _genericListBox() { - local box=$1 - shift 1 - local title=$1 - local text=$2 - shift 2 - local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 - trap "rm -f $temp" 0 - REPLY= - $DIALOG $HELP $_DEFAULT --backtitle "$BACKTITLE" --title "$title" \ - $box "$text" $HEIGHT $WIDTH 10 \ - "$@" 2> $temp - local status=$? - [ $status = 0 ] && REPLY=$(_listReplyHook $temp $box) - rm -f $temp - _DEFAULT= - return $status + local box=$1 + shift 1 + local title=$1 + local text=$2 + shift 2 + local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 + trap "rm -f $temp" 0 + REPLY= + $DIALOG $HELP $_DEFAULT --backtitle "$BACKTITLE" --title "$title" \ + $box "$text" $HEIGHT $WIDTH 10 \ + "$@" 2> $temp + local status=$? + [ $status = 0 ] && REPLY=$(_listReplyHook $temp $box) + rm -f $temp + _DEFAULT= + return $status } setDefault() { - _DEFAULT="--default-item $1" + _DEFAULT="--default-item $1" } menuBox() { - _genericListBox --menu "$@" + _genericListBox --menu "$@" } ## a menu box with additional help info displayed ## at the bottom of the window when an item is selected menuBoxHelp() { - HELP="--item-help" - _genericListBox --menu "$@" - status=$? - HELP= - return $status + HELP="--item-help" + _genericListBox --menu "$@" + status=$? + HELP= + return $status } ## a menu box with an addition button 'help' menuBoxHelpFile() { - HELP="--help-button" - _genericListBox --menu "$@" - status=$? - HELP= - return $status + HELP="--help-button" + _genericListBox --menu "$@" + status=$? + HELP= + return $status } checkBox() { - _genericListBox --checklist "$@" + _genericListBox --checklist "$@" } radioBox() { - _genericListBox --radiolist "$@" + _genericListBox --radiolist "$@" } textBox() { - $DIALOG --backtitle "$BACKTITLE" --title "$1" --textbox "$2" $HEIGHT $WIDTH + $DIALOG --backtitle "$BACKTITLE" --title "$1" --textbox "$2" $HEIGHT $WIDTH } passwordBox() { - local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 - trap "rm -f $temp" 0 - REPLY= - $DIALOG --backtitle "$BACKTITLE" --title "$1" \ - --passwordbox "$2" $HEIGHT $WIDTH 2> $temp - local status=$? - [ $status = 0 ] && REPLY=$(cat $temp) - rm -f $temp - return $status + local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 + trap "rm -f $temp" 0 + REPLY= + $DIALOG --backtitle "$BACKTITLE" --title "$1" \ + --passwordbox "$2" $HEIGHT $WIDTH 2> $temp + local status=$? + [ $status = 0 ] && REPLY=$(cat $temp) + rm -f $temp + return $status } ######################################################### ## begin-item-display style lists -## +## ## these lists are built by calling fuctions multiple times. 
## this can make it easier to build your list in a loop ## @@ -177,7 +178,7 @@ listDisplay() { boxtype=$1 local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 trap "rm -f $temp" 0 - + local label local text local status @@ -188,13 +189,13 @@ listDisplay() { echo -ne " --$boxtype '$_menu_msg' " echo -ne " $HEIGHT $WIDTH 10 " for ((i=0; i < $_menu_items ; i++)); do - label=${_menu_labels[$i]} - text=${_menu_text[$i]} - status=${_menu_status[$i]} - echo -ne " $label '$text' $status " + label=${_menu_labels[$i]} + text=${_menu_text[$i]} + status=${_menu_status[$i]} + echo -ne " $label '$text' $status " done ) | xargs $DIALOG 2> $temp - + local status=$? REPLY="" [ $status = 0 ] && REPLY=`cat $temp` @@ -219,10 +220,10 @@ formItem() { _form_text[$_form_items]=$2 let "_form_items += 1" } - + formDisplay() { local temp=$(@MKTEMP@ -t backupninja.XXXXXX) || exit 1 - + max_length=0 for ((i=0; i < ${#_form_labels[@]} ; i++)); do label=${_form_labels[$i]} @@ -232,19 +233,19 @@ formDisplay() { fi done let "max_length += 2" - + local xpos=1 ( echo -n -e "--form '$_form_title' 0 0 20" for ((i=0; i < $_form_items ; i++)); do - label=${_form_labels[$i]} - text=${_form_text[$i]} - echo -n -e " $label $xpos 1 '$text' $xpos $max_length 30 30" - let "xpos += _form_gap" + label=${_form_labels[$i]} + text=${_form_text[$i]} + echo -n -e " $label $xpos 1 '$text' $xpos $max_length 30 100" + let "xpos += _form_gap" done ) | xargs $DIALOG 2> $temp local status=$? - + ## ## the exit status is meaningless, it is always 0. ## i can't figure out how to get the exit status of dialog @@ -254,7 +255,7 @@ formDisplay() { ## swallowed by xargs. xargs should return different exit status ## depending on the exit status of the command run, but i have ## never been able to get that to work. - ## + ## REPLY= if [ $status = 0 ]; then diff --git a/lib/tools.in b/lib/tools.in index 0005be9..929826b 100644 --- a/lib/tools.in +++ b/lib/tools.in @@ -1,5 +1,6 @@ #!@BASH@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # This file contains functions shared between ninjahelper and backupninja. @@ -10,39 +11,39 @@ # create a temporary file in a secure way. # function maketemp() { - local tempfile=`mktemp /tmp/$1.XXXXXXXX` - echo $tempfile + local tempfile=`mktemp /tmp/$1.XXXXXXXX` + echo $tempfile } ##################################################### ## CONFIG-FILE RELATED FUNCTIONS function setfile() { - CURRENT_CONF_FILE=$1 + CURRENT_CONF_FILE=$1 } function setsection() { - CURRENT_SECTION=$1 + CURRENT_SECTION=$1 } # # sets a global var with name equal to $1 # to the value of the configuration parameter $1 # $2 is the default. -# +# function getconf() { - CURRENT_PARAM=$1 - ret=`@AWK@ -f $libdirectory/parseini S=$CURRENT_SECTION P=$CURRENT_PARAM $CURRENT_CONF_FILE` - # if nothing is returned, set the default - if [ "$ret" == "" -a "$2" != "" ]; then - ret="$2" - fi - - # replace * with %, so that it is not globbed. - ret="${ret//\\*/__star__}" - - # this is weird, but single quotes are needed to - # allow for returned values with spaces. $ret is still expanded - # because it is in an 'eval' statement. - eval $1='$ret' + CURRENT_PARAM=$1 + ret=`@AWK@ -f $libdirectory/parseini S=$CURRENT_SECTION P=$CURRENT_PARAM $CURRENT_CONF_FILE` + # if nothing is returned, set the default + if [ "$ret" == "" -a "$2" != "" ]; then + ret="$2" + fi + + # replace * with %, so that it is not globbed. 
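   # Illustration, not part of this patch: a handler reads its settings with
   # calls such as
   #    getconf luksheaders no
   #    getconf luksheadersfile $parentdir/luksheader.__star__.bin
   # after which $luksheaders and $luksheadersfile hold either the value from
   # the current section of the action file or the default given here; the
   # __star__ placeholder is expanded per device later in the handler, e.g.
   #    outputfile=${luksheadersfile//__star__/$label}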
+ ret="${ret//\\*/__star__}" + + # this is weird, but single quotes are needed to + # allow for returned values with spaces. $ret is still expanded + # because it is in an 'eval' statement. + eval $1='$ret' } diff --git a/lib/vserver.in b/lib/vserver.in index 153a9b0..b356b2b 100644 --- a/lib/vserver.in +++ b/lib/vserver.in @@ -1,4 +1,5 @@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: ##################################################### ## VSERVERS RELATED FUNCTIONS FOR NINJAHELPER @@ -66,7 +67,7 @@ init_vservers() { found_vservers=`ls $VROOTDIR | grep -E -v "lost\+found|ARCHIVES" | tr "\n" " "` if [ -z "$found_vservers" ]; then `if [ "$arg" = nodialog ]; then echo warning; else echo "msgBox warning"; fi` \ - "vservers enabled in $conffile, but no vserver was found in $VROOTDIR."; + "vservers enabled in $conffile, but no vserver was found in $VROOTDIR."; return fi vservers_are_available=yes @@ -142,9 +143,9 @@ choose_one_vserver() { while [ -z "$REPLY" ]; do [ -n "$selected_vservers" ] && setDefault $selected_vservers listBegin "$title" "Choose at least one Linux-Vserver to backup:" - for vserver in $found_vservers; do - listItem "$vserver" "Backup $vserver vserver" - done + for vserver in $found_vservers; do + listItem "$vserver" "Backup $vserver vserver" + done listDisplay menu [ $? = 0 ] || return 1 done @@ -152,17 +153,17 @@ choose_one_vserver() { } ## -## If Vservers are not enabled, set host_or_vservers='host' and then return +## If Vservers are not enabled, set host_or_vservers='host' and then return ## Else, have the user choose if he/she wants to perform the backup on the host ## system or on one Vserver. ## Set, respectively, $host_or_vservers to 'host' or 'vservers'. ## Returns 1 if cancelled. ## choose_host_or_one_vserver() { - if [ "$vservers_are_available" != "yes" ] + if [ "$vservers_are_available" != "yes" ] then - host_or_vservers='host' - return + host_or_vservers='host' + return fi local title=$1 # if there is one, set the previously chosen item as the default @@ -173,11 +174,11 @@ choose_host_or_one_vserver() { [ $? = 0 ] || return 1 case $REPLY in "host") - host_or_vservers='host' - ;; + host_or_vservers='host' + ;; "vserver") - host_or_vservers='vservers' - ;; + host_or_vservers='vservers' + ;; esac } @@ -191,10 +192,10 @@ choose_host_or_one_vserver() { ## Returns 1 if cancelled. ## choose_host_or_vservers_or_both() { - if [ "$vservers_are_available" != "yes" ] + if [ "$vservers_are_available" != "yes" ] then - host_or_vservers='host' - return + host_or_vservers='host' + return fi local title=$1 # if there is one, set the previously chosen item as the default @@ -206,14 +207,14 @@ choose_host_or_vservers_or_both() { [ $? = 0 ] || return 1 case $REPLY in "host") - host_or_vservers='host' - ;; + host_or_vservers='host' + ;; "vservers") - host_or_vservers='vservers' - ;; + host_or_vservers='vservers' + ;; "both") - host_or_vservers='both' - ;; + host_or_vservers='both' + ;; esac } @@ -236,13 +237,13 @@ choose_one_or_more_vservers() { local vserver_was_selected= REPLY= while [ -z "$REPLY" ]; do - listBegin "$title" "Choose at least one Linux-Vserver to backup:" - # list existing vservers, preselecting the previously selected ones - for vserver in $found_vservers; do - listItem "$vserver" "Backup $vserver vserver" `vserver_is_selected $vserver` - done - listDisplay checklist - [ $? 
= 0 ] || return 1 + listBegin "$title" "Choose at least one Linux-Vserver to backup:" + # list existing vservers, preselecting the previously selected ones + for vserver in $found_vservers; do + listItem "$vserver" "Backup $vserver vserver" `vserver_is_selected $vserver` + done + listDisplay checklist + [ $? = 0 ] || return 1 done # remove quotes around each vserver name selected_vservers=`echo $REPLY | tr -d '"'` diff --git a/src/Makefile.am b/src/Makefile.am index 3d5bdb8..5694e9a 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -11,6 +11,7 @@ edit = sed \ -e "s,@BASH\@,$(BASH),g" \ -e "s,@AWK\@,$(AWK),g" \ -e "s,@SED\@,$(SED),g" \ + -e "s,@STAT\@,$(STAT),g" \ -e 's,@datadir\@,$(pkgdatadir),g' \ -e "s,@libdir\@,$(pkglibdir),g" \ -e 's,@localstatedir\@,$(localstatedir),g' \ diff --git a/src/backupninja.in b/src/backupninja.in index afb9556..e8a820f 100755 --- a/src/backupninja.in +++ b/src/backupninja.in @@ -1,5 +1,6 @@ #!@BASH@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: # # |\_ # B A C K U P N I N J A /()/ @@ -22,31 +23,31 @@ ## FUNCTIONS function setupcolors () { - BLUE="\033[34;01m" - GREEN="\033[32;01m" - YELLOW="\033[33;01m" - PURPLE="\033[35;01m" - RED="\033[31;01m" - OFF="\033[0m" - CYAN="\033[36;01m" - COLORS=($BLUE $GREEN $YELLOW $RED $PURPLE $CYAN) + BLUE="\033[34;01m" + GREEN="\033[32;01m" + YELLOW="\033[33;01m" + PURPLE="\033[35;01m" + RED="\033[31;01m" + OFF="\033[0m" + CYAN="\033[36;01m" + COLORS=($BLUE $GREEN $YELLOW $RED $PURPLE $CYAN) } function colorize () { - if [ "$usecolors" == "yes" ]; then - local typestr=`echo "$@" | @SED@ 's/\(^[^:]*\).*$/\1/'` - [ "$typestr" == "Debug" ] && type=0 - [ "$typestr" == "Info" ] && type=1 - [ "$typestr" == "Warning" ] && type=2 - [ "$typestr" == "Error" ] && type=3 - [ "$typestr" == "Fatal" ] && type=4 - [ "$typestr" == "Halt" ] && type=5 - color=${COLORS[$type]} - endcolor=$OFF - echo -e "$color$@$endcolor" - else - echo -e "$@" - fi + if [ "$usecolors" == "yes" ]; then + local typestr=`echo "$@" | @SED@ 's/\(^[^:]*\).*$/\1/'` + [ "$typestr" == "Debug" ] && type=0 + [ "$typestr" == "Info" ] && type=1 + [ "$typestr" == "Warning" ] && type=2 + [ "$typestr" == "Error" ] && type=3 + [ "$typestr" == "Fatal" ] && type=4 + [ "$typestr" == "Halt" ] && type=5 + color=${COLORS[$type]} + endcolor=$OFF + echo -e "$color$@$endcolor" + else + echo -e "$@" + fi } # We have the following message levels: @@ -65,71 +66,71 @@ echo_debug_msg=0 usecolors=yes function printmsg() { - [ ${#@} -gt 1 ] || return - - type=$1 - shift - if [ $type == 100 ]; then - typestr=`echo "$@" | @SED@ 's/\(^[^:]*\).*$/\1/'` - [ "$typestr" == "Debug" ] && type=0 - [ "$typestr" == "Info" ] && type=1 - [ "$typestr" == "Warning" ] && type=2 - [ "$typestr" == "Error" ] && type=3 - [ "$typestr" == "Fatal" ] && type=4 - [ "$typestr" == "Halt" ] && type=5 - typestr="" - else - types=(Debug Info Warning Error Fatal Halt) - typestr="${types[$type]}: " - fi - - print=$[4-type] - - if [ $echo_debug_msg == 1 ]; then - echo -e "$typestr$@" >&2 - elif [ $debug ]; then - colorize "$typestr$@" >&2 - fi - - if [ $print -lt $loglevel ]; then - logmsg "$typestr$@" - fi + [ ${#@} -gt 1 ] || return + + type=$1 + shift + if [ $type == 100 ]; then + typestr=`echo "$@" | @SED@ 's/\(^[^:]*\).*$/\1/'` + [ "$typestr" == "Debug" ] && type=0 + [ "$typestr" == "Info" ] && type=1 + [ "$typestr" == "Warning" ] && type=2 + [ "$typestr" == "Error" ] && type=3 + [ "$typestr" == "Fatal" ] && type=4 + [ "$typestr" == "Halt" ] && 
type=5 + typestr="" + else + types=(Debug Info Warning Error Fatal Halt) + typestr="${types[$type]}: " + fi + + print=$[4-type] + + if [ $echo_debug_msg == 1 ]; then + echo -e "$typestr$@" >&2 + elif [ $debug ]; then + colorize "$typestr$@" >&2 + fi + + if [ $print -lt $loglevel ]; then + logmsg "$typestr$@" + fi } function logmsg() { - if [ -w "$logfile" ]; then - echo -e `date "+%h %d %H:%M:%S"` "$@" >> $logfile - fi + if [ -w "$logfile" ]; then + echo -e `LC_ALL=C date "+%h %d %H:%M:%S"` "$@" >> $logfile + fi } function passthru() { - printmsg 100 "$@" + printmsg 100 "$@" } function debug() { - printmsg 0 "$@" + printmsg 0 "$@" } function info() { - printmsg 1 "$@" + printmsg 1 "$@" } function warning() { - printmsg 2 "$@" + printmsg 2 "$@" } function error() { - printmsg 3 "$@" + printmsg 3 "$@" } function fatal() { - printmsg 4 "$@" - exit 2 + printmsg 4 "$@" + exit 2 } function halt() { - printmsg 5 "$@" - exit 2 + printmsg 5 "$@" + exit 2 } msgcount=0 function msg { - messages[$msgcount]=$1 - let "msgcount += 1" + messages[$msgcount]=$1 + let "msgcount += 1" } # @@ -142,14 +143,14 @@ function check_perms() { local perms local owners - perms=($(stat -L --format='%A' $file)) + perms=($(@STAT@ -L --format='%A' $file)) debug "perms: $perms" local gperm=${perms:4:3} debug "gperm: $gperm" local wperm=${perms:7:3} debug "wperm: $wperm" - owners=($(stat -L --format='%g %G %u %U' $file)) + owners=($(@STAT@ -L --format='%g %G %u %U' $file)) local gid=${owners[0]} local group=${owners[1]} local owner=${owners[2]} @@ -158,7 +159,7 @@ function check_perms() { echo "Configuration files must be owned by root! Dying on file $file" fatal "Configuration files must be owned by root! Dying on file $file" fi - + if [ "$wperm" != '---' ]; then echo "Configuration files must not be world writable/readable! Dying on file $file" fatal "Configuration files must not be world writable/readable! Dying on file $file" @@ -180,12 +181,12 @@ function check_perms() { # simple lowercase function function tolower() { - echo "$1" | tr '[:upper:]' '[:lower:]' + echo "$1" | tr '[:upper:]' '[:lower:]' } # simple to integer function function toint() { - echo "$1" | tr -d '[:alpha:]' + echo "$1" | tr -d '[:alpha:]' } # @@ -200,42 +201,45 @@ function toint() { # we grab the current time once, since processing # all the configs might take more than an hour. 
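# Note, not part of this patch: the LC_ALL=C prefix added below matters
# because `date +%A` is localized -- on a non-English system it can print
# e.g. "dimanche" rather than "Sunday", which would never match the English
# weekday names used in "when" schedules. For example:
#    LC_ALL=C date +%A    # prints an English weekday name in any locale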
-nowtime=`date +%H` -nowday=`date +%d` -nowdayofweek=`date +%A` +nowtime=`LC_ALL=C date +%H` +nowday=`LC_ALL=C date +%d` +nowdayofweek=`LC_ALL=C date +%A` nowdayofweek=`tolower "$nowdayofweek"` function isnow() { - local when="$1" - set -- $when - whendayofweek=$1; at=$2; whentime=$3; - whenday=`toint "$whendayofweek"` - whendayofweek=`tolower "$whendayofweek"` - whentime=`echo "$whentime" | @SED@ 's/:[0-9][0-9]$//' | @SED@ -r 's/^([0-9])$/0\1/'` - - if [ "$whendayofweek" == "everyday" -o "$whendayofweek" == "daily" ]; then - whendayofweek=$nowdayofweek - fi - - if [ "$whenday" == "" ]; then - if [ "$whendayofweek" != "$nowdayofweek" ]; then - whendayofweek=${whendayofweek%s} - if [ "$whendayofweek" != "$nowdayofweek" ]; then - return 0 - fi - fi - elif [ "$whenday" != "$nowday" ]; then - return 0 - fi - - [ "$at" == "at" ] || return 0 - [ "$whentime" == "$nowtime" ] || return 0 - - return 1 + local when="$1" + set -- $when + + [ "$when" == "manual" ] && return 0 + + whendayofweek=$1; at=$2; whentime=$3; + whenday=`toint "$whendayofweek"` + whendayofweek=`tolower "$whendayofweek"` + whentime=`echo "$whentime" | @SED@ 's/:[0-9][0-9]$//' | @SED@ -r 's/^([0-9])$/0\1/'` + + if [ "$whendayofweek" == "everyday" -o "$whendayofweek" == "daily" ]; then + whendayofweek=$nowdayofweek + fi + + if [ "$whenday" == "" ]; then + if [ "$whendayofweek" != "$nowdayofweek" ]; then + whendayofweek=${whendayofweek%s} + if [ "$whendayofweek" != "$nowdayofweek" ]; then + return 0 + fi + fi + elif [ "$whenday" != "$nowday" ]; then + return 0 + fi + + [ "$at" == "at" ] || return 0 + [ "$whentime" == "$nowtime" ] || return 0 + + return 1 } function usage() { - cat << EOF + cat << EOF $0 usage: This script allows you to coordinate system backup by dropping a few simple configuration files into @CFGDIR@/backup.d/. Typically, this @@ -254,18 +258,18 @@ The following options are available: -n, --now Perform actions now, instead of when they might be scheduled. No output will be created unless also run with -d. - --run FILE Execute the specified action file and then exit. + --run FILE Execute the specified action file and then exit. Also puts backupninja in debug mode. - + When in debug mode, output to the console will be colored: EOF - usecolors=yes - colorize "Debug: Debugging info (when run with -d)" - colorize "Info: Informational messages (verbosity level 4)" - colorize "Warning: Warnings (verbosity level 3 and up)" - colorize "Error: Errors (verbosity level 2 and up)" - colorize "Fatal: Errors which halt a given backup action (always shown)" - colorize "Halt: Errors which halt the whole backupninja run (always shown)" + usecolors=yes + colorize "Debug: Debugging info (when run with -d)" + colorize "Info: Informational messages (verbosity level 4)" + colorize "Warning: Warnings (verbosity level 3 and up)" + colorize "Error: Errors (verbosity level 2 and up)" + colorize "Fatal: Errors which halt a given backup action (always shown)" + colorize "Halt: Errors which halt the whole backupninja run (always shown)" } ## @@ -276,89 +280,93 @@ EOF ## function process_action() { - local file="$1" - local suffix="$2" - local run="no" - setfile $file - - # skip over this config if "when" option - # is not set to the current time. 
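   # Illustration, not part of this patch: "when" values accepted by the
   # isnow() parser above, including the new manual mode (minutes in the time
   # are stripped, so matching is per hour):
   #    when = everyday at 01:00
   #    when = sundays at 02:00
   #    when = 21 at 03:00     # 21st day of the month
   #    when = hourly
   #    when = manual          # never scheduled; runs only via --now or --run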
- getconf when "$defaultwhen" - if [ "$processnow" == 1 ]; then - info ">>>> starting action $file (because of --now)" - run="yes" - elif [ "$when" == "hourly" ]; then - info ">>>> starting action $file (because 'when = hourly')" - run="yes" - else - IFS=$'\t\n' - for w in $when; do - IFS=$' \t\n' - isnow "$w" - ret=$? - IFS=$'\t\n' - if [ $ret == 0 ]; then - debug "skipping $file because it is not $w" - else - info ">>>> starting action $file (because it is $w)" - run="yes" - fi - done - IFS=$' \t\n' - fi - debug $run - [ "$run" == "no" ] && return - - let "actions_run += 1" - - # call the handler: - local bufferfile=`maketemp backupninja.buffer` - echo "" > $bufferfile - echo_debug_msg=1 - ( - . $scriptdirectory/$suffix $file - ) 2>&1 | ( - while read a; do - echo $a >> $bufferfile - [ $debug ] && colorize "$a" - done - ) - retcode=$? - # ^^^^^^^^ we have a problem! we can't grab the return code "$?". grrr. - echo_debug_msg=0 - - _warnings=`cat $bufferfile | grep "^Warning: " | wc -l` - _errors=`cat $bufferfile | grep "^Error: " | wc -l` - _fatals=`cat $bufferfile | grep "^Fatal: " | wc -l` - _halts=`cat $bufferfile | grep "^Halt: " | wc -l` - - ret=`grep "\(^Warning: \|^Error: \|^Fatal: \|Halt: \)" $bufferfile` - rm $bufferfile - if [ $_halts != 0 ]; then - msg "*halt* -- $file" - errormsg="$errormsg\n== halt request from $file==\n\n$ret\n" - passthru "Halt: <<<< finished action $file: FAILED" - elif [ $_fatals != 0 ]; then - msg "*failed* -- $file" - errormsg="$errormsg\n== fatal errors from $file ==\n\n$ret\n" - passthru "Fatal: <<<< finished action $file: FAILED" - elif [ $_errors != 0 ]; then - msg "*error* -- $file" - errormsg="$errormsg\n== errors from $file ==\n\n$ret\n" - error "<<<< finished action $file: ERROR" - elif [ $_warnings != 0 ]; then - msg "*warning* -- $file" - errormsg="$errormsg\n== warnings from $file ==\n\n$ret\n" - warning "<<<< finished action $file: WARNING" - else - msg "success -- $file" - info "<<<< finished action $file: SUCCESS" - fi - - let "halts += _halts" - let "fatals += _fatals" - let "errors += _errors" - let "warnings += _warnings" + local file="$1" + local suffix="$2" + local run="no" + setfile $file + + # skip over this config if "when" option + # is not set to the current time. + getconf when "$defaultwhen" + if [ "$processnow" == 1 ]; then + info ">>>> starting action $file (because of --now)" + run="yes" + elif [ "$when" == "hourly" ]; then + info ">>>> starting action $file (because 'when = hourly')" + run="yes" + else + IFS=$'\t\n' + for w in $when; do + IFS=$' \t\n' + isnow "$w" + ret=$? + IFS=$'\t\n' + if [ $ret == 0 ]; then + debug "skipping $file because current time does not match $w" + else + info ">>>> starting action $file (because current time matches $w)" + run="yes" + fi + done + IFS=$' \t\n' + fi + debug $run + [ "$run" == "no" ] && return + + let "actions_run += 1" + + # call the handler: + local bufferfile=`maketemp backupninja.buffer` + echo "" > $bufferfile + echo_debug_msg=1 + ( + . $scriptdirectory/$suffix $file + ) 2>&1 | ( + while read a; do + echo $a >> $bufferfile + [ $debug ] && colorize "$a" + done + ) + retcode=$? + # ^^^^^^^^ we have a problem! we can't grab the return code "$?". grrr. 
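   # Aside, not part of this patch: in bash the handler's exit status could in
   # principle be recovered right after the pipeline with the PIPESTATUS
   # array, e.g.
   #    retcode=${PIPESTATUS[0]}   # status of the first pipeline member
   # backupninja instead infers the outcome below by counting the Warning:,
   # Error:, Fatal: and Halt: lines collected in $bufferfile.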
+ echo_debug_msg=0 + + _warnings=`cat $bufferfile | grep "^Warning: " | wc -l` + _errors=`cat $bufferfile | grep "^Error: " | wc -l` + _fatals=`cat $bufferfile | grep "^Fatal: " | wc -l` + _halts=`cat $bufferfile | grep "^Halt: " | wc -l` + _infos=`cat $bufferfile | grep "^Info: " | wc -l` + + ret=`grep "\(^Info: \|^Warning: \|^Error: \|^Fatal: \|Halt: \)" $bufferfile` + rm $bufferfile + if [ $_halts != 0 ]; then + msg "*halt* -- $file" + errormsg="$errormsg\n== halt request from $file==\n\n$ret\n" + passthru "Halt: <<<< finished action $file: FAILED" + elif [ $_fatals != 0 ]; then + msg "*failed* -- $file" + errormsg="$errormsg\n== fatal errors from $file ==\n\n$ret\n" + passthru "Fatal: <<<< finished action $file: FAILED" + elif [ $_errors != 0 ]; then + msg "*error* -- $file" + errormsg="$errormsg\n== errors from $file ==\n\n$ret\n" + error "<<<< finished action $file: ERROR" + elif [ $_warnings != 0 ]; then + msg "*warning* -- $file" + errormsg="$errormsg\n== warnings from $file ==\n\n$ret\n" + warning "<<<< finished action $file: WARNING" + else + msg "success -- $file" + if [ $_infos != 0 -a "$reportinfo" == "yes" ]; then + errormsg="$errormsg\n== infos from $file ==\n\n$ret\n" + fi + info "<<<< finished action $file: SUCCESS" + fi + + let "halts += _halts" + let "fatals += _fatals" + let "errors += _errors" + let "warnings += _warnings" } ##################################################### @@ -371,71 +379,71 @@ loglevel=3 ## process command line options while [ $# -ge 1 ]; do - case $1 in - -h|--help) usage;; - -d|--debug) debug=1;; - -t|--test) test=1;debug=1;; - -n|--now) processnow=1;; - -f|--conffile) - if [ -f $2 ]; then - conffile=$2 - else - echo "-f|--conffile option must be followed by an existing filename" - fatal "-f|--conffile option must be followed by an existing filename" - usage - fi - # we shift here to avoid processing the file path - shift - ;; - --run) - debug=1 - if [ -f $2 ]; then - singlerun=$2 - processnow=1 - else - echo "--run option must be followed by a backupninja action file" - fatal "--run option must be followed by a backupninja action file" - usage - fi - shift - ;; - *) - debug=1 - echo "Unknown option $1" - fatal "Unknown option $1" - usage - exit - ;; - esac - shift -done + case $1 in + -h|--help) usage;; + -d|--debug) debug=1; export BACKUPNINJA_DEBUG=yes;; + -t|--test) test=1;debug=1;; + -n|--now) processnow=1;; + -f|--conffile) + if [ -f $2 ]; then + conffile=$2 + else + echo "-f|--conffile option must be followed by an existing filename" + fatal "-f|--conffile option must be followed by an existing filename" + usage + fi + # we shift here to avoid processing the file path + shift + ;; + --run) + debug=1 + if [ -f $2 ]; then + singlerun=$2 + processnow=1 + else + echo "--run option must be followed by a backupninja action file" + fatal "--run option must be followed by a backupninja action file" + usage + fi + shift + ;; + *) + debug=1 + echo "Unknown option $1" + fatal "Unknown option $1" + usage + exit + ;; + esac + shift +done #if [ $debug ]; then -# usercolors=yes +# usercolors=yes #fi ## Load and confirm basic configuration values # bootstrap if [ ! -r "$conffile" ]; then - echo "Configuration file $conffile not found." - fatal "Configuration file $conffile not found." + echo "Configuration file $conffile not found." + fatal "Configuration file $conffile not found." 
fi # find $libdirectory libdirectory=`grep '^libdirectory' $conffile | @AWK@ '{print $3}'` if [ -z "$libdirectory" ]; then - if [ -d "@libdir@" ]; then - libdirectory="@libdir@" - else - echo "Could not find entry 'libdirectory' in $conffile." - fatal "Could not find entry 'libdirectory' in $conffile." - fi + if [ -d "@libdir@" ]; then + libdirectory="@libdir@" + else + echo "Could not find entry 'libdirectory' in $conffile." + fatal "Could not find entry 'libdirectory' in $conffile." + fi else - if [ ! -d "$libdirectory" ]; then - echo "Lib directory $libdirectory not found." - fatal "Lib directory $libdirectory not found." - fi + if [ ! -d "$libdirectory" ]; then + echo "Lib directory $libdirectory not found." + fatal "Lib directory $libdirectory not found." + fi fi # include shared functions @@ -452,6 +460,7 @@ getconf reportemail getconf reporthost getconf reportspace getconf reportsuccess yes +getconf reportinfo no getconf reportuser getconf reportwarning yes getconf loglevel 3 @@ -479,15 +488,15 @@ getconf admingroup root init_vservers nodialog if [ ! -d "$configdirectory" ]; then - echo "Configuration directory '$configdirectory' not found." - fatal "Configuration directory '$configdirectory' not found." + echo "Configuration directory '$configdirectory' not found." + fatal "Configuration directory '$configdirectory' not found." fi [ -f "$logfile" ] || touch $logfile if [ "$UID" != "0" ]; then - echo "`basename $0` can only be run as root" - exit 1 + echo "`basename $0` can only be run as root" + exit 1 fi ## Process each configuration file @@ -504,34 +513,34 @@ actions_run=0 errormsg="" if [ "$singlerun" ]; then - files=$singlerun + files=$singlerun else - files=`find $configdirectory -follow -mindepth 1 -maxdepth 1 -type f ! -name '.*.swp' | sort -n` + files=`find $configdirectory -follow -mindepth 1 -maxdepth 1 -type f ! -name '.*.swp' | sort -n` - if [ -z "$files" ]; then - fatal "No backup actions configured in '$configdirectory', run ninjahelper!" - fi + if [ -z "$files" ]; then + fatal "No backup actions configured in '$configdirectory', run ninjahelper!" 
+ fi fi for file in $files; do - [ -f "$file" ] || continue - [ "$halts" = "0" ] || continue - - check_perms ${file%/*} # check containing dir - check_perms $file - suffix="${file##*.}" - base=`basename $file` - if [ "${base:0:1}" == "0" -o "$suffix" == "disabled" ]; then - info "Skipping $file" - continue - fi - - if [ -e "$scriptdirectory/$suffix" ]; then - process_action $file $suffix - else - error "Can't process file '$file': no handler script for suffix '$suffix'" - msg "*missing handler* -- $file" - fi + [ -f "$file" ] || continue + [ "$halts" = "0" ] || continue + + check_perms ${file%/*} # check containing dir + check_perms $file + suffix="${file##*.}" + base=`basename $file` + if [ "${base:0:1}" == "0" -o "$suffix" == "disabled" ]; then + info "Skipping $file" + continue + fi + + if [ -e "$scriptdirectory/$suffix" ]; then + process_action $file $suffix + else + error "Can't process file '$file': no handler script for suffix '$suffix'" + msg "*missing handler* -- $file" + fi done ## mail the messages to the report address @@ -546,38 +555,38 @@ else doit=0 fi if [ $doit == 1 ]; then - debug "send report to $reportemail" - hostname=`hostname` - [ $warnings == 0 ] || subject="WARNING" - [ $errors == 0 ] || subject="ERROR" - [ $fatals == 0 ] || subject="FAILED" - - { - for ((i=0; i < ${#messages[@]} ; i++)); do - echo ${messages[$i]} - done - echo -e "$errormsg" - if [ "$reportspace" == "yes" ]; then - previous="" - for i in $(ls "$configdirectory"); do - backuploc=$(grep ^directory "$configdirectory"/"$i" | @AWK@ '{print $3}') - if [ "$backuploc" != "$previous" -a -n "$backuploc" ]; then - df -h "$backuploc" - previous="$backuploc" - fi - done - fi - } | mail -s "backupninja: $hostname $subject" $reportemail + debug "send report to $reportemail" + hostname=`hostname` + [ $warnings == 0 ] || subject="WARNING" + [ $errors == 0 ] || subject="ERROR" + [ $fatals == 0 ] || subject="FAILED" + + { + for ((i=0; i < ${#messages[@]} ; i++)); do + echo ${messages[$i]} + done + echo -e "$errormsg" + if [ "$reportspace" == "yes" ]; then + previous="" + for i in $(ls "$configdirectory"); do + backuploc=$(grep ^directory "$configdirectory"/"$i" | @AWK@ '{print $3}') + if [ "$backuploc" != "$previous" -a -n "$backuploc" -a -d "$backuploc" ]; then + df -h "$backuploc" + previous="$backuploc" + fi + done + fi + } | mail -s "backupninja: $hostname $subject" $reportemail fi if [ $actions_run != 0 ]; then - info "FINISHED: $actions_run actions run. $fatals fatal. $errors error. $warnings warning." - if [ "$halts" != "0" ]; then - info "Backup was halted prematurely. Some actions may not have run." - fi + info "FINISHED: $actions_run actions run. $fatals fatal. $errors error. $warnings warning." + if [ "$halts" != "0" ]; then + info "Backup was halted prematurely. Some actions may not have run." 
+ fi fi if [ -n "$reporthost" ]; then - debug "send $logfile to $reportuser@$reporthost:$reportdirectory" - rsync -qt $logfile $reportuser@$reporthost:$reportdirectory + debug "send $logfile to $reportuser@$reporthost:$reportdirectory" + rsync -qt $logfile $reportuser@$reporthost:$reportdirectory fi diff --git a/src/ninjahelper.in b/src/ninjahelper.in index 040727b..2a7faa9 100755 --- a/src/ninjahelper.in +++ b/src/ninjahelper.in @@ -1,5 +1,6 @@ #!@BASH@ # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*- +# vim: set filetype=sh sw=3 sts=3 expandtab autoindent: #################################################### ## Functions @@ -27,7 +28,7 @@ get_next_filename() { require_packages() { for pkg in "$@"; do installed=`dpkg -s $pkg | grep 'ok installed'` - if [ -z "$installed" ]; then + if [ -z "$installed" ]; then booleanBox "install $pkg?" "This backup action requires package $pkg. Do you want to install it now?" if [ $? = 0 ]; then apt-get install $pkg @@ -54,7 +55,7 @@ donew() { listItem $helper_function "$helper_info" done listDisplay menu - + [ $? = 1 ] && return result="$REPLY" [ "$result" = "return" -o "$result" = "" ] && return @@ -62,10 +63,10 @@ donew() { $run_wizard result=$? # 0 is ok, 1 is cancel, anything else is bad. - if [ $result != 1 -a $result != 0 ]; then + if [ $result != 1 -a $result != 0 ]; then echo "An error occurred ($result), bailing out. Hit return to continue." read - fi + fi } do_rm_action() { @@ -84,17 +85,17 @@ do_run() { do_xedit() { if [ -z "$EDITOR" -o ! -x "`which $EDITOR`" ]; then if [ -h /etc/alternatives/editor -a -x "`readlink /etc/alternatives/editor`" ]; then - EDITOR="`readlink /etc/alternatives/editor`" + EDITOR="`readlink /etc/alternatives/editor`" elif [ -x "`which nano`" ]; then - EDITOR="`which nano`" + EDITOR="`which nano`" elif [ -x "`which vim`" ]; then - EDITOR="`which vim`" + EDITOR="`which vim`" elif [ -x "`which vi`" ]; then - EDITOR="`which vi`" + EDITOR="`which vi`" else - echo "No suitable editor found." - echo "Please define $EDITOR or configure /etc/alternatives/editor." - exit + echo "No suitable editor found." + echo "Please define $EDITOR or configure /etc/alternatives/editor." + exit fi fi $EDITOR $1 @@ -138,17 +139,17 @@ doaction() { name "change the filename" \ run "run this action now" \ test "do a test run" \ - kill "remove this action" + kill "remove this action" [ $? = 1 ] && return; result="$REPLY" case "$result" in - "view") dialog --textbox $action 0 0;; + "view") dialog --textbox $action 0 0;; "xedit") do_xedit $action;; "disable") do_disable $action; return;; "enable") do_enable $action; return;; "name") do_rename $action; return;; "run") do_run $action;; - "test") do_run_test $action;; + "test") do_run_test $action;; "kill") do_rm_action $action; return;; "main") return;; esac @@ -178,7 +179,7 @@ fi # bootstrap conffile="@CFGDIR@/backupninja.conf" if [ ! -r "$conffile" ]; then - echo "Configuration file $conffile not found." + echo "Configuration file $conffile not found." exit 1 fi @@ -188,12 +189,12 @@ if [ -z "$libdirectory" ]; then if [ -d "@libdir@" ]; then libdirectory="@libdir@" else - echo "Could not find entry 'libdirectory' in $conffile." + echo "Could not find entry 'libdirectory' in $conffile." exit 1 fi else if [ ! -d "$libdirectory" ]; then - echo "Lib directory $libdirectory not found." + echo "Lib directory $libdirectory not found." 
diff --git a/src/ninjareport.in b/src/ninjareport.in
index b68efb8..4bc244e 100755
--- a/src/ninjareport.in
+++ b/src/ninjareport.in
@@ -1,5 +1,6 @@
 #!@BASH@
 # -*- mode: sh; sh-basic-offset: 3; indent-tabs-mode: nil; -*-
+# vim: set filetype=sh sw=3 sts=3 expandtab autoindent:
 #
 #
 # Ninjareport - generate a single simple report for a lot of hosts
@@ -13,7 +14,7 @@
 # . check for logs that are never updating and warn
 # . change constantly updating logs (maildir) to be more friendly
 # . documentation
-# . maybe make config option that lists all hosts that should report-in, and if
+# . maybe make config option that lists all hosts that should report-in, and if
 #   one doesn't then warn
 # . restrict rsync somehow?
 # . abstract path for logtail
@@ -29,7 +30,7 @@ process() {
 #
 # note: some backups never finish, such as the maildir one, need to handle these
 # perhaps by looking for FAILED messages?
-# note2: what about logs that aren't being updated? this is a failure case and
+# note2: what about logs that aren't being updated? this is a failure case and
 # should be looked for
 # note3: there are also these entries:
 # Jan 20 14:00:01 Fatal: No backup actions configured in '/etc/backup.d', run ninjahelper!
@@ -38,27 +39,27 @@

 logupdates=`maketemp ninjadata`
 /usr/sbin/logtail -f $host > $logupdates

-grep FINISHED $logupdates |
+grep FINISHED $logupdates |
 (
-	fatal=0
-	warning=0
-	error=0
-
-	while read line
-	do
-		line_array=($line)
-		fatal=$(($fatal + ${line_array[8]}))
-		error=$(($error + ${line_array[10]}))
-		warning=$(($warning + ${line_array[12]}))
-	done
-	if (( $fatal || $warning || $error )); then
-		echo "`basename $host .log`: $fatal fatals found, $error errors found, $warning warnings found" >> $statusfile
-		echo "" >> $reportappend
-		echo "`basename $host .log` log entries since last ninjareport" >> $reportappend
-		echo "---------" >> $reportappend
-		cat $logupdates >> $reportappend
-		rm $logupdates
-	fi
+   fatal=0
+   warning=0
+   error=0
+
+   while read line
+   do
+      line_array=($line)
+      fatal=$(($fatal + ${line_array[8]}))
+      error=$(($error + ${line_array[10]}))
+      warning=$(($warning + ${line_array[12]}))
+   done
+   if (( $fatal || $warning || $error )); then
+      echo "`basename $host .log`: $fatal fatals found, $error errors found, $warning warnings found" >> $statusfile
+      echo "" >> $reportappend
+      echo "`basename $host .log` log entries since last ninjareport" >> $reportappend
+      echo "---------" >> $reportappend
+      cat $logupdates >> $reportappend
+      rm $logupdates
+   fi
 )
 }
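The fixed word offsets in process() (line_array[8], [10] and [12]) only make sense against the shape of the FINISHED line that backupninja logs, e.g. "Jan 20 14:00:02 Info: FINISHED: 2 actions run. 0 fatal. 1 error. 0 warning." (date and level prefix as in the note3 example above). Counting words from zero, fields 8, 10 and 12 are the fatal, error and warning counts. A small sketch of the same accounting, with that line format as an assumption:

#!/bin/bash
# Sketch of the per-host accounting done in process() above, assuming log lines
# of the form "Jan 20 14:00:02 Info: FINISHED: 2 actions run. 0 fatal. 1 error. 0 warning."
logfile="$1"
fatal=0; error=0; warning=0
while read line; do
   line_array=($line)                           # split the line on whitespace
   fatal=$(($fatal + ${line_array[8]}))         # word 8:  number of fatals
   error=$(($error + ${line_array[10]}))        # word 10: number of errors
   warning=$(($warning + ${line_array[12]}))    # word 12: number of warnings
done < <(grep FINISHED "$logfile")
echo "`basename $logfile .log`: $fatal fatals, $error errors, $warning warnings"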
@@ -69,31 +70,31 @@ reportfile=`maketemp ninjareport`

 # Generate a report, only if there are failures
 if [ -s $statusfile ]; then
-	echo " backupninja mission failures - `date`" >> $reportfile
-	echo " --------------------------------------------------------------" >> $reportfile
-	echo "" >> $reportfile
-	cat $statusfile | column -t >> $reportfile
-	echo "" >> $reportfile
-	echo " log entries from failed reports" >> $reportfile
-	echo " -----------------------------------" >> $reportfile
-	cat $reportappend >> $reportfile
+   echo " backupninja mission failures - `date`" >> $reportfile
+   echo " --------------------------------------------------------------" >> $reportfile
+   echo "" >> $reportfile
+   cat $statusfile | column -t >> $reportfile
+   echo "" >> $reportfile
+   echo " log entries from failed reports" >> $reportfile
+   echo " -----------------------------------" >> $reportfile
+   cat $reportappend >> $reportfile
 fi
 }

 usage() {
-	cat << EOF
+   cat << EOF
 This script generates a backupninja status report for all configured
 systems. It requires that each status report is placed in a spot where
 ninjareport can read it, reports are mailed to the reportemail configured
 in @CFGDIR@/backupninja.conf.

 The following options are available:
--h, --help             This usage message
--f, --conffile FILE    Use FILE for the configuration instead
-                       of @CFGDIR@/backupninja.conf
--m, --mail <email>     Mail the report to this address
--o, --out              Don't mail the report, just display it
+-h, --help             This usage message
+-f, --conffile FILE    Use FILE for the configuration instead
+                       of @CFGDIR@/backupninja.conf
+-m, --mail <email>     Mail the report to this address
+-o, --out              Don't mail the report, just display it
 EOF
 }
@@ -106,45 +107,45 @@ conffile="@CFGDIR@/backupninja.conf"

 ## process command line options
 while [ $# -ge 1 ]; do
-	case $1 in
-		-h|--help)
-			usage
-			exit 0
-			;;
-		-f|--conffile)
-			if [ -f $2 ]; then
-				conffile=$2
-			else
-				echo "-f|--conffile option must be followed by an existing filename"
-				fatal "-f|--conffile option must be followed by an existing filename"
-				usage
-			fi
-			# we shift here to avoid processing the file path
-			shift
-			;;
-		-m|--mail)
-			reportemail=$2
-			shift
-			;;
-		-o|--out)
-			mail=0
-			display=1
-			;;
-		*)
-			echo "Unknown option $1"
-			usage
-			exit
-			;;
-	esac
-	shift
-done
+   case $1 in
+      -h|--help)
+         usage
+         exit 0
+         ;;
+      -f|--conffile)
+         if [ -f $2 ]; then
+            conffile=$2
+         else
+            echo "-f|--conffile option must be followed by an existing filename"
+            fatal "-f|--conffile option must be followed by an existing filename"
+            usage
+         fi
+         # we shift here to avoid processing the file path
+         shift
+         ;;
+      -m|--mail)
+         reportemail=$2
+         shift
+         ;;
+      -o|--out)
+         mail=0
+         display=1
+         ;;
+      *)
+         echo "Unknown option $1"
+         usage
+         exit
+         ;;
+   esac
+   shift
+done

 ## Load and confirm basic configuration values

 # bootstrap
 if [ ! -r "$conffile" ]; then
-	echo "Configuration file $conffile not found."
-	fatal "Configuration file $conffile not found."
+   echo "Configuration file $conffile not found."
+   fatal "Configuration file $conffile not found."
 fi

 # find $libdirectory
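Given the option parsing above, typical invocations look like the following; the alternate configuration path and the mail address are placeholders, and with no options the report goes to the reportemail set in @CFGDIR@/backupninja.conf:

# mail the failure report to the configured reportemail (default behaviour)
ninjareport

# read settings from an alternate configuration file and only print the report
ninjareport --conffile /path/to/backupninja.conf --out

# mail the report to a specific address instead of the configured one
ninjareport -m admin@example.org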
@@ -153,12 +154,12 @@ if [ -z "$libdirectory" ]; then
    if [ -d "@libdir@" ]; then
       libdirectory="@libdir@"
    else
-      echo "Could not find entry 'libdirectory' in $conffile."
+      echo "Could not find entry 'libdirectory' in $conffile."
       exit 1
    fi
 else
    if [ ! -d "$libdirectory" ]; then
-      echo "Lib directory $libdirectory not found."
+      echo "Lib directory $libdirectory not found."
       exit 1
    fi
 fi

@@ -176,17 +177,17 @@ getconf reportemail

 hosts=`find $reportdirectory -follow -mindepth 1 -maxdepth 1 -type f ! -name '*.offset' | sort -n`

 if [ -z "$hosts" ]; then
-	echo "Fatal: No backupninja reports found in '$reportdirectory'!"
-	mail=0
+   echo "Fatal: No backupninja reports found in '$reportdirectory'!"
+   mail=0
 fi

 statusfile=`maketemp ninjastatus`
 reportappend=`maketemp ninjaappend`

 for host in $hosts; do
-	[ -f "$host" ] || continue
-	# Check somehow that the file is a valid report file
-	process $host
+   [ -f "$host" ] || continue
+   # Check somehow that the file is a valid report file
+   process $host
 done

 generatereport

@@ -194,12 +195,11 @@ generatereport

 ## mail the report to the report address or display it
 if [ -s $reportfile ]; then
-	if [ $mail == 1 ]; then
-		mail -s "backupninja mission failure report" $reportemail < $reportfile
-	fi
+   if [ $mail == 1 ]; then
+      mail -s "backupninja mission failure report" $reportemail < $reportfile
+   fi
 fi

 if [ $display == 1 ]; then
-	cat $reportfile
+   cat $reportfile
 fi
-