set the backuptag on crabgrass to not use the default one, otherwise
it won't be collected
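Presumably the backup server collects exported client configs by tag, so a client whose backuptag does not match what the server collects is silently skipped. A rough sketch of what the crabgrass node might set, assuming a backupninja::rdiff define; only the backuptag name comes from the commit, while the node names, the tag value and the rdiff parameters are placeholders:

    node 'crabgrass.example.org' {
      # "backuptag" is the variable named in the commit; the value here is a
      # placeholder for whatever tag the backup server actually collects.
      $backuptag = 'backupninja-backuphost.example.org'

      # Placeholder rdiff resource; define name and parameters are illustrative.
      backupninja::rdiff { 'crabgrass':
        host      => 'backuphost.example.org',
        directory => '/var/lib/crabgrass',
      }
    }

On the server side the exported snippets would then be realized by a collector keyed on that tag (something along the lines of File <<| tag == $backuptag |>>), which is why a mismatched tag means the client never shows up in the collected config.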
add vsnames support to mysql config

as true

been evaluated at this point

when you have two rdiff-backup calls, the User becomes a duplicate
definition. I think the only way around it is to have a variable you can
set to turn off user handling (we make $installuser and $usermanage
consistent with $installkey and $keymanage). It is set true by default,
but when you have a second rdiff call on a host you can set it to false.
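A hypothetical usage sketch of that flag, assuming it ends up as a parameter on a backupninja::rdiff define (it could just as well be a node-scope variable); the define name, hostnames and directories are placeholders, and only $installuser/$usermanage come from the commit:

    # First rdiff call on the host manages the backup user; the second turns
    # user handling off so the User resource is not declared twice.
    backupninja::rdiff { 'system':
      host      => 'backup.example.org',
      directory => '/var',
      # installuser / usermanage default to true, so the User is declared here
    }

    backupninja::rdiff { 'extra':
      host        => 'backup.example.org',
      directory   => '/srv',
      installuser => false,   # second call: skip user handling, no duplicate User
    }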
change the $host variable to no longer use the $user variable, so that
multiple rdiffs can be instantiated for a particular host with different
name variables.
In order to do that, the class also has to have the $installkeys variable,
which can be passed to backupninja::client::key so that the second rdiff
instantiation can set installkeys => false; otherwise there will be an
attempt to duplicate the resource.
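A rough sketch of the pass-through this describes; the parameter lists and resource titles are placeholders, and only $installkeys and backupninja::client::key come from the commit:

    # The rdiff define forwards its $installkeys flag and only declares the
    # per-host key resource when the flag is set, so a second instantiation on
    # the same host can opt out instead of duplicating the resource.
    define backupninja::rdiff($host, $directory, $installkeys = true) {
      if $installkeys {
        # Title and (omitted) parameters of client::key are placeholders.
        backupninja::client::key { "backup-${::fqdn}": }
      }
      # ... rdiff handler configuration for $directory, pointed at $host ...
    }

    # The second instantiation for the host then disables key installation:
    backupninja::rdiff { 'homes':
      host        => 'backup.example.org',
      directory   => '/home',
      installkeys => false,
    }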
differentiated by their name variable

get rid of the ninjareport directory
configure v.admin's ssh-key auth to kakapo
move v.admin's backups into the /crypt/serverbackups directory on kakapo
setup puffin to do ninjareport
fix backupserver_tag

declaration doesn't agree with your $fqdn

$backupdir, not $backupdir_override

move backup keys into a 'backupkeys' directory
make a ninjareport directory to contain the pieces for ninjareport
move the backupninja::server definition for kakapo into site.pp, including
the $backupdir definition
add a backupkeys piece to the backupninja::server module which should
install the key
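A rough sketch of what that arrangement might look like; the node name, the key destination and the file names are guesses, while backupninja::server, $backupdir, the 'backupkeys' directory and the /crypt/serverbackups path (from the earlier kakapo commit) are taken from the log:

    # site.pp piece: kakapo carries the backupninja::server definition together
    # with its $backupdir. Exactly how the class consumes $backupdir is an
    # assumption about the module.
    node 'kakapo.example.org' {
      $backupdir = '/crypt/serverbackups'
      include backupninja::server
    }

    # The "backupkeys" piece inside backupninja::server might be little more
    # than a file resource installing a key shipped in the module's backupkeys
    # directory; destination, ownership and file names here are guesses.
    file { '/var/lib/backupninja/.ssh/authorized_keys':
      source => 'puppet:///modules/backupninja/backupkeys/authorized_keys',
      owner  => 'backupninja',
      mode   => '0600',
    }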
and then made better later.

The ninjareport is a pretty bad way of collecting all the backupninja
reports into one email called the 'ninjareport', which contains all the
hosts with backupninja problems and a notice about any hosts that
haven't reported in for a while.

The way it works is that each participating host has a limited-command ssh
key installed in /root/.ssh/id_rsa which enables it to rsync its
/var/log/backupninja.log to
ninja@admin:/var/lib/backupninja/reports/$hostname.log. After the
backup runs on the host, it does this rsync.

On v.admin, there is a /var/lib/backupninja/.ssh/authorized_keys which
has the ssh public key part of each host's key, command-limited to
only allow an rsync of its backupninja.log file. A cronjob there runs
once a day, collects the new logs, does some basic figuring and then
gathers it all into one email.

There are *many* ways this needs to be improved. From the puppet side
the approach is all wrong: there is a lot of duplicated configuration
being done, it needs to be turned into a define, and export and
collection need to be used. The whole thing needs to be redesigned from
the ground up, but I don't have the mental space to do it right now,
and it needs to work :(
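The server-side half of that could be expressed with stock Puppet types. A rough sketch, assuming placeholder key material, script paths and schedule; the ninja account, the authorized_keys location, the reports directory and the daily aggregation are taken from the commit message:

    # One command-limited entry per participating host in
    # /var/lib/backupninja/.ssh/authorized_keys (the ninja account's home being
    # /var/lib/backupninja), so a host can only rsync its own backupninja.log.
    ssh_authorized_key { 'ninjareport-client1.example.org':
      user    => 'ninja',
      type    => 'ssh-rsa',
      key     => 'AAAAB3Nza...',    # placeholder public key material
      options => [
        'command="/usr/local/bin/validate-rsync client1.example.org"',  # hypothetical wrapper
        'no-pty',
        'no-port-forwarding',
      ],
    }

    # Daily job that reads /var/lib/backupninja/reports/*.log, flags hosts with
    # problems or hosts that have not reported in a while, and sends one mail.
    cron { 'ninjareport':
      user    => 'ninja',
      command => '/usr/local/bin/ninjareport',   # placeholder aggregation script
      hour    => 7,
      minute  => 30,
    }

On each client, the post-backup step is then just an rsync of /var/log/backupninja.log to ninja@admin:/var/lib/backupninja/reports/$hostname.log over the key in /root/.ssh/id_rsa, exactly as described above.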
Wed Apr 30 16:22:48 EDT 2008 anarcat@koumbit.org
* remove read permissions on the backup directory for the group so that a
  sandbox user can't read other people's directories

Wed Apr 30 16:01:53 EDT 2008 anarcat@koumbit.org
* default the sandbox host to local ($fqdn) and depend on the sandbox
  directory properly

Thu Mar 13 18:49:38 EDT 2008 anarcat@koumbit.org
* create the backup directory along with the user
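Taken together, those three patches roughly amount to something like the following define; the define name, group and base path are placeholders, while the $fqdn default, creating the directory along with the user, and dropping the group read bit are from the patch descriptions:

    define sandbox_backup($host = $fqdn, $dir = "/var/backups/sandboxes/${name}") {
      # $host defaults to the local $fqdn; it would be used by the (omitted)
      # backup handler configuration.
      user { $name:
        ensure => present,
        home   => $dir,
      }

      # Created together with the user, and not group-readable, so one sandbox
      # user cannot read other people's directories.
      file { $dir:
        ensure  => directory,
        owner   => $name,
        group   => 'backup',   # placeholder group
        mode    => '0710',
        require => User[$name],
      }
    }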