I have been using XenServer now for a few years. I was using Hyper-V and VMware, but I inherited a bunch of VMs on a few XenServer hosts and decided I would just work with Xen and see how I like it.
I’ve developed a love/hate relationship with XenServer. But on the love side, I can say that it’s pretty rock solid; Hyper-V was definitely not.
I use the free version of XenServer and this whole backup thing can be run in the free version.
A user emailed me with this set of corrections, so be aware. I don’t use XenServer anymore, and XenServer 6 is EOL, but I thought I’d paste them here:
– dbtool is not installed by default. I had to download it. It’s free but it would have been so much easier to just have the link to it.
– in the section where you give execute attribute to the files you point to a ‘dbutil’ file. This file doesn’t exist anymore and isn’t used in the script. You should scrap that line 🙂
– in the first lines of vm_backup.sh you call upon ‘vm_backup.lib’, which is nonexistent. Again, put a link to the file location 🙂
– when editing ssmtp.conf you should have ‘UseTLS=YES’ AND ‘UseSTARTTLS=YES’.
1. Notes and References, etc
Here are a few notes to begin with:
I chose to backup to an NFS share rather than iSCSI because I could have a single point of backup for my many servers.
The existing writeups were a little lacking on some points, so I decided to make it as easy as possible for others; as well as give myself a place to use for expansion and reference.
I intend to update this when I find bugs or new XenServer editions are released.
Also, remember that this script runs in Dom0 on a XenServer, so use at your own risk. I did all my testing on some non production machines before I went production with this.
This is a no downtime script set, it backs up the VM “live” by making a snapshot, and then exporting that snapshot to a mount point on Dom0.
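That live snapshot/export cycle is the heart of the whole thing. Here is a minimal sketch of it as a standalone function, assuming the standard xe CLI in Dom0 — the function name and variables are mine for illustration, not the actual code from vm_backup.lib:

```shell
#!/bin/bash
# Hypothetical sketch of the live-backup cycle; not the actual vm_backup.lib code.
backup_one_vm() {
    local vm_uuid="$1" name="$2"
    local snap_uuid
    # 1. Take a live snapshot -- the guest keeps running the whole time
    snap_uuid=$(xe vm-snapshot uuid="$vm_uuid" new-name-label="${name}-backup")
    # 2. Snapshots are templates; clear the flag so the snapshot can be exported
    xe template-param-set is-a-template=false uuid="$snap_uuid"
    # 3. Export the snapshot to the NFS mount as an .xva file
    xe vm-export vm="$snap_uuid" filename="/mnt/backup/${name}.xva"
    # 4. Destroy the snapshot so it doesn't eat space in the SR
    xe vm-uninstall uuid="$snap_uuid" force=true
}
# Example invocation on a real host (commented out so this sketch is safe to source):
# backup_one_vm "$(xe vm-list name-label=sandbox --minimal)" sandbox
```

The guest never stops; only the snapshot, which is frozen, gets exported.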
Any time you see xxxx, this is just me crossing out my specific names, and ask you to edit in your names and IP addresses/etc.
Here is a link to the scripts that I have put up on Dropbox.
**1/11/15 updated the link to a current version**
2. The Layout and Explanation of Ideas
So, as you might have read, this resides in Dom0 of the XenServer. That means it will back up that particular Xen host and its guests, or if the host resides in a pool, you can back up that host, other hosts in the pool, and their guests as well. Keep in mind how you structure your network layout, as this could put a heavy strain on some networks while the scripts are running.
This also means that if you mess up some syntax in a portion of the script, you could shut the Xen host down by filling up Dom0 with data and making Xen angry (this happened to me when I missed a trailing / in one portion of the script).
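Since a typo like that missing slash can dump exports onto Dom0's tiny root filesystem, one cheap safeguard (my own addition, not part of the script set) is to refuse to run unless the NFS share is really mounted:

```shell
#!/bin/bash
# Guard sketch: abort a backup if the target is not actually a mount point,
# so a failed NFS mount can't silently fill up Dom0.
require_mount() {
    if ! mountpoint -q "$1"; then
        echo "$1 is not mounted; aborting backup" >&2
        return 1
    fi
}
# Example: put near the top of each script
# require_mount /mnt/backup || exit 1
```

`mountpoint` ships with util-linux, so it should already be present in Dom0.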
So this is how the scripts are organized — the orange/white parts are the ones you will eventually need to modify to fit your environment.
You have Audit.sh:
/opt/backup/dbtool -a -v /var/xapi/state.db > /mnt/backup/auditlog-xxxx.txt
rm -f /opt/backup/message.tmp
cat /opt/backup/mailheader.txt > /opt/backup/message.tmp
echo "Subject: Auditlog from Xen Cluster" >> /opt/backup/message.tmp
echo "Here is the auditlog from the Xen Cluster" >> /opt/backup/message.tmp
cat /mnt/backup/auditlog-xxxx.txt >> /opt/backup/message.tmp
/usr/sbin/ssmtp email@example.com < /opt/backup/message.tmp
This file runs a simple audit of the XenServer host the backup is running on, writes it to the NFS share, and then emails it to the administrator, with output on critical systems of the XenServer, information about attached SRs, etc. Plenty of info to keep you busy for years.
Second you have Cleanup.sh:
#!/bin/bash
umount /mnt/backup
mount -t nfs 10.x.x.x:/iSCSI/xxxx /mnt/backup
sleep 20
#How many copies to keep
#If you want 3 copies, uncomment keep 3 AND keep 2
#If you want 4 copies, uncomment keep 4 AND keep 3 AND keep 2
#etc.
#Keep four copies
#find /mnt/backup/*.xva -type f -mtime 3 -delete
#Keep three copies
#find /mnt/backup/*.xva -type f -mtime 2 -delete
#Keep two copies
find /mnt/backup/*.xva -type f -mtime 1 -delete
#Keep one copy only
#Use ONLY the line below when drive isn't big enough for two copies anymore
#Removes all backup copies so we can start fresh every time, somewhat dangerous...
#rm -f /mnt/backup/*.xva
#If you are keeping more than one, don't uncomment this line
rm -f /opt/backup/message.tmp
cat /opt/backup/mailheader.txt > /opt/backup/message.tmp
echo "Subject: Cleaned Backup Drive" >> /opt/backup/message.tmp
echo "I cleaned up the backup drive" >> /opt/backup/message.tmp
ls /mnt/backup >> /opt/backup/message.tmp
df -h >> /opt/backup/message.tmp
/usr/sbin/ssmtp firstname.lastname@example.org < /opt/backup/message.tmp
This file will run right after Audit.sh and will (as you can see) do this: unmount your NFS share, remount it, wait, then clean old .xva backups from the NFS share. After it has done this, it will write a temp message to the share and send the administrator an email.
Doing this will prep your backup share to receive the new XVA files.
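The retention logic above hinges on find's -mtime test, which counts whole 24-hour periods — and the bare form (-mtime 1) matches only files in the 24–48 hour window, while +N matches files strictly older than N+1 days. A quick sandbox demo (temp files only, nothing touches your share):

```shell
#!/bin/bash
# Sandbox demo of find -mtime semantics (assumes GNU find/touch, as in Dom0).
tmp=$(mktemp -d)
touch -d "2 hours ago"  "$tmp/new.xva"      # age 0 days
touch -d "30 hours ago" "$tmp/day_old.xva"  # age 1 day
touch -d "5 days ago"   "$tmp/ancient.xva"  # age 5 days
# "-mtime +1" deletes files strictly older than two full days
find "$tmp" -name '*.xva' -type f -mtime +1 -delete
ls "$tmp"   # new.xva and day_old.xva survive; ancient.xva is gone
```

So if what you actually want is "delete everything older than a day," the +N form is the one to reach for; test against scratch files like this before pointing it at the real share.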
Third, you have Meta-Backup.sh:
#!/bin/bash
rm -f /mnt/backup/xxxx.bak
rm -f /mnt/backup/xxxx-metadata.bak
#rm -f /mnt/backup/xxxx2-host-backup.bak
xe host-backup file-name=/mnt/backup/xxxx.bak host=xxxx
xe pool-dump-database file-name=/mnt/backup/xxxx-metadata.bak
#xe host-backup file-name=/mnt/backup/xxxx2-host-backup.bak host=xxxx2
rm -f /opt/backup/message.tmp
cat /opt/backup/mailheader.txt > /opt/backup/message.tmp
echo "Subject: Metadata Backed up" >> /opt/backup/message.tmp
ls /mnt/backup/*.bak >> /opt/backup/message.tmp
/usr/sbin/ssmtp email@example.com < /opt/backup/message.tmp
This bash script will back up the actual XenServer host and the XenServer metadata. Note that while the Xen Dom0 is being backed up, it first DELETES the old copy, then backs up, so there is a point in time when the XenServer Dom0 backup is vulnerable. In the future, I think it would be good to make a second copy of Dom0 and keep one on hand, like we do with the guest VM backups in the other portions of the script.
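One hedged way to close that window, until the script grows proper rotation, is to move the old .bak aside instead of deleting it. Demoed here in a temp directory; on the real host, backup_dir would be /mnt/backup:

```shell
#!/bin/bash
# Demo of rotating the previous host backup instead of deleting it outright.
backup_dir=$(mktemp -d)          # stands in for /mnt/backup in this demo
touch "$backup_dir/xxxx.bak"     # pretend this is last run's host backup
# Rotate the old copy aside so there is never a moment with zero backups
[ -f "$backup_dir/xxxx.bak" ] && mv -f "$backup_dir/xxxx.bak" "$backup_dir/xxxx.bak.prev"
# ...then take the new backup exactly as before:
# xe host-backup file-name="$backup_dir/xxxx.bak" host=xxxx
ls "$backup_dir"
```

Worst case, the new backup fails and you still have yesterday's .bak.prev to restore from.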
Fourth, you have mailheader.txt:
To: firstname.lastname@example.org
From: email@example.com
This is obviously a simple mail header that directs where the mail in the scripts gets sent to. Fill out appropriately.
Fifth, you have vm_backup.cfg:
#!/bin/bash
# Set log path
log_path="/mnt/backup/vm_backup.log"
# Enable logging
# Remove to disable logging
log_enable
# Local backup directory
backup_dir="/mnt/backup/"
# Backup extension
# .xva is the default Citrix template/vm extension
backup_ext=".xva"
# Which VMs to backup. Possible values are:
#   "all"     - Backup all VMs
#   "running" - Backup all running VMs
#   "list"    - Backup all VMs in the backup list (see below)
#   "none"    - Don't backup any VMs, this is the default
backup_vms="all"
# VM backup list
# Only VMs in this list will be backed up when backup_vms="list"
# You can add VMs to the list using: add_to_backup_list "uuid"
# Example:
# add_to_backup_list "2844954f-966d-3ff4-250b-638249b66313"
# Current Date
# This is appended to the backup file name and the format can be changed here
# Default format: YYYY-MM-DD_HH-MM-SS
date=$(date +%Y-%m-%d_%H-%M-%S)
This gives the parameters for backing up the VMs and is called by vm_backup.sh. One example edit here is changing backup_vms from "all" to something else to speed up testing; this way you can try backing up a small, non-essential sandbox VM before you deploy everywhere.
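For example, a test pass that backs up just one sandbox VM might swap these lines into vm_backup.cfg — the UUID shown is the placeholder from the file's own comments; get your real one with xe vm-list:

```shell
# Hypothetical vm_backup.cfg edit for testing against a single sandbox VM
backup_vms="list"
add_to_backup_list "2844954f-966d-3ff4-250b-638249b66313"   # uuid of a sandbox VM
```

Once that one VM exports cleanly to the share, flip backup_vms back to "all".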
Sixth, we have vm_backup.sh:
#!/bin/bash
# Get current directory
dir=`dirname $0`
# Load functions and config
. $dir"/vm_backup.lib"
. $dir"/vm_backup.cfg"
# Switch backup_vms to set the VM uuids we are backing up in vm_backup_list
case $backup_vms in
  "all")
    if [ $vm_log_enabled ]; then
      log_message "Backup All VMs"
    fi
    set_all_vms
    ;;
  "running")
    if [ $vm_log_enabled ]; then
      log_message "Backup running VMs"
    fi
    set_running_vms
    ;;
  "list")
    if [ $vm_log_enabled ]; then
      log_message "Backup list VMs"
    fi
    ;;
  *)
    if [ $vm_log_enabled ]; then
      log_message "Backup no VMs"
    fi
    reset_backup_list
    ;;
esac
# Backup VMs
backup_vm_list
# End
if [ $vm_log_enabled ]; then
  log_disable
fi
This file makes the calls for the backup to happen.
Lastly, we have DBTOOL, an example crontab, and vm_backup.lib. DBTOOL runs when you do the audit and writes an audit of the XenServer out to a text file.
After the scripts, we have the basic idea of how this works.
1. The local Dom0 on the XenServer runs cron.
2. The cleanup.sh file runs and removes old data and emails regarding what happened.
3. The audit.sh runs and outputs the info via email to the admin.
4. The meta-backup.sh runs and backs up Dom0/XenServer database and structure.
5. The vm_backup runs and backs up the VMs running on the host.
a. enters the right dir
b. calls the .lib and the .cfg files
c. finds out what needs to be backed up (all, specific, etc., as defined in vm_backup.cfg)
d. calls for a backup (via live snapshot/export)
e. outputs to a log
3. Installation, Prerequisites, and Emotional Support
The first thing you want to do is create a share for this to mount to. I am using a Windows 2008 R2 server that is connected via iSCSI to a SAN running FreeNAS.
Once you have the NFS share up, make note of your IP and mount point. This will come in handy when we do the NFS mount in XenServer.
Get on your XenServer machine via SSH and create a mount point for the share. This can be in any location, but the scripts call for it here, so for ease of this tutorial, I put the mount at /mnt/backup.
[root@xxxx ~]# mkdir -p /mnt/backup
[root@xxxx ~]#
Next, let's make a temp mount of this remote NFS share on the local machine.
[root@xxxx ~]# mount -t nfs 10.x.x.x:/iSCSI/x /mnt/backup
[root@xxxx ~]#
Notice I used the IP address of the share and the folder within the share. I had premade this folder inside that share on the Windows machine specifically for this XenServer to use.
You can try creating and deleting some test folders/files in the share to make sure that permissions are set up right. If you have problems writing or deleting, head back to Windows and figure out what you did wrong.
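Here is a scripted version of that permissions check. The demo below runs against a temp directory so it's safe anywhere; on the actual XenServer, point target at /mnt/backup:

```shell
#!/bin/bash
# Smoke test: can we create and delete files/dirs at the backup target?
target=$(mktemp -d)              # use target=/mnt/backup on the actual XenServer
touch "$target/perm_test.txt" \
  && mkdir -p "$target/perm_test_dir" \
  && rm -rf "$target/perm_test.txt" "$target/perm_test_dir" \
  && echo "write/delete OK on $target"
```

If any of those steps fail against the real mount, fix the NFS permissions before going further.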
Now we create the /opt/backup folder and put all our files inside.
[root@xxxx backup]# mkdir /opt/backup
[root@xxxx backup]#
Now we get all our files into the /opt/backup folder.
I just copied my files to the NFS share that is mounted and copied them over.
[root@xxxx backup]# cd /opt/backup
[root@xxxx backup]# cp /mnt/backup/* /opt/backup
[root@xxxx backup]# ls
audit.sh  cleanup.sh  crontab.txt  dbtool  mailheader.txt  meta-backup.sh  vm_backup.cfg  vm_backup.lib  vm_backup.sh
Now that you have your files, systematically edit them (audit.sh, cleanup.sh, mailheader.txt, meta-backup.sh, and MAYBE vm_backup.cfg) and fill in the right IP addresses and locations.
Go back up to the top of this article and see the White/Orange changes to the scripts that I linked in-line to see what needs to be changed.
Start with vi and edit them appropriately.
[root@xxxx backup]# vi audit.sh
Once all those files are modified to fit your environment, let's get the dbtool executable into the right place, and make all the scripts executable while we're at it.
[root@HSM-XEN-1 backup]# cd /opt/backup
[root@HSM-XEN-1 backup]# chmod +x *.sh
[root@HSM-XEN-1 backup]# chmod +x dbtool
[root@HSM-XEN-1 backup]# cp ./dbtool /sbin/dbtool
[root@HSM-XEN-1 backup]# chmod +x /sbin/dbtool
Now that that is done, we want to make sure that email can flow to the proper destinations via ssmtp.
I'll make a note here that I use Gmail for my notifications. Even though I have a local mail server, it's good form to have notifications running outside the local mail server, in case, say, the local mail server dies during a backup.
So to counter this, I just use a Gmail account to send the notices out through.
Let's edit the /etc/ssmtp/ssmtp.conf file.
[root@HSM-XEN-1 backup]# vi /etc/ssmtp/ssmtp.conf
In here, the only options that you want for Gmail are:
root=firstname.lastname@example.org
mailhub=smtp.gmail.com:587
AuthUser=xxxx
AuthPass=xxxx
UseTLS=YES
UseSTARTTLS=YES
With those options set (and everything else commented out), email should flow.
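To sanity-check the mail path the same way the scripts do, you can assemble a message by hand and pipe it through ssmtp. The ssmtp call itself is commented out below so the sketch is safe to run anywhere:

```shell
#!/bin/bash
# Build a message in the same To/From/Subject shape the scripts assemble.
msg=$(mktemp)
cat > "$msg" <<'EOF'
To: firstname.lastname@example.org
From: email@example.com
Subject: ssmtp test from Dom0

If you can read this, ssmtp.conf is working.
EOF
# /usr/sbin/ssmtp firstname.lastname@example.org < "$msg"   # uncomment on the XenServer
cat "$msg"
```

If the uncommented ssmtp line hangs or errors, check /var/log/maillog for the handshake with smtp.gmail.com.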
Lastly, we want to put the mount in fstab so the share is mounted each time the XenServer starts.
[root@xxxx ~]# vi /etc/fstab
10.x.x.x:/iSCSI/xxxx /mnt/backup nfs _netdev 0 0
You’ll obviously need to change the variables from xxxx to the IP/location of your share.
4. Testing Your Work
First up, let's run a test of cleanup.sh:
[root@xxxx backup]# bash cleanup.sh
This should complete with no console output, and no errors, then send an email to your email specified in the configs.
If it works like it should, run audit.sh using the same command above, substituting audit.sh for cleanup.sh.
If you have any trouble, check the logs.
[root@xxxx ~]# tail -f /var/log/maillog
[root@xxxx ~]# tail -f /var/log/messages
I find it helpful to open another ssh window and have it just look at logs, and then run my scripts. This helps isolate where and when the problems are being encountered.
Also, you want to make sure that the output is being directed to the NFS share.
If that is working, go ahead and run the meta-backup.sh, like the audit.sh and cleanup.sh above.
You should start seeing the backup being written to the NFS share. Let this run. Depending on your network and all that, it could take some time. My XenServer backups are only 700 MB or so; they take me about 10 minutes.
If your XenServer backup is successful, you want to move onto a VM or group of VM’s.
Edit vm_backup.cfg, where you can specify all or specific VMs. Make sure it's set the way you want, then give it a shot by running vm_backup.sh.
This could take a long time depending on your environment, as we are backing up entire VMs.
Again, this takes a snapshot and exports the snapshot to the share. The export only contains the used space on the VM, so if you have a 200 GB disk with only 10 MB used, your output file will only be about 10 MB.
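A quick sparse-file analogy of why the .xva only costs you used space — this is just an illustration with GNU coreutils, not how xe vm-export actually works internally:

```shell
#!/bin/bash
# A "big disk, tiny usage" situation in miniature: apparent vs actual size.
f=$(mktemp)
truncate -s 200M "$f"                                      # a "200MB disk", nothing written
printf 'real data' | dd of="$f" conv=notrunc status=none   # a few bytes actually used
ls -lh "$f"   # apparent size: 200M
du -h "$f"    # blocks actually on disk: a few KB at most
```

Same idea at VM scale: empty blocks are never written, so the exported file stays small.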
5. Setting Up Recurring Schedules
At this point, if the VM backup and Dom0 backup ran well, it's time to set the job to run on a schedule.
This is done through cron.
There is an example crontab.txt file included with the scripts that runs them in order.
Ordering is important, so timing is important.
You should get a stopwatch and time how long each of these scripts takes, so we can hardwire that into the cron.
Cron is a simple scheduler. Each job runs in a line in the /etc/crontab file.
Here is my default /etc/crontab on XenServer 6.1:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
Under the run-parts section, the entries are laid out like this:
Minute | Hour | Day | Month | DayOfWeek | user | ThingToDo
For example, the first one (/etc/cron.hourly) runs minute 1 (01) of every hour (*), every day (*), every month (*), every day of the week (*).
So the /etc/cron.monthly will run minute 42, 4th hour, 1st day of the month, any day of the week. i.e. 4:42am every first day of the month.
Remember, this is military time.
There is a great writeup here that makes cron easy.
But for us, I’d like my cron to run every other day. This will give me some sort of DR on the servers. I want it to run starting at 11pm any day of the week. So my cron will look like this:
0 23 */2 * * root /opt/backup/cleanup.sh
10 23 */2 * * root /opt/backup/audit.sh
13 23 */2 * * root /opt/backup/meta-backup.sh
59 23 */2 * * root /opt/backup/vm_backup.sh
This cron set will start my cleanup script at 11pm every other day, then audit will run at 11:10pm followed by meta-backup.sh at 11:13pm, and finally vm_backup at 11:59pm.
The next step is to hop into /etc/crontab and insert your backup schedule under the existing text.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
0 23 */2 * * root /opt/backup/cleanup.sh
10 23 */2 * * root /opt/backup/audit.sh
13 23 */2 * * root /opt/backup/meta-backup.sh
59 23 */2 * * root /opt/backup/vm_backup.sh
That should be your first step toward a low-cost backup solution for DR of the XenServer. I could see this being useful for offline migrations across pools as well.
Remember, this is VERY important: updates have the potential to wipe out Dom0! So when you are updating your XenServer to the latest version, make sure you BACKUP!
Back up these files/locations: /opt/backup, /etc/crontab, /etc/fstab, /etc/ssmtp/ssmtp.conf, and /sbin/dbtool.
These are the areas of the disk we have actually modified, so make sure you don't have to kick yourself after an upgrade.
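A hedged one-liner for grabbing those locations before an upgrade. Demoed here against stand-in paths in a temp directory; on the real host you would tar the actual /opt/backup, /etc/crontab, /etc/fstab, /etc/ssmtp/ssmtp.conf, and /sbin/dbtool:

```shell
#!/bin/bash
# Demo: archive the locations this guide modified so an upgrade can't eat them.
stage=$(mktemp -d)               # stands in for / on the real host
mkdir -p "$stage/opt/backup" "$stage/etc/ssmtp" "$stage/sbin"
touch "$stage/opt/backup/vm_backup.sh" "$stage/etc/crontab" \
      "$stage/etc/fstab" "$stage/etc/ssmtp/ssmtp.conf" "$stage/sbin/dbtool"
tar -C "$stage" -czf "$stage/dom0-config.tar.gz" opt etc sbin
tar -tzf "$stage/dom0-config.tar.gz"   # list what made it into the archive
```

Drop the resulting tarball on the NFS share alongside the .xva files and you can rebuild your setup even if the upgrade wipes Dom0.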
Once you’ve done this a few times, it should be a snap to get backups running again in about 10 minutes with no downtime.