
AMANDA & Services

AMANDA backup in five steps

As with HTTP caching software, there are many applications you can use to back up Linux and other clients or peers. I give no comment on products whose authors do not make them easy to purchase or obtain. When preparing tenders or quotations it is unacceptable to have to haggle over hardware specifics for the purchase of a minor part of a task. Many have offered to make me a reseller; this is just nonsense. I have no interest in selling software and it would not be honest: my role is to provide the best solution, not the one for which I receive the most money.

I started using Arkeia for my last client as I wanted to provide a GUI to make backups and restores easier. It was in fact too complex, so I replaced it with Amanda and it has been nothing but rosy. One department gets a daily email advising what has been backed up and which tape to put in next; the other department receives only two emails a week and has less onerous backup requirements. Restores are infrequent, and it was easy to provide complete documentation for AMANDA’s FTP-like restore interface.

These are the products I have looked at; what follows is only my personal opinion of each:

  • Amanda
    A beautiful piece of software. Improvements seem to come thick and fast, and as Amanda is open source software it is possible to make your own additions and improvements. It is tricky to set up; however, once done it requires minimal management. Amanda provides very useful output at all times, both for management of backups and for debugging during setup.
  • Bacula
    I have not had a good look at this software; however, from a brief overview it looks quite powerful.
  • Tapeware
    I like the fact that Tapeware publish their prices. Many other companies selling backup software need your shoe size and dietary requirements before they will give you a price. Tapeware is cheaper than many; however, I found it totally unacceptable that Tapeware binds to the SCSI interface, stopping anything else from operating on the SCSI chain. Hence it was rejected.
  • Arkeia
    Great in that they offer three licences free, so it is useful for small domestic networks.
    Unfortunately it is expensive with no clear pricing, the logging is poor to non-existent for troubleshooting, and the latest GUI is a useless monster.
  • Netbackup
    No clear pricing.
  • Bru Pro
    No clear pricing.
  • Bakbone
    No clear pricing.
  • Time Navigator
    No clear pricing.
  • SBA
    Pricing is published and much cheaper than products such as Netbackup; however, I have not tried this software, and it would have to be very special to move me away from AMANDA.

The setup of Amanda may be done over a couple of days; there is one quite time consuming process (time consuming for the tape drive, not the operator) that makes life a little easier. This was originally written as a “Five days to Amanda” guide. Often sys admins are busy, and backups are left neglected as other ‘more important’ items are done instead. The idea was to provide a 30 minute job every day and have the backups working by the end of the week :-)


Day One:

Run the command:

amtapetype -f /dev/nst0

This gives an analysis of the tape device and the tape type. This matters because, for example, a Compaq 12/24 GB DAT drive using DDS3 tapes may only provide 10 GB of space for backups. If the AMANDA configuration is told that the tapes will hold 12 GB, you will consistently get failed backups owing to lack of space.

The output from amtapetype may well take many hours as it writes to the complete tape a couple of times.

The output will look something like this:
tapetype tape-dds3 {
comment "just produced by amtapetype program"
length 9922 mbytes
filemark 0 kbytes
speed 973 kps
}

You need to edit the file /etc/amanda/DailySet1/amanda.conf. About half-way down the file you will find many tape definitions, and you should add the new one you have just created. Please note that the name of the tape type should be unique within the file; I have called mine tape-dds3. Do not put spaces in it, and keep it short and simple. You will see other tape names such as DAT and QIC-60. Also ensure you include both braces. Further up in the file you will see the line tapetype HP-DAT (or similar); this should be changed to tapetype yournewname. I have tapetype tape-dds3.
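To see how the two edits fit together, here is a rough sketch of the relevant parts of amanda.conf using the names above (if the existing definitions in your file are written as define tapetype NAME { ... }, add the define keyword to match):

# near the top of /etc/amanda/DailySet1/amanda.conf
tapetype tape-dds3        # was tapetype HP-DAT (or similar)

# pasted in with the other tape definitions (the amtapetype output from above)
tapetype tape-dds3 {
comment "just produced by amtapetype program"
length 9922 mbytes
filemark 0 kbytes
speed 973 kps
}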

There are three Amanda packages available from Red Hat: amanda, amanda-client and amanda-server. For the clients you will only need amanda and amanda-client. For the server you will obviously need amanda-server too. There is an example configuration in /etc/amanda/DailySet1/; using this and modifying the files for the particular setup required is the easiest option.


Day Two:

Edit the file /etc/amanda/DailySet1/amanda.conf. The items you need to set in this file are as follows (a sketch of these settings follows the list):

  1. org – This is in the email subject to differentiate between different backups.
  2. mailto – This should be set to the administrator’s email address. Multiple addresses may be entered, separated by spaces.
  3. tapecycle – This is the number of tapes in circulation. It is much easier if the tapes are labelled before we start testing. If Amanda encounters an unlabelled tape, it will reject it rather than attempt to overwrite it.
  4. tapedev /dev/null – this should be changed to tapedev /dev/nst0, the non-rewinding tape device on Linux. If you are using a different Unix, the device name will be different.
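Put together, the top of amanda.conf might then contain something like the following (the organisation name, addresses and tape count are illustrative; match the quoting style used in the rest of your file):

org "DailySet1"                                  # appears in the report email subject
mailto "admin@domain.co.uk ops@domain.co.uk"     # space-separated addresses
tapecycle 10 tapes                               # number of tapes in circulation
tapedev "/dev/nst0"                              # non-rewinding tape device on Linux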

Day Three:

Label a tape:

The command should be run as the amanda user, so here it is run via su by the user root:

su amanda -c "/usr/sbin/amlabel DailySet1 DailySet101"

This command may be broken down as follows:

su (switch user) to amanda, -c (run this command as that user), amlabel a tape belonging to the backup set DailySet1 and label it DailySet101. Note that the amanda.conf file specifies what the tapes may be labelled; by default this is DailySet100 to DailySet199. The default backup cycle is a month.

If there is an error, you normally get a good error message advising how to fix it. You will probably get a warning about being unable to read .bashrc; this is nothing to worry about. I would suggest getting one backup to work and then labelling all the tapes, which may take some time as each tape can take 30-45 seconds to label.
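If it helps, here is a hypothetical helper script (run as root) for labelling the remaining tapes once the first backup works; adjust the range to match your tapecycle:

#!/bin/sh
# Label DailySet102 .. DailySet110 in turn, prompting for a tape change each time.
for n in $(seq -w 2 10); do
    echo "Insert the next blank tape and press Enter"
    read dummy
    su amanda -c "/usr/sbin/amlabel DailySet1 DailySet1$n"
done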

Now we set one partition to backup:

Edit the file /etc/amanda/DailySet1/disklist. At the bottom, comment out (using #) all of the example backups that are there by default.

Put in one backup entry for your own machine:

localhost /etc comp-root-tar

This will only back up /etc, which is normally quite small and makes a good test.

The final task for Day Three is to test and then run a backup:

run the command:

su amanda -c "/usr/sbin/amcheck DailySet1"

This should give us some diagnostic data and tell us whether there are any problems. If this is the first time the backup has been run, it will normally complain about missing index files. These are created during the backup.

Try running the backup; it will continue until finished, at which point it will send the admins an email to tell you all is done and provide a report:

su amanda -c "/usr/sbin/amdump DailySet1"

 


Day Four:

Once we have operational backups, we can then automate them:

As root run the following command:

crontab -e

This will normally open the vi editor to edit your cron schedule.

Press i to insert and type the following:

# backup daily
5 2 * * 1,2,3,4,5 su amanda -c "/usr/sbin/amdump DailySet1"
0 16 * * 1,2,3,4,5 su amanda -c "/usr/sbin/amcheck DailySet1"

These will run a backup at 2:05 am every weekday morning (Monday to Friday), and at 4 pm run a diagnostic to ensure that you have put the correct tape in and that all backup hosts are available.


Day Five:

The last job is to add disks and machines to the backup; this will probably carry on over time as the system is tweaked and machines are added to the network.

The Amanda client is only available for Unix systems at present; however, other systems may be backed up using file shares and the Samba libraries.

Unix systems:

For every machine that you add to the disklist, you have to tell the client machine to authorise the backup server to access it. This is done by putting the backup server name in the following file:

/var/lib/amanda/.amandahosts

Just put the server name on a line of its own. Please note that the authorisation will fail if the client cannot resolve the server’s name and IP address using DNS. To get round this you can enter the name and IP address in the /etc/hosts file.
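For example, on the client (the host name and IP address here are illustrative):

# /var/lib/amanda/.amandahosts on the client
backupserver.domain.co.uk

# /etc/hosts entry on the client, if DNS cannot resolve the server
192.168.10.5   backupserver.domain.co.uk backupserver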

You can then add more machines to the disklist as follows:

client1.domain.co.uk /etc comp-root-tar
client1.domain.co.uk /home comp-root-tar
client2.domain.co.uk hda1 comp-root-tar

Note that you can use either the device names or the mount points of the partitions:

The hard drive device names may be found by running the command mount with no arguments. Please also note that Amanda cannot at present back up a disk or partition larger than a single tape; this may change in the future. Amanda works very well at spreading full backups across the backup cycle, so that full backups are done as frequently as possible and there is no huge backup rush at the month end.

You may also want to exclude a number of files from the backup; this is done using the following file: /usr/local/lib/amanda/exclude.gtar
The contents of the file will be similar to the following:
core
*.o
*.gz.tmp
*/.netscape/cache
*/gnutar-lists/*.new
*/spool/mqueue/?f[A-Z]*
*/tmp/*.errout
*pagefile.sys
*/.thumbnails/*
*/trader-cache/*
*/cache/*
*/.thumbcache/*

Windows systems:

To back up Windows systems you may wish to ensure that you have the latest version of Amanda available; these instructions assume you are using at least 2.4.3.

These systems are also entered into the disklist file as follows:
sambaserver.domain.com //windowsclient/share comp-root-tar

Windows clients also need another file containing the passwords for the shares.
The file containing the passwords is /etc/amandapass and it has the following format:
//windowsmachine/share user%pass
This file must be readable only by the user amanda and the group disk. This can be arranged using the following commands:
chown amanda.disk /etc/amandapass
chmod o-rwx /etc/amandapass

Please note that the Samba server can be the backup server itself or another system; however, it must also be set up as an Amanda client.


Squid General

Proxy Overview

A proxy intercepts requests from a web browser to a web server. The advantage is that in an office or school, where people may request the same pages (such as google.co.uk or yahoo.co.uk) multiple times, a proxy saves bandwidth and speeds up browsing. The proxy will cache (keep a copy of) the page and its images and serve them directly to the browser rather than downloading them from the server again.

This sort of thing can also assist greatly when applying bug fixes to computers. If multiple computers are running the same operating system, the update files only have to be downloaded to the LAN once, making updates much faster.

The proxy focused on here is Squid, a piece of GPL software available from http://www.squid-cache.org
Squid configuration
There is extensive documentation available for Squid and a mailing list where many of the queries you may have have been covered in detail many times before :-)

What people often get stuck on is the ACLs (Access Control Lists). ACLs are important, for if the proxy is left open to all, it will be abused by mail spammers and others.

It is possible to have password authentication against a Samba server, a Windows computer (NT, W2K or XP), htpasswd files, Novell servers and LDAP servers. I have probably missed a few – check the documentation.

It is possible to provide seamless logins against a Samba or NT server using NTLM authentication; again, check the docs for how to do this.

I misunderstood the ACLs for a long time: I had assumed that multiple ACLs on one http_access line were ORed together rather than ANDed.

So, for example, if you create an ACL for the local network 192.168.10.0/24 and a colleague’s network 80.233.135.0/24, to allow both of these you would have to do the following:

acl local src 192.168.10.0/24
acl colleague src 80.233.135.0/24

http_access allow local
http_access allow colleague

The following would NOT work

http_access allow local colleague

The second would not work because a computer / user can never be in both networks at once; the ACLs on a single http_access line are ANDed together.

The ACLs are stackable, and Squid will work through the http_access lines in order until it gets a match; hence you can order them to make finer adjustments.

You might then want to allow remote users passworded access if they are not on either of the two networks. If the password ACL were put first, everyone would be asked for a password, which would be undesirable.

auth_param basic program /usr/lib/squid/pam_auth
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

The realm text is required, and here the authentication is set to last for two hours before a re-request.

So here we have PAM authentication. PAM is modular and allows authentication against a whole range of services. Note that you also have to make a Squid entry for PAM in /etc/pam.d/squid, consisting of:

auth required /lib/security/pam_unix.so
account required /lib/security/pam_unix.so

You will also need to set the file /usr/lib/squid/pam_auth to be SUID. This is so that when the squid user executes the file it runs with root permissions and authentication can be granted. I have been caught out by this more than once, as it works well when testing as root :-) To change the file to SUID, execute the following at a root prompt:

chmod 4755 /usr/lib/squid/pam_auth

Each user on the system with a password will then be able to use the proxy.

I have found PAM a bit flaky for authenticating against a Samba or NT server; there it is better to use NTLM authentication.

acl password proxy_auth REQUIRED
http_access allow password
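Putting the pieces above together, the ordering might look like this (a sketch; acl all is the catch-all ACL from the stock squid.conf and the final deny is the usual safety net):

acl local src 192.168.10.0/24
acl colleague src 80.233.135.0/24
acl password proxy_auth REQUIRED

http_access allow local          # LAN users: no password
http_access allow colleague      # colleague's network: no password
http_access allow password       # everyone else must authenticate
http_access deny all

Squid stops at the first http_access line that matches, so the password prompt is only seen by clients outside the two trusted networks.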

Site filtering

This is just a quick view. There is much you can do here, you can also use http://www.squidguard.org to filter sites.

acl filter url_regex "/etc/squid/banned"
deny_info XANDER_ERR filter
http_access deny filter

Put partial site matches such as doubleclick into the file /etc/squid/banned to block URLs containing those words.
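As a rough illustration, the banned file is just one pattern per line, each treated as a regular expression matched against the URL (these entries are made up for the example):

doubleclick
adserver
banners
popup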


The file XANDER_ERR is found at /etc/squid/errors/English/XANDER_ERR and is a standard HTML page giving a custom error message.

Have a look at the multitude of variables you can embed in the file.
Web Proxy Auto-discovery
WPAD may be provided by a number of methods: DHCP, DNS (by A record), or DNS (by SRV record).

I have found very limited documentation of WPAD by DHCP and have not managed to implement it.
WPAD using DNS by A record is relatively straightforward once you know the pitfalls :-)

WPAD by SRV record is not implemented in many clients.

A client set to discover its own proxy looks at its own FQDN (Fully Qualified Domain Name), such as workstation1.domain.co.uk, and will look for the hosts wpad.domain.co.uk and then wpad.co.uk, checking for the A record of each.

It will then try to download the following file

http://ipaddress/wpad.dat
This is important if you are running virtual hosts.

The client does not request the file by domain name; it requests it by IP address (or at least IE6 does). Therefore, if you are running multiple websites on that IP address, the wpad.dat file must be placed in the default website. In short, you must be able to access it by IP.
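As a sketch, the DNS record and file placement might look like this (the zone, IP address and document root are illustrative; 192.168.10.217 is the proxy address used in the example below):

; in the zone file for domain.co.uk
wpad    IN  A    192.168.10.217

# on that web server, make the file reachable by plain IP in the default site,
# e.g. on a stock Red Hat Apache install:
cp wpad.dat /var/www/html/wpad.dat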

The dat file would be similar to this; however, it may be much more complex:

function FindProxyForURL(url, host)
{
    if (isPlainHostName(host) ||
        dnsDomainIs(host, "domain.co.uk") ||
        isInNet(host, "192.168.10.0", "255.255.255.0"))
        return "DIRECT";
    return "PROXY proxy.domain.co.uk:3128; PROXY 192.168.10.217:3128";
}

At present there is a patch available for Mozilla which will allow it to do WPAD; however, for Mozilla it may be easier to copy the wpad.dat file to a proxy.pac file on the same server and set Mozilla to use the URL http://ipaddress/proxy.pac as an automatic configuration URL (see Mozilla’s settings).


Web Filtering


The HTTP caching reviewed here uses Squid. Similarly, for filtering we will look at configuration details and modules to be used with Squid:

* url_regex
* Squid Guard

url_regex

This is integral to Squid. It reads URL patterns from a specified file and, if a match occurs, Squid will either allow or deny the request depending upon the configuration. Here is a segment from the example Squid configuration; an example banned file is shown earlier under Site filtering.

acl filter url_regex "/etc/squid/banned"
http_access deny filter

Here we have an ACL called filter; the type is a url_regex and it uses the file /etc/squid/banned.
The http_access is set to deny upon a match; as you can see from the example banned file shown earlier, this is set to block advert sites and other rubbish. It is easy to block new sites just by adding another domain to the list.

After a new entry has been added, Squid needs to be told to re-read its configuration with the following command:
squid -k reconfigure

To restart the http cache, you can run the following command:
service squid restart

Squid Guard

Squid Guard has to be downloaded and compiled. This is easier than it sounds; it is dependent upon having the gcc package installed.

It runs as follows:

tar zxvf squidguard-xxxx.tar.gz

cd squidGuard-xxx

./configure

make

The install has to be done as root.
make install

Have a read of the documentation and any other information on the site. You will also have to download and install the blacklists. There are a large number of different blacklists available, covering everything from porn to violence. These are regularly updated and contain tens of thousands of sites and IPs. They are normally located in /var/spool/squidguard/

The Access Control Lists work very similarly to those in the squid configuration file.

Read what documentation you can. Once you have it up and working, it is launched from Squid using the re-director config option; have a look at the sample file for details.
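As a minimal sketch, assuming the blacklists were unpacked under /var/spool/squidguard/blacklists and squidGuard was installed to /usr/local/bin (the paths, category name and redirect page are illustrative):

# /etc/squid/squidGuard.conf
dbhome /var/spool/squidguard/blacklists
logdir /var/log/squidguard

dest ads {
    domainlist ads/domains
    urllist    ads/urls
}

acl {
    default {
        pass !ads all
        redirect http://proxy.domain.co.uk/blocked.html
    }
}

# in squid.conf, to launch squidGuard as the re-director:
redirect_program /usr/local/bin/squidGuard -c /etc/squid/squidGuard.conf
redirect_children 5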

Once you have downloaded or changed any of the files, you can rebuild the database files using the command:

squidGuard -C all

You will note that the blacklists are organised into category directories such as:
drwxr-xr-x 2 squid squid 4096 Mar 3 01:23 ads
drwxr-xr-x 2 squid squid 4096 Feb 11 19:12 aggressive
drwxr-xr-x 2 squid squid 4096 Feb 11 19:12 audio-video
drwxr-xr-x 2 squid squid 4096 Feb 11 19:12 drugs
drwxr-xr-x 2 squid squid 4096 Feb 11 19:12 gambling
drwxr-xr-x 2 squid squid 4096 Feb 11 19:12 hacking
drwxr-xr-x 2 squid squid 4096 Feb 12 18:26 mail

Within these directories you will find files such as:
ls -l /var/spool/squidguard/blacklists/ads
total 184
-rw-r----- 1 squid squid  44500 Mar  3 01:23 domains
-rw-r--r-- 1 squid squid 122880 Mar  3 01:24 domains.db
-rw-r--r-- 1 squid squid     27 Feb 25 13:18 expressions
-rw-r----- 1 squid squid   3147 Feb  7 23:55 urls
-rw-r--r-- 1 squid squid   8192 Mar  3 01:24 urls.db

Note that alongside domains and urls you have domains.db and urls.db; these are the database files built by the command above.

The blacklists also provide a good list. If you build your ACLs with good before the !bad entries, a URL found in the good list will be accepted even if it also appears in one of the blacklists.
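For example, with a dest good { ... } block defined for the whitelist, the pass line inside the default acl might read (category names are illustrative):

pass good !ads !porn all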

