Secure, Automated File Distribution
Tim Maletic
              There are countless ways to send a file from one host to another. 
              But what if you want your systems to do it without your intervention? 
              There are still many answers: use the Berkeley notion of trust (implemented 
              via the r-commands and hosts.equiv or .rhosts files), 
              embed clear-text passwords into scripts or pre-load them into memory, 
              or use anonymous authentication. But what if you want to do it securely? 
              And what if you want to do it efficiently?
              I recently ran into this problem when I wanted to push a set of 
              files to all the UNIX systems at my site. Sys Admin has presented 
              solutions to this type of problem before, but none have sufficiently 
              tackled the security issues. Jim McKinstry is rather explicit about 
              the security shortcomings of his file replication strategy, which 
              includes use of SUID programs, embedded clear-text passwords, and 
              the r-commands (Sys Admin, February 1998), while Robert Blader's 
              techniques focus on physically separated networks (Sys Admin, 
              October 1999). Michael Watson recently presented a brief introduction 
              to using SSH for automated file distribution (Sys Admin, 
February 2001). However, Watson neglects the possibility of using 
              public-key authentication, leaving his strategy open to the age-old 
              problems of automated password authentication (see Libes).
              In my situation, the files to distribute were the master-configuration 
              files for a UID 0 process that runs on every host -- the solution 
              had to be highly paranoid. To prevent a rogue server from masquerading 
              as the true master server, I decided that some form of cryptographic, 
              host-to-host authentication was required. Symmetric key cryptography, 
              with a shared secret key on each host, is risky because the compromise 
              of one host breaks the entire system. An asymmetric, public key 
              system would alleviate this threat. The Secure Shell protocol was 
              a good fit -- it is widely ported and resides in user space 
              (as opposed to most IPSec implementations). I didn't want to 
              get into kernel modifications for my entire site. SSH, however, 
              turned out to be only part of the solution.
              The Ingredients
              I considered a number of options, including rdist, NFS, 
              Coda, ftp, SSL, IPSec, and stunnel, but none of them 
              satisfied all of my requirements. The solution I settled on uses 
              Rsync (http://rsync.samba.org) to connect to a chrooted, 
              unprivileged account via OpenSSH (http://www.openssh.com). 
              An Rsync daemon can serve files from a chrooted directory, but it 
              can't do strong host-to-host authentication. scp, part 
              of the SSH suite, can do strong authentication, but its performance 
              doesn't scale because it has no built-in mechanism for copying 
              only the differences between source and target files. Rsync can 
              use SSH for its transport layer, but SSH requires a shell account 
              on the SSH server. My breakthrough came when I finally read the 
              contrib/README file in the OpenSSH distribution -- Ricardo 
              Cerqueira has contributed a patch for chrooting SSH accounts. I 
              have since learned that this functionality is built into the SSH 
              product from SSH Communications Security, Inc. (http://www.ssh.com).
              With these ingredients, we can create a highly secure architecture 
              for automated file distribution. Master copies of the files live 
              under the home directory of a specially-created, unprivileged account 
              on a SSH server. I'll call this user "ssync". Slave 
              copies of the files can reside on any other SSH-capable system that 
              can hit port 22 (the registered SSH port) of the server, even if 
              it crosses an untrusted network. Client systems run a regularly 
              scheduled Rsync command to update the slave files. For the following, 
              let's assume the client-side Rsync runs as root, because we 
              want to preserve the ownership of arbitrary master files (but it 
              can run as any user that can write the slave files to their destination 
              on the client). These Rsync sessions connect over SSH to the server 
              as the ssync user, and the SSH daemon is configured to chroot() 
              to ssync's home directory. The chroot() imprisons the 
              ssync user in his home directory, effectively changing that directory 
              into the root directory, and is an extra precaution in the event 
              that a client system becomes compromised. (Several Sys Admin 
              articles have covered chroot in various contexts. See "Securing 
              Apache" by Kyle Dent, May 1999, for an introduction to chroot.) 
              Public keys are distributed during the initial installation to allow 
              for password-less logins. And since Rsync only transfers the differences 
              of changed files, these sessions can run frequently, relative to 
              the amount of data to sync and its rate of change.
              Mix Well
              I will later analyze the security of this model in some detail. 
              In the meantime, let's look at the practical side of putting 
              these pieces together. We'll build a network consisting of 
              the hosts "sol", "mercury", "venus", 
              ..., "pluto". sol will run sshd and host the master 
              copies of the files under the home directory of the ssync user. 
              mercury, venus, and the rest of the planets will run Rsync over 
              SSH as root to connect to the ssync account on sol.
              You'll need SSH and Rsync on all the planets, while sol will 
              need an SSH server, and statically linked versions of both Rsync 
              and a shell for the chrooted environment. Our examples will use 
OpenSSH v2.3.0 and Rsync v2.4.6. OpenSSH requires OpenSSL (http://www.openssl.org) 
              and Zlib (http://www.info-zip.org/pub/infozip/zlib) for its 
              cryptographic and compression routines, respectively.
              OpenSSH, OpenSSL, and Zlib build easily on a wide variety of platforms. 
              See their documentation for details. (I've had a hard time 
              with other applications finding OpenSSL if it is installed in a 
              custom location, so you may save some frustration by letting it 
              use its preferred /usr/local/ssl.) If you are new to the 
              Secure Shell, familiarize yourself with the client's and server's 
              copious options and get a copy of SSH, The Secure Shell: The 
              Definitive Guide, by Barrett and Silverman. The default client 
              and server configurations in OpenSSH are sufficient for our purposes, 
              but I recommend setting:
              
             
Protocol 2
PermitRootLogin no
in sol's sshd_config file. The first option disables support 
            for clients speaking anything older than SSH protocol version 2. 
            Protocol 2 improves upon its earlier incarnations in several respects, 
            and should be required where possible. The second option changes the 
            default PermitRootLogin behavior; the default allows direct root logins, 
            which may bypass your /etc/securetty configuration (especially if you 
            use OpenSSH's src/contrib/sshd.pam.generic, which leaves out a 
            reference to pam_securetty.so). After all, you should never log in 
            directly as root anyway. I also recommend modifying each client's 
            ssh_config to enable StrictHostKeyChecking. This will prevent an ssh 
            client from connecting to a host whose host key has changed, unless 
            the client explicitly overrides this safety feature at the command line.
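A minimal client-side stanza might look like this (a sketch; the 
            host pattern is illustrative, and ssh(1) documents the full syntax):

Host *
    Protocol 2
    StrictHostKeyChecking yes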
Fire up sshd on sol and test logging in with the ssh 
              client from the planets. If you've enabled StrictHostKeyChecking 
              by default, you'll have to manually exchange keys, or temporarily 
              turn it off with the -o StrictHostKeyChecking=no option to 
              the ssh client. If you run into problems, remember to try 
              running sshd in debug mode (-d) and the client in 
              verbose mode (-v). Once the installation is complete, you'll 
              need to create key pairs for root on each of the clients:
              
             
root@venus# ssh-keygen -d -P ""
This will create a DSA key pair with an unencrypted private key in 
            the default locations ~/.ssh/id_dsa and id_dsa.pub. 
            An encrypted private key requires a passphrase for each use. This 
            is obviously more secure, but hard to automate. We'll have to 
            rely on filesystem protections for our private keys -- more on 
            this later.

            Create the ssync user on sol. Its /etc/passwd entry should 
              look something like:
              
             
ssync:x:111:99::/usr/local/ssync:/bin/sh
For our example, the master copies of the distribution files will 
            live under /usr/local/ssync on sol, so this becomes ssync's 
            home directory. Create that directory, as well as /usr/local/ssync/.ssh/. 
            Now concatenate each planet's id_dsa.pub file and place 
            the result in ssync's ~/.ssh/authorized_keys2 file on 
            sol (these steps are sketched after the test below). This should be 
            sufficient for passwordless logins to the ssync account. Test it 
            from a planet; you should see something like:  
             
root@jupiter# ssh ssync@sol "uname -n; whoami"
sol
ssync
root@jupiter#
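For the record, the server-side preparation can be scripted. The 
            following is a rough sketch: the useradd flags vary by platform, 
            and the key-collection directory is simply wherever you copied the 
            planets' id_dsa.pub files:

# On sol: create the account and its key directory
useradd -u 111 -g 99 -d /usr/local/ssync -s /bin/sh ssync
mkdir -p /usr/local/ssync/.ssh

# Concatenate the planets' public keys (copied here beforehand)
cat /tmp/planet-keys/*.pub >> /usr/local/ssync/.ssh/authorized_keys2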
You can lock this new account now; you won't be using the password 
            anyway.

            The final twist for the SSH configuration is forcing sshd 
              to chroot() the ssync user. Apparently, this can be done 
              through the use of the "ChrootUser" configuration directive 
              in SCS, Inc.'s SSH implementation. With OpenSSH, you need to 
              apply a contributed patch to src/session.c. The patch, unfortunately, 
              is slightly out of date, but it is simple enough that you'll 
              find it easy to insert the changes into the newer session.c. 
              After fixing session.c, rerun the make and copy the 
              resulting sshd into its production directory. Stop and restart 
              sshd.
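The sequence looks roughly like this sketch. The patch file name 
              here is hypothetical (check contrib/README in your tree for the 
              real one), and your production path for sshd may differ:

# Apply the contributed chroot patch, fix any rejects by hand, rebuild
cd /usr/local/src/openssh-2.3.0
patch session.c < contrib/chroot.diff
make
cp sshd /usr/local/sbin/sshd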
              Your new sshd will chroot() a user when it encounters 
              the magic token "/./" in the sixth field of their 
              /etc/passwd entry (the home directory). So modify the ssync account 
              on sol with a password entry such as:
              
             
ssync:x:111:99::/usr/local/ssync/./:/.ssh/ash.static
The directory to the left of the "/./" token in the sixth 
            field is ssync's real home directory, and the path to the 
            right of the token is ssync's home directory relative 
            to the chroot. We'll try to keep the special configuration 
            files as contained as possible by keeping ssync's statically 
            linked shell in its ~/.ssh directory, along with its authorized_keys2 
            and environment files. From the perspective of the real root, our 
            directory should look like:  
             
root@sol: /usr/local/ssync >ls -al
total 28
drwxr-xr-x   7 root     root        4096 Nov 29 09:20 .
drwxr-xr-x  19 root     root        4096 Feb 20 12:59 ..
dr-x------   2 ssync    root        4096 Jan 17 10:58 .ssh
drwxr-xr-x   2 root     root        4096 Feb  9 09:42 bin
drwxr-xr-x   4 root     root        4096 Jan 13 09:29 etc
drwxr-xr-x   2 root     root        4096 Nov  6 16:04 lib
drwxr-xr-x   4 root     root        4096 Nov 29 09:20 platform
root@sol: /usr/local/ssync >ls -al .ssh/
total 2064
dr-x------   2 ssync    root        4096 Jan 17 10:58 .
drwxr-xr-x   7 root     root        4096 Nov 29 09:20 ..
-r-xr-xr-x   1 root     root      275556 Nov  8 13:08 ash.static
-rw-r--r--   1 root     root       12044 Jan 17 10:58 authorized_keys2
-rw-r--r--   1 root     root          11 Nov  8 13:08 environment
-r-xr-xr-x   1 root     root     1801225 Nov  8 13:08 rsync
root@sol: /usr/local/ssync >cat .ssh/environment
PATH=/.ssh
root@sol: /usr/local/ssync >
We use the environment file (documented in the ssh(1) man pages) 
            to set ssync's $PATH so that it can find its shell, a 
            statically linked version of the simple ASH shell that I found installed, 
            by default, on my Red Hat 6.2 system.

            To get the above to work, I discovered that sshd and ssync 
              must agree as to the full path to ssync's shell. Because I 
              didn't want a copy of ash.static getting mixed up with 
              the bulk of my distribution directory, I made the change at the 
              real root: I created a .ssh directory under the real /, 
              and copied ash.static there.
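In shell terms, the hack amounts to this (run on sol as root; the 
              source path is wherever your ash.static lives):

# Give sshd the shell path it expects, relative to the real root
mkdir /.ssh
cp /usr/local/ssync/.ssh/ash.static /.ssh/ash.static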
              With that cheap hack out of the way, we're now ready for 
              testing:
              
             
root@neptune: / >ssh ssync@sol
Last login: Sat Feb 01 20:01:49 2001 from uranus.example.net
$ pwd
Cannot exec /bin/pwd
$ ls
ls: not found
Oh yeah, all we have to work with is the static ash shell. What can 
            we do with shell built-ins?  
             
$ echo *
bin etc lib platform
$ echo .*
. .. .ssh
$ cd ..
$ echo *
bin etc lib platform
Great! We can't cd out of /usr/local/ssync.  
             
$ cd bin
$ echo foo > test
cannot create test: permission denied
And our file modes and owners forbid write-access to the ssync user.  
             
$ exit
Connection to sol closed.
root@neptune: / >
Now that is a limited environment. But it is enough for Rsync.

            Fetch the Rsync sources from rsync.samba.org. Build and 
              install on the planets, as per the traditional:
              
             
user@saturn$ cd /usr/local/src/rsync-2.4.6
user@saturn$ ./configure ; make
user@saturn$ /bin/su
root@saturn# make install
(See the src/README file for details.) For sol, however, skip 
            the make install, and modify the configuration step for static 
            linking:  
             
user@sol$ LDFLAGS="-static" ./configure ; make
Then copy the resulting Rsync binary to ssync's ~/.ssh 
            directory.

            Serve with Lime Twist
              We're finally ready for a real test. On the planets, create 
              the destination directory. Let's make it the same as the master 
              copy: /usr/local/ssync. Now we should be able to run Rsync 
              to retrieve the first batch of files -- the contents of /usr/local/ssync/.ssh.
              
             
root@mars# rsync -avz --delete --rsh=ssh ssync@sol:/.ssh /usr/local/ssync
receiving file list ... done
.ssh/
.ssh/ash.static
.ssh/authorized_keys2
.ssh/environment
.ssh/rsync
.ssh/
wrote 80 bytes  read 678888 bytes  271587.20 bytes/sec
total size is 2088836  speedup is 3.08
root@mars#
(If you run into problems at this stage, try removing the chroot 
            restriction from the ssync account to isolate the problem.) See the 
            rsync(1) man pages to become familiar with its many options. 
            Above, we're using the following options:  
              -a -- Archive mode (recurse, preserve modes, owners, 
              etc.)
              -v -- Verbose (just for testing)
              -z -- Compress
              --delete -- Delete files on target that aren't on source
              --rsh -- Path to ssh client
              
              The "-a" and "--delete" options 
              will ensure that the slave directories will exactly replicate the 
              master. This kind of configuration is useful when you want the master 
              to be completely authoritative for the content of the slaves. For 
              example, this could be used to simulate a push of configuration 
              files to all of your hosts. It's only a simulated push, because 
              really each slave system would be regularly Rsyncing to the master. 
              I've had success using cron to schedule an Rsync script that 
              runs every five minutes. To prevent hitting the SSH server all at 
              once, the script sleeps for a pseudo-random amount of time between 
              1 and 180 seconds before launching the Rsync.
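A wrapper along these lines would do. This is a minimal sketch: the 
              script name and crontab entry are illustrative, and the PID-based 
              sleep is a cheap stand-in for a real random delay:

#!/bin/sh
# Hypothetical /usr/local/sbin/ssync-pull, run from root's crontab as:
#   */5 * * * * /usr/local/sbin/ssync-pull
# Sleep 1-180 seconds so the planets don't all hit sol at once.
sleep `expr $$ % 180 + 1`
# Source path as in the test above; adjust for your fileset.
rsync -az --delete --rsh=ssh ssync@sol:/.ssh /usr/local/ssync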
              This technique also has applications to static Web content distribution. 
              First, public Web sites could pull their static content from a trusted 
intranet system. This would save Web authors from publishing to or authoring 
              on vulnerable, external hosts. It would also provide an automated 
              update of the site from a trusted copy in the event of defacement 
              (unless they root your box, in which case they'll most likely 
              disable your cron scripts!). Second, such an arrangement fits naturally 
              into Web-clustering strategies, where content synchronization is 
              already a problem. We could sync a 10-node cluster as easily as 
              a single host.
              Don't Drink and Drive
              The proposed file distribution strategy demands a high price in 
              initial configuration, but it pays big dividends on security. Let's 
              think about some possible attack scenarios.
              First of all, anonymous users on separate systems (i.e., neither 
              sol nor one of the planets) will have no access to our files, because 
              they can't authenticate to the SSH server. Because we've 
              locked the ssync account, password authentication isn't an 
              option. Authenticating, therefore, requires a copy of a private 
              DSA key that corresponds to one of the public keys in ssync's 
              authorized_keys2 file, and those should only be readable 
              by root@[planet] (or by anyone with access to your backup 
              tapes, which is a good reason to regularly rotate your SSH keys).
              For users with local access to one of the planets, root's 
              private key is protected by the mode 700 .ssh directory. 
              (OpenSSH creates that directory mode 700 -- make sure it stays 
              that way!) While users won't be able to access sol's ssync 
              account, they may very well be able to access the files you are 
              distributing. Set their access modes and owners appropriately on 
              sol, and Rsync's "-a" option will preserve 
              those settings on the planets.
              Version 2 of the SSH protocol will thwart all but the most determined 
              packet-level attacks. There are currently no known vulnerabilities 
              (though there are several in protocols 1.x). Passive packet sniffing 
              will yield no passwords or other data, and active attacks such as 
session hijacking are prevented as well. Both DNS spoofing and the more 
              difficult IP spoofing are blocked by SSH's host authentication. If a 
              rogue server IP spoofs as sol, the planet's SSH clients will, 
              at the very least, complain loudly that sol's host key has 
              changed. If this is a concern, configure your SSH clients to refuse 
              such a connection with the StrictHostKeyChecking option, 
              and manually initialize your clients' known_hosts2 files.
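Seeding known_hosts2 can be as simple as prepending sol's name 
              and address to a trusted, out-of-band copy of its public host key. 
              A sketch (the address and paths are illustrative):

# On each planet; root's home directory varies by platform
( printf 'sol,10.1.1.1 '; cat ssh_host_dsa_key.pub ) >> /root/.ssh/known_hosts2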
              If an attacker gains root access to one of the planets, they'll 
              have one of the golden private keys, and will be able to access 
              the ssync account on sol. However, we've chrooted that account, 
              and (take another look at the file permissions listed above) the 
              ssync user has no write-access to any file or directory. All they'll 
              be able to do with the ssync account is update your file set. (Of 
              course, the client system is hosed, and you've got serious 
              problems, but there is no threat to the distribution system as a 
              whole.)
              You may be wondering whether this public-key authentication really 
              buys us any extra security when we're leaving the private key 
              unencrypted. How is this more secure than embedded clear-text passwords, 
              when in either case, the game is over when an attacker gets the 
right file? The answer is that public-key authentication restricts access 
              to SSH clients (assuming we haven't done anything silly, like 
              allowing .rhosts files). We can also take additional steps, 
              such as configuring the SSH server to only accept connections from 
              a fixed set of client IP addresses. Then an attacker won't 
              be able to authenticate from arbitrary points on the network. (You'll 
              also get the illusory feeling of safety from knowing how much more 
              difficult it is to shoulder-surf a public DSA key than a clear-text 
              password.)
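For instance, sshd honors per-key restrictions in the authorized_keys2 
              file: prefixing each planet's key with a from= option makes 
              sshd reject that key when presented from any other source. A sketch, 
              with hostnames and keys abbreviated for illustration:

# In ssync's ~/.ssh/authorized_keys2 on sol
from="mercury.example.net" ssh-dss AAAAB3...= root@mercury
from="venus.example.net" ssh-dss AAAAB3...= root@venus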
              The real risk is a root-level compromise of the master host. The 
              severity of this risk depends on what you're distributing. 
              If it is Web content, your Web content is corruptible and no longer 
              trustworthy. If it is host configuration files, your hosts are corruptible 
              and no longer trustworthy. In the latter case, you must protect 
              the master at all costs. Like a Kerberos key server, it should run 
              the fewest services possible, and those should be secured.
              Hangover
              The major weakness of the above strategy is the management complexity 
              when scaling beyond a handful of distribution filesets. Each new 
              directory structure to replicate will need to be analyzed to determine 
              whether or not it can use the ssync account on sol. Perhaps the 
              master files will need to reside on another host, or their security 
              requirements or filesystem location may dictate using another unprivileged 
              account. So, each new fileset to distribute may require the configuration 
              of sshd and the unprivileged account on a new system. Its 
              scalability within a single fileset, on the other hand, is another 
              strength of our solution. It should perform well as the number of 
              files and the number of clients rise.
              References
              Barrett, Daniel J. and Richard Silverman. SSH, The Secure Shell: 
The Definitive Guide. O'Reilly & Associates.
              Blader, Robert. "File Transfer and Verification Between Non-Connected 
              Networks". Sys Admin, October 1999.
Libes, D. "Handling Passwords with Security and Reliability 
              in Background Processes". Proceedings of the Eighth USENIX System 
              Administration Conference (LISA VIII), pp. 57-64, San Diego, CA, 
              September 19-23, 1994. http://www.nist.gov/msidlibrary/doc/libes94d.ps
              McKinstry, Jim. "File Replication". Sys Admin, 
              February 1998.
              Watson, Michael. "Replacing rdist and ftp with scp and Associated 
              Utilities". Sys Admin, February 2001
              Tim Maletic was a doctoral candidate in Philosophy before he 
              started down the path of true enlightenment. He is now a Senior 
              UNIX System Administrator for Priority Health, specializing in Information 
              Security. He can be reached at: tmaletic@alumni.indiana.edu.