rsync in a bash script appears to work, but the files aren't updated on destination server

2013-09-04
  • Kenny Wyland

    I've got a server where I do my web development and testing, and I have a bunch of frontend servers that sit behind a load balancer. I wrote a simple bash script to rsync a directory on my dev server to all of the production servers when I fix a bug, etc.

    The rsync APPEARS to work, but when I actually go to the other servers, none of the files have been updated. What am I doing wrong?

    #!/bin/bash
    
    directory=$1
    
    echo "$directory"
    
    set -x
    for host in "xxx.xxx.xxx.xxx"
    do
        rsync -avz -e ssh ${directory} root@${host}:${directory}
    done
    

    At the moment I only have one ip address in the for loop, but I'll be adding more as time goes on. This is how I'm executing the script and the abbreviated output:

    [root@admin vhosts]# ./rsync_to_frontend.sh /var/www/scripts
    /var/www/scripts
    + for host in '"xxx.xxx.xxx.xxx"'
    + rsync -avz -e ssh /var/www/scripts root@xxx.xxx.xxx.xxx:/var/www/scripts
    sending incremental file list
    scripts/
    scripts/fetchTweets.php
    scripts/syncMediaFiles.log
    scripts/syncMediaFiles.php
    .....
    
    sent 7395053 bytes  received 1252 bytes  2113230.00 bytes/sec
    total size is 36718000  speedup is 4.96
    [root@admin vhosts]#
    

    If I run the command again, rsync appears to work appropriately and as expected does not show any files in the list, ostensibly because they've all been updated. However, when I go and look at the files on server xxx.xxx.xxx.xxx, they haven't been updated at all.

  • Answers
  • Kenny Wyland

    Due to rsync's strict handling of trailing slashes, I just needed to make sure to put a trailing slash when rsyncing a directory. Without the slash, rsync copies the source directory itself into the destination, so every run was creating /var/www/scripts/scripts/ on the frontend servers instead of updating /var/www/scripts/:

    Bad:

    # ./rsync_to_frontend.sh /var/www/scripts
    

    vs Good:

    # ./rsync_to_frontend.sh /var/www/scripts/
    
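    To see the difference without touching the frontends, rsync's --dry-run flag shows where the files would land (the host name web1 below is hypothetical):

    # without the trailing slash, the source directory itself is copied
    # into the destination, creating a nested /var/www/scripts/scripts/ tree:
    rsync -avzn /var/www/scripts root@web1:/var/www/scripts

    # with the trailing slash, the *contents* of the source directory are
    # synced onto the destination directory, which is what I wanted:
    rsync -avzn /var/www/scripts/ root@web1:/var/www/scripts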

  • Related Question

    linux - rsync hack to bounce files between two unconnected servers
  • regulatre

    Here's the connection:

    [Server1] <---> [my desktop] <---> [Server2]

    Server1 and server2 are not permitted to talk directly to each other (don't ask). My desktop, however, is able to access both servers via ssh.

    I need to copy files from server1 to server2.

    Traditionally I have been using an ssh+tar hack, like so:

    ssh -q root@Server1 'tar -vzc /path/to/files ' | ssh -q root@Server2 'tar -vzx -C /'

    And that works great, but I would like to take it a step further and get rsync working between the two servers VIA my desktop.

    Now I know that I could start an ssh port-forward tunnel in one terminal and then rsync over that tunnel in another window, but I don't want to fuss around with a second terminal or with making and breaking a separate port-forward tunnel. What I want is:

    • One liner command to rsync files from Server1 to server2 VIA my desktop
    • all on ONE command line, one terminal window
    • I want the port forward tunnel to only exist for the life of the rsync command.
    • I don't want to scp, I want to rsync.

    Does anybody have a trick for doing that?

    EDIT: Here is the working command! Great work everyone: 1. For the rsa key path, you can't use a tilde; I had to use "/root/". 2. Here's the final command line:

    ssh -R 2200:SERVER2:22 root@SERVER1 "rsync -e 'ssh -p 2200 -i /root/.ssh/id_rsa_ROOT_ON_SERVER2' --stats --progress -vaz /path/to/big/files root@localhost:/destination/path"
    

    Boom goes the dynamite.


  • Related Answers
  • David Spillett

    If you are happy to keep a copy of the data on the intermediate machine then you could simply write a script that updates the local copy using server1 as a reference, then updates the backup on server2 using the local copy as a reference:

    #!/bin/sh
    # update the local copy from server1, then update server2 from the local copy
    rsync user@server1:/path/to/stuff /path/to/local/copy -a --delete --compress
    rsync /path/to/local/copy user@server2:/path/to/where/stuff/should/go -a --delete --compress
    

    Using a simple script means you have the desired single command to do everything. This could of course be a security no-no if the data is sensitive (you, or others in your company, might not want a copy floating around on your laptop). If server1 is local to you then you could just delete the local copy afterwards (as it will be quick to reconstruct over the local LAN next time); a variant doing that is sketched below.
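    For example, a minimal variant of the script above (same placeholder paths) that discards the intermediate copy once the push to server2 has succeeded:

    #!/bin/sh
    # pull from server1, push on to server2, then drop the local copy
    rsync user@server1:/path/to/stuff /path/to/local/copy -a --delete --compress
    rsync /path/to/local/copy user@server2:/path/to/where/stuff/should/go -a --delete --compress \
        && rm -rf /path/to/local/copy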

    Constructing a tunnel so the servers can effectively talk to each other more directly should be possible like so:

    1. On server 2 make a copy of /bin/sh as /usr/local/bin/shforkeepalive. Use a symbolic link rather than a copy so you don't have to update it after security updates that patch /bin/sh.
    2. On server 2 create a script (e.g. /usr/local/bin/keepalivescript) that does nothing but loop, sleeping for a few seconds then echoing a small amount of text, and have it use the new "copy" of sh:

      #!/usr/local/bin/shforkeepalive
      while [ "1" != "0" ]; do
              echo Beep!
              sleep 5
      done
      

      (the echo probably isn't needed, as the session is not going to be idle for long enough to time-out even if SSHd is configured to ignore keep-alive packets from the ssh client)

    3. Now you can write a script on your laptop that starts your reverse tunnel in the background, tells server1 to use rsync to perform the copy operation, then kills the reverse tunnel by killing the looping script (which will close the SSH session):

      #!/bin/sh
      ssh user@server2 -L2222:127.0.0.1:22 /usr/local/bin/keepalivescript &
      ssh user@server1 -R2222:127.0.0.1:2222 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
      ssh user@server2 killall shforkeepalive
      

    The way this works:

    • Line 1: standard "command to use to interpret this script" marker
    • Line 2: open an SSH connection to server2 with a port forward (laptop port 2222 to server2's sshd) and run the keepalive script over it to hold the connection open. The trailing & tells bash to run this in the background so the next lines can run without waiting for it to finish
    • Line 3: start a tunnel that will connect to the tunnel above so server1 can see server2, and run rsync to perform the copy/update over this arrangement
    • Line 4: kill the keep-alive script once the rsync operation completes (and so the second SSH call returns), which will end the first ssh session.

    This doesn't feel particularly clean, but it should work. I've not tested the above so you might need to tweak it. Making the rsync command a single-line script on server1 may help by reducing any need to escape characters like the ' in the calling ssh command; a hypothetical wrapper is sketched below.
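    For instance, a wrapper kept on server1 (the name push-to-server2 and the paths are placeholders; it assumes the reverse tunnel on port 2222 is already up):

    #!/bin/sh
    # /usr/local/bin/push-to-server2 -- lives on server1, expects the
    # reverse tunnel on localhost:2222 to be in place
    rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update \
        -a --delete --compress -e 'ssh -p 2222'

    The laptop-side call then shrinks to ssh user@server1 -R2222:127.0.0.1:2222 /usr/local/bin/push-to-server2, with no nested quoting at all.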

    BTW: you say "don't ask" to why the two servers can not see each other directly, but there is often good reason for this. My home server and the server its online backups are held on can not log in to each other (and have different passwords+keys for all users) - this means that if one of the two is hacked it can not be used as an easy route to hack the other, so my online backups are safer (someone malicious deleting my data from the live server can't use its ability to update the backups to delete said backups, as it has no direct ability to touch the main backup site).

    Both servers can connect to an intermediate server elsewhere - the live server is set to push its backups (via rsync) to the intermediate machine early in the morning, and the backup server is set (a while later, to allow step one to complete) to connect and collect the updates (again via rsync, followed by a snapshotting step in order to maintain multiple ages of backup). This technique may be usable in your circumstance too, and if so I would recommend it as a much cleaner way of doing things.

    Edit: Merging my hack with Aaron's to avoid all the mucking about with copies of /bin/sh and a separate keep-alive script on server2, this script on your laptop should do the whole job:

    #!/bin/sh
    ssh user@server2 -L2222:127.0.0.1:22 sleep 60 &
    pid=$!
    trap "kill $pid" EXIT 
    ssh user@server1 -R2222:127.0.0.1:2222 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
    

    As with the above, rsync is connecting to localhost:2222 which forwards down the tunnel to your laptop's localhost:2222 which forwards through the other tunnel to server2's localhost:22.
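    If it misbehaves, the chain can be checked by hand before involving rsync (a hypothetical test session):

    # bring up both tunnels, then probe the forwarded port from server1;
    # the probe should land on server2, not server1:
    ssh user@server2 -L2222:127.0.0.1:22 sleep 60 &
    ssh user@server1 -R2222:127.0.0.1:2222
    # ...now in the interactive shell on server1:
    ssh -p 2222 user@127.0.0.1 hostname    # should print server2's hostname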

    Edit 2: If you don't mind server1 having a key that allows it to authenticate with server2 directly (even though it can't see server2 without a tunnel) you can simplify further with:

    #!/bin/sh
    ssh user@server1 -R2222:123.123.123.123:22 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
    

    where 123.123.123.123 is a public address for server2. This could be used as a copy+paste one-liner instead of a script.

  • Aaron Digulla

    Why one line? Use a small shell script:

    #!/bin/bash
    # Run me on server1
    
    # Create the port forward server1 -> desktop -> server2 (i.e.
    # the first forward creates a second tunnel running on the desktop)
    ssh -L/-R ... desktop "ssh -L/-R ... server2 sleep 1h" &    
    pid=$!
    
    # Kill port forward process on exit and any error
    trap "kill $pid" EXIT 
    
    rsync -e ssh /path/to/files/ root@localhost:/path/to/files/on/server2
    

    IIRC, you can set the sleep time lower; the first ssh will not terminate as long as someone uses the channel.
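    One concrete way the schematic could be filled in (the ports 2222/3333, the user names, and the assumption that server1 can ssh to the desktop are all additions, not part of the answer above):

    #!/bin/bash
    # Run me on server1. Chain: server1:2222 -> desktop:3333 -> server2:22
    ssh -L 2222:127.0.0.1:3333 user@desktop \
        "ssh -L 3333:127.0.0.1:22 user@server2 sleep 1h" &
    pid=$!

    # Kill the port-forward process on exit and any error
    trap "kill $pid" EXIT

    # crude pause to give both tunnels time to come up
    sleep 5

    rsync -e 'ssh -p 2222' /path/to/files/ root@127.0.0.1:/path/to/files/on/server2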

  • Gilles

    Here are a few methods that make the synchronization a simple one-liner, but require some setup work.

    • Set up a reverse ssh tunnel from server1 to your desktop (sorry, I can't tell you the .ssh/config incantation off the top of my head; a possible sketch follows this list). Chain it with a connection from your desktop to server2. Run rsync from server1.

    • Set up a socks proxy (or an http proxy which accepts CONNECT) on your desktop. Use it to establish an ssh connection from server1 to server2. Run rsync from server1.

    • Use unison instead of rsync. But the workflow is different.

    • Mount the directories from one or both servers on your desktop using sshfs.
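    For the first method, a possible ~/.ssh/config entry on the desktop (the host alias and port are assumptions) collapses the chain into a single RemoteForward, since the desktop can reach server2 directly:

    # ~/.ssh/config on the desktop: "ssh server1-tunnel" exposes server2's
    # sshd on server1's local port 2222 for the life of the session
    Host server1-tunnel
        HostName server1
        RemoteForward 2222 server2:22

    With that session open, rsync runs from server1 against the forwarded port, e.g. rsync -az -e 'ssh -p 2222' /path/to/stuff user@127.0.0.1:/destination/.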