linux - Running sshfs as user via autofs

  • billyjmc

    My situation:

    • There are several servers on my LAN which I do not administer
    • I access them using SSH for sshfs, shells, and remote X11 apps
    • I have set ControlMaster auto in my ~/.ssh/config file so I don't experience authentication lag
    • I use Compression and fast/weak Ciphers since I'm either on a private LAN or using VPN
    • Wherever possible, I have installed my RSA public key on the servers (the private key has no passphrase)

    I've started using autofs to make my life easier, but autofs wants to run all of its mount commands as root. I could, of course, generate a new RSA keypair as root and install that, and also replicate my ~/.ssh/config options in the superuser's config file, but I'd rather not maintain two copies of these things, and it still wouldn't satisfy my goal of keeping only one open SSH connection per host. Therefore, I want autofs to run sshfs as an unprivileged user, just as it runs when invoked manually at the terminal.

    I've looked into autofs scripts, but those don't appear to solve my problem. Any suggestions?
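
    For reference, this is a sketch of the kind of per-user mount I want autofs to reproduce (the host, remote path, and mount point below are placeholders):

    sshfs remoteuser@server:/remote/path ~/mnt/server -o reconnect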

  • Answers
  • billyjmc

    Taken directly from the afuse homepage (emphasis mine):

    afuse is an automounting file system implemented in user-space using FUSE. afuse currently implements the most basic functionality that can be expected by an automounter; that is it manages a directory of virtual directories. If one of these virtual directories is accessed and is not already automounted, afuse will attempt to mount a filesystem onto that directory. If the mount succeeds the requested access proceeds as normal, otherwise it will fail with an error. See the example below for a specific usage scenario.

    The advantage of using afuse over traditional automounters is afuse runs entirely in user-space by individual users. Thus it can take advantage of the invoking users environment, for example allowing access to an ssh-agent for password-less sshfs mounts, or allowing access to a graphical environment to get user input to complete a mount such as asking for a password.

    This option seems like a shoo-in.
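
    For example, afuse's documentation shows an sshfs automounter along these lines (the ~/sshfs directory is illustrative; %r and %m are afuse's placeholders for the virtual directory name and the mount point):

    # Every directory accessed under ~/sshfs is sshfs-mounted on demand
    afuse -o mount_template="sshfs %r:/ %m" \
          -o unmount_template="fusermount -u -z %m" ~/sshfs

    After that, ls ~/sshfs/somehost/ mounts somehost:/ on the fly, as the invoking user.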

  • billyjmc

    Drawing heavily from another similar question, I've found a solution. It required some serious experimentation and tweaking, though. Note that this modified script is now incompatible with mounting from /etc/fstab.

    /etc/auto.master

    /- /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost
    


    /etc/auto.sshfs

    /local/mountpoint -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,workaround=rename,ssh_command=/usr/local/sbin/ssh_user :sshfs\#remoteuser@server\:/remote/path
    


    This needs to be executable, of course: /usr/local/sbin/ssh_user

    #!/bin/bash
    
    # declare arrays for ssh options
    declare -a ADD_OPTIONS
    declare -a CLEANED_SSH_OPTS
    
    # add options to be automatically added to the ssh command here.
    # example
    #ADD_OPTIONS=( '-C' )
    # empty default
    ADD_OPTIONS=(  )
    # The following options cause ssh to open a connection and immediately
    # become a background task. This allows the script to open a local control
    # socket for future invocations of ssh. (Use "ControlMaster auto" in
    # ~/.ssh/config.)
    SOCKET_OPTIONS=( '-fN' )
    
    for OPT in "$@"; do
      # By default, sshfs passes options to ssh that disable X11 and agent
      # forwarding. Strip those here to override that behavior.
      case $OPT in
        "-x")
          # options matched in this case (and the two below) are dropped
        ;;
        "-a")
        ;;
        "-oClearAllForwardings=yes")
        ;;
        *)
          # Everything else is fine; keep it.
          CLEANED_SSH_OPTS+=( "$OPT" )
        ;;
      esac
    done
    
    # For some reason, the ssh command had to be built as a string before
    # being passed as an argument to the 'su' command; it simply would not
    # work otherwise.
    # Throwing the SOCKET_OPTIONS in with the rest of the arguments is kind of
    # hackish, but it seems to handily override any other specified behavior.
    
    # Establish an ssh master socket if none exists...
    SSH_SOCKET_CMD="ssh ${SOCKET_OPTIONS[@]} ${ADD_OPTIONS[@]} ${CLEANED_SSH_OPTS[@]}"
    su localuser -c "$SSH_SOCKET_CMD"
    
    # ...and reuse that socket to mount the remote host
    SSH_SSHFS_CMD="ssh ${ADD_OPTIONS[@]} ${CLEANED_SSH_OPTS[@]}"
    exec su localuser -c "$SSH_SSHFS_CMD"
    


    And, in case anyone cares: ~/.ssh/config

    Host *
    ControlMaster auto
    ControlPath /tmp/%u@%l→%r@%h:%p
    ServerAliveInterval 10
    Compression yes
    
    Host host1 host1.myschool.edu host2 host2.myschool.edu
    ForwardX11 yes
    Ciphers arcfour256,arcfour128,arcfour,blowfish-cbc
    
    Host host3 host3.myschool.edu
    ForwardX11 no
    Ciphers arcfour256,arcfour128,arcfour,blowfish-cbc
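
    To test the setup, one possible sequence (the service invocation varies by distro, and the host name is illustrative) is to make the wrapper executable, reload autofs, and touch the mount point; ssh -O can then confirm that a single master connection is being shared:

    sudo chmod +x /usr/local/sbin/ssh_user
    sudo service autofs reload          # or: sudo systemctl reload autofs
    ls /local/mountpoint                # first access triggers the sshfs mount
    ssh -O check remoteuser@server      # as localuser: "Master running" if shared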
    
  • kreator

    JFTR, I've modified (and simplified) ssh_user so that it first tries to contact the user's ssh-agent:

    #!/bin/bash
    # Open an ssh connection as a given user, thus using his/her authentication
    # agent and/or config files.
    : ${ADDOPTS:="-2Ax"}
    : ${LOCAL:="kreator"}
    # Quote the glob so find, not the invoking shell, matches the socket names
    export SSH_AUTH_SOCK=$(find /tmp/ssh-* -type s -user ${LOCAL} -name 'agent*' | tail -1)
    declare -a options=( "$@" )
    
    # Remove unwanted options
    for (( i=0,fin=${#options[*]} ; i < fin ; i++ ))
    do
        case ${options[$i]} in
                (-a|-oClearAllForwardings=*)    unset options[$i]
                                                ;;
        esac
    done
    
    exec /bin/su ${LOCAL} -c "$(which ssh) ${ADDOPTS} ${options[*]}"
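
    A quick way to sanity-check the agent discovery outside the script (the username is illustrative) is to repeat the find by hand and list the keys the socket offers:

    export SSH_AUTH_SOCK=$(find /tmp/ssh-* -type s -user kreator -name 'agent*' | tail -1)
    ssh-add -l    # should list the keys held by that user's agent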
    

  • Related Question

    ssh - How to mount remote SSHFS via intermediate machine? Tunneling?
  • Andrei

    I would like to mount a remote file system (A) using SSHFS, but sometimes I'm connecting from an IP address that isn't allowed access. So my plan is to reach it via another machine (B) in that network. Do I need to mount A on B and then mount B (and A) on my local computer? Is there a better way to do it?

    Update

    Just to clarify the procedure:

    First, I make a tunnel

    ssh -f user@machineB -L MYPORT:machineA:22 -N
    

    And then I mount the remote file system

    sshfs -p MYPORT user@localhost:/myremotepath /mylocalpath
    

    Is it correct?

    How do I destroy the tunnel when I am done?


  • Related Answers
  • edk

    Yes, tunneling. You connect to machine B, create a local tunnel (-L) to machine A's sshd port, and then point sshfs at localhost on the newly created tunnel's port.
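
    A sketch of the full sequence (the port, control-socket path, and user/host names are placeholders), using an ssh control socket so the tunnel can be torn down cleanly afterwards:

    # 1. Open the tunnel through machine B, keeping a control socket for later
    ssh -f -N -M -S /tmp/tunnelB.sock -L 9022:machineA:22 user@machineB

    # 2. Mount machine A through the forwarded local port
    sshfs -p 9022 userA@localhost:/myremotepath /mylocalpath

    # 3. When finished: unmount, then tell ssh to close the tunnel
    fusermount -u /mylocalpath
    ssh -S /tmp/tunnelB.sock -O exit user@machineB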