So if you're reading this, you most likely need to automatically mount a remote share via ssh/nfs to use as a local folder. This is super handy for doing daily backups or accessing remote backups as a local mount point.

What is autofs?

Autofs is a service daemon that automatically mounts and remounts remote sshfs, NFS, and other types of shares for you on demand. Whenever the mount point is accessed, it remounts the remote share if it is not already mounted. This is far more reliable than custom fstab mounts, which can fail to reestablish if your machine experiences network hiccups. You can read more about the nitty gritty on the autofs and auto.master man pages.

How to install autofs

This should cover both CentOS and Debian/Ubuntu/Mint based installations. The guide below covers how I set up a few mount points over sshfs via autofs.

Install autofs and sshfs first via sudo or root.

Debian/Ubuntu/Mint Linux

sudo apt-get install autofs sshfs

CentOS/RHEL/RPM based

sudo yum install autofs sshfs

Once installed, we set up our default base mount for any shares we're going to add.

In my case I wanted all added shares to show up under “/mnt” as subfolders.

So to configure this I added the line below to the bottom of the file /etc/auto.master. Please note: if this is going to be run and mounted under a sudo user, the uid and gid (user ID and group ID) will need to be adjusted to match yours. The line below assumes it is going to be mounted as the root user.

/mnt /etc/auto.sshfs uid=0,gid=0,--timeout=600,--ghost
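If you are not mounting as root, you can look up the uid and gid to substitute into that line (a quick sketch):

```shell
# Print the numeric user ID and primary group ID of the current user;
# substitute these for uid=0,gid=0 in the auto.master line above.
id -u
id -g
```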

This would be done by using nano or vi to open and edit the file. I’m a big fan of nano, so these commands use it by default. If you prefer vi, you probably already know how to do this and don’t need any handholding. /end rant lol

nano /etc/auto.master

Now that we have the base default path set up, we can focus on adding the mount points. If this is for a remote ssh/sshfs-based path, we highly recommend first setting up passwordless key-based authentication between the machines, so there are no password prompts and no passwords saved to a text file or command locally. If that is not already done, please follow a tutorial on SSH key authentication to get it set up first. When generating the key pair on the machine that will access the remote share, leave the passphrase empty (hit Enter twice when prompted) rather than specifying one.
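As a sketch of that key setup (the username and hostname are placeholders for your own connection details):

```shell
# Generate a key pair with an empty passphrase (-N ""), then copy the
# public key to the remote host so sshfs can log in without a prompt.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub Username@HostnameorIP

# Verify: this should complete without asking for a password.
ssh Username@HostnameorIP true
```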

With that now setup we can create the mount points.

In my case I wanted to mount two remote sshfs-based folders to access and store backups.

Folder paths:

These would end up becoming the paths below once mounted.

/mnt/morgan
/mnt/rsyncnet

So first we need to craft the sshfs mount string for each of them. See the examples below. The first field (e.g. “morgan”) becomes the subfolder name under /mnt, and you would replace the “Username@HostnameorIP:/remote/path” section with your sshfs connection information. Note the backslash before the “#”: without it, autofs would treat the rest of the line as a comment.

morgan  -fstype=fuse,rw,nodev,noatime,allow_other,max_read=65536 :sshfs\#Username@HostnameorIP:/remote/path
rsyncnet -fstype=fuse,rw,nodev,noatime,allow_other,max_read=65536 :sshfs\#Username@HostnameorIP:

Once you have the mount strings created, we need to add them to the “/etc/auto.sshfs” file. So let's create or edit this file, add the line(s) you created above, then save and exit. We're going to use nano for that via the root or sudo user.

sudo nano /etc/auto.sshfs

Now, if you already had these paths mounted manually via fstab, you will want to unmount them (and remove or comment out their fstab entries) before proceeding.

umount /previous/mount/point
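If you are not sure whether a path is currently mounted, you can check with mountpoint first (a sketch; /previous/mount/point is the placeholder from above):

```shell
# Only attempt the unmount if the path is actually a mount point.
if mountpoint -q /previous/mount/point; then
    umount /previous/mount/point
fi
```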

Now we need to reload the autofs service so it sees the configuration changes we made to /etc/auto.master and /etc/auto.sshfs.

service autofs reload || systemctl reload autofs

Now we need to restart autofs.

service autofs restart || systemctl restart autofs

Now we can test if it's working by running “df -h” and seeing if the mounts are currently there. If they're not showing, we can trigger the automount by changing into the mount point or trying to access it. In my case we're testing with the first mount point, /mnt/morgan.

cd /mnt/morgan
ls -lah /mnt/morgan

When you first try to access it, it might hang for a second while it connects; then you should see a listing of any files in that path. To confirm it's mounted, you can run another “df -h” to see the current mount points. It should look something like this.

root@coby:~# df -h
 Filesystem                                       Size  Used Avail Use% Mounted on
 udev                                              63G     0   63G   0% /dev
 tmpfs                                             13G  1.3G   12G  11% /run
 rpool/ROOT/pve-1                                 424G  276G  149G  65% /
 tmpfs                                             63G   37M   63G   1% /dev/shm
 tmpfs                                            5.0M     0  5.0M   0% /run/lock
 tmpfs                                             63G     0   63G   0% /sys/fs/cgroup
 rpool                                            149G     0  149G   0% /rpool
 rpool/ROOT                                       149G     0  149G   0% /rpool/ROOT
 rpool/data                                       149G     0  149G   0% /rpool/data
 tmpfs                                             13G     0   13G   0% /run/user/0
 /dev/fuse                                         30M   72K   30M   1% /etc/pve
 /dev/sdd                                         916G   80M  870G   1% /sdd
 user@hostname:/remote/path  708G  131G  577G  19% /mnt/morgan    <<<< Note the first mount
 user@hostname:              1.1T  538G  563G  49% /mnt/rsyncnet  <<<< Note the second mount
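If you would rather check from a script than eyeball the df output, grepping /proc/mounts works too (a sketch; /mnt/morgan is the example mount from above):

```shell
# Accessing the path first triggers the automount; afterwards /proc/mounts
# will contain a line whose second field is the mount point.
ls /mnt/morgan > /dev/null 2>&1
if grep -qs ' /mnt/morgan ' /proc/mounts; then
    echo "mounted"
else
    echo "not mounted"
fi
```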

If you do not see it mounted, you will want to stop the service and run the automounter in the foreground with debugging output:

systemctl stop autofs.service
automount -f --debug

Now that you have an automatic remote share mounted, you can copy from or to it to access or store backups. As it is now a local path, it is much easier to do an rsync or DB dump to this mount location and have it stored offsite.
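For example, a nightly backup could rsync a local directory into the automounted share (a sketch; the source path and destination subfolder are hypothetical):

```shell
# Accessing /mnt/morgan triggers the automount if needed, then rsync
# mirrors the local backup directory to the remote share.
rsync -a --delete /var/backups/ /mnt/morgan/backups/
```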

We hope you enjoyed this tutorial and it saves you a lot of frustration working with remote shares. If you have multiple servers which all need to mount one remote backup server, you can easily reuse the specific share line from /etc/auto.sshfs after installing autofs and configuring SSH key-based connections on each server.

Special shoutout to the people behind the guides that inspired this post. They also cover more detail and edge cases I might not have covered, like NFS shares, which are done similarly but have some quirks.
