So if you're reading this, you most likely need to automatically mount a remote share via SSH/SSHFS.
What is autofs?
Autofs is a service daemon that automatically mounts and remounts remote filesystems on demand, connecting a share the first time you access its path.
How to install autofs
- For Ubuntu/Debian see their documentation here.
- For Arch see their documentation here.
- For Centos/RHEL/Suse see their documentation here.
This should cover both CentOS and Debian/Ubuntu/Mint based installations. The below guide covers how I set up a few mount points over SSHFS via autofs.
Install autofs and sshfs first via sudo or as root. On Debian/Ubuntu:
sudo apt-get install autofs sshfs
On CentOS/RHEL and other RPM-based distros:
sudo yum install autofs sshfs
Once installed, we set up our default base mount point for any shares we're going to add.
In my case I wanted all added shares to show up under the “/mnt” as a subfolder.
So to configure this I added the below line to the bottom of the autofs master map file, /etc/auto.master:
/mnt /etc/auto.sshfs uid=0,gid=0,--timeout=0,--ghost
This would be done by using nano or vi to open and edit the file. I'm a big fan of nano, so these commands use it by default. If you prefer vi you probably already know how to do this and don't need any handholding. /end rant lol
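If you'd rather do it from the shell in one shot, a quick sketch (assuming the master map lives at /etc/auto.master, the default on most distros):

```shell
# Append the base mount entry to the autofs master map (default path assumed).
echo '/mnt /etc/auto.sshfs uid=0,gid=0,--timeout=0,--ghost' | sudo tee -a /etc/auto.master

# Sanity check that the line landed:
grep auto.sshfs /etc/auto.master
```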
Now that we have the base default path set up, we can focus on adding the mount points. If this is for a remote server, SSH key-based authentication should already be configured so sshfs can connect without a password prompt.
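If keys aren't set up yet, a minimal sketch follows; the Username@HostnameorIP placeholder matches the mount strings used later in this guide, so substitute your own connection details:

```shell
# Generate a key for the root user (autofs mounts run as root) if one doesn't exist.
sudo ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ''

# Copy the public key to the remote side so sshfs can log in non-interactively.
sudo ssh-copy-id -i /root/.ssh/id_ed25519.pub Username@HostnameorIP
```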
With that now setup we can create the mount points.
These would end up becoming the below paths once mounted: /mnt/morgan and /mnt/rsyncnet.
So first we need to craft the sshfs mount string for each. See the below examples; replace the name in front and the "Username@HostnameorIP:/remote/path" section with your own sshfs connection information.
morgan -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#Username@HostnameorIP:/remote/path
rsyncnet -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#Username@HostnameorIP:
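If you have many shares, a small helper can generate these entries consistently. This is a hypothetical convenience of mine, not part of autofs; the function name and placeholders are made up:

```shell
# Build an auto.sshfs map entry for a given mount key and remote spec.
make_map_entry() {
  local key="$1" remote="$2"
  printf '%s -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\\#%s\n' "$key" "$remote"
}

# Example: print the entry for the "morgan" share.
make_map_entry morgan 'Username@HostnameorIP:/remote/path'
```

You could redirect its output into /etc/auto.sshfs with `sudo tee -a` once you're happy with the line.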
Once you have the mount strings created, we need to add them to the "/etc/auto.sshfs" map file referenced in the master map:
sudo nano /etc/auto.sshfs
Now if you already had these paths mounted manually via sshfs, unmount them first so autofs can take them over, e.g.:
fusermount -u /mnt/morgan
fusermount -u /mnt/rsyncnet
Now we need to reload the autofs service so it picks up the new configuration.
service autofs reload || systemctl reload autofs
If the new mounts still don't show up after a reload, restart the service instead.
service autofs restart || systemctl restart autofs
Now we can test if the mount works by changing into the path and listing it.
cd /mnt/morgan
ls -lah /mnt/morgan
Once you try to access it, it might hang for a second the first time while it connects; then you should see a listing of any files in that path. Then to see if both mounts are active, check df -h:
root@coby:~# df -h
Filesystem                  Size  Used Avail Use% Mounted on
udev                         63G     0   63G   0% /dev
tmpfs                        13G  1.3G   12G  11% /run
rpool/ROOT/pve-1            424G  276G  149G  65% /
tmpfs                        63G   37M   63G   1% /dev/shm
tmpfs                       5.0M     0  5.0M   0% /run/lock
tmpfs                        63G     0   63G   0% /sys/fs/cgroup
rpool                       149G     0  149G   0% /rpool
rpool/ROOT                  149G     0  149G   0% /rpool/ROOT
rpool/data                  149G     0  149G   0% /rpool/data
tmpfs                        13G     0   13G   0% /run/user/0
/dev/fuse                    30M   72K   30M   1% /etc/pve
/dev/sdd                    916G   80M  870G   1% /sdd
user@hostname:/remote/path  708G  131G  577G  19% /mnt/morgan   <<< Note the first mount
user@hostname:              1.1T  538G  563G  49% /mnt/rsyncnet <<< Note the second mount also shows
Now you have an automatic remote share mounted that you can copy from or to, to access or store backups, etc. As it's now a local path, it's much easier to do an rsync or DB dump to this mount location and have it stored offsite.
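For example, an offsite backup run might look like the sketch below. The source paths and database name are placeholders of mine, not from the setup above; autofs connects the share on first access:

```shell
# Sync a local backup directory to the offsite autofs mount.
rsync -avh --delete /var/backups/ /mnt/rsyncnet/backups/

# Or dump a database straight to the offsite path (hypothetical database name).
mysqldump exampledb > "/mnt/morgan/exampledb-$(date +%F).sql"
```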
We hope you enjoyed this tutorial and that it saves you a lot of frustration working with remote shares. If you have multiple servers that all need to mount one remote backup server, you can easily reuse the relevant share line from /etc/auto.sshfs after installing autofs and configuring SSH key-based connections on each server.
Special shoutout to the people behind the guides linked below for inspiration. If you still have questions, the below links also cover more detail and edge cases I might not have covered, like NFS shares, which are done similarly but have some quirks.