Most storage setup guides are satisfying for the same reason infrastructure diagrams are satisfying: they stop at the point where the system appears to work. The bucket mounts. Finder shows a folder tree. A file opens. The screenshot looks good. What you usually do not see is the part that actually determines whether the setup is useful day to day: which mount flags make the illusion believable, and which ones quietly trade correctness for convenience.
I wanted something very specific. A local folder on macOS that felt close enough to Dropbox for browsing and occasional access, but backed by S3 instead of a sync client. Full folder tree visible in Finder. Files fetched on demand. Local cache with size limits. Cheap storage underneath. No heavyweight desktop product in the middle.
The obvious tool for that job is rclone mount. The less obvious part is that the setup only works cleanly if you get three things right: the FUSE layer, the binary you install, and the cache semantics you choose.
The short version is this: the mount worked, but not because rclone mount is magic. It worked because the system was tuned around a very particular trade-off profile: slow-changing remote storage, on-demand reads, bounded local cache, and tolerance for stale directory listings.
Let’s get into it.
The Goal
The target behavior was narrow:
- browse an S3 bucket like a normal folder in Finder
- fetch file contents only when needed
- keep a bounded local cache
- avoid downloading large files up front
- auto-mount on login
- keep storage cheap enough that the bucket can act like cold archive, not hot primary storage
That framing matters. I was not trying to build a collaborative filesystem. I was not trying to make object storage behave like a low-latency network share. I wanted a practical archive mount that was pleasant enough to browse.
The First Two Non-Obvious Requirements
Before the mount flags even matter, two setup decisions decide whether the system works at all.
1. Use FUSE-T on macOS
The mount depended on FUSE-T as the userspace FUSE driver.
brew install --cask fuse-t
That part is easy to miss because many generic guides talk about FUSE as if all macOS options are interchangeable. They are not. FUSE-T runs entirely in user space, with no kernel extension to install and approve, which is exactly what you want on a modern, locked-down macOS. If the FUSE layer is wrong, the rest of the setup does not matter.
2. Use the official rclone binary, not the Homebrew one
This was the bigger gotcha.
The setup explicitly required the official binary at /usr/local/bin/rclone, not the Homebrew package. The reason was simple: the Homebrew build did not provide the mount support needed on macOS.
curl -L https://downloads.rclone.org/rclone-current-osx-arm64.zip -o /tmp/rclone.zip
cd /tmp && unzip -o rclone.zip
sudo cp /tmp/rclone-v*-osx-arm64/rclone /usr/local/bin/rclone
sudo chmod +x /usr/local/bin/rclone
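One caveat before running those commands: the zip above is the Apple Silicon build. On an Intel Mac you would want the amd64 zip from the same download site instead; `uname -m` tells you which applies (the echo lines are just a sketch of the decision, not part of the install):

```shell
#!/bin/sh
# Pick the right rclone build for this Mac's CPU architecture.
case "$(uname -m)" in
  arm64)  echo "use rclone-current-osx-arm64.zip" ;;
  x86_64) echo "use rclone-current-osx-amd64.zip" ;;
  *)      echo "unexpected architecture: $(uname -m)" ;;
esac
```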
And then verify the mount-capable build:
/usr/local/bin/rclone version
The guide explicitly checks for go/tags: cmount in the output. That one detail does a lot of work. Without it, the whole setup can look installed while the one feature you actually care about is missing.
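That check is easy to script. A minimal sketch, where `check_cmount` is a hypothetical helper and the sample version output is illustrative, not captured from a real machine:

```shell
#!/bin/sh
# check_cmount: succeed if the given `rclone version` output advertises
# mount support via the cmount build tag.
check_cmount() {
  printf '%s\n' "$1" | grep -q 'go/tags:.*cmount'
}

# In practice you would feed it the real output:
#   check_cmount "$(/usr/local/bin/rclone version)"
# Illustrative sample from a mount-capable build:
sample='rclone v1.66.0
- os/version: darwin 14.4 (64 bit)
- go/tags: cmount'

if check_cmount "$sample"; then
  echo "mount-capable build"
else
  echo "missing cmount support"
fi
```

A Homebrew build typically reports `go/tags: none` here, which is the whole problem.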
The Remote Configuration
The remote itself was plain S3.
[archive]
type = s3
provider = AWS
access_key_id = <YOUR_ACCESS_KEY_ID>
secret_access_key = <YOUR_SECRET_ACCESS_KEY>
region = ap-south-1
storage_class = GLACIER_IR
The storage class choice matters more than it first appears.
This was not tuned for hot storage. It was tuned for Glacier Instant Retrieval. That means:
- storage is cheap
- retrieval is not free
- repeated reads should be absorbed by local cache
- object churn should stay relatively low
That is the economic model the rest of the mount configuration is built around.
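To make that model concrete, a back-of-the-envelope sketch. The volumes and per-GB prices below are illustrative assumptions, not quoted AWS rates; check the current pricing page before trusting any number:

```shell
#!/bin/sh
# Back-of-envelope: 500 GB archived, 10 GB actually read per month.
# The rates are ILLUSTRATIVE ASSUMPTIONS, not quoted AWS prices.
awk 'BEGIN {
  stored_gb = 500; read_gb = 10
  storage_rate   = 0.004  # $/GB-month (assumed)
  retrieval_rate = 0.03   # $/GB retrieved (assumed)
  printf "storage:   $%.2f/month\n", stored_gb * storage_rate
  printf "retrieval: $%.2f/month\n", read_gb * retrieval_rate
}'
```

The shape of the result is the point: storage dominates only if the cache absorbs repeated reads, which is exactly what the mount flags below are arranged to do.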
The Mount Command
This was the core command:
/usr/local/bin/rclone mount archive:your-bucket ~/rclone \
--vfs-cache-mode full \
--vfs-cache-max-size 5G \
--vfs-cache-max-age 72h \
--vfs-read-chunk-size 16M \
--vfs-read-chunk-size-limit 256M \
--dir-cache-time 30m \
--poll-interval 0 \
--cache-dir /tmp/rclone-cache \
--volname "rclone" \
--daemon
This is the part most guides under-explain. The flags are not decoration. They are the behavior of the system.
What The Flags Were Actually Doing
--vfs-cache-mode full
This is what makes the mount feel usable.
Without it, the mount is far more constrained: files cannot be opened for both read and write, and applications that seek around inside a file tend to break. With it, opened files are cached locally and the mount starts behaving more like something Finder can tolerate.
--vfs-cache-max-size 5G
This puts a hard ceiling on how much local disk the illusion is allowed to consume.
That boundary is doing most of the work. Without it, “cheap remote storage” can quietly turn into “surprisingly expensive local cache.”
--vfs-cache-max-age 72h
This is the retention rule for cached files.
If a file is not touched for three days, it can be evicted. That matches the archive use case well. Frequently revisited files stay fast. Old files fall back to remote.
--vfs-read-chunk-size 16M and --vfs-read-chunk-size-limit 256M
These two flags control how aggressively reads expand.
The first request is a modest 16M. On sequential access, each subsequent chunk doubles in size until it hits the 256M ceiling. That means you do not fetch a giant object all at once just because someone opened it, but sequential reads still become more efficient as they go.
For media, large PDFs, and other big files, this is a much better shape than eager full downloads.
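The ramp is easy to simulate. The loop below is a sketch, not rclone code: it prints the range-request sizes you would expect for a roughly 1 GiB sequential read under these two flags:

```shell
#!/bin/sh
# Simulated request sizes for a ~1 GiB sequential read with
# --vfs-read-chunk-size 16M and --vfs-read-chunk-size-limit 256M.
chunk=16   # MiB, initial chunk size
limit=256  # MiB, ceiling for the doubling
total=0
while [ "$total" -lt 1024 ]; do
  echo "range request: ${chunk}M"
  total=$((total + chunk))
  next=$((chunk * 2))
  if [ "$next" -le "$limit" ]; then chunk=$next; else chunk=$limit; fi
done
# Requests: 16M 32M 64M 128M 256M 256M 256M 256M
```

Eight requests instead of sixty-four fixed 16M ones, and no single up-front gigabyte fetch.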
--dir-cache-time 30m
This is the flag that looks innocent and changes the entire character of the mount.
It caches directory listings for 30 minutes.
That is excellent for browsing performance. It is terrible if you need rapid visibility of remote changes.
This deserves to be stated bluntly: a mount with --dir-cache-time 30m is not a good transport for low-latency coordination. If another writer changes the remote namespace, you can easily spend the next half hour looking at yesterday’s directory state.
I ran into that exact behavior later in a completely different system, which is why I ended up writing What Happened When I Tried to Coordinate Two AI Agents Over NFS. The mount was fine for archive browsing. It was the wrong abstraction for a message bus.
--poll-interval 0
This is another high-leverage choice.
Polling is disabled.
That is fine if the bucket is effectively single-writer or slow-changing. It reduces chatter and keeps the mount simple. It is not fine if you are expecting the mount to notice remote changes quickly.
Again, the theme here is that the mount is optimized for archive access, not coordination.
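If you do occasionally need fresh listings without waiting out the cache or remounting, rclone's remote-control API can drop the cached directory tree. This assumes the mount is started with an extra --rc flag, which the command above does not include; the sketch echoes the command rather than running it, so it works without a live mount:

```shell
#!/bin/sh
# Assumption: the mount command gains --rc, so the daemon exposes its
# remote-control API (default localhost:5572) and `rclone rc` can reach it.
refresh_cmd='/usr/local/bin/rclone rc vfs/refresh recursive=true'
echo "would run: $refresh_cmd"
```

That keeps the default behavior cheap while leaving an escape hatch for the rare moment you actually care about remote changes.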
--cache-dir /tmp/rclone-cache
This puts cache data in /tmp, which means the machine gets a clean slate on reboot.
That is a good operational default for this kind of setup. The cache is a performance layer, not durable state.
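A quick way to watch the 5G bound in practice is to check the cache directory's size; if nothing has been read yet, or the machine has rebooted, the path simply will not exist:

```shell
#!/bin/sh
# Report current cache usage; /tmp/rclone-cache matches --cache-dir above.
if [ -d /tmp/rclone-cache ]; then
  du -sh /tmp/rclone-cache
else
  echo "cache not present"
fi
```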
The Finder Story
One thing I liked about the setup is that it did not stop at “the shell can see it.”
The guide also set:
- mount point: ~/rclone
- volume name: rclone
- a LaunchAgent for auto-mount on login
That gave the setup a real desktop UX:
- visible under Locations in Finder
- browseable like a mounted volume
- available after login without manual remount
That is more important than it sounds. A mount you have to remember to babysit is not really part of your workflow.
The LaunchAgent Matters
The mount was turned into a real boot-time service with a LaunchAgent.
That means:
- use the exact binary path
- keep the mount flags stable
- write logs to a known file
- restart predictably on login
This is not glamorous, but it is the difference between “a neat shell command” and “a storage surface you actually rely on.”
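For completeness, a LaunchAgent for this mount looks roughly like the sketch below. The label, log path, and file name (`~/Library/LaunchAgents/com.user.rclone-mount.plist`) are assumptions, not taken from the guide; the flags mirror the mount command above, and `/Users/you` stands in for your home directory because launchd does not expand `~`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.user.rclone-mount</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/rclone</string>
    <string>mount</string>
    <string>archive:your-bucket</string>
    <string>/Users/you/rclone</string>
    <string>--vfs-cache-mode</string><string>full</string>
    <string>--vfs-cache-max-size</string><string>5G</string>
    <string>--vfs-cache-max-age</string><string>72h</string>
    <string>--vfs-read-chunk-size</string><string>16M</string>
    <string>--vfs-read-chunk-size-limit</string><string>256M</string>
    <string>--dir-cache-time</string><string>30m</string>
    <string>--poll-interval</string><string>0</string>
    <string>--cache-dir</string><string>/tmp/rclone-cache</string>
    <string>--volname</string><string>rclone</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/tmp/rclone-mount.log</string>
  <key>StandardErrorPath</key>
  <string>/tmp/rclone-mount.log</string>
</dict>
</plist>
```

Note the daemon flag is deliberately absent: launchd supervises the process itself, and a self-daemonizing child is exactly what launchd cannot track. Load it once with `launchctl load` and it comes back on every login.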
The Cost Model
The storage class economics were explicit:
- cheap storage
- paid retrieval
- cheap list requests
- paid uploads
- minimum storage duration (90 days for Glacier Instant Retrieval)
- minimum billable object size (128 KB)
That changes how you should think about the mount.
This is not where you put active working directories that churn constantly. This is where you put material that benefits from:
- infrequent access
- predictable browsing
- local cache absorbing repeated reads
- low monthly storage cost
If you ignore that, the mount will still work technically. It just will not be the system you thought you built.
What This Setup Is Good At
This setup is good when you want:
- a cheap browseable archive
- Finder-visible access to deep folder trees
- on-demand fetch instead of full sync
- a bounded local cache
- predictable startup via LaunchAgent
It is especially good if your real problem is:
“I want object storage to feel locally explorable without paying the cost of a full sync client.”
What It Is Bad At
This setup is bad when you want:
- rapid visibility of remote changes
- multi-writer coordination
- filesystem-like consistency guarantees
- lots of small-file churn
- anything that depends on directory listings being fresh right now
That is not a flaw in rclone. It is a consequence of the trade-offs chosen by the mount flags and the underlying storage model.
S3 is object storage. The mount can make it feel local. It cannot erase what it is.
The Real Lesson
The most useful thing in the playbook was not the install command. It was the explicitness.
It said exactly what the system was trying to optimize for:
- browse fast
- fetch lazily
- keep cache bounded
- accept stale listings
- do not pretend this is a true network filesystem
That framing matters.
A lot of storage setups fail because they are judged against the wrong job. This one works well if you treat it like a cold archive with a good desktop UX layer. It fails if you treat it like shared mutable infrastructure.
If I had to compress the whole setup into one line, it would be this:
rclone mount can make S3 feel local enough to be useful, but only if you are honest about what kinds of local behavior you are giving up.