#2694: Borg vs Restic vs Kopia: Best Linux Server Backup Tool

Borg, Restic, and Kopia compared for whole-server incremental backups on Ubuntu Docker hosts.

Episode Details
Episode ID: MWP-2855
Duration: 36:26
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Server backups have quietly transformed thanks to modern deduplication tools. Borg Backup, Restic, and Kopia each offer encrypted, incremental backups with content-addressable chunking — meaning only new data is uploaded after the first full backup, and identical chunks across your system are stored once. For an Ubuntu Docker host, this makes whole-server backups practical in a way that block-level cloning tools like dd or Clonezilla, which can't do incrementals, never were.

Borg uses variable-sized, content-defined chunking via the Buzhash rolling hash, making it the most storage-efficient option, especially for files modified in place like databases or VM images. It pushes backups over SSH into a dedicated repository format and supports FUSE mounting for browsing individual files without a full extraction. Its main limitations: no concurrent writes to a single repository, and an off-site copy requires syncing the repo separately.

Restic is a single Go binary with broad backend support — S3, Backblaze B2, Google Cloud, SFTP, and more — letting you push directly to cloud object storage. Like Borg, it uses content-defined chunking, though Borg's tunable chunker can still be more storage-efficient for large files modified in place. Prune performance was historically slow but has been rewritten to complete in minutes instead of hours.

Kopia is the newest contender, built with a client-server architecture and aggressive local metadata caching for near-instant browsing and restore. It handles garbage collection and compaction automatically in the background, and supports per-directory compression policies. All three tools make it feasible to treat your server as cattle while still preserving the hand-tuned configuration that inevitably accumulates over years of use.


Transcript

Corn
Daniel sent us this one — he's been thinking about Linux server backups, specifically for his Ubuntu Docker host, and he's noticed something that's been quietly shifting under our feet. The agentic AI stuff is actually changing what's practical now. You can describe what you want, and an agent can stitch together the rsync flags and the systemd timer and the exclusion patterns without you memorizing man pages. But the question he's really asking is, if you want a proper file-based incremental backup of an entire Linux server — something that can survive total hardware loss and let you rebuild without weeks of grief — what tooling actually delivers that, what directories do you safely ignore, and how do you think about moving that data across a network to an NFS mount or off-site cloud storage?
Herman
Before we dive in — DeepSeek V four Pro is writing our script today. Which feels appropriate, given we're talking about AI agents and backups. The model writing the script is the kind of thing that could also write your backup scripts.
Corn
There's something poetic about that. Alright, so let's get into it. Daniel's framing is interesting because he's identified the exact tension that makes server backups hard. He starts with the ephemeral ideal — treat the server like cattle, not a pet, provision a fresh Ubuntu box, move your containers and volumes back, done. But then he immediately catches himself, because the reality is that configuration drifts. Your Docker Compose files live in slash home slash Daniel slash docker. Your systemd unit files are in etc slash systemd slash system. You tweaked something in etc slash sysctl dot conf six months ago and forgot. Your cron jobs — sorry, your systemd timers — are scattered. The "just reprovision" approach works until you realize you've been hand-tuning this server for two years and you didn't document half of it.
Herman
That's exactly the problem. The cattle versus pets distinction was always a bit aspirational. Most home servers and even plenty of production servers end up somewhere in between. They're not quite pets you've named, but you'd definitely notice if they disappeared.
Corn
I'm stealing that. So Daniel's question is, if we accept that we need a proper file-based incremental backup of essentially the whole system, what does that look like? And I think we should start with the tooling, because the tooling landscape has genuinely improved in ways that make this conversation different than it would have been even three or four years ago.
Herman
It really has. And I want to frame this around three tools, because these are the ones that have emerged as the serious contenders for exactly this use case. Borg Backup, Restic, and Kopia. Each of them handles deduplication, compression, encryption, and incremental backups. Each of them can push to remote storage over SSH or to cloud object storage. But they have meaningfully different philosophies, and which one you pick changes how you think about your backup workflow.
Corn
Before you go deep on each one, give me the thirty-second version of what they all share. Because I think that shared foundation is actually the answer to half of Daniel's question. He mentioned the old cloning model — dd, Clonezilla, block-level copies — and how they don't do incremental. These three tools all solve that.
Herman
So the shared architecture is what's called content-addressable deduplication. Instead of copying files whole every time, these tools chunk your data into variable-sized pieces, hash each chunk, and only store chunks they haven't seen before. When you run a backup, they scan your filesystem, figure out which chunks are new, and only upload those. The result is that your first backup takes a while — it's copying everything — but every subsequent backup is fast, because it's only pushing the changes. And because of deduplication, if you have the same file in three different places, it's only stored once. If you move a file to a different directory, the next backup doesn't re-upload it. It just updates the metadata that says "this chunk now lives at this path."
Corn
That's the part that makes whole-server backups practical. Without deduplication, you'd be storing multiple copies of every system binary, every library, every Docker image layer. With it, your backup repository might actually be smaller than your live data.
Herman
I've seen cases where a terabyte of live data fits into three hundred gigabytes of repository after deduplication and compression. And that's before we even talk about incremental — once the initial snapshot is done, your daily backup might be a few hundred megabytes.
Corn
Okay, so let's go through them, starting with Borg. That's the one I've used the most, and it's the one that probably has the most mindshare in the home server community.
Herman
Borg is the elder statesman here. It's been around since about twenty fifteen, it's written in Python with performance-critical parts in C, and it was designed from the ground up for exactly this use case — efficient, encrypted, deduplicated backups. The key architectural decision in Borg is that it's a push model with a dedicated repository format. You run Borg on the machine you're backing up, it connects to a remote machine over SSH where Borg is also installed, and it writes to a Borg repository — which is just a directory full of chunk files and indexes. That repository is not a mirror of your filesystem. You can't browse it with ls. You interact with it exclusively through Borg commands — borg list, borg extract, borg mount.
Corn
That's both a strength and a weakness. The strength is that Borg can do things a simple file mirror can't — compression, encryption, append-only mode so ransomware can't delete your old backups. The weakness is that you're dependent on Borg to recover your data. If Borg stops being maintained, you've got a problem.
Herman
That's a fair concern, but Borg has been actively maintained for a decade, and the repository format is well-documented. There are third-party tools that can read Borg repositories. It's not a black box. The other thing about Borg is that its deduplication is excellent. It uses a rolling hash algorithm called Buzhash to split files into variable-length chunks, which means it handles things like log files where new lines are appended at the end — only the final chunk changes, everything else is already stored. Compare that to fixed-size chunking, where appending one byte at the start of a file shifts everything and forces re-upload of the entire file. Borg handles that gracefully.
Corn
For Daniel's use case — backing up an Ubuntu Docker host — Borg has one feature that's particularly relevant. You can mount a Borg archive as a FUSE filesystem. So if you need to restore a single Docker volume or a single config file from three weeks ago, you don't have to extract the entire backup. You just mount the archive, browse to the file, and copy it out.
Herman
Borg mount is one of those features that you don't appreciate until you need it, and then it's the difference between a five-minute restore and an hour of waiting. Now, Borg's main limitation is that it's designed for a single machine backing up to a single repository. If you have multiple servers, you can back them all up to the same repository, but Borg doesn't handle concurrent writes to the same repo — you need to sequence your backups or use separate repos. For a home server setup like Daniel's, that's probably fine. But it's worth knowing.
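The mount workflow Herman describes, as a hedged sketch — the repository path and archive name here are hypothetical, and FUSE mounting needs Borg's optional FUSE dependency installed:

```shell
# Browse one file out of a three-week-old archive without a full extract.
mkdir -p /mnt/borg-browse
borg mount /mnt/nas/borg-repo::myhost-2026-01-10 /mnt/borg-browse
cp /mnt/borg-browse/etc/nginx/nginx.conf /tmp/nginx.conf.restored
borg umount /mnt/borg-browse
```

The archive appears as a read-only filesystem, so anything that works on files — diff, grep, rsync — works on the backup.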
Corn
That's Borg. What about Restic?
Herman
Restic is the newcomer that's gained a lot of traction. It's written in Go, which means it's a single static binary — no Python dependencies, no C extensions to compile. You download one file, put it in your path, and it works. That simplicity extends to its architecture. Restic supports a much wider range of backends than Borg — local directory, SFTP, Amazon S three, Backblaze B two, Google Cloud Storage, Azure Blob Storage, even rclone as a backend. You're not limited to SSH to a machine that also has Restic installed. You can push directly to cloud object storage.
Corn
That's a meaningful difference for the off-site copy Daniel mentioned. With Borg, your off-site strategy typically involves backing up to a local machine over SSH, then syncing that repository somewhere else. With Restic, you can point it directly at Backblaze B two or an S three-compatible bucket and skip the intermediate step.
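A sketch of that direct-to-B2 flow. The bucket name, key values, and password file path are hypothetical placeholders; the environment variable names are the ones Restic's B2 backend reads:

```shell
export B2_ACCOUNT_ID="your-key-id"            # hypothetical credential
export B2_ACCOUNT_KEY="your-application-key"  # hypothetical credential
export RESTIC_REPOSITORY="b2:my-backups:server1"
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic init        # run once to create the repository in the bucket
restic backup / --one-file-system --exclude-file /etc/restic-excludes.txt
```

No intermediate SSH host is involved; the repository lives directly in the object storage bucket.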
Herman
And Restic's repository format is also simpler than Borg's. It stores data as pack files in a content-addressable structure, and the format is well-documented. The trade-off is that Restic is less tunable than Borg — both use content-defined chunking with a rolling hash, so appends and insertions deduplicate gracefully in either tool, but Borg lets you tune chunk sizes per workload, and it had built-in compression years before Restic gained it in version zero point one four. In practice, for most workloads, the difference is small. But if you have a lot of large files that get modified in place — database files, VM disk images — a tuned Borg chunker can be significantly more efficient.
Corn
There's also the pruning question. Both Borg and Restic need periodic maintenance to clean up old chunks that are no longer referenced by any snapshot. Borg's prune can be slow on large repositories. Restic's prune was historically even slower — it was kind of infamous for this — but they've made major improvements in the last couple of years. Restic now has a rewrite of the prune operation that's dramatically faster.
Herman
I saw the benchmarks on that. The old prune on a repository with a few hundred snapshots could take hours. The new implementation brings it down to minutes. It was a complete rearchitecture of how Restic tracks which blobs are in use. For Daniel's use case, if he's keeping daily snapshots for a month and weekly snapshots going back six months, prune performance actually matters. You don't want your maintenance operation taking longer than your backup.
Corn
Then there's Kopia, which is the newest of the three. I know less about this one. What's its deal?
Herman
Kopia is interesting because it was designed from the start with a client-server architecture and a strong focus on performance. It's also written in Go, single binary, supports a wide range of storage backends — S three, B two, Azure, Google Cloud, SFTP, WebDAV, even Rclone. But the big differentiator is that Kopia maintains a local cache of metadata that makes browsing and restoring extremely fast. With Borg and Restic, listing snapshots or browsing files requires querying the remote repository, which can be slow if it's across the network. Kopia keeps an index locally, so operations feel instant.
Corn
That's a quality-of-life thing that matters more than people realize. When you're trying to find which backup has the version of a config file you need, waiting thirty seconds per snapshot to list contents adds up.
Herman
Kopia also handles its maintenance — what it calls "snapshot GC" and "content compaction" — automatically and in the background. You don't need to schedule a separate prune job. It just does it. And its compression is configurable per directory, so you can use heavier compression for log files and skip compression entirely for already-compressed data like images or video.
Corn
If you're starting fresh in twenty twenty-six, and you want to back up your entire Ubuntu Docker host, which one do you pick?
Herman
I think the honest answer is that all three will do the job, and the differences are in the details that matter to your specific setup. If you want maximum storage efficiency and you're backing up to a machine you control over SSH, Borg is probably still the best. Its deduplication is the gold standard. If you want to push directly to cloud object storage and you value simplicity — one binary, no dependencies — Restic is compelling. If you want the fastest browsing and restore experience and you like the idea of maintenance happening automatically, Kopia is worth a serious look.
Corn
For Daniel specifically, he mentioned an NFS mount as a target, and a second copy going off-site to cloud storage. That shapes the recommendation. Borg doesn't natively support cloud object storage — you'd need to use something like BorgBase or rsync dot net, which are hosted Borg repositories, or you'd roll your own with an SSH target and then sync. Restic and Kopia can both write directly to S three or B two, which simplifies the off-site piece.
Herman
NFS is an interesting target choice. All three tools can write to an NFS mount — you just point them at the mount point as a local path. But there's a caveat with Borg. Borg uses file locking on its repository, and NFS locking has historically been... let's say unreliable. If you're the only client writing to that Borg repo, it's probably fine. But if you ever want multiple machines backing up to the same NFS share, you're going to have locking issues. Restic and Kopia handle concurrent access better because they use a different locking model — Restic uses lock files in the repository, Kopia uses a content-addressable approach that naturally avoids conflicts.
Corn
That's the kind of detail that bites you six months in when you add a second server and suddenly your backups start failing mysteriously.
Herman
Those are the worst kind of failures, because they're intermittent and hard to diagnose. Alright, so let's talk about the actual command-line workflow. Daniel specifically asked about CLI tools to sync backup data onto the target. Let's walk through what a typical setup looks like with each of these.
Corn
Let's start with Borg, since that's the one I know best. The basic flow is borg init to create a repository, then borg create to make a backup, then borg prune to clean up old ones. The init step sets up encryption — you pick a passphrase, and Borg derives a key from it. Lose the passphrase, lose your backups. That's not a bug, it's a feature, but it means you need to store that passphrase somewhere safe and separate from the backups themselves.
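That lifecycle, sketched under assumptions: the SSH host, repository path, and passphrase file are all hypothetical, and the passphrase copy must live somewhere other than the server being backed up:

```shell
export BORG_REPO='ssh://backup-host/srv/borg/myhost'
export BORG_PASSCOMMAND='cat /root/.borg-passphrase'

# One-time: create the encrypted repository.
borg init --encryption=repokey-blake2

# Nightly: archive the root filesystem, named by host and date.
borg create --stats --compression zstd \
  --one-file-system \
  --exclude-from /etc/borg-excludes.txt \
  ::'{hostname}-{now:%Y-%m-%d}' /
```

The `{hostname}` and `{now}` placeholders are expanded by Borg itself, so the same command works unchanged on every machine.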
Herman
The create command is where the exclusion patterns come in, which connects directly to Daniel's question about what directories to ignore. Borg uses a pattern file or command-line excludes. A typical borg create command for a full system backup might look like borg create colon colon etc hyphen hostname hyphen now colon colon etc hyphen hostname colon colon backup dot sh — sorry, let me say that naturally. You'd specify your repository, give the archive a name with a timestamp, and then list the paths you want to back up — usually slash, meaning the whole filesystem — and then a pile of exclude patterns.
Corn
That brings us to the exclusion list, which is really the heart of Daniel's question. He wants to know what parts of the Linux filesystem you can safely ignore. And the answer is, there's a standard set of directories that are either virtual filesystems, temporary data, or caches that will be regenerated on a new system.
Herman
Let's go through them. The big one is slash proc. That's a virtual filesystem that exposes kernel and process information. It doesn't exist on disk. If you try to back it up, you're just capturing a snapshot of running system state that will be meaningless on restore. Same with slash sys — that's the kernel's view of devices and drivers. Virtual, regenerated at boot. Slash dev is device files. You don't back those up. The kernel creates them dynamically, and modern systems use devtmpfs or udev to populate them. Slash run is a tmpfs mount on most systems — it's in-memory runtime data that disappears on reboot. Slash tmp is temporary files. By definition, anything in there is expendable.
Corn
Those five — proc, sys, dev, run, tmp — are the ones that are universally safe to exclude. They'll either be empty on restore or populated automatically by the system. Then there's the next tier, which are directories that contain data you probably don't want to back up, but might want to think about. Slash mnt and slash media are mount points. If you're backing up slash, and you have an NFS share mounted at slash mnt slash nas, you might accidentally back up the entire NAS. You want to exclude mount points or use borg create with the hyphen x flag, which tells it to stay on one filesystem and not cross mount points.
Herman
The hyphen x flag is the safer approach. It automatically excludes all mounted filesystems — external drives, NFS mounts, everything that isn't the root filesystem. If you do want to include a specific mount, you can explicitly add it as a separate path. But for a system backup, hyphen x prevents a lot of disasters.
Corn
Then there's slash lost plus found. That's where fsck puts recovered file fragments after a filesystem check. If you're restoring a backup, you don't want to restore corrupted fragments. Slash var cache is package manager caches — apt cache on Debian and Ubuntu, dnf cache on Fedora. Those will be repopulated the first time you run apt update. Slash var tmp is like tmp but supposed to survive reboots — in practice, it's still temporary and you can exclude it.
Herman
Slash var log can go either way. Logs are useful for debugging, but they grow without bound and they're rarely needed after a restore. I'd exclude them from the full system backup and handle them separately if you need log retention for compliance or debugging. On a home server, you probably don't.
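Collected into an exclude file — the path is hypothetical, and Borg reads it with `--exclude-from` (Restic accepts the same list via `--exclude-file`):

```
# /etc/borg-excludes.txt — one pattern per line, # starts a comment
/proc
/sys
/dev
/run
/tmp
/mnt
/media
/lost+found
/var/cache
/var/tmp
/var/log
/var/lib/docker
```

With the one-file-system flag also set, the virtual filesystems and mount points are skipped automatically, and this file becomes belt and braces.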
Corn
Then there's the Docker-specific question. Daniel's running Docker containers. Where does Docker store its data, and how does that interact with a file-based backup?
Herman
This is the part where a naive whole-system backup can go wrong. Docker stores everything under slash var lib docker. That includes images, container filesystems, volumes, and metadata. If you just back up slash var lib docker while containers are running, you're backing up inconsistent state. Docker volumes that are being written to by a database container — you're getting a filesystem snapshot mid-transaction. That's the same problem as backing up a running database without proper tooling.
Corn
The recommendation is, exclude slash var lib docker from the file-level backup, and handle Docker data separately. For volumes, you'd use Docker's volume backup approaches — either stop the container, back up the volume directory, restart, or use a tool that does application-consistent snapshots. For the container configurations — the Docker Compose files, the Dockerfiles, the environment files — those live outside slash var lib docker, in Daniel's home directory or in slash opt or wherever he keeps them, and those do get picked up by the system backup.
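The stop-copy-restart approach Corn mentions, sketched with a throwaway container so the host never touches Docker's internal paths directly. Container and volume names here are hypothetical:

```shell
# Quiesce the application so the volume is consistent on disk.
docker stop myapp-db

# Tar the named volume via a temporary container; the volume is mounted read-only.
docker run --rm \
  -v myapp-data:/data:ro \
  -v /mnt/nas/volumes:/backup \
  busybox tar czf /backup/myapp-data-$(date +%F).tar.gz -C /data .

docker start myapp-db
```

The brief downtime is the price of an application-consistent copy; databases with online backup tooling (pg_dump, mysqldump) can avoid the stop entirely.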
Herman
That's exactly the split. Your infrastructure as code — the Compose files, the configs — gets backed up with the system. Your stateful data — the volumes — gets backed up with application-aware tooling. And your container images don't need to be backed up at all, because you can pull them again from Docker Hub or your private registry. They're rebuildable artifacts.
Corn
Unless you're using custom-built images that aren't pushed anywhere. In that case, you'd want to back up the Dockerfiles and build context, not the built image layers.
Herman
The principle is, back up the source of truth, not the derived artifact. Alright, so we've got our exclusion list. Let's talk about the restore side, because Daniel mentioned the gold standard objective — if the server is physically destroyed, you provision a new Ubuntu box, move your stuff over, and everything resumes working. What does that actually look like?
Corn
It looks like a sequence that you should test before you need it. Step one, fresh Ubuntu install on the new hardware. Step two, install your backup tool — Borg or Restic or Kopia. Step three, restore your backup to a temporary location, not directly over the running system — because if you restore slash etc over a running system's etc, you're going to have a bad time. Step four, selectively restore the directories you need. Slash etc for system configuration. Slash home for user data. Slash var for application state that isn't in Docker volumes. Slash opt if you installed things there. And any custom directories you created.
Herman
The selective restore is key, and it's where the FUSE mount feature really shines. You mount the backup archive, and you can browse it like a live filesystem. You can diff it against your current system. You can copy individual files. You don't have to do an all-or-nothing restore. For etc, you might want to restore specific config files rather than the entire directory, because the fresh Ubuntu install has its own etc with current defaults.
Corn
Then step five, restore your Docker volumes from their separate backups. Step six, run docker compose up and verify everything works. The whole process, if you've practiced it, should take maybe an hour. If you haven't practiced it, it'll take a day and you'll discover that you forgot to back up the SSL certificate in slash etc slash letsencrypt or the systemd unit file you hand-wrote six months ago.
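The selective-restore steps above, as a hedged sketch on a fresh install. The archive name is hypothetical, and which directories you rsync back is exactly the judgment call the hosts describe:

```shell
mkdir -p /mnt/restore
borg mount ::myhost-2026-01-10 /mnt/restore

# Review config drift before copying anything over the fresh system.
diff -r /mnt/restore/etc /etc | less

# Restore user data and hand-written units; leave distro defaults alone.
rsync -a /mnt/restore/home/ /home/
rsync -a /mnt/restore/etc/systemd/system/ /etc/systemd/system/

borg umount /mnt/restore
systemctl daemon-reload
```

The diff step is the whole point of restoring to a mount rather than extracting over the live root.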
Herman
Let's talk about Let's Encrypt specifically, because that's a gotcha. The certificates live in slash etc slash letsencrypt, and the files under live are symlinks into an archive directory with version numbers in the filenames. If you restore without preserving symlinks, they break. And Certbot stores some state in slash var lib slash letsencrypt too. The safer approach is to exclude Let's Encrypt from the backup and just request new certificates after restore. But if you're running a service that can't tolerate even a few minutes of certificate downtime during restore, you need to back up the entire letsencrypt directory structure — symlinks included — and test the restore.
Corn
That's the kind of detail that separates a backup strategy from a backup wish. Let's talk about scheduling. Daniel mentioned cron jobs and then corrected himself to systemd timers, which is the modern way to do it on Ubuntu. What does a well-designed backup schedule look like?
Herman
For a home server, I like daily backups with a retention policy that keeps the last seven dailies, four weeklies, and six monthlies. That gives you fine-grained recovery for the past week, weekly granularity for the past month, and monthly going back six months. For each of the three tools, the implementation is slightly different. Borg has borg prune with the keep-daily, keep-weekly, keep-monthly flags. Restic has restic forget with similar flags. Kopia has a policy system where you define retention rules per snapshot source.
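That seven-four-six policy in each tool's syntax, assuming repository config is already in place (and noting that modern Borg splits space reclamation into a separate compact step):

```shell
# Borg: drop archives outside the policy, then reclaim space.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
borg compact

# Restic: forget and prune in one pass.
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# Kopia: set the retention policy once; background maintenance enforces it.
kopia policy set --global --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```

With Kopia there is no scheduled prune job at all, which is exactly the automatic-maintenance difference discussed earlier.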
Corn
The actual scheduling mechanism — systemd timers are the right answer on modern Ubuntu. You create a service unit that runs your backup script, and a timer unit that triggers it on a schedule. The timer can be set to run daily at a specific time, and you can add randomized delay so all your servers don't hammer the backup target simultaneously. Systemd timers also handle missed runs — if the server was off during the scheduled time, it'll run the backup when it boots.
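A sketch of that timer pair — unit names, the script path, and the three a.m. slot are all hypothetical choices:

```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Nightly server backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Schedule nightly server backup

[Timer]
OnCalendar=*-*-* 03:00:00
RandomizedDelaySec=30min
Persistent=true

[Install]
WantedBy=timers.target
```

Persistent equals true is the missed-run behavior Corn mentions, and RandomizedDelaySec spreads the load across machines. Enable it with systemctl enable, with the now flag, on backup dot timer.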
Herman
The backup script itself should handle the case where a previous backup is still running. Borg and Restic both use repository locking. If you try to run a backup while another one is in progress, it'll fail with a lock error. Your script should catch that and either wait or skip. A simple flock on a lock file before invoking the backup command is a clean way to handle it.
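A minimal sketch of that flock guard; the lock file path is arbitrary and the echoed line stands in for the real borg or restic invocation:

```shell
#!/bin/sh
# Lock-guarded backup wrapper: a second invocation exits cleanly
# instead of tripping over the repository lock.
set -eu

exec 9>/tmp/backup.lock       # open (and create) the lock file on fd 9
if ! flock -n 9; then         # -n: give up immediately rather than queue
    echo "previous backup still running, skipping this run"
    exit 0
fi

# Real backup command goes here, e.g. borg create or restic backup.
echo "backup running"
```

The lock is released automatically when the script exits, so there is no stale-lock cleanup to write.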
Corn
Daniel also mentioned the test restore, which is the thing everyone knows they should do and almost nobody actually does. What does a meaningful test restore look like?
Herman
The minimum viable test is, once a quarter, spin up a VM or a container that mimics your server, restore your latest backup, and verify that your services start. You don't need to do a full hardware restore to a physical machine. A VirtualBox VM on your laptop is enough to catch the most common failure modes — missing config files, broken symlinks, permission issues. The things that break are rarely the backup tool itself. They're the assumptions you made about what you were backing up.
Corn
The test restore forces you to document the restore procedure. The first time you do it, you'll discover gaps. You'll realize you forgot to back up the SSH host keys, so after restore, everyone connecting to the server gets a host key mismatch warning. You'll realize your database container needs to be restored in a specific order. You'll realize the backup of your Docker volumes is six months old because the volume backup script had a silent failure.
Herman
SSH host keys are a great example. They live in slash etc slash ssh, and if you restore them, your users don't get warnings. If you don't, they do. It's not a disaster, but it's friction. And for a home server where you're the only user, maybe you don't care. But if you're running services for family members, those host key warnings erode trust.
Corn
Let's circle back to the agentic AI angle that Daniel mentioned at the top, because I think there's something new here that changes the backup conversation. He said we've reached a golden point where it's becoming fairly easy to set up backups because AI agents can do the hard work. And I think that's right, but not in the way people might assume.
Herman
Say more about that.
Corn
The obvious use case is, you ask an agent to write you a backup script, and it spits out a borg create command with the right excludes and a systemd timer unit. That's useful, but it's table stakes. The more interesting thing is that agents change the economics of testing and validation. Writing the backup script was never the hard part. The hard part was maintaining it, verifying it, and actually doing the test restore. An agent can be told, every Sunday, spin up a test VM, restore last night's backup, start the services, and report back which ones are healthy. That's the kind of thing that was theoretically possible with scripting but practically too annoying to set up. An agent makes it a one-sentence request.
Herman
That's a compelling vision. And it connects to something I've been thinking about with these backup tools. All three — Borg, Restic, Kopia — have a check command that verifies the integrity of the repository. Borg check, restic check, kopia snapshot verify. They read through the data and confirm that all the chunks are present and uncorrupted. But almost nobody runs them regularly. An agent that runs the check weekly and alerts you if something's wrong — that's a meaningful improvement over the status quo.
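The three check commands Herman names, as a weekly-cron-sized sketch (repository config assumed to be set up already):

```shell
# Borg: re-read and re-hash every chunk, not just the indexes.
borg check --verify-data

# Restic: spot-check a different 5% of the pack files on each run.
restic check --read-data-subset=5%

# Kopia: verify that every snapshot's contents are present and readable.
kopia snapshot verify
```

The subset flag is the practical compromise for cloud repositories, where re-reading everything weekly would cost real egress bandwidth.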
Corn
The status quo being, you discover your backups are broken at the exact moment you need them. Which is the worst possible time to discover that.
Herman
The agent can do one more thing that humans are bad at. It can track the size and growth rate of your backups over time and flag anomalies. If your daily backup suddenly jumps from two hundred megabytes to fifty gigabytes, something changed. Maybe a log file went rogue. Maybe you accidentally downloaded a dataset into a directory that's being backed up. The agent notices and asks you about it, rather than you discovering three months later that your backup repository is full and backups have been silently failing.
Corn
The tooling recommendation, if I'm synthesizing all of this for Daniel's specific setup — Ubuntu Docker host, wants file-based incremental, wants to push to NFS and also have off-site cloud storage — I think Restic or Kopia are the strongest fits. Borg is excellent but the cloud storage piece requires an extra hop. With Restic, you can run one backup job targeting your NFS mount, and a second job — or an rclone sync — targeting Backblaze B two or S three. With Kopia, you can configure multiple storage backends and have it replicate automatically. The exclusion list is proc, sys, dev, run, tmp, mnt, media, lost plus found, var cache, var tmp, and var lib docker — with Docker volumes handled separately. Schedule it with systemd timers, test the restore quarterly, and let an agent handle the verification and anomaly detection.
Herman
One more thing about the off-site copy. Daniel mentioned a second copy going to cloud storage. The three-two-one rule still applies — three copies of your data, on two different media, one off-site. The NFS mount is your second copy. The cloud storage is your off-site. Your live server is the first copy. But there's a subtlety with deduplicating backup tools. If your NFS repository gets corrupted — bit rot, accidental deletion, ransomware — and you've been syncing it to the cloud, you might have synced the corruption too. So the cloud copy should be a separate backup job, not a sync of the NFS repository. Restic and Kopia make this easy because they can write to multiple backends natively. You run two separate backup commands, or you use Kopia's replication feature, and you have two independent repositories.
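Two independent jobs rather than a sync, sketched with Restic — the repository locations, bucket name, and password file are hypothetical:

```shell
export RESTIC_PASSWORD_FILE=/root/.restic-pass

# Job one: local copy onto the NFS mount.
RESTIC_REPOSITORY=/mnt/nas/restic-repo \
  restic backup / --one-file-system --exclude-file /etc/restic-excludes.txt

# Job two: a completely separate repository in cloud object storage.
RESTIC_REPOSITORY=b2:offsite-bucket:myhost \
  restic backup / --one-file-system --exclude-file /etc/restic-excludes.txt
```

Because each repository is written independently from the live filesystem, corruption or ransomware in one can never propagate to the other.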
Corn
Independence is the key word. If your NFS repo and your cloud repo are independent, a bug in one tool or corruption in one storage backend doesn't take out both copies. It's slightly more storage and slightly more bandwidth, but it's the difference between a backup strategy and backup theatre.
Herman
I'm writing that down.

And now: Hilbert's daily fun fact.

Hilbert
In antiquity, the Gulf of Tadjoura off Djibouti was a seasonal gathering ground for whale pods whose songs, if measured in modern decibel units, would register at approximately one hundred eighty-eight decibels — louder than a jet engine at takeoff, and capable of being heard across twenty-three kilometers of open water.
Corn
...that's a jet engine underwater.
Corn
The question I want to leave listeners with is this. We've talked about the tools and the directories and the scheduling. But the real shift that agentic AI enables isn't just easier setup. It's continuous validation. For the first time, we can automate the part of backups that actually matters — proving they work, not just hoping they do. And I think that changes the calculus on what's worth backing up. If verification is cheap, you can back up more, because you'll actually know if something's broken before you need it.
Herman
That's a good place to land. Thanks to our producer Hilbert Flumingtop. This has been My Weird Prompts. If you want more episodes like this one, find us at myweirdprompts dot com or search for My Weird Prompts on Spotify.
Corn
Until next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.