Daniel sent us this one — he's building containers on a beefy x86 machine and deploying to Raspberry Pis, which means he's wrestling with the multi-architecture problem. But the real question is about private container registries. He's looking at three options: Docker Hub with its stingy free tier, GitHub Container Registry with its unlimited repos but security question marks, and self-hosting as the pure play. There's a lot to unpack here.
Oh, this is a great one. And before we dive in — quick note, today's script is being generated by DeepSeek V four Pro. So if I sound unusually coherent, that's why.
I was going to say, you seem suspiciously well-rested.
Alright, let's get into the build workflow first, because that's the part most people gloss over. If you're building on a strong machine and targeting a Raspberry Pi, you can't just run docker build and call it a day. Your x86 machine produces amd64 images. The Pi needs arm64 — or arm/v7 if you're on an older Pi or running a 32-bit OS. The tool that bridges this gap is Docker Buildx.
Buildx is what, exactly? I've seen the commands but I've never had to use it myself.
It's the Docker CLI plugin that drives BuildKit, Docker's multi-platform build engine. You create a builder that supports multiple architectures, typically by pulling in QEMU for emulation, and then you run a single command — docker buildx build --platform linux/amd64,linux/arm64 --tag yourimage:latest --push — and it produces what's called a manifest list. When the Pi pulls that image, Docker automatically selects the arm64 variant. When your x86 server pulls it, it gets amd64. Same tag, same registry, zero confusion.
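To make that concrete, here's a minimal sketch of the whole flow — the registry address and image name are placeholders, and the binfmt step is only needed if QEMU isn't already registered on the build machine:

    # Register QEMU handlers so the builder can emulate other architectures
    docker run --privileged --rm tonistiigi/binfmt --install all

    # Create and select a builder that can target multiple platforms
    docker buildx create --name multiarch --use

    # Build amd64 and arm64 variants and push them under one tag (a manifest list)
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      --tag registry.example.com/yourimage:latest \
      --push .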
The tag is the same, but the actual bits are architecture-specific. That's elegant. How slow is the QEMU emulation for ARM builds on x86?
That's where it gets interesting. It works, but it's slow. Emulating ARM on x86 with QEMU is not a fast path. For a simple Go binary or a small Python app, you probably won't notice. But if you're compiling anything substantial — a Rust project, a C++ application, anything with heavy dependencies — the ARM build can take several times longer than the native amd64 build. That's why a lot of serious users end up moving to native ARM runners in CI — GitHub Actions offers them now — or building directly on an Apple Silicon Mac, since those are ARM-native.
The "build on strong machine, push to Pi" workflow works at small scale, but it has friction. That's worth flagging upfront. Let's start with Docker Hub, since it's the default everyone knows.
Docker Hub's free tier is brutally simple. You get one private repository. That's it. Public repos are unlimited, but if you want to keep your containers private, you get exactly one. And there are pull rate limits — two hundred pulls per six hours for authenticated free users, and much stricter limits for anonymous pulls. If you're running multiple Pis or you're pulling images frequently in a CI pipeline, you can hit those limits fast.
One private repo. That's almost comical. What does it cost to get more?
The Pro tier starts at five dollars a month and gives you unlimited private repos. Team and Business tiers go up from there. It's not expensive in absolute terms, but for a solo developer or someone just tinkering with a Raspberry Pi cluster, five dollars a month for something that feels like it should be table stakes can sting. Docker Hub does have advantages though — it's purpose-built for Docker images, the search and discovery experience is excellent, and they now host curated catalogs for AI models, MCP servers, and hardened images. It's the "just works" option.
The storage situation?
Docker recommends keeping repositories under a gigabyte, and certainly under five. Push past that and you start running into problems. So if your images are large — and multi-architecture images with multiple variants can add up — Docker Hub might not be the right home anyway, even if you pay.
Alright, so Docker Hub is simple but restrictive. Now let's talk about the one that markets itself as the generous alternative — GitHub Container Registry. Unlimited private repos, right?
This is where the "unlimited" marketing gets tricky. Yes, GHCR gives you unlimited private repositories. But the storage across all GitHub Packages — that includes GHCR, Actions artifacts, and caches — is capped at five hundred megabytes on the free tier. Five hundred meg. And you get one gigabyte of monthly data transfer out. Transfers within GitHub Actions are unlimited, but if you're pulling images to a Raspberry Pi sitting on your desk, that's data transfer out, and it counts against your cap.
Unlimited repos, but a five hundred meg storage cap. That's a hard contradiction. You could have a hundred private repos, each with five megabytes of images, and you're done. That's not unlimited in any meaningful sense.
A lot of developers discover this the hard way — they push a few multi-architecture images, each with two or three variants, and suddenly they're hitting the storage ceiling with no warning beyond a billing page they probably didn't read. Docker Hub's "one private repo" is honestly more transparent. You know exactly what you're getting. GHCR's "unlimited repos" is technically true but practically misleading.
Then there's the security angle. Daniel specifically flagged security concerns with GHCR.
So there was a notable incident in twenty twenty-two. A security researcher named Jason Hall — he's at Chainguard, which does supply chain security — discovered a bug in GHCR that was leaking the names of private repositories through HTTP response headers. GHCR is backed by a single Azure Blob Storage bucket, and when blobs were copied internally, Azure's metadata was leaking the source repository name. Even for private repos.
Someone could discover that a private repo existed and what it was called, just from the headers?
They couldn't access the image layers — you'd still need the SHA256 hash for that — but knowing that a repo called "acmecorp-production-deploy" exists and is private tells an attacker where to look. It's reconnaissance. Phishing targets, credential-stuffing targets, places to search for accidentally exposed secrets. The bug was reported in March twenty twenty-two, confirmed within three days, fixed by May, and the disclosure was cleared in January twenty twenty-three. GitHub paid a six hundred and seventeen dollar bounty.
Six hundred and seventeen dollars for a data leak from private repos.
And the broader question is structural. GHCR uses a single Azure Blob Storage bucket for all customers. Multi-tenant architecture always carries this risk — metadata leaking between tenants in ways the provider didn't anticipate. The bug was fixed, but it makes you wonder: what other metadata channels exist that haven't been discovered yet? If you're a solo developer with a hobby project, this probably doesn't matter. If you're building something where the existence of the project itself is sensitive, it very much does.
GHCR has authentication quirks too, right? I've seen people complain about pulling private images into Kubernetes.
There are persistent reports of "unauthorized" errors when pulling private GHCR images into Kubernetes clusters — minikube especially. You need to configure imagePullSecrets with base64-encoded authentication, and the setup is finicky. It's not broken, but it's not seamless either. Docker Hub's authentication is comparatively straightforward because it's the default everywhere.
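For anyone hitting that, the usual fix looks roughly like this — the username, token, and secret name are placeholders, and the token needs the read:packages scope:

    # Create a pull secret for GHCR
    kubectl create secret docker-registry ghcr-pull \
      --docker-server=ghcr.io \
      --docker-username=YOUR_GITHUB_USERNAME \
      --docker-password=YOUR_GITHUB_TOKEN

    # Attach it to the default service account so pods can pull without per-pod config
    kubectl patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "ghcr-pull"}]}'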
Alright, so Docker Hub is honest but restrictive, GHCR is generous on paper but stingy in practice with a security scar. That brings us to option three: self-hosting. What does the landscape look like?
Three main open-source contenders. The simplest is the CNCF Distribution registry — it used to be called Docker Distribution, sometimes just called "The Registry." It's the lightweight OCI-compliant base that powers Docker Hub itself. You run it as a single container, point it at a storage backend — S3, NFS, Google Cloud Storage, Azure Blob — and you're done. It's minimal, it's fast, and it has almost no features out of the box. No web UI, no access control beyond basic auth, no vulnerability scanning.
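To give a sense of how minimal it is, here's a sketch with a local directory as the backend — paths and names are just examples, and S3 or the other backends are configured the same way through REGISTRY_STORAGE environment variables:

    # Run the registry with local disk as the storage backend
    docker run -d --name registry \
      -p 5000:5000 \
      -v /srv/registry-data:/var/lib/registry \
      registry:2

    # Tag and push an image to it
    docker tag yourimage:latest localhost:5000/yourimage:latest
    docker push localhost:5000/yourimage:latest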
It's the "I just need a bucket for my images" option.
If you want more, there's Harbor. Harbor is a CNCF-graduated project — full-featured, with policy-driven role-based access control, vulnerability scanning, image replication, and a proper web UI. They recently launched something called Harbor Satellite, which is designed specifically for edge and IoT environments. You run a lightweight local registry at the edge, and it syncs with a central Harbor instance, which is perfect if you have inconsistent network connectivity.
That edge use case is actually compelling for Raspberry Pi deployments. If your Pis are in the field with spotty connections, having a local registry cache could be the difference between a working deployment and a failed one.
And the third option is Sonatype Nexus Repository. It's a multi-format artifact manager — Docker, Maven, PyPI, you name it. If you're already using Nexus for other artifacts, adding Docker registry support is natural. It also has image signing support, which is nice for supply chain security.
What about the practical setup? Daniel's a developer, not a full-time ops person. Can he actually get a self-hosted registry running in a reasonable amount of time?
There's a great guide from Thomas Bandt — March twenty twenty-six — that walks through a fifteen-minute setup. You deploy the registry:2 image behind Traefik as a reverse proxy, with Let's Encrypt for TLS certificates, add htpasswd authentication for basic access control, and pair it with Watchtower for automatic container updates on the deployment server. The whole stack — registry, Watchtower, Traefik — runs on a single machine.
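Just to sketch the authentication piece — Traefik and Let's Encrypt are their own config, but the registry's built-in htpasswd auth looks roughly like this, with the username and password as placeholders:

    # Generate a bcrypt htpasswd file (the registry requires bcrypt)
    mkdir -p auth
    docker run --rm --entrypoint htpasswd httpd:2 -Bbn daniel 'changeme' > auth/htpasswd

    # Run the registry with basic auth enabled
    docker run -d --name registry \
      -p 5000:5000 \
      -v "$(pwd)/auth:/auth" \
      -v /srv/registry-data:/var/lib/registry \
      -e REGISTRY_AUTH=htpasswd \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      registry:2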
Fifteen minutes sounds optimistic. What's the catch?
The catch is everything that comes after the fifteen minutes. Garbage collection is the big one. The registry keeps every pushed image forever unless you manually delete via the API and run bin/registry garbage-collect. There's no built-in retention policy. You push an image, you push a new version, the old layers sit there indefinitely. Over time, your storage fills up with orphaned blobs, and you have to remember to clean house.
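The manual cleanup dance goes roughly like this — note that deletes have to be enabled on the registry (REGISTRY_STORAGE_DELETE_ENABLED=true), and the hostname, image name, and credentials are placeholders:

    # Look up the digest for the tag you want to drop
    DIGEST=$(curl -sI -u daniel:changeme \
      -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
      https://registry.example.com/v2/yourimage/manifests/old-tag \
      | awk -F': ' 'tolower($1)=="docker-content-digest" {print $2}' | tr -d '\r')

    # Delete the manifest by digest
    curl -s -u daniel:changeme -X DELETE \
      "https://registry.example.com/v2/yourimage/manifests/$DIGEST"

    # Reclaim the now-orphaned blobs
    docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml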
It's not just "set it and forget it."
Not at all. You're also on the hook for TLS certificate renewal — Let's Encrypt automates most of it, but when it breaks, you're the one fixing it at 2 AM. Backups are your responsibility. Disaster recovery is your problem. If the server dies, your registry is gone unless you've been backing up the storage backend. And bandwidth costs — if you're self-hosting on a cloud VM, you're paying for every gigabyte of image pulls. On a Raspberry Pi at home, that's less of an issue, but then you're dealing with home internet reliability.
The real question is: for a solo developer with a Raspberry Pi or two, is self-hosting worth the operational overhead, or is paying five bucks a month for Docker Hub Pro actually the smarter move?
I think the answer depends on what you value. If you want zero operational burden, pay the five dollars. Docker Hub Pro gives you unlimited private repos, removes the pull rate limits, and you never think about garbage collection, TLS certificates, or backups. Five dollars a month is probably less than the value of your time if you spend even one hour a month maintaining a self-hosted registry.
That's the pragmatic take. But there's a counterpoint — self-hosting removes external dependencies and vendor lock-in. If Docker Hub changes its pricing, or has an outage, or decides to deprecate something, you're not affected. You own the infrastructure.
That's true, and it matters more at scale. If you're running a small business with a dozen Pis in the field, the calculus shifts. A self-hosted registry with Harbor Satellite at the edge gives you resilience that a cloud registry can't match. And if you're dealing with sensitive code — proprietary algorithms, unreleased products — the GHCR metadata leak is a cautionary tale about what can go wrong when your private artifacts live on someone else's multi-tenant infrastructure.
Let's talk about that GHCR leak a bit more, because I think it's the most interesting part of this whole comparison. The bug was in Azure Blob Storage metadata propagation, right? When blobs were copied internally, the source repository name leaked. That's not a GHCR application bug — that's a cloud infrastructure behavior that GitHub apparently didn't anticipate.
And the implication is that when you use any cloud-hosted registry, you're trusting not just the registry software but the entire underlying cloud stack — storage, networking, metadata services — to properly isolate your data. The GHCR bug was fixed, but it was present for at least six weeks before it was discovered, and it took an external researcher to find it. How many similar bugs exist in ECR, in Artifact Registry, in ACR, that nobody's found yet?
That's the uncomfortable question. And it's not unique to registries — it's the fundamental tradeoff of cloud services. But registries feel different because they hold your deployable artifacts. If someone knows what private repos you have, they know what you're building, what your infrastructure looks like, what dependencies you're pulling in. It's a reconnaissance goldmine.
The supply chain angle is worth mentioning too. Every registry — cloud or self-hosted — is vulnerable to typosquatting, where someone publishes a malicious image with a name similar to a popular one. Cached image staleness is another issue — you pull an image, it gets cached, and you don't realize there's a newer version with a security patch. These problems exist regardless of where you host.
Security-wise, self-hosting doesn't magically solve everything. It removes the multi-tenant metadata leak risk, but you still have to worry about access control, image signing, and supply chain integrity.
And if you're self-hosting on a Raspberry Pi that's also running other services, your attack surface might actually be larger than if you used a managed registry. Docker Hub has a security team. Your Pi has...
Alright, let's get practical for Daniel and anyone else in this situation. What's the actual recommendation?
I'd break it into three tiers. If you're a solo developer with one or two projects, just pay for Docker Hub Pro. Five dollars a month, zero maintenance, and you get the best Docker-native experience. The pull rate limits go away, you get unlimited private repos, and you never think about garbage collection. It's the boring, correct answer.
If you're cost-sensitive or philosophically opposed to paying for things that used to be free?
Then self-host the CNCF Distribution registry. The fifteen-minute Traefik-plus-Watchtower setup is real and it works. But budget time for maintenance — set a calendar reminder to run garbage collection monthly, make sure your TLS certificates are auto-renewing, and have a backup strategy. If you're already running a home server or a Pi cluster, this is a natural addition. If you're setting up a server just for the registry, the electricity and time probably cost more than five dollars a month.
Where does GHCR fit?
GHCR makes sense if you're already deep in the GitHub ecosystem and your images are small. If all your containers are under a hundred megabytes and you're using GitHub Actions for CI/CD, the storage cap won't bite you and the data transfer is free within Actions. The unlimited private repos are genuinely useful if you have a lot of small projects. But the five hundred meg storage cap is a hard ceiling, and the twenty twenty-two leak is a reminder that "free" sometimes means "you're not the customer, you're the product testing the infrastructure."
There's one more angle I want to explore — the multi-architecture build pipeline itself. You mentioned earlier that QEMU emulation is slow for ARM builds on x86. What's the alternative if you're building frequently?
The serious approach is to use native ARM runners. GitHub Actions now has ARM runners available. You can set up a matrix build in your CI pipeline — amd64 on x86 runners, arm64 on ARM runners — and then merge the manifests with docker buildx imagetools create. The builds are fast because they're native, and you can push directly to whatever registry you're using.
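The stitching step at the end is a one-liner — assuming each runner pushed a per-architecture tag, it's something like:

    # Combine the per-arch images into one multi-arch tag (a manifest list)
    docker buildx imagetools create \
      --tag registry.example.com/yourimage:latest \
      registry.example.com/yourimage:latest-amd64 \
      registry.example.com/yourimage:latest-arm64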
If you're not using GitHub Actions?
Apple Silicon Macs are ARM-native and increasingly common. If you have an M1, M2, or M3 Mac, you can build ARM images natively at full speed. The amd64 build will be emulated and slow, but for Raspberry Pi deployments, you probably only care about the arm64 variant. You can build single-architecture images and skip the multi-platform complexity entirely.
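So on the Mac the whole build step collapses to something like this — image name and registry as placeholders again:

    # Native arm64 build on Apple Silicon — no QEMU, no manifest list needed
    docker buildx build --platform linux/arm64 \
      --tag registry.example.com/yourimage:latest --push .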
That's actually a simpler workflow for the solo developer case. Build on your Mac, push to your registry, pull on the Pi. No emulation, no manifest lists, no complexity.
The multi-architecture workflow is elegant when you need to support both amd64 and arm64 from a single tag, but if your entire deployment target is Raspberry Pis, you're overcomplicating things by building amd64 variants at all. Just build arm64 and move on.
The advice might be: simplify before you optimize. If everything's running on ARM, don't build x86 images. If you have one or two private images, Docker Hub's limitations might not actually matter. If GHCR's storage cap is bigger than your total image size, the "unlimited" trap doesn't affect you.
And that's the thing about infrastructure decisions — the best choice depends on your actual numbers, not the marketing. Measure your image sizes. Count your private repos. Estimate your pull frequency. Then pick the option whose constraints you'll never notice, rather than the one with the best-sounding feature list.
One more thing on self-hosting. You mentioned garbage collection as the big operational pain point. Are there tools that automate this?
Harbor has built-in retention policies and garbage collection scheduling. That's one of the reasons to choose Harbor over the bare registry:2 image if you go the self-hosted route. You can set policies like "keep the last ten versions of each image" or "delete images older than thirty days," and Harbor handles the cleanup automatically. The CNCF Distribution registry requires you to script this yourself — call the API to list and delete tags, then run the garbage collection binary. It's not hard, but it's another thing to maintain.
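The do-it-yourself version isn't long, it's just something you have to remember to run — list tags through the API, delete the ones you don't want the way we described earlier, and put the collection pass on a schedule. A sketch, with placeholder credentials and names:

    # See what tags an image currently has
    curl -s -u daniel:changeme https://registry.example.com/v2/yourimage/tags/list

    # Crontab entry: reclaim orphaned blobs on the 1st of every month at 3 AM
    0 3 1 * * docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml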
Harbor is the "I want features but I still want to self-host" sweet spot.
Yes, with the caveat that Harbor is heavier. The registry:2 image is tiny — you can run it on a Raspberry Pi without breaking a sweat. Harbor needs more resources. For a single developer with a Pi cluster, Harbor might be overkill. For a small team or a business, it's probably the right call.
Alright, let's summarize the landscape. Docker Hub: one private repo on free, five bucks a month for unlimited, best Docker-native experience, pull rate limits on free tier. GHCR: unlimited private repos but five hundred meg storage cap, one gig monthly data transfer, past security incident involving metadata leakage, authentication quirks with Kubernetes. Self-hosted: no external dependencies, no rate limits, full control, but you own the maintenance, garbage collection, backups, and TLS.
The multi-architecture build workflow: use Buildx with QEMU for occasional builds, native ARM runners or Apple Silicon for frequent builds, and consider whether you even need multi-architecture if your entire fleet is ARM.
That's a clean summary. Daniel, hopefully that gives you enough to make the call. And for everyone else — measure your actual usage before you optimize for constraints you don't have.
Now: Hilbert's daily fun fact.
The national flag of Nepal is the only non-quadrilateral national flag in the world. It consists of two overlapping triangular pennants.
Where does this leave us practically? Let's give listeners a decision framework they can apply to their own projects.
Start by answering three questions. First, how many private container images do you actually have? If it's one or two, Docker Hub's free tier is fine. If it's more, move to question two.
Question two is total storage?
Add up the size of all your images across all architectures. If it's under five hundred megabytes, GHCR will work — just be aware of the security history and the authentication quirks. If it's over five hundred megs, GHCR's free tier is a non-starter.
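One quick way to check where you stand for an image that's already been pushed — the repository name here is a placeholder — is to inspect its manifests; the size fields are compressed bytes, which is what registry quotas count:

    # Per-architecture compressed layer sizes for a pushed multi-arch image
    docker manifest inspect --verbose ghcr.io/daniel/app:latest | grep '"size"'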
Question three is your tolerance for operational overhead.
If the answer to "do I want to manage a server?" is no, pay the five dollars for Docker Hub Pro and move on with your life. If you're already running a home server and you enjoy this stuff, self-host with registry:2 or Harbor. The fifteen-minute setup is real, but the ongoing maintenance is also real.
I think there's also a fourth question that's worth asking: how sensitive is the code you're containerizing? If the existence of the project itself is confidential, the GHCR metadata leak — even though it's fixed — is a reminder that cloud registries have attack surfaces you can't control. Self-hosting eliminates that class of risk entirely.
That's fair. And for most hobby projects, it doesn't matter. But if you're building something for a client under NDA, or you're working on a prototype that hasn't been announced, the operational burden of self-hosting might be worth the confidentiality guarantee.
The decision tree is: count your repos, measure your storage, assess your sensitivity, and be honest about your willingness to do server maintenance. That's a better framework than "which registry is best."
One thing I want to emphasize — none of these options are bad. Docker Hub is solid. GHCR works for its target use case. Self-hosting is empowering and educational. The worst outcome is analysis paralysis where you spend more time choosing a registry than building your application.
That's the real trap. Pick one, push your images, deploy to your Pi, and iterate. You can always migrate later — it's just a docker pull and docker push away.
Registries are commodity infrastructure at this point. The switching cost is low. Don't overthink it.
One last thought — Daniel mentioned he's building on a strong machine and deploying to a lightweight server. The registry question is important, but the build workflow is equally important. If you're cross-compiling or emulating, get that pipeline solid before you worry about where the images live.
A slow, flaky build pipeline will cause more pain than a suboptimal registry choice. Invest in getting Buildx configured correctly, or set up native ARM builds on Apple Silicon if you have it. The registry is just storage. The build is where the complexity lives.
Alright, I think we've covered this from every angle. Docker Hub, GHCR, self-hosting — the tradeoffs are clear, the decision framework is practical, and the build workflow context makes it actionable. Thanks to Daniel for the prompt, and thanks to Hilbert Flumingtop for producing.
This has been My Weird Prompts. If you enjoyed this episode, leave us a review wherever you listen — it helps other people find the show.
We're back next time with whatever Daniel throws at us.