#2477: Stop Polling: Push-to-Deploy for Solo Devs

Why your cron job is obsolete. Push-to-deploy with GitHub Actions and deploy keys — the simplest setup that actually works.

Episode Details

Episode ID: MWP-2635
Published:
Duration: 26:18
Audio: Direct link
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The Push-to-Deploy Mental Model

Deployment is just two operations: copy new bits somewhere, then restart the process. Everything else — staging environments, test gates, approval workflows — is quality assurance. It's valuable, but it's not deployment.

Yet most developers reach for Jenkins, GitLab CI, or CircleCI the moment they need to push code to a server. These are great tools, but they're massive overkill for a solo developer or a team of three. A Continuous Delivery Foundation survey found that 68% of small teams using Jenkins describe it as overly complex for their needs. These are teams of fewer than ten people running a tool designed for enterprises with dedicated infrastructure teams.

The Inversion That Changes Everything

The key insight is event-driven deployment. Instead of a cron job polling GitHub every five minutes asking "is there new code yet?", the push from your laptop becomes the trigger. GitHub Actions acts as the messenger that knocks on your server's door and says, "new code, go fetch it."

This is the deployment version of realizing you don't need to keep checking if the pizza's arrived — you just wait for the doorbell. The checking is wasted energy.

The Minimal Viable Setup

The simplest thing that actually works: a GitHub Actions workflow that fires on push to main, SSHs into your production server, runs git pull and docker compose up --build -d. That's it. No agents, no polling, no Kubernetes, no dedicated CI server.

The actual workflow file is remarkably short:

  • Name: Deploy
  • Trigger: push to main branch
  • Runner: ubuntu-latest (free, ephemeral)
  • Action: SSH into server, pull latest code, rebuild and restart containers
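
Written out, a minimal sketch of that file might look like the following. The DEPLOY_KEY secret, the SERVER_HOST secret, the deploy user, and the /app path are placeholder names — substitute your own:

```yaml
# .github/workflows/deploy.yml -- minimal push-to-deploy sketch
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: SSH and deploy
        run: |
          # Write the private key from the repo secret to disk
          echo "${{ secrets.DEPLOY_KEY }}" > key
          chmod 600 key
          # Pull the latest code, then rebuild and restart the containers
          ssh -i key -o StrictHostKeyChecking=accept-new \
            deploy@${{ secrets.SERVER_HOST }} \
            'cd /app && git pull && docker compose up --build -d'
```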

Four lines of actual logic, if you strip the YAML scaffolding. And that replaces what people think they need — a Jenkins server, an agent pool, a webhook receiver, maybe a dedicated deployment user with sudo access.

How Deploy Keys Work

The authentication layer is a pair of SSH keys. The first is the key the Actions workflow uses to reach your server: its public half goes in the server's authorized_keys file, its private half is stored as a GitHub secret, and when the workflow runs it pulls that secret, authenticates the SSH connection, and executes your commands. The second — needed only if the repo is private — is the deploy key proper: an SSH key pair scoped to a single repository, registered with read-only access so the server's git pull can authenticate back to GitHub.
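
The setup is a handful of commands. A sketch, assuming a server user named deploy and the GitHub CLI (gh) for the repo side — the key file names and the DEPLOY_KEY secret name are placeholders:

```sh
# Key 1: lets the Actions workflow SSH into the server.
ssh-keygen -t ed25519 -f actions_key -N "" -C "github-actions"
ssh-copy-id -f -i actions_key.pub deploy@your-server   # public half -> authorized_keys
gh secret set DEPLOY_KEY < actions_key                 # private half -> repo secret

# Key 2 (private repos only): lets the server's `git pull` reach GitHub.
# Generate it on the server, then register the public half as a
# read-only deploy key on the repo.
ssh deploy@your-server 'ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""'
ssh deploy@your-server 'cat ~/.ssh/id_ed25519.pub' > server_key.pub
gh repo deploy-key add server_key.pub --title "prod-server"
```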

The scope limitation is the clever part. A deploy key only grants access to one repo. If that key gets compromised, the blast radius is contained — unlike a personal access token that might have access to every repo the user owns.

The Tradeoffs

Deploy keys don't expire. They don't get automatically revoked when someone leaves. They're typically not protected by a passphrase. If someone gets root on your server, they've got the key.

For a solo developer or small team, the convenience outweighs the risk. At some point — when you're dealing with sensitive data, multiple environments, or a growing team — you'd want to switch to something like GitHub App installation tokens that expire after an hour. But that's a "future you" problem.

When It Stops Working

The push-to-deploy model with deploy keys will carry a solo developer or small team surprisingly far. It's been run on projects with hundreds of users without breaking. But eventually, you'll need staging environments, rollback capabilities, approval workflows, and more granular permission scoping.

The key question is: what are those tools actually bringing beyond just copying files and restarting a process? Understanding the minimal viable deployment is a superpower right now, especially for solo devs and small teams burning cash on infrastructure they don't need.


#2477: Stop Polling: Push-to-Deploy for Solo Devs

Corn
I remember the exact moment it clicked for me. I was sitting there, staring at a cron job I'd written to poll GitHub every five minutes, and I thought — wait. Why am I asking the server to knock on GitHub's door when GitHub could just ring the doorbell? That inversion, from pull to push, it's one of those things that seems obvious after you see it but genuinely isn't before. I had spent an entire afternoon tuning that cron interval, trying to balance freshness against rate limiting, and the whole time the answer was sitting right there.
Herman
It's the deployment version of realizing you don't need to keep checking if the pizza's arrived — you just wait for the doorbell. The checking is wasted energy. And yet, I'd bet most developers have written some version of that polling cron job at least once before the lightbulb goes off.
Corn
I mean, sloths invented pizza delivery, so I've always understood this intuitively.
Herman
You did not invent pizza delivery.
Corn
The historical record is disputed. But the point stands — and by the way, today's episode is powered by DeepSeek V four Pro, which I'm told is handling our script. So if I sound unusually articulate today, that's why.
Herman
I was going to say you seemed sharper than usual. Now I'm slightly disappointed it's the AI.
Corn
Don't get used to it. So Daniel sent us this one — and it's a topic that I think hits home for a lot of developers who've found themselves drowning in tooling they don't actually need. He's asking about deployment pipelines, but starting from the absolute basics. You're building a CRM, you want to push code to a production server — maybe in the cloud, maybe self-hosted. And the question is, what's the simplest thing that actually works? Because what a lot of us reach for first is Jenkins or GitLab CI or CircleCI, and those are great tools, but they're also massive overkill for a solo developer or a team of three. Daniel's point is that the whole world of CI CD has obfuscated how simple deployment can be at its core. So he wants us to walk through the push deployment mechanism, how it bypasses the need for pulling entirely, the security model with deploy keys, and then trace the path from that minimal setup all the way up to the point where you'd actually need something more elaborate — like at a Netflix or an enterprise. And the key question underneath all of it is, what are those tools actually bringing beyond just copying files and restarting a process?
Herman
This is such a good framing because it's exactly the trap I see people fall into constantly. They Google "how to deploy a web app" and suddenly they're reading about Kubernetes operators and ArgoCD and they think, okay, I guess I need all of this. And they don't. They really don't.
Corn
With cloud costs rising and tool complexity just exploding, understanding the minimal viable deployment is a superpower right now. Especially for solo devs and small teams who are burning cash on infrastructure they don't need. I talked to someone last week who was spending eighty dollars a month on a managed CI service for a side project with twelve users. He could have bought a nice dinner every month instead.
Herman
Eighty dollars a month for twelve users. That's the kind of thing that keeps me up at night. And the thing is, once you strip it all back, deployment is just two operations — copy new bits somewhere, then restart the process. That's it. Everything else we've layered on top, all the staging environments and the test gates and the approval workflows, that's quality assurance. It's valuable, but it's not deployment. It's the safety net around deployment.
Corn
Right — and I think that distinction gets lost. People conflate the pipeline with the deploy step itself and end up building these elaborate Rube Goldberg machines when what they actually need is, like, four lines of YAML and an SSH key. I've seen a startup with five engineers running a full GitLab instance on a dedicated EC2 box, with a separate build server, and they were deploying a static website. A static website. The entire thing could have been an S3 bucket and a CloudFront distribution.
Herman
That's both hilarious and completely believable. I've seen worse. So let's ground this in something concrete. You've got a CRM app, it's in a GitHub repo, it runs in a Docker container, and you want to push changes from your laptop to a production server. The simplest thing that actually works — and I mean actually works, not "works in theory" — is a GitHub Actions workflow that fires on push to main, SSHs into your server, runs git pull and docker compose up with the build flag, and you're done.
Corn
No agents, no polling, no Kubernetes, no dedicated CI server. Just your repo, a YAML file, and a server that knows how to run Docker. And here's the thing — I've run that exact setup on projects with hundreds of users. It didn't break. It didn't crumble under the weight of its own simplicity. It just worked.
Herman
I think it's worth naming the over-engineering trap explicitly because it's so common. Jenkins still holds something like forty-four percent of the CI CD market, but a Continuous Delivery Foundation survey found that sixty-eight percent of small teams using it describe it as overly complex for their needs. These are teams of fewer than ten people running a tool designed for enterprises with dedicated infrastructure teams. They're spending more time maintaining the pipeline than writing features.
Corn
You've got nearly seven out of ten small teams admitting they're using something too heavy for what they actually do, and yet the default advice when someone asks "how should I deploy" still points them toward that ecosystem. It's like telling someone who needs a bicycle to go buy a freight truck. Technically it'll get you there, but you're going to have a bad time.
Herman
Which brings us back to Daniel's core question — what's the simplest thing that works, and when does it stop working? Because the push-to-deploy model with deploy keys will carry a solo developer or a small team surprisingly far. Let's walk through exactly how it operates.
Herman
The mechanism itself is almost embarrassingly simple. You create a GitHub Actions workflow file in your repo — dot github slash workflows slash deploy dot yaml — and it listens for a push event on your main branch. The moment you push, GitHub spins up a runner, and that runner executes exactly one meaningful step: it SSHs into your server and runs your deploy commands.
Corn
This is where the penny drops for people, right? Because the instinct is to think the server needs to be asking "is there new code yet? how about now?" on a loop. But the workflow inverts that entirely. The push from your laptop is the event, and GitHub Actions is just the messenger that knocks on your server's door and says, "new code, go fetch it." It's event-driven architecture applied to deployment.
Herman
What I love about this is that it works without a CI server because GitHub Actions itself is the CI server — it's just a hosted one that you don't have to manage. No Jenkins instance to patch, no agents to keep online, no polling whatsoever. The server doesn't need to know GitHub exists until the moment a deploy happens. It just sits there serving traffic, blissfully ignorant of the pipeline.
Corn
How does this actually play out in practice? Let's say I'm the solo dev on this CRM. I've been hacking on a new feature branch for two days. I merge to main. What happens next, step by step?
Herman
The merge triggers the push event. GitHub sees it, looks at your workflow file, and says "ah, this job runs on pushes to main." It spins up a fresh Ubuntu runner — you don't pay for it, you don't configure it, it just appears. That runner loads your repository's secrets, grabs the deploy key, opens an SSH connection to your production server, and executes whatever commands you've specified. Typically that's navigating to the app directory, running git pull to fetch the latest code, and then running docker compose up with the build flag to rebuild any changed containers and restart them. The whole thing takes maybe thirty seconds. And then the runner disappears. You're not paying for an idle server.
Corn
The authentication layer that makes all of this secure is deploy keys. Walk me through how those actually work, because I think people hear "SSH key" and assume there's more complexity than there really is.
Herman
A deploy key is just an SSH key pair that you generate yourself and register with GitHub, scoped to a single repository. The public key gets added to your server's authorized underscore keys file — same as you'd do for your own laptop. The private key gets stored as a GitHub secret in your repository settings. When the Actions workflow runs, it pulls that secret, uses it to authenticate the SSH connection, and runs your commands. No passwords, no long-lived OAuth tokens with broad permissions, no personal access tokens tied to someone's account that survive after they leave the project. It's beautifully minimal.
Corn
The scope limitation is the clever part. A deploy key only grants access to one repo. So even if that key gets compromised, the blast radius is contained. Compare that to a personal access token, which might have access to every repo the user owns. One leaked PAT and suddenly an attacker has the keys to the entire kingdom.
Herman
That said, there are real tradeoffs. Deploy keys don't expire. They don't get automatically revoked when the person who created them leaves the organization. And they're typically not protected by a passphrase — so if someone gets root on your server, they've got the key. GitHub's own documentation is pretty blunt about this: the key is easily accessible if the server is compromised. They're not sugar-coating it.
Corn
The simplicity is a double-edged sword. For a solo developer or a small team, the convenience probably outweighs the risk. But at some point, you'd want to switch to something like GitHub App installation tokens, which expire after an hour and have much tighter permission scoping. The question is when that tradeoff flips.
Herman
But that's a "future you" problem. For someone just trying to get their CRM deployed without over-engineering everything, deploy keys are the sweet spot. Let me give you the actual workflow file, because seeing it written out makes it click for people. It's literally this: name colon Deploy, on colon push colon branches colon open bracket main close bracket, jobs colon deploy colon runs dash on colon ubuntu dash latest, steps colon dash name colon SSH and deploy, run colon ssh dash i key user at server single quote cd slash app and and git pull and and docker compose up dash dash build dash d single quote. That's the whole thing.
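[Editor's note: transcribed into an actual file, Herman's dictation reads as follows. "key", "user@server", and /app are stand-ins; in practice the key file would come from a repository secret, as discussed above:]

```yaml
# .github/workflows/deploy.yaml -- the workflow as dictated
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: SSH and deploy
        run: ssh -i key user@server 'cd /app && git pull && docker compose up --build -d'
```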
Corn
Four lines of actual logic, if you strip the YAML scaffolding. And that replaces what people think they need — a Jenkins server, an agent pool, a webhook receiver, maybe a dedicated deployment user with sudo access. All of that collapses into GitHub Actions plus one SSH key. It's almost anticlimactic when you see it written out.
Herman
This is event-driven. The server never polls. The developer pushes, GitHub Actions fires, the deploy happens. It's the same architectural pattern that makes webhooks work, just wrapped in a workflow runner. For a lot of people, realizing this is the moment they stop thinking of deployment as a daemon process and start thinking of it as a reaction to an event.
Corn
Which is exactly the mental model shift Daniel described — the penny dropping. You don't need a cron job checking every five minutes because the push is the trigger. The server can just sit there, serving traffic, completely unaware that a deployment pipeline even exists until it gets that SSH connection and starts pulling new code. It's a fundamentally different way of thinking about the relationship between your repo and your server.
Herman
Here's a fun fact that ties into this — the whole webhook pattern that makes this possible was originally called "reverse HTTP" when it was first proposed. The idea of a server initiating a connection to a client felt backwards to people. Now it's the backbone of basically every modern integration.
Corn
That's a great parallel. It took years for webhooks to feel natural, and I think push-to-deploy is going through the same adoption curve. People still default to polling because it matches their mental model of how computers should work — ask repeatedly until you get an answer.
Herman
That's the simple model — push, build, deploy. But here's where it gets interesting. What happens when that simplicity becomes a liability? Because it will, eventually. The question is when.
Corn
Daniel was asking exactly that — where's the pivot point? When do you stop being the solo developer who can push straight to production and start needing gates? Is it a headcount thing? A revenue thing? A "someone woke up at 3 AM to fix a broken deploy" thing?
Herman
The pivot point isn't a specific number of developers, though five to ten is the range where it usually starts hurting. It's really about what happens when a bad deploy lands. If you're the only developer and you break something, you fix it. You know the codebase, you know what you changed, the fix is usually minutes away. If you're on a team of eight and someone pushes a broken migration at four thirty on a Friday, and suddenly paying customers are looking at error pages — that's a different universe. You might not even know which commit broke it, because three people pushed that afternoon.
Corn
The simple push-to-deploy model has no gates. There's nothing between "I typed git push" and "the production server is now running this code." No tests, no staging, no second pair of eyes. It's all trust. And trust scales poorly.
Herman
I talked to a team recently — three developers, a SaaS product, about eighteen months into using exactly the model we just described. Worked beautifully until someone pushed a config change that broke the database connection string. Four hours of downtime because the rollback was manual and they had to figure out which commit was the last good one. They added a staging environment and a manual approval gate in GitHub Actions the next day. Took them maybe a day total.
Corn
Four hours of downtime for a three-person team. That's a brutal but effective teacher. And what strikes me about that story is how small the fix was. They didn't rip out their entire pipeline and install Jenkins. They added exactly the thing that would have caught that specific failure — a place to test before production.
Herman
Which tells you something important — the fix wasn't "install Jenkins." It was adding exactly two things: a place to test before production, and a human saying "yes, this looks right." That's the surgical approach to pipeline evolution.
Corn
That's the pattern for what you add first when you outgrow the simple model. Step one: a build step that runs tests and lints before the deploy step even fires. Catch the obvious stuff — the typo in the config, the failing unit test — before it ever reaches a server. This alone would have caught the database connection string issue in under a minute.
Herman
Step two: a staging environment that deploys automatically from a develop branch. Ideally it mirrors production as closely as possible — same database version, same environment variables, same infrastructure — and you treat it as the gate. If it works on staging, you promote to production. The promotion itself can be as simple as merging develop into main.
Corn
Step three: a manual approval step using GitHub Environments. Before the production deploy runs, someone has to click approve. That's your human gate. It's not fancy, but it stops the four-thirty-on-Friday disaster. It forces a second pair of eyes on the deploy before it goes live.
Herman
These three things — automated tests, a staging environment, and an approval gate — cover probably ninety percent of the failure modes that small teams actually encounter. And none of them require leaving GitHub Actions.
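[Editor's note: all three additions fit in a single GitHub Actions workflow. A sketch — compose.test.yml, run-tests.sh, and the "production" environment name are assumed, and the manual-approval requirement is something you configure on that environment in the repository settings:]

```yaml
name: Test and Deploy

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Step one: run the suite in the same containers production uses
      - run: docker compose -f compose.test.yml run --rm app ./run-tests.sh

  deploy:
    needs: test                  # nothing deploys unless tests pass
    runs-on: ubuntu-latest
    environment: production      # step three: approval gate lives on this environment
    steps:
      - name: SSH and deploy
        run: |
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          ssh -i key -o StrictHostKeyChecking=accept-new \
            deploy@${{ secrets.SERVER_HOST }} \
            'cd /app && git pull && docker compose up --build -d'

# Step two, staging, would be a near-identical workflow triggered on
# pushes to develop, pointing at the staging server instead.
```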
Corn
What do formal CI CD tools actually bring beyond those three things? Because you can do all of that in GitHub Actions. Tests, staging deploy, approval gate — none of it requires Jenkins or CircleCI. So why do those tools exist at all?
Herman
They add things that start mattering at scale. Artifact management — the idea of "build once, deploy many" so you're not rebuilding the same Docker image three times for dev, staging, and prod. That matters when your build takes fifteen minutes and you're deploying multiple times a day. Parallel test execution so your test suite doesn't take forty minutes. Rollback capabilities that are one-click rather than "ssh in and git checkout the old commit hash." Audit trails that tell you who approved what and when, which matters when you're in a regulated industry and an auditor is asking questions.
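[Editor's note: "build once, deploy many" in its simplest form means building an image tagged with the commit SHA and pushing it to a registry, so every environment pulls the same artifact instead of rebuilding. A sketch of the build job using GitHub's container registry — ghcr.io/OWNER/app is a placeholder:]

```yaml
# Fragment: a build job that produces the artifact exactly once.
build:
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write              # lets GITHUB_TOKEN push to ghcr.io
  steps:
    - uses: actions/checkout@v4
    - name: Build and push image keyed by commit SHA
      run: |
        echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
        docker build -t ghcr.io/OWNER/app:${{ github.sha }} .
        docker push ghcr.io/OWNER/app:${{ github.sha }}
    # Deploy jobs for dev, staging, and prod then pull this exact tag
    # rather than rebuilding:  docker pull ghcr.io/OWNER/app:<sha>
```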
Corn
Then there's the enterprise extreme. Netflix built Spinnaker because they needed canary deployments — roll out to five percent of users, watch error metrics, automatically roll back if something spikes. That's not a nice-to-have at their scale; that's the difference between a bad deploy being an incident and a bad deploy being a headline. Spotify built Backstage because they had hundreds of microservices and needed a single pane of glass to manage deployments across all of them. When you've got two hundred services, each owned by a different team, you can't just have two hundred GitHub Actions workflows and hope for the best. You need a platform.
Herman
Here's the thing people miss — Spinnaker and Backstage require dedicated teams to operate. They are platforms, not tools. If you're a ten-person team, you don't need a platform. You need a workflow file and maybe a staging environment. Adopting a platform before you have the scale to justify it is like buying a factory to bake a dozen cookies.
Corn
Which circles back to the comparison Daniel was nudging us toward — GitHub Actions versus Jenkins for a small team. Actions wins on simplicity and zero maintenance. You're not patching a Jenkins server or managing plugins. Jenkins wins on its plugin ecosystem and customizability, but for most small teams, that's a liability masquerading as a feature. The plugin that was essential three years ago is now unmaintained, and suddenly your pipeline is held together with dental floss and hope.
Herman
The market data backs this up. That Continuous Delivery Foundation survey — forty-two percent of organizations with fewer than ten developers still use Jenkins, despite sixty-eight percent calling it overly complex. That's a lot of teams running a tool they don't like because they think they're supposed to. It's the infrastructure equivalent of eating your vegetables because someone told you they're good for you, even though you hate them.
Corn
There's probably a whole episode in that statistic alone. The gap between what people use and what people actually need is enormous, and it's driven almost entirely by perceived expectations. "Enterprise companies use Jenkins, so if I want to be a real company, I should use Jenkins." It's cargo cult DevOps.
Herman
If you're listening to this and thinking about your own setup, the first thing to do is brutally simple. Start with the push-to-deploy model we just walked through. GitHub Actions, one SSH deploy key, four lines of actual logic. That will serve you until you have multiple developers or paying customers who notice downtime.
Corn
The key word there is "until." You don't need to pre-build the staging environment and the approval gates on day one. You need them when the cost of not having them becomes real. A four-hour outage is a pretty good teacher. Don't solve problems you don't have yet.
Herman
The second thing is, when that moment comes, add only what you actually need. Tests first — catch the broken config before it leaves your laptop. Then staging — a place that isn't production where you can see if things actually work. Then approvals — a human gate. Don't install Jenkins because you heard enterprises use it. Don't set up Kubernetes because it's on your resume wishlist.
Corn
The data we cited earlier makes this painfully clear. Sixty-eight percent of small teams call Jenkins overly complex, but forty-two percent are still running it. That's inertia, not engineering judgment. That's "we set it up three years ago and nobody wants to touch it." And I get it — migrating pipelines is tedious work. But the cost of staying is higher than the cost of moving.
Herman
Here's a concrete test for anyone listening. Audit your current deployment pipeline. If it takes more than ten minutes to set up from scratch — I mean a clean repo to a live deploy on a fresh server — you're probably over-engineered for your current scale. The simple model we described can be set up in less time than it takes to drink a coffee. If your pipeline requires a wiki page to explain, that's a smell.
Corn
The thing that strikes me is how much of the CI CD industry is built around problems most developers don't actually have yet. Artifact management, parallel test execution, canary rollouts — these matter at scale. But most projects aren't at scale, and they're carrying complexity they don't need. It's like buying snow tires when you live in Florida because you might drive to Canada someday.
Herman
Which is why the "add as you grow" approach beats the "install everything upfront" approach every time. You feel the pain, you add the fix. You don't preemptively solve problems you might never encounter. The lean startup methodology applies to infrastructure just as much as it applies to product.
Corn
The beautiful thing about starting simple is that when you do hit the limits, you'll actually understand what you need. You'll know whether your bottleneck is build speed or test coverage or deployment frequency, because you lived with the simple version long enough to feel the specific pain. You won't be guessing.
Herman
That same logic brings me to an open question I keep coming back to. As AI-generated code becomes more common — and it's already happening — do deployment pipelines need to incorporate automated code review and security scanning as a standard step, even for solo developers?
Corn
That's an uncomfortable thought. The solo developer using the simple push-to-deploy model might be pushing code they didn't fully write or fully understand. The pipeline becomes the last line of defense, not just a delivery mechanism. And that's a fundamentally different role than what we've been describing.
Herman
If Claude or Copilot generated half your codebase, the human gate we talked about — the manual approval — becomes less meaningful. You might not catch the subtle bug or the security hole because you didn't write the function and you trust the AI did it correctly. And AI does make mistakes. It hallucinates APIs, it generates plausible-looking but wrong logic, it introduces subtle security issues.
Corn
The pipeline has to get smarter. Automated code review, static analysis, dependency scanning — these stop being enterprise luxuries and start being basic hygiene, even for a one-person project. The solo developer with an AI copilot needs more pipeline guardrails, not fewer.
Herman
The trend toward serverless and edge functions is already pulling deployment in the opposite direction. You push code, the platform handles the rest. No containers to build, no servers to SSH into, no keys to manage. Vercel, Cloudflare Workers, AWS Lambda — the deployment pipeline is essentially invisible. It's so simple that the whole discussion we just had about deploy keys and SSH feels almost quaint.
Corn
Which makes me wonder whether we're watching the era of complex CI CD pipelines start to end, at least for the vast majority of developers. The tools we spent this episode discussing — Jenkins, Spinnaker, even GitHub Actions workflows — might look archaic in five years. Like describing a manual transmission to someone who's only ever driven an electric car.
Herman
Or they'll specialize. The simple stuff goes serverless, and the complex pipelines survive only where the complexity is necessary — Netflix scale, regulated industries, multi-cloud orchestration. The middle ground, the small team running Jenkins, that's what disappears.
Corn
Either way, the principle we landed on holds. Start simple, add only what you actually need, and don't let the industry convince you that your three-person team needs the same deployment infrastructure as a Fortune 500 company. The best tool is the one that solves your actual problem, not the one that looks impressive in a conference talk.
Herman
Now: Hilbert's daily fun fact. The average cumulus cloud weighs about one point one million pounds.
Corn
That's a question worth sitting with. How does something that heavy just... I'm going to be thinking about that for the rest of the day. Thanks to our producer Hilbert Flumingtop, as always. This has been My Weird Prompts. You can find every episode at myweirdprompts dot com, and if you've got a minute, drop us a review wherever you listen. We're back soon.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.