Non-obvious ways self-hosting GitHub Actions can increase CI cost.
Runners
Apr 3, 2025
GitHub Actions has taken the devtools world by storm over the past few years - if you're a software engineer at a startup, there's a high chance you're relying on GitHub Actions to run your CI/CD pipelines. Although convenient at first, as teams and codebases grow, a few shortcomings become painfully obvious.
The obvious next step might be to host GitHub Actions jobs in your own cloud. However, as you go down this path, you'll find some non-obvious ways self-hosting might actually wind up increasing your CI spend.
But let's start at the start. Why self-host to begin with?
The Pitfalls of GitHub-Hosted Runners
High Costs, Modest Performance GitHub Actions seems reasonably priced - until your build minutes skyrocket. Suddenly, you're paying premium prices for a service running on general-purpose hardware. A quick price comparison: the standard GitHub-hosted runner (2 CPU, 8 GB memory) costs $0.016 per minute, while an m7i.large on AWS (2 vCPU, 8 GB memory) works out to roughly $0.00168 per minute - GitHub Actions is almost a 10x premium!
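To make that concrete, here's a rough back-of-the-envelope comparison using the per-minute figures above. The AWS rate assumes an m7i.large on-demand price of about $0.1008/hour (region-dependent), and the monthly build volume is purely hypothetical:

```python
# Back-of-the-envelope comparison using the per-minute prices cited above.
# All figures are illustrative; check current GitHub and AWS pricing for your region.

GITHUB_HOSTED_PER_MIN = 0.016          # standard GitHub-hosted runner (per the comparison above)
M7I_LARGE_PER_HOUR = 0.1008            # assumed m7i.large on-demand rate
m7i_per_min = M7I_LARGE_PER_HOUR / 60  # ~= $0.00168/min

build_minutes_per_month = 50_000       # hypothetical team-wide usage

github_cost = build_minutes_per_month * GITHUB_HOSTED_PER_MIN
aws_cost = build_minutes_per_month * m7i_per_min

print(f"GitHub-hosted: ${github_cost:,.2f}/month")
print(f"m7i.large:     ${aws_cost:,.2f}/month")
print(f"Premium:       {github_cost / aws_cost:.1f}x")
```

With these inputs, that's $800/month versus $84/month for the same core count - though, as we'll see below, the raw instance price is not the whole story.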
Limited Cache Size GitHub enforces a 10 GB cache limit per repository. As your project grows, that capacity quickly becomes a bottleneck: older caches get evicted, dependencies get re-downloaded, and build durations creep up.
Limited Customization Need specific tooling or a CLI for your workflow? GitHub-hosted runners don't support pre-baked images, meaning you have to install those tools from scratch on every build - extra minutes spent every time a job runs.
So the natural next step is GitHub self-hosted runners: GitHub offers an open-source agent that connects to GitHub, monitors the job queue, and executes workflow jobs. You can manage agents individually, or have them dynamically provisioned via the Actions Runner Controller (ARC). This way, you can run your builds in your own cloud environment.
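To give a feel for the moving parts, here's a minimal sketch (assuming a GITHUB_TOKEN with repo admin rights in the environment, and placeholder OWNER/REPO values) that requests a runner registration token from the GitHub REST API and prints the config command you'd run on a fresh instance. ARC automates essentially this loop, plus provisioning and teardown:

```python
# Minimal sketch: request a self-hosted runner registration token via the GitHub REST
# API, then print the config command you'd run on the new machine. OWNER/REPO are
# placeholders; GITHUB_TOKEN must have admin access to the repository.
import os

import requests

OWNER, REPO = "your-org", "your-repo"

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runners/registration-token",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()
registration_token = resp.json()["token"]

# Run this on the instance after downloading and unpacking the actions/runner release:
print(
    f"./config.sh --url https://github.com/{OWNER}/{REPO} "
    f"--token {registration_token} --unattended && ./run.sh"
)
```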
"So let's just host builds ourselves"
Self-hosted runners offer control, cost-savings, and customization:
Cheaper Compute by running builds on a cloud provider, where raw compute costs far less.
Control & Flexibility with your choice of compute platform and machine configurations to fit your needs.
Security through the ability to lock down your CI infrastructure as you see fit.
However, with great power comes great responsibility - it's not all sunshine and rainbows. As you go down this road, you'll find some non-obvious ways self-hosting might actually increase your CI spend.
The Gotchas
I'll skip to the point.
Unforeseen Costs that aren’t immediately obvious:
Network Costs: Pulling dependencies and Docker layers, and using GitHub's cache or setup actions that rely on caching, all incur data-transfer charges - which can exceed the cost of compute itself (see the sketch after this list).
Compute Costs: Booting new instances, pulling images and tooling, and instance cooldown periods all add idle time (instances running, but no job executing), which inflates your bill.
Observability Costs: Storage for logs and metrics, compute for aggregating them, and/or vendor fees if you ship them to a hosted provider.
Performance Costs: Standard EBS volumes are network-attached, and at their baseline (unprovisioned) settings they offer limited IOPS, limited throughput, and relatively high latency - which can actually increase your build times compared to GitHub-hosted runners. Provisioning more IOPS or throughput fixes this, but at additional cost.
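To see how these line items stack up, here's a hedged back-of-the-envelope model. Every figure in it (build volume, idle minutes, GB transferred per build, NAT gateway rate) is an assumption to replace with your own measurements, but it illustrates how data transfer and idle time can dwarf the raw compute price:

```python
# Rough monthly cost model for a self-hosted fleet, combining the items above.
# Every number here is an assumption - substitute your own measurements.

builds_per_month = 10_000
avg_build_minutes = 8

# Compute: instance price plus idle overhead (boot, image/tool pull, cooldown).
instance_per_min = 0.1008 / 60       # assumed m7i.large on-demand rate
idle_minutes_per_build = 3           # assumed boot + pull + cooldown overhead

# Network: NAT gateway / egress charges for dependencies, Docker layers, remote cache.
gb_transferred_per_build = 2.0       # assumed
nat_per_gb = 0.045                   # example NAT gateway data-processing rate

compute = builds_per_month * (avg_build_minutes + idle_minutes_per_build) * instance_per_min
network = builds_per_month * gb_transferred_per_build * nat_per_gb
idle_share = idle_minutes_per_build / (avg_build_minutes + idle_minutes_per_build)

print(f"Compute (incl. idle): ${compute:,.0f}/month")
print(f"Network:              ${network:,.0f}/month")
print(f"Idle share of compute: {idle_share:.0%}")
```

With these made-up inputs, the network line alone is several times the compute line, and over a quarter of the compute spend is pure overhead - which is why a per-minute price comparison understates real-world spend.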
Security and Compliance Ensuring your self-hosted runners meet security and compliance standards requires constant vigilance. Regular security audits, vulnerability assessments, patch management, and strict access-control policies become mandatory responsibilities, demanding considerable time and resources.
Complexity of Setup and Planning Managing your own build infrastructure involves significant upfront planning - everything from IP address management and instance rightsizing to baking custom images, since the bare self-hosted runner doesn't ship with the tooling that comes preinstalled on GitHub-hosted runner images.
Fleet Rightsizing To avoid paying for a fleet of always-hot instances, one common solution is autoscaling instances dynamically as jobs are queued. Resource-based autoscaling (CPU/memory saturation) is too noisy and leads to unpredictable job start times. Demand-based autoscaling doesn't understand job queue ordering, so some jobs can be starved (queued early but started late, because other jobs grab newly available resources first). Leaving this untuned either leaves compute idle or costs developer time.
To keep job times and costs low, a custom scheduler is needed.
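As an illustration, here's a minimal sketch of the queue-aware half of that scheduler, assuming a GITHUB_TOKEN in the environment and hypothetical scale_up/scale_down hooks into your cloud provider. A production scheduler would also track per-job labels, queue ordering, and start-time targets rather than just run-level backlog:

```python
# Minimal sketch of queue-aware scaling: compare queued workflow runs against idle
# self-hosted runners and decide how much capacity to add or remove. The GitHub
# endpoints are real; scale_up()/scale_down() are hypothetical hooks into your cloud.
import os

import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def queued_runs() -> int:
    r = requests.get(f"{API}/actions/runs",
                     params={"status": "queued", "per_page": 100},
                     headers=HEADERS, timeout=10)
    r.raise_for_status()
    return r.json()["total_count"]


def idle_runners() -> int:
    r = requests.get(f"{API}/actions/runners",
                     params={"per_page": 100},
                     headers=HEADERS, timeout=10)
    r.raise_for_status()
    return sum(1 for x in r.json()["runners"]
               if x["status"] == "online" and not x["busy"])


def reconcile(scale_up, scale_down, max_step: int = 20) -> None:
    """One reconciliation tick: add capacity for the backlog, trim surplus idle runners."""
    backlog = queued_runs() - idle_runners()
    if backlog > 0:
        scale_up(min(backlog, max_step))
    elif backlog < 0:
        scale_down(min(-backlog, max_step))
```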
Maintenance involves everything from observability, to keeping images patched and up to date, to right-sizing infrastructure as the application's demands change. If new tooling is needed, those requests have to be handled too. Because CI stability is paramount to developer productivity, any fire requires an infrastructure engineer to act immediately - often at the cost of planned work.
These lessons often arrive only after implementation has started, and the sunk cost fallacy encourages further investment - or further distraction, depending on how you see it. Not to worry: we've taken care of this for you.
BuildPulse Runners: Your jobs at 2x speed, half cost.
We've built optimized infrastructure that runs your GitHub Actions, CircleCI, and SemaphoreCI builds at half the cost and 2x the performance - with a 1-line change. We do this by leveraging self-hosted runners to run your builds on our infrastructure.
Integrations: Seamlessly integrates into (without replacing) your existing CI system – GitHub Actions, CircleCI, and SemaphoreCI – with minimal configuration.
Onboarding Time: Less time than it takes your build to finish.
Tuned Infrastructure: Infrastructure that uses resources more efficiently to accelerate your builds and reduce cost.
Customizable Images and Machine Configurations: Bake your own images or choose optimized templates tailored specifically for your builds.
Toolkit: Building blocks that speed up your pipelines.
In-Cluster Caching: Caching at every level – Docker image layers, application dependencies, and artifacts.
Fast Job Starts: Fast, optimized instance booting, reducing wait times dramatically.
Instant Observability: Built-in Prometheus endpoints providing seamless integration with your existing monitoring tools.
Security: We're SOC 2 Type 2-compliant, and your builds are network- and compute-isolated.
Whiteglove Support: Dedicated, proactive assistance ensuring smooth operations.
We're turning DevOps teams into heroes - book a meeting for a free demo!