What it does
Monitors workflow duration, concurrency, and compute type to identify expensive GitHub Actions jobs, then suggests optimisations such as dependency caching, matrix trimming, or moving heavy steps to cheaper or self-hosted runners.
Why I recommend it
CI/CD costs creep silently. An automated review keeps engineering aware of waste and enforces budgets without manual audits.
Expected benefits
- Lower Actions bill
- Faster feedback cycles
- Clear visibility for eng managers
- Recommendations prioritised by savings
How it works
A nightly job pulls usage metrics via the GitHub API -> calculates cost per workflow and per repo -> flags outliers against configured thresholds -> Claude drafts optimisation suggestions referencing the relevant docs -> sends a Slack/email report with quick wins and links to the config files.
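The "calculates cost per workflow" step above can be sketched as a pure roll-up over run records. The input shape mirrors the billable breakdown the GitHub API returns per run, and the per-minute rates are the public prices for standard GitHub-hosted runners at the time of writing; both are assumptions to verify against current docs and pricing.

```python
# Sketch of a per-workflow cost roll-up. Rates assume standard 2-core
# GitHub-hosted runners (public pricing; verify before relying on this).
RATES = {"UBUNTU": 0.008, "WINDOWS": 0.016, "MACOS": 0.08}  # USD per minute

def cost_per_workflow(runs):
    """Aggregate billable minutes into a cost per workflow file.

    `runs` is a list of dicts shaped like the API's billable breakdown:
    {"workflow": str, "os": "UBUNTU"|"WINDOWS"|"MACOS", "minutes": int}
    """
    totals = {}
    for run in runs:
        cost = run["minutes"] * RATES[run["os"]]
        totals[run["workflow"]] = totals.get(run["workflow"], 0.0) + cost
    # Most expensive first, so the report leads with the biggest wins.
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

runs = [
    {"workflow": "ci.yml", "os": "UBUNTU", "minutes": 1200},
    {"workflow": "release.yml", "os": "MACOS", "minutes": 90},
    {"workflow": "ci.yml", "os": "WINDOWS", "minutes": 300},
]
print(cost_per_workflow(runs))  # ci.yml ≈ $14.40, release.yml ≈ $7.20
```

Note how 90 macOS minutes cost nearly as much as 1,500 Linux/Windows minutes; surfacing the OS multiplier is often the single biggest quick win.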
Quick start
Export last month’s Actions usage report. Identify the top 5 workflows by minutes and manually note potential fixes. Use that as a baseline to show savings potential.
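Ranking the export can be a one-off script rather than a spreadsheet exercise. A minimal sketch, assuming the exported CSV has "Workflow" and "Minutes" columns (match these to the headers in your actual export):

```python
import csv
import io

# Hypothetical sample standing in for last month's exported usage report.
SAMPLE_CSV = """Workflow,Minutes
ci.yml,1450
deploy.yml,320
nightly.yml,980
lint.yml,110
e2e.yml,760
docs.yml,45
"""

def top_workflows(csv_text, n=5):
    """Return the n workflows with the most billable minutes, descending."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: int(r["Minutes"]), reverse=True)
    return [(r["Workflow"], int(r["Minutes"])) for r in rows[:n]]

for name, minutes in top_workflows(SAMPLE_CSV):
    print(f"{name}: {minutes} min")
```

For a real export, swap `SAMPLE_CSV` for `open("usage.csv").read()` and keep the output as your baseline snapshot.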
Level-up version
Auto-open GitHub issues with suggested YAML changes, enforce concurrency caps, compare self-hosted runner ROI, and integrate with finance for budget alerts.
Tools you can use
CI: GitHub Actions
Data: GitHub REST/GraphQL APIs, BigQuery
Automation: GitHub Apps, Zapier, n8n
AI: Claude for recommendations
Also works with
CircleCI, GitLab CI, Azure Pipelines.
Technical implementation solution
- No-code: Scheduled GitHub report -> Google Sheets analysis -> Zapier sends weekly summary with top savings.
- API-based: Lambda pulls workflow runs -> stores metrics -> Claude produces suggestions -> GitHub App comments on workflow files or opens PR.
Where it gets tricky
Accessing org-wide usage data, avoiding false positives (some workflows are long by design), and translating recommendations into actionable YAML changes that developers trust.
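One way to keep the false-positive problem manageable is to pair the threshold with an explicit allowlist of workflows that are long by design, so they are reviewed separately instead of flagged every night. A minimal sketch; the budget value and workflow names are assumptions to tune per org:

```python
# Hypothetical outlier filter: flag workflows over a fixed monthly-minutes
# budget, skipping an allowlist of jobs that are long by design.
BUDGET_MINUTES = 1000            # assumption: tune to your org's baseline
ALLOWLIST = {"nightly-e2e.yml"}  # long by design; reviewed out of band

def flag_outliers(minutes_by_workflow, budget=BUDGET_MINUTES):
    """Return workflow names exceeding the budget, minus allowlisted ones."""
    return sorted(
        name
        for name, mins in minutes_by_workflow.items()
        if mins > budget and name not in ALLOWLIST
    )

usage = {"ci.yml": 1450, "lint.yml": 60,
         "nightly-e2e.yml": 5000, "release.yml": 320}
print(flag_outliers(usage))  # -> ['ci.yml']; nightly-e2e.yml is allowlisted
```

Keeping the allowlist in version control alongside the workflows gives developers a reviewable way to opt a job out, which helps the recommendations stay trusted rather than ignored.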
