GitHub Agentic Workflows

Cost Management

The cost of running an agentic workflow is the sum of two components: GitHub Actions minutes consumed by the workflow jobs, and inference costs charged by the AI provider for each agent run.

Every workflow job consumes Actions compute time billed at standard GitHub Actions pricing. A typical agentic workflow run includes at least two jobs:

| Job | Purpose | Typical duration |
| --- | --- | --- |
| Pre-activation / detection | Validates the trigger, runs membership checks, evaluates skip-if-match conditions | 10–30 seconds |
| Agent | Runs the AI engine and executes tools | 1–15 minutes |

Each job also incurs approximately 1.5 minutes of runner setup overhead on top of its execution time.
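As a rough sketch of how setup overhead and per-job minute rounding combine, assuming each job's duration (execution plus overhead) is billed rounded up to the next whole minute (all durations below are illustrative, not measured):

```shell
# Estimate billed Actions minutes for one two-job run.
setup_overhead=90      # ~1.5 min runner setup per job, in seconds
pre_activation=20      # detection job execution, seconds
agent=300              # agent job execution, seconds (5 minutes)

# Ceiling division to whole minutes: (seconds + 59) / 60
billed_minutes=$(( (pre_activation + setup_overhead + 59) / 60 \
                 + (agent + setup_overhead + 59) / 60 ))
echo "$billed_minutes"   # → 9
```

Note that overhead dominates for short jobs: the 20-second detection job still bills 2 minutes here.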

The agent job invokes an AI engine (Copilot, Claude, Codex, or a custom engine) to process the prompt and call tools. Inference is billed by the provider:

- GitHub Copilot CLI (copilot engine): Usage is billed as premium requests against the GitHub account that owns the COPILOT_GITHUB_TOKEN. A typical workflow run uses 1–2 premium requests. See GitHub Copilot billing.
- Claude (claude engine): Billed per token to the Anthropic account associated with ANTHROPIC_API_KEY.
- Codex (codex engine): Billed per token to the OpenAI account associated with OPENAI_API_KEY.

The gh aw logs command downloads workflow run data and surfaces per-run metrics including elapsed duration, token usage, and estimated inference cost. Use it to see exactly what your workflows are consuming before deciding what to optimize.

```shell
# Overview table for all agentic workflows (last 10 runs)
gh aw logs

# Narrow to a single workflow
gh aw logs issue-triage-agent

# Last 30 days for Copilot workflows
gh aw logs --engine copilot --start-date -30d
```

The overview table includes a Duration column showing elapsed wall-clock time per run. Because GitHub Actions bills compute time by the minute (rounded up per job), duration is the primary indicator of Actions spend.

Use --json to get structured output suitable for scripting or trend analysis:

```shell
# Write JSON to a file for further processing
gh aw logs --start-date -1w --json > /tmp/logs.json

# List per-run duration, tokens, and cost across all workflows
gh aw logs --start-date -30d --json | \
  jq '.runs[] | {workflow: .workflow_name, duration: .duration, cost: .estimated_cost}'

# Total cost grouped by workflow over the past 30 days
gh aw logs --start-date -30d --json | \
  jq '[.runs[]] | group_by(.workflow_name) |
      map({workflow: .[0].workflow_name, runs: length, total_cost: (map(.estimated_cost) | add // 0)})'
```

The JSON output includes duration, token_usage, estimated_cost, workflow_name, and agent (the engine ID) for each run under .runs[].
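For instance, the same structure can be post-processed to rank runs by cost. The sample data below is illustrative; in practice you would pipe gh aw logs --json directly instead of a saved file:

```shell
# Rank runs by estimated inference cost, skipping runs where the
# estimate is unavailable. Sample data stands in for real CLI output.
cat > /tmp/sample-logs.json <<'EOF'
{"runs":[
  {"workflow_name":"issue-triage","estimated_cost":0.12},
  {"workflow_name":"weekly-digest","estimated_cost":0.03},
  {"workflow_name":"pr-review","estimated_cost":null}
]}
EOF

jq '[.runs[] | select(.estimated_cost != null)]
    | sort_by(-.estimated_cost)
    | map({workflow: .workflow_name, cost: .estimated_cost})' \
  /tmp/sample-logs.json
```

Filtering out null estimates first matters: jq's add errors when summing a mix of numbers and nulls.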

The agentic-workflows MCP tool exposes the same logs operation so that a workflow agent can collect cost data programmatically. Add tools: agentic-workflows: to any workflow that needs to read run metrics:

```yaml
description: Weekly Actions minutes cost report
on: weekly
permissions:
  actions: read
engine: copilot
tools:
  agentic-workflows:
```

The agent then calls the logs tool with start_date: "-7d" to retrieve duration and cost data for all recent runs, enabling automated reporting or optimization.
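A sketch of what that tool call might look like; only the logs operation name and the start_date parameter come from the documentation above, and the exact MCP request schema is an assumption:

```json
{
  "tool": "logs",
  "arguments": { "start_date": "-7d" }
}
```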

The primary cost lever for most workflows is how often they run. Some events are inherently high-frequency:

| Trigger type | Risk | Notes |
| --- | --- | --- |
| push | High | Every commit to any matching branch fires the workflow |
| pull_request | Medium–High | Fires on open, sync, re-open, label, and other subtypes |
| issues | Medium–High | Fires on open, close, label, edit, and other subtypes |
| check_run, check_suite | High | Can fire many times per push in busy repositories |
| issue_comment, pull_request_review_comment | Medium | Scales with comment activity |
| schedule | Low (predictable) | Fires at a fixed cadence; easy to budget |
| workflow_dispatch | Low | Human-initiated; naturally rate-limited |

Use Deterministic Checks to Skip the Agent


The most effective cost reduction is skipping the agent job entirely when it is not needed. The skip-if-match and skip-if-no-match conditions run during the low-cost pre-activation job and cancel the workflow before the agent starts:

```yaml
on:
  issues:
    types: [opened]
  skip-if-match: 'label:duplicate OR label:wont-fix'
```

```yaml
on:
  issues:
    types: [labeled]
  skip-if-no-match: 'label:needs-triage'
```

Use these to filter out noise before incurring inference costs. See Triggers for the full syntax.

The engine.model field selects the AI model. Smaller or faster models cost significantly less per token while still handling many routine tasks:

```yaml
engine:
  id: copilot
  model: gpt-4.1-mini
```

```yaml
engine:
  id: claude
  model: claude-haiku-4-5
```

Reserve frontier models (GPT-5, Claude Sonnet, etc.) for complex tasks. Use lighter models for triage, labeling, summarization, and other structured outputs.

Inference cost scales with the size of the prompt sent to the model. Reduce context by:

- Writing focused prompts that include only necessary information.
- Avoiding whole-file reads when only a few lines are relevant.
- Capping the number of search results or list items fetched by tools.
- Using imports to compose a smaller subset of prompt sections at runtime.
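For the last point, a minimal frontmatter sketch; the file path is hypothetical, and the exact syntax is documented on the Imports page:

```yaml
imports:
  - shared/triage-guidelines.md   # pull in only the sections this workflow needs
```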

Use rate-limit to cap how many times a user can trigger the workflow in a given window, and rely on concurrency controls to serialize runs rather than letting them pile up:

```yaml
rate-limit:
  max: 3
  window: 60 # 3 runs per hour per user
```

See Rate Limiting Controls and Concurrency for details.

Scheduled workflows fire at a fixed cadence, making cost easy to estimate and cap:

```yaml
schedule: daily on weekdays
```

One scheduled run per weekday = five agent invocations per week. See Schedule Syntax for the full fuzzy schedule syntax.
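That cadence translates to a predictable monthly budget; a quick sanity check (months average roughly four and a third weeks):

```shell
# Approximate monthly invocation count for a "daily on weekdays"
# schedule: 5 runs/week, 52 weeks/year, 12 months/year.
runs_per_week=5
monthly=$(( runs_per_week * 52 / 12 ))
echo "$monthly"   # → 21
```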

Agentic workflows can inspect and optimize other agentic workflows automatically. A scheduled meta-agent reads aggregate run data through the agentic-workflows MCP tool, identifies expensive or inefficient workflows, and applies changes — closing the optimization loop without manual intervention.

The agentic-workflows tool exposes the same operations as the CLI (logs, audit, status) to any workflow agent. A meta-agent can:

  1. Fetch aggregate cost and token data with the logs tool (equivalent to gh aw logs).
  2. Deep-dive into individual runs with the audit tool (equivalent to gh aw audit <run-id>).
  3. Propose or directly apply frontmatter changes (cheaper model, tighter skip-if-match, lower rate-limit) via a pull request.

| Signal | Automatic action |
| --- | --- |
| High token count per run | Switch to a smaller model (gpt-4.1-mini, claude-haiku-4-5) |
| Frequent runs with no safe-output produced | Add or tighten skip-if-match |
| Long queue times due to concurrency | Lower rate-limit.max or add a concurrency group |
| Workflow running too often | Change trigger to schedule or add workflow_dispatch |
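A frontmatter sketch for such a meta-agent; the safe-outputs block is an assumption about how the pull request with frontmatter changes would be produced, so adjust it to your setup:

```yaml
description: Cost optimization meta-agent
on: weekly
permissions:
  actions: read
engine: copilot
tools:
  agentic-workflows:
safe-outputs:
  create-pull-request:
```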

These are rough estimates to help with budgeting. Actual costs vary by prompt size, tool usage, model, and provider pricing.

| Scenario | Frequency | Actions minutes/month | Inference/month |
| --- | --- | --- | --- |
| Weekly digest (schedule, 1 repo) | 4×/month | ~1 min | ~4–8 premium requests (Copilot) |
| Issue triage (issues opened, 20/month) | 20×/month | ~10 min | ~20–40 premium requests |
| PR review on every push (busy repo, 100 pushes/month) | 100×/month | ~100 min | ~100–200 premium requests |
| On-demand via slash command | User-controlled | Varies | Varies |
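The busy-repo row can be reproduced with the same back-of-envelope arithmetic, using the table's own per-run assumptions (roughly 1 billed Actions minute and 1–2 premium requests per run):

```shell
# PR review fired on every push: 100 runs/month.
runs=100
minutes=$(( runs * 1 ))        # ~1 billed Actions minute per run
min_requests=$(( runs * 1 ))   # low end: 1 premium request per run
max_requests=$(( runs * 2 ))   # high end: 2 premium requests per run
echo "~${minutes} min, ~${min_requests}-${max_requests} premium requests"
```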