Master GitHub Actions runs-on: The Complete Practical Guide (2025 Edition)
Quick Definition
GitHub Actions runs-on selects the runner that executes a job—GitHub-hosted (e.g., ubuntu-latest, windows-latest, macos-latest) or self-hosted via custom labels. Jobs start only when a runner matches all specified labels.
workflow job → runs-on labels → runner match (AND) → job runs
Introduction: Why runs-on Matters in Your GitHub Actions Workflows
The runs-on key is the single most critical configuration that determines where your GitHub Actions workflow jobs execute. Think of it as the address system for your CI/CD pipeline—without the correct address, your job never finds its destination runner, and your entire deployment pipeline grinds to a halt.
When you misconfigure runs-on, the consequences cascade through your development workflow. Jobs sit in queue indefinitely because no runner matches your labels. Pipelines fail silently because they executed on Windows when they needed Linux. Security vulnerabilities emerge when production workloads accidentally run on misconfigured self-hosted runners. Teams waste hours debugging issues that trace back to a single incorrect label in their workflow file.
The role of runs-on extends beyond simple runner selection. It determines the operating system environment, available software toolchains, hardware resources like CPU architecture and GPU availability, network access patterns, and even security boundaries between different workload types. For organizations running hybrid infrastructure with both GitHub-hosted and self-hosted runners, mastering runs-on becomes essential for building resilient, performant CI/CD pipelines.
This comprehensive guide covers everything from fundamental runs-on syntax to advanced patterns like dynamic runner selection, fallback strategies, and complex matrix builds. You’ll learn how GitHub Actions matches jobs to runners, how to troubleshoot matching failures, and battle-tested best practices for label strategies that scale. Whether you’re building simple workflows or architecting enterprise CI/CD systems, understanding runs-on deeply will make you more effective.
Basics of GitHub Actions runs-on
Understanding how runs-on works starts with grasping the fundamental matching mechanism. When you define a job in your workflow file, the runs-on key tells GitHub Actions which type of runner should pick up and execute that job. GitHub Actions then searches through available runners—both hosted runners managed by GitHub and any self-hosted runners you’ve registered—looking for one that matches all the labels you specified.
The syntax for runs-on appears simple at first glance, but it contains important nuances. For GitHub-hosted runners, you specify a single string value that corresponds to a predefined runner image. Here’s the most basic example:
# Basic GitHub-hosted runner syntax
name: Build Application
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest  # Single label for GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4
      - name: Build project
        run: npm install && npm run build
The GitHub-hosted runner labels follow a predictable pattern. As of 2025, the primary options include ubuntu-latest, ubuntu-24.04, ubuntu-22.04, windows-latest, windows-2022, macos-latest, macos-14, and macos-13 (check GitHub's documentation for the current list, since older images are retired periodically). The -latest variants automatically point to the most recent stable image, which GitHub updates periodically; note that macos-latest now resolves to an Apple Silicon (arm64) image. Using version-specific labels like ubuntu-22.04 locks your workflow to that exact image, preventing unexpected changes when GitHub updates their -latest pointers.
GitHub Actions runs-on: Self-hosted Example
For self-hosted runners, the syntax shifts to accommodate custom labeling schemes. Self-hosted runners use an array of labels rather than a single string, and the job only runs when a runner matches all specified labels:
# Self-hosted runner with multiple labels
jobs:
  deploy:
    runs-on: [self-hosted, linux, x64]  # All three labels must match
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to production
        run: ./deploy.sh
The matching logic for self-hosted runners follows an AND operation—the runner must have every label you specify. When you register a self-hosted runner, you assign it labels during setup. GitHub Actions then filters the runner pool, considering only runners that possess all the required labels. This allows you to create sophisticated targeting schemes where jobs find runners with specific capabilities, geographic locations, or security contexts.
Case sensitivity matters critically in label matching. The label Linux differs from linux, and a mismatch will prevent your job from finding any runner. GitHub’s hosted runner labels use lowercase consistently (ubuntu-latest, not Ubuntu-Latest), and maintaining this convention for self-hosted runners prevents frustrating debugging sessions.
The order of labels in the array doesn’t affect matching—[self-hosted, linux, x64] functions identically to [x64, self-hosted, linux]. However, maintaining consistent ordering across your workflows improves readability and reduces cognitive load when reviewing configurations.
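The AND semantics can be pictured as a simple set-subset check. The following Python sketch is an illustration only (the label sets are hypothetical, and GitHub's real scheduler is more involved), but it captures the three rules above: all labels must match, order is irrelevant, and matching is case-sensitive.

```python
# Sketch of runs-on label matching: a runner is eligible only if it
# carries every label the job requests (an AND over all labels).
def runner_matches(job_labels: set[str], runner_labels: set[str]) -> bool:
    return job_labels.issubset(runner_labels)

runner = {"self-hosted", "linux", "x64", "gpu"}

print(runner_matches({"self-hosted", "linux"}, runner))         # True: subset of runner labels
print(runner_matches({"x64", "self-hosted", "linux"}, runner))  # True: order is irrelevant
print(runner_matches({"self-hosted", "Linux"}, runner))         # False: labels are case-sensitive
```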
Hosted vs Self-hosted Runners in runs-on Configuration
The distinction between GitHub-hosted and self-hosted runners fundamentally shapes how you approach runs-on configuration. Each type serves different use cases and imposes different operational constraints that influence your labeling strategy.
GitHub-hosted runners provide ephemeral, clean environments that GitHub provisions on-demand. When you specify ubuntu-latest in your runs-on declaration, GitHub spins up a fresh Ubuntu VM, executes your job in that isolated environment, and then destroys the VM completely. This ensures perfect isolation between jobs and eliminates state accumulation issues. The hosted runner labels map directly to specific VM images that GitHub maintains, updating them regularly with current software versions.
# GitHub-hosted runner example with multiple jobs
name: Multi-platform Build
on: [push]
jobs:
  test-linux:
    runs-on: ubuntu-latest  # GitHub provisions Ubuntu 24.04 (as of 2025)
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: npm test
  test-windows:
    runs-on: windows-latest  # GitHub provisions Windows Server 2022
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: npm test
  test-macos:
    runs-on: macos-latest  # GitHub provisions macOS 14 on Apple Silicon
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: npm test
Self-hosted runners introduce flexibility but require operational management. You provision the infrastructure yourself—physical servers, virtual machines, containers, or cloud instances—and register them with GitHub. During registration, you assign custom labels that describe the runner’s characteristics. These labels become the vocabulary your workflows use to target specific runners.
The labeling strategy for self-hosted runners should reflect the dimensions along which your runners differ. Common dimensions include operating system (linux, windows, macos), CPU architecture (x64, arm64), special hardware (gpu, high-memory), geographic location (us-east, eu-central), and security context (prod, staging, dev).
# Self-hosted runner with descriptive labels
jobs:
  gpu-training:
    runs-on: [self-hosted, linux, x64, gpu, cuda-11]  # Targets GPU runner
    steps:
      - uses: actions/checkout@v4
      - name: Train ML model
        run: python train_model.py --gpu
  deploy-production:
    runs-on: [self-hosted, linux, prod, us-east-1]  # Production runner in specific region
    steps:
      - uses: actions/checkout@v4
      - name: Deploy application
        run: kubectl apply -f deployment.yaml
The key insight is that hosted runners use predefined, immutable labels controlled by GitHub, while self-hosted runners use arbitrary labels you define at registration. This means your runs-on configuration for hosted runners must use GitHub’s exact label strings, but for self-hosted runners you have complete freedom to design a labeling taxonomy that matches your infrastructure reality.
A hybrid approach combining both runner types offers powerful flexibility. You might use GitHub-hosted runners for standard build and test jobs to minimize operational overhead, while reserving self-hosted runners for deployment jobs that need access to your production network, or specialized workloads requiring GPUs or large memory allocations.
Dynamic and Conditional runs-on Patterns
Static runs-on values work well for straightforward workflows, but complex scenarios often require dynamic runner selection based on workflow inputs, branch names, or environmental conditions. GitHub Actions expressions enable sophisticated conditional logic that determines runner selection at workflow execution time.
The simplest dynamic pattern uses workflow inputs to let users choose runner types when triggering workflows manually. This proves especially valuable for testing workflows across different environments before committing to a specific configuration:
# Dynamic runner selection via workflow input
name: Flexible Build
on:
  workflow_dispatch:
    inputs:
      runner-type:
        description: 'Runner type to use'
        required: true
        type: choice
        options:
          - ubuntu-latest
          - self-hosted
jobs:
  build:
    runs-on: ${{ github.event.inputs.runner-type }}  # Dynamic runner selection
    steps:
      - uses: actions/checkout@v4
      - name: Build application
        run: make build
Matrix runs-on Across Operating Systems
Matrix builds represent another powerful pattern for executing identical jobs across multiple runner types simultaneously. This pattern commonly appears in testing scenarios where you need to verify cross-platform compatibility:
# Matrix build across multiple runner types
name: Cross-platform Test Suite
on: [push, pull_request]
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]  # Test on all platforms
        node-version: [18, 20]  # Test multiple Node versions
    runs-on: ${{ matrix.os }}  # Dynamic based on matrix value
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
Conditional runs-on Expressions
More advanced patterns use conditional expressions to implement fallback logic. Imagine a scenario where you prefer self-hosted runners for cost savings but want to fall back to GitHub-hosted runners if self-hosted capacity is unavailable. While you can’t directly implement fallback within a single job’s runs-on, you can structure your workflow with conditional jobs:
# Fallback pattern using job dependencies and conditions
name: Build with Fallback
on: [push]
jobs:
  # Try self-hosted first
  build-self-hosted:
    runs-on: [self-hosted, linux]
    timeout-minutes: 5  # Cancel quickly once running; queue time is limited separately
    continue-on-error: true  # Don't fail workflow if this job fails
    outputs:
      success: ${{ steps.build.outcome == 'success' }}
    steps:
      - uses: actions/checkout@v4
      - id: build
        name: Build on self-hosted
        run: make build
        timeout-minutes: 3
  # Fall back to hosted if self-hosted failed
  build-hosted:
    needs: build-self-hosted
    if: ${{ needs.build-self-hosted.outputs.success != 'true' }}
    runs-on: ubuntu-latest  # Fallback to GitHub-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Build on hosted runner
        run: make build
Branch-based runner selection allows different branches to use different infrastructure, useful when production branches require deployment on specific runners while development branches can use general-purpose runners:
# Branch-specific runner selection
name: Environment-aware Deploy
on: [push]
jobs:
  deploy:
    # fromJSON produces a real label array; a quoted '[...]' string would be
    # treated as a single literal label and never match a runner
    runs-on: ${{ github.ref == 'refs/heads/main' && fromJSON('["self-hosted", "linux", "prod"]') || 'ubuntu-latest' }}
    steps:
      - uses: actions/checkout@v4
      - name: Deploy application
        run: ./deploy.sh ${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}
These dynamic patterns become especially powerful when combined with reusable workflows, where the calling workflow can pass runner configuration as inputs to the called workflow, creating flexible, composable CI/CD systems.
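As a sketch of that composition, a called workflow can accept its runner labels as a workflow_call input (the file name and input name below are illustrative, not from the original article):

```yaml
# .github/workflows/reusable-build.yml (called workflow)
name: Reusable Build
on:
  workflow_call:
    inputs:
      runner-labels:
        description: 'JSON array of runner labels'
        type: string
        default: '["ubuntu-latest"]'
jobs:
  build:
    runs-on: ${{ fromJSON(inputs.runner-labels) }}  # Caller decides where this runs
    steps:
      - uses: actions/checkout@v4
      - run: make build

# A caller workflow then passes runner configuration per environment:
# jobs:
#   call-build:
#     uses: ./.github/workflows/reusable-build.yml
#     with:
#       runner-labels: '["self-hosted", "linux", "prod"]'
```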
Handling Runner Availability and Fallback Strategies
When no runner matches your runs-on labels, GitHub Actions places the job in a queued state, where it waits until a matching runner becomes available (jobs targeting self-hosted runners that queue for more than 24 hours are failed automatically). This behavior can block your entire CI/CD pipeline if you've misconfigured labels or if your self-hosted runner pool experiences capacity issues.
Understanding the queuing mechanism helps you design more resilient workflows. GitHub Actions maintains a queue for each repository, and jobs enter this queue immediately upon workflow triggering. The runner assignment process continuously scans available runners, attempting to match queued jobs with idle runners that possess all required labels. For GitHub-hosted runners, this matching typically happens within seconds because GitHub maintains substantial capacity. For self-hosted runners, matching depends entirely on your infrastructure availability.
The timeout behavior provides a critical safety mechanism, but it is important to understand what it covers. Each job has a configurable execution timeout via timeout-minutes (defaulting to 360 minutes) that applies once the job starts running. Queue time is limited separately: a job that cannot find a matching self-hosted runner is failed by GitHub after sitting in queue for 24 hours. Together these limits prevent jobs from queuing or running forever, but they also mean you might experience failures not from actual build problems but from infrastructure unavailability.
# Explicit timeout configuration
jobs:
  build:
    runs-on: [self-hosted, gpu]
    timeout-minutes: 30  # Cancel the job 30 minutes after it starts running
    steps:
      - uses: actions/checkout@v4
      - name: Train model
        run: python train.py
Several strategies prevent blocked workflows caused by runner availability issues. The first and most important strategy involves maintaining sufficient runner capacity. For self-hosted runners, monitor your runner pool size relative to your typical job concurrency. If you frequently trigger ten simultaneous workflows but only maintain five runners, jobs will queue even when labels match correctly.
Assigning multiple labels thoughtfully can improve runner utilization without overconstraining job matching. Consider a scenario where you have separate runner pools for different teams. Rather than requiring every label (team-backend, linux, x64, docker), use the minimum set necessary for job requirements. If the job genuinely needs the backend team’s runner for network access but doesn’t care about the other attributes, specify only team-backend:
# Minimal label strategy for better runner availability
jobs:
  # Overconstrained - harder to match
  build-overconstrained:
    runs-on: [self-hosted, linux, x64, docker, team-backend, us-east]
    steps:
      - run: make build
  # Right-constrained - easier to match, still correct
  build-optimized:
    runs-on: [self-hosted, team-backend]  # Only essential labels
    steps:
      - run: make build
Implementing backup runner strategies provides resilience against availability issues. One effective pattern uses workflow structure to attempt self-hosted execution with a hosted fallback, as shown in the dynamic patterns section. Another approach maintains a small pool of “overflow” runners with generic labels that can handle any job type when specialized runners are unavailable.
Monitoring runner health becomes essential for preventing availability issues before they impact workflows. Implement automated health checks that verify runner connectivity, resource availability, and configuration correctness. Many teams run scheduled workflows that exercise their runner pool and alert on failures:
# Runner health check workflow
name: Runner Health Check
on:
  schedule:
    - cron: '*/15 * * * *'  # Every 15 minutes
jobs:
  check-prod-runners:
    runs-on: [self-hosted, prod]
    steps:
      - name: Verify runner health
        run: |
          # Fail if any filesystem is 80% full or more (skip the header row)
          df -h | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 80) exit 1 }'
          # Check docker daemon
          docker ps > /dev/null
          # Check network connectivity
          curl -sf https://api.github.com > /dev/null
      - name: Alert on failure
        if: failure()
        run: |
          # Send alert to monitoring system
          curl -X POST ${{ secrets.ALERT_WEBHOOK }} \
            -d '{"text":"Production runner health check failed"}'
Geographic distribution of runners adds another dimension to availability planning. If your organization operates globally, maintaining runner pools in multiple regions ensures that workflow triggers during any timezone find available capacity. However, this requires careful label design to allow jobs to run in any region unless they specifically need regional resources.
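In practice this means leaving region labels off jobs by default and adding them only when a job genuinely needs regional resources. A brief sketch (the region labels are illustrative):

```yaml
jobs:
  build:
    runs-on: [self-hosted, linux]  # Any region's runners may pick this up
    steps:
      - run: make build
  sync-regional-cache:
    runs-on: [self-hosted, linux, eu-central]  # Must run near the EU cache
    steps:
      - run: ./sync-cache.sh eu
```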
Performance, Version, and Compatibility Concerns
The choice of runner type and configuration significantly impacts workflow performance, and understanding these performance characteristics helps you optimize your CI/CD pipelines. GitHub-hosted runners and self-hosted runners exhibit fundamentally different performance profiles that affect both individual job execution time and overall pipeline throughput.
GitHub-hosted runners incur provisioning overhead at job start. When your workflow starts a job on ubuntu-latest, GitHub must locate available capacity, provision a fresh VM, initialize the operating system, start the runner agent, and establish connectivity. This provisioning process typically takes 20 to 60 seconds before your first step executes. For workflows with many short jobs, this startup time can dominate total execution time:
# Startup overhead demonstration
name: Many Short Jobs
on: [push]
jobs:
  # Each of these jobs incurs ~30-45 seconds startup overhead
  # on GitHub-hosted runners, even though actual work takes seconds
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run lint  # Might take only 5 seconds
  format-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run format-check  # Might take only 3 seconds
  # Better: combine into single job to pay startup cost once
  quality-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run lint
      - run: npm run format-check
Self-hosted runners eliminate provisioning overhead since they remain running continuously and pick up jobs immediately. However, they introduce different performance considerations. State accumulation from previous job executions can affect subsequent jobs unless you implement proper cleanup. Disk space fills gradually, Docker images accumulate, temporary files persist, and installed dependencies remain. This persistence can actually benefit performance through caching—a self-hosted runner that already has Docker images pulled or dependencies cached can execute subsequent jobs much faster—but it requires active management.
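One common mitigation is an explicit cleanup step that always runs, even when earlier steps fail. A sketch for a Docker-based self-hosted runner (the prune filter and paths are illustrative; tune them to your environment):

```yaml
jobs:
  build:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - run: make build
      - name: Clean up runner state
        if: always()  # Run even when the build fails
        run: |
          # Drop containers/images not used in the last 24 hours,
          # preserving recently pulled layers as a warm cache
          docker system prune -af --filter "until=24h"
          # Clear this job's temporary files
          rm -rf "$RUNNER_TEMP"/*
```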
Resource capacity differences between runner types dramatically affect performance for resource-intensive workloads. GitHub-hosted runners provide standardized allocations: as of 2025, standard Linux and Windows runners offer 4 vCPUs, 16 GB RAM, and 14 GB of SSD space, while macOS runners range from 3 vCPUs with 7 GB RAM (Apple Silicon images) to 4 vCPUs with 14 GB RAM (Intel images). If your build or test suite requires more resources, these constraints become bottlenecks (GitHub also sells larger hosted runners on paid plans). Self-hosted runners let you provision whatever hardware your workload demands—64-core machines with 256 GB RAM for parallel test execution, GPU machines for ML workloads, or high-IOPS storage for database testing.
Software version drift presents a subtle but significant compatibility concern, especially for self-hosted runners. GitHub-hosted runners follow a published schedule of image updates, and GitHub maintains detailed documentation of installed software versions for each image. Self-hosted runners, however, reflect whatever software versions you’ve installed, and these versions drift over time unless you actively manage updates:
# Pinning software versions to avoid drift issues
jobs:
  build:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      # Explicit version pinning prevents drift issues
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20.10.0'  # Exact version, not 'latest'
      - name: Install dependencies
        run: npm ci  # Use 'ci' not 'install' for reproducible builds
      - name: Build
        run: npm run build
Compatibility issues often surface when workflows developed on GitHub-hosted runners migrate to self-hosted runners or vice versa. Differences in preinstalled software, file system layouts, environment variables, and available commands cause workflows to fail in unexpected ways. The most robust approach uses containerized jobs that guarantee consistent environments across any runner type:
# Container-based job for consistent environment
jobs:
  build:
    runs-on: ubuntu-latest  # Can be self-hosted too
    container:
      image: node:20-alpine  # Exact environment regardless of runner
      options: --user 1001  # Run as non-root
    steps:
      - uses: actions/checkout@v4
      - name: Build application
        run: npm ci && npm run build
Network performance varies between runner types and affects workflows that transfer large amounts of data. GitHub-hosted runners benefit from GitHub’s high-bandwidth network infrastructure, providing fast access to GitHub repositories, package registries, and cloud services. Self-hosted runners depend on your network infrastructure—a runner on a corporate network behind restrictive proxies might experience slow downloads, while a runner co-located in AWS us-east-1 with your S3 buckets might achieve dramatically faster data transfers for deployment workflows.
Debugging runs-on Issues and Runner Matching Problems
When your workflow job sits in queue indefinitely or fails immediately with cryptic runner-related errors, systematic debugging helps identify and resolve the root cause quickly. The most common issue—no runner matches your specified labels—manifests as a job that enters the queued state and never progresses to running.
Understanding how to inspect runner availability provides the foundation for debugging. The GitHub UI offers several visibility points. Navigate to your repository’s Settings → Actions → Runners to view all self-hosted runners registered to your repository, organization, or enterprise. Each runner displays its current status (Idle, Active, Offline), assigned labels, and recent job history. For jobs stuck in queue, verify that at least one runner exists with all the labels your job requires, and that the runner shows an “Idle” status.
Label mismatch errors represent the majority of runner matching failures. Typos, case sensitivity issues, or outdated labels create situations where your workflow requests labels that don’t exist on any runner. Consider this common mistake:
# Incorrect - case sensitivity and typo issues
jobs:
  deploy:
    runs-on: [self-hosted, Linux, x86_64]  # Should be 'linux' and 'x64'
    steps:
      - run: ./deploy.sh
The GitHub Actions UI provides debugging information in the workflow run view. When a job fails to find a matching runner, examine the job’s annotations and logs carefully. GitHub sometimes provides hints about label mismatches, though the error messages can be subtle. The workflow run visualization shows jobs in yellow while queued, and clicking on a queued job displays how long it’s been waiting.
The GitHub CLI (gh) offers powerful inspection capabilities for debugging runner issues. You can list runners and their labels programmatically, which helps identify mismatches:
# List all self-hosted runners for a repository
gh api /repos/OWNER/REPO/actions/runners --jq '.runners[] | {id, name, status, labels: [.labels[].name]}'
# Check specific runner labels
gh api /repos/OWNER/REPO/actions/runners/RUNNER_ID --jq '.labels[].name'
# List queued workflow jobs
gh run list --limit 5 --json status,conclusion,databaseId | jq '.[] | select(.status=="queued")'
Implementing a methodical debugging workflow helps resolve issues systematically. First, verify that the runner is online and idle. If using self-hosted runners, check the runner service status on the host machine itself—the runner might show as offline in GitHub UI due to connectivity issues. Second, compare the labels in your workflow file character-by-character against the labels shown in GitHub UI for available runners. Copy-paste labels rather than retyping them to avoid transcription errors. Third, temporarily simplify your label requirements to the bare minimum to determine whether the issue stems from a specific label or from runner unavailability.
Testing runner matching in staging before deploying workflow changes to production prevents availability issues from blocking critical pipelines. Many teams maintain dedicated staging workflows that exercise runner configurations:
# Staging runner test workflow
name: Test Runner Configuration
on:
  workflow_dispatch:
    inputs:
      test-labels:
        description: 'Runner labels to test (JSON array format)'
        required: true
        default: '["self-hosted", "linux"]'
jobs:
  test-runner:
    runs-on: ${{ fromJSON(github.event.inputs.test-labels) }}
    steps:
      - name: Report runner details
        run: |
          echo "Runner name: $RUNNER_NAME"
          echo "Runner OS: $RUNNER_OS"
          echo "Runner architecture: $RUNNER_ARCH"
          echo "Note: the runner's full label set is not exposed via environment variables"
          echo "Test successful - runner matched and executed job"
Common runner configuration issues extend beyond simple label mismatches. Permissions problems can prevent runners from accepting jobs even when labels match—organization runners restricted to specific repositories won’t pick up jobs from other repositories. Runner group configurations might limit which repositories or workflows can use particular runners. Token expiration or authentication failures cause runners to appear online in UI but fail to accept jobs.
For self-hosted runners experiencing intermittent matching problems, examine the runner logs on the host machine. These logs provide detailed information about job acceptance, connectivity issues, and configuration problems. The logs typically reside in _diag folders within the runner installation directory. Patterns of repeated “Job request failed” messages often indicate network or authentication issues rather than label mismatches.
Diagnostic jobs help establish baseline runner health and surface configuration issues proactively:
# Comprehensive runner diagnostic job
name: Runner Diagnostics
on:
  schedule:
    - cron: '0 */4 * * *'  # Every 4 hours
  workflow_dispatch:
jobs:
  diagnose:
    runs-on: [self-hosted, linux]
    steps:
      - name: System information
        run: |
          echo "=== System Info ==="
          uname -a
          cat /etc/os-release
      - name: Resource availability
        run: |
          echo "=== Resources ==="
          df -h
          free -h
          nproc
      - name: Network connectivity
        run: |
          echo "=== Network ==="
          curl -I https://github.com
          curl -I https://api.github.com
      - name: Docker availability (if needed)
        run: |
          echo "=== Docker ==="
          docker --version
          docker ps
      - name: Installed software versions
        run: |
          echo "=== Software ==="
          git --version
          node --version || echo "Node not installed"
          python --version || echo "Python not installed"
Best Practices and Label Strategy Patterns
Developing a coherent label strategy prevents runner matching issues and creates maintainable workflows that scale as your infrastructure grows. The fundamental principle guiding effective label design is minimalism—use the fewest labels necessary to uniquely identify the runner capabilities your job requires.
Overconstrained labels cause unnecessary queuing and reduce runner utilization. When you specify six labels for a job that only truly needs two, you limit the pool of eligible runners artificially. Consider whether each label genuinely affects job execution or merely describes incidental characteristics of your runner infrastructure:
# Overconstrained - poor practice
jobs:
  build:
    runs-on: [self-hosted, linux, x64, ubuntu-22.04, docker-24, region-us-east, rack-5a]
    # This job probably doesn't care about rack location or exact Docker version
    steps:
      - run: make build

# Right-constrained - best practice
jobs:
  build:
    runs-on: [self-hosted, linux]  # Only essential requirements
    # Job can run on any Linux runner, maximizing availability
    steps:
      - run: make build
Consistent naming conventions across your runner labels improve workflow readability and reduce errors. Establish conventions for label format and stick to them organization-wide. Common patterns include lowercase with hyphens (gpu-enabled, high-memory), namespacing for organizational scope (team-backend, project-alpha), and hierarchical labeling for related capabilities (python-3.9, python-3.10, python-3.11).
For organizations with multiple teams sharing runner infrastructure, namespacing prevents label collisions and clarifies ownership:
# Namespaced label strategy for multi-team environments
jobs:
  backend-build:
    runs-on: [self-hosted, team-backend, linux]
    steps:
      - run: make build-backend
  frontend-build:
    runs-on: [self-hosted, team-frontend, linux]
    steps:
      - run: npm run build-frontend
  shared-security-scan:
    runs-on: [self-hosted, security, linux]  # Shared security scanning runner
    steps:
      - run: security-scan.sh
Hardware and capability labels distinguish runners with special resources. When certain workflows require GPUs, large memory allocations, or specific hardware, create explicit labels for these capabilities rather than relying on team or location labels to imply hardware availability:
# Hardware capability labels
jobs:
  ml-training:
    runs-on: [self-hosted, linux, gpu, cuda-12]  # Explicit GPU requirement
    steps:
      - run: python train.py --gpu
  memory-intensive-build:
    runs-on: [self-hosted, linux, high-memory]  # Explicitly requires 64GB+ RAM
    steps:
      - run: gradle build -Xmx48g
  arm-build:
    runs-on: [self-hosted, linux, arm64]  # Architecture-specific build
    steps:
      - run: cargo build --target aarch64-unknown-linux-gnu
Environment separation labels prevent accidental execution of production workloads on non-production infrastructure and vice versa. Security and compliance requirements often mandate clear boundaries between environments:
# Environment separation pattern
jobs:
  deploy-staging:
    runs-on: [self-hosted, linux, staging]
    if: github.ref == 'refs/heads/develop'
    steps:
      - run: ./deploy.sh staging
  deploy-production:
    runs-on: [self-hosted, linux, production]
    if: github.ref == 'refs/heads/main'
    environment: production  # Also use GitHub environment protection rules
    steps:
      - run: ./deploy.sh production
Periodic runner maintenance and label auditing ensure your infrastructure matches your workflow expectations. Implement regular reviews of runner labels and workflow requirements. Remove obsolete labels from runners when hardware changes or organizational structure shifts. Update workflow files to reflect current label conventions when renaming or consolidating labels.
Automated label validation catches configuration drift before it causes workflow failures. Some teams implement admission control workflows that verify new workflow files against known-good label patterns:
# Label validation workflow (runs on pull requests)
name: Validate Workflow Labels
on:
  pull_request:
    paths:
      - '.github/workflows/**'
jobs:
  validate-labels:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Extract and validate runs-on labels
        run: |
          # Extract all runs-on declarations from workflow files
          # Validate against organization's approved label list
          # Fail if unknown labels detected
          python scripts/validate_runner_labels.py
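Such a validation script can be a short scan over the workflow files. The following stdlib-only Python sketch is one possible shape (the approved-label set is hypothetical, and a production implementation would use a real YAML parser rather than a regex):

```python
import re
import sys
from pathlib import Path

# Hypothetical organization-approved labels; extend to match your runner fleet.
APPROVED = {"self-hosted", "linux", "windows", "x64", "arm64",
            "ubuntu-latest", "windows-latest", "macos-latest",
            "team-backend", "team-frontend", "gpu", "prod", "staging"}

# Matches 'runs-on: label' or 'runs-on: [a, b, c]' at the start of a line.
RUNS_ON = re.compile(r"^\s*runs-on:\s*(\[[^\]]*\]|\S+)", re.MULTILINE)

def extract_labels(text: str) -> set[str]:
    labels: set[str] = set()
    for match in RUNS_ON.findall(text):
        if "${{" in match:
            continue  # Skip dynamic expressions; they need runtime evaluation
        labels.update(l.strip(" '\"") for l in match.strip("[]").split(","))
    return {l for l in labels if l}

def validate(paths) -> set[str]:
    unknown: set[str] = set()
    for path in paths:
        unknown |= extract_labels(Path(path).read_text()) - APPROVED
    return unknown

if __name__ == "__main__":
    bad = validate(sys.argv[1:])
    if bad:
        print(f"Unknown runner labels: {sorted(bad)}")
        sys.exit(1)
```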
Hybrid infrastructure strategies combine GitHub-hosted and self-hosted runners strategically. Use hosted runners for standard builds and tests where provisioning overhead is acceptable and you want zero infrastructure management. Reserve self-hosted runners for scenarios requiring specific hardware, network access to internal resources, longer-running jobs where provisioning overhead becomes proportionally expensive, or workloads with compliance requirements mandating specific infrastructure controls.
The label strategy for hybrid setups should clearly distinguish between hosted and self-hosted runners. GitHub's hosted runners use predefined labels; for self-hosted jobs, list the self-hosted label explicitly (and first) in the runs-on array so the distinction stays obvious when reading workflows:
# Hybrid workflow with clear runner distinctions
jobs:
test:
runs-on: ubuntu-latest # GitHub-hosted, fast provisioning, no maintenance
steps:
- uses: actions/checkout@v4
- run: npm test
build:
runs-on: ubuntu-latest # GitHub-hosted
steps:
- uses: actions/checkout@v4
- run: npm run build
- uses: actions/upload-artifact@v4
with:
name: build-artifacts
path: dist/
deploy:
needs: [test, build]
runs-on: [self-hosted, linux, production] # Self-hosted for network access
steps:
- uses: actions/download-artifact@v4
with:
name: build-artifacts
- run: ./deploy-to-internal-network.sh
Real-World Examples and Use Cases
Examining practical examples from production environments illustrates how different organizations apply runs-on strategies to solve real challenges. These patterns emerge repeatedly across teams building robust CI/CD pipelines.
GPU-Accelerated ML Training Pipeline
Machine learning teams frequently require GPU resources for model training, but GPU runners are expensive to maintain continuously. This example shows a workflow that uses standard runners for data preparation and validation, reserving GPU runners only for the actual training step:
# ML training workflow with selective GPU usage
name: Train ML Model
on:
workflow_dispatch:
inputs:
dataset-version:
description: 'Dataset version to use'
required: true
jobs:
prepare-data:
runs-on: ubuntu-latest # Standard runner for data prep
steps:
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Download and preprocess dataset
run: |
pip install pandas numpy
python scripts/prepare_data.py --version ${{ github.event.inputs.dataset-version }}
- name: Upload prepared data
uses: actions/upload-artifact@v4
with:
name: training-data
path: data/prepared/
train-model:
needs: prepare-data
runs-on: [self-hosted, linux, gpu, cuda-12] # GPU runner only for training
timeout-minutes: 180 # 3 hours for training
steps:
- uses: actions/checkout@v4
- name: Download prepared data
uses: actions/download-artifact@v4
with:
name: training-data
path: data/prepared/
- name: Verify GPU availability
run: |
nvidia-smi
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
- name: Train model
run: |
python train.py \
--data data/prepared/ \
--epochs 100 \
--gpu \
--output models/
- name: Upload trained model
uses: actions/upload-artifact@v4
with:
name: trained-model
path: models/
evaluate-model:
needs: train-model
runs-on: ubuntu-latest # Back to standard runner for evaluation
steps:
- uses: actions/checkout@v4
- name: Download model
uses: actions/download-artifact@v4
with:
name: trained-model
- name: Run evaluation metrics
run: python evaluate.py --model models/model.pt
Hybrid Cloud-Native Deployment Workflow
Organizations running Kubernetes clusters often need runners with direct network access to their clusters for deployments, while other workflow steps can run on GitHub-hosted infrastructure:
# Hybrid deployment workflow
name: Deploy to Kubernetes
on:
push:
branches: [main, staging]
jobs:
build-and-test:
runs-on: ubuntu-latest # GitHub-hosted for standard CI tasks
steps:
- uses: actions/checkout@v4
- name: Build Docker image
run: |
docker build -t myapp:${{ github.sha }} .
docker save myapp:${{ github.sha }} > image.tar
- name: Run tests
run: |
docker run myapp:${{ github.sha }} npm test
- name: Upload image artifact
uses: actions/upload-artifact@v4
with:
name: docker-image
path: image.tar
push-to-registry:
needs: build-and-test
runs-on: [self-hosted, linux, ecr-access] # Self-hosted with AWS credentials
steps:
- name: Download image
uses: actions/download-artifact@v4
with:
name: docker-image
- name: Load and push to ECR
run: |
docker load < image.tar
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.ECR_REGISTRY }}
docker tag myapp:${{ github.sha }} ${{ secrets.ECR_REGISTRY }}/myapp:${{ github.sha }}
docker push ${{ secrets.ECR_REGISTRY }}/myapp:${{ github.sha }}
deploy-staging:
needs: push-to-registry
if: github.ref == 'refs/heads/staging'
runs-on: [self-hosted, linux, k8s-staging] # Staging cluster access
steps:
- uses: actions/checkout@v4
- name: Deploy to staging
run: |
kubectl set image deployment/myapp myapp=${{ secrets.ECR_REGISTRY }}/myapp:${{ github.sha }} -n staging
kubectl rollout status deployment/myapp -n staging --timeout=5m
deploy-production:
needs: push-to-registry
if: github.ref == 'refs/heads/main'
runs-on: [self-hosted, linux, k8s-production] # Production cluster access
environment: production # Requires manual approval
steps:
- uses: actions/checkout@v4
- name: Deploy to production
run: |
kubectl set image deployment/myapp myapp=${{ secrets.ECR_REGISTRY }}/myapp:${{ github.sha }} -n production
kubectl rollout status deployment/myapp -n production --timeout=5m
Multi-Architecture Build Matrix
Teams supporting multiple CPU architectures or operating systems use matrix strategies to build and test across all target platforms simultaneously:
# Cross-platform and cross-architecture builds
name: Multi-Architecture Build
on: [push, pull_request]
jobs:
build:
strategy:
matrix:
include:
# Linux builds on different architectures
- os: ubuntu-latest
arch: x64
target: x86_64-unknown-linux-gnu
- os: [self-hosted, linux, arm64]
arch: arm64
target: aarch64-unknown-linux-gnu
# Windows builds
- os: windows-latest
arch: x64
target: x86_64-pc-windows-msvc
# macOS builds on different architectures
- os: macos-latest
arch: x64
target: x86_64-apple-darwin
- os: [self-hosted, macos, arm64]
arch: arm64
target: aarch64-apple-darwin
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- name: Setup Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
target: ${{ matrix.target }}
- name: Build
run: cargo build --release --target ${{ matrix.target }}
- name: Run tests
run: cargo test --target ${{ matrix.target }}
- name: Upload binary
uses: actions/upload-artifact@v4
with:
name: binary-${{ matrix.target }}
path: target/${{ matrix.target }}/release/myapp*
Geographic Distribution with Failover
Global organizations with runners in multiple regions implement failover patterns using job dependencies and conditional execution:
# Geographic failover pattern
name: Deploy with Regional Failover
on:
workflow_dispatch:
inputs:
target-region:
description: 'Preferred deployment region'
type: choice
options:
- us-east
- eu-west
- ap-south
jobs:
deploy-preferred-region:
runs-on: [self-hosted, linux, region-${{ github.event.inputs.target-region }}]
timeout-minutes: 10
continue-on-error: true
outputs:
success: ${{ steps.deploy.outcome == 'success' }}
steps:
- uses: actions/checkout@v4
- id: deploy
name: Deploy to preferred region
run: ./deploy.sh ${{ github.event.inputs.target-region }}
deploy-fallback-us:
needs: deploy-preferred-region
if: |
github.event.inputs.target-region != 'us-east' &&
needs.deploy-preferred-region.outputs.success != 'true'
runs-on: [self-hosted, linux, region-us-east]
steps:
- uses: actions/checkout@v4
- run: ./deploy.sh us-east
deploy-fallback-eu:
needs: deploy-preferred-region
if: |
github.event.inputs.target-region != 'eu-west' &&
needs.deploy-preferred-region.outputs.success != 'true'
runs-on: [self-hosted, linux, region-eu-west]
steps:
- uses: actions/checkout@v4
- run: ./deploy.sh eu-west
Monorepo with Path-Specific Runners
Large monorepos benefit from routing different paths to appropriate runners based on the modified files:
# Path-based runner selection for monorepos
name: Monorepo CI
on:
push:
branches: [main]
pull_request:
jobs:
detect-changes:
runs-on: ubuntu-latest
outputs:
backend: ${{ steps.filter.outputs.backend }}
frontend: ${{ steps.filter.outputs.frontend }}
mobile: ${{ steps.filter.outputs.mobile }}
steps:
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
backend:
- 'backend/**'
frontend:
- 'frontend/**'
mobile:
- 'mobile/**'
backend-ci:
needs: detect-changes
if: needs.detect-changes.outputs.backend == 'true'
runs-on: [self-hosted, linux, high-memory] # Backend needs more resources
steps:
- uses: actions/checkout@v4
- run: cd backend && make test
frontend-ci:
needs: detect-changes
if: needs.detect-changes.outputs.frontend == 'true'
runs-on: ubuntu-latest # Frontend uses standard runner
steps:
- uses: actions/checkout@v4
- run: cd frontend && npm test
mobile-ci:
needs: detect-changes
if: needs.detect-changes.outputs.mobile == 'true'
runs-on: [self-hosted, macos, xcode-15] # iOS builds need macOS
steps:
- uses: actions/checkout@v4
- run: cd mobile && xcodebuild test -scheme MyApp
These real-world patterns demonstrate how thoughtful runs-on configuration enables sophisticated CI/CD workflows that balance performance, cost, security, and maintainability.
Conclusion and Actionable Checklist
Mastering the runs-on key transforms your GitHub Actions workflows from fragile, unpredictable pipelines into robust, efficient CI/CD systems. The configuration choices you make here ripple through every aspect of your software delivery process—affecting build times, infrastructure costs, security boundaries, and developer productivity.
The most critical insight is that runs-on is not merely a runner selector; it’s an architectural decision that shapes your entire CI/CD infrastructure strategy. Teams that treat it as an afterthought experience chronic issues with blocked pipelines, environment inconsistencies, and wasted runner capacity. Teams that design thoughtful label strategies and runner architectures build resilient systems that scale gracefully as their organization grows.
Actionable Workflow Audit Checklist
Use this checklist to review and improve your existing workflows:
Label Configuration
- [ ] Every `runs-on` declaration uses the minimum necessary labels for job requirements
- [ ] Labels follow consistent naming conventions (lowercase, hyphens, namespaced where appropriate)
- [ ] No typos or case sensitivity errors in label names
- [ ] Labels accurately reflect current runner infrastructure (no obsolete labels)
- [ ] GitHub-hosted runner labels use exact GitHub syntax (`ubuntu-latest`, not `Ubuntu-Latest`)
Runner Availability
- [ ] Self-hosted runner pools maintain sufficient capacity for typical concurrent job loads
- [ ] Critical workflows have timeout values appropriate for expected queue + execution time
- [ ] Monitoring alerts exist for self-hosted runner offline events
- [ ] Runner health checks verify connectivity, disk space, and essential software
- [ ] Fallback strategies exist for workflows requiring high availability
Environment Consistency
- [ ] Self-hosted runners use pinned software versions or containerized jobs
- [ ] Runner software versions match workflow requirements
- [ ] Cleanup processes prevent state accumulation on self-hosted runners
- [ ] Documentation exists mapping runner labels to their capabilities and installed software
Security & Compliance
- [ ] Production runners use distinct labels preventing accidental usage by non-production workflows
- [ ] Sensitive workflows use appropriate GitHub environment protection rules alongside `runs-on`
- [ ] Self-hosted runners follow security hardening practices
- [ ] Runner access controls limit which repositories can use specific runner pools
Performance Optimization
- [ ] Short jobs are combined to amortize GitHub-hosted runner provisioning overhead
- [ ] Resource-intensive jobs use appropriately sized self-hosted runners
- [ ] Workflows leverage matrix builds efficiently across multiple runner types
- [ ] Geographic runner distribution minimizes network latency for relevant workflows
Regularly auditing your workflows against this checklist prevents configuration drift and ensures your CI/CD infrastructure continues meeting your team’s evolving needs. Schedule quarterly reviews where you examine runner utilization metrics, evaluate whether label strategies still serve your architecture, and update workflows to leverage new runner capabilities.
The investment in understanding and properly configuring runs-on pays dividends immediately through faster pipelines, fewer debugging sessions, and more predictable CI/CD behavior. Your future self—and your entire development team—will thank you for building this foundation correctly.
Appendix: Quick Reference and Cheatsheet
Common GitHub-Hosted Runner Labels
| Label | Operating System | Architecture | Typical Use Cases |
|---|---|---|---|
| `ubuntu-latest` | Ubuntu 24.04 (as of 2025) | x64 | Linux builds, Docker, general CI |
| `ubuntu-24.04` | Ubuntu 24.04 (pinned) | x64 | When a specific Ubuntu version is required |
| `ubuntu-22.04` | Ubuntu 22.04 (pinned) | x64 | Legacy compatibility |
| `windows-latest` | Windows Server 2022 | x64 | Windows builds, .NET projects |
| `windows-2022` | Windows Server 2022 (pinned) | x64 | Specific Windows version |
| `windows-2019` | Windows Server 2019 (pinned) | x64 | Legacy Windows compatibility |
| `macos-latest` | macOS 14 (as of 2025) | arm64 (Apple Silicon) | iOS builds, macOS applications |
| `macos-14` | macOS 14 (pinned) | arm64 | Specific macOS version |
| `macos-13` | macOS 13 (pinned) | x64 | Intel builds, legacy compatibility |
Self-Hosted Runner Label Examples
Basic Infrastructure Labels
# Operating system and architecture
runs-on: [self-hosted, linux, x64]
runs-on: [self-hosted, linux, arm64]
runs-on: [self-hosted, windows, x64]
runs-on: [self-hosted, macos, arm64] # M1/M2/M3 Macs
Hardware Capability Labels
# GPU-enabled runners
runs-on: [self-hosted, linux, gpu, cuda-12]
# High-memory runners
runs-on: [self-hosted, linux, high-memory]
# High-CPU runners for parallel builds
runs-on: [self-hosted, linux, high-cpu]
Environment and Security Labels
# Environment separation
runs-on: [self-hosted, linux, production]
runs-on: [self-hosted, linux, staging]
runs-on: [self-hosted, linux, development]
# Team or project namespacing
runs-on: [self-hosted, linux, team-backend]
runs-on: [self-hosted, linux, project-alpha]
Geographic Labels
# Regional runners
runs-on: [self-hosted, linux, us-east-1]
runs-on: [self-hosted, linux, eu-central-1]
runs-on: [self-hosted, linux, ap-south-1]
Sample Workflow Patterns
Simple Single-Job Workflow
name: Basic Build
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: make build
Multi-Job with Dependencies
name: Build and Deploy
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: npm test
build:
needs: test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: npm run build
deploy:
needs: build
runs-on: [self-hosted, linux, production]
steps:
- uses: actions/checkout@v4
- run: ./deploy.sh
Matrix Build Pattern
name: Cross-Platform Test
on: [push]
jobs:
test:
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
node-version: [18, 20, 22]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
- run: npm test
CLI Commands for Runner Management
List Runners
# List all repository runners
gh api /repos/OWNER/REPO/actions/runners --jq '.runners[] | {id, name, status, labels: [.labels[].name]}'
# List all organization runners
gh api /orgs/ORG/actions/runners --jq '.runners[] | {id, name, status, labels: [.labels[].name]}'
# List all enterprise runners
gh api /enterprises/ENTERPRISE/actions/runners --jq '.runners[] | {id, name, status, labels: [.labels[].name]}'
Check Runner Status
# Get specific runner details
gh api /repos/OWNER/REPO/actions/runners/RUNNER_ID
# Check runner online/offline status
gh api /repos/OWNER/REPO/actions/runners --jq '.runners[] | select(.status=="online") | {name, labels: [.labels[].name]}'
Inspect Workflow Runs
# List recent workflow runs
gh run list --limit 10
# View specific run details
gh run view RUN_ID
# List queued jobs
gh run list --json status,databaseId --jq '.[] | select(.status=="queued")'
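The JSON these commands return can also be post-processed in a monitoring script, for instance to alert when runners drop offline. A minimal sketch, assuming the REST API's runners payload shape (`runners[].name`, `runners[].status`); you would pipe `gh api .../actions/runners` output into it:

```python
import json

def offline_runners(api_json_text):
    """Parse the JSON body of a .../actions/runners call and return
    the names of runners that are not reporting as online."""
    data = json.loads(api_json_text)
    return [r["name"] for r in data.get("runners", [])
            if r.get("status") != "online"]
```

For example, `gh api /repos/OWNER/REPO/actions/runners | python check_runners.py` (where the script reads stdin and calls `offline_runners`) could feed an alerting hook whenever the returned list is non-empty.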
ASCII Flow Diagram
Workflow Triggered
|
v
+------------------+
| Parse Workflow |
| Extract runs-on |
+------------------+
|
v
+------------------+
| Search Available |
| Runners |
+------------------+
|
+----------------+------------------+
| | |
v v v
GitHub-Hosted Self-Hosted No Match
Runner Match Runner Match Found
| | |
v v v
Provision VM Assign to Job Queued
& Execute Runner & Execute (Waits/Times Out)
How to Use runs-on in GitHub Actions: Step-by-Step Guide
Follow these steps to properly configure runs-on in your workflows:
1. Open or create a workflow file in the `.github/workflows/` directory of your repository (e.g., `.github/workflows/build.yml`).
2. Define a job within your workflow and add the `runs-on` key at the job level (not at the workflow level or step level).
3. Choose your runner type:
   - For GitHub-hosted runners: use a single string value like `ubuntu-latest`, `windows-latest`, or `macos-latest`.
   - For self-hosted runners: use an array of labels like `[self-hosted, linux, x64]`.
4. For self-hosted runners only: ensure you’ve registered a runner with matching labels:
   - Navigate to repository/organization Settings → Actions → Runners.
   - Click “New self-hosted runner” and follow the setup instructions.
   - During setup, assign labels that match what your workflow specifies.
5. Commit and push your workflow file to trigger the workflow (if configured for push events), or trigger it manually if using `workflow_dispatch`.
6. Monitor the workflow run:
   - Go to the Actions tab in your repository.
   - Click on your workflow run to see job status.
   - Verify the job starts executing (not stuck in the queued state).
7. Debug if the job doesn’t start:
   - Check that runner labels match exactly (labels are case-sensitive).
   - Verify at least one matching runner is online and idle.
   - Simplify labels to the minimum required set.
   - For self-hosted runners, check the runner logs on the host machine.
8. Optimize your label strategy:
   - Use minimal labels to maximize runner availability.
   - Implement consistent naming conventions.
   - Document your label meanings for team members.
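The "consistent naming conventions" item above can be enforced mechanically. A sketch, assuming the lowercase-hyphen convention this guide recommends (any stricter or namespaced rules would need a different pattern):

```python
import re

# Convention used throughout this guide: lowercase alphanumerics separated
# by hyphens, e.g. team-backend, region-us-east, cuda-12.
LABEL_CONVENTION = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def check_labels(labels):
    """Return the labels that violate the naming convention."""
    return [label for label in labels if not LABEL_CONVENTION.fullmatch(label)]
```

Running such a check in CI (or in the label-validation workflow shown earlier) catches mixed-case and underscore labels before they cause case-sensitive matching failures.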
Comparison Tables
GitHub-Hosted vs Self-Hosted Runners
| Aspect | GitHub-Hosted Runners | Self-Hosted Runners |
|---|---|---|
| Provisioning | Automatic, on-demand | Manual setup required |
| Startup Time | 20-60 seconds (VM provisioning) | Immediate (runner already running) |
| Environment | Clean for every job | Persistent, requires cleanup |
| Software | Pre-installed, GitHub-maintained | You install and maintain |
| Cost | Included in GitHub plan (with limits) | Infrastructure costs only |
| Security | Isolated, ephemeral | Shared resources, persistent |
| Labels | Fixed by GitHub (ubuntu-latest, etc.) | Custom labels you define |
| Resource Limits | Fixed (2-4 cores, 7-14 GB RAM) | Whatever you provision |
| Network Access | Public internet only | Can access internal networks |
| Maintenance | Zero (GitHub handles it) | Full responsibility |
| Best For | Standard builds, tests, portability | Custom hardware, network access, long jobs |
Single vs Multiple Label Matching
| Configuration | Matching Behavior | Use Case | Example |
|---|---|---|---|
| Single Label (Hosted) | Exact match to GitHub’s predefined runners | Standard builds on specific OS | runs-on: ubuntu-latest |
| Single Label (Self-hosted) | Any runner with this label | Broad runner pool | runs-on: self-hosted |
| Multiple Labels (AND) | Runner must have ALL labels | Specific capabilities required | runs-on: [self-hosted, gpu, cuda-12] |
| Minimal Labels | More runners match, better availability | Maximize runner utilization | runs-on: [self-hosted, linux] |
| Overconstrained Labels | Fewer runners match, potential queuing | Accidental over-specification | runs-on: [self-hosted, linux, x64, docker, region, rack] |
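The AND semantics in the table above can be modeled as a simple subset check. This is a simplified illustration of the matching rule, not GitHub's actual scheduler (which also considers runner scope, busy state, and queuing):

```python
def runner_matches(required, runner_labels):
    """A runner is eligible only when it carries every requested label (AND)."""
    return set(required) <= set(runner_labels)

def eligible_runners(required, pool):
    """Filter a pool of {runner_name: labels} down to the runners that match."""
    return [name for name, labels in pool.items()
            if runner_matches(required, labels)]
```

Note how adding labels to `required` can only shrink the eligible set, which is why overconstrained declarations lead to queuing.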
Matrix Build Runner Strategies
| Strategy | Configuration | Benefits | Drawbacks |
|---|---|---|---|
| Homogeneous | Same runner type for all matrix jobs | Simple, predictable | May not test real deployment targets |
| Heterogeneous | Different runner types per matrix dimension | Tests actual target platforms | More complex, requires more runner types |
| Mixed Hosted/Self-hosted | Some matrix jobs on hosted, some on self-hosted | Cost optimization, flexibility | Requires hybrid infrastructure |
| Geographic Distribution | Matrix includes region labels | Lower latency, compliance testing | Needs runners in multiple regions |
Example: Heterogeneous Matrix
strategy:
matrix:
include:
- os: ubuntu-latest
target: linux-x64
- os: windows-latest
target: windows-x64
- os: [self-hosted, macos, arm64]
target: macos-arm64
runs-on: ${{ matrix.os }}
For comprehensive GitHub Actions guidance, explore these related articles on thedevopstooling.com:
- GitHub Hosted Runner Guide – Deep dive into GitHub’s managed runner infrastructure, available software, and usage limits
- GitHub Actions Self-Hosted Runner Guide – Complete setup, security hardening, and operational best practices for self-hosted runners
- GitHub Actions Workflow Triggers Explained – Master all workflow trigger types to complement your runner configuration
- Terraform CI/CD with GitHub Actions – Implement infrastructure-as-code pipelines with proper runner selection
- Kubernetes Deployments with GitHub Actions – Deploy to Kubernetes clusters using self-hosted runners with cluster access
Frequently Asked Questions (People Also Ask)
What does runs-on mean in GitHub Actions?
The runs-on key specifies which type of runner (execution environment) will execute a workflow job. It determines the operating system, hardware resources, and available software for your job. You can use GitHub-hosted runners with predefined labels like ubuntu-latest, or self-hosted runners with custom labels you define.
What are the available runs-on labels for GitHub-hosted runners?
GitHub provides these hosted runner labels: ubuntu-latest, ubuntu-24.04, ubuntu-22.04 for Linux; windows-latest, windows-2022, windows-2019 for Windows; and macos-latest, macos-14, macos-13 for macOS. The -latest variants automatically update to the newest stable version, while version-specific labels remain fixed.
How do self-hosted runner labels work in runs-on?
Self-hosted runners use custom labels you assign during runner registration. When you specify runs-on: [self-hosted, linux, gpu], GitHub Actions searches for a self-hosted runner that has all three labels. The runner must match every label in the array. You define these labels based on your infrastructure characteristics like OS, architecture, hardware capabilities, or team ownership.
Can I specify multiple runs-on labels in a workflow?
Yes, for self-hosted runners you specify multiple labels as an array: runs-on: [self-hosted, linux, x64]. The runner must possess all labels for a match. For GitHub-hosted runners, you use a single string value: runs-on: ubuntu-latest. You can also use matrix strategies to run the same job across multiple different runs-on configurations simultaneously.
Why does my workflow fail with “no matching runner”?
This error occurs when no available runner matches all the labels you specified in runs-on. Common causes include: typos in label names, case sensitivity mismatches (labels are case-sensitive), using labels that don’t exist on any runner, all matching runners being offline or busy, or specifying too many labels that overconstrain matching. Check your runner status in Settings → Actions → Runners and verify label spelling.
What are best practices for runs-on in GitHub Actions?
Best practices include: use minimal necessary labels to maximize runner availability, maintain consistent naming conventions (lowercase with hyphens), separate production and non-production runners with explicit labels, implement runner health monitoring and automated alerts, combine short jobs to reduce GitHub-hosted provisioning overhead, use containerized jobs for environment consistency, and regularly audit workflow runs-on configurations against actual runner infrastructure.
Downloadable Resources and Visual Assets
runs-on Matching Flow Diagram
┌─────────────────────────────────────────────────────────────┐
│ Workflow Execution Flow │
└─────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────┐
│ Workflow Triggered (push/PR/manual) │
└───────────────────────────────────────┘
│
▼
┌───────────────────────────────────────┐
│ Parse YAML: Extract runs-on │
│ Example: [self-hosted, linux, gpu] │
└───────────────────────────────────────┘
│
▼
┌───────────────────────────────────────┐
│ Query Available Runners in Pool │
│ (Repository/Org/Enterprise scope) │
└───────────────────────────────────────┘
│
┌───────────┴───────────┐
▼ ▼
┌─────────────────────┐ ┌─────────────────────┐
│ GitHub-Hosted │ │ Self-Hosted │
│ Label Check │ │ Label Check │
│ (exact match) │ │ (all labels match) │
└─────────────────────┘ └─────────────────────┘
│ │
├───────────┬───────────┤
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Match │ │ Match │ │ No │
│ Found │ │ Found │ │ Match │
│ (Hosted)│ │ (Self) │ │ │
└─────────┘ └─────────┘ └─────────┘
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌──────────┐ ┌──────────┐
│ Provision │ │ Assign │ │ Job │
│ Fresh VM │ │ to Idle │ │ Queued │
│ (20-60s) │ │ Runner │ │ (Wait) │
└─────────────┘ └──────────┘ └──────────┘
│ │ │
└────────┬───┘ ▼
▼ ┌──────────────┐
┌───────────────┐ │ Timeout? │
│ Execute Job │ │ Yes: Fail │
│ Steps │ │ No: Continue│
└───────────────┘ │ Waiting │
│ └──────────────┘
▼
┌───────────────┐
│ Job Complete │
│ Success/Fail │
└───────────────┘