Continue runs AI checks on every pull request. Each check is a markdown file in your repo that shows up as a GitHub status check — green if the code looks good, red with a suggested fix if not.
Paste this into Claude Code (or any coding agent):
```
Help me write checks for this codebase: https://continue.dev/walkthrough
```
This walks you through creating your first checks, connecting GitHub, and seeing them run on a PR. See Write Your First Check for the full guide.
You define checks as markdown files in .continue/checks/. Each file has a name, a description, and a prompt that tells the AI what to look for.
```markdown
---
name: Security Review
description: Flag hardcoded secrets and missing input validation
---

Review this pull request for security issues.

Flag as failing if any of these are true:

- Hardcoded API keys, tokens, or passwords in source files
- New API endpoints without input validation
- SQL queries built with string concatenation
- Sensitive data logged to stdout

If none of these issues are found, pass the check.
```
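To make the rules concrete, here is a short illustrative snippet that would fail this check. Every identifier (the Express app, the `pg` client, the key name) is hypothetical:

```typescript
// Illustrative only: each marked line trips one of the rules in the
// Security Review check above. All names are hypothetical.
import express from "express";
import { Client } from "pg";

const app = express();
const db = new Client();

// Hardcoded API key in a source file (rule 1)
const PAYMENTS_API_KEY = "sk_live_51H...";

// A new endpoint with no input validation (rule 2)
app.get("/users/:id", async (req, res) => {
  // SQL built with string concatenation (rule 3)
  const result = await db.query(
    "SELECT * FROM users WHERE id = " + req.params.id
  );
  // Sensitive data logged to stdout (rule 4)
  console.log("user record:", result.rows[0]);
  res.json(result.rows[0]);
});
```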
When a PR is opened, Continue runs each check against the diff and reports the result as a GitHub status check. If a check fails, it suggests a fix you can accept or reject directly from GitHub.
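Continue handles the reporting for you, but for intuition, here is a minimal sketch of the underlying mechanism using GitHub's public commit-status API. The `CheckResult` shape and `reportStatus` helper are assumptions for illustration, not Continue's internals:

```typescript
import { Octokit } from "@octokit/rest";

// Hypothetical result shape; not Continue's actual data model.
interface CheckResult {
  name: string;     // e.g. "Security Review"
  passed: boolean;
  summary: string;  // one-line explanation or suggested fix
}

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Report one check result as a commit status on the PR's head SHA.
async function reportStatus(
  owner: string,
  repo: string,
  sha: string,
  result: CheckResult
): Promise<void> {
  await octokit.rest.repos.createCommitStatus({
    owner,
    repo,
    sha,
    state: result.passed ? "success" : "failure", // green or red in the PR UI
    context: result.name, // the label shown next to the check
    description: result.summary.slice(0, 140), // status descriptions must stay short
  });
}
```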
Comments
tl;dr
- a _lot_ of people still use the VS Code extension and so we're still putting energy toward keeping it polished (this becomes easier with checks : ))
- our checks product is powered by an open-source CLI (we think this is important), which we recommend for JetBrains users
- the general goal is the same: we start by building tools for ourselves, share them with people in a way that avoids creating walled gardens, and aim to amplify developers (https://amplified.dev)
Do you support exporting metrics to something standard like CSV? https://docs.continue.dev/mission-control/metrics
A brief demo would be nice too.
- we do! right now you can export some metrics as images, or share a link publicly to the broader dashboard. will be curious whether others are interested in other formats: https://imgur.com/a/7sgd81r
- working on a Loom video soon!

One of the fundamental differences between checks and code review bots is that you trade breadth for consistency. There are two things Continue should never, ever do:
1. find a surprise bug or offer an unsolicited opinion
2. fail to catch a commit that doesn't meet your specific standards
Some of the design decisions that support this are:
- Having a local experience to run them with Claude Code, etc.
- Making it easy to accept/reject suggested changes
- A single folder dedicated to just checks so you don't have to think about triggers
- Built-in feedback loops so you can tune your checks with feedback
- Metrics so you can easily track which checks have a high suggestion / merge rate (rough sketch below)
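As a rough illustration of that last metric, a check's merge rate is just accepted suggestions over total suggestions. The `CheckEvent` shape here is hypothetical, not Continue's schema:

```typescript
// Rough sketch of the suggestion / merge-rate metric; the event shape is
// hypothetical, not Continue's actual schema.
interface CheckEvent {
  check: string;       // e.g. "Security Review"
  suggested: boolean;  // did the check suggest a fix?
  merged: boolean;     // was the suggested fix accepted and merged?
}

// For each check, the share of its suggestions that were merged.
function mergeRateByCheck(events: CheckEvent[]): Map<string, number> {
  const tally = new Map<string, { suggested: number; merged: number }>();
  for (const e of events) {
    if (!e.suggested) continue;
    const t = tally.get(e.check) ?? { suggested: 0, merged: 0 };
    t.suggested += 1;
    if (e.merged) t.merged += 1;
    tally.set(e.check, t);
  }
  return new Map(
    [...tally].map(([check, t]) => [check, t.merged / t.suggested])
  );
}
```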
Are you using a lot of `gh-aw`?