I scanned 50 real projects with cursor-doctor. Most Cursor rules are broken.
How healthy are Cursor rules in the wild? Not in tutorials or curated examples, but in real GitHub repos where people are actually shipping code. I cloned 50 public repos that use Cursor rules and ran npx cursor-doctor scan on every one.
The results: 60% of projects scored a C
A: 3 ███ (6%)
B: 15 ███████████████ (30%)
C: 30 ██████████████████████████████ (60%)
D: 2 ██ (4%)
Average health score: 67%. Only 3 out of 50 projects had healthy Cursor rules. The median project scored 69%, which is a C. That means most Cursor setups have enough issues to noticeably affect how well the AI follows instructions.
I dug deeper into 15 of those projects and categorized every issue cursor-doctor flagged. 998 total issues. Here are the five problems that kept showing up.
1. Rules that eat your context window (146 issues)
This was the single most common problem. Nearly 15% of all issues were rules that are too long.
Rules with 2,000+ characters each. Some over 5,000. Each one burns 500 to 1,250 tokens of your context window before Cursor even reads your code.
One project had 12 rules averaging 4,000 characters each. That is roughly 12,000 tokens of instructions alone, leaving less room for the code Cursor needs to work with.
The fix: Split long rules into focused ones. Remove the parts that repeat what Cursor already knows. Cut examples down to one or two instead of eight.
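If you want a rough sense of how big your own rules are before running a full scan, a minimal sketch (not part of cursor-doctor; the .cursor/rules path, the 1,000-character budget, and the 4-characters-per-token estimate are all assumptions):

```python
import os

CHARS_PER_TOKEN = 4  # rough average for English prose (assumption)
BUDGET = 1000        # character budget per rule file (assumption)

def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 4 characters per token."""
    return len(text) // CHARS_PER_TOKEN

def scan_rules(rules_dir: str = ".cursor/rules"):
    """Return (path, chars, approx tokens) for rule files over the budget."""
    findings = []
    for root, _dirs, files in os.walk(rules_dir):
        for name in files:
            if not name.endswith(".mdc"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            if len(text) > BUDGET:
                findings.append((path, len(text), estimate_tokens(text)))
    return findings

if __name__ == "__main__":
    for path, chars, tokens in scan_rules():
        print(f"{path}: {chars} chars (~{tokens} tokens)")
```

By that estimate, a 4,000-character rule costs roughly 1,000 tokens, which is where the numbers above come from.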
2. Rules that target nothing (36 issues)
Empty globs arrays. The rule exists, has a description, has instructions, but the globs field is []. Cursor doesn't know which files the rule applies to.
---
description: React component standards
globs: []
---
The fix: Add the right glob pattern (**/*.tsx) or set alwaysApply: true if the rule should apply everywhere.
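For the rule above, the fixed frontmatter might look like this (the glob is an example for React component files, matching the string style used elsewhere in this post):

```yaml
---
description: React component standards
globs: "**/*.tsx"
---
```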
3. Vague instructions Cursor ignores (27 issues)
"Try to keep functions small." "Consider using TypeScript." "Maybe add error handling."
Cursor doesn't try, consider, or maybe. It either follows the instruction or it doesn't. Weak language gives the model permission to ignore you.
Compare: "Try to use TypeScript" vs "All new files must use TypeScript. No .js files except config files in the project root."
4. Dead code burning tokens (32 issues)
Commented-out sections, TODO markers, notes-to-self. These all consume tokens and confuse the model about what is actually a rule and what is a draft.
<!-- TODO: add React rules later -->
<!-- OLD: use class components -->
If it is commented out, delete it. Rules are not source code. There is no reason to keep old versions around.
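A quick way to hunt these down is to grep for HTML comments across your rule files. A minimal sketch (the .cursor/rules path is an assumption):

```python
import re
from pathlib import Path

# Matches HTML comments, including ones that span multiple lines
COMMENT_RE = re.compile(r"<!--.*?-->", re.DOTALL)

def find_dead_comments(text: str) -> list:
    """Return every HTML comment found in a rule file's text."""
    return COMMENT_RE.findall(text)

if __name__ == "__main__":
    for path in Path(".cursor/rules").rglob("*.mdc"):
        for comment in find_dead_comments(path.read_text(encoding="utf-8")):
            print(f"{path}: {comment.strip()}")
```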
5. Missing or broken frontmatter (48 issues)
Of the 48 issues in this category, 28 were rules with no description and 20 were rules with no YAML frontmatter at all. Without frontmatter, Cursor cannot properly categorize or prioritize the rule. Without a description, you have no idea what the rule does when you revisit your setup six months later.
A properly formatted rule file needs at minimum:
---
description: Enforces consistent error handling in API routes
globs: "src/api/**/*.ts"
---
All API route handlers must wrap their body in a try/catch block.
Return a 500 status with a generic error message. Log the full error to the server console.
What the healthy projects did differently
The 3 projects that scored an A had a few things in common:
- Short, focused rules. One concern per file. Under 1,000 characters each.
- Correct frontmatter. Every rule had a description, appropriate globs, and explicit alwaysApply settings.
- Concrete instructions. No "try to" or "consider." Direct statements with examples.
- No dead weight. No commented code, no TODOs, no redundant instructions.
Check your own Cursor rules
npx cursor-doctor scan
Takes about 2 seconds. Gives you a health grade from A through F and tells you exactly what to fix. The scan and lint commands are free, no install needed.
For a detailed breakdown of every issue, file by file:
npx cursor-doctor lint
Also available as a VS Code and Cursor extension with inline diagnostics.