Code Metrics Overview
Understanding the 7 metrics SpecThis uses to measure code health and complexity
SpecThis scans your codebase and tracks 7 metrics that give you a comprehensive picture of code health. These metrics are extracted on every scan and stored as time-series data so you can track trends over time.
How Metrics Work
Each scan analyzes your source files using tree-sitter parsing. Metrics are computed per function, per file, and per repository, then stored in time-series hypertables for efficient trend queries. You can configure alert thresholds for any metric so you are notified when values exceed your standards.
Click any metric below for a detailed explanation including what it measures, why it matters, problematic values, and example thresholds.
Complexity Metrics
These three metrics measure cognitive complexity — how hard your code is to understand and maintain. They are derived from the same underlying per-function complexity scores but provide different views into the distribution.
complexity_avg
Average Complexity
The mean cognitive complexity score across all functions in the repository. Indicates the general level of complexity across the codebase — a rising average suggests complexity is being added faster than it is refactored away.
complexity_max
Maximum Complexity
The highest complexity score of any single function in the repository. Highlights your most complex function — the one most likely to harbor bugs and be hardest to modify safely.
complexity_p95
95th Percentile Complexity
The complexity score at the 95th percentile — only 5% of functions exceed this value. More useful than max for understanding the "worst typical" complexity, filtering out one-off outliers.
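The three views above can be derived from the same list of per-function scores. Here is a minimal sketch (the function name and the nearest-rank percentile method are illustrative assumptions, not SpecThis internals) showing how a single outlier inflates the max while the p95 stays representative:

```python
import math

def summarize_complexity(scores):
    """Aggregate per-function cognitive complexity scores into the three
    repository-level metrics described above. A sketch: the metric names
    match the docs, but the exact aggregation method is an assumption."""
    s = sorted(scores)
    # Nearest-rank 95th percentile: the smallest value with at least
    # 95% of functions at or below it.
    rank = max(0, math.ceil(0.95 * len(s)) - 1)
    return {
        "complexity_avg": sum(s) / len(s),
        "complexity_max": s[-1],
        "complexity_p95": s[rank],
    }

# 19 ordinary functions plus one outlier with a score of 100
m = summarize_complexity(list(range(1, 20)) + [100])
# complexity_max is dominated by the outlier (100), while
# complexity_p95 (19) reflects the "worst typical" function.
```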
Codebase Size Metrics
These metrics track the size and structure of your codebase. They help you understand growth trends and detect when the codebase is expanding faster than expected.
Observability Metrics
These metrics provide visibility into what the scanner found and any issues it detected.
Configuring Alerts
Setting Thresholds
Configure alert thresholds in your organization settings. Alerts are evaluated on every scan and categorized by severity:
Critical
Metric value significantly exceeds threshold (200%+). Immediate attention recommended.
Warning
Metric value moderately exceeds threshold (120-200%). Should be addressed soon.
Info
Metric value slightly exceeds threshold (100-120%). Worth monitoring.
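The severity bands above map a metric's value-to-threshold ratio onto three tiers. A minimal sketch of that mapping (the function name and the handling of the exact band boundaries are assumptions):

```python
def severity(value, threshold):
    """Classify a metric value against its alert threshold using the
    ratio bands described above. Boundary handling (e.g. exactly 120%)
    is an assumption; the docs give the bands as 100-120%, 120-200%,
    and 200%+."""
    ratio = value / threshold * 100
    if ratio >= 200:
        return "critical"  # significantly exceeds threshold
    if ratio >= 120:
        return "warning"   # moderately exceeds threshold
    if ratio > 100:
        return "info"      # slightly exceeds threshold
    return None            # within threshold: no alert

# Example: a complexity_p95 threshold of 10
severity(25, 10)  # critical (250%)
severity(11, 10)  # info (110%)
```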
Using Metrics Effectively
- Focus on trends rather than absolute values — is complexity going up or down over time?
- Use complexity_p95 over complexity_max for threshold alerts — max is easily skewed by a single outlier function
- Monitor alert_count as a leading indicator — rising alert counts mean your code is drifting from your standards
- Correlate file_count and total_loc growth — if LOC grows faster than file count, files are getting longer on average, which often means functions are too
- Review alerts during code reviews to prevent complexity from accumulating
- Set different thresholds by criticality — security-critical code should have stricter limits than internal utilities
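The file_count/total_loc correlation from the list above reduces to tracking average LOC per file across scans. A small sketch (the snapshot shape and numbers are hypothetical; the metric names match the docs):

```python
# Two successive scan snapshots (hypothetical values)
scans = [
    {"total_loc": 50_000, "file_count": 500},  # earlier scan
    {"total_loc": 60_000, "file_count": 520},  # latest scan
]

# Average LOC per file for each scan
ratios = [s["total_loc"] / s["file_count"] for s in scans]

# If the ratio rises, LOC is growing faster than file count:
# files are getting longer on average.
files_getting_longer = ratios[-1] > ratios[0]
```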
Next Steps
- CLI Setup — Install the scanner and run your first scan
- Click any metric above to learn what it measures and how to set thresholds
- Getting Started — Full setup walkthrough including GitHub integration