Observability Metrics

symbol_count and alert_count — Visibility into code structure and quality thresholds

Symbol Count

symbol_count is the number of symbols (functions, methods, and classes) discovered by the tree-sitter parser during a scan. Each symbol represents a discrete unit of logic in your codebase that receives its own complexity score.

Decomposition Indicator

A high symbol count relative to total LOC indicates that your code is well-decomposed into small, focused units. Each function does one thing and has low complexity. This is the pattern you want to see.

Monolithic Code Signal

A low symbol count relative to total LOC means you have fewer but larger functions. These monolithic functions tend to score high on complexity and are harder to test, review, and maintain.

How Symbol Count Is Calculated

The scanner uses tree-sitter to parse each source file and extract every function declaration, method definition, arrow function, and class body. Each extracted unit counts as one symbol.

// This file contributes 4 symbols:

class UserService {              // symbol 1: class
  constructor(private db: Database) {} // symbol 2: method

  async getUser(id: string) {    // symbol 3: method
    return this.db.query(id);
  }
}

export function validateId(      // symbol 4: function
  id: string
): boolean {
  return id.length === 36;
}

Interpreting Symbol Count

LOC Per Symbol Ratio

avg LOC per symbol = total_loc / symbol_count

This derived ratio tells you the average function size. Lower is generally better.

Avg LOC/Symbol | Assessment                                         | Action
≤ 10           | Excellent decomposition — small, focused functions | Maintain current practices
11 - 25        | Typical — functions are reasonably sized           | Monitor for upward trends
26 - 50        | Large — functions are doing too much               | Extract sub-functions to reduce size
> 50           | Very large — likely monolithic code                | Prioritize decomposition
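The ratio and its bands can be sketched in TypeScript. The function name and band labels here are illustrative (they mirror the table above), not part of any scanner API:

```typescript
// Classify average function size using the bands from the table above.
// totalLoc and symbolCount would come from a scan's reported metrics.
function locPerSymbolAssessment(totalLoc: number, symbolCount: number): string {
  if (symbolCount === 0) return "no symbols found";
  const avg = totalLoc / symbolCount;
  if (avg <= 10) return "excellent decomposition";
  if (avg <= 25) return "typical";
  if (avg <= 50) return "large";
  return "very large";
}

// 1200 LOC across 150 symbols = 8 LOC/symbol
console.log(locPerSymbolAssessment(1200, 150)); // "excellent decomposition"
```

The same repository size spread across fewer symbols moves you down the table: 1200 LOC across 20 symbols is 60 LOC/symbol, landing in the "very large" band.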

Alert Count

alert_count is the number of active code alerts across your repository, grouped by severity. Alerts fire when any metric exceeds the threshold you have configured in your organization settings. This metric is the single most direct indicator of whether your code meets your team's quality standards.

Critical Alerts

Metric value is at 200% of the threshold or more. These represent serious quality issues that should be addressed immediately — a function with double the allowed complexity, or a repository that has far exceeded its growth budget.

Warning Alerts

Metric value is between 120% and 200% of the threshold. These are issues trending in the wrong direction. Address them before they become critical — they indicate code that is getting harder to maintain.

Info Alerts

Metric value is between 100% and 120% of the threshold. These are early signals worth monitoring. A function has just crossed the line but is not yet significantly over.

How Alerts Work

  1. Configure thresholds in your organization settings for any metric (e.g., complexity_avg > 10, complexity_p95 > 25)
  2. Run a scan via the CLI or GitHub Actions
  3. Alerts are evaluated automatically after each scan completes
  4. Severity is assigned based on how far the value exceeds the threshold
  5. Results appear on your dashboard with the metric name, current value, threshold, file path, and severity level

Example

Threshold configured: complexity_p95 > 20

Scan results:
  complexity_p95 = 18  → No alert (below threshold)
  complexity_p95 = 22  → Info alert (110% of threshold)
  complexity_p95 = 30  → Warning alert (150% of threshold)
  complexity_p95 = 45  → Critical alert (225% of threshold)
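The evaluation rule in this example can be sketched as a small TypeScript function. The cutoffs follow the severity bands described above; the function name is illustrative, not the product's API:

```typescript
type Severity = "none" | "info" | "warning" | "critical";

// Assign severity from how large a metric value is relative to its
// threshold, expressed as a percentage of the threshold.
function alertSeverity(value: number, threshold: number): Severity {
  const pct = (value / threshold) * 100;
  if (pct <= 100) return "none";    // at or below threshold: no alert
  if (pct < 120) return "info";     // just over the line
  if (pct < 200) return "warning";  // trending the wrong way
  return "critical";                // double the threshold or more
}

// Reproduces the complexity_p95 > 20 example:
console.log(alertSeverity(18, 20)); // "none"     (90%)
console.log(alertSeverity(22, 20)); // "info"     (110%)
console.log(alertSeverity(30, 20)); // "warning"  (150%)
console.log(alertSeverity(45, 20)); // "critical" (225%)
```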

Why Alert Count Matters

Leading Indicator

A rising alert count is the earliest signal that code quality is degrading. Each new alert means another function or file has crossed your team's threshold. Catching this trend early lets you address problems before they compound.

Team Accountability

When alert count is visible on the dashboard, it creates shared awareness. Teams that track alert count tend to address issues during code reviews rather than letting them accumulate.

Refactoring Progress

When you run a refactoring initiative, alert count gives you a concrete measure of progress. If you start with 30 alerts and reach 10, you can quantify the improvement.

Interpreting Alert Count

Alert Count | Assessment                               | Action
0           | All metrics within thresholds            | Maintain standards; consider tightening thresholds
1 - 5       | A few functions need attention           | Address during regular code reviews
6 - 20      | Multiple areas of the codebase need work | Dedicate time in upcoming sprints to reduce
> 20        | Widespread quality issues                | Plan a focused refactoring initiative

Best Practices

  • Start with conservative thresholds — set them slightly above your current values and tighten over time as you improve
  • Use complexity_p95 over complexity_max for alerts — max is easily skewed by a single outlier function that nobody plans to refactor
  • Review alerts in code reviews — make it a practice to check for new alerts before approving PRs
  • Track alert_count on your dashboard — a downward trend means your codebase is getting healthier
  • Combine symbol_count with complexity metrics — many small symbols with low complexity is the ideal pattern
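The p95-over-max advice above can be demonstrated with a quick sketch. This uses the nearest-rank percentile method; the scanner's exact method may differ:

```typescript
// Nearest-rank p95: the value at position ceil(0.95 * n) in sorted order.
function p95(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length);
  return sorted[rank - 1];
}

// Ninety-nine functions with complexity 8, plus a single outlier at 90.
const complexities = [...Array(99).fill(8), 90];
console.log(Math.max(...complexities)); // 90: max is dominated by the outlier
console.log(p95(complexities));         // 8:  p95 reflects the typical function
```

An alert on complexity_max here would fire forever because of one function, while an alert on complexity_p95 tracks the codebase as a whole.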

Related Metrics