Overview

FERAL exposes operational metrics across all major subsystems. Metrics operate in one of two modes:
  1. In-memory counters (always available, zero dependencies)
  2. OpenTelemetry SDK (when opentelemetry-sdk is installed), exporting to any OTLP-compatible backend
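The in-memory mode can be pictured as a labeled counter store. The sketch below is hypothetical (it is not FERAL's actual implementation); it only illustrates the idea of keying counters by metric name plus sorted label pairs:

```python
from collections import defaultdict
from threading import Lock

# Hypothetical in-memory counter store: metrics are keyed by
# (name, sorted label pairs), so the same labels always hit the same slot.
class InMemoryCounters:
    def __init__(self):
        self._lock = Lock()
        self._counts = defaultdict(int)

    def increment(self, name, attributes=None, value=1):
        key = (name, tuple(sorted((attributes or {}).items())))
        with self._lock:
            self._counts[key] += value

    def snapshot(self):
        with self._lock:
            return dict(self._counts)

counters = InMemoryCounters()
counters.increment("feral.llm.calls_total", {"provider": "openai", "model": "gpt-4o"})
counters.increment("feral.llm.calls_total", {"provider": "openai", "model": "gpt-4o"})
print(counters.snapshot())
```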

Enable Prometheus Scraping

# Expose the /metrics endpoint
export FERAL_METRICS_ENDPOINT=1

# Start FERAL
feral serve
Then configure your Prometheus scrape_configs:
scrape_configs:
  - job_name: feral
    static_configs:
      - targets: ["localhost:9090"]
    metrics_path: /metrics
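Once scraping is configured, the endpoint serves the standard Prometheus text exposition format. The payload below is made up for illustration; note that Prometheus exporters conventionally map dotted names like feral.llm.calls_total to underscored ones:

```python
# Illustrative sample of Prometheus text exposition output (values invented).
sample = """\
# TYPE feral_llm_calls_total counter
feral_llm_calls_total{provider="openai",model="gpt-4o"} 128
feral_llm_calls_total{provider="anthropic",model="claude-3"} 54
"""

def parse_exposition(text):
    """Parse simple counter lines into a {series: value} dict."""
    series = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):  # skip comments and TYPE hints
            continue
        name_labels, value = line.rsplit(" ", 1)
        series[name_labels] = float(value)
    return series

print(parse_exposition(sample))
```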

Export to OTLP (Grafana Cloud, Honeycomb, etc.)

Install the OpenTelemetry extras:
pip install "feral-ai[observability]"
Set the OTLP endpoint:
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.example.com:4318

# Optional: also log metrics to console
export FERAL_METRICS_CONSOLE=1
FERAL auto-detects the SDK at startup and begins exporting.
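The auto-detection step can be sketched as follows. This is a hypothetical illustration of the pattern, not FERAL's actual startup code: prefer the OpenTelemetry SDK when it is importable, otherwise fall back to in-memory counters.

```python
import importlib.util
import os

def select_metrics_backend():
    # Probe for the SDK without importing the whole package tree.
    try:
        sdk_present = importlib.util.find_spec("opentelemetry.sdk") is not None
    except ModuleNotFoundError:  # parent package not installed at all
        sdk_present = False
    if sdk_present:
        return "otel", os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
    return "in_memory", None

backend, endpoint = select_metrics_backend()
print(backend)
```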

Metrics Reference

LLM

Metric                   Type       Labels           Description
feral.llm.calls_total    Counter    provider, model  Total LLM API calls
feral.llm.errors_total   Counter    provider, model  Failed LLM API calls
feral.llm.latency_ms     Histogram  provider, model  LLM call latency in ms

Channels

Metric                       Type     Labels              Description
feral.channel.message_total  Counter  channel, direction  Messages sent/received per channel

Proactive Engine

Metric                         Type     Labels   Description
feral.proactive.trigger_total  Counter  trigger  Proactive triggers fired

Skills

Metric                         Type       Labels           Description
feral.skill.invocations_total  Counter    skill, endpoint  Skill endpoint invocations
feral.skill.exec_latency_ms    Histogram  skill, endpoint  Skill execution latency in ms
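Histogram metrics such as feral.skill.exec_latency_ms are typically summarized into percentiles on the backend. The helper below is hypothetical (not part of FERAL) and only shows what such a summary computes from raw observations:

```python
# Hypothetical summary of raw latency observations behind a histogram metric.
def summarize(observations):
    ordered = sorted(observations)
    def pct(p):
        # Nearest-rank style percentile, clamped to the last element.
        return ordered[min(len(ordered) - 1, int(p * len(ordered)))]
    return {"count": len(ordered), "p50": pct(0.50), "p95": pct(0.95)}

latencies_ms = [120, 80, 95, 110, 300, 105, 90, 100, 85, 1250]
print(summarize(latencies_ms))  # {'count': 10, 'p50': 105, 'p95': 1250}
```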

Programmatic Access

from observability.metrics import increment, observe, measure, in_memory_snapshot

# Counter
increment("my_custom.counter", attributes={"env": "prod"})

# Histogram
observe("my_custom.latency_ms", 42.5)

# Context manager (auto-records duration)
with measure("my_custom.operation"):
    do_work()

# Debug snapshot
print(in_memory_snapshot())
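The measure() helper above records a duration on exit. A minimal self-contained sketch of how such a context manager can work (a hypothetical stand-in for observability.metrics.measure, with observe replaced by a local sink):

```python
import time
from contextlib import contextmanager

recorded = []  # stand-in histogram sink for this sketch

def observe(name, value_ms):
    recorded.append((name, value_ms))

@contextmanager
def measure(name):
    # Record elapsed wall-clock milliseconds on exit, even if the body raises.
    start = time.perf_counter()
    try:
        yield
    finally:
        observe(name, (time.perf_counter() - start) * 1000.0)

with measure("my_custom.operation"):
    time.sleep(0.01)

print(recorded[0][0])  # my_custom.operation
```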