The performance layer for modern engineering

Navigate your
AI transformation.

Run AI-native engineering on data. Built by CTOs who got tired of guessing.

AI Transformation
Benchmarks
Reporting
Targets
Slack
Workspace / Insights / AI Transformation

The Findings

Performance grew 215% YoY while headcount grew only 35% (+42 devs). Matching today's performance at Q1 2025 productivity would have required 218 additional developers.
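The headline figures fall out of simple arithmetic on the stat cards in this report. A minimal sketch (variable names are illustrative):

```python
# Sanity-check the headline figures (values from this report's stat cards).
baseline_headcount = 121       # Q1'25 engineers
baseline_total_etv = 1_658     # Q1'25 total performance (ETV)
current_headcount = 163        # Q1'26 engineers
current_total_etv = 5_220      # Q1'26 total performance (ETV)

per_dev_baseline = baseline_total_etv / baseline_headcount        # ~13.7 ETV/dev
headcount_growth = current_headcount / baseline_headcount - 1     # ~0.35 -> +35%
performance_growth = current_total_etv / baseline_total_etv - 1   # ~2.15 -> +215%

# Developers needed to hit today's total at Q1'25 per-dev performance:
devs_at_baseline_rate = round(current_total_etv / per_dev_baseline)  # 381
virtual_developers = devs_at_baseline_rate - current_headcount       # 218

print(virtual_developers)  # -> 218
```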

Last updated 12 min ago
Overview
Benchmarks
Adoption
Quality
Cost
Engineering headcount
+35%
121 → 163 year over year.
Avg. developer performance
+128%
31.2 ETV, up from 13.7 ETV in Q1'25
Total performance
+215%
Total Performance grew from 1,658 ETV to 5,220 ETV YoY
Virtual developers
+218
Today's ETV would require 381 devs at Q1 2025 performance.
Performance vs Headcount
Headcount (left axis, dashed) vs Total Performance (right axis, solid). Labels show performance (ETV) change vs Q1'25 baseline.
[Chart · Q1'25 → Q1'26 · ETV change vs baseline: −17%, +37%, +121%, +215% · legend: Headcount, Total Performance]
Tips & tricks for your org
3 actions
01
Roll Cursor out to the iOS team
Web and Platform teams using Cursor lifted velocity 2.3× vs. peers. iOS hasn't onboarded yet.
ai-tooling
02
Shorten review SLA on the Data team
Median review wait is 9.2h — 3× the org. Cycle time is suffering despite high commit volume.
process
03
Investigate Pay team revert spike
Revert rate up 1.8pt in the last 2 weeks. Likely tied to the new Stripe webhook refactor.
quality
02 Research · research.navigara.com

Built on top of research.

Navigara's measurement layer is grounded in the Open-Source Engineering Performance Report (OSS-EPR), a study of public repositories across six AI-forward organizations. The same model that benchmarks Cloudflare, Meta, OpenAI, and Microsoft is pointed at your team on day one.

EDGE
Cloudflare
FRAMEWORKS
Vercel
PLATFORM
Google
RUNTIME
Meta
FRONTIER
OpenAI
DEVTOOLS
Microsoft
Navigara model
Performance
See how engineering performance changes month over month — and the signals that explain why.
Work mix
Growth, Maintenance, and Fixes shares — so leaders see where output is being booked, not just how much.
Benchmark
Your team's per-quarter trajectory placed alongside the six reference organizations.
Performance change · Q1'25 → Q1'26
+116%
Mean ETV / Developer · CI [+84, +148]
Growth share · Q1'25 → Q1'26
30% → 36%
New-value output · +6pp shift
Fixes share · Q1'25 → Q1'26
15% → 18%
Repair output · +3pp shift
Trajectory range
+51% → +373%
Meta · slowest  →  OpenAI · fastest
Read the white paper
See how it benchmarks your team →
OSS-EPR · v1.0 · 2026-04-30
03 Security & Deployment

Your data, your environment, your rules.

Navigara runs in your cloud or ours. We read commit metadata and review activity — never source code, never credentials. Deployments are typed, audited, and live within a day.

01
Private cloud or self-hosted
Deploy into your AWS, GCP, or Azure account. Zero data leaves your perimeter. We ship a Terraform module and a one-click Helm chart.
02
Metadata only — never source
We index commit shape, review patterns, and CI signals. We do not pull source code, secrets, or .env files. Connector scopes are read-only and per-repo.
03
Live within a day
Connect GitHub, GitLab, Linear, and Slack with OAuth. First insights in under an hour. Full backfill of 12 months of history within 24h.
04
Granular access control
SSO via SAML/OIDC. Role-based scopes for execs, managers, and ICs. Per-team data masking for regulated orgs.
Compliance
verified
SOC 2 Type II
Annual audit
Last report: Mar 2026 · Available under NDA
GDPR
EU-resident data plane
DPA on request · Sub-processor list public
SSO / SAML
Okta, Entra, Google
SCIM provisioning · Role-based access
ISO 27001
In progress · Q3 2026
Stage 1 audit complete · Cert pending
# navigara connector scope
read:metadata
read:reviews 
read:source  never
read:secrets never
→ residency: us-east-1 / eu-west-1
04 Capabilities

Five things you can do on Monday morning.

F.01 Benchmarking

Prove AI is working.

The easiest way to prove your AI tooling is delivering: benchmark your teams against themselves a year ago. Real velocity, real delta, no debates.

  • Like-for-like comparisons, controlled for headcount and surface area
  • Drill down by team, repo, or individual contributor
  • Share-ready board view with confidence intervals
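One way a self-vs-self velocity index like this can be computed — a minimal sketch, not Navigara's actual model; the function and variable names are illustrative:

```python
def velocity_index(current_total_etv: float, current_devs: int,
                   baseline_total_etv: float, baseline_devs: int) -> float:
    """Per-developer output now, relative to the same org's own baseline.

    Dividing by headcount controls for team growth, so the index
    reflects productivity change rather than hiring.
    """
    current_per_dev = current_total_etv / current_devs
    baseline_per_dev = baseline_total_etv / baseline_devs
    return current_per_dev / baseline_per_dev

# Example with made-up numbers: the same 10 devs produce double the output.
print(velocity_index(200.0, 10, 100.0, 10))  # -> 2.0
```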
Self-vs-self · Velocity index
rolling 90d
[Chart · rolling 90-day velocity index, Apr '25 → Apr '26 · 2025 baseline = 1.0× · currently 1.92×]
F.02 Industry pulse

See what other companies are doing.

Real-time view into what's happening across engineering — new tools, new processes, new patterns. Implement what's working, faster.

  • Anonymized adoption curves from 200+ engineering orgs
  • Filter by company stage, headcount, and stack
  • Weekly digest on what your peers shipped this week
Industry pulse · This week
live
Cursor 0.9
Adoption +14pt among Series B+ orgs
42 of 84 sampled · last 7 days
+14pt
Linear M
Stacked PR workflow rolled out at 12 companies
Median cycle time dropped 31%
−31%
Claude C.
Auto-review now active in 38% of repos
Up from 9% three months ago
+29pt
Trunk-only
Single-branch teams ship 1.6× more
Controlled for team size
1.6×
214 orgs · anonymized · refreshed hourly
F.03 Reporting

Unbiased reporting your board will read.

Weekly, monthly, and quarterly reports on what your teams are actually doing — adoption rates, tool usage, output. Numbers you can take to the board.

  • One-click board pack — PDF, Notion, or Google Slides
  • Audit trail with raw signals behind every metric
  • No vibes, no proxies, no team self-reports
Q1-2026-Board-Pack.pdf
Section 03 · Engineering output
Velocity is up 47%, quality held flat, and AI adoption crossed the 60% threshold this quarter.
VELOCITY
+47%
REVERT RATE
2.1%
AI ADOPTION
62%
14,206 PRs · 312 engineers · 86 projects · CI: GH Actions
F.04 Targets

Performance-based targets that move with the curve.

The benchmark moves every quarter. Engineers everywhere are getting faster. Set targets that keep your team on the curve, not behind it.

  • Auto-recalibrating targets indexed against your peer cohort
  • Per-team and per-individual goal threading
  • Slack alerts when a team drifts off-curve
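A minimal sketch of what "auto-recalibrating" can mean here, assuming the target simply tracks the peer cohort's median each quarter — an assumption for illustration, not the product's actual model:

```python
import statistics

def recalibrated_target(cohort_values: list[float]) -> float:
    """Quarterly target = the peer cohort's current median.

    As the cohort speeds up, the target moves with it, so a team
    that stands still drifts off-curve instead of coasting.
    """
    return statistics.median(cohort_values)

# Hypothetical cohort velocities this quarter:
print(recalibrated_target([6.1, 7.8, 8.4, 9.0, 11.2]))  # -> 8.4
```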
Targets · Q2 2026
cohort: B+ saas
Velocity
8.4 / 10.0 · on-track
AI adoption
62 / 75% · behind
Cycle time
14 / 12h · on-track
Revert rate
2.1 / <1.5% · behind
│ marker = peer cohort median · recalibrates Apr 30
F.05 Slack

Ask in Slack. Get an answer.

Native Slack interface. Ask anything about your engineering org and get an answer. No dashboards to open, no filters to set.

  • Natural language across every metric, team, and timeframe
  • Citations link back to raw activity for verification
  • Schedule recurring questions as Slack digests
#eng-leadership · 42 members
AS
Aarav Singh · 10:42 AM
@navigara which team's velocity has improved most this quarter, and what changed?
N
Navigara APP · 10:42 AM
Platform team — velocity up +71% QoQ (4.2 → 7.2 PRs/eng/wk).
Three changes correlated:
→ Cursor rolled out Mar 12 (+38% adoption)
→ Stacked PRs enabled Mar 24
→ Review SLA cut from 24h → 8h
cite: 1,402 PRs · n=18 engineers · 95% CI
Message #eng-leadership/navigara
Ready when you are

Stop guessing.
Start measuring.

Connect your stack in 20 minutes. See your first delta in an hour. Live across your org within a day. Built by CTOs, for CTOs.

Start for free · Get started
20 min Connect your stack with OAuth — GitHub, GitLab, Linear, Slack.
1 hour First insights live. Velocity, adoption, cycle time, revert rate.
1 day Full 12-month backfill. Org-wide rollout. Board-ready report.