LOOM Research

Data-driven insights on AI coding tools, code quality, and developer productivity. Original analysis to help you make better decisions.

LOOM was built by developers who use AI coding tools every day. We noticed a problem: vendor claims often diverge from developer experience, and finding reliable data requires sifting through dozens of studies, surveys, and reports.

So we started tracking the research ourselves. We synthesize findings from Stack Overflow, GitClear, DORA, Veracode, METR, and other sources. We acknowledge limitations. We link to originals. And we share what we learn.

This is not marketing. These are real numbers from industry studies, presented with context about what they do and do not tell us.

FEATURED RESEARCH

Latest Analysis

AI-Generated Code Quality Report 2026

84% adoption, 33% trust. What the data actually says about AI code quality.

An analysis of AI code quality findings from Stack Overflow, METR, DORA, Veracode, and GitClear. Eight data tables covering adoption rates, vulnerability metrics, code churn, and productivity claims.

Read the full report

What We Cover

Four areas where data matters more than marketing.

AI Adoption & Trust

Most developers use AI coding tools. Far fewer trust the output. We track this gap and what drives it: adoption rates across experience levels, satisfaction scores, and how trust correlates with verification practices.

Code Quality Metrics

Vulnerability rates in AI-generated code. Code churn and revert patterns. Technical debt accumulation. The numbers security teams and engineering leaders need to make informed decisions about AI tool policies.

Developer Productivity

What does the research actually show about productivity gains? We examine the claims, the methodologies, and the caveats. Time-to-completion versus long-term maintenance costs. Individual versus team metrics.

Context & Comprehension

AI tools generate better code when they understand the codebase. We analyze how context quality affects output quality, where current tools fall short, and what architectural awareness actually changes.

Our Approach

We cite sources

Every claim links to the original study or survey. We do not paraphrase without attribution or present interpretations as facts.

We acknowledge limitations

Sample sizes, methodology constraints, self-selection bias. If a study has caveats, we note them. No overclaiming.

We update as new data emerges

Research moves fast. When new studies contradict or refine previous findings, we revise our analysis accordingly.

We are practitioners first

LOOM was built with AI tools. We live with these tradeoffs daily. Our perspective comes from experience, not just analysis.

Explore the Research

Start with our comprehensive analysis of AI code quality data from the past year.