State of Agent Adoption (quarterly)
Introducing Dailybot's quarterly survey on how engineering teams adopt and integrate coding agents. Methodology, key metrics, and early findings on the agentic transition.
The agentic transition in software engineering is happening fast, but it is happening unevenly. Some teams have integrated coding agents into their daily workflows and cannot imagine working without them. Others are still experimenting with occasional use cases. Most fall somewhere in between, unsure whether their pace of adoption is ahead of the curve, behind it, or roughly normal.
Without data, every team navigates this transition in isolation. That is why Dailybot is launching the State of Agent Adoption — a quarterly survey and report that tracks how engineering teams are adopting, integrating, and governing coding agents across the industry.
Why this report exists
The conversation around coding agents is dominated by vendor announcements and anecdotal success stories. What is missing is systematic data on how real teams — not just early adopters and evangelists — are actually using agents in practice.
Leaders need answers to practical questions. What percentage of engineering teams are using agents daily? Which workflows are agents most commonly applied to? What are the most reported challenges? How are teams handling oversight and review? What does the adoption curve look like by company size, industry, and team maturity?
These are not questions that a single company’s experience can answer. They require aggregated, anonymized data from across the industry. That is what this report provides.
What we measure
The State of Agent Adoption survey covers six dimensions of the agentic transition.
Adoption rates
How many teams use coding agents, how frequently, and in what capacity? We track adoption by role (IC developer, team lead, manager, executive), by company size (startup, mid-market, enterprise), and by use case (greenfield development, maintenance, refactoring, testing, documentation).
Workflow integration
Where in the development workflow do agents participate? We map the distribution across coding, code review, testing, deployment, documentation, and incident response. This reveals which parts of the software lifecycle are being transformed fastest and which remain primarily human-driven.
Satisfaction and productivity
How do practitioners feel about agents? We measure perceived productivity impact, satisfaction with agent output quality, confidence in agent-generated code, and overall sentiment about the trajectory. These subjective measures complement the objective adoption data.
Visibility and oversight
How do teams track agent output? We assess whether teams have unified visibility across human and agent work, what review practices they apply to agent output, and how they handle agent-generated errors. This dimension is critical because agent adoption without oversight creates risk.
Challenges and blockers
What prevents teams from adopting agents or using them more effectively? We categorize blockers into technical (context window limitations, hallucination, tooling gaps), organizational (resistance to change, unclear policies, security concerns), and practical (review bottlenecks, quality inconsistency, training needs).
Human-agent coordination
How do teams coordinate between human contributors and agents? We examine standup practices, task assignment, communication channels, and whether teams have developed explicit processes for human-agent collaboration or are improvising.
Methodology
The survey targets engineering leaders, managers, and individual contributors at software companies of all sizes. Responses are collected quarterly, anonymized, and analyzed in aggregate. We use stratified sampling to ensure representation across company sizes, industries, and geographies.
Key methodological principles:
- Anonymity: Individual and company responses are never identified in the report
- Consistency: Core questions remain stable across quarters to enable trend analysis
- Depth: In addition to multiple-choice questions, we include open-ended responses that surface insights quantitative data alone would miss
- Transparency: Methodology, sample size, and limitations are published alongside every report
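The stratified approach described above can be sketched in a few lines of code. The schema, field names, and population weights below are illustrative assumptions for the sketch, not Dailybot's actual survey format or weighting scheme:

```python
from collections import defaultdict

# Hypothetical responses; field names are illustrative only.
responses = [
    {"company_size": "startup",    "uses_agents_daily": True},
    {"company_size": "startup",    "uses_agents_daily": False},
    {"company_size": "mid-market", "uses_agents_daily": True},
    {"company_size": "mid-market", "uses_agents_daily": True},
    {"company_size": "enterprise", "uses_agents_daily": False},
    {"company_size": "enterprise", "uses_agents_daily": True},
]

# Assumed market composition, used to weight each stratum so the
# aggregate reflects the industry rather than the raw respondent mix.
population_weights = {"startup": 0.5, "mid-market": 0.3, "enterprise": 0.2}

def stratified_adoption_rate(responses, weights):
    """Weighted daily-adoption rate across company-size strata."""
    by_stratum = defaultdict(list)
    for r in responses:
        by_stratum[r["company_size"]].append(r["uses_agents_daily"])
    rate = 0.0
    for stratum, weight in weights.items():
        answers = by_stratum.get(stratum)
        if answers:  # skip empty strata rather than divide by zero
            rate += weight * (sum(answers) / len(answers))
    return rate

print(round(stratified_adoption_rate(responses, population_weights), 3))
```

Weighting per stratum rather than averaging all responses together is what keeps an over-represented segment (say, startups eager to answer agent surveys) from skewing the industry-wide number.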
Early findings
While the formal first edition is in development, early signals from our community suggest several emerging patterns.
Adoption is faster than expected. A majority of engineering teams at technology companies report some level of agent usage, even if informal. The gap between “experimenting” and “integrated” is closing faster than it did in comparable adoption curves for previous developer tools.
Visibility is the top concern. Among teams using agents regularly, the most commonly cited challenge is not agent quality or reliability — it is visibility. Leaders report difficulty knowing what agents produced, tracking agent output alongside human work, and maintaining a coherent picture of team velocity.
Review practices are lagging behind adoption. Teams that adopted agents quickly often did not update their code review practices simultaneously. The result is agent-generated code merging with less scrutiny than human-generated code, which creates quality and security risk.
Coordination patterns are emerging organically. Rather than following prescribed frameworks, most teams are developing their own coordination patterns through trial and error. There is an opportunity to accelerate this learning by sharing patterns across teams.
How to participate
The State of Agent Adoption survey is open to engineering teams of all sizes and industries. Participation takes approximately ten minutes, and respondents receive early access to the full report with industry benchmarks and anonymized comparisons.
By participating, your team contributes to the industry’s shared understanding of the agentic transition — and gains data to inform your own strategy.
Why benchmarking matters
Every technology transition creates leaders and laggards. The difference is often not talent or resources — it is information. Teams that know where they stand relative to the industry can make better decisions about investment, training, and process changes.
The State of Agent Adoption report gives leaders that information. Not vendor marketing. Not conference hype. Systematic data on how the industry is actually navigating the biggest shift in software development since the cloud.
Dailybot publishes this report quarterly because the pace of change demands regular measurement. What is true this quarter may not be true next quarter. The teams that stay informed will be the ones that stay ahead.
FAQ
- What does the State of Agent Adoption report measure?
- The report measures adoption rates by role and company size, common agent workflows, satisfaction and productivity perceptions, visibility and oversight practices, challenges and blockers, and how teams structure human-agent coordination. It surveys engineering leaders, managers, and individual contributors quarterly.
- Why does benchmarking agent adoption matter?
- Without benchmarks, teams cannot tell whether their adoption pace is typical, lagging, or leading. Benchmarking helps leaders make investment decisions, identify gaps in their adoption strategy, and learn from patterns across the industry — rather than navigating the transition in isolation.
- How can teams participate in the survey?
- Teams can participate by signing up through Dailybot's website. The survey takes approximately ten minutes, covers adoption patterns and challenges, and participants receive early access to the full report with industry benchmarks and anonymized comparisons.