
AI Tools Slowed Down Experienced Developers

The integration of advanced AI tools into software development workflows has been heralded as a major productivity boost, yet recent empirical research suggests the reality is more complex. A randomized controlled trial conducted in early 2025 examined the real-world impact of state-of-the-art AI tools on experienced open-source developers. Over the course of the study, 16 seasoned contributors, each averaging five years of prior experience on the mature codebases involved, completed 246 tasks, some with access to AI tools and others without.

The AI tools in question included popular platforms such as Cursor Pro and Claude 3.5/3.7 Sonnet, considered cutting-edge as of early 2025. Before beginning the tasks, developers confidently predicted a significant reduction in completion time (around 24%) when using AI, and domain experts in economics and machine learning were even more optimistic, forecasting reductions of 39% and 38% respectively. The measured outcome told a starkly different story: developers actually took 19% longer on tasks when AI tools were allowed. Strikingly, even after completing the study, developers still estimated that AI had reduced their completion time by 20%, a perception directly at odds with the measured slowdown.

This paradoxical outcome challenges prevailing assumptions about AI’s immediate benefits in skilled programming contexts. The study delved deeper into 20 plausible contributing factors, ranging from project complexity and tooling familiarity to quality standards and human-AI collaboration dynamics. While experimental limitations cannot be fully discounted, the persistence of the slowdown effect across different conditions suggests that the issue lies more with the nuanced interaction between developers and AI tools than with the experimental design itself.

These findings underline the importance of cautious optimism when integrating AI into high-skill domains like software engineering. Rather than being a blanket solution for productivity, AI may introduce new cognitive and coordination costs that offset or even outweigh its intended benefits, at least until tools and workflows evolve to better support expert users.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early 2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect--for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.
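The design described above, per-task random assignment with an effect measured on completion times, can be illustrated with a short simulation. This is a hypothetical sketch, not the study's data or analysis code: the task times, group split, and the ratio-of-geometric-means estimator are all assumptions chosen to mirror the headline 19% slowdown.

```python
import math
import random
import statistics

random.seed(0)

# Simulate 246 tasks, matching the study's task count. Each task is
# randomly assigned to allow or disallow AI tooling, and we bake a 19%
# slowdown into AI-allowed tasks to mirror the reported result.
tasks = []
for _ in range(246):
    base = random.lognormvariate(4.0, 0.6)   # hypothetical minutes without AI
    ai_allowed = random.random() < 0.5       # per-task random assignment
    minutes = base * 1.19 if ai_allowed else base
    tasks.append((ai_allowed, minutes))

# Estimate the multiplicative effect as a ratio of geometric means,
# a common choice for right-skewed duration data.
log_ai = [math.log(t) for ai, t in tasks if ai]
log_no = [math.log(t) for ai, t in tasks if not ai]
effect = math.exp(statistics.mean(log_ai) - statistics.mean(log_no)) - 1.0

print(f"Estimated change in completion time when AI is allowed: {effect:+.0%}")
```

With only 246 simulated tasks the estimate is noisy, so a single run recovers roughly, not exactly, the +19% built into the data; this noise is one reason per-task randomization and a reasonably large task count matter in such a trial.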

arxiv.org