
Vibe Coding May Harm Open Source: What the Study Reveals

Hello HaWkers, a recent study is generating intense debate in the developer community. According to researchers, "vibe coding" tools - where developers use AI to generate code quickly with little review - may be creating serious problems for the open-source ecosystem.

Is the pursuit of productivity compromising contribution quality? Let's analyze the data and understand what this means for developers.

What Is Vibe Coding?

Defining the Phenomenon

Vibe coding is a term that emerged in 2025 to describe an increasingly common practice: using AI tools like Copilot, Cursor, or Claude Code to quickly generate code, focusing more on "making it work" than deeply understanding what was generated.

Vibe coding characteristics:

  • Vague prompts to generate code quickly
  • Little or no review of generated code
  • Focus on immediate results over comprehension
  • Excessive dependence on AI suggestions
  • Committing code without adequate testing

Difference from responsible AI use:

Aspect        | Vibe Coding     | Responsible Use
Review        | Minimal or none | Detailed, line by line
Understanding | Superficial     | Deep knowledge of generated code
Testing       | Often ignored   | Always implemented
Documentation | Absent          | Present and updated
Maintenance   | Problematic     | Considered from the start

What the Study Reveals

Concerning Data

The study analyzed contributions to popular open-source repositories over the past 18 months and identified troubling patterns.

Key findings:

  • 340% increase in PRs with identical "boilerplate" code
  • 67% more issues related to code that "works but nobody understands"
  • 45% reduction in documentation contributions
  • 89% increase in bugs introduced by poorly understood code
  • 23% of maintainers report burnout from reviewing low-quality code

Identified patterns:

  1. Code copied without adaptation: AI-generated snippets that don't integrate well with the project
  2. Lack of context: Contributors who don't understand the existing architecture
  3. Superficial tests: Test cases that just "pass" without validating real behavior
  4. Missing documentation: Complex functions without explanation of purpose
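The "superficial tests" pattern is easy to illustrate with a small, hypothetical example (the `slugify` helper and both tests are mine, not taken from the study). The first test passes no matter what the function returns; the second actually pins down the behavior:

```python
def slugify(title: str) -> str:
    """Convert a title into a URL slug (illustrative helper)."""
    return "-".join(title.lower().split())

def test_slugify_superficial():
    # Superficial: proves only that the function ran and returned something.
    assert slugify("Hello World") is not None

def test_slugify_behavior():
    # Meaningful: asserts the actual contract, including edge cases.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra   spaces  ") == "extra-spaces"
    assert slugify("") == ""
```

Note that a broken implementation (say, one that returns the title unchanged) would still pass the superficial test, which is exactly why such tests give maintainers false confidence.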

Impact on Maintainers

The study also revealed the human cost to those who maintain open-source projects.

Additional burden on maintainers:

  • Average PR review time increased by 78%
  • The need to rewrite accepted code rose by 56%
  • Communication with contributors became more difficult
  • Frustration with contribution quality keeps rising

"Before, I received PRs that needed only minor adjustments. Now I receive code whose author clearly doesn't understand what it does. And when I ask, the answer is 'the AI generated it that way'."

— Maintainer of popular GitHub project

Why This Happens

Misaligned Incentives

Several factors contribute to the growth of vibe coding in open-source projects.

Main causes:

  1. GitHub gamification: Commit streaks and green graphs incentivize quantity over quality
  2. Inflated résumés: Open-source contributions treated as a career differentiator, with no evaluation of their quality
  3. Ease of access: Lower barriers to contribute may attract less prepared contributors
  4. Productivity pressure: "Ship fast" culture that ignores consequences

The problematic cycle:

Developer wants to contribute to open source
    ↓
Uses AI to generate code quickly
    ↓
Submits a PR without fully understanding it
    ↓
Maintainer must review, reject, or rewrite the code
    ↓
Maintainer becomes overloaded
    ↓
Project suffers or the maintainer gives up

Skills That Are Atrophying

Vibe coding may be affecting the development of fundamental skills.

Skills at risk:

  • Code reading: Understanding existing code before modifying
  • Debugging: Investigating problems systematically
  • Architecture: Thinking about how pieces connect
  • Communication: Explaining technical decisions clearly
  • Patience: Working on difficult problems without shortcuts

Real Cases and Examples

Documented Problems

Several open-source projects reported specific situations.

Case 1: Utility Library

A contributor submitted an AI-generated email validation function that passed the tests but contained a regex vulnerable to ReDoS (Regular Expression Denial of Service). The code was accepted and remained in the project for 3 months before the flaw was discovered.
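The study doesn't publish the library's actual code, but the vulnerability class is well known: nested quantifiers in a regex cause catastrophic backtracking on crafted non-matching input. A minimal sketch of the problem and a safer alternative (both patterns are illustrative, not the code from the case):

```python
import re

# Hypothetical ReDoS-prone pattern (NOT the code from the case study):
# the nested quantifier ( ...+ )+ forces exponential backtracking on input
# like "a" * 40 + "!" -- the engine retries every way to split the run of a's.
VULNERABLE = re.compile(r"^([a-zA-Z0-9]+)+@example\.com$")

# Safer: no nested quantifiers, bounded repetition, linear-time matching.
SAFE = re.compile(r"^[a-zA-Z0-9._%+-]{1,64}@[a-zA-Z0-9.-]{1,255}\.[a-zA-Z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Validate an address with the backtracking-safe pattern."""
    return SAFE.fullmatch(address) is not None
```

Reviewing "line by line" here means spotting the `(X+)+` shape, something a contributor who only checked that the happy path passed would never notice.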

Case 2: Web Framework

Automated PRs proposed "performance improvements" that actually broke backward compatibility. When questioned, the contributor couldn't explain the changes.

Case 3: CLI Tool

AI-generated documentation describing features that didn't exist in the project, causing confusion among users.

The Maintainers' Perspective

Maintainers of popular projects shared their experiences.

Strategies they're adopting:

  • More rigorous PR templates requiring explanations
  • Bots that detect AI-generated code patterns
  • Requirement for more comprehensive tests
  • Active mentoring for new contributors
  • Faster closing of low-quality PRs
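A "more rigorous PR template" can be as simple as a checklist the contributor must fill in. A hypothetical sketch (not taken from any specific project), placed where GitHub picks it up automatically:

```markdown
<!-- .github/pull_request_template.md -->
## What does this PR do?

## Why is this change needed?

## How was it tested?
- [ ] I ran the full test suite locally
- [ ] I added tests covering edge cases, not just the happy path

## AI assistance disclosure
- [ ] I used an AI tool to help write this code
- [ ] I reviewed and can explain every line I'm submitting
```

The disclosure section doesn't forbid AI use; it simply makes the "I can explain this code" expectation explicit before review starts.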

How to Contribute Responsibly

Best Practices

If you use AI to help with open-source contributions, there are ways to do it responsibly.

Before contributing:

  1. Understand the project: Read documentation, existing issues, and related code
  2. Talk first: Open an issue or discussion before implementing major changes
  3. Follow guidelines: Each project has its style and standards
  4. Set up the environment: Make sure you can run and test locally

When using AI:

  1. Review line by line: Understand each generated snippet
  2. Test beyond basics: Create edge cases and error scenarios
  3. Document your decisions: Explain the "why" beyond the "what"
  4. Be honest: If you used AI, be prepared to explain the code
  5. Iterate: Use AI as a starting point, not the final product

Checklist before submitting PR:

  • I fully understand the code I'm submitting
  • I can explain each decision if asked
  • Tests cover edge cases
  • Documentation is updated
  • I followed the project's style
  • I read and responded to the PR template
  • I'm available for iterations

The Future of Open Source with AI

Necessary Balance

AI can be a powerful tool for open-source contributions, but it requires balance.

Positive uses of AI in open-source:

  • Generate initial tests to expand coverage
  • Translate documentation to other languages
  • Identify bug patterns in existing code
  • Assist with issue triage
  • Accelerate repetitive tasks like formatting

Where AI should complement, not replace:

  • Architecture decisions
  • Critical code review
  • Community communication
  • Understanding project context
  • Long-term maintenance

Recommendations for the Community

The study suggests actions to protect the open-source ecosystem.

For contributors:

  • Develop fundamental skills first
  • Use AI as an assistant, not a substitute
  • Prioritize quality over quantity of contributions
  • Invest time understanding projects before contributing

For maintainers:

  • Establish clear expectations in CONTRIBUTING.md
  • Consider onboarding processes for new contributors
  • Don't be afraid to reject low-quality PRs
  • Create quality metrics beyond quantity

For companies:

  • Don't evaluate candidates only by number of contributions
  • Ask about specific contributions in interviews
  • Support open-source projects financially
  • Encourage quality contributions, not volume

Conclusion

The study on vibe coding and open-source raises important questions about how we use AI in development. The technology itself isn't the problem - the problem is the mindset of seeking shortcuts without understanding the consequences. Quality open-source contributions require time, attention, and most importantly, understanding of what we're doing.

Key points:

  1. Vibe coding is increasing the burden on open-source project maintainers
  2. Low-quality contributions can harm projects and drive away maintainers
  3. AI can be used responsibly with careful review
  4. Quality of contributions matters more than quantity
  5. Developing fundamental skills remains essential

The open-source community was built on genuine collaboration and knowledge sharing. Preserving these values while leveraging new tools is this generation of developers' challenge.

For more on AI's impact on development, read: AI Already Writes 30% of Microsoft and Google Code: What This Means for Devs.

Let's go! 🦅
