BrokenApp
$2,100 in prizes

Can AI fix every bug we find?

We scan your app. Your AI agent fixes it. We re-scan to verify. This isn't a hackathon. It's proof that AI agents with runtime data ship better code. 1,000 spots. 30 days. $2,100 in prizes.

1,000 spots · 30 days · Free to enter

How it works

Six steps. Thirty days.

1. Submit your web app

Enter your web app's URL. BrokenApp scans it automatically and builds the runtime spec.

2. We scan it

Full runtime scan — every route, form, endpoint. Your findings and app spec land in findings.json and spec.json.
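
To give you a feel for the shape of that output, here's a sketch in TypeScript. The schema and field names are illustrative assumptions, not the exact findings.json format:

    // Illustrative sketch only: the real findings.json schema may differ.
    interface Finding {
      id: string;        // stable identifier, e.g. "F-042"
      route: string;     // where the scanner hit it, e.g. "/checkout"
      severity: "low" | "medium" | "high" | "critical";
      title: string;     // one-line description of the bug
      evidence: string;  // runtime detail captured during the scan
    }

    // findings.json is then, in effect, an array of such entries:
    const findings: Finding[] = [{
      id: "F-042",
      route: "/checkout",
      severity: "high",
      title: "Form accepts POST without CSRF token",
      evidence: "POST /checkout succeeded with csrf_token omitted",
    }];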

3. AI gets context via MCP

Connect your AI agent to BrokenApp's MCP server. Your agent reads the runtime spec and security findings directly.
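
Connecting looks roughly like this with the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The endpoint URL and client name below are placeholders; treat this as a minimal sketch, not the published setup:

    // Minimal sketch using the official MCP TypeScript SDK.
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

    const client = new Client({ name: "bugfix-agent", version: "1.0.0" });

    // Placeholder endpoint: substitute the real BrokenApp MCP server URL.
    const transport = new StreamableHTTPClientTransport(new URL("https://brokenapp.io/mcp"));
    await client.connect(transport);

    // Discover the server's tools, then pull runtime context through them.
    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name));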

4. AI fixes what's broken

Your AI coding agent generates fixes with full runtime context — not just source code. 30 days to fix as many as you can.

5. We re-scan to verify

BrokenApp re-scans your web app automatically. A diff report shows exactly which bugs are resolved.
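
Conceptually, verification is a set difference over finding IDs between the two scans. A sketch of the idea, assuming IDs stay stable across scans:

    // A bug counts as resolved when its ID appears in the first scan's
    // findings but not in the re-scan's. Assumes stable finding IDs.
    function resolvedBugs(before: { id: string }[], after: { id: string }[]) {
      const stillOpen = new Set(after.map((f) => f.id));
      return before.filter((f) => !stillOpen.has(f.id));
    }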

6. Share your results

Post your before-and-after progress publicly. Show the world what AI-assisted debugging can do.

Prizes

$2,100 total.

Bug Blitz Track

10+ bugs found. Maximum fixes, maximum competition.

Grand Prize: $1,000
Runner-Up: $500
3x Mentions: $50 each

Clean Code Track

3-9 bugs found. Focused hardening, quality over quantity.

Best Hardening: $250
Best Writeup: $100
2x Mentions: $50 each

Judging criteria

Three dimensions.

Fix rate (40%)

How many bugs did you fix? The re-scan proves it. Quantity matters.

Fix quality, AI-assisted (40%)

Were the fixes solid? Did you leverage MCP context effectively? Root causes, not patches.

Documentation (20%)

How well did you share your process? Did your posts help other developers learn?
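
In code, the weighting is a straight weighted sum. The sub-scores below are on a 0-to-1 scale, and the values are invented for illustration:

    // The stated weights: 40% fix rate, 40% fix quality, 20% documentation.
    function challengeScore(fixRate: number, quality: number, docs: number): number {
      return 0.4 * fixRate + 0.4 * quality + 0.2 * docs;
    }

    // e.g. fixing 8 of 10 bugs, with strong fixes and decent posts:
    challengeScore(0.8, 0.9, 0.7); // -> 0.82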

Timeline

Seven weeks. Start to finish.

Week 1: Applications open

Submit your web app URL. Scans run automatically. Qualified participants notified within 48 hours.

Weeks 2-5: Competition phase

Fix bugs, post updates, compete. Live leaderboard. Community forum for tips and questions.

Week 6: Final submissions

BrokenApp re-scans all competing web apps. Submit your final writeup and public post links.

Week 7: Winners announced

Results published. Prizes distributed. Full research report with aggregate data across all 1,000 web apps.

Who should enter

Developers with real web apps and real bugs.

Side projects, indie SaaS apps, freelance client work — any deployed web application with a publicly accessible URL.

Web apps only. BrokenApp scans deployed websites and web applications. Desktop software, mobile apps, CLI tools, and native programs are not supported.

Required

  • Must be a deployed web application with a public URL (not desktop, mobile, or CLI software)
  • The project must be real (not tutorial code or throwaway repos)
  • You must have authorization to test the web application
  • You must share progress publicly (at least 2 posts over 30 days)
  • You must complete the challenge within 30 days

Not required

  • The web app doesn't need to be profitable or have users
  • You don't need to fix every bug
  • You don't need to be an expert — the AI does the heavy lifting

Why we're doing this

AI agents with runtime context produce better fixes.

Our thesis: AI agents with runtime data — the app spec, the security findings, the actual behavior of the running application — ship better fixes than AI agents with source code alone.

1,000 developers working on 1,000 real web applications for 30 days will produce the most comprehensive test of this claim to date. We'll publish the results openly — fix rates, fix quality, where MCP context made the difference, where it didn't.

Every participant walks away with a cleaner codebase, proof that AI-assisted debugging works, and a portfolio-worthy writeup.

Your web app is probably broken.
Let's find out.

1,000 spots. Free to enter. 30 days to compete.

Enter the Challenge

brokenapp.io/challenge