

How to Write Bug Reports Developers Actually Use

A good bug report gets fixed on the first pass. A bad one generates five emails and a Slack thread, then ends up closed as "cannot reproduce." Here's the difference.

Why most bug reports fail

The gap between "it's broken" and "I fixed it" is almost always a gap in information. The reporter saw the bug. The developer didn't. Everything that happened in between — what the user clicked, what the browser sent, what the server returned — is invisible unless it was explicitly captured.

"The checkout button doesn't work on my phone" is a symptom. "POST /api/v1/checkout returned 500 with body {"error": "invalid_amount"} because the cart amount was NaN" is a bug report.

The second version tells a developer exactly where to look. The first sends them into a debugging session that might take hours — and might not reproduce the issue at all.

The seven fields that matter

1. A one-sentence description of the symptom

Lead with what the user sees, not what you think caused it. "The payment form shows a blank error message" is better than "I think there's a JavaScript error in the payment component."

2. Steps to reproduce

Numbered, specific steps. If a developer can't follow them and see the bug, the report is incomplete. Include the specific URL, the exact data you entered, and the order of actions.

  1. Go to https://example.com/checkout
  2. Add a product with quantity 0
  3. Click "Proceed to payment"
  4. Expected: a validation error message. Actual: a spinner that never resolves.

3. Environment

Browser, OS, screen resolution, and whether it's reproducible in other browsers. A bug in Safari that doesn't appear in Chrome is half the answer already.

4. Console output

Paste the full console log — not just errors. A console.warn that fires 200ms before the crash is often the real culprit. Most bug report tools only capture errors; make sure yours captures all five levels (.error, .warn, .info, .log, .debug).
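Capturing all five levels can be done by wrapping the console methods before the page runs. A minimal sketch (the captureConsole name and buffer size are illustrative, not any specific tool's API):

```javascript
// Wrap all five console levels and record each call into a bounded
// buffer, so a bug report includes warnings and logs, not just errors.
const LEVELS = ["error", "warn", "info", "log", "debug"];

function captureConsole(maxEntries = 500) {
  const entries = [];
  for (const level of LEVELS) {
    const original = console[level].bind(console);
    console[level] = (...args) => {
      entries.push({ level, time: Date.now(), args: args.map(String) });
      if (entries.length > maxEntries) entries.shift(); // keep memory bounded
      original(...args); // still log to the real console
    };
  }
  return entries; // attach this array to the bug report
}
```

Run it as early as possible — anything logged before the wrapper is installed is lost.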

5. Network requests

Which API calls were made? Which ones failed? What was the status code, what did the request body look like, and what did the server respond with? A 401 means something different from a 500. A malformed request body is different from a server-side data problem.
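One way to capture this automatically is to wrap fetch and record the status and bodies of failed calls. A sketch, with illustrative names (wrapFetch, recordedRequests) rather than any library's real API:

```javascript
// Wrap fetch so every call's method, URL, and status are recorded,
// plus request/response bodies when the call fails.
const recordedRequests = [];

function wrapFetch(fetchImpl = fetch) {
  return async (url, options = {}) => {
    const entry = { method: options.method || "GET", url: String(url) };
    const response = await fetchImpl(url, options);
    entry.status = response.status;
    if (!response.ok) {
      entry.requestBody = options.body ?? null;
      // clone() so reading the body here doesn't consume the caller's copy
      entry.responseBody = await response.clone().text();
    }
    recordedRequests.push(entry);
    return response;
  };
}
```

With this in place, the "POST /api/v1/checkout returned 500" level of detail from earlier is captured without the reporter doing anything.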

6. An annotated screenshot

Draw a box around the broken element. Add an arrow pointing to the error message. Add a text note with context. A screenshot with annotations tells the developer exactly where to look without a call.

7. Severity

Be honest. A blocking bug that breaks checkout is different from a cosmetic spacing issue. Severity affects prioritization — mislabeling wastes everyone's time.

The "HAR export" trick

For complex network issues, attach a .har file. A HAR (HTTP Archive) file captures every network request in a structured format that any developer can open in Chrome DevTools → Network → Import. It includes timing, headers, request bodies, and response bodies — the full picture of what happened on the network during the session.

This single artifact can replace 10 screenshots and 5 paragraphs of explanation.

What "can't reproduce" really means

When a developer marks a bug "cannot reproduce," it almost always means the report was missing environment or step information — not that the bug doesn't exist. The user definitely saw something. The question is whether the report gave the developer enough to find it.

Complete reports get fixed. Incomplete reports get "can't reproduce." The difference is usually two or three fields.

Automating the hard parts

Manually capturing all seven fields is tedious. That's why tools like Site Reviewer exist: a Chrome extension that automatically captures the full console log, all network requests (including bodies), browser metadata, and an annotated screenshot the moment a user clicks "Report." The developer receives everything they need without asking.

Stop getting incomplete bug reports

Site Reviewer captures all seven fields automatically — console, network, screenshot, browser, and more.