
How to Set Up a Bug Bounty Program for Your Startup

VDP vs bug bounty vs paid program, writing scope and rules of engagement, reward tiers, triage process, avoiding program abuse, HackerOne vs Bugcrowd vs Intigriti, self-hosted VDP with security.txt, responding to disclosures professionally.

October 8, 2025 · 9 min read · ShipSafer Team

Security researchers find vulnerabilities in production systems every day. Some report them responsibly. Some sell them. Some exploit them. A bug bounty or vulnerability disclosure program creates a structured channel for responsible researchers to report issues to you — before bad actors find the same issues.

This guide covers how to build a program that attracts quality researchers, filters out noise, and improves your security without breaking your team.


Three Types of Programs: VDP, Private Bounty, Public Bounty

Vulnerability Disclosure Policy (VDP)

A VDP is a public statement that says: "If you find a vulnerability in our systems, here's how to report it to us, and here's what you can expect in return." There are no monetary rewards — just a commitment to respond, investigate, and fix valid reports without legal action against the researcher.

Appropriate for: Startups, companies preparing for SOC 2, organizations that want to do the right thing without the complexity of a bounty program. Many enterprise procurement teams now ask if you have a VDP.

Cost: $0 (staff time only)

Signal: Demonstrates good faith and security maturity. Having a VDP is better than nothing; having no disclosure channel at all means researchers either post publicly, sell the vulnerability, or go silent.

Private Bug Bounty Program

A program with monetary rewards, but access is restricted to invited researchers. You control who participates, which gives you more predictable volume and researcher quality.

Appropriate for: Companies with enough engineering capacity to triage reports, a reasonable security baseline already in place, and budget for rewards (even $200-$500/finding adds up).

Cost: Platform fees + bounty payouts. Typically $5k-$30k/year depending on volume.

Public Bug Bounty Program

Open to any registered researcher on the platform. Higher volume, more noise, but also more coverage. The brand recognition of a public program attracts talented researchers who are motivated by both reputation and money.

Appropriate for: Companies with a mature product, a dedicated security team, and a well-understood attack surface. Not recommended for early-stage startups without a security team.

Cost: Platform fees + bounty payouts. Large programs can pay out $100k+ per year.


Recommended Sequence for Startups

  1. Start with a VDP — Get the basics in place: a security.txt file, a reporting email, and a clear policy document. Takes one day.
  2. Move to a private bounty — After your first pentest, once you've fixed obvious issues, invite a handful of trusted researchers to find the next tier of issues.
  3. Graduate to a public program — Once you have SOC 2, a security team, and a well-understood scope, open to the public.

Platform Options

HackerOne

The largest bug bounty platform. Offers both VDP (free for public programs, paid for private) and bounty programs. Has the largest researcher community and the most sophisticated triaging tools. Good choice for companies targeting enterprise customers — HackerOne's brand is well-recognized.

Pricing: VDP starts at $14,999/year. Bounty programs have additional fees.

Bugcrowd

Strong in the enterprise market with a good Australian and UK presence. Offers managed triage services, which can help small teams that can't handle the volume themselves.

Pricing: Similar to HackerOne. Managed services add cost but save internal time.

Intigriti

European platform with a strong researcher community, particularly relevant for GDPR-focused companies or those targeting EU customers. Often less expensive than HackerOne for similar coverage.

Pricing: Generally lower than HackerOne, with flexible options.

Self-Hosted VDP

For a simple VDP without a budget, you can self-host:

  1. Create a security.txt file at /.well-known/security.txt
  2. Create a security@yourdomain.com email
  3. Write a policy page at yourdomain.com/security or security.yourdomain.com
  4. Reference the policy page in security.txt

This is completely free and satisfies most enterprise questionnaire requirements.


Writing Your Scope

Scope is the most important part of your program. Unclear scope generates reports on things you don't care about and frustrates researchers who spend time on out-of-scope issues.

In-Scope vs Out-of-Scope

In-scope examples:

  • app.yourdomain.com — main product
  • api.yourdomain.com — public API
  • Mobile apps (iOS and Android)

Out-of-scope examples:

  • status.yourdomain.com — third-party status page (Statuspage.io)
  • blog.yourdomain.com — if hosted on a third-party CMS
  • marketing.yourdomain.com — if separate from product and low-risk
  • Social engineering against employees
  • Physical security
  • Denial of service attacks
  • Automated scanning without prior approval
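Scope lists like these can also be encoded in code so triage can flag out-of-scope reports automatically. A minimal sketch, using the example domains above as placeholders (the `scope_status` helper and its rules are illustrative, not a platform API):

```python
from fnmatch import fnmatch

# Example scope lists mirroring the in/out-of-scope domains above.
IN_SCOPE = ["app.yourdomain.com", "api.yourdomain.com"]
OUT_OF_SCOPE = ["status.yourdomain.com", "blog.yourdomain.com", "marketing.yourdomain.com"]

def scope_status(hostname: str) -> str:
    """Classify a reported hostname: explicit out-of-scope wins, then in-scope."""
    host = hostname.lower().rstrip(".")
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return "out-of-scope"
    if any(fnmatch(host, pattern) for pattern in IN_SCOPE):
        return "in-scope"
    return "unknown"  # route to a human for a scope decision
```

Patterns support wildcards (e.g. `*.yourdomain.com`), but listing exact hosts keeps the scope unambiguous for researchers and triagers alike.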

Vulnerability Types to Specify

Be explicit about what you consider valid:

Typically in-scope and high reward:

  • Remote code execution (RCE)
  • SQL injection leading to data access
  • Authentication bypass
  • Privilege escalation
  • IDOR (Insecure Direct Object Reference) exposing other users' data
  • SSRF (Server-Side Request Forgery) with internal network access

Typically in-scope but lower reward:

  • XSS (Cross-Site Scripting) — reflected, stored, or DOM-based
  • CSRF without significant impact
  • Open redirect
  • Information disclosure (error messages with stack traces, etc.)

Often excluded:

  • Self-XSS (attacker can only attack themselves)
  • Missing rate limiting on non-sensitive endpoints
  • Clickjacking on pages that don't contain sensitive actions
  • SSL/TLS configuration issues (certificate issues, weak ciphers) — these are low-risk operational findings
  • Missing security headers without demonstrated impact
  • Reports from automated scanners without manual verification

Reward Tiers

If running a paid program, establish clear reward tiers so researchers know what to expect and don't feel lowballed.

Example Reward Structure

Severity | Examples | Reward Range
Critical | RCE, authentication bypass, mass data exposure | $2,000 – $10,000
High | IDOR (other users' data), privilege escalation, SQLi | $500 – $2,000
Medium | Stored XSS, CSRF with impact, sensitive info disclosure | $150 – $500
Low | Reflected XSS, open redirect, minor information disclosure | $50 – $150
Informational | Best practice recommendation, low-risk finding | No payout
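A tier table like this can live in code so payouts stay consistent no matter who triages. A minimal sketch, using the illustrative amounts from the table above (adjust the numbers to your own budget):

```python
# Example reward tiers matching the table above; amounts are illustrative.
REWARD_TIERS = {
    "critical": (2_000, 10_000),
    "high": (500, 2_000),
    "medium": (150, 500),
    "low": (50, 150),
    "informational": (0, 0),  # no payout
}

def reward_range(severity: str) -> tuple[int, int]:
    """Return the (min, max) payout in USD for a triaged severity."""
    try:
        return REWARD_TIERS[severity.lower()]
    except KeyError:
        raise ValueError(f"unknown severity: {severity!r}")
```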

Tips on rewards:

  • Lowballing researchers damages your reputation in the community. If a researcher finds a critical RCE and you pay $100, that gets talked about.
  • Generous rewards attract repeat researchers who do deeper work on your product.
  • Many programs add bonuses for high-quality reports that include clear PoC and remediation guidance.

Triage Process

Every report needs to be acknowledged and triaged. The lifecycle:

1. Acknowledgment (within 24-48 hours)

Acknowledge receipt of the report. This is a simple "we received your report and will investigate within [X] business days." Researchers who hear nothing assume their report was ignored and sometimes go public.

2. Initial Triage (within 1-5 business days)

Assess the report:

  • Can you reproduce it?
  • Is it in scope?
  • Is it a valid vulnerability or a known limitation?
  • What's the severity?

3. Validation and Severity Assignment

Reproduce the issue. If you can't reproduce it, ask the researcher for more information. Assign a CVSS score or use your internal severity rubric.
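If you use CVSS, the v3.x qualitative rating bands map cleanly onto the tiers used here. A sketch of that mapping, folding CVSS's "None" rating into "informational" to match the reward table (that fold is our choice, not part of the CVSS spec):

```python
def cvss_to_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "informational"  # CVSS calls this band "None"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"
```

Whether you use CVSS or an internal rubric, the key is consistency: the same finding should land in the same tier regardless of who triages it.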

4. Remediation

Fix the issue. High and critical issues should be on a short fix cycle — many programs commit to 30 days for critical, 90 days for lower severity.

5. Payout and Closure

Once fixed, pay the bounty (if applicable), thank the researcher, and close the report. Many programs offer public credit (a Hall of Fame page) which researchers value.
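The five steps above form a small state machine, and encoding it keeps reports from silently skipping steps (e.g. closing without paying). A sketch under our own naming, not any platform's workflow model:

```python
from enum import Enum

class ReportState(Enum):
    NEW = "new"
    ACKNOWLEDGED = "acknowledged"
    TRIAGED = "triaged"
    VALIDATED = "validated"
    FIXED = "fixed"
    CLOSED = "closed"
    REJECTED = "rejected"  # out of scope, duplicate, or not reproducible

# Allowed transitions mirroring the five lifecycle steps above.
TRANSITIONS = {
    ReportState.NEW: {ReportState.ACKNOWLEDGED},
    ReportState.ACKNOWLEDGED: {ReportState.TRIAGED, ReportState.REJECTED},
    ReportState.TRIAGED: {ReportState.VALIDATED, ReportState.REJECTED},
    ReportState.VALIDATED: {ReportState.FIXED},
    ReportState.FIXED: {ReportState.CLOSED},
    ReportState.CLOSED: set(),
    ReportState.REJECTED: set(),
}

def advance(current: ReportState, target: ReportState) -> ReportState:
    """Move a report to its next state, refusing to skip lifecycle steps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move {current.value} -> {target.value}")
    return target
```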

Triage Staffing

For a small startup, designate one engineer as the primary triage contact. They don't need to fix every issue — just triage and route. Budget 2-4 hours per week for a VDP, more for a bounty program.


Avoiding Program Abuse

Some researchers are less than scrupulous. Common abuse patterns:

Volume spamming: Submitting dozens of low-quality, automated scanner results hoping some pay out. Address this by specifying in your policy that automated scan results without manual verification will be rejected.

Severity inflation: Calling a missing header a "critical" to get a higher payout. Have a clear severity rubric and enforce it consistently.

Threatening to publish: "Fix this in 24 hours or I'm posting on Twitter." These are pressure tactics, not responsible disclosure. Your policy should state your response SLAs. If a researcher is making threats, consult legal counsel.

Reporting known issues: Researchers sometimes report issues that are already tracked internally or are known limitations. Your policy should state that you'll acknowledge reports for known issues but won't pay for them.


Rules of Engagement

Your policy must be explicit about what researchers are and are not allowed to do:

Permitted:

  • Testing on accounts you create for testing purposes
  • Passive reconnaissance (DNS lookups, certificate transparency logs)
  • Testing in-scope domains as specified

Prohibited:

  • Accessing other users' data
  • Destructive testing (deleting or modifying data)
  • Denial of service attacks
  • Physical attacks or social engineering
  • Accessing systems out of scope
  • Exfiltrating more data than necessary to prove the vulnerability

Safe harbor: Explicitly state that you will not pursue legal action against researchers who comply with your policy. Without a safe harbor clause, researchers in many jurisdictions face legal risk even for responsible disclosure.


Setting Up Your security.txt File

A security.txt file is the simplest way to tell the world where to report vulnerabilities. Place it at /.well-known/security.txt:

Contact: mailto:security@yourdomain.com
Expires: 2026-12-31T23:59:59Z
Acknowledgments: https://yourdomain.com/security/hall-of-fame
Policy: https://yourdomain.com/security
Preferred-Languages: en
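An expired security.txt is a common (and embarrassing) finding in its own right, so it's worth checking the file in CI. A sketch that validates the two fields RFC 9116 requires, Contact and Expires; the `check_security_txt` helper and its error messages are our own:

```python
from datetime import datetime, timezone

def check_security_txt(text: str, now: datetime) -> list[str]:
    """Return a list of problems with a security.txt body (empty list = OK)."""
    fields: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, value = line.partition(":")  # split at the first colon only
        fields.setdefault(name.strip().lower(), value.strip())

    problems = []
    if "contact" not in fields:
        problems.append("missing required Contact field")
    if "expires" not in fields:
        problems.append("missing required Expires field")
    else:
        # Accept the trailing "Z" form on older Python versions.
        expires = datetime.fromisoformat(fields["expires"].replace("Z", "+00:00"))
        if expires <= now:
            problems.append("Expires date is in the past")
    return problems
```

Run it against the live `/.well-known/security.txt` on a schedule so the Expires date gets refreshed before it lapses.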

This file is discovered by:

  • Security researchers doing manual reconnaissance
  • Automated scanners that look for a reporting channel
  • Enterprise buyers checking your security posture
  • Bug bounty platforms indexing programs

Responding Professionally to Your First Report

When you receive your first report (especially before your program is set up), the way you respond matters:

Do:

  • Acknowledge within 24 hours
  • Thank the researcher, even if the finding is low severity
  • Keep them updated on remediation progress
  • Credit them publicly (with their permission) when fixed

Don't:

  • Ignore the report
  • Send a legal cease-and-desist without consulting counsel
  • Deny the finding is valid when it clearly is
  • Fix the issue without communicating back to the researcher

The security research community is small and reputation travels fast. Companies that treat researchers poorly get reputations that deter future reports — meaning issues stay hidden until they're exploited rather than disclosed.

bug bounty program
vulnerability disclosure policy
responsible disclosure
security testing
VDP
