Security Engineering

Threat Modeling: STRIDE, PASTA, and How to Find Threats Before Attackers Do

A practical guide to threat modeling methodologies—STRIDE for systematic threat identification, PASTA for risk-centric analysis, how to build and read data flow diagrams, and how to integrate threat modeling into your software development lifecycle.

September 15, 2025 · 9 min read · ShipSafer Team

Why Threat Modeling Before You Build

Security vulnerabilities discovered in production cost roughly 30x more to fix than vulnerabilities found during design, according to the Systems Sciences Institute at IBM. That ratio is driven by the cost of rearchitecting live systems, coordinating deployments, and managing potential breach fallout—versus changing a data flow diagram before a line of code is written.

Threat modeling is the practice of systematically identifying what can go wrong with a system before you build it. It answers four core questions:

  1. What are we building?
  2. What can go wrong?
  3. What are we going to do about it?
  4. Did we do a good job?

The output is not a lengthy report—it is a prioritized list of threats with corresponding mitigations that feeds directly into development backlog items.

Data Flow Diagrams: The Foundation

Before applying any methodology, you need a model of what you are building. Data Flow Diagrams (DFDs) are the standard representation for threat modeling.

DFDs use four element types:

  • External entity (rectangle): Something outside your system that interacts with it—a user, a third-party API, another service
  • Process (circle/oval): A transformation of data—your application, a microservice, a function
  • Data store (parallel lines): Where data is stored—a database, a file, a cache
  • Data flow (arrow): Movement of data between elements, labeled with what the data is

Trust boundaries are drawn as dashed lines separating zones of different trust levels. A browser (untrusted) calling your API (trusted) crosses a trust boundary. Your API calling an internal database (same trust zone) does not.

Start with a Level 0 DFD (system context): your entire system as one process, showing external entities and data flows in/out. Then decompose into Level 1: the major internal components. For threat modeling most systems, Level 1 is sufficient.

Common mistake: drawing DFDs that show what you want the system to do, not what it actually does (or will do). Data flows should reflect actual network calls, not idealized abstractions.
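A DFD is simple enough to capture as data, which makes trust-boundary checks mechanical. Below is a minimal sketch of the four element types and a boundary-crossing check; the type names and zone labels are illustrative, not any tool's schema:

```typescript
// Minimal DFD model: elements, flows, and trust zones (illustrative schema).
type ElementKind = "external" | "process" | "datastore";

interface DfdElement {
  id: string;
  kind: ElementKind;
  trustZone: string; // e.g. "internet", "app"
}

interface Flow {
  from: string;
  to: string;
  data: string; // label: what the data is
}

// A flow crosses a trust boundary when its endpoints sit in different zones.
function crossesTrustBoundary(flow: Flow, elements: Map<string, DfdElement>): boolean {
  const src = elements.get(flow.from);
  const dst = elements.get(flow.to);
  if (!src || !dst) throw new Error(`unknown element in flow ${flow.from} -> ${flow.to}`);
  return src.trustZone !== dst.trustZone;
}

// Level 1 DFD matching the article's example: Browser -> API -> MongoDB.
const elements = new Map<string, DfdElement>([
  ["browser", { id: "browser", kind: "external", trustZone: "internet" }],
  ["api", { id: "api", kind: "process", trustZone: "app" }],
  ["db", { id: "db", kind: "datastore", trustZone: "app" }],
]);

const flows: Flow[] = [
  { from: "browser", to: "api", data: "order request (HTTPS)" },
  { from: "api", to: "db", data: "order query" },
];

// Flows crossing a boundary deserve the closest scrutiny.
const risky = flows.filter((f) => crossesTrustBoundary(f, elements));
```

Only the browser-to-API flow lands in `risky`; the API-to-database flow stays inside one trust zone, matching the browser/API example above.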

STRIDE Methodology

STRIDE was developed at Microsoft in 1999 and remains one of the most widely used threat categorization frameworks. The acronym maps to six threat categories, each corresponding to a violated security property:

Threat | Violated Property | Question
Spoofing | Authentication | Can an attacker pretend to be someone else?
Tampering | Integrity | Can an attacker modify data or code?
Repudiation | Non-repudiation | Can an actor deny performing an action?
Information Disclosure | Confidentiality | Can an attacker read data they should not?
Denial of Service | Availability | Can an attacker disrupt the service?
Elevation of Privilege | Authorization | Can an attacker gain permissions they should not have?

Applying STRIDE per Element

The most systematic approach is STRIDE-per-element: apply each STRIDE category to each element in your DFD and ask whether that threat is applicable.

External entities can spoof—an attacker claiming to be a legitimate user or service.

Processes are exposed to all six categories: they can be spoofed (impersonated) or tampered with (code injection, deserialization attacks); repudiation applies (did the process log what it did?); they can disclose information (in error messages or logs); they can be denied service; and they can be a vector for elevation of privilege (can a low-privilege process gain higher privileges?).

Data stores can be tampered with, can disclose information, and can be subject to DoS (exhaustion of storage quota).

Data flows can be spoofed (man-in-the-middle), tampered with (packet modification if unencrypted), and can disclose information (eavesdropping on unencrypted channels).

Trust boundary crossings are where the highest concentration of threats lies. Every data flow that crosses a trust boundary should be examined carefully for spoofing (is the source authenticated?) and tampering (is the channel integrity-protected?).
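The per-element applicability described above can be encoded as a lookup table, which is how tools automate STRIDE-per-element. This sketch follows the applicability stated in this article (charts in other references differ slightly):

```typescript
// STRIDE-per-element: which threat categories to ask about for each DFD
// element type, following the applicability described in the text above.
type Stride = "S" | "T" | "R" | "I" | "D" | "E";
type DfdKind = "external" | "process" | "datastore" | "dataflow";

const applicable: Record<DfdKind, Stride[]> = {
  external: ["S"],                          // entities can be impersonated
  process: ["S", "T", "R", "I", "D", "E"],  // all six apply to processes
  datastore: ["T", "I", "D"],               // tampering, disclosure, exhaustion
  dataflow: ["S", "T", "I"],                // MITM, modification, eavesdropping
};

// Enumerate the STRIDE questions to ask for one element.
function threatsFor(kind: DfdKind): Stride[] {
  return applicable[kind];
}
```

Iterating this table over every element in the DFD is the whole of STRIDE-per-element; the analysis effort goes into answering each question, not generating it.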

STRIDE Example: A Simple API

Consider a Next.js application with a /api/orders endpoint that reads from a MongoDB database.

Data flow: Browser → (trust boundary) → Next.js API → MongoDB

STRIDE analysis of the API process:

  • S: Can an attacker spoof a legitimate user's session cookie? → Mitigation: strong session tokens, HttpOnly/Secure cookies
  • T: Can an attacker inject malicious input to tamper with MongoDB queries? → Mitigation: parameterized queries, input validation
  • R: Can the system prove which user performed an order action? → Mitigation: audit logging tied to authenticated user identity
  • I: Does the API return data the requesting user should not see? → Mitigation: authorization checks before returning data
  • D: Can an attacker flood the endpoint to deny service? → Mitigation: rate limiting per user/IP
  • E: Can a standard user call admin-only functions? → Mitigation: role-based access control enforced server-side

STRIDE is fast—a team of three can apply it to a medium-complexity system in 2–4 hours. Its weakness is that it generates long threat lists without inherent prioritization.

PASTA Methodology

Process for Attack Simulation and Threat Analysis (PASTA) is a seven-stage, risk-centric methodology developed by Tony UcedaVelez. Where STRIDE is threat-centric (what categories of threat exist?), PASTA is risk-centric (what is the business risk of each threat?).

The Seven Stages

Stage 1: Define the Objectives (business context) What are the business goals? What data is valuable? What are the regulatory requirements? What would a breach cost in terms of financial penalty, reputational damage, customer churn? This stage ensures threat modeling is grounded in business impact, not just technical elegance.

Stage 2: Define the Technical Scope Enumerate the technical components: application stack, infrastructure, network topology, third-party dependencies. This is where you build or refine your DFDs.

Stage 3: Application Decomposition Decompose the application into its components, identify data flows, enumerate trust zones, and catalog the assets that need protection (PII, payment data, intellectual property, authentication credentials).

Stage 4: Threat Analysis Research threat actors relevant to your industry and application type. Who would attack you? Script kiddies? Organized crime? Nation-state actors? Insiders? Each actor has different capabilities, motivations, and targets. PASTA uses threat libraries and threat intelligence to enumerate realistic attack scenarios, not just theoretical ones.

Stage 5: Vulnerability and Weakness Analysis Map existing weaknesses in the system: known CVEs in dependencies, configuration weaknesses, code review findings. This stage connects threat scenarios to actual attack paths—a threat is only a real risk if there is a corresponding weakness to exploit.

Stage 6: Attack Modeling and Simulation Build attack trees showing how a threat actor would chain weaknesses to achieve an objective. An attack tree for "exfiltrate customer PII" might show: (1) exploit SSRF to reach internal API, OR (2) steal admin credentials via phishing + bypass MFA via SIM swap, OR (3) exploit unsanitized file upload to achieve RCE.

Stage 7: Risk Analysis and Residual Risk Management Score each attack scenario by likelihood and business impact. Produce a prioritized risk register. For each risk, decide: accept, mitigate, transfer (insurance), or avoid (remove the feature). This output feeds directly into the security backlog.

PASTA vs STRIDE

STRIDE is faster and more accessible for development teams—it takes an afternoon, not a week. PASTA produces more actionable risk-prioritized output but requires security expertise to apply correctly. A practical approach: use STRIDE during sprint planning for feature-level threat modeling, and use PASTA quarterly for architecture-level reviews.

Prioritizing Threats: DREAD and Risk Matrices

STRIDE produces a list; DREAD helps prioritize it. DREAD scores threats on five dimensions, each 0–10:

  • Damage: How bad is the worst-case impact?
  • Reproducibility: How easy is it to reproduce?
  • Exploitability: How easy is it to exploit?
  • Affected users: How many users are affected?
  • Discoverability: How easy is it for attackers to discover the vulnerability?

DREAD score = (D + R + E + A + D) / 5

A threat scoring 8+ is high priority and should be mitigated before shipping. A threat scoring 4–7 is medium and should be in the backlog. Below 4 is low and can be accepted or deferred.

Microsoft later deprecated DREAD as an official scoring system due to subjectivity in scoring, but the framework remains widely used informally. CVSS v3 is a more standardized alternative for scoring specific vulnerabilities once they are identified in code.

Integrating Threat Modeling into the SDLC

The biggest reason threat modeling fails is timing: conducting it after implementation means the team has to undo architectural decisions to address threats. The correct integration points are:

Design phase (highest value): Before any code is written for a new feature, service, or significant architectural change. The DFD is built from design documents, not production systems. Threats found here can be addressed by changing the design.

Pull request review: A lightweight threat-model checklist embedded in the PR template. Not a full STRIDE exercise, but key questions: does this PR handle user input? Does it touch authentication or authorization? Does it access sensitive data? If yes, a brief threat review is warranted.

Sprint retrospective: Review the threat model after a sprint to assess whether implemented mitigations addressed the identified threats. Update the DFD to reflect what was actually built.

Quarterly architecture review: Full PASTA-style review for system-wide architecture, new third-party integrations, and significant infrastructure changes.

Tooling

Several tools support threat modeling:

  • OWASP Threat Dragon: Open-source, web-based DFD tool with STRIDE integration. Free.
  • Microsoft Threat Modeling Tool: Desktop application, Windows-only, with built-in threat templates.
  • IriusRisk: Commercial platform with automated threat suggestion based on components selected in the diagram.
  • Threagile: YAML-based, code-first threat modeling that integrates with CI/CD pipelines.
  • draw.io / Lucidchart: Generic diagramming tools that many teams use informally for DFDs, without structured threat analysis.

Making It Sustainable

Threat modeling sessions are most effective when:

  • Small scope: One feature or service at a time, not the entire system
  • Right participants: The architect who designed the feature + one security person + one developer who will implement it
  • Time-boxed: 1–2 hours maximum for a single session
  • Living artifacts: DFDs and threat registers are version-controlled alongside code, not stored in a forgotten wiki page
  • Actionable output: Every threat has a named owner and a resolution deadline or a documented acceptance decision

The goal is not a perfect threat model—it is a useful one. A threat model that misses 20% of threats but gets the team thinking about the other 80% before writing code is enormously more valuable than a comprehensive report that nobody reads until after the production incident.

threat-modeling
STRIDE
PASTA
security-design
SDLC
appsec
