Disclaimer: The information and resources in this document are directed at CMS internal teams and ADOs to help them initiate and complete threat model exercises. While this document is publicly available, the context it includes, as well as any information it excludes, is intended for CMS-specific audiences.
Threat Modeling is a holistic approach to analyzing potential threats and risks in a system or application so they can be identified and addressed proactively. It involves analyzing how an attacker might try to exploit weaknesses in the system and then taking steps to mitigate those risks, enabling informed decision-making about application security risks. In addition to producing a model diagram, the process also produces a prioritized list of security improvements to the conception, requirements gathering, design, or implementation of an application.
At CMS, we use threat modeling to help identify potential weaknesses that could be exploited by malicious actors. The CMS Threat Modeling Team works with System Teams to analyze their system's components, understand how they interact, and envision how an attacker might exploit vulnerabilities. This important work allows System/Business Owners, ISSOs, and Developers to implement appropriate security measures – such as encryption, access controls, or regular software updates – to reduce the chances of a successful attack and to protect sensitive information.
Threat Modeling is typically paired with end-phase security testing; it can be conducted at any time, but is ideally done early in the design phase of the Software Development Life Cycle (SDLC). Once completed, a threat model can be updated as needed throughout the SDLC, and should be revisited with each new feature or release. This practice promotes identifying and remediating threats, as well as continuously monitoring the effects of internal or external changes.
Threat Modeling supports CMS’ system security and continuous monitoring efforts by advancing the following goals:
Teams choosing to participate in Threat Modeling at CMS will have the option to work with the CMS Threat Modeling Team during a series of sessions. To complete these sessions successfully, the CMS Threat Modeling Team uses a number of proven frameworks, including:
These methods were chosen by the CMS Threat Modeling Team because they are expedient, reliable models that use industry-standard language and provide immediate value to CMS teams. Read on to learn about the specifics of these frameworks.
As your team embarks on its Threat Modeling journey, it’s important that these four questions remain top-of-mind:
These questions form the base of the work that your team and the CMS Threat Modeling Team will complete together. The questions are actionable and designed to quickly identify problems and solutions, which is the core purpose of Threat Modeling.
The STRIDE Threat Modeling framework is a systematic approach used to identify and analyze potential security threats and vulnerabilities in software systems. It provides a structured methodology for understanding and addressing security risks during the design and development stages of a system.
The acronym STRIDE stands for the six types of threats that the framework helps to identify:
| Threat type | Property Violated | Threat Definition |
| --- | --- | --- |
| Spoofing | Authentication | Pretending to be something or someone other than yourself |
| Tampering | Integrity | Modifying something on disk, network, memory, or elsewhere |
| Repudiation | Non-Repudiation | Claiming that you didn’t do something or were not responsible; can be honest or false |
| Information Disclosure | Confidentiality | Providing information to someone not authorized to access it |
| Denial of Service | Availability | Exhausting resources needed to provide service |
| Elevation of Privilege | Authorization | Allowing someone to do something they are not authorized to do |
More information about using the STRIDE method to complete your Threat Modeling Session can be found in the section “How to create your Threat Model”.
Apart from the STRIDE Threat Modeling framework, there are several other popular Threat Modeling frameworks commonly used in the field of software security. Here are a few notable ones:
PASTA (Process for Attack Simulation and Threat Analysis) is a risk-centric Threat Modeling framework that focuses on the business impact of threats. It involves a seven-step iterative process, including defining the objectives, creating an application profile, identifying threats, assessing vulnerabilities, analyzing risks, defining countermeasures, and validating the results with active vulnerability or penetration testing.
LINDDUN threat modeling is a comprehensive approach that extends beyond traditional security threat modeling by focusing explicitly on various aspects of privacy. It is particularly relevant in the development of systems where user data privacy is of utmost importance, such as in applications handling personal or sensitive information. LINDDUN stands for Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, and Non-compliance, the seven privacy threat categories the framework helps identify and address.
RRA (Rapid Risk Assessment) is designed to quickly identify and prioritize security risks in software projects, allowing teams to allocate their resources effectively. It aims to be a lightweight and agile approach to risk assessment.
These are just a few examples of additional Threat Modeling frameworks. Each framework has its strengths and focuses on different aspects of Threat Modeling, but they all aim to identify and address potential security risks effectively. It may be beneficial for your team to review these frameworks as you start your own threat model.
CVSS is a vulnerability severity classification system that identifies metrics around the ease of exploitation and the privilege levels required to exploit a CVE. It is not a method of threat modeling or tracking risk; rather, it is used to advise on remediation cadence and urgency. Once a threat is identified, its associated vulnerability can receive a numeric CVSS score (0.0–10.0), which maps to a qualitative severity rating of Critical, High, Medium, Low, or None to guide prioritization.
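As an illustration of how the numeric score translates into a severity rating, here is a minimal sketch in Python. The function name is hypothetical; the thresholds follow the published CVSS v3.x qualitative severity scale.

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"


# Example: CVE-2021-44228 (Log4Shell) carries a base score of 10.0 -> "Critical"
print(cvss_severity(10.0))
```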
ATT&CK is not a threat modeling methodology per se, but it can be used in conjunction with other threat modeling frameworks. ATT&CK is a collection of tactics, techniques, and procedures (TTPs) that enumerate the exploitation and post-exploitation actions threat actors can take against vulnerabilities. Unlike individual vulnerabilities, which receive CVE classifications, ATT&CK is a repository of steps an adversary can chain together which, as a whole, create a kill chain, or successful attack. It is a good tool for referencing attack actions in the same manner across technical and non-technical departments. Once threats have been identified, it can be used alongside threat modeling to associate attack actions with each identified threat. ATT&CK is not a compliance framework.
Many tools and frameworks exist that support threat modeling activities or that can be mapped to a threat modeling methodology such as STRIDE, but they should not be relied upon in isolation from other methods.
The tools needed for Threat Modeling can be as simple as a whiteboard for brainstorming ideas and a method to record threats and mitigations (paper, a photo of a diagram, etc.). At CMS, the CMS Threat Modeling Team uses the following tools to communicate with teams and record ideas and information:
Teams primarily use Mural as a digital whiteboard for drawing Data Flow Diagrams (DFDs). You can sign up for a Mural space to complete this work by contacting the CMS Cloud Team (CMS email account required).
NOTE: Other drawing tools may be used instead, such as app.diagrams.net (formerly Draw.io), Lucidchart, etc.
Teams use Confluence to fill out their Threat Model Template in a space that is protected and safe from outside users.
The CMS Threat Modeling Team will use Zoom to collaborate with other team members on a Threat Model. Threat Modeling sessions are recorded so that all artifacts can be transferred to other systems of record.
Your team is encouraged to review the CMS CASP Threat Modeling playlist on the CMS YouTube channel before you start your Threat Model.
Additional or alternative tools may be added in the future to further help CMS ADO Teams with creating and maintaining Threat Models.
As a reference, here are some other threat modeling tools in the industry that may be considered in the future for use at CMS:
Free Tools:
OWASP Threat Dragon is a free, open-source, cross-platform application for creating threat models. Use it to draw threat modeling diagrams and to identify threats for your system. With an emphasis on flexibility and simplicity, it is easily accessible for all types of users.
The Microsoft Threat Modeling Tool is a core element of the Microsoft Security Development Lifecycle (SDL). It allows software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-effective to resolve, which can greatly reduce the total cost of development. The tool is also designed with non-security experts in mind, making threat modeling easier for all developers by providing clear guidance on creating and analyzing threat models.
NOTE: The Microsoft Threat Modeling Tool is a desktop-only tool that can be installed on Microsoft operating systems only.
Paid Tools (require a paid or annual license for use):
IriusRisk is an open Threat Modeling platform that automates and supports creating threat models at design time. The threat model includes recommendations on how to address each risk. IriusRisk then enables the user to manage security risks throughout the rest of the software development lifecycle (SDLC), with architectural diagramming and full customization that enable every stakeholder to collaborate.
ThreatModeler’s patented technology enables intuitive, automated, collaborative threat modeling and integrates directly into every component of a DevSecOps tool chain, automating the “Sec” in DevSecOps from design to code to cloud at scale. Its SaaS platform is designed to keep applications, infrastructure, and cloud assets secure and compliant from the design stage, with the goal of reducing incident response costs, remediation costs, and regulatory fines. It is used by software, security, and cloud architects, engineers, and developers at companies across the world. Founded in 2010, ThreatModeler is headquartered in Jersey City, NJ.
Devici is a threat modeling platform built on the idea that secure design starts with threat modeling at the inception of every project, making a Secure by Design approach attainable for teams of any size. The company positions itself as more than a threat modeling tool, emphasizing the craftsmanship required for secure software development. Its name draws inspiration from Leonardo da Vinci, who saw the intricate connections between art and science; in the same spirit, the platform aims to help developers and engineers examine the design of their software in depth, uncover potential security and privacy threats, and put secure-by-design foundations in place.
Learn about the Threat Modeling process so you can decide the right time to engage with the CMS Threat Modeling Team, based on your system’s current compliance and authorization schedule.
Please complete the Threat Modeling Intake Form. The CMS Threat Modeling Team will use the answers you provide in this questionnaire to help inform future planning sessions.
To start things off, facilitators from the CMS Threat Modeling Team will meet with the System/Business Owner, ISSO, and up to two Senior Developers to talk about the process, time commitment, and outputs expected in future Threat Model Sessions.
Your team should gather and document high level system information, including:
This information will help the CMS Threat Modeling Team in the initial stages of creating your Threat Model.
The team should gather any existing diagrams such as architecture diagrams, sequence diagrams, etc. that would be helpful in understanding the system or application. This will help inform the creation (or update) of a Data Flow Diagram (DFD) during the first whiteboard session.
NOTE: The DFD doesn’t have to be created before the first Threat Modeling session – it can be created together with the CMS Threat Modeling Team.
Before conducting the Threat Model Session, it is important to identify the key stakeholders who will be participating in the creation of the Threat Model. These perspectives/personas are critical to a successful Threat Modeling session. You can use the following table to inform your work to develop these personas:
Someone who understands the current application design, and has had the most depth of involvement in the design decisions made to date.
They were involved in design brainstorming or whiteboarding sessions leading up to this point, when they would typically have been thinking about threats to the design and possible mitigations to include.
This is used to help answer “What are we working on?” in terms of changes to the system.
The CMS Threat Modeling Team uses Confluence to organize their threat models. Copy the Threat Model Template to your own Confluence space, and record the data collected in the previous steps.
Work with your team to coordinate dates and times, and then reach out to the CMS Threat Modeling Team to schedule your Threat Model Sessions. It’s up to the team if they prefer to have one session or to break it up into multiple sessions. Breaking up the session (e.g., three sessions, two hours each, one day apart) gives the team the time and space to learn the structure and concepts involved before going into the next session.
Send a welcome email to everyone who will attend your Threat Modeling Session. Be sure to include the following in your email:
These shared resources will allow everyone on the team to have access to the information they need to successfully complete the Threat Model.
As a structured method of Threat Modeling, STRIDE is meant to help teams locate threats in a system. It offers a way to organize information so that teams can plan how to mitigate or eliminate the threats. Remember that the acronym STRIDE stands for the six types of threats that the framework helps to identify:
Spoofing Identity
Identity spoofing occurs when a hacker pretends to be another person, assuming that identity and the information associated with it to commit fraud. A very common example of this threat is an email sent from a false email address that appears to come from someone else. Typically, these emails request sensitive data. A vulnerable or unaware recipient provides the requested data, and the hacker is then easily able to assume the new identity.
Faked identities can include both human and technical identities. Through spoofing, a hacker can gain access through just one vulnerable identity and then execute a much larger cyber attack.
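One common class of mitigation for spoofing of technical identities is cryptographic verification of the sender. The sketch below is a minimal, hypothetical example (the message format and key handling are assumptions, not a CMS standard) showing how an HMAC signature can be checked so that a message from a spoofed sender, who lacks the shared secret, is rejected:

```python
import hmac
import hashlib

def sign_message(message: bytes, shared_secret: bytes) -> str:
    """Compute an HMAC-SHA256 signature that the legitimate sender attaches to a message."""
    return hmac.new(shared_secret, message, hashlib.sha256).hexdigest()

def verify_sender(message: bytes, signature: str, shared_secret: bytes) -> bool:
    """Reject messages whose signature does not match; a spoofed sender cannot forge it."""
    expected = sign_message(message, shared_secret)
    return hmac.compare_digest(expected, signature)

secret = b"example-shared-secret"            # hypothetical key; never hard-code secrets in real systems
msg = b"please update the payment address"
print(verify_sender(msg, sign_message(msg, secret), secret))   # True  (legitimate sender)
print(verify_sender(msg, "forged-signature", secret))          # False (spoofed sender)
```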
Tampering With Data
Data tampering occurs when data or information is changed without authorization. A bad actor can execute tampering by changing a configuration file to gain system control, inserting a malicious file, or deleting or modifying a log file.
Change monitoring, also known as file integrity monitoring (FIM), is essential to integrate into your business to identify if and when data tampering occurs. This process compares files against a baseline of what a ‘good’ file looks like. Proper logging and storage are critical to support file monitoring.
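To illustrate the idea behind file integrity monitoring, the sketch below (the file paths and baseline values are hypothetical) hashes files and compares them against a recorded known-good baseline to flag possible tampering:

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_integrity(baseline: dict) -> list:
    """Compare current file hashes against a known-good baseline and report changes."""
    findings = []
    for name, known_good in baseline.items():
        path = Path(name)
        if not path.exists():
            findings.append(f"{name}: missing (possible deletion)")
        elif file_hash(path) != known_good:
            findings.append(f"{name}: hash mismatch (possible tampering)")
    return findings

# Hypothetical baseline captured when the files were known to be good
baseline = {"app/config.yaml": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"}
for finding in check_integrity(baseline):
    print(finding)
```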
Repudiation Threats
Repudiation threats happen when a bad actor performs an illegal or malicious operation in a system and then denies their involvement with the attack. In these attacks, the system lacks the ability to trace the malicious activity and identify the hacker.
Repudiation attacks are relatively easy to execute on e-mail systems, as very few systems check outbound mail for validity. Most of these attacks begin as access attacks.
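A primary defense against repudiation is tamper-evident audit logging that ties every sensitive action to an authenticated identity and timestamp. The following is a minimal, hypothetical sketch (the field names and append-only log file are assumptions) of recording such an audit trail, with each record chained to the hash of the previous one so silent edits or after-the-fact denial become detectable:

```python
import json
import hashlib
from datetime import datetime, timezone

def append_audit_record(log_path: str, user: str, action: str, prev_hash: str) -> str:
    """Append an audit record that chains to the previous record's hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "prev_hash": prev_hash,
    }
    record_hash = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a") as log:
        log.write(json.dumps({**record, "hash": record_hash}) + "\n")
    return record_hash

# Example: each record references the hash of the record before it
h = append_audit_record("audit.log", "jdoe", "exported beneficiary report", prev_hash="genesis")
append_audit_record("audit.log", "jdoe", "deleted draft record", prev_hash=h)
```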
Information Disclosure
Information disclosure is also known as information leakage. It happens when an application or website unintentionally reveals data to unauthorized users. This type of threat can affect the processes, data flows, and data stores in an application. Some examples of information disclosure include unintentional access to source code files via temporary backups, unnecessary exposure of sensitive information such as credit card numbers, and revealing database information in error messages.
These issues are common, and can arise from internal content that is shared publicly, insecure application configurations, or flawed error responses in the design of the application.
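As a concrete example of the flawed error response case, the hypothetical sketch below (the function names, logger, and `db` object are illustrative) contrasts a handler that leaks internal database details with one that logs the detail internally and returns only a generic message:

```python
import logging

logger = logging.getLogger("app")

def lookup_account_unsafe(account_id: str, db) -> dict:
    try:
        return db.get(account_id)
    except Exception as exc:
        # BAD: leaks internal details (table names, SQL, stack info) to the caller
        return {"error": f"Query failed on table 'accounts': {exc}"}

def lookup_account_safe(account_id: str, db) -> dict:
    try:
        return db.get(account_id)
    except Exception:
        # GOOD: record the detail internally; return a generic, non-revealing message
        logger.exception("Account lookup failed for id=%s", account_id)
        return {"error": "An internal error occurred. Please contact support."}
```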
Denial of Service
Denial of Service (DoS) attacks restrict an authorized user from accessing resources that they should be able to access. This affects the processes, data flows, and data stores in an application.
Despite increases in DoS attacks, protective services such as AWS Shield and Cloudflare continue to be effective.
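Beyond managed protective services, applications often add their own throttling. The sketch below is a minimal token-bucket rate limiter (the capacity and refill rate are arbitrary example values, not a recommendation), one common way to keep a single client from exhausting a service's resources:

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: int = 10, rate: float = 5.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request should be rejected (e.g., with HTTP 429)

bucket = TokenBucket()
print([bucket.allow() for _ in range(12)])  # later requests in the burst are rejected
```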
Elevation of Privileges
Through elevation of privileges, an authorized or unauthorized user in the system can gain access to information that they are not authorized to see. This attack can be as simple as a missed authorization check, or it can involve data tampering, where the attacker modifies disk or memory to execute unauthorized commands.
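To make the "missed authorization check" case concrete, here is a hypothetical sketch (the role name, request object, and `user_store` are illustrative) of an endpoint that omits the check versus one that enforces it before acting:

```python
def delete_user_unsafe(request, user_id: str, user_store) -> str:
    # BAD: any authenticated caller can delete accounts; authorization is never checked
    user_store.delete(user_id)
    return "deleted"

def delete_user_safe(request, user_id: str, user_store) -> str:
    # GOOD: verify the caller holds the required privilege before performing the action
    if "admin" not in request.caller_roles:
        raise PermissionError("caller is not authorized to delete users")
    user_store.delete(user_id)
    return "deleted"
```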
When using the STRIDE method for Threat Modeling to create your DFD, your team can evaluate threats per interaction and per element. To do this, your team will need to analyze the potential risks associated with each interaction and element within your system. Remember that:
To apply the STRIDE method to your DFD and your Threat Model, your team will complete the following steps:
At the start of your analysis, your team will apply STRIDE per interaction to determine whether there are any threats related to the data flows between components. After completing the interaction analysis, you will then investigate additional threats by applying STRIDE per element. Any threats that fall outside of interactions and elements should be classified as unstructured threats.
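To illustrate the mechanics, the hypothetical sketch below represents a few DFD elements and data flows as plain data and walks each one against the STRIDE categories to produce a starting checklist. The element names are invented, and the category-to-element mapping follows the commonly used STRIDE-per-element convention; it is a simplification, not the full methodology:

```python
# Which STRIDE categories are typically considered for each DFD element type
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service", "Elevation of Privilege"],
    "data_store": ["Tampering", "Repudiation", "Information Disclosure", "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure", "Denial of Service"],
}

elements = [("User", "external_entity"), ("Web API", "process"), ("Claims DB", "data_store")]
flows = [("User", "Web API"), ("Web API", "Claims DB")]

# STRIDE per interaction (each data flow crossing the diagram)
for source, destination in flows:
    for threat in STRIDE_BY_ELEMENT["data_flow"]:
        print(f"[interaction] {source} -> {destination}: consider {threat}")

# STRIDE per element
for name, kind in elements:
    for threat in STRIDE_BY_ELEMENT[kind]:
        print(f"[element] {name}: consider {threat}")
```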
Consider how each type of threat can manifest and brainstorm potential attack scenarios or vulnerabilities that align with each category. Many development teams will already have ideas of what issues exist inside their systems. Their first-hand experience should be welcomed into the Threat Model Session. Key questions to ask during your session include: How would you attack the system? What are you (most) concerned about?
Evaluate the potential impact of each identified threat. Consider the consequences in terms of confidentiality, integrity, availability, regulatory compliance, or other relevant factors. Assess the potential damage or harm that can occur if the threat is successfully exploited. Also consider factors such as the level of access required, the complexity of the attack, the presence of mitigating controls, and the motivation and capabilities of potential attackers. Once the initial threat analysis is complete, your team may find that many of the threats are unlikely, low impact, and/or not in the scope of the team’s area of responsibility.
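One lightweight way to make this prioritization explicit is to score each threat's likelihood and impact and rank by the product. The following is only an illustrative sketch (the 1-5 scales and the threat entries are hypothetical, not a CMS scoring requirement):

```python
# Score each identified threat on 1-5 scales and rank by likelihood x impact
threats = [
    {"threat": "Verbose API errors leak schema details", "likelihood": 4, "impact": 3},
    {"threat": "Admin console reachable without MFA",     "likelihood": 2, "impact": 5},
    {"threat": "Log tampering on the batch server",       "likelihood": 1, "impact": 4},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

# Highest-risk threats first, to focus the mitigation discussion
for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["threat"]}')
```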
Review the remaining threats and work with the team, specifically the ISSO and Business Owner, to identify the major threats. The team should then build the proposed mitigation plan by identifying the team members responsible for mitigating each threat, estimating dates of completion, and including this information in the final report for follow-up at a later date (generally 90 days).
In order to answer the question “Did we do a good enough job?”, it is important to review the identified threats, understand the mitigations, determine the risks, and communicate the results with others.
Using the Threat Model Report Template, the data gathered from the Threat Model Session is transferred into a shared report or PDF that can be used for a final review with all stakeholders. It provides information from the Threat Model Session, including system information, the DFD, identified (possible) threats, and proposed mitigations. Your team's options for post-session reporting include:
Create a post-session email to all attendees thanking them for their participation and providing a link to the Threat Model Session feedback form. This information will be used by the CMS Threat Modeling Team for continuous improvement of the CMS Threat Modeling process.
Mitigation follow-up is managed by the application ISSO and should be completed approximately 90 days after the Threat Model Session. All mitigations should be commented on and updated, then attached to the Threat Model report.
| Term | Definition |
| --- | --- |
| Impact | A measure of the potential damage caused by a particular threat. Impact and damage can take a variety of forms. A threat may result in damage to physical assets, or may result in obvious financial loss. Indirect loss may also result from an attack and needs to be considered as part of the impact. |
| Likelihood | A measure of the possibility of a threat being carried out. A variety of factors can impact the likelihood of a threat being carried out, including how difficult the implementation of the threat is, and how rewarding it would be to the attacker. |
| Controls | Safeguards or countermeasures that you put in place in order to avoid, detect, counteract, or minimize potential threats against your information, systems, or other assets. |
| Preventions | Controls that may completely prevent a particular attack from being possible. |
| Mitigations | Controls that are put in place to reduce either the likelihood or the impact of a threat, while not completely preventing it. |
| Data Flow Diagram | A depiction of how information flows through your system. It shows each place that data is input into or output from each process or subsystem. It includes anywhere that data is stored in the system, either temporarily or long-term. |
| Trust boundary (in the context of Threat Modeling) | A location on the Data Flow Diagram where data changes its level of trust. Any place where data is passed between two processes is typically a trust boundary. If your application makes a call to a remote process, or a remote process makes calls to your application, that's a trust boundary. If you read data from a database, there's typically a trust boundary because other processes can modify the data in the database. Any place you accept user input in any form is always a trust boundary. |
| Workflows (Use Cases) | A written description of how users will perform tasks within your system or application. It outlines, from a user's point of view, a system's behavior as it responds to a request. Each workflow is represented as a sequence of simple steps, beginning with a user's goal and ending when that goal is fulfilled. |
| System Name | FISMA system name that can be found in CFACTS |
| System Description | High level description of the system that can be found in CFACTS |
| External Entity | An outside system or process that sends or receives data to and from the diagrammed system; sources or destinations of information |
| Process | A procedure that manipulates the data and its flow by taking incoming data, changing it, and producing an output with it. |
| Data Store | Holds information for later use waiting to be processed. Data inputs flow through a process and then through a data store, while data outputs flow out of a data store and then through a process. |
| Data Flow | The path the system’s information takes from external entities through processes and data stores. |
| Spoofing | Threat action aimed at accessing and use of another user’s credentials, such as username and password. |
| Tampering | Threat action intending to maliciously change or modify persistent data, and the alteration of data in transit between two computers over an open network, such as the Internet. |
| Repudiation | Threat action aimed at performing prohibited operations in a system that lacks the ability to trace the operations. |
| Information Disclosure | Threat action intending to read a file that one was not granted access to, or to read data in transit. |
| Denial of Service (DoS) | Threat action attempting to deny access to valid users, such as by making a web server temporarily unavailable or unusable. |
| Escalation of Privileges | Threat action intending to gain privileged access to resources in order to gain unauthorized access to information or to compromise a system. |
| Tuple | Looking at a section of a Data Flow Diagram by identifying the source, destination, and data type of the data flow. |
The following is a list of industry resources that the CMS Threat Modeling Team has identified as helpful for those within the CMS community who want to learn more about Threat Modeling: