

Zero Trust Theory: What NIST 800-207 Actually Defines

Mark Sabo, Training Camp

Zero Trust gets discussed constantly in security circles, but the theory underneath the term rarely gets the attention it deserves. Most practitioners can recite the slogan, "never trust, always verify," without grasping the architectural model that gives the phrase any real meaning. The theory matters because Zero Trust is an architecture rather than a product, and treating it as something you buy off a shelf is the single most common way implementations go sideways. NIST Special Publication 800-207 lays out the formal definition through seven specific tenets.

After two decades teaching network and security architecture, I’ve watched Zero Trust move through three distinct phases. It began in 2010 as a Forrester research concept proposed by analyst John Kindervag. By 2018 the term had become a marketing label slapped onto roughly every security product on the market. Now, following Executive Order 14028 and the federal mandate requiring civilian agency adoption by fiscal year 2027, Zero Trust is a formally defined architectural framework backed by NIST, CISA, and DoD reference documents. The theory underneath all three phases stayed the same. Understanding that theory is what separates practitioners who implement Zero Trust correctly from those who buy a vendor product and call the project finished.

Zero Trust is an architectural model, not a technology. NIST SP 800-207 defines it through seven specific tenets. Everything else is implementation choice.


Where Zero Trust Theory Actually Came From

Zero Trust as a named model traces directly to John Kindervag’s 2010 Forrester research paper titled “No More Chewy Centers: Introducing the Zero Trust Model of Information Security.” Kindervag’s central observation was that traditional perimeter security treated the inside of a corporate network as inherently trustworthy and the outside as inherently hostile. That assumption broke down completely once mobile devices, cloud services, remote work, and lateral movement attacks made the perimeter porous beyond repair.

The deeper theoretical roots go back further. The Jericho Forum, a security industry consortium active in the mid 2000s, published its Commandments in 2007 advocating for “deperimeterization.” Their model called for security controls that travel with the data and the user rather than living at the network edge. Kindervag’s contribution was packaging those abstract ideas into an operational model that an enterprise architect could actually implement.

NIST formalized the architecture in August 2020 with Special Publication 800-207, turning Zero Trust from research concept into government standard. CISA followed with the Zero Trust Maturity Model in 2021, then updated it to version 2.0 in 2023. DoD released its Zero Trust Reference Architecture in 2022 along with a strategy that established the FY2027 implementation deadline for the entire department. By 2026, every major federal civilian agency operates under formal Zero Trust adoption plans, and the architectural theory shows up directly in CISSP Domain 1 content, the CCSP exam, and Security+ objectives.


The Seven Tenets of Zero Trust According to NIST 800-207

NIST SP 800-207 defines Zero Trust through seven foundational tenets. These aren’t best practices or recommendations. They constitute the formal definition of the model, and any architecture that fails to honor them isn’t really Zero Trust regardless of what a vendor’s marketing says.

I teach the tenets in order because each one reinforces the previous. Together they answer one question: what does it mean to design a system that grants no implicit trust by default?

📐 The Seven Tenets (NIST SP 800-207)
TENET 1

All data sources and computing services are considered resources. A laptop, a SaaS application, an API endpoint, an IoT sensor: each one is a resource subject to the same architectural rules.

TENET 2

All communication is secured regardless of network location. Internal traffic gets the same authentication and encryption requirements as traffic crossing the public internet.

TENET 3

Access to individual enterprise resources is granted on a per session basis. Authentication for one session doesn’t grant access to other resources or future sessions.

TENET 4

Access to resources is determined by dynamic policy. The policy considers identity, device state, behavioral analytics, environmental conditions, and other context, not just credentials.

TENET 5

The enterprise monitors and measures the integrity and security posture of all owned and associated assets. No device gets a permanent pass based on past compliance.

TENET 6

All resource authentication and authorization are dynamic and strictly enforced before access is allowed. Decisions get re-evaluated as context changes throughout the session.

TENET 7

The enterprise collects as much information as possible about the current state of assets, network infrastructure, and communications, then uses it to improve the security posture.

Reading the Tenets Together

The tenets describe a coherent worldview, not a product feature list.

Tenet 1 redefines what counts as a resource. Tenet 2 eliminates the location based trust assumption that made flat enterprise networks so vulnerable. The middle four (Tenets 3 through 6) govern how authorization happens and what informs each decision. Tenet 7 closes the loop by requiring continuous data collection that feeds the policy engine.

Notice what isn’t in the tenets. There’s no requirement for any specific product, vendor, or network technology. NIST deliberately wrote the tenets at the architectural level so that organizations can implement them with whatever combination of tools fits their environment. That’s a feature, not a bug. Architecture should outlast any individual technology.


The Implicit Trust Zone Problem

The single most important theoretical concept in Zero Trust is the implicit trust zone. NIST SP 800-207 defines it as the area within an architecture where subjects are trusted by default after some initial authentication event. In traditional perimeter security, the implicit trust zone was the entire internal corporate network. Once you got past the firewall and authenticated to the VPN, you were assumed trustworthy until something went obviously wrong.

That model created a systemic vulnerability. An attacker who got inside the perimeter (through phishing, a compromised vendor, a stolen laptop, or an insider threat) gained access to a large implicit trust zone where lateral movement was easy and detection was difficult. Most major breaches in the past decade share the same shape. Initial compromise of one system, followed by weeks or months of undetected lateral movement, ending in data exfiltration or ransomware deployment.

The 2013 Target breach started with a compromised HVAC vendor and ended with 40 million card numbers leaving the network through internal systems that should never have been reachable from a vendor portal. Two years later the OPM compromise stretched the same dynamic across federal personnel networks, with attackers persisting long enough to exfiltrate 22 million records before anyone caught on. SolarWinds in 2020 took the model to supply chain scale, where the implicit trust between SolarWinds and its thousands of downstream customers turned a single vendor compromise into a federal incident. None of those breaches would have looked the same inside a Zero Trust environment, not because the initial compromise would have been prevented (initial compromises happen) but because the lateral movement window would have been compressed enough to catch the activity before it reached the target data.

Zero Trust theory addresses this by aggressively shrinking the implicit trust zone. The goal is to compress it to the smallest possible footprint, ideally a single subject to resource interaction at one specific moment. Every other interaction requires a fresh authentication and authorization decision based on current context. The implicit trust zone never disappears entirely, because you have to trust something somewhere, but it gets reduced to its minimum viable size.

⚖️ Traditional Perimeter vs Zero Trust Architecture
PERIMETER

Trust granted at network entry. Large implicit trust zone covering the internal network. Lateral movement permitted by default. Static defenses configured at the edge.

ZERO TRUST

Trust evaluated per request. Implicit trust zone compressed to a single interaction. Lateral movement requires fresh authorization. Dynamic policy decisions driven by current context.

This is why network segmentation alone isn’t Zero Trust. Microsegmentation reduces the size of trust zones, sure. But Zero Trust requires that even within those small zones, individual access decisions get made dynamically. The theory demands per session, per request evaluation, not bulk trust granted to anyone who happens to be on a particular network segment.


How the Policy Decision Point and Policy Enforcement Point Work

NIST 800-207 specifies a logical architecture with two core components: the Policy Decision Point (PDP) and the Policy Enforcement Point (PEP). Together they form what the document calls the policy engine, and understanding how they interact is fundamental to understanding the entire model.

The Policy Decision Point is where access decisions actually get made. When a subject (a user, device, or service) requests access to a resource, the PDP evaluates the request against current policy, available context, and information from supporting systems. The decision incorporates identity attributes, device posture, time of day, location, behavioral analytics, threat intelligence, and whatever other signals the policy engine has been configured to use, then returns a binary answer of allow or deny.

The Policy Enforcement Point is the component that actually allows or blocks the access. It sits between the subject and the resource, intercepts requests, asks the PDP what to do, and enforces the decision. The PEP is logically distinct from the PDP, even when they’re implemented in the same product, because the separation matters architecturally. One PDP can make decisions for thousands of PEPs distributed across an enterprise. That’s how Zero Trust scales beyond a single product or network segment.

A useful classroom analogy. Think of the PDP as the brain doing the analysis behind the scenes. The PEP is the muscle in front of every resource, asking the brain whether each individual request should get through. Real Zero Trust deployments have many PEPs distributed throughout the environment, sitting in front of applications, APIs, network segments, data stores, and infrastructure components. They all consult a central PDP, or a federated set of PDPs, for decisions.
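To make the brain-and-muscle split concrete, here's a minimal Python sketch of the two logical components. The resource name, context keys, and the single policy rule are hypothetical, invented for illustration; a real PDP consumes far richer signals and policy than this.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    subject: str   # user, device, or service asking for access
    resource: str  # the enterprise resource being requested
    context: dict  # device posture, location, time, and other signals


class PolicyDecisionPoint:
    """The 'brain': evaluates each request against policy and context."""

    def decide(self, req: AccessRequest) -> bool:
        # Hypothetical rule: only a healthy device may reach the payroll API
        if req.resource == "payroll-api":
            return bool(req.context.get("device_healthy"))
        return False  # deny by default: no implicit trust anywhere


class PolicyEnforcementPoint:
    """The 'muscle' in front of a resource: consults the PDP per request."""

    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp  # many PEPs can share one PDP; that's how it scales

    def handle(self, req: AccessRequest) -> str:
        return "allow" if self.pdp.decide(req) else "deny"


pep = PolicyEnforcementPoint(PolicyDecisionPoint())
print(pep.handle(AccessRequest("alice", "payroll-api", {"device_healthy": True})))  # → allow
```

Note that the PEP never decides anything itself; it only intercepts and enforces. That separation is what lets one decision engine govern thousands of enforcement points.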


The Trust Algorithm and How Decisions Get Made

The trust algorithm is the logical process inside the PDP that takes inputs and produces a decision. NIST SP 800-207 doesn’t prescribe a specific algorithm. It outlines two general approaches that real implementations tend to combine.

One approach is criteria based. Access decisions check whether specific conditions are met, things like a valid certificate on the device, current patch level reported through MDM, origin geography on the approved list, and time of access falling within allowed windows for that user’s role. If every required criterion passes, access is granted. Simpler to implement and easier to audit, which is part of why compliance teams tend to prefer it when they have a choice.
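A criteria based check can be sketched as a set of named predicates that must all pass. The specific criteria and context keys below are illustrative assumptions, not anything NIST prescribes:

```python
# Each criterion is a named predicate over the request context.
# The criteria themselves are hypothetical examples.
REQUIRED_CRITERIA = {
    "valid_cert": lambda ctx: ctx.get("cert_valid", False),
    "patched": lambda ctx: ctx.get("patched", False),
    "approved_geo": lambda ctx: ctx.get("country") in {"US", "CA"},
    "in_window": lambda ctx: 8 <= ctx.get("hour", -1) < 18,
}


def criteria_based_decision(ctx: dict) -> bool:
    # Deny unless every required criterion passes -- no partial credit
    return all(check(ctx) for check in REQUIRED_CRITERIA.values())
```

The audit appeal is visible in the structure: each denial maps to a specific named criterion that failed.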

The score based approach takes a different angle, what some practitioners call confidence scoring. Each input contributes a weighted value, and the algorithm combines them into a confidence score that gets compared against a threshold. Take a user with strong authentication, a healthy device, normal behavioral patterns, and a request consistent with their role. They might score high enough to access sensitive resources. Drop in a compromised device or an unfamiliar location, and that same user might score below the threshold for the same resource even when every individual control still passes in isolation.
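A score based evaluation might look like the following sketch. The signal names, weights, and the 0.7 threshold are invented for illustration; real policy engines tune these against observed risk:

```python
# Weighted signals combine into a confidence score compared against a threshold.
# Names and weights are illustrative assumptions.
SIGNAL_WEIGHTS = {
    "strong_auth": 0.35,      # phishing-resistant MFA completed
    "device_healthy": 0.30,   # posture reported healthy by MDM
    "normal_behavior": 0.20,  # session matches the user's behavioral baseline
    "known_location": 0.15,   # request origin consistent with history
}


def confidence_score(signals: dict) -> float:
    """Sum the weights of every signal that is currently true."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))


def score_based_decision(signals: dict, threshold: float = 0.7) -> bool:
    return confidence_score(signals) >= threshold
```

With these weights, strong authentication plus a healthy device and normal behavior scores 0.85 and clears the 0.7 bar. Lose the device posture signal and the location match, and the same user drops to 0.55 and gets denied, even though no single control "failed."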

Most enterprise Zero Trust implementations combine both. Hard criteria handle the absolute requirements: a user can’t reach classified data without a clearance, period. Scoring layers on top to catch the contextual nuances that pure criteria checks miss, the kind of “something feels off about this session” signals that come from behavioral analytics or threat intelligence feeds.
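Stacking the two, a combined decision might gate on the hard criteria first and only then consult the confidence score. The context keys here are hypothetical:

```python
def combined_decision(ctx: dict, score: float, threshold: float = 0.7) -> bool:
    """Hard criteria gate first; the confidence score decides the rest."""
    # Absolute requirement: classified data demands a clearance,
    # and no confidence score can override that (hypothetical keys)
    if ctx.get("classified") and not ctx.get("has_clearance"):
        return False
    # Contextual nuance: the scored signals against the threshold
    return score >= threshold
```

The ordering matters: a perfect score never bypasses an absolute requirement, which is exactly the property auditors want to see.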

The trust algorithm is also where architectural theory bumps up against practical engineering. Theoretically, every input should refresh continuously and decisions should re-evaluate constantly. In practice, the cost of perpetual re-evaluation is prohibitive. Real implementations cache decisions for short windows and re-evaluate when triggers fire, things like session timeout, change in device posture, geographic anomaly, or a request for a more sensitive resource than the user previously accessed.
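That trigger driven pattern can be sketched as a short lived cached decision that any trigger event invalidates. The trigger names and the five minute TTL are assumptions for illustration:

```python
import time

# Hypothetical events that force a fresh PDP decision
REEVALUATION_TRIGGERS = {
    "session_timeout",
    "posture_change",
    "geo_anomaly",
    "sensitivity_increase",
}


class CachedDecision:
    """Caches an allow decision for a short window; any trigger invalidates it."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.granted_at = None

    def grant(self) -> None:
        self.granted_at = time.monotonic()

    def still_valid(self, events: set) -> bool:
        if self.granted_at is None:
            return False  # never granted: must ask the PDP
        if events & REEVALUATION_TRIGGERS:
            return False  # a trigger fired: re-evaluate now
        return time.monotonic() - self.granted_at < self.ttl
```

The cache is an engineering concession, not a trust grant: the moment any trigger fires, the subject is back in front of the PDP.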


The Variants of Zero Trust Architecture

NIST SP 800-207 describes several variants of Zero Trust architecture, distinguished by where the architectural emphasis lands. No variant is “more correct” than the others. They represent different ways to apply the same underlying theory.

Identity Based Zero Trust

The identity centric variant builds the architecture around user identity as the primary trust signal. Strong authentication, continuous identity verification, behavioral analytics tied to user accounts, and mature identity governance form the foundation. Access decisions weight identity signals heavily. Common in enterprises with sophisticated identity programs and highly distributed workforces, especially after the shift to permanent hybrid work.

Microsegmentation Based Zero Trust

Network centric Zero Trust emphasizes dividing the network into many small zones, each with its own Policy Enforcement Point. Lateral movement between zones requires fresh authorization. This variant fits well for protecting industrial control systems, OT environments, and any infrastructure where the network itself is the primary defense layer. Defense contractors and utilities tend to land here because their environments often don’t tolerate the latency that aggressive identity based controls introduce.

Software Defined Perimeter

SDP, sometimes called the cloud or gateway variant, hides resources behind an authentication proxy that establishes encrypted tunnels only after the subject is authenticated and authorized. Until then, the protected resources are invisible to scanners and unreachable to attackers. This is the variant most commercial Zero Trust Network Access (ZTNA) products implement, and it’s the reference architecture for cloud first organizations whose primary attack surface lives outside any corporate data center.

Most real enterprise deployments combine elements from multiple variants. The theory accommodates this. What matters is that the seven tenets actually get honored across the architecture, and that the implicit trust zone shrinks to its smallest viable footprint. How any specific organization gets there depends on what’s already in the stack and which variant fits the environment best.


What Zero Trust Theory Doesn’t Actually Say

A surprising amount of confusion comes from claims about Zero Trust that aren’t actually in the theory. Several of these misconceptions get repeated frequently enough that they shape how organizations implement (or fail to implement) the architecture. Worth pulling them apart.

Zero Trust Doesn’t Eliminate Firewalls and VPNs

The theory doesn’t say firewalls disappear or that VPNs become obsolete. What changes is their architectural role. They stop being the primary trust boundary, and the implicit trust they used to grant gets replaced by per session authorization checks. A traditional firewall protecting a high value subnet inside a Zero Trust environment is still useful, as long as access through it is governed dynamically rather than handed out wholesale to anyone past the front door.

This Is Not a Rip and Replace Project

NIST and CISA both recognize that Zero Trust adoption is a multi year journey, not a single project. CISA’s Zero Trust Maturity Model 2.0 explicitly defines four maturity stages (Traditional, Initial, Advanced, Optimal) precisely because the model assumes incremental adoption. You can apply Zero Trust principles to identity first, then to applications, then to data, expanding coverage over time. Federal agencies aren’t rebuilding their entire infrastructure to meet the FY2027 deadline. They’re working through prioritized capability areas.

It’s About Architecture, Not User Trust

This one comes up in awareness training conversations more often than I would expect. Zero Trust theory is not making a claim about how trustworthy users are as people. The model is targeting implicit trust at the architectural level. It assumes that any system, account, or device might be compromised, so no architectural component should grant access based on assumed trustworthiness. The user’s actual character has nothing to do with it.


Why the Theory Matters for Practitioners

Twenty years of teaching architecture has shown me a clear pattern. Practitioners who understand the theory build better Zero Trust implementations than those who learn it as a product checklist. Grasp the implicit trust zone, and decisions about network segmentation get sharper. Without the PDP/PEP architecture in mind, it’s easy to conflate an identity platform with the entire model and miss what’s still missing. The seven tenets as a coherent definition also put an end to most of the pointless arguments about whose product “is” Zero Trust.

Theory matters for certification candidates too. CISSP exam questions test architectural understanding rather than vendor specifics. The CCSP applies the same model to cloud environments. And Security+ now folds Zero Trust foundational principles into the SY0-701 objectives. Candidates who memorize “never trust always verify” without grasping the architecture struggle on scenario based questions where the right answer depends on knowing what the model actually requires versus what it merely permits. There’s a related discussion in our piece on why Zero Trust matters in practice, and a separate look at how the major certifications cover the model.

By the end of this decade, every enterprise architect will need to design with Zero Trust assumptions in mind, whether the organization formally adopts the model or not. The federal FY2027 deadline has accelerated tooling and reference implementations across the industry, and the downstream effects are reaching commercial environments faster than many people expected. Spending the time to learn the theory now means avoiding the rework that comes from implementations built on a vendor brochure rather than NIST 800-207.

🎯 The Theory in One Paragraph

Zero Trust is an architectural security model defined by NIST SP 800-207 through seven specific tenets. It assumes that no implicit trust should be granted based on network location, identity alone, or device ownership. Every access decision gets evaluated dynamically by a Policy Decision Point and enforced by a Policy Enforcement Point, with the goal of compressing the implicit trust zone to the smallest possible footprint. Implementations designed with that architecture in mind tend to hold up under both audit and incident response. Skipping the theory leaves you with an environment that markets well to leadership but still grants every kind of implicit trust the model was designed to eliminate.


Frequently Asked Questions About Zero Trust Theory

What is the theoretical foundation of Zero Trust?

Zero Trust is built on the principle that no implicit trust should be granted to any subject, device, or network segment based solely on location or prior authentication. NIST SP 800-207 codifies this through seven tenets that define how access decisions get made, how trust zones get compressed, and how Policy Decision Points and Policy Enforcement Points interact to evaluate every request dynamically. The architecture replaces the perimeter based assumption that internal traffic is trustworthy.

Who created Zero Trust?

Zero Trust as a named model was introduced by John Kindervag in a 2010 Forrester Research paper titled “No More Chewy Centers: Introducing the Zero Trust Model of Information Security.” The deeper theoretical foundations come from the Jericho Forum’s 2007 Commandments on deperimeterization. NIST formalized the architecture in August 2020 with Special Publication 800-207, which is the document most widely cited as the formal definition of the model.

What are the seven tenets of Zero Trust?

NIST SP 800-207 lists seven tenets covering resource definition, secured communication regardless of network location, per session access, dynamic policy, continuous monitoring of asset posture, dynamic authentication and authorization, and continuous data collection used to improve security posture. Together they constitute the formal definition of Zero Trust architecture, and any environment that fails to honor them isn’t really Zero Trust regardless of what tools are deployed.

Is Zero Trust a product or an architecture?

Zero Trust is an architecture, not a product. NIST 800-207 deliberately defines the model at an architectural level so that organizations can implement it with whatever combination of identity, network, and access control tools fits their environment. Vendors who market products as “Zero Trust” are referring to tools that help implement the architecture, not to a finished product that delivers the model on its own.

How is Zero Trust different from traditional perimeter security?

Traditional perimeter security treats the internal network as a large implicit trust zone where authenticated users move freely. Zero Trust shrinks the implicit trust zone to its smallest possible size, ideally a single subject to resource interaction, and requires fresh authorization for every access attempt regardless of network location. The architectural goal is to eliminate the lateral movement that perimeter models permit by default after initial authentication.

What is NIST SP 800-207?

NIST Special Publication 800-207 is the National Institute of Standards and Technology document that formally defines Zero Trust architecture for the United States federal government and the broader cybersecurity industry. Published in August 2020, it specifies the seven tenets, describes the Policy Decision Point and Policy Enforcement Point components, and outlines variants of Zero Trust architecture. CISA, DoD, and most enterprise frameworks reference it as the authoritative source.

Does Zero Trust eliminate the need for firewalls and VPNs?

No. Zero Trust theory changes the role of firewalls and VPNs but doesn’t make them obsolete. Firewalls remain useful for segmentation and traffic filtering, but they stop functioning as the primary trust boundary. VPNs can still encrypt traffic, but they no longer grant blanket access to the internal network on the assumption that authenticated users are trustworthy. Both technologies operate within a Zero Trust architecture rather than substituting for it.

Mark Sabo

Director, Educational Services | Training Camp

Mark Sabo is the Director of Educational Services at Training Camp, where he oversees the training team, course design, and certification program development. He holds a B.S. in Information Sciences and Technology from Penn State University and more than 50 industry certifications. Mark joined Training Camp in 2005, became a Technical Trainer in 2007, and assumed his current leadership role in 2015. His specialty is practice exam development and exam preparation strategy, built from years of teaching students in the classroom and studying how certification exams are constructed. His writing focuses on the technical details that matter most to professionals preparing for high stakes exams.