Episode 9 — Classify Data with FIPS 199
We begin with the core terms because precision here prevents later confusion. Confidentiality asks what happens if unauthorized parties see the data. Integrity asks what happens if the data or system state is altered without authorization. Availability asks what happens if the data or service cannot be used when needed. Each category is independent and must be judged on its own merits; raising one does not automatically raise the others. The power of this triad is that it forces teams to think in consequences rather than technologies. Instead of asking which cipher suite is fashionable, you ask, “What injury would occur if this information were disclosed, tampered with, or withheld at the moment of need?” Answers tied to mission harm reveal the impact level with far less debate.
F I P S 199 then assigns each category—confidentiality, integrity, availability—a level of Low, Moderate, or High based on potential harm to the mission, organization, and individuals. Low indicates limited adverse effect: inconvenience, minor degradation, or small financial loss. Moderate signals serious adverse effect: significant operational damage, substantial cost, or notable harm to public confidence. High denotes severe or catastrophic adverse effect: mission failure, major financial loss, life safety implications, or widespread harm. The discipline is to map plausible incidents to these definitions using plain language. If a system’s failure would delay case processing by hours with workarounds available, availability might be Moderate. If tampered records could drive wrong benefits decisions at scale, integrity could be High. The words are simple, but the calibration they enforce is profound.
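If your team records these decisions in a small script or worksheet generator, a minimal sketch like the one below can hold them. The level names mirror the standard, and the comment shows the notation F I P S 199 itself uses, but the class names and example values are illustrative assumptions, not an official format.

```python
from enum import IntEnum
from typing import NamedTuple

class Impact(IntEnum):
    """FIPS 199 potential impact values, ordered so comparisons work."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

class SecurityCategory(NamedTuple):
    """One categorization: an impact value per security objective.

    FIPS 199 writes this as:
      SC = {(confidentiality, impact), (integrity, impact), (availability, impact)}
    """
    confidentiality: Impact
    integrity: Impact
    availability: Impact

# Illustrative values echoing the examples above, not a real system:
example = SecurityCategory(
    confidentiality=Impact.MODERATE,
    integrity=Impact.HIGH,         # tampered records could drive wrong decisions at scale
    availability=Impact.MODERATE,  # delays of hours, with workarounds available
)
```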
Let’s apply the lens with a concrete example. Consider a system that stores citizen records to deliver routine services. Disclosure of personally identifiable information is more than a public-relations issue; it harms individuals and erodes trust, so confidentiality rarely sits at Low. Availability also matters because delayed access could disrupt service delivery, but many programs have manual fallback or batch processing that cushions downtime. A reasonable outcome might be Moderate for confidentiality and availability, with integrity also at Moderate if the data shapes decisions but has controls to detect and correct errors. The point is not to rubber-stamp “Moderate for everything.” It is to narrate the harm: whose mission falters, which individuals are injured, and how quickly. When the narrative convinces, the level follows naturally.
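One way to keep that harm narrative attached to the numbers is to record both together. The sketch below does this for the hypothetical citizen-records system; the wording and structure are assumptions for illustration, not a prescribed template.

```python
# Hypothetical citizen-records system from the example above.
# Each entry pairs the chosen level with a one-sentence harm narrative,
# so the rationale never gets separated from the number.
citizen_records_categorization = {
    "confidentiality": {
        "level": "Moderate",
        "rationale": "Disclosure of personally identifiable information harms "
                     "individuals and erodes public trust in the program.",
    },
    "integrity": {
        "level": "Moderate",
        "rationale": "The data shapes service decisions, but controls exist to "
                     "detect and correct erroneous records.",
    },
    "availability": {
        "level": "Moderate",
        "rationale": "Delayed access disrupts service delivery, but manual "
                     "fallback and batch processing cushion short outages.",
    },
}

for objective, entry in citizen_records_categorization.items():
    print(f"{objective}: {entry['level']} -- {entry['rationale']}")
```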
One of the most common pitfalls is copy-pasting impact levels from a different system because “it looked similar.” That shortcut almost guarantees misfit. Two applications can share a domain and user base yet have radically different consequences when they fail. A read-only portal that mirrors approved data does not carry the same integrity risk as the system of record that feeds it. A public information site with static content does not carry the same confidentiality sensitivity as a case-management API handling eligibility decisions. When teams inherit numbers without inheriting the harm story, reviewers sense the mismatch and request a fresh analysis—usually at an awkward phase of the project. Avoid this by doing the short work up front: describe your system’s mission ties and decide your own levels on that basis.
A quick win is to hold a short “triad workshop” with the right voices and a one-page worksheet. Invite the system owner, a mission representative, the Information System Security Officer, and a risk manager from the sponsoring agency if available. In thirty minutes, walk each category, name credible bad events, and tag each with Low, Moderate, or High using F I P S 199 language. Record the rationale in full sentences, not shorthand. You are not seeking poetic prose; you are building an audit-ready justification while all the stakeholders are still on the call. The modest investment pays back immediately because the rest of the security package can point to this page as the source of truth for baseline selection and control tailoring.
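If you want the worksheet to stay audit-ready without extra effort, a small structured record that prints the one-page artifact can help. This is a sketch under assumed field names, not a mandated layout.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TriadEntry:
    objective: str       # "confidentiality", "integrity", or "availability"
    credible_event: str  # the named bad event discussed in the workshop
    level: str           # "Low", "Moderate", or "High"
    rationale: str       # full-sentence justification in FIPS 199 language

@dataclass
class TriadWorksheet:
    system_name: str
    workshop_date: str
    participants: List[str]
    entries: List[TriadEntry] = field(default_factory=list)

    def render(self) -> str:
        """Produce the one-page text worksheet for the repository."""
        lines = [f"FIPS 199 worksheet: {self.system_name} ({self.workshop_date})",
                 f"Participants: {', '.join(self.participants)}", ""]
        for e in self.entries:
            lines += [f"{e.objective.title()}: {e.level}",
                      f"  Event: {e.credible_event}",
                      f"  Rationale: {e.rationale}", ""]
        return "\n".join(lines)
```

Printing the rendered worksheet gives you the single page the rest of the security package can point to as its source of truth.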
Availability decisions often need special attention because uptime expectations can quietly elevate risk. Imagine a workflow that coordinates emergency resource dispatch or time-critical regulatory actions. Even brief outages could cascade into missed statutory deadlines, financial penalties, or safety impacts. In such a scenario, availability might be High even if confidentiality sits at Moderate. The crucial move is to document the rationale clearly: name the time sensitivity, the consequence of delay, and the absence of viable workarounds. Then tie the level to design choices—redundant regions, tested failover, practiced restoration—and to monitoring commitments that demonstrate you meet the heightened bar. When rationale, design, and monitoring align, reviewers nod rather than negotiate.
Keep the memory anchor close: “C I A impacts decide security baseline strength.” This phrase reminds teams that impact comes first and everything else—control depth, assessment intensity, and continuous monitoring cadence—flows from it. If the triad says Moderate across the board, your planning, documentation, testing, and operations should reflect a Moderate baseline without apologizing for not building a High-assurance cathedral. If a High emerges, it should echo throughout the package: stronger identity assurance, tighter change controls, deeper logging, and more frequent reporting. The anchor keeps you honest when a preference, a vendor pitch, or a habit tries to overturn a risk-based choice.
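In practice, the overall system impact is the high-water mark of the three categories, and that single value is what selects the control baseline. Here is a short sketch of that rule; the function names are illustrative.

```python
from enum import IntEnum

class Impact(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def high_water_mark(confidentiality: Impact, integrity: Impact, availability: Impact) -> Impact:
    """Overall system impact is the highest of the three category impacts."""
    return max(confidentiality, integrity, availability)

def baseline_for(overall: Impact) -> str:
    """Impact decides baseline strength: Low, Moderate, or High control baseline."""
    return {Impact.LOW: "Low baseline",
            Impact.MODERATE: "Moderate baseline",
            Impact.HIGH: "High baseline"}[overall]

# Moderate across the board stays Moderate; one High pulls the whole system to High.
print(baseline_for(high_water_mark(Impact.MODERATE, Impact.MODERATE, Impact.MODERATE)))
print(baseline_for(high_water_mark(Impact.MODERATE, Impact.HIGH, Impact.MODERATE)))
```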
A quick mini-review consolidates agreement and reveals gaps. Ask each participant to state the three categories and the chosen level aloud, followed by one sentence of justification. The spoken version surfaces uncertainty faster than silent reading does. If someone hesitates over availability because they are unsure about fallback processes, that is a signal to investigate before locking the decision. If two stakeholders give conflicting harm narratives for integrity, that is your cue to clarify who consumes the data and how wrongness would propagate. Treat this readback as a safety check; it costs a minute and prevents weeks of rework born from silent disagreement.
Impact decisions deserve evidence, so assemble a compact packet: the filled worksheet, the justification narrative, and a sign-off by the risk owner or Authorizing Official delegate. Include references to mission documents or service-level obligations if they influenced the call. The sign-off need not be theatrical, but it should be explicit enough to show true sponsorship: a name, a date, and a title. Place the packet in your repository and link it in the System Security Plan where you declare the F I P S 199 result. Now the classification is not just an opinion; it is an organizational decision with traceable ownership, which is exactly what assessors and authorizing officials expect to see.
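A minimal sketch of the packet metadata, assuming a simple repository convention; the field names, paths, and sign-off details shown are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SignOff:
    name: str   # who accepted the decision
    title: str  # for example, risk owner or Authorizing Official delegate
    date: str   # date of sign-off

@dataclass
class ClassificationPacket:
    worksheet_path: str      # the filled triad worksheet
    justification_path: str  # the narrative of mission harm
    sign_off: SignOff        # explicit, traceable ownership
    references: List[str]    # mission documents or service-level obligations
    ssp_reference: str       # where the SSP declares the FIPS 199 result

packet = ClassificationPacket(
    worksheet_path="evidence/fips199-worksheet.txt",
    justification_path="evidence/fips199-justification.md",
    sign_off=SignOff(name="J. Example", title="Authorizing Official delegate", date="2024-05-01"),
    references=["program mission statement", "service-level agreement"],
    ssp_reference="SSP security categorization section",
)
```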
Before you publish the levels, cross-check alignment with the agency’s mission environment and data sensitivity policies. Some organizations maintain profiles that indicate typical impacts for classes of information—health records, law enforcement data, financial transactions, internal communications. Your analysis should harmonize with those profiles unless you have a strong, recorded reason to differ. Alignment here avoids surprise during board reviews and strengthens your argument that the chosen baseline is proportional. It also helps you anticipate inherited control expectations from platforms already authorized at particular impact levels. Consistency reduces debate, and when differences are necessary, documented reasoning reduces friction.
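The cross-check itself can be mechanical. The sketch below assumes the agency publishes typical impact levels per information class; the profile values shown are placeholders, not real policy.

```python
# Hypothetical agency profile: typical impact levels per class of information.
AGENCY_PROFILE = {
    "health records":         {"confidentiality": "High",     "integrity": "Moderate", "availability": "Moderate"},
    "financial transactions": {"confidentiality": "Moderate", "integrity": "High",     "availability": "Moderate"},
}

def deviations(info_class: str, chosen: dict) -> list:
    """List category/level pairs that differ from the agency profile.

    Each deviation needs a strong, recorded reason before the levels are published.
    """
    profile = AGENCY_PROFILE.get(info_class, {})
    return [(category, profile[category], level)
            for category, level in chosen.items()
            if category in profile and profile[category] != level]

chosen_levels = {"confidentiality": "Moderate", "integrity": "Moderate", "availability": "Moderate"}
for category, expected, actual in deviations("health records", chosen_levels):
    print(f"{category}: profile expects {expected}, analysis chose {actual} -- document the reasoning")
```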
Classification is not a one-time act; it must be revisited when the system’s mission or data changes. New features that introduce sensitive attributes, expanded integration with external programs, or shifts from batch to real-time decisioning can alter harm calculations. Build re-assessment triggers into your change management: if a proposal adds a new data type, expands user populations, or changes uptime commitments, schedule a ten-minute triad check. Most of the time the levels will hold, but occasionally integrity or availability will climb, and catching that early lets you adjust designs and evidence before review cycles reopen the package.
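The triggers can live in change management as a simple checklist. Here is a sketch assuming a change proposal is described by a few boolean flags; the names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ChangeProposal:
    adds_new_data_type: bool = False
    expands_user_population: bool = False
    changes_uptime_commitment: bool = False

def needs_triad_recheck(change: ChangeProposal) -> bool:
    """Schedule the ten-minute triad check if any re-assessment trigger fires."""
    return any((change.adds_new_data_type,
                change.expands_user_population,
                change.changes_uptime_commitment))

# A proposal that shifts from batch to real-time decisioning with a new uptime promise:
print(needs_triad_recheck(ChangeProposal(changes_uptime_commitment=True)))  # True
```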
Once levels are set, communicate them to design, operations, and assessment teams in plain words. Designers need to know whether redundancy is a nice-to-have or a non-negotiable obligation. Operations needs to calibrate patch timelines, backup frequency, and restoration practice to the declared harm of downtime or data loss. Assessors need to align sampling depth and test rigor with baseline expectations. Translate the abstract into operational consequences: “Because availability is High, we test failover quarterly and keep recovery time under N minutes.” When levels turn into concrete practice, the classification stops being a paragraph in a document and becomes a standard others can observe.
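A sketch of that translation into observable practice follows. The cadences and recovery times are placeholders to be set from your own obligations, not prescribed values.

```python
# Hypothetical mapping from declared availability impact to operational commitments.
# The numbers are placeholders; set them from your own mission obligations.
AVAILABILITY_PRACTICE = {
    "Low":      {"failover_test": "annually",     "recovery_time_minutes": 480},
    "Moderate": {"failover_test": "semiannually", "recovery_time_minutes": 120},
    "High":     {"failover_test": "quarterly",    "recovery_time_minutes": 30},
}

def operational_statement(level: str) -> str:
    """Turn a declared availability level into a concrete, observable commitment."""
    practice = AVAILABILITY_PRACTICE[level]
    return (f"Because availability is {level}, we test failover {practice['failover_test']} "
            f"and keep recovery time under {practice['recovery_time_minutes']} minutes.")

print(operational_statement("High"))
```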
We close by locking the classification so it can guide every downstream choice. Record the three impact levels with their justifications, capture the sign-offs, and embed the packet where the System Security Plan and Security Assessment Report can reference it cleanly. Tell the story once so that everyone repeats the same facts later—stakeholders in design reviews, engineers explaining tradeoffs, assessors mapping controls, and authorizing officials weighing residual risk. Your next action is simple and high-leverage: formalize the decision in writing and publish it to the team spaces where design and planning happen. With F I P S 199 classification in place, control depth becomes a reasoned outcome rather than a negotiation, and your authorization path becomes faster, calmer, and far more defensible.