Cyber Defense Matrix 2.0 · Asset Row · AI/ML

AI/MLSS Extension

AI and ML systems — models, training data, prompt flows, agents. A SecurityStack extension that names an attack surface traditional security tooling was never built to cover.

AI/ML is a SecurityStack extension to the original Cyber Defense Matrix, covering machine-learning models, training data, prompt and inference flows, AI agents, and the RAG pipelines that feed them. This is a row where traditional security tooling is structurally insufficient — prompt injection, model theft, data poisoning, and agentic tool-use abuse are attack classes that endpoint, network, and application controls do not address.

Provenance note: This row/column is a SecurityStack extension and is not part of NIST CSF 2.0. Practitioners citing it externally should label it as such.

Why AI/ML gets its own row

The case for a dedicated AI/ML row comes down to attack classes that do not fit anywhere else in the matrix. Prompt injection is not an application-layer vulnerability in the conventional sense — there is no input validation that fully closes it. Data poisoning compromises the model itself through training-time inputs, an attack vector with no precedent in classical software. Model theft (extraction attacks via query-based inference) does not correspond to any traditional data-exfiltration pattern. Agentic-AI tool use introduces confused-deputy problems at machine scale.

Forcing these attack classes into Applications (where chatbots live) or Data (where training data lives) obscures the distinction. CDM 2.0 names the row explicitly so practitioners can ask the right coverage questions instead of the wrong ones. For most organizations in 2026 the row is mostly empty — not because the attack surface is absent but because the tooling market is still forming.

Tools starting to occupy this row

The emerging AI-security tool categories: AI security-posture management (Protect AI, HiddenLayer, CalypsoAI, Robust Intelligence, Lasso Security, Prompt Security), LLM-gateway controls that enforce prompt filtering and output moderation, model-theft detection (anomalous query pattern analysis on inference endpoints), and agent-runtime guardrails that gate tool use.

Established adjacent categories also contribute: DLP tools that inspect prompt traffic for sensitive-data exposure, SaaS-security posture management (SSPM) that governs third-party AI apps, and API security platforms that catch abnormal inference patterns. None of these is a complete AI/ML solution on its own; most mature programs combine three or four partial tools.

OWASP maintains the Top 10 for LLM Applications and MITRE has the ATLAS framework for AI attack taxonomy. Both are the reference points practitioners should cite when discussing this row — they are where the attack-class vocabulary is authoritative.

Coverage patterns and honest disclosures

Most organizations' AI/ML coverage is genuinely thin — red cells across the function columns, with rare exceptions in PROTECT × AI/ML (via DLP on prompt traffic) and GOVERN × AI/ML (via AI-use policies). This is not a failure mode to panic about; it reflects a real gap between attack surface and tooling maturity. A mature program in 2026 acknowledges the gap, documents the accepted risk, and tracks the tooling market for viable investments.

The 'You Already Own the Fix' pattern on AI/ML is thin but real: Microsoft Purview's AI-sensitivity controls, M365 Copilot's data-governance integrations, and the AI-usage monitoring in enterprise SSPM platforms are features many organizations are paying for and not enabling. These are narrow capabilities, not full AI security, but they are real coverage on cells that would otherwise be empty.

The Not-Applicable rule per CDM 2.0: organizations with no AI/ML workloads and outside the technology industry may mark the row N/A. A professional-services firm with no internal ML and no generative-AI products can legitimately do so. A technology company, or any organization using AI agents in production, cannot — the row is in scope and thin coverage should appear as gaps, not as exclusions.

Where the row will likely evolve

The most volatile row in the matrix. AI-security tooling is expected to consolidate rapidly over the next 18–36 months, either into dedicated CNAPP-for-AI platforms or into extensions of existing SSPM and DLP tools. The coverage questions themselves will stabilize faster than the tools — by 2026 the practitioner consensus on what needs to be covered (training-data integrity, prompt-flow inspection, agent-tool-use governance, model-endpoint protection) is clearer than the consensus on which product category owns each cell.

For organizations building AI products: treat AI/ML coverage as a product-security discipline, not a corporate-security discipline, for now. The risks ship with the product; the controls need to ship with the product too. Corporate AI usage (Copilot, ChatGPT Enterprise, employee-facing assistants) is the narrower case where SSPM + DLP + policy governance is often adequate.

Frequently asked

Is this the same as 'AI Security'?

Mostly, but CDM 2.0 is explicit that the row covers AI systems (models, data, agents) as assets in your environment. 'AI Security' the broader discipline also includes AI-powered security tools (defensive AI), which in the matrix are attributes of other tools, not the row itself. The row is about securing AI, not using AI for security.

Do I need this row if we only use ChatGPT Enterprise?

Yes, but the scope is narrow. Corporate use of a vendor-hosted LLM raises data-governance questions (what's being sent in prompts, where it's stored, who can access it) that belong on the row. A thin coverage — SSPM monitoring plus a DLP rule — is often appropriate and proportionate.

Is this row only for machine learning in a technical sense, or does it include generative AI?

Both. Classical ML (fraud models, recommendation engines, forecasting) and generative AI (LLMs, image models, agents) are both on the row. The attack classes overlap significantly — data poisoning and model theft apply to both — though prompt injection and agentic tool abuse are specific to generative AI.

How do I start covering this row if we are early?

Start with GOVERN × AI/ML: publish an internal policy naming what AI usage is acceptable, under what data-handling rules, and who owns the risk. Add IDENTIFY × AI/ML: inventory where AI is used (including shadow usage via third-party SaaS). Then narrow PROTECT × AI/ML to the one or two use cases that matter most for your business, and accept thin coverage elsewhere. Broad and shallow is less useful than narrow and operational on this row.