How SecurityStack Works

Five phases, 30–60 minutes, executive-ready output.

SecurityStack turns a six-to-twelve-week security tools rationalization engagement into a 30–60 minute questionnaire. Five phases capture what you own, where it's deployed, and where the gaps are. The coverage engine computes your posture in real time. The AI summary translates findings into executive language. You leave with a Cyber Defense Matrix 2.0 view, a ranked gap list, and — on paid tiers — a PDF report plus a board-ready PPTX deck.

The five phases

Phase 0 — Organization context ~5 min

Eight questions about your organization: headcount band, industry, deployment model (cloud, on-prem, hybrid), compliance frameworks in scope, who owns security decisions, annual security budget range, and whether operational technology or AI/ML systems are in play. These answers drive N/A row logic — if you have no OT, the OT/IoT row is marked Not Applicable automatically.
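
The N/A-row logic can be sketched as a simple mapping from Phase 0 answers to row status. This is a minimal sketch under assumed names: the row labels and answer fields (`has_ot`, `has_ai_ml`) are illustrative, not SecurityStack's actual schema.

```python
# Hedged sketch of the Phase 0 N/A-row logic. Row labels and answer
# field names ("has_ot", "has_ai_ml") are illustrative assumptions,
# not the product's actual schema.

ASSET_ROWS = ["Devices", "Applications", "Networks", "Data", "Users",
              "OT/IoT", "AI/ML"]  # illustrative subset of matrix rows

def applicable_rows(org_context: dict) -> dict:
    """Mark each asset row Applicable or Not Applicable from Phase 0 answers."""
    status = {row: "Applicable" for row in ASSET_ROWS}
    if not org_context.get("has_ot"):
        status["OT/IoT"] = "Not Applicable"   # no OT -> row auto-skipped
    if not org_context.get("has_ai_ml"):
        status["AI/ML"] = "Not Applicable"
    return status
```

An organization that answers "no OT" in Phase 0 would see the OT/IoT row excluded before any tools are entered.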

Phase 1 — Tool inventory ~10–25 min

List the security tools you own. Vendor name and product name are enough for matched tools. The typeahead matches against 156+ vendors and 321+ products (expansion to ~191 / ~366 underway). Unrecognized tools enter a guided self-mapping flow where you pick the CDM 2.0 cell(s) the tool covers. These provisional mappings show up on your matrix with a dashed border; they count toward coverage but are flagged in reports until an admin review confirms or adjusts them.

Phase 2 — Deployment details ~5–10 min

For each tool, select the asset rows it is actually deployed to. A tool licensed enterprise-wide but only rolled out to laptops gets marked Devices-only in this phase. This is the IS side of CAN vs. IS — what your tools are doing, separated from what they could do. The gap between the two drives the "You Already Own the Fix" recommendations.

Phase 3 — Coverage questions ~10–15 min

Adaptive questions fill in coverage the tool inventory alone cannot capture: policy maturity in GOVERN, incident-response process in RESPOND, recovery capability in RECOVER. Questions irrelevant to your environment are skipped automatically based on Phase 0 answers. This phase closes the gaps between tool-encoded coverage and process-encoded coverage.

Phase 4 — Review and report ~5 min

The coverage engine computes your matrix. You see the grid populated in real time, drill into any cell to see its associated tools and NIST subcategories, and review AI-generated "Do This Now" recommendations. On Essentials and Expert tiers, export a PDF report (executive summary, matrix, gap analysis, 30/60/90-day roadmap) and a PPTX board deck.

What runs in the background

Three engines do the work.

  • The vendor-capability database. Encodes what each product CAN do against the 63-cell matrix. 27 years of field experience reduced to structured data. This is why you can add "CrowdStrike Falcon" and the platform knows which cells it covers without you mapping them manually.
  • The CAN-vs-IS coverage engine. Compares what your tools CAN do (from the vendor DB) with what they IS doing (from your questionnaire). Produces the gap list. Computes the headline coverage percentage. Identifies same-cell vendor overlap for consolidation flagging.
  • The AI summary layer. Translates the gap data into executive-language recommendations. Leads with "Do This Now" moves that close gaps without new spend, then adds new-purchase recommendations where required. Caches the output so results don't drift across page loads.
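
The CAN-vs-IS comparison at the heart of the second engine can be sketched as set arithmetic over matrix cells. The per-tool data shape (`can_cells` from the vendor database, `is_cells` from the questionnaire) and the cell IDs are assumptions for illustration, not the platform's real schema.

```python
# Hedged sketch of the CAN-vs-IS comparison. Data shapes and cell IDs
# are illustrative assumptions, not the real schema.

def coverage_gaps(tools: list[dict], all_cells: set[str]) -> dict:
    """Compare capability (CAN) with deployment (IS) across matrix cells."""
    can, deployed, owners = set(), set(), {}
    for tool in tools:
        for cell in tool["can_cells"]:        # capability: vendor database
            can.add(cell)
            owners.setdefault(cell, []).append(tool["name"])
        deployed.update(tool["is_cells"])     # deployment: questionnaire
    return {
        # capability owned but not deployed -> "You Already Own the Fix"
        "own_the_fix": sorted(can - deployed),
        # no owned tool covers the cell -> new-purchase candidate
        "uncovered": sorted(all_cells - can),
        # multiple tools claim the same cell -> consolidation flag
        "overlap": {c: names for c, names in owners.items() if len(names) > 1},
    }
```

A tool that CAN cover a cell but isn't deployed there lands in `own_the_fix`; two tools claiming the same cell surface in `overlap` as consolidation candidates.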

What you get out

Free & up

Cyber Defense Matrix view

Interactive 7×9 grid showing coverage by function and asset row. Click any cell for tools, NIST subcategories, and gap details.

Essentials & up

Gap analysis + AI recommendations

Ranked gap list, "Do This Now" recommendation blocks, spend-at-risk analysis, and the CAN-vs-IS comparison.

Essentials & up

PDF report + PPTX deck

Executive-ready PDF with a 30/60/90-day roadmap. PPTX deck (5–8 slides) designed for a board or steering committee.

Frequently asked questions

How long does the assessment really take?

30 to 60 minutes for most organizations. The organization context phase takes about 5 minutes. Tool inventory is the largest variable — listing 20 tools typically takes about 10 minutes; 100 tools takes closer to 25. The remaining coverage questions take 10–15 minutes. Most teams complete the assessment in a single sitting.

Do I need to finish in one session?

No. Progress is auto-saved after each phase. You can sign out and return later without losing any answers. Phase 1 (tool inventory) is the phase where people most often return across sessions — tools keep coming to mind after the first sitting.

What do I need before I start?

A rough list of security tool vendors and product names — an 80%-accurate list is enough for the first pass. You'll also want a general sense of your environment (cloud vs. on-prem mix, approximate headcount, industry, any regulatory frameworks in scope). Precise license counts are not needed on the Free or Essentials tiers.

What happens if I list a tool you don't recognize?

Unrecognized tools land in a guided self-mapping flow. You select which CDM 2.0 cell(s) the tool covers from a picker. These mappings show up on your matrix with a dashed border marking them as "provisional" until an admin review confirms or adjusts them. Your coverage number includes them; your PDF report flags them transparently.

What does the CAN-vs-IS engine actually compute?

For each of the 63 cells in the Cyber Defense Matrix 2.0, the engine determines two things. CAN: does any tool you own have the capability to cover this cell (sourced from the vendor database)? IS: is that capability actually deployed and active for this asset row (captured from your questionnaire answers)? The gap between the two is where "You Already Own the Fix" recommendations come from.
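
As a worked illustration of the headline number, one plausible formula is deployed cells divided by applicable (non-N/A) cells. This arithmetic is an assumption for illustration; the source does not specify SecurityStack's actual computation.

```python
# Illustrative arithmetic only: the "deployed over applicable cells"
# ratio below is an assumption, not the documented formula.

def headline_coverage(deployed_cells: int, na_cells: int,
                      total_cells: int = 63) -> float:
    """Coverage % over the applicable (non-N/A) cells of the 63-cell matrix."""
    applicable = total_cells - na_cells
    return round(100 * deployed_cells / applicable, 1)
```

Under this assumed formula, an organization with 9 N/A cells and 27 deployed cells would score 27 / 54 = 50.0% coverage.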

Is my data private?

Yes. Your assessment data is scoped to your account via row-level security — no other customer can see it. Data is not shared with third parties. Tool inventory is used to generate your report and is aggregated (anonymized) for vendor-database improvements only if you opt in.

Who reviews the AI-generated recommendations?

On Free and Essentials tiers, recommendations come directly from the Claude-powered engine, using your assessment data plus the 27 years of field judgment encoded in the vendor database. Expert tier adds a 1-hour consultation with Arien, where he walks through the recommendations and pressure-tests them against your specific context.

Ready to See What Your Security Stack Is Really Doing?

Free assessment. No credit card. Results in 30 minutes.

Start Your Free Assessment

or contact us at arien@security-stack.com for the Expert experience