Everyone's Deploying AI, Nobody's Validating Security
It seems we’ve been talking a lot about AI recently, and there’s a good reason for it. And it’s not what you might think.
Mark Shavlik and Senserva have been around for years. We're not some startup that pivots to whatever's hot on TechCrunch. We're a cybersecurity company, and we've always approached new technology the same way: customer security first, shiny new tech second.
We don't adopt technology because it's trendy. We adopt it when three things line up:
- It's safe (won't break what's working)
- It adds real value (not just buzzwords for the marketing site)
- It fits our long-term plan (not a distraction)
For AI and security work? We’ve been working on it for some time, and by those principles, the three things have now lined up. That's why you're seeing two AI posts from me. Not because we're chasing hype. Because our customers need help with AI security right now, and we're in a position to provide it.
Last week's post was about forward-looking stuff - where security automation should go over the next few years. This week is different. This is about practical reality: your organization is deploying AI tools, Microsoft built security controls for them, and somebody needs to make sure those controls are actually working.
That somebody is us. Let's talk about why.
The Problem: Everyone's Deploying AI, Nobody's Validating Security
Here's what's happening right now:
Your organization deployed Microsoft Copilot. Super. Your IT team configured Purview Communication Compliance to catch risky AI interactions. They set up sensitivity labels to control data access. They configured Content Safety filters. They documented everything.
Six months later, I guarantee you those configurations aren't what you think they are.
And not because your team is incompetent. Because things change. Someone adds a "temporary" exception that becomes permanent. A policy gets disabled during troubleshooting and nobody turns it back on. Microsoft updates their best practices and suddenly your six-month-old configuration is outdated. A developer makes what seems like a small change that has cascading effects nobody anticipated.
This isn't theoretical. We’ve been around the block. We've seen this pattern for years with traditional security tools. Now it's happening with AI security tools, and frankly, the stakes are higher because AI has broader access to data.
Microsoft built good security controls for AI. Purview, Defender, Azure Policy, Content Safety - solid tools. But tools don't stay configured correctly on their own. They require maintenance. Ongoing validation. Someone paying attention.
At present, most organizations aren't doing that validation. They're assuming that if it was set up correctly once, it's still correct now.
Well, that assumption is wrong.
What We're Doing About It
Senserva has been validating Microsoft security configurations for years. We're just extending that to AI security now.
Today (what we do right now):
We scan your Microsoft environment and tell you whether your AI security controls are configured correctly.
Is Purview Communication Compliance actually catching risky AI prompts? Are your sensitivity labels properly protecting data from AI access? Is Azure Policy governing your custom AI apps? Are there gaps between what you think is configured and what's actually configured?
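To make that kind of check concrete, here's a minimal sketch of an expected-versus-actual comparison. The control names come from the stack described above, but the settings, the sample data, and the validate() function are illustrative assumptions for this post, not our product's implementation and not a real Microsoft API.

```python
# Minimal sketch of an expected-vs-actual configuration check.
# Control names follow the tools discussed above; settings and the
# current_state data are hypothetical, not a real Microsoft API.

EXPECTED_BASELINE = {
    "purview_communication_compliance": {"enabled": True, "scope": "all_copilot_interactions"},
    "sensitivity_labels": {"auto_labeling": True, "block_ai_access_to": ["Highly Confidential"]},
    "content_safety_filters": {"enabled": True, "severity_threshold": "medium"},
}

def validate(expected: dict, current: dict) -> list[str]:
    """Compare what you think is configured against what is actually configured."""
    findings = []
    for control, settings in expected.items():
        actual = current.get(control)
        if actual is None:
            findings.append(f"{control}: missing entirely")
            continue
        for key, want in settings.items():
            got = actual.get(key)
            if got != want:
                findings.append(f"{control}.{key}: expected {want!r}, found {got!r}")
    return findings

# Example: a policy quietly disabled during troubleshooting and never turned back on.
current_state = {
    "purview_communication_compliance": {"enabled": False, "scope": "all_copilot_interactions"},
    "sensitivity_labels": {"auto_labeling": True, "block_ai_access_to": ["Highly Confidential"]},
    "content_safety_filters": {"enabled": True, "severity_threshold": "medium"},
}

for finding in validate(EXPECTED_BASELINE, current_state):
    print(finding)
# -> purview_communication_compliance.enabled: expected True, found False
```

The interesting part isn't the diff itself; it's maintaining a trustworthy baseline of what "correct" looks like as Microsoft's guidance and your environment change.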
We answer those questions. We've been doing it for traditional security tools for years; now we're doing it for AI security tools. Same approach, different technology.
Customers tell us it's useful. Saves them time. Gives them confidence their AI security posture is what they think it is.
But it's still reactive. We show you the problems, and they go into your ticketing system.
2026 (what we're building):
Next year we're adding the automation layer.
Instead of just showing you configuration drift, we'll catch it as it happens and walk you through fixing it. Three parts:
- Continuous Validation: We'll monitor your AI security configurations constantly. Not "run a scan quarterly." Not "check it when you remember." Constantly.
Someone modifies a Purview policy at 2 PM? You know by 2:05 PM. A sensitivity label changes? You're notified immediately. An Azure AI service gains access it shouldn't have? You get an alert with specifics.
- Immediate Notification: Not generic "your security might have an issue" alerts. Specific ones (see the sketch after this list):
- Which policy changed
- What it changed from and to
- Who made the change
- Why it matters for your security posture
- What compliance requirements it affects
Context, not just noise.
- Guided Remediation: This is the part I'm most interested in. We won't just tell you something's wrong; we'll walk you through fixing it. Turn-by-turn directions, not vague advice.
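To show what "context, not just noise" and turn-by-turn remediation could look like together, here's a rough sketch of a single drift-alert record carrying the notification fields above plus guided remediation steps. The field names, example values, and steps are hypothetical placeholders, not the actual schema we ship.

```python
# Rough sketch of a drift alert combining the notification fields above
# with guided remediation. Names and values are hypothetical examples.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DriftAlert:
    policy: str                      # which policy changed
    old_value: str                   # what it changed from
    new_value: str                   # what it changed to
    changed_by: str                  # who made the change
    security_impact: str             # why it matters for your security posture
    compliance_affected: list[str]   # which compliance requirements it affects
    remediation_steps: list[str]     # turn-by-turn directions, not vague advice
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

alert = DriftAlert(
    policy="Purview Communication Compliance / Copilot prompts",
    old_value="enabled, scope: all Copilot interactions",
    new_value="disabled",
    changed_by="jdoe@contoso.example",
    security_impact="Risky AI prompts are no longer being reviewed.",
    compliance_affected=["internal AI acceptable-use policy"],
    remediation_steps=[
        "Open Microsoft Purview and locate the Communication Compliance policy.",
        "Re-enable the policy and confirm the Copilot interaction scope.",
        "Verify the next scan shows the control back in its expected state.",
    ],
)
print(f"[{alert.detected_at:%H:%M}] {alert.policy}: {alert.old_value} -> {alert.new_value}")
```

The point of the structure is that every alert arrives with enough context to act on immediately: what changed, who changed it, why it matters, and exactly what to do next.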
Why This Actually Matters
Microsoft's AI security stack is complex. Not bad, mind you, just complex.
There are a lot of moving parts. Each one depends on correct configuration. Each one interacts with the others in ways that aren't always obvious. Each one requires ongoing maintenance as your environment and Microsoft's guidance evolve.
Misconfigure any of it and you've got a gap. Maybe AI is processing data it shouldn't touch. Maybe risky interactions aren't being caught. Maybe your compliance story isn't as solid as you thought.
Configuration drift isn't some abstract risk. It's happening in your environment right now. You just don't know where or how much.
We're building the tool to tell you where it's happening and how to fix it.
What About Other AI Vendors?
Back to our principles. Right now we're focused on Microsoft because that's where most enterprises are deploying AI.
But the same principle applies to any AI security vendor. If you're using tools from Anthropic, OpenAI, AWS, or Google Cloud, those tools also depend on correct configuration. Someone needs to validate they're working.
We're starting with Microsoft because that's where customer need is highest right now. We'll expand as it makes sense. But we're not trying to do everything at once. That's how you build mediocre products.
Back to Why Two AI Blogs
So yes, two AI posts in a row. Different topics though.
Last week was about where security automation should go over the next few years. Meta-cognitive AI, progressive autonomy, teaching systems to know their limitations. That's important thinking, and we're pursuing it, but it's forward-looking.
This week is about what customers need today and what we're delivering in 2026. Organizations are deploying AI at scale. Microsoft built security controls for it. Those controls need validation. We're building that validation.
Two posts about AI because that's what's relevant to customer security right now. Not because AI is trendy.
We're a security company. We focus on customer security problems first, technology second. Right now the security problem is "how do I know my AI security controls are working?" We're solving that.
That's it. That's the explanation.
If you want to understand your current AI security configuration, we can help today. If you want continuous validation and guided remediation, we'll have that in 2026. Either way, we're focused on making sure the security tools you already have are actually doing what you think they're doing.
Because that's a real problem, and solving real problems is what we do.
