AI Governance & Security

Your organisation is running AI it doesn't fully control.

Copilots, agents, and low-code tools have created a new class of risk that sits inside your organisation — invisible to traditional cyber defences. We help you see it, govern it, and respond when things go wrong.

The inside-out threat

Traditional cyber security was designed to defend a perimeter — keeping attackers out. It is well understood, well tooled, and maturing rapidly. But over the last two years, a different problem has emerged: the risk created by AI systems inside the organisation.

Microsoft 365 Copilot, Copilot Studio, Power Platform, Salesforce Einstein, ServiceNow, and a dozen other platforms now allow anyone in the enterprise to build powerful AI applications using natural language and drag-and-drop interfaces — no coding required. Gartner forecasts that 65% of all enterprise application development will happen on low-code platforms by 2025, with 80% of those builders sitting outside IT departments.

The result is an enormous and largely ungoverned attack surface that most security teams cannot see, and that traditional tools were not built to address.

65%
of enterprise app development on low-code platforms by 2025 (Gartner)
80%
of low-code builders will be outside IT departments by 2026 (Gartner)
#1
threat to LLM-based AI applications: indirect prompt injection (OWASP LLM Top 10, 2025)

OWASP — the globally recognised authority on application security — has published two dedicated frameworks that confirm the scale of the problem: the Low-Code/No-Code Top 10 and the LLM Top 10. Together they define the risk landscape that most enterprise security programmes are not yet equipped to address.

What the threats actually look like

These are not theoretical risks. They are documented vulnerability classes with published CVEs and real-world exploits in enterprise environments.

Prompt Injection & Promptware

Attackers embed malicious instructions inside documents, emails, or data that an AI copilot processes. The copilot follows those instructions — reading sensitive data, altering outputs, or performing actions — without the user knowing anything has changed.

OWASP LLM01 · NIST AI Risk

Zero-Click Data Exfiltration

CVE-2025-32711 (CVSS 9.3, "EchoLeak") — a critical vulnerability in Microsoft 365 Copilot — allowed an attacker to silently exfiltrate sensitive data from a user's environment without any interaction required from that user. Patched, but the vulnerability class persists.

CVE-2025-32711 · Microsoft MSRC

Overshared Copilots & Apps

Business users building AI apps routinely configure them to be accessible to everyone in the organisation — or beyond, including guest accounts. Combined with broad data access, this creates large unauthorised exposure surfaces.

OWASP LCNC-SEC-05 · OWASP LCNC-SEC-01

Authentication & Credential Failures

Low-code and no-code tools make it easy to build apps that require no authentication, or that embed credentials directly in the application logic. These are ready entry points for attackers and a primary cause of data leakage.

OWASP LCNC-SEC-02 · OWASP LCNC-SEC-03

Autonomous Agents Acting at Scale

AI Agents don't just answer questions — they take actions: sending emails, updating records, initiating transactions. A compromised agent doesn't leak data; it acts on your behalf, autonomously, at the speed of software.

OWASP LLM06 · NIST AI RMF

Supply Chain & Component Risk

Low-code apps routinely pull components from public open-source libraries. These have been found to contain obfuscated malware — a supply chain risk that mirrors SolarWinds, but in a development context most security teams are not monitoring.

OWASP LCNC-SEC-09 · OWASP LCNC-SEC-10
On references Threats cited above are sourced from OWASP's Low-Code/No-Code Top 10 and LLM Top 10 security frameworks, NIST's AI Risk Management Framework, and published CVE records from Microsoft's Security Response Centre. These are independent, vendor-neutral sources — not vendor marketing.

Why your existing cyber programme doesn't cover this

EDR, SIEM, and firewall solutions detect threats moving through your network or attacking your endpoints. They are essential — and they are blind to what happens inside a copilot conversation, inside an AI agent's decision loop, or inside a Power Platform app built by your marketing team last Tuesday.

The shared responsibility model that governs cloud security applies here too. Microsoft, Salesforce, and other platform providers are responsible for the infrastructure. Your organisation is responsible for what it builds on top of that infrastructure, and the data those systems process. Many enterprises are entering this landscape without fully understanding that responsibility — or having any governance structure to manage it.

This is not a gap that existing tools fill. It requires a different discipline: understanding AI behaviour in context, governing how AI systems are built and deployed, and responding when they misbehave or are exploited.

What we do

We provide AI Governance & Security advisory services to enterprises deploying copilots, agents, and low-code development platforms. Our work spans three areas:

🔍 Copilot & Low-Code Security Assessment

A structured audit of your AI application landscape — mapped against OWASP's Low-Code/No-Code Top 10 and LLM Top 10 frameworks.

  • Inventory of copilots, agents, and low-code apps in your environment
  • Assessment against OWASP LCNC Top 10 and LLM Top 10
  • Identification of authentication failures, oversharing, and exposed credentials
  • Prompt injection and data exfiltration risk review
  • Prioritised remediation recommendations

📋 AI Governance Framework

Design the policies, procedures, and guardrails that allow your organisation to use AI confidently and at scale.

  • AI deployment policy and approval procedures
  • Escalation and incident response playbooks for AI systems
  • Regulatory alignment: GDPR, EU AI Act, SOX, HIPAA
  • Bias and fairness review for agent-driven decisions
  • Board and executive reporting frameworks

🚨 Incident Response & Retained Advisory

Expert support when AI systems misbehave, are exploited, or produce unexpected outcomes with real business consequences.

  • Incident triage and root cause analysis for AI-related events
  • Integration with your existing monitoring platform
  • Retained advisory for ongoing governance support
  • Post-incident review and control improvement

How we work: We integrate with your existing monitoring platform — whether that is Zenity, Microsoft Defender, Sentinel, or a custom solution. You own the tooling; we provide the expertise layer to interpret it, govern it, and respond when needed.

Why Intelligent Resilience

Grounded in recognised frameworks

Our assessments are structured around OWASP, NIST, and published CVE records — not proprietary methodologies. You get findings you can defend to a regulator or a board.

Cyber and AI governance together

AI governance doesn't sit alongside cyber security — it is an extension of it. We bring both disciplines together in a single practice, so nothing falls between two teams.

20+ years of enterprise security

We speak the language of CISOs, risk officers, and boards. Assessments produce findings that are actionable at both the technical and the governance level.

Platform agnostic

We work with any monitoring platform and any AI toolset. You are not locked into our technology stack or any specific vendor relationship.

Proportionate to your situation

Every engagement is scoped to your specific environment, compliance requirements, and risk appetite. No one-size-fits-all assessments.

Ahead of the regulatory curve

The EU AI Act is already in force. GDPR enforcement on AI systems is accelerating. We help you build governance that is compliant today and resilient to what comes next.

Getting started

1

Discovery Conversation

Understand your AI environment, monitoring setup, and compliance concerns. No cost, no obligation.

2

Scoped Proposal

A clear proposal tailored to your environment — assessment, governance design, retained advisory, or a combination.

3

Engagement

Structured assessment work, framework design, or incident response — depending on where you need to start.

4

Ongoing Partnership

Scale from a one-off assessment to retained advisory as your AI deployments grow and the regulatory landscape evolves.

Understand your AI exposure

Start with a conversation about what your organisation has deployed, what it can currently see, and where the gaps are.

Schedule a Discovery Call

Frequently asked questions

Is this different from a standard cyber security assessment?

Yes — meaningfully so. A traditional cyber security assessment evaluates your defences against external threats: network posture, endpoint security, identity controls, and so on. This work addresses a different problem: the risk created by AI systems your organisation has built and deployed internally. The threat model, the tooling, and the governance approach are all distinct.

We already have Defender and Sentinel in place. Does that cover this?

Defender and Sentinel are excellent tools for their intended purpose — detecting external threats and suspicious activity across your environment. They do not provide visibility into the behaviour of Copilot Studio bots, Power Platform applications, or AI agents, nor do they assess how those systems are configured, what data they can access, or whether they are vulnerable to prompt injection. The two disciplines are complementary, not substitutes.

Do we need AI monitoring tooling before engaging?

Not necessarily. If you have AI systems in production but no monitoring, we can help you evaluate appropriate solutions as part of the engagement. If you already have tooling in place, we work with whatever you have.

What regulations apply to our AI systems?

That depends on your sector and what your AI systems are doing. The EU AI Act applies broadly to systems deployed in or affecting EU citizens. GDPR applies wherever personal data is processed by AI. SOX and HIPAA create specific obligations for AI involved in financial reporting or healthcare decisions. FCRA and Equal Opportunity legislation apply to AI making or influencing decisions about individuals. We assess your specific deployment against the regulations that are relevant to your organisation.

Who in our organisation should be involved?

Typically: the CISO or Head of Security, the team responsible for deploying or managing AI tools, a compliance or legal representative, and an executive sponsor. We will help you identify the right stakeholders once we understand your environment.