AI Governance & Security
Copilots, agents, and low-code tools have created a new class of risk that sits inside your organisation — invisible to traditional cyber defences. We help you see it, govern it, and respond when things go wrong.
Traditional cyber security was designed to defend a perimeter — keeping attackers out. It is well understood, well tooled, and maturing rapidly. But over the last two years, a different problem has emerged: the risk created by AI systems inside the organisation.
Microsoft 365 Copilot, Copilot Studio, Power Platform, Salesforce Einstein, ServiceNow, and a dozen other platforms now allow anyone in the enterprise to build powerful AI applications using natural language and drag-and-drop interfaces — no coding required. Gartner forecasts that 65% of all enterprise application development will happen on low-code platforms by 2025, with 80% of those builders sitting outside IT departments.
The result is an enormous and largely ungoverned attack surface that most security teams cannot see, and that traditional tools were not built to address.
OWASP — the globally recognised authority on application security — has published two dedicated frameworks that confirm the scale of the problem: the Low-Code/No-Code Top 10 and the LLM Top 10. Together they define the risk landscape that most enterprise security programmes are not yet equipped to address.
These are not theoretical risks. They are documented vulnerability classes with published CVEs and real-world exploits in enterprise environments.
Attackers embed malicious instructions inside documents, emails, or data that an AI copilot processes. The copilot follows those instructions — reading sensitive data, altering outputs, or performing actions — without the user knowing anything has changed.
OWASP LLM01 · NIST AI RiskCVE-2025-32711 (CVSS 9.3, "EchoLeak") — a critical vulnerability in Microsoft 365 Copilot — allowed an attacker to silently exfiltrate sensitive data from a user's environment without any interaction required from that user. Patched, but the vulnerability class persists.
CVE-2025-32711 · Microsoft MSRCBusiness users building AI apps routinely configure them to be accessible to everyone in the organisation — or beyond, including guest accounts. Combined with broad data access, this creates large unauthorised exposure surfaces.
OWASP LCNC-SEC-05 · OWASP LCNC-SEC-01Low-code and no-code tools make it easy to build apps that require no authentication, or that embed credentials directly in the application logic. These are ready entry points for attackers and a primary cause of data leakage.
OWASP LCNC-SEC-02 · OWASP LCNC-SEC-03AI Agents don't just answer questions — they take actions: sending emails, updating records, initiating transactions. A compromised agent doesn't leak data; it acts on your behalf, autonomously, at the speed of software.
OWASP LLM06 · NIST AI RMFLow-code apps routinely pull components from public open-source libraries. These have been found to contain obfuscated malware — a supply chain risk that mirrors SolarWinds, but in a development context most security teams are not monitoring.
OWASP LCNC-SEC-09 · OWASP LCNC-SEC-10EDR, SIEM, and firewall solutions detect threats moving through your network or attacking your endpoints. They are essential — and they are blind to what happens inside a copilot conversation, inside an AI agent's decision loop, or inside a Power Platform app built by your marketing team last Tuesday.
The shared responsibility model that governs cloud security applies here too. Microsoft, Salesforce, and other platform providers are responsible for the infrastructure. Your organisation is responsible for what it builds on top of that infrastructure, and the data those systems process. Many enterprises are entering this landscape without fully understanding that responsibility — or having any governance structure to manage it.
This is not a gap that existing tools fill. It requires a different discipline: understanding AI behaviour in context, governing how AI systems are built and deployed, and responding when they misbehave or are exploited.
We provide AI Governance & Security advisory services to enterprises deploying copilots, agents, and low-code development platforms. Our work spans three areas:
A structured audit of your AI application landscape — mapped against OWASP's Low-Code/No-Code Top 10 and LLM Top 10 frameworks.
Design the policies, procedures, and guardrails that allow your organisation to use AI confidently and at scale.
Expert support when AI systems misbehave, are exploited, or produce unexpected outcomes with real business consequences.
How we work: We integrate with your existing monitoring platform — whether that is Zenity, Microsoft Defender, Sentinel, or a custom solution. You own the tooling; we provide the expertise layer to interpret it, govern it, and respond when needed.
Our assessments are structured around OWASP, NIST, and published CVE records — not proprietary methodologies. You get findings you can defend to a regulator or a board.
AI governance doesn't sit alongside cyber security — it is an extension of it. We bring both disciplines together in a single practice, so nothing falls between two teams.
We speak the language of CISOs, risk officers, and boards. Assessments produce findings that are actionable at both the technical and the governance level.
We work with any monitoring platform and any AI toolset. You are not locked into our technology stack or any specific vendor relationship.
Every engagement is scoped to your specific environment, compliance requirements, and risk appetite. No one-size-fits-all assessments.
The EU AI Act is already in force. GDPR enforcement on AI systems is accelerating. We help you build governance that is compliant today and resilient to what comes next.
Understand your AI environment, monitoring setup, and compliance concerns. No cost, no obligation.
A clear proposal tailored to your environment — assessment, governance design, retained advisory, or a combination.
Structured assessment work, framework design, or incident response — depending on where you need to start.
Scale from a one-off assessment to retained advisory as your AI deployments grow and the regulatory landscape evolves.
Start with a conversation about what your organisation has deployed, what it can currently see, and where the gaps are.
Schedule a Discovery CallYes — meaningfully so. A traditional cyber security assessment evaluates your defences against external threats: network posture, endpoint security, identity controls, and so on. This work addresses a different problem: the risk created by AI systems your organisation has built and deployed internally. The threat model, the tooling, and the governance approach are all distinct.
Defender and Sentinel are excellent tools for their intended purpose — detecting external threats and suspicious activity across your environment. They do not provide visibility into the behaviour of Copilot Studio bots, Power Platform applications, or AI agents, nor do they assess how those systems are configured, what data they can access, or whether they are vulnerable to prompt injection. The two disciplines are complementary, not substitutes.
Not necessarily. If you have AI systems in production but no monitoring, we can help you evaluate appropriate solutions as part of the engagement. If you already have tooling in place, we work with whatever you have.
That depends on your sector and what your AI systems are doing. The EU AI Act applies broadly to systems deployed in or affecting EU citizens. GDPR applies wherever personal data is processed by AI. SOX and HIPAA create specific obligations for AI involved in financial reporting or healthcare decisions. FCRA and Equal Opportunity legislation apply to AI making or influencing decisions about individuals. We assess your specific deployment against the regulations that are relevant to your organisation.
Typically: the CISO or Head of Security, the team responsible for deploying or managing AI tools, a compliance or legal representative, and an executive sponsor. We will help you identify the right stakeholders once we understand your environment.