About me (Nafis Lodhi)
I build security functions from the ground up
AI is being adopted faster than it can be secured. Organisations are embedding AI into critical decisions, infrastructure, and customer experiences, often without understanding the new risks they’re taking on. The gap between capability and security grows wider every month.
I’ve seen this pattern before. Over 24 years in enterprise cybersecurity, I’ve built and transformed security functions at M&G, Santander, Bupa, Citigroup, and BT. Each time, the challenge was the same: a major technology shift had outpaced the organisation’s ability to protect itself, and someone needed to build the security capability to close that gap. Not advise on it. Build it.
That experience is what I bring to AI security. I know what it takes to stand up a security function from scratch, hire and develop the right people, put governance in place that actually works, and embed security into the way an organisation operates. The technology changes. The fundamentals of building a function that endures do not.
What I’ve learnt along the way
Every security function I’ve built has reinforced a few hard-won lessons. First, you cannot bolt security on after the fact. The organisations that fare best are those that apply security fundamentals early, even when the frameworks are still maturing. Waiting for perfection is a guarantee of exposure.
Second, AI introduces challenges that traditional cybersecurity never had to face. When a model’s behaviour emerges from training data rather than explicit code, how do you verify it’s doing what you intended? When an attacker can manipulate a system through carefully crafted inputs that look normal to humans, what does “input validation” even mean? These are real problems I’ve encountered in practice, and they require security leaders who are willing to engage with genuine complexity rather than reaching for familiar playbooks.
Third, governance frameworks and threat landscapes shift constantly. Some of these problems are genuinely unsolved. That’s not a reason for alarm; it’s what makes this work important. The best security functions are built by people who are intellectually honest about what they don’t yet know, and disciplined enough to keep learning.
The operating model that works
Through building security teams across these organisations, I’ve developed a model for how an AI Safety and Security function should operate. It draws on an unlikely source: Bletchley Park, the wartime code-breaking centre that succeeded against impossible odds through an approach that was:
- Mission-driven: clear purpose above all else
- Collaborative: no single discipline has the answers
- Diverse: the best solutions come from the most unexpected places
- Humble: acknowledging what we don’t know
- Ambitious: refusing to accept that the challenge is too great
These aren’t abstract principles. They reflect what I’ve seen work in practice: security functions that succeed are the ones built around a clear mission, staffed with diverse thinkers, and led with both humility and ambition.
What this guide covers
This guide walks through building an AI Safety and Security function in three parts:
- The Foundations: the enduring security principles your function must be built upon, and what makes AI distinct.
- The Operating Model: how to lead your function, drawing on principles that are mission-driven, collaborative, diverse, humble, and ambitious.
- The Roadmap: a practical four-phase path from initial assessment through strategic transformation.
The articles on this site provide practical guidance across every capability your function needs: from governance and compliance through threat intelligence to security architecture and operations.
If you work in security, AI, governance, or anywhere these areas intersect, this guide is for you.
Core Capabilities of an AI Safety and Security Function
Governance, Risk & Compliance
How organisations build AI governance that actually works, aligned with frameworks like NIST AI RMF and the EU AI Act, without stifling innovation.
People Security & Culture
The human side of AI security: building teams that think about risk naturally, not because a policy tells them to.
Security Operations
What happens when your SOC encounters an AI-specific threat? Extending detection and response capabilities into unfamiliar territory.
Third-Party Risk Management
The challenge of trusting what you didn't build: pre-trained models, open-source libraries, cloud inference providers, and the hidden dependencies in AI supply chains.
Cyber AI Safeguards
The technical puzzle of protecting AI systems: adversarial robustness, input validation, output monitoring, and securing ML pipelines from training to deployment.
Threat Intelligence
Understanding who targets AI systems, how, and why. Tracking the evolving intersection of adversarial machine learning and real-world threats.
AI Security Architecture
Designing AI systems that are secure by default: defence in depth for ML pipelines, zero trust for model access, and deployment patterns that don't leave gaps.