Everyone's building AI agents. Everyone's got a chatbot. Everyone can find stuff. But here's the question nobody's answering:
Why would you trust it?
I'm Michael Down, Global Head of Financial Services at Neo4j. I work with the world's largest banks to build solutions where every decision must be provable and every outcome explainable. In regulated environments, trust isn't optional. So when I started building an AI agent, trust was the starting point.
Think about autonomous cars. Do you trust them? It's hard to, because you can't see what they're doing. Enterprise AI is the same. A question goes in, an answer comes out, a black box in between. You don't trust what you can't see.
(Diagram inspired by Simon Sinek's Golden Circle.)
Every AI tool can find an answer. The question is, do you believe the answer is right? And can you prove it?
Trust starts with knowing who can do what
If you're deploying an AI agent across an organisation, not just for one person, you need proper control. Different people. Different roles. Different levels of access. Different security requirements. Most AI tools don't think about this because they assume it's one person, one install. That's not how real organisations work.
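To make that concrete, here is a minimal sketch of role-based access control for an agent. The roles, permission names, and users are hypothetical, purely illustrative; they are not M2's actual permission model.

```python
from dataclasses import dataclass

# Hypothetical roles and permissions -- illustrative, not M2's real model.
ROLE_PERMISSIONS = {
    "analyst": {"search", "read"},
    "editor":  {"search", "read", "draft"},
    "admin":   {"search", "read", "draft", "teach_skill", "manage_users"},
}

@dataclass
class User:
    name: str
    role: str

def can(user: User, action: str) -> bool:
    """Check whether the user's role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(user.role, set())

# One agent, many users, different capabilities:
alice = User("alice", "analyst")
bob = User("bob", "admin")
print(can(alice, "search"))       # an analyst can search
print(can(alice, "teach_skill"))  # but cannot teach the agent new skills
print(can(bob, "teach_skill"))    # an admin can
```

The point is that the permission check sits between every request and every action, per user, not per install.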
Trust comes from seeing the whole journey
When M2 answers a question, it doesn't just hand you a result. You can see exactly what happened. Every step, every decision, every source. From the moment the question arrives to the moment the answer is delivered, the entire journey is visible and auditable.
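What "the entire journey is visible" could look like in practice is an audit trail attached to every question: each step the agent takes gets logged with a timestamp and its sources. The class and the example steps below are a hypothetical sketch, not M2's internal implementation.

```python
import datetime

class AuditTrail:
    """Record every step an agent takes, from question to answer."""

    def __init__(self, question: str):
        self.question = question
        self.steps = []

    def log(self, action: str, detail: str, sources=()):
        self.steps.append({
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
            "sources": list(sources),
        })

    def report(self) -> str:
        """Render the full journey as a human-readable, auditable list."""
        lines = [f"Question: {self.question}"]
        for i, s in enumerate(self.steps, 1):
            src = f" [sources: {', '.join(s['sources'])}]" if s["sources"] else ""
            lines.append(f"{i}. {s['action']}: {s['detail']}{src}")
        return "\n".join(lines)

# Hypothetical walk-through of a single question:
trail = AuditTrail("Do we have a fraud-detection reference in banking?")
trail.log("interpret", "identified intent: find a customer reference")
trail.log("traverse", "UseCase -> Vertical -> CustomerReference", sources=["graph"])
trail.log("answer", "returned 2 matching references", sources=["ref_001", "ref_014"])
print(trail.report())
```

Every answer then ships with its own receipt: what was done, in what order, based on what.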
This is what makes the trust real. M2's brain is a Neo4j knowledge graph. Not a vector store. Not a document index. A web of explicit relationships that are traversable, visible, and auditable. When it finds something, you can trace exactly how it got there. That's not something you get from embeddings in a black box.
A knowledge graph stores relationships explicitly. Use case connects to vertical, connects to customer reference, connects to presentation. Every hop is visible. Every connection is a reason you can point to.
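Neo4j traverses relationships like these natively; as a language-agnostic sketch of the same idea, here is an in-memory graph where the search returns every hop, not just the destination. The node and relationship names are invented for illustration.

```python
from collections import deque

# Hypothetical knowledge-graph edges: (subject, relationship, object).
EDGES = [
    ("fraud_detection", "BELONGS_TO", "banking"),
    ("banking", "HAS_REFERENCE", "bank_x_case_study"),
    ("bank_x_case_study", "HAS_ASSET", "fraud_deck.pptx"),
]

def find_path(start, goal):
    """Breadth-first search that returns the full chain of hops."""
    adj = {}
    for s, rel, o in EDGES:
        adj.setdefault(s, []).append((rel, o))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

# Every hop is a reason you can point to:
for s, rel, o in find_path("fraud_detection", "fraud_deck.pptx"):
    print(f"{s} -[{rel}]-> {o}")
```

An embedding would give you the deck and a similarity score; the graph gives you the deck and the chain of explicit relationships that led to it.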
When you ask it something, it doesn't just go looking for a document. It figures out what actually matters, what's missing, what expertise to bring in. Then it acts. It's a reasoning loop, not a search box.
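A reasoning loop of that shape can be sketched as: work out the plan, call a tool for each step, fold the result back into context, and record every decision along the way. The plan steps and tools below are hypothetical stand-ins, not M2's real internals.

```python
def agent_loop(question: str, tools: dict, max_steps: int = 5):
    """Sketch of a reasoning loop: act, observe, record -- not a one-shot search.

    Each plan step names a tool; its result is folded back into the shared
    context so later steps can build on it, and every step is traced.
    """
    context = {"question": question}
    plan = ["identify_vertical", "find_references", "compose_answer"]
    trace = []
    for step in plan[:max_steps]:
        result = tools[step](context)   # act
        context[step] = result          # observe: enrich the context
        trace.append((step, result))    # record: every decision is visible
    return context.get("compose_answer"), trace

# Hypothetical tool implementations for the sketch:
tools = {
    "identify_vertical": lambda ctx: "banking",
    "find_references": lambda ctx: ["bank_x_case_study"],
    "compose_answer": lambda ctx: (
        f"Found {len(ctx['find_references'])} reference(s) "
        f"in {ctx['identify_vertical']}."
    ),
}

answer, trace = agent_loop("Do we have a fraud reference?", tools)
print(answer)  # -> Found 1 reference(s) in banking.
```

The difference from a search box is the loop itself: each step decides what the next one needs, and the trace is what makes the final answer defensible.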
M2 doesn't show up the same way every time. It adapts to the team and the context.
Every time you use it, every skill you teach it, every bit of context you give it, the knowledge graph gets richer. It builds understanding the way you do. Except it doesn't forget.
I'm documenting the whole journey as it happens. Every decision, every wrong turn, every breakthrough. If you're interested in how something like this actually gets built, and why trust ended up at the centre of everything, this is where it'll live.
Everyone can build a chatbot. Everyone can do RAG. So what actually matters? The moment I realised the value isn't search, it's trust.
You don't trust what you don't understand. Why enterprise AI needs to show its working, not just its answers.
Why Neo4j is the brain. How explicit relationships give you something embeddings never can. A trail you can follow.
The permission problem nobody talks about. What I learned building granular control into an agent from day one.
"Do we have X?" That's where it starts. M2 surfaces use cases, presentations, and references, and shows you exactly how it found them.
You are here: beyond just finding things. Helping the team draft, adapt, and create, with full control over what's allowed and who can do what.
Next: M2 comes to the team. It spots patterns and suggests things before anyone asks, but only within the bounds you've set.