
Building Trust.

Everyone's building AI agents. Everyone's got a chatbot. Everyone can find stuff. But here's the question nobody's answering:

Why would you trust it?

I'm Michael Down, Global Head of Financial Services at Neo4j. I work with the world's largest banks to build solutions where every decision must be provable and every outcome explainable. In regulated environments, trust isn't optional. So when I started building an AI agent, trust was the starting point.

Start with Why

I believe the future of AI in the enterprise isn't about capability. It's about trust.

Think about autonomous cars. Do you trust them? It's hard to, because you can't see what they're doing. Enterprise AI is the same. A question goes in, an answer comes out, a black box in between. You don't trust what you can't see.

Why: Trust
How: Control & Transparency
What: Knowledge Graph Agent

Inspired by Simon Sinek's Golden Circle

Every AI tool can find an answer. The question is, do you believe the answer is right? And can you prove it?
How trust is built

You trust what you can see.
You trust what you can control.

Control

Trust starts with knowing who can do what

If you're deploying an AI agent across an organisation, not just for one person, you need proper control. Different people. Different roles. Different levels of access. Different security requirements. Most AI tools don't think about this because they assume it's one person, one install. That's not how real organisations work.

Knowledge Base Search (tool)
External Data Sources (data)
Financial Services Persona (persona)
Content Generation (tool)
Customer References (data)
Proactive Suggestions (tool)
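That control can be made concrete as an explicit role-to-capability map, checked on every call. A minimal sketch, assuming illustrative role and capability names (the role names, capability names, and function shape here are hypothetical, not M2's actual implementation):

```python
# Hypothetical role-based capability control for a shared agent.
# Access is deny-by-default: a role only gets what it is explicitly granted.
ROLE_CAPABILITIES = {
    "sales": {"knowledge_base_search", "customer_references"},
    "marketing": {"knowledge_base_search", "content_generation"},
    "admin": {"knowledge_base_search", "customer_references",
              "content_generation", "proactive_suggestions"},
}

def authorise(role: str, capability: str) -> bool:
    """Return True only if the role is explicitly granted the capability."""
    return capability in ROLE_CAPABILITIES.get(role, set())

# A sales user cannot generate content; an admin can.
print(authorise("sales", "content_generation"))  # False
print(authorise("admin", "content_generation"))  # True
```

Deny-by-default is the point: an unknown role, or an unlisted capability, gets nothing.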

Transparency

Trust comes from seeing the whole journey

When M2 answers a question, it doesn't just hand you a result. You can see exactly what happened. Every step, every decision, every source. From the moment the question arrives to the moment the answer is delivered, the entire journey is visible and auditable.

Request received: "Do we have a fraud detection use case for banking?"
Classified: Intent: content discovery | Vertical: financial services | Confidence: 0.94
Dimensions identified: use_case: fraud detection | vertical: banking | asset_type: use case
Tools called: find_use_cases(vertical="financial_services", topic="fraud") → 3 results
Sources grounded: uc-fraud-detect-banking, uc-aml-transaction, ref-hsbc-fraud
Answer delivered: Response with 3 cited assets, 0 fabricated claims, full provenance
What makes it possible

A knowledge graph doesn't just find answers.
It shows you why that's the answer.

This is what makes the trust real. M2's brain is a Neo4j knowledge graph. Not a vector store. Not a document index. A web of explicit relationships that are traversable, visible, and auditable. When it finds something, you can trace exactly how it got there. That's not something you get from embeddings in a black box.

Relationships you can see

A knowledge graph stores relationships explicitly. Use case connects to vertical, connects to customer reference, connects to presentation. Every hop is visible. Every connection is a reason you can point to.

Reasoning, not retrieval

When you ask it something, it doesn't just go looking for a document. It figures out what actually matters, what's missing, what expertise to bring in. Then it acts. It's a reasoning loop, not a search box.
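The difference between a reasoning loop and a search box can be sketched in a few lines. The classifier and tool below are stubs with invented names and canned outputs; M2's real components are not public:

```python
# Hypothetical reason-act loop: classify first, choose a tool, keep the reasoning.
def classify(question: str) -> dict:
    """Stub classifier: decides what the question is really about."""
    return {"intent": "content_discovery", "vertical": "financial_services"}

def find_use_cases(vertical: str, topic: str) -> list[str]:
    """Stub tool: returns canned asset IDs for illustration."""
    return ["uc-fraud-detect-banking", "uc-aml-transaction"]

def answer(question: str) -> dict:
    plan = classify(question)                 # 1. figure out what actually matters
    results = []
    if plan["intent"] == "content_discovery": # 2. decide which expertise to bring in
        results = find_use_cases(plan["vertical"], topic="fraud")
    return {"answer": results, "reasoning": plan}  # 3. act, and keep the reasoning

print(answer("Do we have a fraud detection use case for banking?"))
```

A search box would return only `results`; the loop returns `results` plus the plan that produced them, which is what makes the answer defensible.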

Personas that adapt

M2 doesn't show up the same way every time. It adapts to the team and the context.

Soul: The domain knowledge. Financial services conversation? It thinks like an FS person. Pharma? Different lens, same intelligence.
Calibration: It learns how you like to work. How technical you go, how much detail you want.
Session: Real-time reads. Who else is in the conversation, how urgent it is, how deep to go.
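One simple way to picture those three layers is as context dictionaries merged in order, with the most immediate layer winning any conflict. The keys and values here are invented for illustration, not M2's actual schema:

```python
# Three hypothetical persona layers, from most stable to most immediate.
soul = {"domain": "financial_services", "tone": "regulatory-aware"}
calibration = {"detail_level": "high", "technicality": "expert"}
session = {"urgency": "high", "audience": "mixed"}

# Later layers override earlier ones, so real-time session signals win
# over learned preferences, which win over the stable domain soul.
persona_context = {**soul, **calibration, **session}
print(persona_context)
```

The ordering is the design choice: the domain "soul" is the slowest-moving layer, so anything the live session says takes precedence over it.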

It grows with you

Every time you use it, every skill you teach it, every bit of context you give it, the knowledge graph gets richer. It builds understanding the way you do. Except it doesn't forget.

Building in Public

The Journey

I'm documenting the whole journey as it happens. Every decision, every wrong turn, every breakthrough. If you're interested in how something like this actually gets built, and why trust ended up at the centre of everything, this is where it'll live.

Coming Soon

Why trust is the real differentiator in enterprise AI

Everyone can build a chatbot. Everyone can do RAG. So what actually matters? The moment I realised the value isn't search, it's trust.

Coming Soon

The autonomous car problem: why black-box AI doesn't work in organisations

You don't trust what you don't understand. Why enterprise AI needs to show its working, not just its answers.

Coming Soon

How a knowledge graph makes AI decisions auditable

Why Neo4j is the brain. How explicit relationships give you something embeddings never can. A trail you can follow.

Coming Soon

Control at scale: what happens when everyone shares one agent

The permission problem nobody talks about. What I learned building granular control into an agent from day one.

Where This Is Heading

The exploration

Phase 1

Discovery with Trust

"Do we have X?" That's where it starts. M2 surfaces use cases, presentations, and references, and shows you exactly how it found them.

You are here
Phase 2

Assistance with Guardrails

Beyond just finding things. Helping the team draft, adapt, and create, with full control over what's allowed and who can do what.

Phase 3

Proactive with Permission

M2 starts coming to the team. Spots patterns, suggests things before anyone asks, but only within the bounds you've set.