The Brain Gets Smarter: How Multi-Agent AI Turns a Process Repository into a Living Intelligence System

A follow-up to: "Your Signavio–CALM Integration Is a Pipe. We Built a Brain."

In our previous article, we showed how connecting SAP Signavio and SAP Cloud ALM through a knowledge graph transforms a data pipeline into something that can reason. The brain existed. It could answer questions about what was in scope, where gaps were, and how processes connected to SAP scope items.

The question we kept getting was: what does the brain actually think — and how does it get smarter over time?

This article is the answer.

The problem with a brain that only knows process structure

A knowledge graph of processes, E2E domains, SAP scope items, and capabilities is a powerful foundation. But it answers only one type of question: what is. What processes do we have. What scope items are in scope. What scenarios the reference model defines.

The questions that actually drive value in BPM engagements are different. They are questions like:

  • Which AI use cases are validated for our Order-to-Cash scenarios, and what economic value do they represent?

  • If we automate invoice matching with an autonomous AI agent, which compliance obligations apply — and what controls must be built into the process design before we even talk about go-live?

  • Should we use SAP standard or a best-of-breed solution for financial planning, and where does that decision change if we need sophisticated scenario modelling?

These questions require not just process knowledge, but three additional dimensions: innovation context (what AI use cases exist and what they deliver), compliance knowledge (what regulatory obligations apply to which processes and AI systems), and balanced evaluation (how competing solution options score against each other).

And they require these dimensions to be in genuine tension with each other — argued, scored, and resolved — not averaged away into a diplomatically acceptable middle ground.

Adding the three dimensions to the brain

The process reference repository remains the backbone. It is what grounds every answer in the structure of your actual process landscape, not in generic best practice.

But now three knowledge streams flow into it continuously.

AI use cases are mapped directly to E2E scenarios. When a process analyst surfaces a specific scenario — say, vendor invoice clearing in the A2R domain — the system already knows which AI accelerators have been validated for that scenario, what their descriptions are, and what transformation they enable. This is not a generic list of AI possibilities. It is a specific, curated set of use cases anchored to your process structure.

Compliance obligations are structured as knowledge nodes, linked to the scenarios they constrain. GDPR Article 22 (automated decision-making) is linked to every scenario where an AI system could make decisions affecting individuals without human review. SOX segregation of duties obligations are linked to every A2R, O2C, and P2P financial flow. GxP validation requirements are linked to quality management scenarios. When a scenario is surfaced in a debate, the compliance obligations that apply to it are loaded automatically — not looked up manually, not forgotten.

Market signals — regulatory updates, BPM research, SAP roadmap developments — flow in as additional context that the agents can draw on when the question requires current awareness rather than only structured reference data.

What makes this different from simply having three separate databases is the graph structure. The relationships are explicit. An AI use case is not just "relevant to financial planning" — it is specifically linked via a typed relationship to the E2E scenario it accelerates. A compliance obligation is not just "applicable to AI" — it is linked to the specific AI accelerators it flags, with the penalty range and mandatory controls stored on the obligation node itself. When agents query the graph, they are not doing keyword search. They are traversing a connected structure that encodes what belongs together and why.
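The typed-relationship idea can be sketched in a few lines. This is an illustrative in-memory model only — the node IDs, labels, relationship types, and properties below are hypothetical placeholders, not the production graph schema:

```python
# Nodes keyed by ID, with properties stored on the node itself
# (all names here are illustrative, not the real schema).
nodes = {
    "e2e:invoice_clearing": {"label": "E2EScenario", "name": "Vendor invoice clearing"},
    "ai:invoice_matching": {"label": "AIUseCase", "name": "Autonomous invoice matching"},
    "reg:gdpr_art_22": {
        "label": "ComplianceObligation",
        "name": "GDPR Art. 22 (automated decision-making)",
        "penalty_range": "up to 4% of global annual turnover",
        "mandatory_controls": ["human-in-the-loop review"],
    },
}

# Typed edges: (source, relationship type, target).
edges = [
    ("ai:invoice_matching", "ACCELERATES", "e2e:invoice_clearing"),
    ("reg:gdpr_art_22", "CONSTRAINS", "e2e:invoice_clearing"),
]

def linked(scenario_id, rel_type):
    """Traverse typed edges into a scenario -- not keyword search."""
    return [src for src, rel, dst in edges if dst == scenario_id and rel == rel_type]

obligations = linked("e2e:invoice_clearing", "CONSTRAINS")
```

Because the relationship is typed and the penalty range lives on the obligation node, an agent that traverses `CONSTRAINS` edges gets back enforceable specifics, not a keyword match.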

Why multiple agents — and why they debate

The conventional approach to AI-assisted BPM advisory is a single conversation: ask a question, get a response. The response is usually balanced, reasonable, and completely uncommitted. It acknowledges that AI offers opportunities but also has risks. It notes that SAP standard and best-of-breed both have merits. It concludes with a recommendation to assess the specific context.

This is not useful to a practitioner trying to make a real decision.

What a good BPM recommendation actually requires is for competing perspectives to be articulated clearly, placed in genuine tension with each other, and resolved through a structured process — not smoothed away by a single model optimising for diplomatic acceptability.

The multi-agent approach makes each perspective a specialist:

A process analyst maps the question against the reference repository. Which E2E scenarios are implicated? Where are the gaps between current-state coverage and the reference model? What scope items should be in scope but aren't? This agent reasons from graph evidence and cites node IDs — its findings are verifiable, not asserted.

An innovator evaluates the economic opportunity. Which AI accelerators are mapped to the implicated scenarios? What is the case for digitalization and AI-augmentation? This agent argues for adoption when the evidence supports it — it is deliberately optimistic, not artificially neutral.

A compliance critic stress-tests every proposal against the applicable regulatory frameworks — GDPR, EU AI Act, ISO 27001, GxP, SOX. It enters the debate knowing which obligations are linked to the scenarios under discussion, and it argues against adoption unless those obligations can be met. It is the hardest voice to satisfy. That is its value. When the critic flags that an autonomous invoice reconciliation agent triggers GDPR Article 22, SOX segregation of duties, and EU AI Act Article 14 (human oversight requirements), it does so with specific article citations, penalty ranges, and mandatory controls — not with a generic "please consider data protection".

A solution arbiter evaluates the solution options on a scored matrix. SAP standard versus best-of-breed. Digital process automation versus AI-augmented solutions. It scores each on functional fit, implementation effort, TCO, vendor lock-in, and time-to-value — without favouring either axis.
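The arbiter's scored matrix reduces to a weighted sum over the five criteria. The weights, options, and scores below are illustrative placeholders for how such a matrix could be computed, not real engagement data:

```python
# Hypothetical weights for the arbiter's five criteria; scores are 1-5,
# normalised so that higher is always better (e.g. a high
# implementation_effort score means LESS effort).
criteria_weights = {
    "functional_fit": 0.30,
    "implementation_effort": 0.15,
    "tco": 0.20,
    "vendor_lock_in": 0.15,
    "time_to_value": 0.20,
}

options = {
    "sap_standard":  {"functional_fit": 3, "implementation_effort": 4,
                      "tco": 4, "vendor_lock_in": 2, "time_to_value": 4},
    "best_of_breed": {"functional_fit": 5, "implementation_effort": 2,
                      "tco": 3, "vendor_lock_in": 3, "time_to_value": 2},
}

def weighted_score(scores):
    """Weighted sum of an option's criterion scores."""
    return round(sum(criteria_weights[c] * s for c, s in scores.items()), 2)

ranking = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)
```

The point of the matrix is not the arithmetic but the transparency: every score and weight is visible, so the ranking can be challenged criterion by criterion.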

An orchestrator runs the debate. It assigns questions to agents, collects positions, scores convergence, and decides when the positions have sufficiently aligned to produce a synthesis. If the critic has unresolved CRITICAL risks, the debate continues. If all four agents have reached compatible positions, the orchestrator halts and hands the transcript to the synthesizer.
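The orchestrator's halting rule can be sketched as a simple predicate. The threshold value, field names, and severity labels below are hypothetical — a sketch of the logic, not the actual implementation:

```python
# Hedged sketch of the halting rule: the debate halts only when positions
# have converged AND the critic holds no unresolved CRITICAL risks.
CONVERGENCE_THRESHOLD = 0.8  # hypothetical value

def should_halt(round_state):
    critical_open = any(
        r["severity"] == "CRITICAL" and not r["resolved"]
        for r in round_state["critic_risks"]
    )
    converged = round_state["convergence_score"] >= CONVERGENCE_THRESHOLD
    return converged and not critical_open

# Two example rounds: converged, but the critic's CRITICAL risk is only
# resolved in the second round.
round_1 = {"convergence_score": 0.85,
           "critic_risks": [{"severity": "CRITICAL", "resolved": False}]}
round_2 = {"convergence_score": 0.85,
           "critic_risks": [{"severity": "CRITICAL", "resolved": True}]}
```

The asymmetry is deliberate: convergence alone never ends the debate while the critic's hardest objections stand.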

The synthesizer produces a final output that represents every perspective fairly — including unresolved risks, which are flagged prominently rather than buried in a risk register no one reads.

The process designer then converts the agreed synthesis into formal process artefacts: BPMN process structures, SIPOC tables, and Turtle diagrams that already encode the compliance controls that the debate established are mandatory. The human-in-the-loop checkpoint that GDPR Article 22 requires is not added later as an afterthought — it is modelled in the BPMN from the start because the critic made it a precondition of convergence.

What continuous learning actually means

A system that answers a question once and then forgets everything is not meaningfully intelligent. The brain needs to accumulate.

Every debate session writes back to the graph. The positions each agent took, the evidence nodes they cited, the risks the critic raised — all become part of an auditable history that can be queried, analysed, and learned from.

More importantly, the knowledge the system acquires in one engagement becomes available in the next. A compliance document uploaded for a pharmaceutical client — a GxP SOP, an ISO 27001 policy, a GDPR transfer impact assessment — is stored as a structured knowledge node, linked to the scenarios it constrains, and automatically loaded by the compliance critic in every future debate where those scenarios appear. The document does not need to be re-uploaded. The obligation does not need to be re-explained.

Cypher query patterns that prove reliable in one engagement become encoded as agent skills — loaded into the relevant agent's context at the start of subsequent debates so that it immediately knows the right way to traverse the graph for that type of question.
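Conceptually, a "skill" is a stored, parameterised query pattern keyed to a question type and loaded into the agent's context. The library below is purely illustrative — the pattern text, labels, and question types are hypothetical, not the production skill set:

```python
# Illustrative skill library: proven Cypher patterns stored as templates
# (the Cypher strings here are hypothetical examples, not shipped queries).
skill_library = {
    "gap_analysis": (
        "MATCH (s:E2EScenario)-[:SHOULD_INCLUDE]->(i:ScopeItem) "
        "WHERE NOT (s)-[:IN_SCOPE]->(i) RETURN s, i"
    ),
    "obligation_lookup": (
        "MATCH (o:ComplianceObligation)-[:CONSTRAINS]->(s:E2EScenario {id: $sid}) "
        "RETURN o"
    ),
}

def load_skills(question_type):
    """Return the proven query patterns relevant to this question type."""
    return {k: v for k, v in skill_library.items() if k == question_type}

# At debate start, the relevant agent receives only the patterns it needs.
context = load_skills("obligation_lookup")
```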

The reference model itself grows richer with every project. AI use cases validated in one client engagement are linked to scenarios and available for the next. Compliance obligations structured for one industry are automatically scoped to others where the same frameworks apply.

This is not model retraining. The underlying LLM does not change. What changes is the graph — progressively more scenarios mapped, more AI use cases validated, more compliance obligations structured, more proven reasoning patterns encoded. Each engagement benefits from every previous one. The longer the system runs, the more precisely it can ground its answers in the specific structure of your process landscape rather than in generic knowledge from training data.

The shift this creates in practice

For practitioners, the change in conversation is significant.

The starting point shifts from "based on our experience, we recommend..." to "based on your process structure — mapped against the reference model, with the following AI opportunities identified in the graph and the following compliance obligations confirmed — our recommendation is..."

The recommendation may reflect the same quality of judgement, but the foundation is demonstrably different. The process analyst cited twelve node GUIDs. The compliance critic cited specific articles with specific penalty ranges. The solution arbiter produced a scored matrix. Every claim is traceable.

For organisations evaluating AI-assisted BPM advisory, the question to ask is not "how intelligent is the AI?" but "what does it reason from?" A system reasoning from a structured, compliance-linked, AI-enriched process repository is a fundamentally different proposition from a system reasoning from training weights alone — regardless of model size.

The reference repository is the IP. The agents are the reasoning engine. Together they produce something neither can produce alone: BPM intelligence that is grounded in your reality, balanced across competing perspectives, compliant by design, and continuously improving.

The pipe became a brain. Now the brain is learning to think for itself.

At bpExperts we are building and validating this approach in live client engagements. The architecture described here is the result of sustained development — connecting SAP scope items, AI use cases, and regulatory obligations into a knowledge graph that specialist agents reason from in real time. We are happy to explore what this looks like for your organisation.

Follow the BPM360 Podcast for the intersection of process management, AI, and organisational transformation.

Business Flows 2.0: Why Industry Context Matters More Than Ever in SAP Transformations

Over the past years, we have used Business Flows in many SAP transformation initiatives — especially in large, complex industrial environments. And while the feedback has consistently been positive, one insight became impossible to ignore:

👉 Reference content only creates value if it is scoped, relatable, and usable from day one.

That insight is the starting point of Business Flows 2.0.


From “One Size Fits All” to Industry-Specific Acceleration

In earlier releases, Business Flows followed a deliberately generic approach: a comprehensive set of end-to-end scenarios covering all industries, all domains, all variants of doing business.

That worked—until it didn’t.

As the content grew, we saw a clear pattern in projects:

  • Scoping workshops became harder

  • Repositories became overwhelming

  • Teams spent too much time reducing instead of accelerating

With Business Flows 2.0, we have added a fast lane:

➡️ Industry-specific repositories, curated and pre-scoped for real transformation work.

Aligning Business Architecture with SAP Reference Content

Another strong driver behind Business Flows 2.0 is the way SAP has evolved its own reference content over the last years.

SAP Best Practices, Scope Items, and Solution Capabilities have become extremely rich—but also complex. What’s often missing is a business-oriented structure that helps organizations understand:

  • Why certain capabilities matter

  • Which scope items are relevant

  • How they relate to real end-to-end business scenarios

Business Flows 2.0 bridges exactly that gap:

  • Business end-to-end scenarios remain the anchor

  • Transformation drivers make objectives and pain points explicit

  • Business capabilities connect strategy to execution

  • SAP solutions and scope items are mapped transparently—without losing the business perspective

One Domain. One Map. One Conversation.

A major structural change in Business Flows 2.0 is that we no longer separate:

  • End-to-end scenarios

  • Process groups

  • Process libraries

  • Transformation drivers

into disconnected entry points.

Instead, they now come together within one domain map.

That means:

  • No jumping between different models

  • No loss of context

  • Much faster conversations with business and IT stakeholders

It’s a setup designed for the Discover and Prepare phases of SAP initiatives—before teams disappear into detail.

First Release: Process Industry (Discrete Manufacturing Next)

We’re starting the Business Flows 2.0 journey with the Process Industry domain, released today.

Discrete Manufacturing is already in progress and will follow shortly. From there, we’ll move into Consumer Goods—and later into industries where the differences are even more substantial, such as Retail, Utilities, Energy, and Services.

That’s where the industry-specific approach will really shine.

Transparency Is Still Our Philosophy

One thing hasn’t changed.

We’ve always believed that reference content only creates trust if it is transparent, consistent, and open for discussion. That’s why we’re happy to:

  • Walk you through the content

  • Give you access via our collaboration hub

  • Discuss how it fits (or doesn’t fit) your transformation context

Because at the end of the day, Business Flows is not about models.

It’s about helping organizations enter and execute SAP transformations with clarity, structure, and speed.

If this resonates with you, feel free to reach out—we’re happy to continue the conversation.

👉 If you want to see how this looks in practice, reach out to us and get your free demo session!

AI Needs Process Thinking — Not Project Thinking

How Organisations Can Embed AI Into Real Transformation

By Russell Gomersall

Artificial Intelligence is rapidly becoming the headline topic in every boardroom. Yet, many AI initiatives stall before they deliver measurable value. The root cause is surprisingly simple: organisations treat AI as a project, instead of seeing it as an integral part of their process landscape.

In my keynote during KI-Week, I explored why successful AI adoption requires a process-centric mindset, how companies can structure their initiatives, and what it takes to scale AI sustainably across an organisation. The following article summarises these core ideas.

The Long Road Through BPM - And What It Means for AI

Anyone who has worked long enough in Business Process Management knows: it’s a battle. Not war — but definitely a continuous fight for clarity, structure, and alignment across teams.

Since my first deep dive into BPM back in 2005 at IDS Scheer, and later when joining bpExperts in 2012, one insight has stayed constant:

Process management isn’t a toolbox. It’s a way of thinking.

And AI needs exactly this way of thinking to succeed.

AI initiatives launched “because the technology is there” usually fail. AI initiatives launched because a business process needs improvement have a real chance of delivering value.

Reasons for Process-Centric AI

Why AI Must Be Embedded in Your Process Architecture

Too often, organisations start AI activities in isolation — a chatbot here, a document classifier there, an automation experiment somewhere else. The result is a collection of disconnected pilots with no strategic or operational anchor.

A process-centric approach changes that.

1. Strategy and operations stay connected

Processes operationalise strategy.

Embedding AI into processes ensures your AI efforts support strategic goals instead of creating technical “side projects”.

2. Clear roles and responsibilities

A process model clarifies:

  • Which roles interact with AI

  • Who owns the data

  • Where decisions are made

  • How compliance and governance are ensured

Without this clarity, AI becomes a black box nobody feels accountable for.

3. Understanding where AI actually adds value

AI makes sense where:

  • Tasks are repetitive but variable

  • Unstructured data must be analysed

  • Complex decisions require support

  • Manual handovers generate delays or errors

  • Documents need comparison, validation, extraction

But many pain points can be solved more easily:

  • with basic digitalisation,

  • with standard ERP functionality,

  • or by adjusting process logic.

A structured process assessment very quickly separates true AI use cases from tasks that only look like AI problems.

AI Use Cases Need a Clear Evaluation Framework

To avoid hype-driven decision-making, organisations should assess every use case along a consistent canvas:

✔ Data readiness

Do we have the required input (structured, unstructured, labelled, historical)?

✔ Process impact

Which steps, handovers, and decisions are affected?

✔ Financial expectations

Is there a measurable business case — cost savings, throughput, quality, risk reduction?

✔ Strategic relevance

Does the use case contribute to strategic goals or capability building?

✔ Change & Adoption

Which roles must learn new work patterns?

What training, enablement, and organisational adjustment is needed?

Addressing these questions first avoids “cool experiments” and instead builds a portfolio of well-positioned, value-oriented, outcome-driven AI use cases.
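The canvas works as a checklist: a use case is not ready until every dimension has an answer. A minimal sketch, using the five dimension names from the canvas above (the pass/fail logic and sample use case are illustrative simplifications):

```python
# The five canvas dimensions from the article; the assessment logic is a
# deliberately simple illustration, not a real scoring methodology.
CANVAS_DIMENSIONS = [
    "data_readiness",
    "process_impact",
    "financial_expectations",
    "strategic_relevance",
    "change_and_adoption",
]

def assess(use_case):
    """Flag every canvas dimension that has not yet been answered."""
    gaps = [d for d in CANVAS_DIMENSIONS if not use_case.get(d)]
    return {"ready": not gaps, "gaps": gaps}

# A hypothetical pilot that answered only two of the five dimensions.
chatbot_pilot = {
    "data_readiness": "labelled ticket history available",
    "process_impact": "removes tier-1 support handovers",
}
result = assess(chatbot_pilot)
```

A use case with open gaps is not rejected — it simply goes back for the missing answers before it enters the portfolio.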

Before Starting: Assess Your AI Maturity

Every AI initiative should begin with a quick maturity check across five success factors:

  1. Process governance

  2. Data governance

  3. Roles & responsibilities

  4. Technology readiness

  5. Change & adoption capability

This determines whether the organisation is ready to scale AI beyond isolated pilots — or whether foundational work must come first.

The Role of New (and Evolving) Responsibilities

AI changes the organisational landscape.

Companies must answer questions such as:

  • Do we still need classical key users?

  • Should process owners evolve into “AI champions”?

  • Do we introduce dedicated AI governance roles?

  • How does compliance adapt to AI-driven decisions and data flows?

Existing governance models shouldn’t be replaced — but challenged and expanded to include AI-specific responsibilities.

From Chaos to Structure in Three Months

In many client projects, we see dozens of parallel, uncoordinated AI activities — each started with good intent but without integration.

With a structured process- and governance-driven approach, organisations can:

  • consolidate their AI activities,

  • establish a unified roadmap,

  • clarify data and process responsibilities,

  • and align all ongoing projects to a common direction.

This can be achieved in as little as three months, depending on stakeholder engagement. What follows — scaling pilots into daily operations across multiple sites and departments — naturally takes longer, but the foundation is laid.

A Practical Example: Should You Give the “Actual Process” to an AI for Improvement?

One question from the KI-Week audience was:

“If I document my current process and feed it into an AI, can the AI generate improvement suggestions?”

The answer: Yes — but start one step earlier.

If no consistent process documentation exists, begin with:

  • the key questions process managers must answer,

  • the roles involved,

  • the decision points,

  • the data that flows through the process.

Without this context, AI suggestions remain shallow. With the right context, AI can highlight improvement potential across decision logic, handovers, data usage, and automation opportunities.

Conclusion: AI Needs Process Thinking

If we summarise everything, three messages remain:

1. AI requires a process mindset, not a project mindset

Technology alone doesn’t solve problems.

Embedded in processes, AI becomes a strategic accelerator.

2. Your process models are the compass for AI transformation

They provide orientation, responsibility, data structures, and governance.

3. Every use case must be evaluated in the context of the whole organisation

Only then can AI scale sustainably instead of becoming a collection of isolated experiments.

Organisations that embrace this process-centric AI approach will not just implement technology — they will build lasting capabilities for transformation.

Celebrating 50 Episodes of Insight from the BPM360 Podcast

The BPM360 Podcast by our partner Dr. Russell Gomersall and Caspar Jans (Celonis) has become an established platform for sharing perspectives on business process management, automation, and digital transformation.
Its purpose is straightforward: to provide clear, experience-driven insights into how processes operate across modern organizations, how technologies are reshaping them, and which trends are defining the next generation of BPM practices. Through conversations with experts and practitioners, the podcast brings forward real challenges, emerging opportunities, and the ongoing evolution of the BPM landscape.

A significant milestone has now been reached with the release of Episode 50.

Check out the milestone episode here.

What This Milestone Represents

Fifty episodes reflect more than consistency — they mark the growth of a knowledge hub that continues to support and inform the BPM community. Over time, the podcast has developed a comprehensive library of discussions covering topics such as process mining, orchestration, operational excellence, automation frameworks, and the shifting roles of technology platforms.
Reaching this point reinforces the value of sustained dialogue in a field that evolves rapidly and often unpredictably.

Inside Episode 50

The milestone episode focuses on a timely and highly relevant topic: the role of ServiceNow’s process orchestration capabilities in shaping the next chapter of BPM.
Key themes include:

  • The strategic expansion of orchestration across enterprise systems

  • The influence of recent market movements, including major acquisitions in the BPM and process mining space

  • How platforms are shifting from isolated workflows toward interconnected, intelligence-driven process landscapes

  • The growing necessity for orchestration layers that sit above traditional applications and integrate processes end to end

The episode provides a forward-looking exploration of how BPM is transitioning from improvement initiatives to an orchestration-centric model that connects processes, data, and automation frameworks across the enterprise.

Looking Ahead

With 50 episodes now available, the BPM360 Podcast continues to build momentum. Future discussions are expected to dive deeper into process intelligence, orchestration strategies, automation-driven operating models, and the evolving ecosystems surrounding modern BPM platforms. The milestone marks both a reflection point and a launchpad for even more advanced conversations about the future of process work.

Supercharge Your SAP Signavio Reporting with Neo4j & KNIME: A Practical Guide

In this video, we explore an innovative approach to enhancing the reporting and analysis capabilities of SAP Signavio using open-source tools Neo4j and KNIME. While SAP Signavio provides robust BPM modeling and transformation features, its built-in reporting options can be limited, particularly when managing large repositories with complex cross-references and custom attributes.

This guide showcases:

  • Overcoming challenges of out-of-the-box reporting in SAP Signavio, including reliance on cumbersome Excel exports.

  • Extracting data from SAP Signavio into a Neo4j graph database for efficient analysis of dictionary items, diagrams, and relationships.

  • Using Cypher in Neo4j to perform advanced consistency checks and governance tasks.

  • Integrating Neo4j with KNIME for automated data pipelines and seamless export to tools like Power BI for dynamic visualizations.
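To make the consistency-check idea concrete, here is a minimal in-memory sketch of one check the video demonstrates in Cypher: finding dictionary items that no diagram references. The item and diagram names are hypothetical sample data, not Signavio exports:

```python
# Hypothetical repository extract: dictionary items and the references
# that diagrams make to them (names are illustrative sample data).
dictionary_items = {"role_ap_clerk", "doc_invoice", "system_s4hana"}

diagram_references = [
    ("O2C_invoice_clearing.bpmn", "role_ap_clerk"),
    ("O2C_invoice_clearing.bpmn", "doc_invoice"),
]

# Consistency check: dictionary items never referenced by any diagram.
referenced = {item for _, item in diagram_references}
orphans = sorted(dictionary_items - referenced)
```

In a graph database the same check is a single query over the relationship structure, which is exactly why the export-to-Neo4j step pays off at repository scale.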

By combining SAP Signavio with Neo4j and KNIME, you unlock unparalleled flexibility and sophistication in reporting, without additional licensing costs.

This video is ideal for:

  • SAP Signavio users managing large repositories or extended metamodels.

  • BPM professionals seeking advanced reporting and governance solutions.

  • IT and business analysts looking for cost-effective tools to integrate and analyze process data.

  • Data enthusiasts exploring open-source tools for data visualization and process optimization.