BPM Bits & Business Flows #3: How Virtual Tanks Protect Liquid Traceability

Russell Gomersall

We continue our weekly series to deconstruct the architecture of modern business! Every Monday, we share one expert insight from the bpExperts Framework to help you eliminate silos and turn raw data into actionable intelligence.

Over the coming weeks, we are moving through the different SAP domains, with each insight tailored to the unique challenges of the process industry.

Today, we shift from defining financial value to mastering operational precision in liquid logistics.

Did you know?

Within our Business Flows framework, liquid materials are not directly transferred into main storage tanks. Instead, they are first managed through Virtual Day Tanks to ensure full control and traceability.

The Expert Deep Dive:

In liquid process manufacturing, managing incoming bulk materials can quickly become a traceability challenge—especially when new batches mix with residual quantities in storage tanks. Our framework addresses this through the Inbound Storage Tank Management (I2O) scenario.

This approach introduces virtual day tanks as an intermediate control layer before materials enter main storage. It ensures that every incoming batch is properly validated, controlled, and documented before any mingling occurs.

This technical logic is essential because it enables quality inspection triggers before storage, governs the controlled mixing of new and residual materials through dedicated process orders, and ensures precise batch tracking by separating day tank and main tank logic at the storage and execution levels.

Why it matters for bpExperts:

  • End-to-End Traceability: It eliminates the “traceability black hole” that occurs when materials are mixed without proper control or documentation.

  • Operational Integrity: It provides a reliable and transparent view of material flows, supported by structured processes and automated backflushing.

  • Risk Mitigation: By enforcing controlled checkpoints between virtual and physical storage, only compliant materials are allowed to proceed into production.

As noted in our layout, organizations that adopt these BPM practices improve operational efficiency and warehouse visibility by enabling real-time insights into material movements.

Master the Flow: Join us every Monday as we continue bridging the gap between high-level strategy and operational excellence across the Process Industry landscape! Discover more about our methodology here: https://www.bpexperts.de/business-flows


Next Monday we will move from operational control to the “Genesis of Value.” We will discuss how enabling real-time co-innovation between customers and suppliers can break down functional silos and significantly increase your ideation throughput.

The Brain Learns to Draw: How the Process Debate Now Produces BPMN, Presentations — and a Live Interactive Diagram

The updated debate architecture — with Process Designer and BPMN Bridge added as new execution steps between synthesis and output.

In our last article we described a brain that reasons. Four specialist agents — process analyst, innovator, compliance critic, solution arbiter — debate a business question, ground their positions in the knowledge graph, and converge on a synthesis that represents every perspective fairly. The process designer then translates that synthesis into formal process artefacts: BPMN structures, SIPOC tables, Turtle diagrams.

The question we left open was: what happens after the design is agreed?

Previously, the answer was: a human takes the artefacts and builds the process model. The BPMN was described but not constructed. The recommendation was documented but not visualised. The engagement produced intelligence — but intelligence still needed to be converted into deliverables by hand.

We closed that gap. This article describes how.

What changed: two new components

The architecture now has two additional components that connect the debate output directly to production-ready deliverables.

The BPMN Builder — built on the BPMN_MCP server — is a stateful tool service that constructs valid BPMN 2.0 XML from structured instructions. It knows how to create pools, lanes, events, tasks, gateways, and sequence flows. It enforces dependency ordering — lanes before elements, elements before flows — and validates the result against the BPMN 2.0 standard before emitting XML. It does not reason. It constructs. That is the right division of labour.
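As a minimal sketch of what that division of labour looks like in code (the class and method names here are illustrative, not the actual BPMN_MCP interface), a stateful builder can enforce both ordering rules and refuse anything that violates them:

```python
# Illustrative only: the real BPMN_MCP server's API is not reproduced here.
class BpmnBuilder:
    """Stateful BPMN construction with enforced dependency ordering."""

    def __init__(self) -> None:
        self.lanes: dict[str, str] = {}       # lane_id -> display name
        self.elements: dict[str, dict] = {}   # element_id -> {"lane": ..., "kind": ...}
        self.flows: list[tuple[str, str, str]] = []

    def create_lane(self, lane_id: str, name: str) -> None:
        self.lanes[lane_id] = name

    def _place(self, element_id: str, lane_id: str, kind: str) -> None:
        # Rule 1: lanes before elements.
        if lane_id not in self.lanes:
            raise ValueError(f"lane '{lane_id}' must exist before element '{element_id}'")
        self.elements[element_id] = {"lane": lane_id, "kind": kind}

    def create_task(self, task_id: str, lane_id: str, task_type: str = "userTask") -> None:
        self._place(task_id, lane_id, task_type)

    def create_gateway(self, gw_id: str, lane_id: str, gw_type: str = "exclusive") -> None:
        self._place(gw_id, lane_id, gw_type)

    def add_flow(self, source: str, target: str, label: str = "") -> None:
        # Rule 2: elements before flows.
        for ref in (source, target):
            if ref not in self.elements:
                raise ValueError(f"flow references unknown element '{ref}'")
        self.flows.append((source, target, label))

    def validate(self) -> dict:
        # The real service validates against the BPMN 2.0 standard; this sketch
        # only checks that no element is left disconnected.
        orphans = [e for e in self.elements
                   if not any(e in flow for flow in self.flows)]
        return {"valid": not orphans, "issues": orphans}
```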

The BPMN Bridge connects the debate pipeline to the BPMN Builder. It reads the bpmn_spec block that the process designer agent now emits as part of its synthesis, and orchestrates the sequence of tool calls required to build the complete process model. It handles the translation from semantic intent — swim lanes named after business roles, tasks typed as human or automated, gateways labelled with decision logic — into the precise API calls that produce standards-compliant XML.

The result is that a business question entered into the debate pipeline now produces, without any additional human effort:

  • A structured recommendation with a risk register, grounded in the knowledge graph

  • A BPMN 2.0 process model, validated and ready to import into Signavio, ARIS, Camunda, or SAP Cloud ALM

  • A corporate-branded PowerPoint presentation, built from the bpExperts template, with all slides populated from the debate output

  • A live interactive process viewer, renderable directly in a browser or embeddable in a website

The updated engagement loop — BPMN process models are now generated automatically and stored back into the knowledge graph, making them available to every future session that touches the same scenarios.

What the Process Designer agent now emits

The original process designer translated a debate synthesis into a human-readable description of a BPMN process: swim lanes, tasks, gateways, flows, and compliance checkpoints. That description was accurate and useful — but it required a process modeller to convert it into an actual diagram.

The updated process designer emits exactly the same human-readable output, plus a new structured block called bpmn_spec. This is a machine-readable JSON object that follows an exact schema:

  • Every pool and lane is defined with a stable snake_case identifier

  • Every task is typed: serviceTask for AI agent steps, userTask for human decisions

  • Every gateway is typed — exclusive or parallel — with documentation explaining the decision logic

  • Every sequence flow references its source and target by the IDs defined above, with named labels for gateway branches

  • Documentation strings are capped at 300 characters and written for the audit trail, not for the modeller

The schema is the contract between the debate pipeline and the BPMN Builder. When the bridge receives a valid bpmn_spec, it can construct the entire process model without any human interpretation. The process designer is the architect. The bridge and builder are the construction crew.
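To make the contract concrete, here is a hand-written sketch of what a bpmn_spec block following these rules could look like, together with the kind of orchestration loop the bridge runs over it. The field names and builder calls are assumptions for illustration (reusing the hypothetical builder sketched above), not the published schema or API:

```python
# Hypothetical bpmn_spec excerpt; the field names are illustrative.
bpmn_spec = {
    "lanes": [
        {"id": "procurement_lead", "name": "Procurement Lead"},
        {"id": "ai_intake_agent", "name": "AI Intake Agent"},
    ],
    "tasks": [
        {"id": "classify_request", "lane": "ai_intake_agent", "type": "serviceTask",
         "documentation": "AI classifies the intake request against the category model."},
        {"id": "approve_rfp", "lane": "procurement_lead", "type": "userTask",
         "documentation": "Mandatory human approval before the RFP is issued."},
    ],
    "gateways": [
        {"id": "threshold_check", "lane": "procurement_lead", "type": "exclusive",
         "documentation": "Routes the request by procurement threshold."},
    ],
    "flows": [
        {"source": "classify_request", "target": "threshold_check", "label": ""},
        {"source": "threshold_check", "target": "approve_rfp", "label": "above threshold"},
    ],
}

# The bridge walks the spec strictly in dependency order: lanes, then elements, then flows.
builder = BpmnBuilder()
for lane in bpmn_spec["lanes"]:
    builder.create_lane(lane["id"], lane["name"])
for task in bpmn_spec["tasks"]:
    builder.create_task(task["id"], task["lane"], task["type"])
for gw in bpmn_spec["gateways"]:
    builder.create_gateway(gw["id"], gw["lane"], gw["type"])
for flow in bpmn_spec["flows"]:
    builder.add_flow(flow["source"], flow["target"], flow["label"])
print(builder.validate())  # -> {'valid': True, 'issues': []}
```

Because the bridge walks the spec in lanes-then-elements-then-flows order, the dependency rules the builder enforces are satisfied by construction.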

An example: AI-augmented service procurement for a public transport outsourcer

To validate the full pipeline, we ran a complete engagement on a real business problem.

A company that operates outsourced public transport services regularly procures complex services, running on SAP S/4HANA Public Cloud. Their purchase requisition process takes weeks — sometimes months. Complex services require an RFP and multiple approval stages before a contract can be negotiated; only then can the PR and PO be created in SAP. The question: can AI and a workflow engine transform this end-to-end?

The debate agents ran their full analysis against the Business Flows knowledge graph. The process analyst identified the primary E2E anchor — Service Procurement — and mapped it to seven SAP scope items across Ariba Sourcing (4QN), Ariba Contracts (4B0), Guided Buying (3EN), the Business Network (6BJ), and Fieldglass (22K). The innovator identified five high-value AI use cases, from intake classification (60–70% time reduction) to contract-to-PO automation (70–80% effort reduction, error rate below 2%). The compliance critic raised a HIGH-to-CRITICAL surface: EU Directive 2014/25/EU threshold rules, GDPR Article 22 automated decision requirements, EU AI Act Annex III high-risk classification for three of the five AI agents, and the Alcatel standstill obligation for above-threshold contract awards. The solution arbiter scored SAP standard against BTP AI extensions across eight capability dimensions.

The synthesis produced a recommendation — Strongly Recommended, conditional on compliance-first implementation — and a three-wave deployment roadmap.

The process designer then emitted a bpmn_spec covering eight swim lanes, twenty tasks, two gateways, and twenty-five sequence flows. The BPMN Bridge executed twenty-eight tool calls in the correct dependency order. The validator returned: valid: true, issues: [].

The result is the process diagram below.

The live process — interactive, embeddable, importable

The output of the pipeline is not a static image. It is a fully interactive BPMN 2.0 diagram rendered with bpmn.io, with the complete process logic, swim lane structure, and element documentation intact.


What you can do with this diagram directly:

  • Click any element to see its full documentation, compliance notes, and AI agent details

  • Use the 🤖 AI Tasks highlight to see all seven AI agents in one view

  • Use the 🔍 HITL Gates highlight to see every mandatory human-in-the-loop checkpoint — the ones that are there because the compliance critic made them a precondition of convergence

  • Use the ⚖️ Compliance highlight to see every regulatory control point embedded in the process

  • Export the underlying BPMN 2.0 XML for import into any standards-compliant modelling tool

The diagram is not a visualisation of a generic procurement process. Every element in it was placed there because the debate established it belonged there. The eight-stage approval routing structure reflects the EU Directive threshold matrix. The AI: Intake Classification task carries the HIGH RISK tag because the compliance critic flagged EU AI Act Annex III before the synthesizer would accept convergence. The Alcatel standstill task exists because the compliance critic cited the obligation and the solution arbiter confirmed it as non-negotiable regardless of solution choice.

The process was not designed and then checked for compliance. It was designed compliance-first, with the HITL gates and regulatory checkpoints as the skeleton from which everything else was built.

The presentation layer

The same engagement also produced a full corporate presentation — twenty-five slides in the bpExperts Century Gothic template — covering the client challenge, the Business Flows knowledge graph anchors, the five AI use cases, the solution architecture comparison, the process overview, the compliance map, the three-wave roadmap, and the final verdict.

The system now produces five dimensions of output from a single debate engagement — including auto-generated BPMN that feeds back into the knowledge graph.

The presentation was not summarised from the debate. It was structured from it. The slide content maps directly to the agent positions: the compliance slide draws from the compliance critic's output, the ROI slide draws from the innovator's quantified use cases, the architecture comparison draws from the solution arbiter's scored matrix. Every claim in the deck has a traceable origin in the debate transcript.

This matters for two reasons. First, it is faster — a complete client-ready deck produced as a side effect of the analysis, not as a separate workstream. Second, it is more defensible — the deck does not contain claims that were not also argued and tested in the debate. The compliance critic that blocked three AI use cases pending GDPR controls is the same source as the compliance slide that lists those controls as prerequisites.

What continuous improvement looks like now

Each engagement now writes more back to the graph than before. In addition to the compliance documents, Cypher patterns, AI use cases, and debate history described in the previous article, the graph now also accumulates:

Generated BPMN process models, stored with their process_id and linked to the E2E scenarios they cover. When a future engagement touches the same scenarios — Service Procurement, or any of the seven scope items mapped in this one — the existing process model is available as a starting point. The next consultant does not start from a blank diagram. They start from a validated, compliance-embedded model that was the output of a structured debate.
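A sketch of what that write-back could look like with the Neo4j Python driver follows; the node labels, properties, and relationship types are assumptions, since the actual graph schema is not published here:

```python
# Illustrative write-back; labels and relationship types are assumed, not the real schema.
from neo4j import GraphDatabase

STORE_MODEL = """
MERGE (m:BpmnModel {process_id: $process_id})
SET m.xml = $xml, m.generated_at = datetime()
WITH m
MATCH (s:E2EScenario {name: $scenario})
MERGE (m)-[:COVERS]->(s)
"""

validated_bpmn_xml = "<bpmn:definitions ...>"  # stand-in for the builder's validated XML

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    session.run(
        STORE_MODEL,
        process_id="service_procurement_v1",   # hypothetical identifier
        xml=validated_bpmn_xml,
        scenario="Service Procurement",
    )
driver.close()
```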

bpmn_spec schemas, stored as reusable templates. The lane structure, task taxonomy, and gateway logic developed for complex-service procurement in a utilities context does not disappear after one engagement. It is available as a reference for the next one, with the compliance controls already modelled in.

The knowledge graph does not just get richer. It gets more specific — progressively more closely matched to the types of engagements that have actually been run through it, the compliance frameworks that have actually been argued, and the process structures that have actually been validated.

What this means for how process design work gets done

The traditional sequence in a BPM engagement is: analyse → workshop → model → review → validate → document. Each step is a handoff. Each handoff is a compression — something is lost in translation between the analysis and the workshop, between the workshop discussion and the model, between the model and the documentation.

The pipeline described here compresses that sequence differently. The analysis and the workshop happen inside the debate. The model is produced directly from the synthesis. The documentation is a side effect of the process designer's output. The review happens through the validator. The compliance checkpoint is embedded in the structure, not appended to it.

What remains for the human practitioner is the work that should never have been delegated in the first place: deciding whether the question was the right question, whether the synthesis reflects the political reality of the organisation, and what the recommendation means for the people whose work it will change.

The brain now produces the drawings. What it cannot produce is the judgement about whether the building should be built at all.

BPM Bits & Business Flows #2: Why Your Product Design Is Never Truly “Finished”

We continue our weekly series to deconstruct the architecture of modern business! Every Monday, we share one expert insight from the bpExperts Framework to help you eliminate silos and turn raw data into actionable intelligence.

As we move further through the SAP domains tailored to the unique challenges of the process industry, we stay within the Idea to Market Domain—but shift our focus from controlling ideas to defining their value.

Did you know? Within our Business Flows framework, a product design is not considered complete until a standard cost estimate has been established.

The Expert Deep Dive: In complex manufacturing environments, organizations often face a disconnect between R&D and Finance. Technical designs are finalized without a clear understanding of their financial impact. Our framework closes this gap by requiring that all activities executed during product development are translated into a financial baseline before production begins. This includes structured cost planning across materials, labor, and overhead, lifecycle targeting of expected Cost of Goods Sold (COGS), and early feasibility validation to ensure alignment between technical design and commercial viability.

Why it matters for bpExperts:

  • Informed Decision-Making: It provides the financial intelligence needed to evaluate whether an innovation is commercially viable before high-cost production begins.

  • Eliminates Financial Surprises: By establishing a standard cost early, you transform design data into clear expectations around margins and profitability.

  • Silo-Breaking: It enforces alignment between Design, Manufacturing, and Controlling, ensuring all functions are working toward the same financial objectives.

As reflected in our framework layout, organizations that embed financial checkpoints into their workflows improve operational efficiency and enable more consistent, data-driven decision-making across the enterprise.

Master the Flow: Join us every Monday as we continue bridging the gap between high-level strategy and operational excellence across the Process Industry landscape! Discover more about our methodology here: https://www.bpexperts.de/business-flows

Next Monday, we move from the “Cost-Counter” to the technical foundation of production. We’ll explore how “Virtual Tanks” enable control, traceability, and balance management in liquid process manufacturing.

The Brain Gets Smarter: How Multi-Agent AI Turns a Process Repository into a Living Intelligence System

A follow-up to: "Your Signavio–CALM Integration Is a Pipe. We Built a Brain."

In our previous article, we showed how connecting SAP Signavio and SAP Cloud ALM through a knowledge graph transforms a data pipeline into something that can reason. The brain existed. It could answer questions about what was in scope, where gaps were, and how processes connected to SAP scope items.

The question we kept getting was: what does the brain actually think — and how does it get smarter over time?

This article is the answer.

The problem with a brain that only knows process structure

A knowledge graph of processes, E2E domains, SAP scope items, and capabilities is a powerful foundation. But it answers only one type of question: what is. What processes do we have. What scope items are in scope. What scenarios the reference model defines.

The questions that actually drive value in BPM engagements are different. They are questions like:

  • Which AI use cases are validated for our Order-to-Cash scenarios, and what economic value do they represent?

  • If we automate invoice matching with an autonomous AI agent, which compliance obligations apply — and what controls must be built into the process design before we even talk about go-live?

  • Should we use SAP standard or a best-of-breed solution for financial planning, and where does that decision change if we need sophisticated scenario modelling?

These questions require not just process knowledge, but three additional dimensions: innovation context (what AI use cases exist and what they deliver), compliance knowledge (what regulatory obligations apply to which processes and AI systems), and balanced evaluation (how competing solution options score against each other).

And they require these dimensions to be in genuine tension with each other — argued, scored, and resolved — not averaged away into a diplomatically acceptable middle ground.

Adding the three dimensions to the brain

The process reference repository remains the backbone. It is what grounds every answer in the structure of your actual process landscape, not in generic best practice.

But now three knowledge streams flow into it continuously.

AI use cases are mapped directly to E2E scenarios. When a process analyst surfaces a specific scenario — say, vendor invoice clearing in the A2R domain — the system already knows which AI accelerators have been validated for that scenario, what their descriptions are, and what transformation they enable. This is not a generic list of AI possibilities. It is a specific, curated set of use cases anchored to your process structure.

Compliance obligations are structured as knowledge nodes, linked to the scenarios they constrain. GDPR Article 22 (automated decision-making) is linked to every scenario where an AI system could make decisions affecting individuals without human review. SOX segregation of duties obligations are linked to every A2R, O2C, and Pl2P financial flow. GxP validation requirements are linked to quality management scenarios. When a scenario is surfaced in a debate, the compliance obligations that apply to it are loaded automatically — not looked up manually, not forgotten.

Market signals — regulatory updates, BPM research, SAP roadmap developments — flow in as additional context that the agents can draw on when the question requires current awareness rather than only structured reference data.

What makes this different from simply having three separate databases is the graph structure. The relationships are explicit. An AI use case is not just "relevant to financial planning" — it is specifically linked via a typed relationship to the E2E scenario it accelerates. A compliance obligation is not just "applicable to AI" — it is linked to the specific AI accelerators it flags, with the penalty range and mandatory controls stored on the obligation node itself. When agents query the graph, they are not doing keyword search. They are traversing a connected structure that encodes what belongs together and why.
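As an illustration of the difference, a single graph traversal (with assumed labels and relationship types) can surface a scenario's validated AI use cases together with the obligations that flag them and the controls stored on the obligation nodes:

```python
# Assumed schema: (:AiUseCase)-[:ACCELERATES]->(:E2EScenario),
#                 (:Obligation)-[:FLAGS]->(:AiUseCase)
SCENARIO_CONTEXT = """
MATCH (u:AiUseCase)-[:ACCELERATES]->(s:E2EScenario {name: $scenario})
OPTIONAL MATCH (o:Obligation)-[:FLAGS]->(u)
RETURN u.name AS use_case,
       collect({article: o.article,
                penalty_range: o.penalty_range,
                controls: o.mandatory_controls}) AS obligations
"""
```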

Why multiple agents — and why they debate

The conventional approach to AI-assisted BPM advisory is a single conversation: ask a question, get a response. The response is usually balanced, reasonable, and completely uncommitted. It acknowledges that AI offers opportunities but also has risks. It notes that SAP standard and best-of-breed both have merits. It concludes with a recommendation to assess the specific context.

This is not useful to a practitioner trying to make a real decision.

What a good BPM recommendation actually requires is for competing perspectives to be articulated clearly, placed in genuine tension with each other, and resolved through a structured process — not smoothed away by a single model optimising for diplomatic acceptability.

The multi-agent approach makes each perspective a specialist:

A process analyst maps the question against the reference repository. Which E2E scenarios are implicated? Where are the gaps between current-state coverage and the reference model? What scope items should be in scope but aren't? This agent reasons from graph evidence and cites node IDs — its findings are verifiable, not asserted.

An innovator evaluates the economic opportunity. Which AI accelerators are mapped to the implicated scenarios? What is the case for digitalization and AI-augmentation? This agent argues for adoption when the evidence supports it — it is deliberately optimistic, not artificially neutral.

A compliance critic stress-tests every proposal against the applicable regulatory frameworks — GDPR, EU AI Act, ISO 27001, GxP, SOX. It enters the debate knowing which obligations are linked to the scenarios under discussion, and it argues against adoption unless those obligations can be met. It is the hardest voice to satisfy. That is its value. When the critic flags that an autonomous invoice reconciliation agent triggers GDPR Article 22, SOX segregation of duties, and EU AI Act Article 14 (human oversight requirements), it does so with specific article citations, penalty ranges, and mandatory controls — not with a generic "please consider data protection".

A solution arbiter evaluates the solution options on a scored matrix. SAP standard versus best-of-breed. Digital process automation versus AI-augmented solutions. It scores each on functional fit, implementation effort, TCO, vendor lock-in, and time-to-value — without favouring either axis.

An orchestrator runs the debate. It assigns questions to agents, collects positions, scores convergence, and decides when the positions have sufficiently aligned to produce a synthesis. If the critic has unresolved CRITICAL risks, the debate continues. If all four agents have reached compatible positions, the orchestrator halts and hands the transcript to the synthesizer.
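A minimal sketch of that halting logic, assuming hypothetical agent and scoring interfaces, might look like this:

```python
# Illustrative halting logic; agent interfaces and the threshold are assumptions.
CONVERGENCE_THRESHOLD = 0.85
MAX_ROUNDS = 6

def run_debate(question, agents, critic, synthesizer, score_convergence):
    positions = {}
    for _ in range(MAX_ROUNDS):
        for agent in agents:
            positions[agent.name] = agent.respond(question, positions)
        if any(r.severity == "CRITICAL" for r in critic.open_risks()):
            continue  # unresolved CRITICAL risks keep the debate open
        if score_convergence(positions) >= CONVERGENCE_THRESHOLD:
            break     # positions are compatible: hand the transcript to the synthesizer
    return synthesizer.synthesize(positions)
```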

The synthesizer produces a final output that represents every perspective fairly — including unresolved risks, which are flagged prominently rather than buried in a risk register no one reads.

The process designer then converts the agreed synthesis into formal process artefacts: BPMN process structures, SIPOC tables, and Turtle diagrams that already encode the compliance controls that the debate established are mandatory. The human-in-the-loop checkpoint that GDPR Article 22 requires is not added later as an afterthought — it is modelled in the BPMN from the start because the critic made it a precondition of convergence.

What continuous learning actually means

A system that answers a question once and then forgets everything is not meaningfully intelligent. The brain needs to accumulate.

Every debate session writes back to the graph. The positions each agent took, the evidence nodes they cited, the risks the critic raised — all become part of an auditable history that can be queried, analysed, and learned from.

More importantly, the knowledge the system acquires in one engagement becomes available in the next. A compliance document uploaded for a pharmaceutical client — a GxP SOP, an ISO 27001 policy, a GDPR transfer impact assessment — is stored as a structured knowledge node, linked to the scenarios it constrains, and automatically loaded by the compliance critic in every future debate where those scenarios appear. The document does not need to be re-uploaded. The obligation does not need to be re-explained.

Cypher query patterns that prove reliable in one engagement become encoded as agent skills — loaded into the relevant agent's context at the start of subsequent debates so that it immediately knows the right way to traverse the graph for that type of question.
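One way to picture that mechanism is a store of named, proven query templates rendered into the relevant agent's context at session start; the storage format below is an assumption, not the actual implementation:

```python
# Hypothetical skill store; the storage format is an assumption.
AGENT_SKILLS = {
    "process_analyst": [{
        "name": "scope_gap_check",
        "cypher": """
            MATCH (s:E2EScenario {name: $scenario})-[:REQUIRES]->(i:ScopeItem)
            WHERE NOT (i)<-[:IN_SCOPE]-(:Project {id: $project})
            RETURN i.code AS missing_scope_item
        """,
    }],
}

def load_skills(agent_name: str) -> str:
    """Render an agent's proven query patterns into its starting context."""
    skills = AGENT_SKILLS.get(agent_name, [])
    return "\n\n".join(f"Skill '{s['name']}':{s['cypher']}" for s in skills)
```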

The reference model itself grows richer with every project. AI use cases validated in one client engagement are linked to scenarios and available for the next. Compliance obligations structured for one industry are automatically scoped to others where the same frameworks apply.

This is not model retraining. The underlying LLM does not change. What changes is the graph — progressively more scenarios mapped, more AI use cases validated, more compliance obligations structured, more proven reasoning patterns encoded. Each engagement benefits from every previous one. The longer the system runs, the more precisely it can ground its answers in the specific structure of your process landscape rather than in generic knowledge from training data.

The shift this creates in practice

For practitioners, the change in conversation is significant.

The starting point shifts from "based on our experience, we recommend..." to "based on your process structure — mapped against the reference model, with the following AI opportunities identified in the graph and the following compliance obligations confirmed — our recommendation is..."

The recommendation may be the same quality of judgement. The foundation is demonstrably different. The process analyst cited twelve node GUIDs. The compliance critic cited specific articles with specific penalty ranges. The solution arbiter produced a scored matrix. Every claim is traceable.

For organisations evaluating AI-assisted BPM advisory, the question to ask is not "how intelligent is the AI?" but "what does it reason from?" A system reasoning from a structured, compliance-linked, AI-enriched process repository is a fundamentally different proposition from a system reasoning from training weights alone — regardless of model size.

The reference repository is the IP. The agents are the reasoning engine. Together they produce something neither can produce alone: BPM intelligence that is grounded in your reality, balanced across competing perspectives, compliant by design, and continuously improving.

The pipe became a brain. Now the brain is learning to think for itself.

At bpExperts we are building and validating this approach in live client engagements. The architecture described here is the result of sustained development — connecting SAP scope items, AI use cases, and regulatory obligations into a knowledge graph that specialist agents reason from in real time. We are happy to explore what this looks like for your organisation.

Follow the BPM360 Podcast for the intersection of process management, AI, and organisational transformation.

bpExperts Employee Spotlight: Navigating Culture, Projects, and Remote Work with Kristina

Welcome to the second episode of bpExperts Employee Spotlight—a series dedicated to introducing the people behind bpExperts. In this edition, we shine the spotlight on Kristina, a Senior Consultant with over five years of experience, who shares her journey, insights into international collaboration, and what it means to work in a remote, global environment.

From process modeling to business transformation, Kristina’s role combines technical expertise with cross-cultural collaboration—offering a unique perspective on modern consulting.

A Dynamic Role in Process and Transformation

Kristina’s journey at bpExperts began during her student years and has since evolved into her current role as a Senior Consultant. Her work focuses primarily on process architecture and process modeling - key elements in helping organizations optimize and streamline their operations.

In addition, she contributes to business transformation projects, particularly in change management and supporting users as they transition to new systems. Working closely with diverse teams, Kristina helps map processes and improve workflows across industries.

What keeps her motivated is the variety. “Every project is different,” she explains. “You may have an idea of what to expect, but there’s always something unique depending on the client or industry.” This constant change makes her work both challenging and rewarding.

Working Across Cultures

Originally from Slovakia, Kristina has lived and worked in Austria and Portugal—an experience made possible by bpExperts’ flexible and international work environment. This exposure has given her valuable insights into different working styles and cultural perspectives.

She notes that while differences in communication and structure can sometimes lead to misunderstandings, they also create opportunities for learning. “There are big differences in how people from various countries work,” she says. “But over time, you realize that people are open and collaborative - and sometimes your assumptions turn out to be wrong.”

These experiences have helped her develop adaptability and a deeper appreciation for global teamwork.

Staying Productive in a Remote Environment

Remote work is an integral part of Kristina’s daily routine, and she has developed strategies to stay focused and balanced. She structures her day around her most productive hours, typically in the morning, allowing her to work efficiently without feeling overwhelmed.

Equally important is maintaining a routine outside of work. Whether it’s meeting friends or staying active through sports, Kristina emphasizes the importance of stepping away from the screen. “It helps me avoid feeling stuck at home all day,” she explains.

She also relies on small habits to stay energized and organized - planning her day in advance, keeping water nearby, and enjoying simple rituals like her morning coffee. These routines help create structure and consistency in a remote setting.

A Supportive and Open Culture

When describing the culture at bpExperts, Kristina highlights the openness and kindness of her colleagues. Despite working remotely, there is a strong sense of connection and collaboration across the team.

“The environment is very supportive,” she says. “People are genuinely kind, and communication is always clear.” Whether it’s asking questions or solving challenges, support is always within reach.

This culture of transparency and teamwork is one of the key reasons Kristina enjoys working at bpExperts and sees herself continuing her journey with the company.

Looking Ahead

Kristina’s story reflects what makes bpExperts unique: a combination of diverse projects, international opportunities, and a collaborative culture. Her experience shows how adaptability, curiosity, and strong routines can help professionals thrive in a global and remote work environment.

As bpExperts continues to grow, stories like Kristina’s highlight the people and values that shape the company.

Stay tuned for more bpExperts Employee Spotlights - featuring the individuals who bring expertise and culture together across borders.