
Balancing Act: Robustness with Customer Focus

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

December 5, 2025

In large-scale enterprise IT, there is a constant, almost gravitational tension between two opposing forces.

On one side, you have the market. It demands speed and radical customer focus. It wants new features deployed yesterday. On the other side, you have Governance. It demands stability, compliance, and zero-risk operations. For years, the industry buzzword was “Agility.” The narrative was that if we just adopted enough scrum teams and microservices, the tension would vanish. But in regulated sectors, treating every system as a “move fast and break things” playground simply isn’t an option.

Strategic architecture isn’t about choosing between speed and stability. It is about architecting a landscape where both can exist in harmony. The biggest mistake I see in transformation roadmaps is the attempt to apply a single methodology to the entire IT landscape.

If you treat your core transaction systems like a marketing app, you introduce unacceptable risk. Conversely, if you treat your customer-facing channels with the heaviness of an SAP S/4HANA migration, you will be disrupted by a startup before you finish your requirements document.

To simplify and modernize a complex application landscape, we have to recognize that distinct layers breathe at different rates. I call this speed layering.

  • Systems of Record, where change is slow, deliberate, and expensive (by design). We want friction here because the cost of error is too high.
  • Systems of Differentiation, where unique business processes live. This layer connects the core to the customer. We need flexibility here to configure new products, but we still need structure.
  • Systems of Innovation, where we test new customer journeys. If an idea here fails, it should fail fast and cheap without shaking the foundation.

The architect role shifts from being the “standards police” to becoming a city planner. We don’t just draw diagrams, we define zones. We tell the organization: “Here, in the innovation zone, you can use the newest tech and deploy daily. But here, in the core, we measure twice and cut once.”
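The zoning idea above can be sketched as an explicit, machine-checkable policy. This is a minimal illustration, not a real tool; all layer names, cadences and limits are hypothetical.

```python
# A minimal sketch of "speed layering" as an explicit zoning policy.
# Every name and limit here is hypothetical, purely for illustration.

LAYER_POLICY = {
    "system_of_record":          {"release_cadence": "quarterly", "approvals_required": 2, "experimental_tech": False},
    "system_of_differentiation": {"release_cadence": "biweekly",  "approvals_required": 1, "experimental_tech": False},
    "system_of_innovation":      {"release_cadence": "daily",     "approvals_required": 0, "experimental_tech": True},
}

def may_deploy(layer: str, approvals: int, uses_experimental_tech: bool) -> bool:
    """Check a proposed change against the zoning rules of its layer."""
    policy = LAYER_POLICY[layer]
    # The innovation zone may use the newest tech; the core may not.
    if uses_experimental_tech and not policy["experimental_tech"]:
        return False
    # The core demands more sign-off: measure twice, cut once.
    return approvals >= policy["approvals_required"]
```

The point is not the code itself but that the zones become explicit: a change with zero approvals sails through the innovation zone and is stopped cold in the core.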

Implementing this strategy is rarely a technology problem; it’s a people challenge. It requires coaching stakeholders to understand that “slow” isn’t a dirty word; it’s a synonym for “reliable.” It requires mentoring solution architects to recognize which layer they are building in and to choose their tools accordingly.

When we get this right, we stop fighting the tension between Innovation and Stability. Instead, we use the solid foundation of the core to launch the rapid experiments of the future. We achieve a landscape that is calm at the center, but agile at the edge.

That is how you build an IT strategy that survives the long term.


Data Governance as the Bedrock of Effective AI Governance

Martijn Veldkamp


November 3, 2023

As an organisation you need a plan to address the market disruption of Generative AI. You don’t need to build your own version of ChatGPT. But you do need a plan for how your organisation will deal with all the initiatives that will start. Otherwise, good luck with the conversation you will have when one of your CxOs comes back from some partner-paid conference stating that the company will go bankrupt if you don’t invest right now.

In this series of articles I explore some of my current thinking on where Generative AI has its place.


In this ever-renewing push of the newest flavour of technology, the fusion of architecture, governance, and data governance stands as the cornerstone for reliability.

As organisations navigate their discovery of the complex realm of artificial intelligence, it becomes increasingly apparent that the success of implementing one of these LLMs (ChatGPT, Bard or Bing AI) is deeply entwined with the quality, security, and integrity of the data that they need and produce.

Effective AI governance is not just about fine-tuning algorithms or optimising your LLM models. It begins with the bedrock of quality.

Garbage in, garbage out


It’s the quality, accuracy, and reliability of the input data that dictates the usefulness of AI’s output. Thus, a holistic approach requires a very strong foundation in data quality and its governance. And remember, the prompts that you use for getting results are also data that needs to be governed. How else will you establish a feedback loop on effective usage of the tool?
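Treating prompts as governed data can be made concrete with a small sketch: log every prompt and response along with a usefulness rating, then measure the feedback loop. The field names and rating scheme are hypothetical, for illustration only.

```python
# A minimal sketch of prompts-as-governed-data: every prompt/response pair
# is logged so a feedback loop on effective usage becomes possible.
# All field names are hypothetical illustrations, not a real schema.
from datetime import datetime, timezone

prompt_log: list[dict] = []

def log_prompt(user: str, prompt: str, response: str, useful: bool) -> dict:
    """Record a prompt/response pair plus a usefulness rating."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "useful": useful,
    }
    prompt_log.append(entry)
    return entry

def usefulness_rate() -> float:
    """The feedback loop: what share of logged prompts produced useful output?"""
    if not prompt_log:
        return 0.0
    return sum(e["useful"] for e in prompt_log) / len(prompt_log)
```

Once prompts live in a governed log like this, they can be retained, secured and analysed like any other dataset.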

The Interdependence of Data Governance and AI Governance

Data governance, as I’ve stated in the previous blog posts, primarily concerns itself with the management, availability, integrity, usability, and security of an organisation’s data.

AI in any form, by its nature, operates as an extension of the data it is fed. Without a sturdy governance structure over the data that you produce, AI governance becomes a moot point. On another note, I’m still surprised nobody came up with an AI that generates cute cat short clips for a YouTube channel. Wait, I’m on to something here…

Quality Data: The Lifeblood of AI

A key aspect is that the quality of data isn’t an isolated attribute but a collective responsibility of various departments within an organisation. We all know that, but where does the generated data sit?

In the past I wrote about Systems Thinking and I still have to plot for myself where Generative AI sits. Is it like our imagination? Where do you master the data an LLM generates for you? Can I re-generate it reliably? What happens with newer generated outcomes? Are these better than the old? Is the generated response email owned by the Service Department or the AI team? These articles are as much for you as for me, to fully grok where LLMs and their outcomes sit in the system.

Security and Ethical Implications

Privacy concerns, compliance with regulations, and ethical considerations in handling and processing data are pivotal components of data governance. As AI systems often deal with sensitive information, ensuring compliance with data protection regulations and ethical use of data becomes a critical component of AI governance. The same goes for the outputs. Where are they used or stored? How do these different data providers compare? The question that popped up in my head: within the Salesforce ecosystem we use a lot of Account data and have it linked with third-party providers. We enrich the data that we have on the customer with Dun & Bradstreet information, or in the Netherlands with the KVK register. What happens to the ‘authority score’ if we add Generative AI to the mix? We still have a lot to discover together.

Keep it simple


In short, because I have harped on it before, organisations should:

  • Establish Comprehensive Data Governance Frameworks: Institute clear policies for data ownership, stewardship, and data management processes. This not only fosters quality but also ensures accountability and responsibility in data handling.
  • Promote Cross-Functional Collaboration: Break down silos and encourage collaboration between various departments. Not just good for data quality but for many more aspects in life.
  • Leverage Automation for Data Quality Assurance: Harness the power of automation tools to identify anomalies and inconsistencies within data, ensuring high-quality inputs for AI models. Ever did a large migration from one system to another? Right, automation for the win!
  • Continuously Monitor and Improve Data Governance: Implement systems for ongoing monitoring of data quality. We have a Dutch expression which translates to something like “The polluter pays”. Bad data has so many downstream effects that I almost want to advise a monthly blame-and-shame highlight list. Let’s forget about that for now. I do however want to stress a carrot-and-stick approach.
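The automation point can be made concrete with a small sketch: a rule-based check that flags missing or inconsistent fields before records feed an AI model. The record shape and rules are hypothetical, for illustration only; real tooling would be far richer.

```python
# A minimal sketch of automated data quality assurance: flag records with
# missing or inconsistent fields before they reach an AI model.
# The record shape and checks are hypothetical illustrations.

REQUIRED_FIELDS = ("name", "email", "country")

def find_anomalies(records: list[dict]) -> list[tuple[int, str]]:
    """Return (record index, problem) pairs for records failing basic checks."""
    problems = []
    for i, rec in enumerate(records):
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                problems.append((i, f"missing {field}"))
        email = rec.get("email", "")
        # A crude consistency check; real validation would be stricter.
        if email and "@" not in email:
            problems.append((i, "malformed email"))
    return problems
```

Run this kind of check continuously, not once: the monitoring bullet above is exactly this loop, scheduled.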

Conclusion

In my subsequent articles, I’ll try to delve deeper into the practical strategies and steps organisations can adopt, and make it more Salesforcy.

Borrowed from https://blog.kore.ai/conversational-ai-top-20-trends-for-2020


AI Governance

Martijn Veldkamp


October 4, 2023

AI governance is crucial to strike a balance between harnessing the promised benefits of AI technology and safeguarding against its potential risks and negative consequences.

It’s a very interesting and still emerging problem: how to effectively govern the creation, deployment and management of these new AI services, end to end.

Heavily regulated industries, such as banking, and public services, such as tax agencies, are legally required to provide a level of transparency on how they operate their IT. And this is also true for their AI models. Failure to offer this transparency can lead to severe penalties. AI models, like the algorithms that preceded them, can no longer function as a mystery.

The funny thing is that AI and its governance is a hot topic, but Data Governance…? That ranges from boring to “Wasn’t that solved already?”.

It’s still all about that data

The real challenge is always the data. If you think about it, AI is about what data you trained it on, what you want to use it on, and when. So AI Governance is not just the algorithms and the models. It starts with data.

It’s no longer enough to secure your data and say you will comply with privacy laws. You have to be, verifiably, in control of the data you are using. Both in and out of the AI models you are using.

Its source, its provenance and its ownership. What rights do you have? What rights does the provider of that data have? We are not even beginning to scratch the surface of how to enforce those rights.

When planning to use AI, ensure your data is accurate, complete, and of high quality

And this starts at the collection of data. Does it provide accurate information? Am I missing data? Is the source reliable, timely and of high quality? How will we measure and assess whether the data collection is working as intended?

Human error is one of the easiest ways to lose data integrity. Well, you can call it human error, but it also boils down to whether the system is set up in a way that it actually makes sense to enter all that data here and now. If users are not willing to enter all the necessary data, your data sets will never reach a high enough quality.

Systems integrating with one another are also a great way to lose data integrity. The moment systems have different implementations of concepts like Customer or Order, and of their lifecycles, you will have a hard time combining that data.
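The Customer mismatch can be sketched in a few lines: two systems that model the same concept differently only become comparable after both are mapped to one canonical shape. All system and field names here are hypothetical, purely for illustration.

```python
# A minimal sketch of why integration loses data integrity: a hypothetical
# CRM and a hypothetical billing system both hold "Customer", but shaped
# differently. A mapping layer makes the concepts comparable.

def from_crm(rec: dict) -> dict:
    """The CRM stores a single display name and a lifecycle stage."""
    return {"name": rec["display_name"], "active": rec["stage"] != "churned"}

def from_billing(rec: dict) -> dict:
    """Billing stores first/last name and a boolean activity flag."""
    return {"name": f'{rec["first"]} {rec["last"]}', "active": rec["is_active"]}

def same_customer(a: dict, b: dict) -> bool:
    """Only after mapping to one canonical shape can we compare records."""
    return a["name"].lower() == b["name"].lower()
```

Without the mapping layer, `stage != "churned"` and `is_active` silently mean different things, and that gap is exactly where integrity leaks away.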

In order to successfully introduce AI in your business, you have to be in control of your data. That data is created throughout your application landscape. From the earlier iterations of Data Warehouses to the rise to prominence of Data Scientists, it’s all about data: its integrity, security and quality. That hasn’t changed.

How you frame your problem will influence how you solve it

According to The Systems Thinker, if a problem meets these four criteria, it could benefit from a systems thinking approach:

  • The issue is important
  • The problem is recurring
  • The problem is familiar and has a known history
  • People have unsuccessfully tried to solve the problem

We need to make sense of the complexity of the application landscape by looking at it in terms of wholes and relationships rather than by splitting it down into its parts. Then the flow of data throughout will start to make sense. Only then can we start addressing the lack of Data Quality, Data Security and Data Integrity.

And who knows, if that all is solved we can start thinking about where it actually makes sense to introduce AI.

Delivery is like a Relay Run

Faulty handover costs gold medal (image: NOS)

As promised in my earlier blogs, I am writing a blog a month about Architecture, Governance and Changing the Process or the Implementation. Last month was about technical debt. This month it is about going live successfully. Why? The relay race that the Dutch team lost due to a faulty handover got me thinking about software delivery, going live, handover moments and risk mitigation. Next to training to sprint really fast, relay teams also train the handover extensively. And despite all this training, this time it went wrong. At an elite Olympic athlete level!

To be fair, there are many examples of handovers going wrong:

A balcony on the wrong side of the building. And nobody noticed or said anything?

Processes, tools and safety measures

Successful projects have certain elements and key traits in common. These traits consist of:

  • Mature, agreed-upon processes with KPIs and a feedback loop to standardize the handovers
  • Automation to support these processes
  • Safety measures for Murphy’s law (when even processes can’t save you)

A key principle is not to drown the organisation in red tape or make things more complicated than necessary. Like my first blog: “Simplify and then add lightness”. We need these processes to progress in a sustainable and foreseeable way towards a desirable outcome: going live with your Salesforce implementation.

These processes are there to safeguard the handovers. The part of the Dutch relay team that made me think about our own relay runs and their associated risks.

Handovers

The main handovers are:

User → Product Owner → Business Requirement → User Story → Solution Approach → Deploy → User.

As you can see it is a circle and with the right team and tools it can be iterated in very short sprints.

User → Product Owner

“If I had asked people what they wanted, they would have said faster horses.”

Henry Ford

Okay, so there is no evidence that Ford ever said anything similar to these words, but they have been attributed to him so many times that he might as well have. I want to use the quote to show different methods of getting to an understanding of User Needs. The two sides are innovating through tightly coupled customer feedback, or through visionaries who ignore customer input and instead rely on their vision for a better product.

Having no strong opinion on either approach, I still tend to be a bit more risk averse and like to have feedback as early as possible. This is perhaps not a handover in the true sense that you can influence as an architect, but getting a true sense of User Needs might be one that is essential for your Salesforce project to succeed.

I still remember the discussion with a very passionate Product Owner: we need a field named fldzkrglc for storing important data. When diving deeper we found it was a custom field in SAP that was derived from the previous mainframe implementation. So that basically meant the requirements were 50 years old. Innovation?

Business Requirement → User Story

User Stories for Dummies

There are many ways the software industry has evolved. One of them is how we write down User Needs. A simple framework I use for validating User Stories is the 3Cs. Short recap:

  • Card: essentially the story printed on a card with a unique number. There are many tools to support that.
  • Conversation: the discussion around the story that basically says “AS A … I WANT … SO THAT I …”. It’s a starting point for the team to get together and discuss what’s required.
  • Confirmation: essentially the acceptance criteria, which at a high level are the test criteria confirming that the story works as expected.

An often-used gauge is the Definition of Ready (DoR). It is a working agreement between the team and the Product Owner on what readiness means, and a way to indicate that an item in the product backlog is ready to be worked on.

As handovers and risks go, the quality and quantity of the user stories largely determine the greatness of the Salesforce implementation. Again, as an architect you can influence only so many things, but in order to bring innovation and move fast, User Stories are key.

User Story → Solution Approach

This is where, as an architect, you can have a solid impact. This is where your high-level architecture, solution direction and day-to-day choices come together. This is your architecture handover moment, when you work together with the developers and create the high-level design based on the actual implemented code base. The group as a whole can help find logical flaws, previously wrong decisions and tech debt. The architecture becomes a collaboration. As I wrote earlier, keep it simple and remember Gall’s law. It explains why to strive for as few parts as possible in your architecture.

“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

John Gall: General systemantics, an essay on how systems work, and especially how they fail, 1975

Next to keeping it simple, I also firmly believe that there should be a place to try out and experiment with the new technology that Salesforce brings. The earlier mentioned experimenting phase fits perfectly. Why only prototype the new business requirements? It is a great place to test out all the cool new technical things Salesforce offers, like SFDX, Packages or even Einstein, and evaluate the value and impact they could have on your Salesforce org.

Deployment

In any software development project, the riskiest point as perceived by the customer is always go-live time. It’s the first time that new features come into contact with the real production org. Ideally, during a deployment, nobody will be doing anything they haven’t done before. Improvisation should only be required if something completely unexpected happens. The best way to get the necessary experience is to deploy as often as possible.

“In software, when something is painful, the way to reduce the pain is to do it more frequently, not less.”

David Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

So establish a repeatable process for going live and perform it many times. This sounds easy but remember that during an Olympic relay race it still went wrong.

Salesforce Sandboxes and Scratch Orgs provide a target org to practice your deployments. They are meant for User Acceptance tests, but also for making sure that everything will deploy successfully. They can also give developers the necessary experience and feedback from deploying their work while it’s in progress. So now that we have a target, we need tools to help manage the drudgery.

There are whole suites of tools specifically built to support the development team in this, from Gearset to Copado, and Bluecanvas to Flosum. There is a lot; there are even teams that support the business with their own build toolset on top of Salesforce DX. It is good practice to choose a tool that supports you and your go-live process and helps automate as much as possible.

Safety measures

We have agreed-upon working processes, we measure the quality of handover moments, we automated deployments with bought or homegrown tools. Now what?

Even Olympic athletes make mistakes, so what can we do with software and databases that is impossible in the physical world? Backups!

A lot of Salesforce deployments, especially for larger customers, tend to be fairly data driven. Next to business data such as Accounts, Contacts and Orders, there is configured business-rule data, for example with CPQ. Next to that, there is technical data or metadata that is meant for Trigger and Flow frameworks, internationalisation and keeping local diversifications maintainable.

Deploying this data, or making any change in a Production Org, calls for backups. A good practice is to have a complete org backup before you release.

Key takeaways?

  • Establish a process and start ‘training’ your handover moments
  • Automate your release process as much as possible and measure your progress
  • When a handover goes wrong have some safety measures in place

Cloud adoption! Do you have a strategy?

As conversations about the Cloud continue to focus on IT’s inability to adopt it (or the gap between IT and Business), organizations outside of IT continue their cloud adoption. While many of these efforts are considered Rogue or Shadow IT and are frowned upon by the IT organization, they are simply a response to a wider problem.

The IT organization needs to adopt a cloud strategy; a holistic one is even better. However, are they really ready for this approach? There are still CIOs who are resisting cloud.

A large part of the problem is that most organizations are still in a much earlier state of adoption.

Common hurdles are:

  1. The mindset: “critical systems may not reside outside your own data center”
  2. Differentiation: “our applications and services are true differentiators”
  3. Organizational changes: “moving to cloud changes how our processes and governance models behave”
  4. Vendor management: “we like the current vendors and their sales representative”

In order to develop a holistic cloud strategy, it is important to follow a well-defined process. Plan Do Check Act fits just about any organization:

Assess: Provide a holistic assessment of the entire IT organization, applications and services that are business focused, not technology focused. Understand what is differentiating and what is not.

Roadmap: Use the options and recommendations from the assessment to provide a roadmap. The roadmap outlines priorities and valuations.

Execute: For many, it is important to start small because of the lower risk, and ramp up where possible.

Re-Assess & Adjust: As the IT organization starts down the path of execution, lessons are learned and adjustments are needed. Those adjustments will span technology, organization, process and governance. Continual improvement is a key hallmark of staying in tune with changing demands.

Today, cloud is leveraged in many ways from Software as a Service (SaaS) to Infrastructure as a Service (IaaS). However, it is most often a very fractured and disjointed approach to leveraging cloud. Yet, the very applications and services in play require that organizations consider a holistic approach in order to work most effectively.


Beyond the Policy PDF

Martijn Veldkamp


June 27, 2025

An Architect’s Guide to Governing Agentic AI

Your company is rushing to deploy autonomous AI. A traditional governance playbook won’t work. The only real solution is to architect for control from day one.

You can feel the pressure on CTOs and Chief Architects from every direction. The board wants to know your AI strategy. Your development teams are experimenting with autonomous agents that can write their own code, access internal systems, and, “oh no!”, interact with customers. I understand. The marketing engine is working overtime and the promise is enormous: radical efficiency, hyper-personalized services, and a significant competitive edge.

But as a CIO or CTO, you’re the one who has to manage the fallout. You’re the one left with the inevitable governance nightmare when an autonomous entity makes a critical mistake.

Statement of intent

Let’s be clear: the old governance playbook is obsolete. A 20-page PDF outlining “responsible AI principles” is a statement of intent, not a control mechanism. In the age of agents that can act independently, governance cannot be an afterthought! I strongly believe it must be a core pillar of your Enterprise Architecture.

This isn’t about blocking innovation. It’s about building the necessary guardrails to accelerate, safely.

The New Risk: Why Agentic AI Isn’t Just Another Tool

We must stop thinking of Agentic AI as just another piece of software. Traditional applications are deterministic; they follow pre-programmed rules. Agentic AI is different. It’s a new, probabilistic class of digital actor.

A great example I heard:

Think of it this way. You just hired a million-dollar team of hyper-efficient, infinitely scalable junior employees. But they have no inherent (learned) common sense, no manager, and no intuitive understanding of your company’s risk appetite. It makes me think of the game Lemmings.

This looks a lot like when IaaS, PaaS & SaaS and the move towards the Cloud were being discussed, but with something extra:

  • Data Exfiltration & Leakage: An agent tasked with “summarizing sales data” could inadvertently access and include sensitive data in its output, as seen when Samsung employees leaked source code via ChatGPT prompts.
  • Runaway Processes & Costs: An agent caught in a loop or pursuing a flawed goal can consume enormous computational resources in minutes, long before a human can intervene. The $440 million loss at Knight Capital from a faulty algorithm is a stark reminder of how quickly automated systems can cause financial damage.
  • Operational Catastrophe: An agent given control over logistics could misinterpret a goal and reroute an entire supply chain based on flawed reasoning, causing chaos that takes weeks to untangle.
  • Accountability Black Holes: When an agent makes a decision, who is responsible? The developer? The data provider? The business unit that deployed it? Without a clear audit trail of the agent’s “reasoning,” assigning accountability becomes impossible.

A policy document can’t force an agent to align with your business intent. The only answer is to build the controls directly into the environment where the agents live and operate.

Architecting for Control

Instead of trying to police every individual agent, a pragmatic leader architects the system that governs all of them.

Pillar 1: The Governance Gateway

Before any agent can execute a significant action, like accessing a database, calling an external API, spending money, or communicating with a customer, it must pass through a central checkpoint. This Governance Gateway is where you enforce the hard rules:

  • Cost Control: Set strict budget limits. “This agent cannot exceed $50 in compute costs for this task.”
  • Risk Thresholds: Define the agent’s blast radius. “This agent can read from the Account object, but can only write to a Notes field.”
  • Tool Vetting: Maintain an up-to-date “allowed list” of approved tools and APIs the agent is permitted to use.
  • Human-in-the-Loop Triggers: For high-stakes decisions, the design should automatically pause the action and require human approval before proceeding.

This should be a familiar concept because I borrowed it from API gateways, now applied to agentic actions. An approved design is your primary lever of control.
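The gateway rules can be sketched in a few lines of code: every proposed action is checked against budget, blast radius, allowed tools and a human-in-the-loop threshold before it may run. This is a minimal illustration under assumed limits; tool names, objects and dollar amounts are all hypothetical.

```python
# A minimal sketch of a Governance Gateway: every significant agent action
# is checked against hard rules before it may proceed.
# Limits, tool names and object names are hypothetical illustrations.

GATEWAY_RULES = {
    "budget_usd": 50.0,                            # cost control
    "allowed_tools": {"crm_read", "notes_write"},  # tool vetting
    "writable_objects": {"Notes"},                 # risk threshold / blast radius
    "human_approval_over_usd": 25.0,               # human-in-the-loop trigger
}

def check_action(tool: str, target_object: str, is_write: bool,
                 est_cost_usd: float) -> str:
    """Return 'allow', 'deny', or 'needs_human' for a proposed agent action."""
    if tool not in GATEWAY_RULES["allowed_tools"]:
        return "deny"                       # tool not on the allowed list
    if est_cost_usd > GATEWAY_RULES["budget_usd"]:
        return "deny"                       # would blow the budget
    if is_write and target_object not in GATEWAY_RULES["writable_objects"]:
        return "deny"                       # outside the blast radius
    if est_cost_usd > GATEWAY_RULES["human_approval_over_usd"]:
        return "needs_human"                # high stakes: pause for approval
    return "allow"
```

Exactly like an API gateway, the agent never talks to the backend directly; it talks to this checkpoint, and the checkpoint decides.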

Pillar 2: Decision Traceability

When something goes wrong, “what did the agent do?” is the wrong question. The right question is, “why did the agent do it?” Standard logs are insufficient. You need a system dedicated to providing deep observability into an agent’s reasoning.

This system must capture:

  • The Initial Prompt/Goal: What was the agent originally asked to do?
  • The Chain of Thought: What was the agent’s step-by-step plan? Which sub-tasks did it create?
  • The Data Accessed: What specific information did it use to inform its decision?
  • The Tools Used: Which APIs did it call and with what parameters?
  • The Final Output: The action it ultimately took.

This level of traceability is non-negotiable for forensic analysis, debugging, and, crucially, for demonstrating regulatory compliance. It’s the difference between a mysterious failure and an explainable, correctable incident.
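A trace that answers “why did the agent do it?” can be sketched as one structured record per decision, capturing the five elements listed above. Field names are hypothetical, purely for illustration.

```python
# A minimal sketch of decision traceability: one structured record per agent
# decision, so forensics can answer "why", not just "what".
# All field names are hypothetical illustrations.

def make_trace(goal: str, plan: list[str], data_accessed: list[str],
               tool_calls: list[dict], final_output: str) -> dict:
    """Bundle the reasoning context of a single agent decision."""
    return {
        "goal": goal,                  # the initial prompt/goal
        "chain_of_thought": plan,      # step-by-step plan and sub-tasks
        "data_accessed": data_accessed,
        "tool_calls": tool_calls,      # which APIs, with what parameters
        "final_output": final_output,  # the action ultimately taken
    }

def explain(trace: dict) -> str:
    """Turn a trace into a forensic one-liner for an incident review."""
    steps = " -> ".join(trace["chain_of_thought"])
    return f'Goal "{trace["goal"]}" via [{steps}] produced: {trace["final_output"]}'
```

With records like this, a mysterious failure becomes an explainable, correctable incident: you can replay the chain of thought instead of guessing at it.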

Pillar 3: Architected Containment

You wouldn’t let a new employee roam the entire corporate network on day one. Don’t let an AI agent do it either. Agents must operate within carefully architected contained environments.

This goes beyond standard network permissions. Architected Containment means:

  • Scoped Data Access: The agent only has credentials to access the minimum viable dataset required for its task.
  • Simulation & Testing: Before deploying an agent that can impact real-world systems, it must first prove its safety and efficacy in a high-fidelity simulation of that environment.

Containment isn’t about limiting the agent’s potential; it’s about defining a safe arena where it can perform without creating unacceptable enterprise-wide risk.
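Scoped data access can be sketched as a per-task scope map: the data layer refuses any read outside the minimum viable dataset for that task. Task and dataset names are hypothetical, for illustration only.

```python
# A minimal sketch of architected containment: an agent only receives access
# to the minimum viable dataset its task requires.
# Task and dataset names are hypothetical illustrations.

TASK_SCOPES = {
    "summarise_sales": {"sales_orders"},
    "draft_support_reply": {"support_tickets"},
}

class ScopeError(Exception):
    """Raised when an agent reaches outside its contained arena."""

def read_dataset(task: str, dataset: str, datastore: dict) -> list:
    """Serve data only if it falls inside the task's contained scope."""
    if dataset not in TASK_SCOPES.get(task, set()):
        raise ScopeError(f"task {task!r} may not read {dataset!r}")
    return datastore[dataset]
```

The agent can still do everything its task needs inside the arena; the hard wall only appears when it reaches beyond it.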

From Risk Mitigation to Strategic Advantage

Building this architectural foundation may seem like a defensive move, but it is fundamentally an offensive strategy. This first version of the framework is more than a set of features; it’s a strategic shift. It allows you to move away from the impossible task of policing individual agents and towards the pragmatic, scalable model of architecting their environment.

This is how you build a platform for innovation based on trust, safety, and control. It’s how you empower your organization to deploy more powerful AI, faster, because the guardrails are built-in, not bolted on.

The organizations that master AI governance will be the ones that can deploy more powerful agents, more quickly, and with greater confidence than their competitors. They will unlock new levels of automation and innovation because they have built a system based on trust and control.

This architecture transforms the role of EA and IT leadership. You are no longer just a support function trying to keep up; you become the strategic enabler of the company’s AI-powered future. You provide the business with a platform for safe, scalable experimentation.

The conversation needs to shift today. Stop asking “what could this AI do?” and start architecting the answer to “what should this AI be allowed to do?”

What’s your take? Of these three pillars, Gateway, Decision Traceability, and Containment, which presents the biggest architectural challenge for your organization right now?

Share your thoughts in the comments below.