
The Low-Hanging Fruit Diet: Nutritionally Deficient in Strategic Value

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

March 28, 2025

“We are kept from our goals not by obstacles but by a clear path to lesser goals.”

I’m not sure who pointed me in the direction of Robert Brault, but I love that quote. It resonates with me on different levels. If I hear “low-hanging fruit” one more time, I’m going to scream! Let’s just say my definition of “low-hanging fruit” involves more planning and less bruising.

Forest for the trees?

When we talk about the complexities of implementing new systems and strategies, say something with AI, we often focus on achieving immediate results. We need to show tangible benefits, now. With this pressure on the “lesser goals” there is a risk that we don’t see the forest because we’re focused on the trees. An unintended consequence is that we start steering toward the wrong outcome, and thus off course from our larger objectives.

For example: back to the simple act of washing dishes. As I mentioned before, our dishwasher broke down, which reminded me that automation transforms the tasks, not the outcome. But clean tupperware is not my end goal. Nor is a clean house. My personal end goal is comfortable living. I don’t want to live in a showroom or museum (it’s also not possible with four cats and two dogs). The “lesser goal” of automating the cleaning of the dishes introduced a new set of tasks: correct loading, filter cleaning, and, if you’re lucky, cryptic error codes. It’s up to me to decide what I would rather do: what is an efficient use of my time, how much do I hate doing it, and do I actually mind when I’m sitting very comfortably on my couch?

Now, let’s switch this up. Imagine your idealized environment with a Salesforce rollout, where you implement specific features or modules, for example automating lead capture.

These actions are aimed at achieving those “lesser goals”: more leads, faster follow-ups, improved efficiency in a specific area. You have all these kinds of Key Performance Indicators (KPIs) that you diligently track. There is an uptick in lead volume, a reduction in the average time to qualify a lead, or a faster completion rate for a specific workflow. This is the bright, reassuring signal that our “lesser goal” is being achieved.

Unforeseen side effects

There are these unforeseen side effects that might not be immediately obvious. These can hinder our progress towards our larger, strategic objectives. The normal thinking is that more leads, more qualified leads, more opportunities, more closed won. But there is also the risk of:

Data Quality Degradation: The ease of automated capture might lead to a flood of unqualified or duplicate leads, diluting the quality of your data and requiring significant cleanup efforts down the line.

User Frustration and Workarounds: A rigid process might not accommodate all cases, leading sales reps to develop inefficient workarounds (cough Excel cough) outside the system, undermining the very efficiency you aimed for.

Increased Reporting Burden, Diminished Insight: The focus on tracking the increased lead volume might lead to reports that no one has the time or expertise to analyze effectively, creating noise without genuine insight.

Silos Around Automated Processes: Teams might become overly reliant on the automated lead flow, neglecting cross-functional communication or losing sight of the bigger customer journey. This is like a localized concentration of “light” that doesn’t illuminate the entire tank.

This is where Brault’s quote hits home for me. Because we can fix all of that: de-duplication, validation rules, lead assignment, data quality reports. As you can see, these unintended consequences quietly accumulate and we end up with more busy work. If you become too focused on the easily measurable metrics, there is a risk that these very actions create new obstacles and divert us from our overarching strategic direction.
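To make that concrete, here is a minimal Python sketch of the kind of de-duplication and validation logic that ends up wrapped around automated lead capture. The lead shape and field names are illustrative assumptions only; in a real org you would enforce this with Salesforce duplicate rules and validation rules rather than custom code.

```python
# Minimal sketch: de-duplication and a simple validation rule for captured leads.
# Hypothetical data shapes only; a real org would enforce this inside Salesforce itself.

def normalize_email(email: str) -> str:
    return email.strip().lower()

def clean_leads(raw_leads: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split automatically captured leads into accepted and rejected."""
    seen_emails: set[str] = set()
    accepted, rejected = [], []
    for lead in raw_leads:
        email = normalize_email(lead.get("Email", ""))
        # Validation rule: require a usable email and a company name.
        if "@" not in email or not lead.get("Company"):
            rejected.append({**lead, "Reason": "missing or invalid fields"})
            continue
        # De-duplication: keep only the first lead per normalized email.
        if email in seen_emails:
            rejected.append({**lead, "Reason": "duplicate email"})
            continue
        seen_emails.add(email)
        accepted.append({**lead, "Email": email})
    return accepted, rejected

if __name__ == "__main__":
    captured = [
        {"Email": "Jane@Example.com", "Company": "Acme"},
        {"Email": "jane@example.com ", "Company": "Acme"},   # duplicate
        {"Email": "no-company@example.com", "Company": ""},  # fails validation
    ]
    ok, dropped = clean_leads(captured)
    print(len(ok), "accepted,", len(dropped), "rejected")
```

The point is not the code itself but the busy work: every “quick win” in capture quietly buys you a cleanup pipeline like this one.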

Losing direction while chasing immediate targets

This risk of losing direction while chasing immediate targets is very real. Just look at the rapidly evolving landscape of Artificial Intelligence. Like our automated lead capture, there is an allure of ‘quick wins’. This ‘low-hanging fruit’ everyone seems to like can create a similar set of unintended consequences. As I discussed in my previous article, ‘T.A.N.S.T.A.A.F.L.’, this ‘free lunch’ of uncoordinated AI initiatives often comes with hidden costs: duplicated efforts, integration nightmares, and a lack of strategic alignment.

The solution, much like ensuring our Salesforce implementation truly serves our overarching business objectives, lies in adopting an AI Game plan. This upfront investment in strategic thinking acts as our compass, maybe even our North Star. We all need to work out what comfortable living means for ourselves. This way we ensure that the individual ‘lesser goals’ we pursue with AI are not just shiny objects distracting us from the real destination.

A well-defined AI Game plan helps us anticipate and mitigate potential unintended consequences by:

Providing Strategic Alignment: Ensuring that every AI initiative, including those focused on immediate gains, is directly tied to our broader business goals. This helps us evaluate if the ‘clear path’ to a lesser AI goal is actually leading us towards our ultimate strategic vision.

Promoting Resource Optimization: Preventing the duplication of effort and the creation of isolated AI solutions that don’t integrate effectively, thus avoiding the ‘busy work’ of constantly patching disparate systems.

Establishing Data Governance: Implementing clear guidelines for data quality, security, and sharing, mitigating the risk of ‘data degradation’ and ensuring that the fuel for our AI initiatives is clean and reliable.

Encouraging Holistic Thinking: Fostering cross-functional collaboration and a shared understanding of the bigger picture, preventing the formation of ‘silos’ around individual AI applications.

Defining Measurement Beyond the Immediate: Establishing KPIs that not only track the success of the ‘lesser goals’ but also monitor for potential negative side effects and progress towards the overarching strategic objectives.

We need to look beyond the initial increase in leads and consider the quality and the impact on the sales team’s workflow in our Salesforce example. And an AI Game plan compels us to look beyond the initial promise of AI and consider its long-term impact on our people, processes, technology, and overall strategy.

Now, for a nice closure statement that ties everything together

Ultimately, the path to achieving our true goals is rarely a straight line of easily attainable milestones. It requires an awareness of the broader landscape and a willingness to look beyond the immediate allure of quick-wins.

Whether it’s ensuring clean tupperware contributes to a comfortable life, or that an automated lead capture process genuinely fuels business growth, a strategic ‘game plan’, either for our household chores or our AI ambitions, acts as our guiding star. All the while making sure that the many trees we focus on are indeed leading us towards the right forest.


The Evolution of Data Integration

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

November 1, 2024

In today’s interconnected applications landscape, we rely on data shared across multiple systems and services. Throughout my career as an architect, integration strategies have evolved from basic ETL database copies to monolithic middleware, operational databases, API-driven microservices, and now ZeroCopy patterns.

As I’m almost a year older again, I look back at all the stuff I built, supported, and architected. I see roughly four big phases in data integration architecture (cool idea for a t-shirt).

ETL -> Middleware -> Microservices -> ZeroCopy

Phase 1: Database Copies and ETL


It looks so spiffy, but generating images and getting the text right is difficult

In the early days, applications were often monolithic: self-contained systems that handled all functions internally. Database copies or ETL (Extract, Transform, Load) processes were standard for sharing data between these applications. We created copies of entire databases, or parts of them, to allow other applications to access data without impacting the source system. ETL batch jobs would extract data, transform it into compatible formats, and load it into the destination system on a scheduled basis.
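For anyone who never lived through this phase, here is a minimal Python sketch of such a scheduled ETL batch job. It uses in-memory SQLite on both ends, and the table and column names are made up for illustration; the point is the extract, transform, load rhythm and the fact that the copy is only as fresh as the last run.

```python
# Minimal sketch of a scheduled ETL batch job (imagine it kicked off nightly by cron).
# In-memory SQLite for both ends; the "orders" table is a made-up example.
import sqlite3

def extract(source: sqlite3.Connection) -> list[tuple]:
    return source.execute("SELECT id, name, amount_cents FROM orders").fetchall()

def transform(rows: list[tuple]) -> list[tuple]:
    # Convert cents to a decimal amount and upper-case the name for the target system.
    return [(oid, name.upper(), cents / 100) for oid, name, cents in rows]

def load(target: sqlite3.Connection, rows: list[tuple]) -> None:
    target.execute("CREATE TABLE IF NOT EXISTS orders_copy (id INTEGER, name TEXT, amount REAL)")
    target.execute("DELETE FROM orders_copy")  # full refresh: the copy is only as fresh as the last run
    target.executemany("INSERT INTO orders_copy VALUES (?, ?, ?)", rows)
    target.commit()

if __name__ == "__main__":
    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE orders (id INTEGER, name TEXT, amount_cents INTEGER)")
    source.execute("INSERT INTO orders VALUES (1, 'Widget', 1999)")
    target = sqlite3.connect(":memory:")
    load(target, transform(extract(source)))
    print(target.execute("SELECT * FROM orders_copy").fetchall())
```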

I still remember the C++ libraries I’ve written to access the mainframe and get some batch job started. Those integrations were hard coded and oh so fragile.

While this approach enabled some data sharing, it had many limitations. Copied data quickly became stale, creating “data silos” and thus inconsistent information across applications. And running these ETL batch jobs introduced costs of its own.

As the number of applications grew and they became more specialized, the need for real-time data grew too. Together with the technology push, this led to the rise of middleware layers.

Phase 2: Middleware Layers


I’m not sure where DALL-E gets its inspiration from, but I love the nonsensical images

Middleware allowed applications to connect with predefined contracts without requiring direct database copies. And service calls provided a standardized, flexible way to request specific data without duplicating parts of databases.

The amount of SOAP and XSDs was staggering. There were so many debates on the best way to build XSDs; I still remember the Venetian Blinds and Russian Doll patterns. This phase also started the integration wars: MQSeries versus Microsoft BizTalk, and the “guaranteed delivery” debates. We had some lovely discussions around idempotency. There were even companies where the middleware layer was so crucial that it was never allowed to be updated! Ahhh, the good old days of Technical Debt.

This stage also marked the initial shift towards a more modular design (Service Oriented Architecture). Applications began interfacing with smaller, more specialized data parts, which allowed us developers to create and maintain parts of a system landscape independently. APIs provided a way for these modules to interact without tight coupling (we thought). APIs allowed applications to fetch data on demand, reducing the delays of scheduled ETL batch jobs. We could access specific data without creating additional copies, improving data consistency. And new applications could easily connect to the existing ecosystem, providing a more flexible integration model.
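As an illustration of that shift, here is a tiny Python sketch of on-demand access through a middleware contract instead of a database copy. The endpoint and the customer resource are hypothetical; the point is that you ask for exactly the record you need, at the moment you need it.

```python
# Minimal sketch of on-demand data access through a middleware/API contract,
# instead of copying a slice of the source database. The gateway URL and the
# customer resource are hypothetical.
import json
import urllib.request

MIDDLEWARE_BASE = "https://middleware.example.internal/api/v1"  # hypothetical gateway

def get_customer(customer_id: str) -> dict:
    """Fetch exactly one customer record at the moment we need it."""
    with urllib.request.urlopen(f"{MIDDLEWARE_BASE}/customers/{customer_id}") as resp:
        return json.load(resp)

# Usage (would require the hypothetical endpoint to exist):
# customer = get_customer("42")
# print(customer["name"])
```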

But if I’m honest, this traditional middleware and these service calls also had their limitations. Requests could create latency, especially when handling complex queries or high-frequency calls. Managing versions and dependencies became increasingly challenging, often requiring additional infrastructure like service locators and gateways.

What also started around that timeframe were operational databases: a sort of specialized data warehouse where all of the different application data was stored to offload the stress of all these API calls. The trouble was that by storing the application data in another database, the internal security model of these applications was lost, and we had to come up with an extra security layer to have some measure of control over who could access what.

Phase 3: Event-Driven Architecture or Microservices


I’m not sure what DALL-E picked up, but we now have more colour and not the cool 60s vibe

Around this timeframe we started to leave the company’s datacenter and move towards cloud infrastructure. As applications evolved into microservices architectures, our previous, traditional integration strategies fell short.

Microservices require real-time data access and inherently loose coupling to function effectively and scale independently. This need gave rise to event-driven architectures as a central integration strategy, enabling microservices to communicate efficiently while maintaining autonomy.

In an event-driven architecture, instead of requesting data through direct API calls or relying on periodic updates, microservices subscribe to events—discrete records of state changes within the system. Each time a relevant change occurs, such as a new order, it generates an event. This event is then picked up by any microservice that subscribes to it, allowing them to react instantly.

The event-driven model offers significant advantages. Microservices are designed to function independently, with each service subscribing only to events it needs. This decoupling ensures that services are not tightly bound to one another, improving resilience and allowing changes or updates in one service without impacting others.

Events propagate in real-time, so when a state change occurs, any relevant microservice can act immediately. For example, in an order processing flow, a payment service can trigger an inventory update and notify a shipping service as soon as an order is confirmed.
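Here is a minimal, in-memory Python sketch of that flow, with an OrderConfirmed event fanning out to two subscribers. A real system would of course use a broker (Kafka, Pub/Sub, Platform Events) rather than a Python dictionary; this only illustrates the decoupling.

```python
# Minimal in-memory sketch of the event-driven flow described above: an OrderConfirmed
# event fans out to whichever services subscribed to it.
from collections import defaultdict
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    subscribers[event_type].append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in subscribers[event_type]:
        handler(payload)  # each service reacts independently

# Two independent "microservices" reacting to the same event.
subscribe("OrderConfirmed", lambda e: print("inventory: reserve items for", e["order_id"]))
subscribe("OrderConfirmed", lambda e: print("shipping: schedule pickup for", e["order_id"]))

publish("OrderConfirmed", {"order_id": "A-1001"})
```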

Microservices also introduce new challenges: managing their lifecycle, handling high volumes, and ensuring data consistency across events. I still remember the talk by the CIO of Uber when they were the flagship microservices implementor: “I love the weekends, all the developers are off and they are not breaking everything, everywhere at once.”

Managing and scaling event-driven systems can be challenging, particularly as microservices count rises, setting the stage for the next level of integration: ZeroCopy.

Phase 4: The Rise of ZeroCopy Patterns


It’s probably my state of mind that I find these images so incredibly funny. But really? Wata access?

ZeroCopy patterns represent the latest and greatest in data integration (according to the suppliers of said technology). This approach enables applications to access data directly in shared memory or in isolated, secure environments without needing to copy it across systems. ZeroCopy offers a streaming approach, allowing applications to interact with a single, real-time data source.
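To illustrate the underlying idea, here is a small Python sketch using the standard library’s shared memory: two parties look at the same buffer instead of exchanging copies. Real ZeroCopy platforms operate at a completely different scale and across organizational boundaries; this only shows “one source, no duplication” on a single machine.

```python
# Minimal sketch of the zero-copy idea: producer and consumer look at the same
# shared-memory buffer instead of exchanging copies of the data.
from multiprocessing import shared_memory

# "Producer" publishes a record into shared memory once.
shm = shared_memory.SharedMemory(create=True, size=64)
record = b"order:A-1001;status:confirmed"
shm.buf[:len(record)] = record

# "Consumer" attaches to the same segment by name and reads it in place;
# nothing is duplicated until we explicitly materialize bytes for printing.
reader = shared_memory.SharedMemory(name=shm.name)
view = reader.buf[:len(record)]
print(bytes(view))

# Cleanup: release the memoryview before closing the segments.
del view
reader.close()
shm.close()
shm.unlink()
```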

As my focus as an architect drifted over time from building integrations to just using what the different apps offer, I don’t have much to say about ZeroCopy, other than that it reduces storage costs and bandwidth usage by eliminating data duplication. Also, with a single data source accessible by all applications, ZeroCopy avoids the synchronization issues common in earlier integration phases.

So what is next? Where are we headed?

Future Directions: Hybrid Approaches and Secure ZeroCopy Models

As organizations continue to embrace microservices together with their big applications like Salesforce and ERP, future data integration strategies are likely to blend traditional approaches with ZeroCopy for a more adaptable, performance-oriented model.

Here’s what I think we can expect moving forward:

Hybrid Integration Models: ZeroCopy patterns will coexist with APIs, operational databases, and event-driven models, giving microservices the flexibility to choose the best integration method based on data access requirements. And making life hard for architects who like simple models and reduced complexity.

Advanced Security and Access Controls for Shared Data: As ZeroCopy adoption grows, ensuring data privacy and security will be crucial. Innovations in data isolation, encryption, and access control will protect shared data in environments with multi-tenant applications and services. Just look at Salesforce’s Data Cloud.

Data Mesh and Data Fabric Architectures: With the rise of data mesh and fabric architectures, organizations will start to move towards decentralized data ownership and access, reducing the need for data duplication and aligning well with ZeroCopy principles. These architectures emphasize local data access within domains while supporting broader, seamless data sharing across the organization.

The evolution of data integration strategies, from early shared database copies to ZeroCopy, shows me that we do not (yet) have a cohesive approach to data sharing, where applications and microservices can interact with real-time data securely and efficiently. But, let’s keep it positive, we are starting to find our way. I think.


One of the propositions of cloud is that it should be possible – through the use of intelligent software – to build reliable systems on top of unreliable hardware. Just like you can build reliable and affordable storage systems using RAID (Redundant Arrays of Inexpensive Disks).
One of the largest cloud providers says: “everything that can go wrong, will go wrong”.

So the hardware is unreliable, right? Mmm, no. Nowadays most large cloud providers buy very reliable, simpler (purpose-optimized) equipment directly from suppliers upstream in the server market. Sorry Dell, HP & Lenovo, there goes a large part of your market. Because when you are running several hundred thousand servers, a failure rate of 1 PPM versus 2 PPM (parts per million) makes a huge difference.

Up-time is further increased by thinking carefully about what exactly is important for reliability. For example: one of the big providers routinely removes the overload protection from its transformers. They prefer that occasionally a transformer costing a few thousand dollars breaks down to regularly having whole aisles lose power because a transformer manufacturer was worried about possible warranty claims.

The real question continues to be what happens to your application when something like this happens. Does it simply remain operational, does it gracefully degrade into a slightly simpler, slightly slower but still usable version of itself, or does it just crash and burn? And for how long?
The cloud is not about technology or hardware, it’s about mindset and the application architecture.
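To make the graceful-degradation point a bit more tangible, here is a minimal Python sketch of an application falling back to a simpler answer when a dependency fails. Both backend functions are hypothetical stand-ins; the pattern is what matters.

```python
# Minimal sketch of "gracefully degrading into a slightly simpler version of itself":
# if the primary (fast, rich) data source fails, serve a reduced answer instead of crashing.
import random

def fetch_live_recommendations(user_id: str) -> list[str]:
    # Stand-in for a call to a dependency that can fail when hardware misbehaves.
    if random.random() < 0.5:
        raise TimeoutError("recommendation service unavailable")
    return [f"personalised item {i} for {user_id}" for i in range(3)]

def fetch_popular_items() -> list[str]:
    # Cheap, cacheable fallback that keeps the page usable.
    return ["bestseller 1", "bestseller 2", "bestseller 3"]

def recommendations(user_id: str) -> list[str]:
    try:
        return fetch_live_recommendations(user_id)
    except TimeoutError:
        return fetch_popular_items()  # degraded, but still operational

print(recommendations("42"))
```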

 


Generated based on an actual picture of Cluedo that looks nothing like the old Dutch version that we have.

Architecture is Like a Game of Cluedo

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

September 18, 2024

Really? Designing an architecture is as fun as a game of Cluedo? I know, they seem worlds apart. One is a classic murder mystery board game, and the other is a structured approach to solving complex technical problems.

However, when you squint your eyes a bit and look a bit cross-eyed, the process of uncovering the right solution architecture can feel surprisingly similar to solving the mystery in Cluedo.

And be honest, have you ever come across a solution design that looked like a crime scene?

The Mystery

In Cluedo, the goal is to determine who committed the crime, with what weapon, and in which room. In solution architecture, your “mystery” is figuring out the right components, technologies, and design patterns that best fit the problem you’re trying to solve. Just like in Cluedo, there’s probably a combination that will form the heart of the solution.

The Suspects

In Cluedo, you have suspects like Colonel Mustard and Miss Scarlett. In solution architecture, your suspects are the different components or applications you could use: Salesforce applications, data warehouses, microservices, frameworks, and cloud providers. Each has its own characteristics, and the pros and cons determine their fit.

The Weapons

Weapons in Cluedo are the tools used to commit the crime, like the candlestick or the revolver. In solution architecture, the “weapons” are the tools and technologies you might use to build the solution—whether it’s a specific programming language, an API, or a security protocol. Choosing the right “weapon” is crucial to the success of your architecture.

The Rooms

The rooms in Cluedo represent different locations where the crime might have occurred, such as the kitchen or the library. In the world of solution architecture, these are the environments where your solution will operate—like cloud platforms, on-premises data centers, or hybrid environments. Each environment has its own set of rules and constraints that you need to consider.

Gathering Clues

In Cluedo, you move from room to room, gathering clues by making suggestions and seeing which cards other players have. Similarly, in solution architecture, you gather information by asking questions, conducting stakeholder interviews, and analyzing requirements. You need to understand the business needs, technical constraints, and existing systems to start narrowing down your options.

Making Suggestions

During the game, you make suggestions like “I think it was Colonel Mustard with the candlestick in the library.” In solution architecture, you make initial design proposals. For example, “I suggest using a microservices architecture with a NoSQL database on a cloud platform.” You then test these suggestions by validating them against the requirements and constraints and talking to your peers or even presenting them as options to the Architecture Board. Depends a bit on how brave you are.

Refining Your Solution

As you gather more clues in Cluedo, you begin to eliminate possibilities and zero in on the solution. In solution architecture, this is akin to iterating on your design. You continuously refine your architecture, testing assumptions, and adjusting components until you have the optimal setup. For now, at least.

Making the Final Accusation

In Cluedo, the game is won by making the correct accusation: identifying the murderer, weapon, and location.

In solution architecture there are multiple wins. The first “win” is when you finalize a solution that meets all the requirements. Another “win” is to test it with your peers or Lead Devs to see if it is feasible. Approval from stakeholders is always a big win. And finally when your design goes into production, that is the best win of all!

So next time you’re architecting a solution, remember other people might see it as a crime scene!


Tiny banana, based on the article below

Are We Building Lasting Value or the World’s Most Expensive Bubble?

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

October 24, 2025

The AI infrastructure boom is not an abstract headline to me. It’s hitting close to home, here in the Netherlands, close to where I live. Microsoft has acquired 50 hectares for a new data center, with the CEO citing 300,000 customers who “want to store their data close by in a sovereign country.” This is at Agriport, where there are already two datacenters (Microsoft and Google) which were recently in the news because, oopsie, they consume more water and power than stated in their initial plans.

The local impact is pretty bad. This planned buildout is consuming so much power and water from the grid that it’s halting the construction of new businesses and even houses for young people in the surrounding villages. There is a report that mentions a striking change in an environmental vision (omgevingswet), which made extra building land available on which new data centers could be built. This was based on a map that, according to the researchers at Berenschot, was of poor quality, poorly substantiated, and whose origin could not be traced.

It’s made me step back and question the drivers behind this relentless push for more datacenters.

Is This the Right Path to Better AI?

The current building and investing frenzy is built on two beliefs: one, that adding more compute power will inevitably lead to better, perhaps even superintelligent, AI. And two, that the big tech companies cannot afford to lose this AI war. OpenAI’s Sam Altman is reportedly aiming to create a factory that can produce a gigawatt of new AI infrastructure every week.

Yet, a growing group of AI researchers is questioning this “scaling hypothesis,” suggesting we may be hitting a wall and that other breakthroughs are needed. This skepticism is materializing elsewhere, too. Meta, a leader in this race, just announced it will lay off roughly 600 employees, with cuts impacting its AI infrastructure and Fundamental AI Research (FAIR) units.

The Bizarre Economics of the race to build more

In earlier decades, datacenter growth was a story of reusing the leftovers of older industries. Repurposing the power infrastructure left over from an earlier era, like old steel mills and aluminum plants.

Today, we’re doing it again. Hyperscalers are building datacenters at a massive scale, competing for everything that goes into a datacenter, from land to skilled labor to copper wire and transformers, which is leading to plans to build their own power plants. The economics are astounding.

Why do I say that? Datacenters also depreciate, not as fast as the Nvidia chips, but still. Are we sure we are not building the new leftovers, like the old steel mills?

FOMO, Negative Returns, and the Bubble Question

We now have megacap tech stocks, once celebrated as “asset-light”, spending nearly all their cash flow on datacenters. Losing the AI race is seen as existential. This means all future cash flow, for years to come, may be funneled into projects with fabulously negative returns on capital: lighting hundreds of billions of dollars on fire rather than losing out to a competitor, even when the ultimate prize is unclear.

If this turns out to be a bubble, what lasting thing gets built? The dot-com bubble left me with a memory of Nina Brink and the World Online drama. But if the AI bubble pops, 70% of the capital (the GPUs) will be worthless in 3 years. We’ll be left with overbuilt shells and power infrastructure. Perhaps the only lasting value will be a newly industrialized supply chain for ehhh.

What’s your take? Are we building the next industrial revolution or the world’s most expensive, short-lived infrastructure?


Famous quote from Tom Hanks in the movie Apollo 13

“In space, there is no problem so bad that you cannot make it worse.”

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

August 9, 2023

In one of his TED Talks, Chris Hadfield mentions this saying and sheds a light on how to deal with the complexity, the sheer pressure, of dealing with dangerous and scary situations.

Risk Management

In the astronaut business, the Space Shuttle is a very complicated vehicle. It’s the most complicated flying machine ever built. All for a single purpose: to escape Earth’s gravity well, launch cargo, and return safely.

For the astronauts and the people watching it, it is an amazing experience.

But NASA calculated the odds of a catastrophic event during the first five shuttle launches. It was one in nine. Later launches were getting better, about one in 38 or so.

Why do we take that risk? Who would do something that dangerous?

The biggest risk is the missed chance

Next to the personal dreams and ambitions that astronauts have, space stations give us the opportunity to do experiments in zero gravity. We get a chance to learn what the substance of the universe is made of. We get to see Earth from a whole different perspective. (Maybe flat-earthers need a stay on board that Space Station, but that is a whole other topic.)

Everything that we do for the first time is hard. If we never do the impossible, how do we progress as a species? I think it is human nature, to improve, to explore, to do things never done before.

“If you always do what you always did,
you will always get what you always got.”

by Albert Einstein

We have athletes who perform ultra triathlons. Strongmen and strongwomen who can lift 500 kg. And, closer to home, we have architects who help transform companies to be closer to their customers.

Do or do not, there is no try

Master Yoda from the movie The Empire Strikes Back by Lucasfilm (Disney)

If the problems seem insurmountable, tangled together, impossible to move forward on, we need a fresh perspective. A way to frame the problem in a different light. To come up with hypotheses to solve small parts, and then design a small experiment to test them. If it does not work, we go back to the drawing board. We are not trying to make the problem bigger.

Systems thinking still holds true: optimizing or solving a small part of the problem does not optimize the whole system.


The Amber Trap

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

August 29, 2025

Last week I was cleaning out some old project folders (a therapeutic exercise I recommend to anyone). I learned that from a fellow architect. Does this spark joy? Anywho, I stumbled upon a perfectly preserved architecture document from 2018, complete with detailed diagrams, stakeholder matrices, and decision rationales that read like artifacts from a lost civilization. The document was comprehensive! And absolutely useless for understanding the current system. It had become digital amber. Beautifully preserved, but tragically misleading.

This got me thinking about my previous article on contextual decay in software systems. While that article focused on how LLMs miss the living context around code, there’s another phenomenon at play. Sometimes we do preserve context, but it fossilizes into something that actively misleads rather than informs.

The Dominican Republic of Documentation

In the mountains of the Dominican Republic, amber deposits contain some of the most perfectly preserved specimens in the natural world. Insects trapped in tree resin millions of years ago look as fresh as if they died yesterday. Their delicate wing structures, compound eyes, and even cellular details remain intact. It’s breathtaking. It’s also a snapshot of a world that no longer exists. And people pay serious money for these specimens, mined under risky conditions, but that is a whole other rabbit hole. Yes, I ended up on Catawiki.

The forest that produced that resin is gone. The ecosystem those insects lived in has vanished. The amber preserves the form perfectly while the function has become irrelevant.

The Amber Trap: When Context Gets Fossilized

This leads me to a fourth law, a corollary to Contextual Decay. I call it

The Amber Trap: A system’s context, when perfectly preserved without evolving, becomes a liability that actively harms its future development.

In my previous article, I argued that code repositories are like dinosaur skeletons: all structure, but missing the context. Documentation was supposed to be the soft tissue, filling in the gaps with the why behind the what. But here’s what I’ve realized: soft tissue doesn’t just decay, sometimes it mummifies.

Real soft tissue decomposes quickly, which is actually helpful. It forces paleontologists to constantly seek new evidence, to challenge their assumptions, to remain humble about their reconstructions. See https://www.nhm.ac.uk/discover/news/2025/august/bizarre-armoured-dinosaur-spicomellus-afer-rewrites-ankylosaur-evolution.html

Outdated Architecture Decisions

I’ve seen this pattern repeatedly across organizations:

The Authentication Fossil: A security architecture document from 2019 explaining why we chose a custom JWT implementation instead of OAuth because “we need fine-grained control and the OAuth landscape is too fragmented.” Today, that same custom implementation is our biggest security vulnerability, but new developers read the document and assume there’s still a good reason for not using industry standards.

The Database Decision Artifact: Documentation explaining why we chose MongoDB because “we need to move fast and can’t be constrained by rigid schemas.” The current system now has more validation rules and data consistency requirements than most SQL databases, but the document makes it seem like any proposal to add structure is fighting against architectural principles.


Yes, the zeroth law!

Engineering doesn’t solve problems…

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

July 10, 2025

We just trade them in for newer, more interesting ones. And that’s our real job.

You ever spend a week with your team fixing a critical system, finally push the fix live and feel that wave of relief? Only to get a Slack message two months later. “Hey, weird question. Ever since we deployed that one patch, the reporting dashboard runs… backwards?”

Of course it does.

This is a beautiful lie

We have a sacred belief that we are problem solvers. We take a messy world and make it clean with elegant logic. But we’re not problem solvers. We are professional Problem Shifters. Think about it. We “fixed” monolithic backends… and created the 1,000-microservice headache that is only stable on the weekends when no one is deploying (real story from Uber’s Lead Architect). We “fixed” manual server deployments… and created the 3,000-line YAML file. Therefore we now need anchors and aliases.

It makes me think of the invention of the refrigerator. It’s a modern miracle. On the other hand, the cooling liquid tore a hole in the ozone layer.

That’s our job in a nutshell. We aren’t creating a utopia of solved problems. We’re just swapping today’s problem for tomorrow’s way more fascinating crisis. Yeah, I’m old and I was there to fix the year 2000 problem.

My PTO is coming up, and I was looking at books to take with me. Next to Bill Bryson’s A Short History of Nearly Everything I also packed The Hitchhiker’s Guide to the Galaxy by Douglas Adams. I’ve read them before, but never combined them.

I think it is safe to say that we have found the:

First Law of Engineering Thermodynamics

Problem energy can neither be created nor destroyed, only changed in form.

Saying this out loud is a bit cynical. But I think you can also turn it into a superpower, if we are smart about our entire approach.

We need to:

Stop chasing “done” and start chasing “durable.” The goal isn’t just to close the ticket. It’s to create a solution that won’t spawn five more tickets.

Ask the “Next Question.” The most important question is “if we build this, what new problem are we creating?” Acknowledge the trade-off upfront.

Redefine your win. The best engineers aren’t the ones who solve the most problems. They’re the ones who create the highest quality of future problems. That did not come out right. It’s having less complexity and fewer problems.

Building stuff is easy. The same goes for adding quick fixes, workarounds, or low-hanging fruit. Building a maintainable future, that’s the hard part.

What’s the best “future problem” you’ve ever created? Drop your best story in the comments.


Beyond the Policy PDF

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

June 27, 2025

An Architect’s Guide to Governing Agentic AI

Your company is rushing to deploy autonomous AI. A traditional governance playbook won’t work. The only real solution is to architect for control from day one.

You can feel the pressure on CTOs and Chief Architects from every direction. The board wants to know your AI strategy. Your development teams are experimenting with autonomous agents that can write their own code, access internal systems, and, “oh no!”, interact with customers. I understand. The marketing engine is working overtime and the promise is enormous: radical efficiency, hyper-personalized services, and a significant competitive edge.

But as a CIO or CTO, you’re the one who has to manage the fallout. You’re the one left with the inevitable governance nightmare when an autonomous entity makes a critical mistake.

Statement of intent

Let’s be clear: the old governance playbook is obsolete. A 20-page PDF outlining “responsible AI principles” is a statement of intent, not a control mechanism. In the age of agents that can act independently, governance cannot be an afterthought! I strongly believe it must be a core pillar of your Enterprise Architecture.

This isn’t about blocking innovation. It’s about building the necessary guardrails to accelerate, safely.

The New Risk: Why Agentic AI Isn’t Just Another Tool

We must stop thinking of Agentic AI as just another piece of software. Traditional applications are deterministic; they follow pre-programmed rules. Agentic AI is different. It’s a new, probabilistic class of digital actor.

A great example I heard:

AI interpretation of Agents running around like Lemmings

Think of it this way. You just hired a million-dollar team of hyper-efficient, infinitely scalable junior employees. But they have no (learned) inherent common sense, no manager, and no intuitive understanding of your company’s risk appetite. It makes me think of the game Lemmings.

This looks a lot like when IaaS, PaaS & SaaS and the move towards the cloud were being discussed, but with something extra:

Data Exfiltration & Leakage: An agent tasked with “summarizing sales data” could inadvertently access and include sensitive data in its output, as seen when Samsung employees leaked source code via ChatGPT prompts.

Runaway Processes & Costs: An agent caught in a loop or pursuing a flawed goal can consume enormous computational resources in minutes, long before a human can intervene. The $440 million loss at Knight Capital from a faulty algorithm is a stark reminder of how quickly automated systems can cause financial damage.

Operational Catastrophe: An agent given control over logistics could misinterpret a goal and reroute an entire supply chain based on flawed reasoning, causing chaos that takes weeks to untangle.

Accountability Black Holes: When an agent makes a decision, who is responsible? The developer? The data provider? The business unit that deployed it? Without a clear audit trail of the agent’s “reasoning,” assigning accountability becomes impossible.

A policy document can’t force an agent to align with your business intent. The only answer is to build the controls directly into the environment where the agents live and operate.

Architecting for Control

Instead of trying to police every individual agent, a pragmatic leader architects the system that governs all of them.

Pillar 1: The Governance Gateway

Before any agent can go live and execute a significant action, like accessing a database, calling an external API, spending money, or communicating with a customer, it must pass through a central checkpoint. This Governance Gateway is where you enforce the hard rules:

Cost Control: Set strict budget limits. “This agent cannot exceed $50 in compute costs for this task.”

Risk Thresholds: Define the agent’s blast radius. “This agent can read from the Account object, but can only write to a Notes field.”

Tool Vetting: Maintain an up-to-date “allowed list” of approved tools and APIs the agent is permitted to use.

Human-in-the-Loop Triggers: For high-stakes decisions, the design should automatically pause the action and require human approval before proceeding.
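As a sketch of what such a checkpoint could look like in practice, here is a minimal Python example of a gateway decision function. The action shape, the allowed tool list, and the budget are all illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of a Governance Gateway check, assuming a hypothetical AgentAction shape.
# Every proposed action is evaluated against hard rules before it is executed;
# anything high-stakes is routed to a human instead of being run automatically.
from dataclasses import dataclass

ALLOWED_TOOLS = {"crm.read_account", "crm.write_note"}   # vetted tool list (assumption)
MAX_COST_USD = 50.0                                      # per-task budget (assumption)
HUMAN_APPROVAL_TOOLS = {"payments.execute"}              # human-in-the-loop triggers

@dataclass
class AgentAction:
    agent_id: str
    tool: str
    estimated_cost_usd: float

def gateway_decision(action: AgentAction) -> str:
    if action.tool in HUMAN_APPROVAL_TOOLS:
        return "pause_for_human_approval"
    if action.tool not in ALLOWED_TOOLS:
        return "deny: tool not on the allowed list"
    if action.estimated_cost_usd > MAX_COST_USD:
        return "deny: budget exceeded"
    return "allow"

print(gateway_decision(AgentAction("agent-7", "crm.write_note", 1.20)))
print(gateway_decision(AgentAction("agent-7", "payments.execute", 10.0)))
```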

This should be a familiar concept, because I borrowed it from API gateways, now just applied to agentic actions. An approved design is your primary lever of control.

Pillar 2: Decision Traceability

When something goes wrong, “what did the agent do?” is the wrong question. The right question is, “why did the agent do it?” Standard logs are insufficient. You need a system dedicated to providing deep observability into an agent’s reasoning.

This system must capture:

The Initial Prompt/Goal: What was the agent originally asked to do?

The Chain of Thought: What was the agent’s step-by-step plan? Which sub-tasks did it create?

The Data Accessed: What specific information did it use to inform its decision?

The Tools Used: Which APIs did it call and with what parameters?

The Final Output: The action it ultimately took.
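Here is a minimal Python sketch of what such a decision-trace record could look like, covering the five elements above. Field names are illustrative; the point is that every step of the agent’s reasoning and every tool call becomes structured, queryable data.

```python
# Minimal sketch of a decision-trace record; field names are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    agent_id: str
    initial_goal: str
    chain_of_thought: list[str] = field(default_factory=list)
    data_accessed: list[str] = field(default_factory=list)
    tools_used: list[dict] = field(default_factory=list)
    final_output: str = ""
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

trace = DecisionTrace(agent_id="agent-7", initial_goal="Summarise Q3 pipeline for the EMEA team")
trace.chain_of_thought.append("Plan: query opportunities, aggregate by stage, draft summary")
trace.data_accessed.append("Opportunity: StageName, Amount (EMEA only)")
trace.tools_used.append({"tool": "crm.read_opportunities", "params": {"region": "EMEA"}})
trace.final_output = "Draft summary posted to the EMEA sales channel"

print(json.dumps(asdict(trace), indent=2))  # ship this record to your observability store
```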

This level of traceability is non-negotiable for forensic analysis, debugging, and, crucially, for demonstrating regulatory compliance. It’s the difference between a mysterious failure and an explainable, correctable incident.

Pillar 3: Architected Containment

You wouldn’t let a new employee roam the entire corporate network on day one. Don’t let an AI agent do it either. Agents must operate within carefully architected contained environments.

This goes beyond standard network permissions. Architected Containment means:

Scoped Data Access: The agent only has credentials to access the minimum viable dataset required for its task.

Simulation & Testing: Before deploying an agent that can impact real-world systems, it must first prove its safety and efficacy in a high-fidelity simulation of that environment.
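As an illustration, here is a minimal Python sketch of scoped data access combined with a simulation-first flag. All object names and the credential shape are assumptions for the example, not an actual platform API.

```python
# Minimal sketch of architected containment: the agent gets a credential that can only
# see the minimum viable dataset, and it runs against a simulated backend by default.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedCredential:
    agent_id: str
    readable_objects: frozenset[str]
    writable_objects: frozenset[str]

def read(credential: ScopedCredential, obj: str, simulated: bool = True) -> str:
    if obj not in credential.readable_objects:
        raise PermissionError(f"{credential.agent_id} may not read {obj}")
    source = "simulation" if simulated else "production"
    return f"{obj} records served from {source}"

cred = ScopedCredential("agent-7", frozenset({"Account"}), frozenset({"Note"}))
print(read(cred, "Account"))          # allowed, and served from the simulation first
# read(cred, "PaymentInstrument")     # would raise PermissionError: out of scope
```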

Containment isn’t about limiting the agent’s potential; it’s about defining a safe arena where it can perform without creating unacceptable enterprise-wide risk.

From Risk Mitigation to Strategic Advantage

Building this architectural foundation may seem like a defensive move, but it is fundamentally an offensive strategy. This first version of the framework is more than a set of features; it’s a strategic shift. It allows you to move away from the impossible task of policing individual agents and towards the pragmatic, scalable model of architecting their environment.

This is how you build a platform for innovation based on trust, safety, and control. It’s how you empower your organization to deploy more powerful AI, faster, because the guardrails are built-in, not bolted on.

The organizations that master AI governance will be the ones that can deploy more powerful agents, more quickly, and with greater confidence than their competitors. They will unlock new levels of automation and innovation because they have built a system based on trust and control.

This architecture transforms the role of EA and IT leadership. You are no longer just a support function trying to keep up; you become the strategic enabler of the company’s AI-powered future. You provide the business with a platform for safe, scalable experimentation.

The conversation needs to shift today. Stop asking “what could this AI do?” and start architecting the answer to “what should this AI be allowed to do?”

What’s your take? Of these three pillars: Gateway, Decision traceability, and Containment: which presents the biggest architectural challenge for your organization right now?

Share your thoughts in the comments below.