Famous quote from Tom Hanks in the movie Apollo 13
“In space, there is no problem so bad that you cannot make it worse.”
Martijn Veldkamp
“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”
August 9, 2023
In one of his TED Talks, Chris Hadfield mentions this saying and sheds light on how to deal with the complexity, the sheer pressure, of dealing with dangerous and scary situations.
Risk Management
In the astronaut business, the Space Shuttle is a very complicated vehicle. It is the most complicated flying machine ever built, all for a single purpose: to escape Earth’s gravity well, launch cargo, and return safely.
For the astronauts and the people watching it, it is an amazing experience.
But NASA calculated the odds of a catastrophic event during the first five Shuttle launches at one in nine. Later launches fared better, at roughly one in 38.
Why do we take that risk? Who would do something that dangerous?
The biggest risk is the missed chance
Beyond the personal dreams and ambitions that astronauts have, space stations give us the opportunity to do experiments in zero gravity. We have a chance to learn what the substance of the universe is made of. We get to see Earth from a whole different perspective. (Maybe flat-earthers need a stay aboard that Space Station, but that is a whole other topic.)
Everything that we do for the first time is hard. If we never do the impossible, how do we progress as a species? I think it is human nature to improve, to explore, to do things never done before.
“If you always do what you always did,
you will always get what you always got.”
often attributed to Albert Einstein
We have athletes who perform ultra-triathlons, and strongmen and strongwomen who can lift 500 kg. And, closer to home, we have architects who help transform companies to be closer to their customers.
Do or do not, there is no try
Master Yoda in the movie The Empire Strikes Back from Lucasfilm (Disney)
If the problems seem insurmountable, tangled together, impossible to move forward on, we need a fresh perspective. A way to frame the problem in a different light. To come up with hypotheses to solve small parts. And then design a small experiment to test each one. If it does not work, we go back to the drawing board. We are not trying to make the problem bigger.
Systems thinking still holds true: optimizing or solving a small part of the problem does not optimize the whole system.
Generated based on an actual picture of Cluedo that looks nothing like the old Dutch version that we have.
Architecture is Like a Game of Cluedo
Martijn Veldkamp
“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”
September 18, 2024
Really? Designing an architecture is as fun as a game of Cluedo? I know, they seem worlds apart. One is a classic murder mystery board game, and the other is a structured approach to solving complex technical problems.
However, when you squint your eyes and look a bit cross-eyed, the process of uncovering the right solution architecture can feel surprisingly similar to solving the mystery in Cluedo.
And be honest, have you ever come across a solution design that looked like a crime scene?
The Mystery
In Cluedo, the goal is to determine who committed the crime, with what weapon, and in which room. In solution architecture, your “mystery” is figuring out the right components, technologies, and design patterns that best fit the problem you’re trying to solve. Just like in Cluedo, there is one combination that forms the heart of the solution.
The Suspects
In Cluedo, you have suspects like Colonel Mustard and Miss Scarlett. In solution architecture, your suspects are the different components or applications you could use: Salesforce applications, data warehouses, microservices, frameworks, and cloud providers. Each has its own characteristics, and its pros and cons determine its fit.
The Weapons
Weapons in Cluedo are the tools used to commit the crime, like the candlestick or the revolver. In solution architecture, the “weapons” are the tools and technologies you might use to build the solution—whether it’s a specific programming language, an API, or a security protocol. Choosing the right “weapon” is crucial to the success of your architecture.
The Rooms
The rooms in Cluedo represent different locations where the crime might have occurred, such as the kitchen or the library. In the world of solution architecture, these are the environments where your solution will operate—like cloud platforms, on-premises data centers, or hybrid environments. Each environment has its own set of rules and constraints that you need to consider.
Gathering Clues
In Cluedo, you move from room to room, gathering clues by making suggestions and seeing which cards other players have. Similarly, in solution architecture, you gather information by asking questions, conducting stakeholder interviews, and analyzing requirements. You need to understand the business needs, technical constraints, and existing systems to start narrowing down your options.
Making Suggestions
During the game, you make suggestions like “I think it was Colonel Mustard with the candlestick in the library.” In solution architecture, you make initial design proposals. For example, “I suggest using a microservices architecture with a NoSQL database on a cloud platform.” You then test these suggestions by validating them against the requirements and constraints and talking to your peers or even presenting them as options to the Architecture Board. Depends a bit on how brave you are.
Refining Your Solution
As you gather more clues in Cluedo, you begin to eliminate possibilities and zero in on the solution. In solution architecture, this is akin to iterating on your design. You continuously refine your architecture, testing assumptions, and adjusting components until you have the optimal setup. For now, at least.
Making the Final Accusation
In Cluedo, the game is won by making the correct accusation: identifying the murderer, weapon, and location.
In solution architecture there are multiple wins. The first “win” is when you finalize a solution that meets all the requirements. Another “win” is to test it with your peers or Lead Devs to see if it is feasible. Approval from stakeholders is always a big win. And finally when your design goes into production, that is the best win of all!
So next time you’re architecting a solution, remember other people might see it as a crime scene!
The Evolution of Data Integration
Martijn Veldkamp
“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”
November 1, 2024
In today’s interconnected applications landscape, we rely on data shared across multiple systems and services. Throughout my career as an architect, integration strategies have evolved from basic ETL database copies to monolithic middleware, operational databases, API-driven microservices, and now ZeroCopy patterns.
As I’m almost a year older again, I look back at all the stuff I built, supported, and architected. I see roughly four big phases in data integration architecture (cool idea for a t-shirt).
ETL -> Middleware -> Microservices -> ZeroCopy
Phase 1: Database Copies and ETL
It looks so spiffy, but generating images and getting the text right is difficult
In the early days, applications were often monolithic: self-contained systems that handled all functions internally. Database copies or ETL (Extract, Transform, Load) processes were standard for sharing data between these applications. We created copies of parts of, or even entire, databases to allow other applications to access data without impacting the source system. ETL batch jobs would extract data, transform it into compatible formats, and load it into the destination system on a scheduled basis.
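To make that concrete, here is a minimal Python sketch of such a nightly batch job. The tables, currency rates, and in-memory databases are purely illustrative:

```python
import sqlite3

# Toy source system and reporting copy (in-memory here; back then these
# were two real databases and a nightly cron job).
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id, customer, amount, currency)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                [(1, "ACME", 100.0, "USD"), (2, "Initech", 80.0, "EUR")])

# Extract: pull the rows the other application needs.
rows = src.execute("SELECT id, customer, amount, currency FROM orders").fetchall()

# Transform: normalize to a single currency (toy rate table).
RATES = {"EUR": 1.0, "USD": 0.92}
transformed = [(i, c, round(a * RATES[cur], 2)) for i, c, a, cur in rows]

# Load: write the copy the other application will read. By tomorrow
# night's run, this copy is already stale.
dst.execute("CREATE TABLE orders_copy (id, customer, amount_eur)")
dst.executemany("INSERT INTO orders_copy VALUES (?, ?, ?)", transformed)
dst.commit()
print(dst.execute("SELECT * FROM orders_copy").fetchall())
```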
I still remember the C++ libraries I wrote to access the mainframe and get a batch job started. Those integrations were hard-coded and oh so fragile.
While this approach enabled some data sharing, it had many limitations. Copied data quickly became stale, creating “data silos” and thus inconsistent information across applications. And the ETL batch jobs themselves introduced their own running costs.
As applications grew in number and became more specialized, the need for real-time data grew with them. Together with the technology push of the time, we kind of created the rise of middleware layers ourselves.
Phase 2: Middleware Layers
I’m not sure where DALL-E gets its inspiration from, but I love the nonsensical images
Middleware allowed applications to connect with predefined contracts without requiring direct database copies. And service calls provided a standardized, flexible way to request specific data without duplicating parts of databases.
The amount of SOAP and XSDs was staggering. There were so many debates on the best way to build XSDs; I still remember the Venetian Blind and Russian Doll patterns. This phase also started the integration wars: MQSeries versus Microsoft BizTalk, and the promise of “guaranteed delivery”. We had some lovely discussions around idempotency. There were even companies where the middleware layer was so crucial that it was never allowed to be updated! Ahhh, the good old days of Technical Debt.
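Those idempotency discussions boiled down to one rule: processing the same message twice must not change the outcome, because “guaranteed delivery” usually meant at-least-once delivery. A minimal sketch, with a made-up message shape and an in-memory store standing in for something durable:

```python
# In-memory stand-in for a durable store of processed message IDs.
processed_ids: set[str] = set()

def apply_business_logic(payload: dict) -> None:
    print("processing", payload)

def handle_message(msg: dict) -> None:
    # Skip messages we have already seen, so redelivery is harmless.
    if msg["id"] in processed_ids:
        return
    processed_ids.add(msg["id"])
    apply_business_logic(msg["payload"])

# At-least-once delivery can hand us the same message twice:
handle_message({"id": "msg-1", "payload": {"order": 42}})
handle_message({"id": "msg-1", "payload": {"order": 42}})  # ignored
```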
This stage also marked the initial shift towards a more modular design (Service-Oriented Architecture). Applications began interfacing with smaller, more specialized data parts, which allowed us developers to create and maintain parts of a system landscape independently. APIs provided a way for these modules to interact without tight coupling (or so we thought). APIs allowed applications to fetch data on demand, reducing the delays of scheduled ETL batch jobs. We could access specific data without creating additional copies, improving data consistency. And new applications could easily connect to the existing ecosystem, providing a more flexible integration model.
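Here is a sketch of that shift from nightly copies to on-demand calls; the service URL and response shape are hypothetical:

```python
import json
import urllib.request

def fetch_customer(customer_id: str) -> dict:
    # Ask the owning system for one record when we need it, instead of
    # keeping a nightly copy of its whole customers table.
    url = f"https://crm.example.com/api/v1/customers/{customer_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# customer = fetch_customer("C-1001")  # always current, never a stale copy
```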
But if I’m honest, this traditional middleware and its service calls also had their limitations. Requests could create latency, especially when handling complex queries or high-frequency calls. Managing versions and dependencies became increasingly challenging, often requiring additional infrastructure like service locators and gateways.
What also started around that time frame were Operational Databases: a sort of specialized data warehouse where the data of all the different applications was stored, to offload the stress of all these API calls. The trouble was that by storing the application data in another database, the internal security model of those applications was lost, and we had to come up with an extra security layer to keep a measure of control over who could access what.
Phase 3: Event-Driven Architecture or Microservices
I’m not sure what DALL-E picked up, but we now have more colour and not the cool 60s vibe
Around this time frame we started to leave the company’s datacenter and move towards cloud infrastructure. As applications evolved into microservices architectures, our traditional approach to integration fell short.
Microservices require real-time data access and loose coupling to function effectively and scale independently. This need gave rise to event-driven architecture as a central integration strategy, enabling microservices to communicate efficiently while maintaining autonomy.
In an event-driven architecture, instead of requesting data through direct API calls or relying on periodic updates, microservices subscribe to events—discrete records of state changes within the system. Each time a relevant change occurs, such as a new order, it generates an event. This event is then picked up by any microservice that subscribes to it, allowing them to react instantly.
The event-driven model offers significant advantages. Microservices are designed to function independently, with each service subscribing only to events it needs. This decoupling ensures that services are not tightly bound to one another, improving resilience and allowing changes or updates in one service without impacting others.
Events propagate in real-time, so when a state change occurs, any relevant microservice can act immediately. For example, in an order processing flow, a payment service can trigger an inventory update and notify a shipping service as soon as an order is confirmed.
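Here is a toy, in-process version of that order flow. A real system would use a broker (Kafka, a cloud pub/sub service, Salesforce Platform Events), but the subscribe-and-publish shape is the same:

```python
from collections import defaultdict
from typing import Callable

# Event type -> list of handlers; a broker plays this role in production.
subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    subscribers[event_type].append(handler)

def publish(event_type: str, event: dict) -> None:
    for handler in subscribers[event_type]:
        handler(event)  # each service reacts independently

# Inventory and shipping never call each other; they only know the event.
subscribe("OrderConfirmed", lambda e: print("inventory: reserve", e["items"]))
subscribe("OrderConfirmed", lambda e: print("shipping: plan", e["order_id"]))

publish("OrderConfirmed", {"order_id": 42, "items": ["SKU-1"]})
```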
Microservices also introduce new challenges: managing lifecycles, handling high volumes, and ensuring data consistency across events. I still remember the talk by the CIO of Uber back when they were the flagship microservices implementer: “I love the weekends, all the developers are off and they are not breaking everything, everywhere at once.”
Managing and scaling event-driven systems can be challenging, particularly as the number of microservices rises, setting the stage for the next level of integration: ZeroCopy.
Phase 4: The Rise of ZeroCopy Patterns
It’s probably my state of mind that I find these images so incredibly funny. But really? Wata access?
ZeroCopy patterns represent the latest and greatest in data integration (according to the suppliers of said technology). This approach enables applications to access data directly in shared memory or in isolated, secure environments without needing to copy it across systems. ZeroCopy offers a streaming approach, allowing applications to interact with a single, real-time data source.
As my focus as an architect drifted over time from building integrations to just using what the different apps offer, I don’t have much to say about ZeroCopy, other than that it reduces storage costs and bandwidth usage by eliminating data duplication. Also, with a single data source accessible by all applications, ZeroCopy avoids the synchronization issues common in earlier integration phases.
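For a feel for the mechanics, here is a small sketch using Apache Arrow, one concrete open-source take on zero-copy (the vendor platforms implement their own variants). The reader memory-maps the file, so its buffers point straight into the shared bytes instead of copying them:

```python
import pyarrow as pa

# Writer side: persist a table once in Arrow IPC format.
table = pa.table({"id": [1, 2, 3], "amount": [9.5, 12.0, 3.25]})
with pa.OSFile("shared_data.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# Reader side: memory-map the file. The resulting Arrow buffers reference
# the mapped pages directly; no bytes are copied into this process's heap.
with pa.memory_map("shared_data.arrow", "r") as source:
    shared = pa.ipc.open_file(source).read_all()
    print(shared.column("amount"))  # both "applications" see the same bytes
```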
So what is next? Where are we headed?
Future Directions: Hybrid Approaches and Secure ZeroCopy Models
As organizations continue to embrace microservices alongside their big applications like Salesforce and ERP, future data integration strategies are likely to blend traditional approaches with ZeroCopy for a more adaptable, performance-oriented model.
Here’s what I think we can expect moving forward:
Hybrid Integration Models: ZeroCopy patterns will coexist with APIs, operational databases, and event-driven models, giving microservices the flexibility to choose the best integration method based on data access requirements. And making it hard for architects who like their models simple and their complexity low.

Advanced Security and Access Controls for Shared Data: As ZeroCopy adoption grows, ensuring data privacy and security will be crucial. Innovations in data isolation, encryption, and access control will protect shared data in environments with multi-tenant applications and services. Just look at Salesforce’s Data Cloud.

Data Mesh and Data Fabric Architectures: With the rise of data mesh and fabric architectures, organizations will start to move towards decentralized data ownership and access, reducing the need for data duplication and aligning well with ZeroCopy principles. These architectures emphasize local data access within domains while supporting broader, seamless data sharing across the organization.
The evolution of data integration strategies, from early shared database copies to ZeroCopy, reflects to me that we do not (yet) have a cohesive approach to data sharing, where applications and microservices can interact with real-time data securely and efficiently. But let’s keep it positive: we are starting to find our way. I think.