
The Low-Hanging Fruit Diet: Nutritionally Deficient in Strategic Value

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

March 28, 2025

“We are kept from our goals not by obstacles but by a clear path to lesser goals.”

I’m not sure who pointed me in the direction of Robert Brault, but I love that quote. It resonates with me on different levels. If I hear “low-hanging fruit” one more time, I’m going to scream! Let’s just say my definition of “low-hanging fruit” involves more planning and less bruising.

Forest for the trees?

When we talk about the complexities of implementing new systems and strategies, say something with AI, we often focus on achieving immediate results. We need to show tangible benefits, now. With this pressure on the “lesser goals” there is a risk that we don’t see the forest because we’re focused on the trees. An unintended consequence is that we start steering toward the wrong outcome, and thus off course from our larger objectives.

For example: back to the simple act of washing dishes. As I mentioned before, our dishwasher broke down, and that reminded me that automation of tasks transforms the tasks, not the outcome. But clean tupperware is not my end-goal. Nor is a clean house. My personal end-goal is comfortable living. I don’t want to live in a showroom or museum (it’s also not possible with four cats and two dogs). The “lesser goal” of automating the cleaning of the dishes introduced a new set of tasks: correct loading, filter cleaning, and, if you’re lucky, cryptic error codes. It’s up to me to decide what I’d rather do. What is an efficient use of my time, versus how much do I hate doing it, and do I actually mind when I’m sitting very comfortably on my couch?

Now, let’s switch this up. Imagine an idealized Salesforce rollout, where you implement specific features or modules, for example automating lead capture.

These actions are aimed at achieving those “lesser goals”: more leads, faster follow-ups, improved efficiency in a specific area. You have all these kinds of Key Performance Indicators (KPIs) that you diligently track. There is an uptick in lead volume, a reduction in the average time to qualify a lead, or a faster completion rate for a specific workflow. This is the bright, reassuring signal that our “lesser goal” is being achieved.

Unforeseen side effects

There are these unforeseen side effects that might not be immediately obvious. These can hinder our progress towards our larger, strategic objectives. The normal thinking is that more leads, more qualified leads, more opportunities, more closed won. But there is also the risk of:

- Data Quality Degradation: The ease of automated capture might lead to a flood of unqualified or duplicate leads, diluting the quality of your data and requiring significant cleanup efforts down the line.
- User Frustration and Workarounds: A rigid process might not accommodate all cases, leading sales reps to develop inefficient workarounds (cough Excel cough) outside the system, undermining the very efficiency you aimed for.
- Increased Reporting Burden, Diminished Insight: The focus on tracking the increased lead volume might lead to reports that no one has the time or expertise to analyze effectively, creating noise without genuine insight.
- Silos Around Automated Processes: Teams might become overly reliant on the automated lead flow, neglecting cross-functional communication or losing sight of the bigger customer journey. This is like a localized concentration of “light” that doesn’t illuminate the entire tank.

This is where Brault’s quote hits home for me. Because we can fix all that: de-duplication, validation rules, lead assignments, data quality reports. As you can see, these unintended consequences quietly accumulate and we end up with more busy work. If you become too focused on the easily measurable metrics, there is a risk that these very actions create new obstacles and divert us from our overarching strategic direction.
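To make that cleanup busy work concrete, here is a minimal sketch of one such fix, naive lead de-duplication by normalized email, in plain Python. The field names (“Id”, “Email”) are my own illustration, not a real Salesforce API:

```python
# Hypothetical sketch: flagging duplicate leads by normalized email.
# Field names ("Id", "Email") are illustrative, not a real Salesforce API.

def find_duplicate_leads(leads):
    """Group leads by normalized email; return ids of likely duplicates."""
    seen = {}
    duplicates = []
    for lead in leads:
        key = lead["Email"].strip().lower()
        if key in seen:
            duplicates.append(lead["Id"])  # keep the first, flag the rest
        else:
            seen[key] = lead["Id"]
    return duplicates

leads = [
    {"Id": "L1", "Email": "Jane@Example.com"},
    {"Id": "L2", "Email": "jane@example.com "},  # same person, messy capture
    {"Id": "L3", "Email": "bob@example.com"},
]
print(find_duplicate_leads(leads))  # ['L2']
```

Trivial on purpose: the point is that every automated “lesser goal” tends to spawn a tail of this kind of maintenance code.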

Losing direction while chasing immediate targets

This risk of losing direction while chasing immediate targets is very real. Just look at the rapidly evolving landscape of Artificial Intelligence. Like our automated lead capture, there is an allure of ‘quick wins’. This ‘low-hanging fruit’ everyone seems to like can create a similar set of unintended consequences. As I discussed in my previous article, ‘T.A.N.S.T.A.A.F.L.’, this ‘free lunch’ of uncoordinated AI initiatives often comes with hidden costs: duplicated efforts, integration nightmares, and a lack of strategic alignment.

The solution, much like ensuring our Salesforce implementation truly serves our overarching business objectives, lies in adopting an AI Game plan. This upfront investment in strategic thinking acts as our compass, maybe even North Star. We all need to work out what comfortable living means for ourselves. This way we ensure that the individual ‘lesser goals’ we pursue with AI are not just shiny objects distracting us from the real destination.

A well-defined AI Game plan helps us anticipate and mitigate potential unintended consequences by:

- Providing Strategic Alignment: Ensuring that every AI initiative, including those focused on immediate gains, is directly tied to our broader business goals. This helps us evaluate if the ‘clear path’ to a lesser AI goal is actually leading us towards our ultimate strategic vision.
- Promoting Resource Optimization: Preventing the duplication of effort and the creation of isolated AI solutions that don’t integrate effectively, thus avoiding the ‘busy work’ of constantly patching disparate systems.
- Establishing Data Governance: Implementing clear guidelines for data quality, security, and sharing, mitigating the risk of ‘data degradation’ and ensuring that the fuel for our AI initiatives is clean and reliable.
- Encouraging Holistic Thinking: Fostering cross-functional collaboration and a shared understanding of the bigger picture, preventing the formation of ‘silos’ around individual AI applications.
- Defining Measurement Beyond the Immediate: Establishing KPIs that not only track the success of the ‘lesser goals’ but also monitor for potential negative side effects and progress towards the overarching strategic objectives.

We need to look beyond the initial increase in leads and consider the quality and the impact on the sales team’s workflow in our Salesforce example. And an AI Game plan compels us to look beyond the initial promise of AI and consider its long-term impact on our people, processes, technology, and overall strategy.

Now, for a nice closure statement that ties everything together

Ultimately, the path to achieving our true goals is rarely a straight line of easily attainable milestones. It requires an awareness of the broader landscape and a willingness to look beyond the immediate allure of quick-wins.

Whether it’s ensuring clean tupperware contributes to a comfortable life, or that an automated lead capture process genuinely fuels business growth, a strategic ‘game plan’, for our household chores or our AI ambitions, acts as our guiding star. All the while making sure that the many trees we focus on are indeed leading us towards the right forest.


It’s Time to Play the Music

Martijn Veldkamp


April 17, 2025

Ever felt your design review was less constructive critique and more… relentless heckling from a balcony?

A couple of years ago, when I worked at Quint, I wrote an article together with Rob Swinkels for the magazine Informatie. It was based on the ideas of Mark de Bruin, who stated that as an architect you should be aware of the role you choose in your assignment. We took that a step further (with his permission), called it “from Einstein to Mandela”, and added a bit of Circle of Influence and other roles. Even Livingstone!

While it’s still a great article, perhaps the day-to-day reality of architectural life finds a more fitting comparison in the cast of The Muppet Show, with its beautiful chaos, clashing personalities, and moments of sheer, unadulterated absurdity! Chocolate Moose…

The architectural profession, often perceived as somewhat ‘ivory towerish’, is certainly multi-faceted. While discussing spirit animals with my kids, my inner curmudgeon came out and said Waldorf and Statler. This got me thinking.

This article is a deliberately nonsensical, tongue-in-cheek look in the mirror, using the Muppet Show’s finest as a warped lens. Consider this an unnecessary continuation of that earlier piece and thus a bit of “Friday nonsense”. It’s a personal antidote to the serious side of business. Don’t you have meetings that you wish you could just end with a karate chop?

I had this article in draft for quite a while and was having way too much fun describing the characters and rewatching episodes.

Kermit the Frog

The Ever-Patient (and Slightly Panicked) archetype

Leading, or perhaps just trying desperately to keep the curtain from falling, is Kermit. As the de facto leader of his chaotic troupe, he’s more the group’s glue than an authoritarian figure. He embodies the Project Manager: kindhearted, polite even when faced with absurdity. Constantly trying to find balance among the clash of personalities and their demands. He leads through dreams and a desire to help his friends succeed. Can he keep pace with the backstage commotion? The high-chaos environment of architectural practice on all levels? He mediates conflicts between demanding Starchitects (Piggy-types), unpredictable creatives (Gonzo/Animal-types), and nit-picking reviewers (Statler & Waldorf-types).

He holds it all together, even if the binding agent is mostly sweat and existential dread. Okay, I’m making that up. Under extreme pressure, though, even Kermit can become cynical or bossy, revealing the strain beneath the calm green exterior.

Miss Piggy

The ‘Moi’-numental Starchitect type.

Making her grand entrance is Miss Piggy, the embodiment of the ‘Starchitect’ and a great example of the Prima Donna stereotype. She is convinced of her destiny for stardom and possesses a temperamental diva personality. Criticism is not tolerated. And any perceived slight might be met with a swift “Hi-yah!” karate chop.

French phrases add an air of sophistication to statements like, “Kermie, darling, the client’s budget is merely a suggestion, non?” The Starchitect’s aggressive defense of their vision could potentially mask a deeper need for validation. In a profession where egos can be both large and fragile, sometimes appearing simultaneously, the diva-like behavior might serve as a performative shield against insecurity.

Fozzie Bear

The Relentlessly Optimistic

The insecure stand-up comic and relentlessly enthusiastic concept pitcher. He desires nothing more than to make people laugh. And be loved by his audience. He presents ideas with infectious enthusiasm often punctuated by his hopeful catchphrase.

His jokes are often met with groans, particularly from Statler and Waldorf. Fozzie takes criticism hard and feels crushed by negative feedback. He looks to his best friend Kermit for leadership and validation, his approval making or breaking Fozzie’s day. Fozzie embodies this emotional risk, making supportive team dynamics crucial to weather the inevitable ‘heckling’.

Gonzo the Great

The Visionary

Then there’s Gonzo, the performer of dangerous stunts. As an archetype, he’s the experimental designer. He embraces weirdness, seeking artistic meaning in chaos. His plans might resemble performance art manifestos more than design decision documents.

Yet, Gonzo isn’t just weird for weirdness’ sake. Beneath the zany stunts lies a surprisingly sensitive, philosophical, and melancholic personality. He has an almost poetic desire to create something meaningful and unique.

Animal

Instinctual designer

Animal operates on raw energy and instinct. He’s the chaotic creative. And has trouble communicating design intent. “DESIGN! GOOD! BUILD! NOW!”.

He often needs supervision (like his bandmate Floyd keeping him chained to the drum kit). The Animal architect ignores constraints, budgets, and sometimes physics. Yet, Animal is also a savant, a versatile and capable musician when focused. Occasionally, the Animal architect produces a flash of brilliance, usually by accident, amidst the chaos. The key is finding the right context and guidance to channel the raw energy away from pure destruction towards productive, if still LOUD, design.

Statler & Waldorf

The Balcony Critics

AKA the jaded senior architect who’s seen it all and is thoroughly unimpressed. They find fault with everything, often with sarcastic glee, bursting into laughter at their own witty criticisms.

They embody the resistance to change, the mentally retired attitude sometimes encountered in long-established figures. They’ll question every decision and predict project failure with smug satisfaction. Despite constantly complaining about the show, they always return for the next performance. Their consistent attendance despite their negativity suggests their criticism might be somewhat performative, stemming from their own fixed perspectives and enjoyment of heckling, rather than evaluation aimed at improvement.

This archetype represents criticism potentially rooted in defending the status quo or asserting authority, rather than fostering innovation.

Swedish Chef

The Bork-tastic

Known for his incomprehensible mock-Swedish gibberish and crazy cooking methods involving bizarre tools like firearms, he represents to me the highly specialized technical architect. His explanations are impenetrable technical jargon: “Urdy Boordy, flipperdy schnurde, de R-value!”

Communication inevitably breaks down, often ending with something exploding (“Bork, bork, bork!”). The Chef’s combination of incomprehensible jargon and unconventional methods humorously reflects the communication gap that can arise between technical specialists and the broader team.

While potentially innovative, their expertise can become counterproductive if not translatable or practical.

No actual Muppets (or architects) were harmed in the making of this article

Perhaps there’s a grain of truth in the madness. Do you recognize the frazzled but determined Kermit in your project, trying to steer the ship through stormy seas? Have you encountered the diva demands of a Miss Piggy-esque designer? Or felt the crushing weight of Statler and Waldorf’s critiques during a presentation?

Maybe, just maybe, you see a bit of Fozzie’s desperate need for approval in one of your own pitches, or Animal’s raw energy in a caffeine-fueled deadline sprint? Or perhaps you’re the Gonzo, passionately advocating for a truly bizarre idea, wondering why no one else quite gets your vision for AI agents deploying self build LWC components?

Ultimately, the comparison is ridiculous. Yet, sometimes, embracing the ridiculous is the only way to maintain sanity when the deadline looms large, the client is heckling from the virtual balcony, and someone (Animal!) appears to have broken every unit test in the system.


Beyond the Policy pdf

Martijn Veldkamp


June 27, 2025

An Architect’s Guide to Governing Agentic AI

Your company is rushing to deploy autonomous AI. A traditional governance playbook won’t work. The only real solution is to architect for control from day one.

You can feel the pressure on CTOs and Chief Architects from every direction. The board wants to know your AI strategy. Your development teams are experimenting with autonomous agents that can write their own code, access internal systems, and, “oh no!”, interact with customers. I understand. The marketing engine is working overtime and the promise is enormous: radical efficiency, hyper-personalized services, and a significant competitive edge.

But as a CIO or CTO, you’re the one who has to manage the fallout. You’re the one left with the inevitable governance nightmare when an autonomous entity makes a critical mistake.

Statement of intent

Let’s be clear: the old governance playbook is obsolete. A 20-page PDF outlining “responsible AI principles” is a statement of intent, not a control mechanism. In the age of agents that can act independently, governance cannot be an afterthought! I strongly believe it must be a core pillar of your Enterprise Architecture.

This isn’t about blocking innovation. It’s about building the necessary guardrails to accelerate, safely.

The New Risk: Why Agentic AI Isn’t Just Another Tool

We must stop thinking of Agentic AI as just another piece of software. Traditional applications are deterministic; they follow pre-programmed rules. Agentic AI is different. It’s a new, probabilistic class of digital actor.

A great example I heard:

(Image: AI interpretation of agents running around like Lemmings)

Think of it this way. You just hired a million-dollar team of hyper-efficient, infinitely scalable junior employees. But they have no (learned) inherent common sense, no manager, and no intuitive understanding of your company’s risk appetite. It makes me think of the game Lemmings.

This looks a lot like when IaaS, PaaS & SaaS and the move to the cloud were being discussed, but with something extra:

- Data Exfiltration & Leakage: An agent tasked with “summarizing sales data” could inadvertently access and include sensitive data in its output, as seen when Samsung employees leaked source code via ChatGPT prompts.
- Runaway Processes & Costs: An agent caught in a loop or pursuing a flawed goal can consume enormous computational resources in minutes, long before a human can intervene. The $440 million loss at Knight Capital from a faulty algorithm is a stark reminder of how quickly automated systems can cause financial damage.
- Operational Catastrophe: An agent given control over logistics could misinterpret a goal and reroute an entire supply chain based on flawed reasoning, causing chaos that takes weeks to untangle.
- Accountability Black Holes: When an agent makes a decision, who is responsible? The developer? The data provider? The business unit that deployed it? Without a clear audit trail of the agent’s “reasoning,” assigning accountability becomes impossible.

A policy document can’t force an agent to align with your business intent. The only answer is to build the controls directly into the environment where the agents live and operate.

Architecting for Control

Instead of trying to police every individual agent, a pragmatic leader architects the system that governs all of them.

Pillar 1: The Governance Gateway

Before any agent can go live executing a significant action (accessing a database, calling an external API, spending money, or communicating with a customer), it must pass through a central checkpoint. This Governance Gateway is where you enforce the hard rules:

- Cost Control: Set strict budget limits. “This agent cannot exceed $50 in compute costs for this task.”
- Risk Thresholds: Define the agent’s blast radius. “This agent can read from the Account object, but can only write to a Notes field.”
- Tool Vetting: Maintain an up-to-date “allowed list” of approved tools and APIs the agent is permitted to use.
- Human-in-the-Loop Triggers: For high-stakes decisions, the design should automatically pause the action and require human approval before proceeding.

This should be a familiar concept, because I borrowed it from API gateways, now applied to agentic actions. An approved design is your primary lever of control.
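As a rough sketch of the idea, under my own assumptions (the AgentAction shape, tool names, and policy keys are illustrative, not any vendor’s API), a gateway check could look like this:

```python
# Minimal sketch of a Governance Gateway, modeled on an API gateway.
# All names (AgentAction, tool ids, policy keys) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    tool: str              # e.g. "crm.read", "payments.charge"
    estimated_cost: float  # projected compute/API spend for this task
    writes_to: list        # fields the action intends to modify

POLICY = {
    "allowed_tools": {"crm.read", "notes.write"},   # vetted tool list
    "max_cost": 50.0,                               # budget limit per task
    "writable_fields": {"Notes"},                   # the blast radius
    "human_approval_tools": {"payments.charge"},    # high-stakes triggers
}

def gateway_check(action: AgentAction) -> str:
    """Return 'allow', 'deny', or 'escalate' for a proposed agent action."""
    if action.tool in POLICY["human_approval_tools"]:
        return "escalate"                 # pause for human-in-the-loop
    if action.tool not in POLICY["allowed_tools"]:
        return "deny"                     # tool not on the allowed list
    if action.estimated_cost > POLICY["max_cost"]:
        return "deny"                     # cost control
    if any(f not in POLICY["writable_fields"] for f in action.writes_to):
        return "deny"                     # write outside the blast radius
    return "allow"

print(gateway_check(AgentAction("a1", "crm.read", 2.0, [])))         # allow
print(gateway_check(AgentAction("a1", "payments.charge", 5.0, [])))  # escalate
```

The design point is that the checks live in one central place the agent cannot route around, not inside each agent’s own prompt or code.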

Pillar 2: Decision Traceability

When something goes wrong, “what did the agent do?” is the wrong question. The right question is, “why did the agent do it?” Standard logs are insufficient. You need a system dedicated to providing deep observability into an agent’s reasoning.

This system must capture:

- The Initial Prompt/Goal: What was the agent originally asked to do?
- The Chain of Thought: What was the agent’s step-by-step plan? Which sub-tasks did it create?
- The Data Accessed: What specific information did it use to inform its decision?
- The Tools Used: Which APIs did it call and with what parameters?
- The Final Output: The action it ultimately took.

This level of traceability is non-negotiable for forensic analysis, debugging, and, crucially, for demonstrating regulatory compliance. It’s the difference between a mysterious failure and an explainable, correctable incident.
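A minimal sketch of such a decision trace, capturing the five elements above, might look like the following. The structure and field names are my own illustration, not a standard schema:

```python
# Sketch of a decision trace: one record per agent task, one entry per step.
# Structure and field names are illustrative, not a standard schema.

import json
import time

def trace_step(trace, kind, detail):
    """Append one timestamped entry to an agent's decision trace."""
    trace["steps"].append({"t": time.time(), "kind": kind, "detail": detail})

trace = {"goal": "Summarize Q3 pipeline for the EMEA team", "steps": []}
trace_step(trace, "plan", "1) query opportunities 2) aggregate 3) draft summary")
trace_step(trace, "data_access", "Opportunity records, stage + amount fields only")
trace_step(trace, "tool_call", {"api": "crm.query", "params": {"region": "EMEA"}})
trace_step(trace, "output", "Posted summary to #sales-emea")

# The forensic question "why did the agent do it?" is now answerable:
print(json.dumps(trace, indent=2, default=str))
```

In practice you would persist these records to an append-only store, but even this toy shape shows the difference between logging what happened and logging why.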

Pillar 3: Architected Containment

You wouldn’t let a new employee roam the entire corporate network on day one. Don’t let an AI agent do it either. Agents must operate within carefully architected contained environments.

This goes beyond standard network permissions. Architected Containment means:

- Scoped Data Access: The agent only has credentials to access the minimum viable dataset required for its task.
- Simulation & Testing: Before deploying an agent that can impact real-world systems, it must first prove its safety and efficacy in a high-fidelity simulation of that environment.

Containment isn’t about limiting the agent’s potential; it’s about defining a safe arena where it can perform without creating unacceptable enterprise-wide risk.
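A toy sketch of both containment ideas, scoped read-only data access plus a simulation gate before promotion, using purely illustrative names and data:

```python
# Sketch of architected containment: the agent gets a scoped, read-only view
# of the minimum viable dataset, and must pass a simulation before going live.
# All names and data here are illustrative assumptions.

from types import MappingProxyType

FULL_DATASET = {
    "accounts": [{"name": "Acme", "revenue": 1_000_000}],
    "payroll":  [{"employee": "J. Doe", "salary": 90_000}],  # out of scope
}

def scoped_view(dataset, allowed_keys):
    """Expose only the allowed slices of the data, read-only."""
    return MappingProxyType({k: dataset[k] for k in allowed_keys})

agent_data = scoped_view(FULL_DATASET, ["accounts"])
print("payroll" in agent_data)   # False: minimum viable dataset only

def simulate(agent_fn, sandbox_data):
    """Run the agent against sandbox data; only promote it if it behaves."""
    try:
        result = agent_fn(sandbox_data)
        return result is not None
    except Exception:
        return False                # any misbehavior fails the gate
```

The read-only view means even a misbehaving agent cannot mutate the source data, and the simulation gate gives it a safe arena to fail in before it touches real systems.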

From Risk Mitigation to Strategic Advantage

Building this architectural foundation may seem like a defensive move, but it is fundamentally an offensive strategy. This first version of the framework is more than a set of features; it’s a strategic shift. It allows you to move away from the impossible task of policing individual agents and towards the pragmatic, scalable model of architecting their environment.

This is how you build a platform for innovation based on trust, safety, and control. It’s how you empower your organization to deploy more powerful AI, faster, because the guardrails are built-in, not bolted on.

The organizations that master AI governance will be the ones that can deploy more powerful agents, more quickly, and with greater confidence than their competitors. They will unlock new levels of automation and innovation because they have built a system based on trust and control.

This architecture transforms the role of EA and IT leadership. You are no longer just a support function trying to keep up; you become the strategic enabler of the company’s AI-powered future. You provide the business with a platform for safe, scalable experimentation.

The conversation needs to shift today. Stop asking “what could this AI do?” and start architecting the answer to “what should this AI be allowed to do?”

What’s your take? Of these three pillars: Gateway, Decision traceability, and Containment: which presents the biggest architectural challenge for your organization right now?

Share your thoughts in the comments below.


Yes, the zeroth law!

Engineering doesn’t solve problems…

Martijn Veldkamp


July 10, 2025

We just trade them in for newer, more interesting ones. And that’s our real job.

You ever spend a week with your team fixing a critical system, finally push the fix live and feel that wave of relief? Only to get a Slack message two months later. “Hey, weird question. Ever since we deployed that one patch, the reporting dashboard runs… backwards?”

Of course it does.

This is a beautiful lie

We have a sacred belief that we are problem solvers. We take a messy world and make it clean with elegant logic. But we’re not problem solvers. We are professional Problem Shifters. Think about it. We “fixed” monolithic backends… and created the 1,000-microservice headache that is only stable on weekends, when no one is deploying (a real story from an Uber lead architect). We “fixed” manual server deployments… and created the 3,000-line YAML file. Therefore we now need anchors and aliases.

It makes me think of the invention of the refrigerator. It’s a modern miracle. On the other hand, the cooling liquid tore a hole in the ozone layer.

That’s our job in a nutshell. We aren’t creating a utopia of solved problems. We’re just swapping today’s problem for tomorrow’s way more fascinating crisis. Yeah, I’m old and I was there to fix the year 2000 problem.

My PTO is coming up, and I was looking at books to take with me. Next to Bill Bryson’s A Short History of Nearly Everything, I also packed The Hitchhiker’s Guide to the Galaxy by Douglas Adams. I’ve read them before, but never combined them.

I think it is safe to say that we have found the:

First Law of Engineering Thermodynamics

Problem energy can neither be created nor destroyed, only changed in form.

Saying this out loud is a bit cynical. But I think you can also make it into a superpower. If we are smart about our entire approach.

- Stop chasing “done” and start chasing “durable.” The goal isn’t just to close the ticket. It’s to create a solution that won’t spawn five more tickets.
- Ask the “Next Question.” The most important question is “if we build this, what new problem are we creating?” Acknowledge the trade-off upfront.
- Redefine your win. The best engineers aren’t the ones who solve the most problems. They’re the ones who create the highest quality of future problems. That did not come out right. It’s having less complexity and fewer problems.

Building stuff is easy. The same goes for adding quick fixes, workarounds, or low-hanging fruit. Building a maintainable future, that’s the hard part.

What’s the best “future problem” you’ve ever created? Drop your best story in the comments.


The Amber Trap

Martijn Veldkamp


August 29, 2025

Last week I was cleaning out some old project folders (a therapeutic exercise I recommend to anyone). I learned that from a fellow architect. Does this spark joy? Anywho, I stumbled upon a perfectly preserved architecture document from 2018, complete with detailed diagrams, stakeholder matrices, and decision rationales that read like artifacts from a lost civilization. The document was comprehensive! And absolutely useless for understanding the current system. It had become digital amber: beautifully preserved, but tragically misleading.

This got me thinking about my previous article on contextual decay in software systems. While that article focused on how LLMs miss the living context around code, there’s another phenomenon at play. Sometimes we do preserve context, but it fossilizes into something that actively misleads rather than informs.

The Dominican Republic of Documentation

In the mountains of the Dominican Republic, amber deposits contain some of the most perfectly preserved specimens in the natural world. Insects trapped in tree resin millions of years ago look as fresh as if they died yesterday. Their delicate wing structures, compound eyes, and even cellular details remain intact. It’s breathtaking. It’s also a snapshot of a world that no longer exists. And people pay serious money for these specimens, mined under risky conditions, but that is a whole other rabbit hole. Yes, I ended up on Catawiki.

The forest that produced that resin is gone. The ecosystem those insects lived in has vanished. The amber preserves the form perfectly while the function has become irrelevant.

The Amber Trap: When Context Gets Fossilized

This leads me to a fourth law, a corollary to Contextual Decay. I call it:

The Amber Trap: A system’s context, when perfectly preserved without evolving, becomes a liability that actively harms its future development.

In my previous article, I argued that code repositories are like dinosaur skeletons: all structure, missing the context. Documentation was supposed to be that soft tissue, filling in the gaps with the why behind the what. But here’s what I’ve realized: soft tissue doesn’t just decay; sometimes it mummifies.

Real soft tissue decomposes quickly, which is actually helpful. It forces paleontologists to constantly seek new evidence, to challenge their assumptions, to remain humble about their reconstructions. See https://www.nhm.ac.uk/discover/news/2025/august/bizarre-armoured-dinosaur-spicomellus-afer-rewrites-ankylosaur-evolution.html

Outdated Architecture Decisions

I’ve seen this pattern repeatedly across organizations:

The Authentication Fossil: A security architecture document from 2019 explaining why we chose a custom JWT implementation instead of OAuth because “we need fine-grained control and the OAuth landscape is too fragmented.” Today, that same custom implementation is our biggest security vulnerability, but new developers read the document and assume there’s still a good reason for not using industry standards.

The Database Decision Artifact: Documentation explaining why we chose MongoDB because “we need to move fast and can’t be constrained by rigid schemas.” The current system now has more validation rules and data consistency requirements than most SQL databases, but the document makes it seem like any proposal to add structure is fighting against architectural principles.


(Image: Tiny banana, based on the article below)

Are We Building Lasting Value or the World’s Most Expensive Bubble?

Martijn Veldkamp


October 24, 2025

The AI infrastructure boom is not an abstract headline to me. It’s hitting close to home, here in the Netherlands, close to where I live. Microsoft has acquired 50 hectares for a new data center, with the CEO citing 300,000 customers who “want to store their data close by in a sovereign country.” This is at Agriport, where there are two datacenters already (Microsoft and Google), which were recently in the news because, oopsie, they consume more water and power than stated in their initial plans.

The local impact is pretty bad. This planned buildout is consuming so much power and water from the grid that it’s halting the construction of new businesses and even houses for young people in the surrounding villages. There is a report that mentions a striking change in an environmental vision (omgevingswet), which made extra building land available on which new data centers could be built. This was based on a map that, according to the researchers of Berenschot, was of poor quality, was poorly substantiated, and whose origin could not be traced.

It’s made me step back and question the drivers behind this relentless push for more datacenters.

Is This the Right Path to Better AI?

The current building and investing frenzy is built on two beliefs: one, that adding more compute power will inevitably lead to better, perhaps even superintelligent, AI. And two, that the big tech companies cannot lose this AI war. OpenAI’s Sam Altman is reportedly aiming to create a factory that can produce a gigawatt of new AI infrastructure every week.

Yet, a growing group of AI researchers is questioning this “scaling hypothesis,” suggesting we may be hitting a wall and that other breakthroughs are needed. This skepticism is materializing elsewhere, too. Meta, a leader in this race, just announced it will lay off roughly 600 employees, with cuts impacting its AI infrastructure and Fundamental AI Research (FAIR) units.

The Bizarre Economics of the Race to Build More

In earlier decades, datacenter growth was a story of reusing the leftovers of older industries. Repurposing the power infrastructure left over from an earlier era, like old steel mills and aluminum plants.

Today, we’re doing it again. Hyperscalers are building datacenters at a massive scale, competing for everything that goes into one, from land to skilled labor to copper wire and transformers, which is leading to plans to build their own power plants. The economics are astounding.

Why do I say that? Datacenters also depreciate, not as fast as the Nvidia chips inside them, but still. Are we sure we are not building the new leftovers, like the old steel mills?

FOMO, Negative Returns, and the Bubble Question

We now have megacap tech stocks, once celebrated as “asset-light”, spending nearly all their cash flow on datacenters. Losing the AI race is seen as existential. This means all future cash flow, for years to come, may be funneled into projects with fabulously negative returns on capital: lighting hundreds of billions of dollars on fire rather than losing out to a competitor, even when the ultimate prize is unclear.

If this turns out to be a bubble, what lasting thing gets built? The dot-com bubble left me with a memory of Nina Brink and the World Online drama. But if the AI bubble pops, 70% of the capital (the GPUs) will be worthless in 3 years. We’ll be left with overbuilt shells and power infrastructure. Perhaps the only lasting value will be a newly industrialized supply chain for ehhh.
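The depreciation point above can be put in rough numbers. A minimal sketch, assuming straight-line depreciation and the article’s split of roughly 70% GPUs (worthless in about 3 years) versus 30% shell and power infrastructure (which I illustratively give a 25-year life — that figure is my assumption, not from the article):

```python
def remaining_value(capex, years,
                    gpu_share=0.70, gpu_life=3,
                    shell_share=0.30, shell_life=25):
    """Straight-line book value of a datacenter build after `years`.

    Illustrative split: ~70% of capital is GPUs with a ~3-year life,
    the rest is shell and power infrastructure with a longer life.
    """
    gpu = capex * gpu_share * max(0.0, 1 - years / gpu_life)
    shell = capex * shell_share * max(0.0, 1 - years / shell_life)
    return gpu + shell

# A $100B build, three years in: the GPU share is fully written off,
# and only the shell/power value remains (~$26.4B at these assumptions).
print(remaining_value(100e9, 3))
```

At these assumed lives, nearly three quarters of the capital evaporates in three years, which is what makes the “new leftovers” question worth asking.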

What’s your take? Are we building the next industrial revolution or the world’s most expensive, short-lived infrastructure?


Conway’s Law

Any organization that designs a system (defined broadly) will
produce a design whose structure is a copy of the organization’s
communication structure

— Melvin E. Conway

I’ve blogged previously on the topic of Technical Debt and ways to prevent and/or overcome it. I’ve used my brilliant PowerPoint skills for pictures like this:



But what I forgot is one of the most important drivers of them all… communication! And how organisations either facilitate that or (unknowingly) hinder it.

Study by The Harvard Business School

In a study conducted by The Harvard Business School, various codebases with similar functional purposes were examined to test this hypothesis. The researchers compared results from loosely-coupled open source teams with those from tightly-knit commercial teams. The study revealed that tightly-coupled product teams tended to produce tightly-coupled, monolithic software, while open source projects leaned towards more modular and decomposed codebases.
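One illustrative way to quantify the kind of coupling such a study compares is the share of dependency edges that cross team boundaries. A toy sketch (the module and team names are made up for illustration):

```python
def cross_team_coupling(deps, owner):
    """Fraction of dependency edges that cross team boundaries.

    deps:  list of (module_a, module_b) dependency pairs
    owner: dict mapping module name -> owning team
    A rough, illustrative proxy, not the study's actual metric.
    """
    if not deps:
        return 0.0
    crossing = sum(1 for a, b in deps if owner[a] != owner[b])
    return crossing / len(deps)

owner = {"billing": "squad-a", "invoices": "squad-a",
         "leads": "squad-b", "email": "squad-c"}
deps = [("billing", "invoices"),   # stays within squad-a
        ("billing", "leads"),      # crosses squad boundary
        ("leads", "email")]        # crosses squad boundary
print(cross_team_coupling(deps, owner))  # 2 of 3 edges cross teams
```

A codebase whose cross-team fraction stays low tends to be the modular, decomposed kind; one where most edges cross teams is drifting toward the monolith.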

Reorganisations

There is a growing trend of company reorganisations around Agile and Scrum. Many organisations aim for a structure made popular by the Spotify model: converting into Squads, supported by Chapters, organised in Tribes and Guilds. These autonomous teams have end-to-end responsibility for what they build and deliver. Such a reorganisation is a great example of managing the connection between organisational structure and the software the teams produce.

So, if organisations design systems that mirror their own communication structure, what does that look like in Salesforce? What kind of teams do we see? What kind of Technical Debt does that create or solve?

Salesforce teams

In Salesforce implementations we organise around the functional domains and their supporting Clouds: Marketing, Commerce, Sales & Service. These teams can be very autonomous, but only up to a certain point: they all need to deploy to the same Salesforce Org. There are often discussions about who builds the interfaces. Do these integration developers sit in the Salesforce teams, or do they have their own team?

Impact on Technical Debt

I have talked often about Technical Debt, and I like to define it as “the accumulated consequences of taking shortcuts during development to meet deadlines or other constraints”. When an organisation’s setup and structure promotes fragmented communication and isolated decision-making, it can lead to technical debt in several ways:

  1. Inconsistent Design: If different teams work in silos without communication and collaboration, the overall architecture may lack coherence and consistency. Inconsistent design and architectural choices accumulate over time, making Salesforce harder to maintain and extend with new functionality.
  2. Communication Gaps: When teams work independently, they might not be aware of changes or updates made by other teams, leading to communication gaps. This lack of awareness can result in conflicts, redundant efforts, and integration challenges, further exacerbating technical debt.
  3. Knowledge Silos: Conway’s Law also plays a role in knowledge sharing. Isolated teams may develop specialized knowledge about specific aspects of Salesforce, creating knowledge silos. When team members leave or move to other projects, this specialized knowledge may be lost, hindering the understanding and maintenance of the software.
  4. Delayed Integration: Tightly-coupled designs often result in fragile point-to-point integrations. These require constant and extensive testing efforts. As a result, changes can lead to integration problems, regression bugs, and time-consuming rework.

Conclusion

By understanding the unintended effects of Conway’s Law, and by organising and facilitating teams to have a single purpose and responsibility while still collaborating and communicating effectively, organisations can build a Salesforce implementation that fits their needs.


One of the propositions of cloud is that it should be possible – through the use of intelligent software – to build reliable systems on top of unreliable hardware. Just like you can build reliable and affordable storage systems using RAID (Redundant Arrays of Inexpensive Disks).
One of the largest cloud providers says: “everything that can go wrong, will go wrong”.
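The RAID analogy is easy to put into numbers. A minimal sketch of the underlying math: if one replica is up with probability a, then at least one of n replicas is up with probability 1 − (1 − a)^n — assuming independent failures, which real systems only approximate:

```python
def replicated_availability(a, n):
    """Availability of n replicas, each independently up with prob a."""
    return 1 - (1 - a) ** n

# Three mediocre 99% machines beat one very good machine:
print(replicated_availability(0.99, 1))  # one replica: 0.99
print(replicated_availability(0.99, 3))  # three replicas: ~0.999999
```

This is the whole proposition in one line: reliability comes from the software layering cheap redundancy, not from any individual box.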

So the hardware is unreliable, right? Mmm, no. Nowadays most large cloud providers buy very reliable, simpler (purpose-optimized) equipment directly from suppliers upstream in the server market. Sorry Dell, HP & Lenovo, there goes a large part of your market. Because when you run several hundred thousand servers, a failure rate of 1 PPM versus 2 PPM (parts per million) makes a huge difference.
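The PPM point is plain arithmetic at scale. A quick sketch (the fleet size here is illustrative, matching the “several hundred thousand servers” order of magnitude):

```python
def expected_failures(fleet_size, ppm):
    """Expected number of failed units at a failure rate given in
    parts per million (PPM)."""
    return fleet_size * ppm / 1_000_000

fleet = 500_000  # illustrative hyperscale fleet
print(expected_failures(fleet, 1))  # 0.5 expected failures
print(expected_failures(fleet, 2))  # 1.0 — double the replacement work
```

At a fleet of five machines the difference between 1 and 2 PPM is invisible; at half a million machines it doubles the hands-on repair and replacement workload.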

Uptime is further increased by thinking carefully about what exactly matters for reliability. For example, one of the big providers routinely removes the overload protection from its transformers. They prefer that a transformer costing a few thousand dollars occasionally breaks down to whole aisles regularly losing power because a transformer manufacturer was worried about possible warranty claims.

The real question continues to be what happens to your application when something like this occurs. Does it simply remain operational, does it gracefully degrade to a slightly simpler, slightly slower but still usable version of itself, or does it just crash and burn? And for how long?
The cloud is not about technology or hardware; it’s about mindset and application architecture.
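Those three outcomes — stay up, degrade gracefully, crash — map onto a familiar fallback pattern in application code. A minimal sketch (the service and function names are made up for illustration):

```python
def get_recommendations(user_id, fetch_live, fetch_cached):
    """Serve personalized results, degrading to a cached, simpler
    answer when the live backend is unavailable."""
    try:
        return fetch_live(user_id), "live"
    except ConnectionError:
        # Graceful degradation: staler and less tailored, still usable.
        return fetch_cached(user_id), "degraded"

def broken_backend(user_id):
    # Simulate the live service being down.
    raise ConnectionError("backend unreachable")

result, mode = get_recommendations(
    42, broken_backend, lambda user_id: ["top sellers"])
print(mode, result)  # the app stays usable in degraded mode
```

An application without the except branch is the crash-and-burn case; the architecture decision, not the hardware, determines which of the three you get.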



One thing is for certain: we will spend a good part of 2015 talking about, discussing and disagreeing on how we now need to move, deliver, transport, carry, send and integrate the various component elements that make up our Business Applications.

The advent of Cloud, virtualization and managed hosting technologies means that we have all become used to the ‘as-a-Service’ extension as we now purchase a defined selection of software applications and data that are increasingly segmented and componentized in their nature.

Because of the Cloud, businesses run on mobile devices, with employees, customers and partners easily collaborating, and data securely stored and accessible from anywhere in the world, all without a worry about the infrastructure. That’s someone else’s problem, isn’t it? With low monthly prices, who wouldn’t sign up and embrace a SaaS app that makes life easier?

All the convenience comes at a price.

That price is silos. Instead of tearing down silos, SaaS applications build strong, high walls around functionality and data. Not the traditional legacy silos, but loads of little silos within and between departments and teams. Instead of bringing teams into alignment, they are separated into fiefdoms of data if one does not govern the Cloud.


We’re about to come full circle once again and give a greater role to local storage and computing power.

It depends on the nature and the amount of data that needs to be stored and on its processing demands. With the enormous rise in the amount of data due to the ‘Internet of Things’, the nature of that data is becoming more and more diffuse. These developments lead to yet another revolution in the data landscape: the Fog.

Smarter? Or gathering more data?

More and more devices are equipped with sensors: cars, lampposts, parking lots, windmills, solar power plants, and everything from animals to humans. Many of these developments are currently still in the design phase, but it will not be long before we live in smart homes in smart cities, driving our cars on smart streets while wearing our smart tech.

Everything around us is ‘getting smarter’, which in practice means it gathers more data. But where is that data stored, and why? Where is all that data processed into useful information? The bandwidth of the networks we use grows much more slowly than the amount of data that is sent through them. This forces us to think about the reason to store data (in the cloud).
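The gap between data growth and bandwidth growth compounds, which is why it eventually forces a decision. A toy sketch with illustrative growth rates (40% and 20% per year are my assumptions, not figures from the article):

```python
def years_until_saturated(data_tb, link_capacity_tb,
                          data_growth=0.40, bw_growth=0.20):
    """Years until annual data volume outgrows what the link can carry.
    Growth rates are illustrative assumptions."""
    years = 0
    while data_tb <= link_capacity_tb:
        data_tb *= 1 + data_growth
        link_capacity_tb *= 1 + bw_growth
        years += 1
        if years > 100:
            return None  # never saturates at these rates
    return years

# Even starting with 10x headroom (10 TB/yr over a 100 TB/yr link),
# the faster-growing data volume wins in the end.
print(years_until_saturated(10, 100))  # 15 years at these rates
```

The exact crossover year depends entirely on the assumed rates; the point is that any sustained growth gap makes “send everything to the cloud” a temporary strategy.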

If you want to compare data from many different locations, for instance sensor data from parking lots feeding an app that shows where the nearest free parking space is, then the cloud is a good place to process the information. But what about data that is better handled locally?

Data Qualification

The more data is collected, the more important it will be to determine the nature of that data and what needs to be done with it. We need to look at the purpose of the collected data. For example, if the data is used for ‘predictive maintenance’, which monitors something so that timely replacement or preventive maintenance can take place, it does not always make sense to send that data to the cloud.

Another example is the data generated by security cameras. These typically show, 99.9% of the time, an image of a room or space that has not changed. The interesting data is the remaining 0.1% where there is something to see. The rest can be stored locally, or not at all. This filtering of useful from useless data again calls for local processing power.
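The camera case is essentially change detection at the edge. A toy sketch, treating frames as flat lists of pixel values and keeping only frames where more than 0.1% of pixels changed (the threshold and frame format are simplifications for illustration):

```python
def frames_worth_sending(frames, threshold=0.001):
    """Keep only frames differing from the previous one in more than
    `threshold` fraction of pixels -- the interesting 0.1%."""
    kept, previous = [], None
    for frame in frames:
        if previous is None:
            kept.append(frame)  # always send the first frame
        else:
            changed = sum(1 for a, b in zip(frame, previous) if a != b)
            if changed / len(frame) > threshold:
                kept.append(frame)
        previous = frame
    return kept

static = [0] * 1000
motion = [0] * 995 + [255] * 5            # 0.5% of pixels changed
stream = [static, static, motion, static]
# The duplicate static frame is dropped; the motion frame and the
# return to static are kept.
print(len(frames_worth_sending(stream)))  # 3 of 4 frames
```

Running this filter on the camera itself means only the interesting slivers ever touch the network, which is exactly the bandwidth-and-storage argument for fog computing.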

This decentralization of computing power and storage is a recent trend that Cisco calls ‘fog computing’. With distributed intelligence, more effective action can often be taken in response to the collected data, and unnecessary bandwidth and storage costs can be avoided. This is a development that goes very well with the transition to the cloud.

Cisco

Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. The distinguishing Fog characteristics are its proximity to end-users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even end devices such as set-top-boxes or access points. By doing so, Fog reduces service latency, and improves Quality of Service (QoS), resulting in superior user-experience. Fog Computing supports emerging Internet of Everything (IoE) applications that demand real-time/predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its wide geographical distribution the Fog paradigm is well positioned for real time big data and real time analytics. Fog supports densely distributed data collection points, hence adding a fourth axis to the often mentioned Big Data dimensions (volume, variety, and velocity).

Unlike traditional data centers, Fog devices are geographically distributed over heterogeneous platforms, spanning multiple management domains. Cisco is interested in innovative proposals that facilitate service mobility across platforms, and technologies that preserve end-user and content security and privacy across domains.

The future? It will be hybrid with foggy edges.