
Architectural OCD

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

March 8, 2024

A couple of weeks ago the Fridays Nonsense post by Karsten Scherer had an interesting topic that had me thinking about internal systems. He mentioned systems that we are not aware of until they stop working. For example, your inner-ear balance.

That post kickstarted my thinking about switching between the unconscious and the conscious. Becoming aware of feeling uncomfortable: you cannot say exactly what is wrong, but something is off. What triggers it? Why does a picture hanging slightly skewed on the wall trigger my senses so that I notice it immediately? With what framework and expectations am I viewing the world? Or the other way around: what damping and priority settings do I have that handle the noise but still alert me when stuff is different?

For example, when you walk into a house and you smell something different. “Do you smell that?”. It could be me burning today’s dinner, it could be the house on fire, it could be my wife painting the wall yet another color, or the neighbour burning their plastic garbage, again…

Anyway, it is something different. Different than expected. Different than the baseline that we all have. Different enough that it travels through the overload of other sensory input. And gets priority to override.

Side note: Reacting to something smelling different is probably a survival trait. An unconscious response to danger.

Smells

Fresh Prince of Bel Air

To loop back a bit to my professional career: I read about code smells in a book by either Martin Fowler or Kent Beck somewhere at the end of the ’90s. Both come to mind when using the phrase “code smells” and thinking about possible technical debt.

I think it is the same as seeing stones in a sidewalk that don’t fit with the rest of the pattern. It “feels wrong”. Like a class with too many responsibilities, very long methods, duplicated code or methods with oh so many parameters. Next to these coding warning signs, or anti-patterns, there are also governance, process and architectural smells. Let’s focus on the last part: architectural smells.
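
To make the “oh so many parameters” smell concrete, here is a minimal, hypothetical Apex-style sketch (all names are invented) of a signature that has grown one argument at a time, together with one common way to refactor it:

    // Hypothetical example of a method signature that keeps growing.
    public class QuoteService {
        public void createQuote(Id accountId, Id contactId, String productCode, Decimal unitPrice,
                                Integer quantity, Date startDate, Date endDate, Boolean isRenewal) {
            System.debug('Creating quote for account ' + accountId); // real creation logic omitted
        }

        // Common refactoring: group the parameters into a small request object,
        // so adding a field no longer ripples through every caller.
        public class QuoteRequest {
            public Id accountId;
            public Id contactId;
            public String productCode;
            public Decimal unitPrice;
            public Integer quantity;
            public Date startDate;
            public Date endDate;
            public Boolean isRenewal;
        }

        public void createQuote(QuoteRequest request) {
            System.debug('Creating quote for account ' + request.accountId);
        }
    }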

Architecture smells

Architectural smells have many examples: applying a design in an inappropriate context, mixing designs that lead to undesirable behaviors, applying a design at the wrong level of granularity. These smells are not always the result of choices made at the beginning. What I see happening at a lot of my customers is that the iterative development approach of their core systems and applications, in distributed teams, eventually leads to a loss of coherence in their architectural elements.

The moment you as an architect cannot focus on fixing an important part of the design without having to deal with the whole thing all at once: that is a smell.

Types of architectural smells I’ve encountered:

Lack of Separation of Concerns

I see systems where a lot of responsibility and knowledge sits in one component: how to set up a secure connection, orchestrate multiple calls, handle all the translations and errors, but also map all the fields and results. Reusability, modifiability, and understandability are impacted. Now every component that wants to send messages needs its own implementation of securing a connection, storing secrets, orchestration, etc. Refactoring is an answer, next to education and showing what good looks like.
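
As an illustration only, a hedged Apex-style sketch (all names, including the named credential, are invented) of such a “knows everything” component; every new integration ends up re-implementing some variant of it:

    // Hypothetical 'does everything' component: connection security, secret handling,
    // field mapping, orchestration and error handling all live in one place.
    public class OrderSync {
        public void send(Order orderRecord) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:ERP_System/orders'); // assumes a named credential exists
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(orderRecord));     // hand-rolled field mapping lives here too
            try {
                HttpResponse res = new Http().send(req);
                System.debug('ERP responded with ' + res.getStatusCode());
                // ...orchestrate follow-up calls, translate results, update records...
            } catch (Exception e) {
                System.debug('Sync failed: ' + e.getMessage()); // retries and logging are buried here as well
            }
        }
    }

Pulling the authentication, mapping and error-handling concerns into their own components is what lets the next integration reuse them instead of copying them.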

Copies are everywhere

Another great anti-pattern is the result of copy and paste. It’s when you see duplications of code addressing the same concerns. It could be due to many reasons. I have seen Salesforce Orgs that paid very low wages to their Salesforce Developers and code was duplicated everywhere. Refactoring is an answer, next to education and showing what good looks like.

Genericity

It probably isn’t a word, but as a non-native speaker I have given myself some leeway. What I mean is an overuse of very generic interfaces, so interactions that are not explicitly modelled. In Salesforce we have the glorious sObject, which can be anything really. Case, Contact or Account? It is hard to do dependency analysis when everything is an sObject. Yes, there is a search function in Visual Studio Code, and no, it is not a replacement.
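
A small, hypothetical Apex sketch of the difference (the custom field in the generic version is made up):

    // Generic: accepts anything, so the real dependencies are invisible to the
    // compiler and to dependency analysis.
    public class GenericProcessor {
        public void process(sObject record) {
            record.put('Status__c', 'Processed'); // hypothetical field; only fails at runtime if missing
            update record;
        }
    }

    // Explicit: the dependency on Case is visible, searchable and compile-time checked.
    public class CaseProcessor {
        public void process(Case caseRecord) {
            caseRecord.Status = 'Closed'; // standard Case.Status picklist value assumed to exist
            update caseRecord;
        }
    }

Generic plumbing has its place, but when business logic hides behind sObject everywhere, nobody can tell which objects a change will actually touch.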

Conclusion

Code smells help developers figure out when and where they need to tidy up their code. Similarly, architectural smells help architects know where they should tweak their architectural designs. These smells show up when the software breaks some basic rules of how it should be built, like keeping changes separate from each other. But they also give specific signs that can be easily spotted if you have that bit of OCD.

By paying attention to what makes you uncomfortable, you can filter these smells and make small changes in different parts of your design that, added up over time, make the whole system work a whole lot better.


Bedrock picture from DuckDuckGo and not a Minecraft one

Data Governance as the Bedrock of Effective AI Governance

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

November 3, 2023

As an organisation you need a plan to address the market disruption of Generative AI. You don’t need to build your own version of ChatGPT. But you need a plan for how your organisation will deal with all the initiatives that will start. Otherwise, I wish you good luck with the conversation you will have when one of your CxOs comes back from some partner-paid conference stating that the company will go bankrupt if you don’t invest right now.

In this series of articles I felt the need to explore some of my current thinking on where Generative AI has its place.

Business Braveheart

In this ever-renewing push of the newest flavour of technology, the fusion of architecture, governance, and data governance stands as the cornerstone for reliability.

As organisations navigate their discovery of the complex realm of artificial intelligence, it becomes increasingly apparent that the success of implementing one of these LLMs (ChatGPT, Bard or Bing AI) is deeply entwined with the quality, security, and integrity of the data that they need and produce.

Effective AI governance is not just about fine-tuning algorithms or optimising your LLM models. It begins with the bedrock of quality.

Garbage in, garbage out

Feedback loop

It’s the quality, accuracy, and reliability of the input data that dictates the usefulness of AI’s output. Thus, a holistic approach needs a very strong foundation in data quality and its governance. And remember, the prompts that you use for getting results are also data that needs to be governed. How else will you establish a feedback loop on effective usage of the tool?

The Interdependence of Data Governance and AI Governance

Data governance, as I’ve stated in the previous blog posts, primarily concerns itself with the management, availability, integrity, usability, and security of an organisation’s data.

AI in any form, by its nature, operates as an extension of the data it is fed. Without a sturdy governance structure over the data that you produce, AI governance becomes a moot point. On another note, I’m still surprised nobody has come up with an AI that generates cute cat short clips for a YouTube channel. Wait, I’m on to something here…

Quality Data: The Lifeblood of AI

A key aspect is that the quality of data isn’t an isolated attribute but a collective responsibility of various departments within an organisation. We all know that, but where does the generated data sit?

In the past I wrote about Systems Thinking and I still have to plot for myself where Generative AI sits. Is it like our imagination? Where do you master the data an LLM generates for you? Can I re-generate it reliably? What happens with newer generated outcomes? Are these better than the old? Is the generated response email owned by the Service Department or the AI team? These articles are as much for you as for me, to fully grok where an LLM and its outcomes sit in the system.

Security and Ethical Implications

Privacy concerns, compliance with regulations, and ethical considerations in handling and processing data are pivotal components of data governance. As AI systems often deal with sensitive information, ensuring compliance with data protection regulations and ethical use of data becomes a critical component of AI governance. The same goes for the outputs. Where are they used or stored? How do these different data providers compare? The question that popped up in my head: within the Salesforce ecosystem we use a lot of Account data and have linked it with third-party providers. We enrich the data that we have on the customer with Dun & Bradstreet information or, in the Netherlands, with the KVK register. What happens with the ‘authority score’ if we add Generative AI to the mix? We still have a lot to discover together.

Keep it simple

Keep calm meme-o-matic

In short, because I have harped on it before, organisations should:

  • Establish Comprehensive Data Governance Frameworks: Institute clear policies for data ownership, stewardship, and data management processes. This not only fosters quality but also ensures accountability and responsibility in data handling.
  • Promote Cross-Functional Collaboration: Break down silos and encourage collaboration between various departments. Not just good for data quality, but for many more aspects in life.
  • Leverage Automation for Data Quality Assurance: Harness the power of automation tools to identify anomalies and inconsistencies within data, ensuring high-quality inputs for AI models. Ever did a large migration from one system to another? Right, automation for the win! (A minimal sketch follows below.)
  • Continuously Monitor and Improve Data Governance: Implement systems for ongoing monitoring of data quality. We have a Dutch expression which, translated, goes something like “the polluter pays”. Bad data has so many downstream effects that I almost want to advise a monthly blame-and-shame highlight list. Let’s forget about that for now. I do, however, want to stress a carrot-and-stick approach.
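
As a minimal sketch of where such automation can start, here is a hypothetical anonymous Apex snippet that flags records missing the field we enrich on (the custom field name is invented; in practice you would lean on duplicate rules, validation rules or a dedicated data-quality tool):

    // Flag Accounts that are missing the key field used for enrichment (hypothetical field).
    List<Account> incomplete = [
        SELECT Id, Name
        FROM Account
        WHERE RegistrationNumber__c = null
        LIMIT 200
    ];
    for (Account a : incomplete) {
        System.debug('Missing registration number: ' + a.Name);
    }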

Conclusion

In my subsequent articles, I’ll try to delve deeper into the practical strategies and steps organisations can adopt and make it more Salesforcy.

Borrowed from https://blog.kore.ai/conversational-ai-top-20-trends-for-2020


AI Governance

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

October 4, 2023

AI governance is crucial to strike a balance between harnessing the promised benefits of AI technology and safeguarding against its potential risks and negative consequences.

It’s a very interesting and still emerging problem: how to effectively govern the creation, deployment and management of these new AI services, end to end.

Heavily regulated industries, such as banking, or public services such as tax agencies, are legally required to provide a level of transparency on how they operate their IT. And this is also true for their AI models. Failure to offer this transparency can lead to severe penalties. AI models, like the algorithms that preceded them, can no longer function as a mystery.

The funny thing is that AI and its governance is a hot topic, but Data Governance…? That ranges from boring to “Wasn’t that solved already?”.

It’s still all about that data

The real challenge is always the data. If you think about it, AI is about what data you trained it on, what you want to use it on, and when. So AI Governance is not just the algorithms and the models. It starts with data.

It’s no longer enough to secure your data and say you will comply with privacy laws. You have to be, verifiably, in control of the data you are using. Both in and out of the AI models you are using.

Its source, its provenance and its ownership. What rights do you have? What rights does the provider of that data have? We are not even beginning to scratch the surface of how you would even enforce those rights.

When planning to use AI, ensure your data is accurate, complete and of high quality

And this starts at the collection of data. Does it provide accurate information? Am I missing data? Is the source reliable, timely and of high quality? How will we measure and assess whether the data collection is working as intended?

Human error is one of the easiest ways to lose data integrity. Well, you can call it human error, but it also boils down to this: is the system set up in a way that it actually makes sense to enter all that data, here and now? If users are not willing to enter all the necessary data, your data sets will never be of a high enough quality.

Systems integrating with one another is also a great way to lose data integrity. The moment systems have different implementations of concepts like Customer or Order and their lifecycles, you will have a hard time combining that data.
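
A tiny, hypothetical sketch of what that mismatch looks like; the status values are invented:

    // Two systems, two incompatible lifecycles for the same 'Order' concept.
    public enum WebshopOrderStatus { CREATED, PAID, SHIPPED, RETURNED }
    public enum ErpOrderStatus { OPEN, INVOICED, CLOSED }
    // Is PAID the same as INVOICED? Does RETURNED map to CLOSED?
    // Until the business answers that, any combined data set is a guess.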

In order to successfully introduce AI in your business, you have to be in control of your data. That data is created throughout your application landscape. From the earlier iterations of Data Warehouses to the rise to prominence of Data Scientists, it’s all about data, its integrity, security and quality. That hasn’t changed.

How you frame your problem will influence how you solve it

According to The Systems Thinker, if a problem meets these four criteria, it could benefit from a systems thinking approach:

  • The issue is important
  • The problem is recurring
  • The problem is familiar and has a known history
  • People have unsuccessfully tried to solve the problem

We need to make sense of the complexity of the application landscape by looking at it in terms of wholes and relationships rather than by splitting it down into its parts. Then the flow of data throughout will start to make sense. And only then can we start addressing the lack of Data Quality, Data Security and Data Integrity.

And who knows, if that all is solved we can start thinking about where it actually makes sense to introduce AI.


Human Error -> Root cause AWS S3 outage is found

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

March 3, 2017

An authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.
The servers that were inadvertently removed supported two other S3 subsystems. One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region.
While these subsystems were being restarted, S3 was unable to service requests. Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including Elastic Compute Cloud (EC2), Elastic Block Store (EBS) volumes and AWS Lambda were also impacted while the S3 APIs were unavailable. 
Read the whole story here


“How do you think it’s going yourself, Neighbour?”

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

February 27, 2017

Charcoal sketches, tapping into talent, grilling managers, realising prototypes, putting things in the scaffolding, securing certainties, anchoring values, flattening things out, hammering proposals through, driving in marker posts and sketching outlines: on the train, looking at the construction pit near Sloterdijk station, I catch myself talking with colleagues and clients as if we were all standing on a building site.
When I close my eyes I hear: “Getting there, Neighbour?” “Fine, Neighbour, a je to.” That is probably also because my children are so fond of those clips and I have been thoroughly brainwashed by them. But to now turn into a sort of Ed and Willem Bever in a meeting with Bob the Builder…
I propose we stop with the construction metaphors at the office. So no more building up, building out, or building on that construction language; let’s phase it out instead!


Masterclass Series 1 “CIO Office anno 2020” – (BE)DENKEN (THINK) – From vision to a practical, executable action plan

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

September 23, 2014

The contribution of information and IT to the success of the enterprise only keeps growing. Reason enough to organise the function responsible for it in the right way and, where necessary, change it drastically. These Masterclasses indicate which ‘players’ will become important, which questions need to be solved, and in which direction we should look for answers to the issues that will present themselves in the (near) future. Collaboration between the disciplines of Information Management, Architecture and Information Risk Management is unavoidable, and the decision makers in the information function hold the steering wheel of the enterprise more than ever to set the right course. The world is changing rapidly and profoundly. A number of developments that play a role here are:
Technology push;
Information push: big data, business intelligence, data-driven and personalising processes, internet of things;
Society push, or the crowd push: social media, the crowd replacing the expert.
All of these are changes strongly tied to information and IT, where deploying them correctly determines the success of an organisation.

Structure of the Masterclass Series

But how do you deal with all this? In a series of three Masterclasses, Quint shows, based on simple guidelines and concrete practical cases from different companies, how enterprises (can) formulate their business and ICT/IV strategy and turn it into successful realisation. So reserve every last Thursday of October, November and December in your calendar.

Thursday 30 October: (BE)DENKEN (THINK) – From vision to a practical, executable action plan

Organisation, information and IT are becoming more and more intertwined. The first Masterclass addresses the question of how we can define a manageable, practical vision, strategy and direction, and how we turn those into a realistic and achievable action plan.

Who is this Masterclass interesting for?
CIOs and IT Management
Information and Domain Managers
Risk and Security Managers
Architects
Business Managers involved in Innovation

Programme & Participation

We start at 14:00 and close around 17:00. During the session, two presentations will be given on the subject. Following those presentations, an interactive dialogue takes place. Participation is free of charge. If you want to register for several Masterclasses in this series, use the comments field on the registration form to pass on your wishes. Feel free to bring a colleague!

Other Masterclasses in this series:

Thursday 27 November: DURVEN (DARE) – You have to dare to execute strategy. Every organisation has a strategy or a mission. But who within the organisation is concretely working on realising that strategy? And how do we know that we are, and remain, working on the right things? Making the strategy concrete forces the different disciplines (information management, architecture, IRM) in the organisation to work together. The second Masterclass shows how the roles and activities of these disciplines work together to actually realise the strategy.

Thursday 18 December: DOEN (DO) – What does the business need and what does ICT have to (start to) deliver? New technology, collaboration in chains and a growing mix of internally and externally sourced services demand an organisation that can take the lead not only on process but also on content. Activities such as information management, information risk management and architecture will have to develop strongly. Besides a glimpse into the future development of these disciplines, this Masterclass looks at the steps you can already take tomorrow to keep your hands on the wheel in the future.

This new series of Masterclasses is devoted to trends, developments and challenges in our field and is based on Quint’s vision. Experienced consultants, involved managers and driven professionals enter into conversation and debate with each other to share visions and exchange experiences.

Register now


Reliability is often cited as one of the reasons some organizations are wary of the cloud.

Last week, Amazon, Rackspace and IBM had to “reboot” their clouds to deal with maintenance issues with the Xen hypervisor. Details were scarce but it was pretty quickly established that an unspecified vulnerability in the Xen hypervisor was the issue.

The vulnerability, discovered by researcher Jan Beulich, concerned the Xen hypervisor, an open-source technology that cloud service providers use to create and run virtual machines. If exploited, the vulnerability would have allowed malicious virtual machines to read data from or crash other virtual machines as well as the host server.

Not all providers had to reboot their clouds for upgrades or maintenance. Google and EMC VMware support the notion of live migration, which keeps internal changes invisible to users and avoids these Xen reboots, and Microsoft uses a (customized) Hyper-V, so they did not have that vulnerability.

It is interesting to see what “uptime” means in this context. In many reports of this nature, “uptime” doesn’t take into account “scheduled downtime.” And that could very well be the case here, as well. If one does a little bit of math:

  • 99.9% uptime is 8.77 hours of downtime per year
  • 99.99% uptime is 52.60 minutes of downtime per year
  • 99.999% uptime is 5.26 minutes of downtime per year
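
The arithmetic behind those figures assumes a year of roughly 8,766 hours (365.25 days); a quick, throwaway snippet to reproduce them:

    // Downtime per year for a given uptime percentage, assuming 365.25 days per year.
    Decimal hoursPerYear = 365.25 * 24;
    for (Decimal uptimePct : new List<Decimal>{99.9, 99.99, 99.999}) {
        Decimal downtimeHours = hoursPerYear * (100 - uptimePct) / 100;
        // 0.88 hours is roughly 52.6 minutes; 0.09 hours is roughly 5.3 minutes.
        System.debug(String.valueOf(uptimePct) + '% uptime = '
            + downtimeHours.setScale(2) + ' hours of downtime per year');
    }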

Although some users complained about the outage, most were complaining about (the lack of) the providers’ communications.

Cloud providers cannot be considered a black box anymore. As architects we need to know the limitations of the architectural components the provider uses, such as Xen. We need to know how often these kinds of reboots have occurred, and how the provider handles transparent maintenance.

We also need to consider the lines of communication. Providers often drop the ball here. People are often unhappy because they didn’t get much (or any) heads-up about the reboot, not about the reboot itself.

We should remember that outages and other disruptions are few and far between these days, so these rare events get extra media attention.

Cloud adoption! Do you have a strategy?

As conversations about the Cloud continue to focus on IT’s inability to adopt it (or the gap between IT and Business), organizations outside of IT continue their cloud adoption. While many of these efforts are considered Rogue or Shadow IT efforts and are frowned upon by the IT organization, they are simply a response to a wider problem.

The IT organization needs to adopt a cloud strategy, a holistic one is even better. However, are they really ready for this approach? There are still CIOs who are resisting cloud.

A large part of the problem is that most organizations are still in a much earlier state of adoption.

Common hurdles are

  1. The mindset: “critical systems may not reside outside your own data center”
  2. Differentiation: “our applications and services are true differentiators”
  3. Organizational changes: “moving to cloud changes how our processes and governance models behave”
  4. Vendor management: “we like the current vendors and their sales representative”

In order to develop a holistic cloud strategy, it is important to follow a well-defined process. Plan-Do-Check-Act fits just about any organization:

Assess: Provide a holistic assessment of the entire IT organization, applications and services that are business focused, not technology focused. Understand what is differentiating and what is not.

Roadmap: Use the options and recommendations from the assessment to provide a roadmap. The roadmap outlines priorities and valuations.

Execute: For many, it is important to start small because of the lower risk, and ramp up where possible.

Re-Assess & Adjust: As the IT organization starts down the path of execution, lessons are learned and adjustments are needed. Those adjustments will span technology, organization, process and governance. Continual improvement is key to staying in tune with the changing demands.

Today, cloud is leveraged in many ways from Software as a Service (SaaS) to Infrastructure as a Service (IaaS). However, it is most often a very fractured and disjointed approach to leveraging cloud. Yet, the very applications and services in play require that organizations consider a holistic approach in order to work most effectively.


Tiny banana, based on the article below

Are We Building Lasting Value or the World’s Most Expensive Bubble?

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

October 24, 2025

The AI infrastructure boom is not an abstract headline to me. It’s hitting close to home: here in the Netherlands, close to where I live, Microsoft has acquired 50 hectares for a new data center, with the CEO citing 300,000 customers who “want to store their data close by in a sovereign country.” This is at Agriport, where there are two datacenters already (Microsoft and Google), which were recently in the news because, oopsie, they consume more water and power than stated in their initial plans.

The local impact is pretty bad. This planned buildout is consuming so much power and water from the grid that it’s halting the construction of new businesses and even houses for young people in the surrounding villages. There is a report that mentions a striking change in an environmental vision (omgevingswet), which made extra building land available on which new data centers could be built. This was based on a map that, according to the researchers of Berenschot, was of poor quality and poorly substantiated, and whose origin could not be traced.

It’s made me step back and question the drivers behind this relentless push for more datacenters.

Is This the Right Path to Better AI?

The current building and investing frenzy is built on two beliefs. One, that adding more compute power will inevitably lead to better, perhaps even superintelligent, AI. And two, that the big tech companies cannot lose this AI war. OpenAI’s Sam Altman is reportedly aiming to create a factory that can produce a gigawatt of new AI infrastructure every week.

Yet, a growing group of AI researchers is questioning this “scaling hypothesis,” suggesting we may be hitting a wall and that other breakthroughs are needed. This skepticism is materializing elsewhere, too. Meta, a leader in this race, just announced it will lay off roughly 600 employees, with cuts impacting its AI infrastructure and Fundamental AI Research (FAIR) units.

The Bizarre Economics of the race to build more

In earlier decades, datacenter growth was a story of reusing the leftovers of older industries. Repurposing the power infrastructure left over from an earlier era, like old steel mills and aluminum plants.

Today, we’re doing it again. Hyperscalers are building out datacenters at a massive scale, competing for everything that goes into a datacenter, from land to skilled labor to copper wire and transformers, leading to plans to build their own power plants. The economics are astounding.

Why do I say that? Datacenters also depreciate, not as fast as the Nvidia chips, but still. Are we sure we are not building the new leftovers, like the old steel mills?

FOMO, Negative Returns, and the Bubble Question

We now have megacap tech stocks, once celebrated as “asset-light”, spending nearly all their cash flow on datacenters. Losing the AI race is seen as existential. This means all future cash flow, for years to come, may be funneled into projects with fabulously negative returns on capital. Lighting hundreds of billions of dollars on fire rather than losing out to a competitor, even when the ultimate prize is unclear.

If this turns out to be a bubble, what lasting thing gets built? The dot-com bubble left me with a memory of Nina Brink and the World Online drama. But if the AI bubble pops, 70% of the capital (the GPUs) will be worthless in 3 years. We’ll be left with overbuilt shells and power infrastructure. Perhaps the only lasting value will be a newly industrialized supply chain for, ehhh…

What’s your take? Are we building the next industrial revolution or the world’s most expensive, short-lived infrastructure?


Famous quote from Tom Hanks in the movie Apollo 13

“In space, there is no problem so bad that you cannot make it worse.”

Martijn Veldkamp

“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”

August 9, 2023

In one of his TED Talks, Chris Hadfield mentions this saying and sheds a light on how to deal with the complexity, the sheer pressure, of dealing with dangerous and scary situations.

Risk Management

In the astronaut business the space shuttle is a very complicated vehicle. It’s the most complicated flying machine ever built. All for a single purpose: to escape Earth’s gravity well, launch cargo and return safely.

For the astronauts and the people watching it, it is an amazing experience.

But NASA calculated the odds of a catastrophic event during the first five shuttle launches. It was one in nine. For later launches the odds got better, about one in 38 or so.

Why do we take that risk? Who would do something that dangerous?

The biggest risk is the missed chance

Next to the personal dreams and ambitions that astronauts have, space stations give us the opportunity to do experiments in zero gravity. We have a chance to learn what the substance of the universe is made of. We get to see Earth from a whole different perspective. (Maybe flat-earthers need a stay on board that space station, but that is a whole other topic.)

Everything that we do for the first time is hard. If we never do the impossible, how do we progress as a species? I think it is human nature, to improve, to explore, to do things never done before.

“If you always do what you always did,
you will always get what you always got.”

often attributed to Albert Einstein

We have athletes that perform ultra-triathlons. Strongmen and strongwomen who can lift 500 kg. And, closer to home, we have architects that help transform companies to be closer to their customers.

Do or do not, there is no try

Master Yoda from the movie The Empire Strikes Back, Lucasfilm (Disney)

If the problems seem insurmountable, tangled together, impossible to move forward on, we need a fresh perspective. A way to frame the problem in a different light. To come up with hypotheses to solve small parts. And then design a small experiment to test them. If it does not work, we go back to the drawing board. We are not trying to make the problem bigger.

Systems thinking still holds true: optimising or solving a small part of the problem does not optimise the whole system.