It’s never getting off the couch! The single greatest killer of innovation is not bad preparation but poor administration. Innovation projects are uniquely vulnerable to this because they are not simple IT upgrades; they are fundamental changes to business processes.
The Hype Trap: The 95% that fail are often “hype experiments.” They start with some flashy tool (like a generic AI chatbot) and go in search of a problem, rather than the other way around. They stall because they have no clear owner, no defined ROI, and no integration into the actual workflows where, you know, actual people do their jobs.

The 5% Success: The 5% that get traction ignore the marketing hype. They focus on the unglamorous, high-return areas like back-office automation. They succeed because they are domain-specific (e.g., an AI that only reads lease agreements) and deeply integrated into a specific workflow.
Just do it!
There’s a common belief that AI initiatives require perfectly clean, structured, in-shape data before you can even begin. This is a form of procrastination. Let’s start tomorrow! Data is the ultimate couch potato: it will never get in shape on its own.
Waiting for Perfect: Companies that wait for a perfect, company-wide data strategy will be waiting forever. As one report on why AI projects fail notes, “garbage in, garbage out” is still a primary obstacle, leading to projects getting stuck in endless data-wrangling phases.

The Start Now Approach: Successful teams adopt a pragmatic approach. They don’t wait. One manufacturing project, for example, saw a double-digit accuracy jump not from a better model, but by simply constraining the first version to SKUs that had at least 18 months of (imperfect) historical data. They started with the data they had, proved value, and built momentum from there.
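To make that kind of pragmatic scoping concrete, here’s a minimal sketch of what it could look like in Python, assuming a simple pandas table of order history. The file name and the columns sku and order_date are illustrative, not from the actual project:

```python
import pandas as pd

# Hypothetical order history; file and column names are illustrative.
orders = pd.read_csv("order_history.csv", parse_dates=["order_date"])

# Scope version 1 to SKUs with at least 18 months between first and last
# order, instead of waiting for a perfect, company-wide dataset.
history = orders.groupby("sku")["order_date"].agg(["min", "max"])
months_of_history = (history["max"] - history["min"]).dt.days / 30.44
eligible_skus = history[months_of_history >= 18].index

v1_training_data = orders[orders["sku"].isin(eligible_skus)]
print(f"v1 scope: {len(eligible_skus)} of {history.shape[0]} SKUs")
```

Imperfect data stays imperfect; the point is that the first version ships on the subset you can already trust.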
Innovation fails not from bad prep, but from hype and hesitation. Start with a real workflow, use the messy data you already have, and build momentum. The unsexy projects are the ones that will actually drive benefits.
I spend a lot of my time not just building systems but designing the flows of work that ripple through entire organizations, not just Salesforce. For every elegant customer journey we try to automate, there’s a parallel process demanding “evidence”, “compliance”, or “alignment”. This lack of trust is everywhere. It leads to mandatory fields nobody reads. Reports that exist only to justify more reports. Or beautiful dashboards that are never refreshed.
I had some beers with an old colleague of mine, and the conversation turned to how Salesforce, like any enterprise platform, is often the staging ground for bureaucratic theater. Performance red tape. The ultimate form of organised distrust.
Duplicate approvals because one team doesn’t trust another’s process.
Mandatory checkboxes that serve no analytical purpose.
Endless “alignment” decks uploaded to your DMS of choice, then instantly forgotten.
Traditionally, these frictions had limits: human bandwidth, cost of labor, and outright resistance. Nobody wants to spend hours building dashboards that nobody reads. Even consultants eventually roll their eyes.
But AI erases those limits. Ever had ChatGPT say no to you? I asked my son to have ChatGPT help him come up with a plan to stick a fly up his nose. Two prompts later, we had an action plan.
The guardrails are gone. Here’s where the mindset of a Salesforce architect diverges from the hype. My job isn’t just to implement AI. It’s to design friction intentionally. Why? Because constraints create meaning. Without them, we risk an endless regression of workflows that look important but achieve nothing. Management by AI checkbox.
An architect must ask:
Does this automation reduce a real human pain point, or just accelerate a bureaucratic one?
Who will read this dashboard? What decision will it actually influence?
What’s the minimum viable process that satisfies compliance without spawning digital theater?
Can we design AI to say no? To push back on pointless work instead of scaling it? (A sketch of that last idea follows below.)
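As a thought experiment, here’s what “AI that says no” could look like as an intake gate: a request for a new dashboard or automation gets rejected unless someone names the decision it informs, the person who will read it, and the pain it removes. Everything here (the class, the fields, the rules) is my own hypothetical illustration, not a real product:

```python
from dataclasses import dataclass

@dataclass
class AutomationRequest:
    """Hypothetical intake form for a new dashboard/automation request."""
    name: str
    decision_it_informs: str | None   # which decision does this influence?
    named_reader: str | None          # who will actually read or act on it?
    human_pain_point: str | None      # which real pain does it remove?

def triage(req: AutomationRequest) -> str:
    """Intentional friction: the system pushes back on bureaucratic theater."""
    missing = [label for label, value in [
        ("a decision it informs", req.decision_it_informs),
        ("a named reader", req.named_reader),
        ("a real human pain point", req.human_pain_point),
    ] if not value]
    if missing:
        return f"Rejected '{req.name}': please supply {', '.join(missing)}."
    return f"Approved '{req.name}' for the backlog."
```

The rules are trivially simple on purpose: the value isn’t the code, it’s forcing a human to answer the questions before anything gets scaled.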
The danger isn’t AI itself. It’s AI without governance, AI without discernment, AI unmoored from business outcomes.
Last Wednesday was a very foggy day to travel by car to the office. I noticed that a lot of people were either running all their fog lights or just their daytime lights, making them nearly invisible.
Ever driven through “thick” fog, only to realise that your car’s daytime running lights are on, but your full headlights aren’t? It happens more often than you think. In today’s world of automation, many of us assume everything is taken care of, until we hit a foggy patch. Then we discover we’re not as prepared as we thought. This unconscious reliance on automated systems can lull us into a false sense of security.
This is a lot like what happens in the world of enterprise architecture.
The Fog of Autopilot in Architecture
Just as drivers forget to turn on their headlights in the fog, we, too, sometimes find ourselves moving on autopilot in our architecture decisions. How often do we rely on processes that “just work” without stopping to verify if they’re the best approach for this situation? We fall into our favourite patterns, depending on the same tech stacks, the same vendors, or the same integration points, because they’ve worked before.
But in this application landscape where the unexpected happens regularly (new regulations, evolving customer needs, emerging technologies), running on autopilot can be risky. What happens when a critical decision needs to be made, and we realize we’ve left the headlights off?
Where Are You on Autopilot?
If you’re honest with yourself, where are you currently driving in “fog” mode? Is it in cloud adoption, choosing the best application for a certain workload, technical debt management, or returning to your favourite capability map? Perhaps it’s in your approach to security, assuming that what worked yesterday will work tomorrow.
I’ve seen and experienced how easy it is to coast along with what feels comfortable. But without stopping to switch on the right lights, and thus gaining visibility into potential pitfalls or opportunities, we risk making critical oversights that could cost our organisations time, money, or worse, customer trust.
Call to Action
What parts of your architecture are you running on autopilot? I’d love to hear your thoughts in the comments. Let’s share examples and learn from each other where we might be forgetting to “turn on the lights” in our architectural decisions.
As promised in my earlier blogs, I am writing a blog a month about Architecture, Governance, and Changing the Process or the Implementation. Last month was about technical debt; this month it is about going live successfully. Why? The relay race that the Dutch team lost due to a faulty handover got me thinking about software delivery: going live, handover moments, and risk mitigation. Next to training to sprint really fast, relay teams also train the handover extensively. And despite all that training, this time it went wrong. At an elite Olympic athlete level!
To be fair, there are many examples of handovers going wrong:
And nobody noticed or said anything?
Processes, tools and safety measures
Successful projects have certain elements and key traits in common. These traits consist of:
Mature, agreed-upon processes with KPIs and a feedback loop to standardize the handovers
Automation to support these processes
Safety measures for Murphy’s law (when even processes can’t save you)
The key principle is to not drown an organisation in red tape and make things more complicated than necessary. Like my first blog, “Simplify and then add lightness”. We need these processes to progress in a sustainable and foreseeable way towards a desirable outcome: going live with your Salesforce implementation.
These processes are there to safeguard the handovers, the very part of the Dutch relay race that made me think about our own relay runs and their associated risks.
Handovers
The main handovers are:
User → Product Owner → Business Requirement → User Story → Solution Approach → Deploy → User.
As you can see, it is a circle, and with the right team and tools it can be iterated in very short sprints.
User → Product Owner
“If I had asked people what they wanted, they would have said faster horses.”
Henry Ford
Okay, so there is no evidence that Ford ever said anything like these words, but they have been attributed to him so many times that he might as well have said them. I want to use that quote to show different methods of getting to an understanding of User Needs. The two extremes are innovating through tightly coupled customer feedback, or following visionaries who ignore customer input and instead rely on their vision of a better product.
Having no strong opinion on either approach, I still tend to be a bit more risk averse and like to have feedback as early as possible. This is perhaps not a handover in the true sense that you can influence as an architect, but getting a true sense of User Needs is one that is essential for your Salesforce project to succeed.
I still remember the discussion with a very passionate Product Owner: “We need a field named fldzkrglc for storing important data.” Diving deeper, we found it was a custom field in SAP that was derived from the previous mainframe implementation. So the requirements were basically 50 years old. Innovation?
Business Requirement → User Story
User Stories for Dummies
There are many ways the software industry has evolved. One of them is how we write down User Needs. A simple framework I use for validating User Stories is the 3C’s. A short recap:
Card essentially means the story is printed on a card with a unique number. There are many tools for supporting that.
Conversation is the discussion around the story, which basically reads “AS A … I WANT … SO THAT I …”. It’s a starting point for the team to get together and discuss what’s required.
Confirmation is essentially the acceptance criteria, which at a high level are the test criteria confirming that the story works as expected.
An often-used measure is the Definition of Ready (DoR). It is a working agreement between the team and the Product Owner on what readiness means, and it is a way to indicate that an item in the product backlog is ready to be worked on. (A small sketch of such a check follows below.)
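To make the 3C’s and the DoR tangible, here’s a minimal sketch of what an automated readiness check could look like. The class, the field names, and the rules are my own illustration of the idea, not a standard tool:

```python
import re
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """The 3C's: Card (unique number), Conversation (narrative), Confirmation (criteria)."""
    card_id: str                   # Card: unique number, e.g. "US-042"
    narrative: str                 # Conversation: "AS A ... I WANT ... SO THAT ..."
    acceptance_criteria: list[str] = field(default_factory=list)  # Confirmation

def definition_of_ready(story: UserStory) -> list[str]:
    """Return the list of DoR violations; an empty list means ready to work on."""
    problems = []
    if not story.card_id:
        problems.append("Card: story has no unique number")
    if not re.search(r"AS A .+ I WANT .+ SO THAT .+", story.narrative, re.IGNORECASE):
        problems.append("Conversation: narrative is not in 'AS A / I WANT / SO THAT' form")
    if not story.acceptance_criteria:
        problems.append("Confirmation: no acceptance criteria to test against")
    return problems
```

A check like this will never replace the actual conversation, but it catches the stories that were never ready to be handed over in the first place.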
As handovers and risks go, the quality and quantity of the user stories largely determine the greatness of the Salesforce implementation. Again, as an architect you can influence only so many things, but in order to bring innovation and move fast, User Stories are key.
User Story → Solution Approach
This is where, as an architect, you can have a solid impact. This is where your high-level architecture, solution direction, and day-to-day choices come together. This is your architecture handover moment, when you work together with the developers and create the high-level design based on the actually implemented code base. The group as a whole can help find logical flaws, previously wrong decisions, and tech debt. The architecture becomes a collaboration. As I wrote earlier, keep it simple and remember Gall’s law. It explains why you should strive for as few parts in your architecture as possible.
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
John Gall: General systemantics, an essay on how systems work, and especially how they fail, 1975
Next to keeping it simple, I also firmly believe that there should be a place to try out and experiment with the new technology that Salesforce brings. The earlier mentioned experimenting phase fits perfectly. Why only prototype the new business requirements? It is a great place to test out all the cool new technical things Salesforce offers, like SFDX, Packages, or even Einstein, and evaluate the value and impact they could have on your Salesforce Org.
Deployment
In any software development project, the riskiest point as perceived by the customer is always go-live time. It’s the first time that new features come into contact with the real production org. Ideally, during a deployment, nobody will be doing anything they haven’t done before. Improvisation should only be required if something completely unexpected happens. The best way to get the necessary experience is to deploy as often as possible.
“In software, when something is painful, the way to reduce the pain is to do it more frequently, not less.”
David Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
So establish a repeatable process for going live and perform it many times. This sounds easy, but remember: even at an Olympic relay race it still went wrong.
Salesforce Sandboxes and Scratch Orgs provide a target org to practice your deployments. They are meant for User Acceptance Tests, but also for making sure that everything will deploy successfully. They can also give developers the necessary experience and feedback from deploying their work while it’s in progress. So now that we have a target, we need tools to help manage the drudgery.
There are whole suites of tools specifically built to support the development team in this, from Gearset to Copado, and Bluecanvas to Flosum. There is a lot of choice; there are even teams that support the business with their own build toolset based on Salesforce DX. It is good practice to choose a tool that supports you and your go-live process and to automate as much as possible.
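To make “practice your deployments” concrete, here’s a minimal sketch of one rehearsal step: a check-only validation against a sandbox, wrapped in Python. It uses the older sfdx force:source:deploy syntax (newer CLI versions use sf project deploy start), and the org alias and manifest path are placeholders for your own setup:

```python
import subprocess
import sys

def validate_deployment(target_org: str, manifest: str = "manifest/package.xml") -> None:
    """Run a check-only deploy against a sandbox: a rehearsal of the handover."""
    cmd = [
        "sfdx", "force:source:deploy",
        "--checkonly",                  # validate without saving anything
        "--manifest", manifest,
        "--testlevel", "RunLocalTests",
        "--targetusername", target_org,
    ]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit("Validation failed: fix it in the sandbox, not in production.")

if __name__ == "__main__":
    validate_deployment("uat-sandbox")  # hypothetical org alias
```

Run it on every merge and the go-live itself becomes the hundredth time you’ve done the move, not the first.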
Safety measures
We have agreed-upon working processes, we measure the quality of handover moments, we have automated deployments with bought or homegrown tools. Now what?
Even Olympic athletes make mistakes, so what can we do with software and databases that is impossible in the physical world? Backups!
A lot of Salesforce deployments, especially for larger customers, tend to be fairly data driven. Next to business data such as Accounts, Contacts, and Orders, there is configured business rule data, for example with CPQ. Next to that, there is technical data or metadata that is meant for Trigger and Flow frameworks, internationalisation, and keeping local diversifications maintainable.
Deploying this data, or even making changes in a Production Org, calls for backups. A good practice is to have a complete Org Backup before you release.
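What could that habit look like in practice? Here’s a minimal sketch using the simple-salesforce Python library. A real “complete Org Backup” would use Salesforce’s own backup and export services; this only illustrates snapshotting before you release, and the credentials and object list are placeholders:

```python
import csv
import datetime
from simple_salesforce import Salesforce  # pip install simple-salesforce

# Placeholder credentials; use your own secure configuration in practice.
sf = Salesforce(username="you@example.com", password="...", security_token="...")

stamp = datetime.date.today().isoformat()
for sobject in ["Account", "Contact"]:  # extend with your CPQ and config objects
    records = sf.query_all(f"SELECT Id, Name FROM {sobject}")["records"]
    with open(f"backup_{sobject}_{stamp}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Id", "Name"])
        for rec in records:
            writer.writerow([rec["Id"], rec["Name"]])
```

Even a rough snapshot like this turns a botched data deployment from a disaster into an annoyance.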
Key takeaways?
Establish a process and start ‘training’ your handover moments
Automate your release process as much as possible and measure your progress
When a handover goes wrong have some safety measures in place
As the world relies more heavily on data as the basis for critical decision-making, it is vital that this data can be trusted. And that trust is the key issue here.
People (Data Scientists, Chief Innovation Officers) are looking for ways to automate using data. Automation translates to efficiency which translates to value. This automation trend has increased through advances in business intelligence, big data, the rise of IoT and the necessary cloud infrastructure.
So why do I raise this trust issue? Isn’t this perhaps solved by the industry-standard DMBOK? It describes the possible Data Quality Management processes.
Because data is vulnerable, not just to the breaches we hear about in the news, but to a much more subtle, potentially more destructive class of attack: an attack on data integrity. Data isn’t stolen but manipulated and changed.
Like the tech-savvy Staten Island high school student who studied advanced computer programming at an elite computer camp and used his skills to hack into a secure computer system and improve his scores.
Enter the Blockchain
A possible solution for assuring data integrity could be blockchain technology.
In a blockchain, time-stamped entries are made into an immutable, linear log of events that is replicated across the network. Each discrete entry, in addition to being time-stamped, is irreversible and can have a strong identity attached. So it becomes irrefutable who made the entry, and when. These time-stamped entries are then approved by a distributed group of validators according to a previously agreed-upon rule set.
Once an entry is confirmed according to this rule set, the entry is replicated and stored by every node in the network, eliminating single points of failure and ensuring data resilience and availability.
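A minimal sketch of that mechanism, stripped of the distributed validators, shows why manipulation is so hard to hide: every entry is time-stamped, carries an identity, and is hashed together with its predecessor, so changing one record (say, our grade-hacking student editing a score) breaks every hash after it. This is only an illustration of the integrity chain itself, not a real blockchain:

```python
import hashlib
import json
import time

def make_entry(data: dict, prev_hash: str, author: str) -> dict:
    """Create a time-stamped, identity-carrying entry linked to its predecessor."""
    entry = {
        "timestamp": time.time(),  # time-stamped
        "author": author,          # strong identity attached
        "data": data,
        "prev_hash": prev_hash,    # link to the previous entry
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and check every link; any edit breaks the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        if i > 0 and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_entry({"grade": "B+"}, prev_hash="0" * 64, author="teacher")]
chain.append(make_entry({"grade": "A-"}, chain[-1]["hash"], author="teacher"))
print(verify(chain))                  # True
chain[0]["data"]["grade"] = "A+"      # the student tampers with a record...
print(verify(chain))                  # False: the manipulation is visible
```

In a real blockchain the distributed validators and replication are what make the tampered chain impossible to quietly replace.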
Future
Because the promises of data integrity and security are so strong, new systems can be built to share blockchain-enforced data among organizations who may not trust each other. And once an ecosystem has shared data that everyone can trust in, new automation opportunities emerge.
Smart contracts are perhaps the next step. They make it possible for different parties to create automated processes across companies and perhaps industries. Blockchain could become an ecosystem for cross-industry workflows involving data from multiple parties. An entirely new class of loosely coupled integration applications could then be created.
“We are kept from our goals not by obstacles but by a clear path to lesser goals.”
I’m not sure who pointed me in the direction of Robert Brault, but I love that quote. It resonates with me on different levels. If I hear “low hanging fruit” one more time, I’m going to scream! Let’s just say my definition of “low hanging fruit” involves more planning and less bruising.
Forest for the trees?
When we talk about the complexities of implementing new systems and strategies, let’s say something with AI, we often focus on achieving immediate results. We need to show tangible benefits, now. With this pressure on the “lesser goals” there is a risk that we don’t see the forest because we’re focused on the trees. An unintended consequence is that we start steering toward the wrong outcome, and thus off course from our larger objectives.
For example: back to the simple act of washing dishes. As I mentioned before, our dishwasher broke down, and that reminded me that automation transforms the tasks, not the outcome. But clean tupperware is not my end goal. Nor is a clean house. My personal end goal is comfortable living. I don’t want to live in a showroom or museum (it’s also not possible with four cats and two dogs). The “lesser goal” of automating the cleaning of the dishes introduced a new set of tasks: correct loading, filter cleaning, and, if you’re in luck, cryptic error codes. It’s up to me to decide what I’d rather do: what is an efficient use of my time versus how much I hate doing it, and do I actually mind when I’m sitting very comfortably on my couch?
Now, let’s switch this up. Imagine an idealized environment with a Salesforce rollout, where you implement specific features or modules, for example automating lead capture.
These actions are aimed at achieving those “lesser goals”: more leads, faster follow-ups, improved efficiency in a specific area. You have all these kinds of Key Performance Indicators (KPIs) that you diligently track. There is an uptick in lead volume, a reduction in the average time to qualify a lead, or a faster completion rate for a specific workflow. This is the bright, reassuring signal that our “lesser goal” is being achieved.
Unforeseen side effects
Then there are the unforeseen side effects that might not be immediately obvious. These can hinder our progress towards our larger, strategic objectives. The normal thinking is: more leads, more qualified leads, more opportunities, more closed won. But there is also the risk of:
Data Quality Degradation: The ease of automated capture might lead to a flood of unqualified or duplicate leads, diluting the quality of your data and requiring significant cleanup efforts down the line.
User Frustration and Workarounds: A rigid process might not accommodate all cases, leading sales reps to develop inefficient workarounds (cough Excel cough) outside the system, undermining the very efficiency you aimed for.
Increased Reporting Burden, Diminished Insight: The focus on tracking the increased lead volume might lead to reports that no one has the time or expertise to analyze effectively, creating noise without genuine insight.
Silos Around Automated Processes: Teams might become overly reliant on the automated lead flow, neglecting cross-functional communication or losing sight of the bigger customer journey. This is like a localized concentration of “light” that doesn’t illuminate the entire tank.
This is where Brault’s quote hits home for me. Because we can fix all that: de-duplication, validation rules, lead assignments, data quality reports. As you can see, these unintended consequences quietly accumulate, and we end up with more busy work. If you become too focused on the easily measurable metrics, there is a risk that these very actions create new obstacles and thus divert us from our overarching strategic direction.
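To show how quickly that “fix all that” busy work piles up: even the simplest de-duplication pass is real code somebody now has to own and maintain. A minimal sketch, where the field names are illustrative rather than a real Salesforce schema:

```python
from collections import defaultdict

def find_duplicate_leads(leads: list[dict]) -> dict[str, list[dict]]:
    """Group leads by normalized email; groups with more than one entry are duplicates."""
    by_email = defaultdict(list)
    for lead in leads:
        key = lead.get("email", "").strip().lower()
        if key:
            by_email[key].append(lead)
    return {email: group for email, group in by_email.items() if len(group) > 1}

leads = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Ada L.", "email": "Ada@Example.com "},  # same person, messy automated capture
]
print(find_duplicate_leads(leads))
```

Ten lines today, a merge strategy tomorrow, an ownership discussion next quarter: this is exactly the quiet accumulation the quote warns about.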
Losing direction while chasing immediate targets
This risk of losing direction while chasing immediate targets is very real. Just look at the rapidly evolving landscape of Artificial Intelligence. Like our automated lead capture, there is an allure of ‘quick wins’. This ‘low-hanging fruit’ everyone seems to like can create a similar set of unintended consequences. As I discussed in my previous article, ‘T.A.N.S.T.A.A.F.L.’, this ‘free lunch’ of uncoordinated AI initiatives often comes with hidden costs: duplicated efforts, integration nightmares, and a lack of strategic alignment.
The solution, much like ensuring our Salesforce implementation truly serves our overarching business objectives, lies in adopting an AI Game Plan. This upfront investment in strategic thinking acts as our compass, maybe even our North Star. We all need to work out what comfortable living means for ourselves. This way we ensure that the individual ‘lesser goals’ we pursue with AI are not just shiny objects distracting us from the real destination.
A well-defined AI Game plan helps us anticipate and mitigate potential unintended consequences by:
Providing Strategic Alignment: Ensuring that every AI initiative, including those focused on immediate gains, is directly tied to our broader business goals. This helps us evaluate if the ‘clear path’ to a lesser AI goal is actually leading us towards our ultimate strategic vision.
Promoting Resource Optimization: Preventing the duplication of effort and the creation of isolated AI solutions that don’t integrate effectively, thus avoiding the ‘busy work’ of constantly patching disparate systems.
Establishing Data Governance: Implementing clear guidelines for data quality, security, and sharing, mitigating the risk of ‘data degradation’ and ensuring that the fuel for our AI initiatives is clean and reliable.
Encouraging Holistic Thinking: Fostering cross-functional collaboration and a shared understanding of the bigger picture, preventing the formation of ‘silos’ around individual AI applications.
Defining Measurement Beyond the Immediate: Establishing KPIs that not only track the success of the ‘lesser goals’ but also monitor for potential negative side effects and progress towards the overarching strategic objectives.
In our Salesforce example, we need to look beyond the initial increase in leads and consider the quality of those leads and the impact on the sales team’s workflow. And an AI Game Plan compels us to look beyond the initial promise of AI and consider its long-term impact on our people, processes, technology, and overall strategy.
Now, for a nice closure statement that ties everything together
Ultimately, the path to achieving our true goals is rarely a straight line of easily attainable milestones. It requires an awareness of the broader landscape and a willingness to look beyond the immediate allure of quick wins.
Whether it’s ensuring clean tupperware contributes to a comfortable life, or that an automated lead capture process genuinely fuels business growth, a strategic ‘game plan’, either for our household chores or our AI ambitions, acts as our guiding star. All the while making sure that the many trees we focus on are indeed leading us towards the right forest.