In large-scale enterprise IT, there is a constant, almost gravitational tension between two opposing forces.
On one side, you have the market. It demands speed and radical customer focus. It wants new features deployed yesterday. On the other side, you have Governance. It demands stability, compliance, and zero-risk operations. For years, the industry buzzword was “Agility.” The narrative was that if we just adopted enough scrum teams and microservices, the tension would vanish. But in regulated sectors, treating every system as a “move fast and break things” playground simply isn’t an option.
Strategic architecture isn’t about choosing between speed and stability. It is about designing a landscape where both can exist in harmony. The biggest mistake I see in transformation roadmaps is the attempt to apply a single methodology to the entire IT landscape.
If you treat your core transaction systems like a marketing app, you introduce unacceptable risk. Conversely, if you treat your customer-facing channels with the heaviness of a SAP S/4HANA migration, you will be disrupted by a startup before you finish your requirements document.
To simplify and modernize a complex application landscape, we have to recognize that distinct layers breathe at different rates. Call it speed layering.
Systems of Record, where change is slow, deliberate, and expensive (by design). We want friction here because the cost of error is too high.
Systems of Differentiation, where unique business processes live. It connects the core to the customer. We need flexibility here to configure new products, but we still need structure.
Systems of Innovation, where we test new customer journeys. If an idea here fails, it should fail fast and cheap without shaking the foundation.
The architect’s role shifts from being the “standards police” to being a city planner. We don’t just draw diagrams; we define zones. We tell the organization: “Here, in the innovation zone, you can use the newest tech and deploy daily. But here, in the core, we measure twice and cut once.”
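To make the zoning idea concrete, here is a minimal sketch. The system names, cadences, and review gates are invented for illustration, not a prescribed framework: it simply tags each system with a pace layer and the rules that apply there.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    RECORD = "system of record"
    DIFFERENTIATION = "system of differentiation"
    INNOVATION = "system of innovation"

@dataclass
class Zone:
    layer: Layer
    release_cadence: str   # how often change is allowed to ship
    review_gate: str       # how heavy the governance is

# Hypothetical zoning of a landscape; the systems and cadences are examples only.
ZONES = {
    "erp_core": Zone(Layer.RECORD, "quarterly", "change advisory board"),
    "pricing_engine": Zone(Layer.DIFFERENTIATION, "bi-weekly", "architecture review"),
    "campaign_app": Zone(Layer.INNOVATION, "daily", "peer review only"),
}

def allowed_to_deploy(system: str, cadence: str) -> bool:
    """A team may only deploy at (or slower than) the cadence of its zone."""
    order = ["daily", "bi-weekly", "quarterly"]
    return order.index(cadence) >= order.index(ZONES[system].release_cadence)

print(allowed_to_deploy("erp_core", "daily"))      # False: the core moves slowly by design
print(allowed_to_deploy("campaign_app", "daily"))  # True: the innovation zone can ship daily
```

The point is not the code but the explicitness: any team can look up which zone it is building in and which rules apply there.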
Implementing this strategy is rarely a technology problem; it’s a people challenge. It requires coaching stakeholders to understand that “slow” isn’t a dirty word, it’s a synonym for “reliable.” It requires mentoring solution architects to recognize which layer they are building in and to choose their tools accordingly.
When we get this right, we stop fighting the tension between Innovation and Stability. Instead, we use the solid foundation of the core to launch the rapid experiments of the future. We achieve a landscape that is calm at the center, but agile at the edge.
That is how you build an IT strategy that survives the long term.
I spend a lot of my time not just building systems but designing the flows of work that ripple through entire organizations, well beyond Salesforce. For every elegant customer journey we try to automate, there’s a parallel process demanding “evidence”, “compliance”, or “alignment”. This lack of trust is everywhere. It leads to mandatory fields nobody reads. Reports that exist only to justify more reports. Or beautiful dashboards that are never refreshed.
I had some beers with an old colleague of mine, and we agreed that Salesforce, like any enterprise platform, is often the staging ground for bureaucratic theater. Performative red tape. The ultimate form of organised distrust.
Duplicate approvals because one team doesn’t trust another’s process.
Mandatory checkboxes that serve no analytical purpose.
Endless “alignment” decks uploaded to your DMS of choice, then instantly forgotten.
Traditionally, these frictions had limits: human bandwidth, cost of labor, and outright resistance. Nobody wants to spend hours building dashboards that nobody reads. Even consultants eventually roll their eyes.
But AI erases those limits. Ever had ChatGPT say no to you? I asked my son to have ChatGPT help him come up with a plan to stick a fly up his nose. Two prompts later and we have an action plan.
The guardrails are gone. Here’s where the mindset of a Salesforce architect diverges from the hype. My job isn’t just to implement AI. It’s to design friction intentionally. Why? Because constraints create meaning. Without them, we risk an endless proliferation of workflows that look important but achieve nothing. Management by AI checkbox. (I’ll sketch what such deliberate friction could look like right after the questions below.)
An architect must ask:
Does this automation reduce a real human pain point, or just accelerate a bureaucratic one?
Who will read this dashboard? What decision will it actually influence?
What’s the minimum viable process that satisfies compliance without spawning digital theater?
Can we design AI to say no? To push back on pointless work instead of scaling it?
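As a thought experiment, that deliberate friction could be as blunt as an intake gate that refuses requests which cannot name a reader or a decision. The fields and wording below are invented for illustration, not a real product feature:

```python
from dataclasses import dataclass

@dataclass
class AutomationRequest:
    name: str
    pain_point: str           # the real human problem this removes
    decision_supported: str   # the decision the output will influence
    audience: list[str]       # who will actually read or act on the output

def intake_gate(req: AutomationRequest) -> str:
    """Deliberate friction: refuse requests that cannot name a reader or a decision."""
    if not req.audience:
        return f"Rejected '{req.name}': nobody is named as the reader of the output."
    if not req.decision_supported:
        return f"Rejected '{req.name}': no decision is influenced, this is digital theater."
    return f"Approved '{req.name}' for a time-boxed pilot."

print(intake_gate(AutomationRequest(
    name="weekly alignment deck generator",
    pain_point="",
    decision_supported="",
    audience=[],
)))
```

Whether the gate is a piece of code, a form, or simply a question asked in a meeting matters less than the fact that the friction exists on purpose.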
The danger isn’t AI itself. It’s AI without governance, AI without discernment, AI unmoored from business outcomes.
Last Wednesday was a very foggy day to travel by car to the office. I noticed that a lot of people were either running all their fog lights or just their daytime running lights, making them nearly invisible.
Ever driven through “thick” fog, only to realise that your car’s daytime running lights are on, but your full headlights aren’t? It happens more often than you think. In today’s world of automation, many of us assume everything is taken care of, until we hit a foggy patch. Then we discover we’re not as prepared as we thought. This unconscious reliance on automated systems can lull us into a false sense of security.
This is a lot like what happens in the world of enterprise architecture.
The Fog of Autopilot in Architecture
Just as drivers forget to turn on their headlights in the fog, we, too, sometimes find ourselves moving on autopilot in our architecture decisions. How often do we rely on processes that “just work” without stopping to verify if they’re the best approach for this situation? We fall into our favourite patterns, depending on the same tech stacks, the same vendors, or the same integration points, because they’ve worked before.
But in this application landscape where the unexpected happens regularly (new regulations, evolving customer needs, emerging technologies), running on autopilot can be risky. What happens when a critical decision needs to be made, and we realize we’ve left the headlights off?
Where Are You on Autopilot?
If you’re honest with yourself, where are you currently driving in “fog” mode? Is it in cloud adoption, choosing the best application for a certain workload, technical debt management, or returning to your favourite capability map? Perhaps it’s in your approach to security, assuming that what worked yesterday will work tomorrow.
I’ve seen and experienced how easy it is to coast along with what feels comfortable. But if we don’t stop to switch on the right lights and gain visibility into potential pitfalls or opportunities, we risk making critical oversights that could cost our organisations time, money, or, worse, customer trust.
Call to Action
What parts of your architecture are you running on autopilot? I’d love to hear your thoughts in the comments. Let’s share examples and learn from each other where we might be forgetting to “turn on the lights” in our architectural decisions.
As an organisation you need a plan to address the market disruption of Generative AI. You don’t need to build your own version of ChatGPT. But you do need a plan for how your organisation will deal with all the initiatives that will start. Otherwise, good luck with the conversation you will have when one of your CxOs comes back from some partner-paid conference stating that the company will go bankrupt if you don’t invest right now.
In this series of articles I felt the need to explore some of my current thinking on where Generative AI has its place.
Business Braveheart
In this ever-renewing push for the newest flavour of technology, the fusion of architecture, governance, and data governance stands as the cornerstone of reliability.
As organisations explore the complex realm of artificial intelligence, it becomes increasingly apparent that the success of implementing one of these LLMs (ChatGPT, Bard, or Bing AI) is deeply entwined with the quality, security, and integrity of the data they need and produce.
Effective AI governance is not just about fine-tuning algorithms or optimising your LLM models. It begins with the bedrock of quality.
Garbage in, garbage out
Feedback loop
It’s the quality, accuracy, and reliability of the input data that dictates the usefulness of AI’s output. Thus, a holistic approach needs a very strong foundation in data quality and its governance. And remember: the prompts that you use to get results are also data that needs to be governed. How else will you establish a feedback loop on effective usage of the tool?
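As a minimal sketch of what governing the prompts could look like in practice (the record fields and the usefulness rating are illustrative assumptions, not a standard), every prompt and its response could be stored as a governed record that feeds the feedback loop:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """One governed unit of LLM usage: the prompt is data, the output is data."""
    user: str
    use_case: str                 # which approved business use case this belongs to
    prompt: str
    response: str
    model: str
    rating: int | None = None     # feedback from the user: was the output useful?
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[PromptRecord] = []

def record_usage(rec: PromptRecord) -> None:
    audit_log.append(rec)

def feedback_loop(use_case: str) -> float | None:
    """Average usefulness rating per use case: the basis for deciding what to keep."""
    ratings = [r.rating for r in audit_log if r.use_case == use_case and r.rating is not None]
    return sum(ratings) / len(ratings) if ratings else None
```

Which fields you actually keep is a governance decision in itself: the prompt may contain personal data, so the log needs the same retention and access rules as any other data set.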
The Interdependence of Data Governance and AI Governance
Data governance, as I’ve stated in the previous blog posts, primarily concerns itself with the management, availability, integrity, usability, and security of an organisation’s data.
AI in any form, by its nature, operates as an extension of the data it is fed. Without a sturdy governance structure over the data that you produce, AI governance becomes a moot point. On another note, I’m still surprised nobody has come up with an AI that generates cute cat short clips for a YouTube channel. Wait, I’m on to something here…
Quality Data: The Lifeblood of AI
A key aspect is that the quality of data isn’t an isolated attribute but a collective responsibility of various departments within an organisation. We all know that, but where does the generated data sit?
In the past I wrote about Systems Thinking, and I still have to plot for myself where Generative AI sits. Is it like our imagination? Where do you master the data an LLM generates for you? Can I re-generate it reliably? What happens with newer generated outcomes? Are they better than the old ones? Is the generated response email owned by the Service Department or the AI team? These articles are as much for you as for me to fully grok where an LLM and its outcomes sit in the system.
Security and Ethical Implications
Privacy concerns, compliance with regulations, and ethical considerations in handling and processing data are pivotal components of data governance. As AI systems often deal with sensitive information, ensuring compliance with data protection regulations and ethical use of data becomes a critical component of AI governance. The same goes for the outputs. Where are they used or stored? How do these different data providers compare? The question that popped up in my head: within the Salesforce ecosystem we use a lot of Account data and have linked it with third-party providers. We enrich the data we have on the customer with Dun & Bradstreet information or, in the Netherlands, with the KVK register. What happens to the ‘authority score’ if we add Generative AI to the mix? We still have a lot to discover together.
Keep it simple
In short, because I have harped on it before, organisations should:
Establish Comprehensive Data Governance Frameworks: Institute clear policies for data ownership, stewardship, and data management processes. This not only fosters quality but also ensures accountability and responsibility in data handling.
Promote Cross-Functional Collaboration: Break down silos and encourage collaboration between departments. Not just good for data quality, but for many more aspects in life.
Leverage Automation for Data Quality Assurance: Harness automation tools to identify anomalies and inconsistencies within data, ensuring high-quality inputs for AI models. Ever done a large migration from one system to another? Right, automation for the win! (See the sketch after this list.)
Continuously Monitor and Improve Data Governance: Implement systems for ongoing monitoring of data quality. We have a Dutch expression that roughly translates to “the polluter pays”. Bad data has so many downstream effects that I’m almost tempted to advise a monthly blame-and-shame highlight list. Let’s forget about that for now. I do, however, want to stress a carrot-and-stick approach.
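To make the automation point tangible, here is a minimal sketch of the kind of anomaly check that could run over account data before it feeds an AI model. The field names and records are invented for illustration; no specific tool is implied.

```python
import re

# Hypothetical extract of account records; field names are examples only.
accounts = [
    {"id": "001A", "name": "Acme BV", "email": "sales@acme.example", "kvk_number": "12345678"},
    {"id": "001B", "name": "Acme BV", "email": "sales@acme.example", "kvk_number": "12345678"},  # duplicate
    {"id": "001C", "name": "Globex",  "email": "not-an-email",       "kvk_number": ""},          # anomalies
]

def data_quality_report(records: list[dict]) -> dict:
    """Flag duplicates, missing identifiers, and malformed emails."""
    seen, duplicates, issues = set(), [], []
    for rec in records:
        key = (rec["name"].lower(), rec["kvk_number"])
        if key in seen:
            duplicates.append(rec["id"])
        seen.add(key)
        if not rec["kvk_number"]:
            issues.append((rec["id"], "missing KvK number"))
        if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", rec["email"]):
            issues.append((rec["id"], "malformed email"))
    return {"duplicates": duplicates, "issues": issues}

print(data_quality_report(accounts))
```

None of this is novel; it is the same discipline you would apply in a large migration, just run continuously.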
Conclusion
In my subsequent articles, I’ll try to delve deeper into the practical strategies and steps organisations can adopt and make it more Salesforcy.
AI governance is crucial to strike a balance between harnessing the promised benefits of AI technology and safeguarding against its potential risks and negative consequences.
It’s a very interesting and still emerging problem: how to effectively govern the creation, deployment, and management of these new AI services, end to end.
Heavily regulated industries, such as banking, or public services such as tax agencies, are legally required to provide a level of transparency on how they operate their IT. And this is also true for their AI models. Failure to offer this transparency can lead to severe penalties. AI models, like the algorithms that preceded them, can no longer function as a mystery.
The funny thing is that AI and its governance is a hot topic, but Data Governance…? That ranges from boring to “Wasn’t that solved already?”.
It’s still all about that data
The real challenge is always the data. If you think about it, AI is about what data you trained it on, what data you apply it to, and when. So AI Governance is not just about the algorithms and the models. It starts with data.
It’s no longer enough to secure your data and say you will comply with privacy laws. You have to be verifiably in control of the data you are using, both going into and coming out of the AI models you rely on.
Its source, its provenance, and its ownership. What rights do you have? What rights does the provider of that data have? We have not even begun to scratch the surface of how to enforce those rights.
When planning to use AI, ensure your data is accurate, complete, and of high quality
And this starts at the collection of data. Does it provide accurate information? Am I missing data? Is the source reliable, timely, and of high quality? How will we measure and assess whether the data collection is working as intended?
Human error is one of the easiest ways to lose data integrity. Well, you can call it human error, but it often boils down to whether the system is set up in a way that makes it sensible to enter all that data here and now. If users are not willing to enter all the necessary data, your data sets will never reach a high enough quality.
Systems integrating with one another are also a great way to lose data integrity. The moment systems have different implementations of concepts like Customer or Order and their lifecycles, you will have a hard time combining that data.
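A trivial sketch of what that mismatch looks like in practice (system names and fields invented for illustration): two systems both claim to know the “customer”, but disagree on identity and lifecycle, so naive merging silently corrupts the combined data set.

```python
# System A (CRM) and System B (billing) both hold "customers", but with different
# keys, name formats, and lifecycle states. The fields below are invented examples.
crm_customer = {"crm_id": "C-1001", "name": "Jansen, Anna", "status": "Active"}
billing_customer = {"account_no": 88231, "name": "A. Jansen", "lifecycle": "suspended"}

def naive_merge(a: dict, b: dict) -> dict:
    """Combining records without an agreed-upon definition of 'Customer'."""
    merged = {**a, **b}
    # Which state wins? "Active" and "suspended" describe different lifecycles,
    # so the merged record is internally inconsistent: integrity is already lost.
    return merged

print(naive_merge(crm_customer, billing_customer))
```

Agreeing on what a Customer is, and which system masters which part of its lifecycle, has to happen before the integration, not after.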
In order to successfully introduce AI in your business, you have to be in control of your data. That data is created throughout your application landscape. From the earlier iterations of data warehouses to the rise to prominence of data scientists, it’s all about data: its integrity, security, and quality. That hasn’t changed.
How you frame your problem will influence how you solve it
According to The Systems Thinker, if a problem meets these four criteria, it could benefit from a systems thinking approach:
The issue is important
The problem is recurring
The problem is familiar and has known history
People have unsuccessfully tried to solve the problem
We need to make sense of the complexity of the application landscape by looking at it in terms of wholes and relationships rather than by splitting it down into its parts. Then the flow of data throughout will start to make sense. And only then can we start addressing the lack of Data Quality, Data Security and Data Integrity.
And who knows, if that all is solved we can start thinking about where it actually makes sense to introduce AI.
As promised in my earlier blogs, I am writing a blog a month about Architecture, Governance, and changing the process or the implementation. Last month was about technical debt. This month it is about going live successfully. Why? The relay race that the Dutch team lost due to a faulty handover got me thinking about software delivery: going live, handover moments, and risk mitigation. Next to training to sprint really fast, they also train the handover extensively. And despite all that training, this time it went wrong. At an elite Olympic athlete level!
To be fair, there are many examples of handovers going wrong.
And nobody noticed or said anything?
Processes, tools and safety measures
Successful projects have certain elements and key traits in common. These traits consist of:
Mature, agreed-upon processes with KPIs and a feedback loop to standardize the handovers
Automation to support these processes
Safety measures for Murphy’s law (when even processes can’t save you)
The key principle is not to drown an organisation in red tape or make things more complicated than necessary. As in my first blog, “Simplify and then add lightness”. We need these processes to progress in a sustainable and foreseeable way towards a desirable outcome: go live with your Salesforce implementation.
These processes are there to safeguard the handovers, the part of the Dutch relay race that made me think about our own relay runs and their associated risks.
Handovers
The main handovers are:
User → Product Owner → Business Requirement → User Story → Solution Approach → Deploy → User.
As you can see it is a circle and with the right team and tools it can be iterated in very short sprints.
User → Product Owner
“If I had asked people what they wanted, they would have said faster horses.”
Henry Ford
Okay, so there is no evidence that Ford ever said anything like these words, but they have been attributed to him so many times that he might as well have said them. I want to use the quote to show two different methods for getting to an understanding of user needs: innovating through tightly coupled customer feedback, or through visionaries who ignore customer input and instead rely on their own vision for a better product.
Having no strong opinion on either approach, I still tend to be a bit more risk-averse and like to have feedback as early as possible. This is perhaps not a handover in the true sense that you can influence as an architect, but getting a true sense of user needs might be the one that is essential for your Salesforce project to succeed.
I still remember the discussion with a very passionate Product Owner: we need a field named fldzkrglc for storing important data. Diving deeper, we found it was a custom field in SAP that was derived from the previous mainframe implementation. So that basically meant the requirements were 50 years old. Innovation?
Business Requirement → User Story
User Stories for Dummies
There are many ways the software industry has evolved. One of them is around how to write down user needs. A simple framework I use for validating User Stories is the 3Cs. Short recap:
Card essentially means the story is captured, for example printed on a card, with a unique number. There are many tools to support that.
Conversation is the discussion around the story, which basically says “AS A … I WANT … SO THAT …”. It’s the starting point for the team to get together and discuss what’s required.
Confirmation is essentially the acceptance criteria, which at a high level are the test criteria confirming that the story works as expected.
An often-used measure is the Definition of Ready (DoR). It is a working agreement between the team and the Product Owner on what readiness means, and a way to indicate that an item in the product backlog is ready to be worked on.
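As an illustration (the criteria below are an invented example a team might agree on, not a standard DoR), a Definition of Ready can be treated as an explicit checklist that a backlog item must pass before the team picks it up:

```python
# Hypothetical Definition of Ready check; the criteria are examples a team might agree on.
story = {
    "card": "US-1042",
    "conversation": "As a service agent I want suggested replies so that I answer faster",
    "confirmation": ["Suggested reply appears within 2 seconds", "Agent can edit before sending"],
    "estimated": True,
    "dependencies_cleared": False,
}

def definition_of_ready(item: dict) -> list[str]:
    """Return the reasons an item is not yet ready; an empty list means 'ready'."""
    problems = []
    if not item.get("card"):
        problems.append("no card / unique identifier")
    if not item.get("conversation"):
        problems.append("no user story narrative to discuss")
    if not item.get("confirmation"):
        problems.append("no acceptance criteria")
    if not item.get("estimated"):
        problems.append("not estimated by the team")
    if not item.get("dependencies_cleared"):
        problems.append("open dependencies")
    return problems

print(definition_of_ready(story))   # ['open dependencies']
```

The exact criteria matter less than the fact that the team and the Product Owner agreed on them together.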
As handovers and risks go, the quality and quantity of the user stories largely determine the greatness of the Salesforce implementation. Again, as an architect you can influence only so many things, but in order to bring innovation and move fast, user stories are key.
User Story → Solution Approach
This is where, as an architect, you can have a solid impact. This is where your high-level architecture, solution direction, and day-to-day choices come together. This is your architecture handover moment, when you work together with the developers and create the high-level design based on the actually implemented code base. The group as a whole can help find logical flaws, earlier wrong decisions, and tech debt. The architecture becomes a collaboration. As I wrote earlier, keep it simple and remember Gall’s law: it explains why you should strive for as few parts in your architecture as possible.
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
John Gall, General Systemantics: An Essay on How Systems Work, and Especially How They Fail, 1975
Next to keeping it simple, I also firmly believe that there should be a place to try out and experiment with the new technology that Salesforce brings. The earlier-mentioned experimenting phase fits perfectly. Why only prototype the new business requirements? It is a great place to test out all the cool new technical things Salesforce offers, like SFDX, packages, or even Einstein, and evaluate the value and impact they could have on your Salesforce org.
Deployment
In any software development project, the riskiest point as perceived by the customer is always go-live time. It’s the first time that new features come into contact with the real production org. Ideally, during a deployment, nobody will be doing anything they haven’t done before. Improvisation should only be required if something completely unexpected happens. The best way to get the necessary experience is to deploy as often as possible.
“In software, when something is painful, the way to reduce the pain is to do it more frequently, not less.”
David Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
So establish a repeatable process for going live and perform it many times. This sounds easy but remember that during an Olympic relay race it still went wrong.
Salesforce sandboxes and scratch orgs provide a target org to practice your deployments. They are meant for user acceptance tests, but also for making sure that everything will deploy successfully. They also give developers the necessary experience and feedback from deploying their work while it’s in progress. So now that we have a target, we need tools to help manage the drudgery.
There are whole suites of tools specifically built to support the development team in this, from Gearset to Copado, and Blue Canvas to Flosum. There is a lot out there; there are even teams that support the business with their own build toolset based on Salesforce DX. It is good practice to choose a tool that supports you and your go-live process and to automate as much as possible.
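Whatever tool you pick, the underlying idea is the same: rehearse the go-live against a sandbox before touching production. A minimal sketch, assuming the current sf CLI is installed, a manifest exists at manifest/package.xml, and an org alias uat-sandbox has been authorised; exact commands and flags may differ per CLI version and per toolchain:

```python
import subprocess

# Validate-only deployment against a sandbox: a rehearsal of the real go-live.
# The org alias "uat-sandbox" and the manifest path are assumptions for this example.
result = subprocess.run(
    ["sf", "project", "deploy", "validate",
     "--manifest", "manifest/package.xml",
     "--target-org", "uat-sandbox"],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    raise SystemExit("Validation failed: fix the errors before scheduling the go-live.")
```

The same rehearsal can run in a pipeline on every merge, so that by the time the real go-live arrives, nobody is doing anything they haven’t done before.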
Safety measures
We have agreed-upon working processes, we measure the quality of our handover moments, and we have automated deployments with bought or homegrown tools. Now what?
Even Olympic athletes make mistakes, so what can we do with software and databases that is impossible in the physical world? Backups!
A lot of Salesforce implementations, especially for larger customers, tend to be fairly data-driven. Next to business data such as Accounts, Contacts, and Orders, there is configured business-rule data, for example with CPQ. Next to that there is technical data or metadata for Trigger and Flow frameworks, internationalisation, and keeping local variations maintainable.
Deploying this data or even making changes in a Production Org is asking for Backups. A good practice is to have a complete Org Backup before you release.
Key takeaways?
Establish a process and start ‘training’ your handover moments
Automate your release process as much as possible and measure your progress
When a handover goes wrong have some safety measures in place
The blockchain is originally the technology behind Bitcoin. It is essentially a ledger. Ledgers are the basis of much of the information technology we rely on daily and are nothing more than lists in which all data and its changes are recorded.
Other registers operate in a similar manner. Identity information, for example, is carefully kept in identity repositories. Other examples of data held in databases are the register for Dutch domain names (SIDN), the electronic patient file (EPD), or the register for patents (RVO). These central databases have a large role in our society. The security of these systems is crucial. As I said in a previous post (Blockchain could solve Data Integrity problems), data is vulnerable. It is not desirable that information in these systems is manipulated in any way.
These central databases and ledgers require a certain sense of trust and confidence that the data is properly maintained and accessible to the right stakeholders. This trust is not just about granting or refusing access, but also about the information still being there the next day. This requires a full set of management and reporting around the operations surrounding such a database.
The blockchain is based upon a different paradigm. Simply put, the blockchain is a distributed database, where every unit of transaction contains its own transaction history. It consists of blocks of timestamped transactions where each block contains the hash (basically, a fingerprint) of the previous block in the chain. Thus the name blockchain. For a great 2-minute introduction, take a look at this video on the blockchain from the World Economic Forum.
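For intuition, here is a toy sketch of the idea (nothing like a real blockchain’s consensus or networking): each block stores the hash of the block before it, so tampering with any earlier entry breaks every hash that follows.

```python
import hashlib, json, time

def make_block(data: dict, previous_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# A tiny ledger: each block is chained to its predecessor by hash.
genesis = make_block({"note": "genesis"}, previous_hash="0" * 64)
block_1 = make_block({"from": "alice", "to": "bob", "amount": 10}, genesis["hash"])
block_2 = make_block({"from": "bob", "to": "carol", "amount": 4}, block_1["hash"])

def chain_is_valid(chain: list[dict]) -> bool:
    """Recompute each link: any edit to an earlier block invalidates the rest."""
    for prev, curr in zip(chain, chain[1:]):
        expected = hashlib.sha256(json.dumps(
            {k: prev[k] for k in ("timestamp", "data", "previous_hash")},
            sort_keys=True).encode()).hexdigest()
        if curr["previous_hash"] != expected or prev["hash"] != expected:
            return False
    return True

chain = [genesis, block_1, block_2]
print(chain_is_valid(chain))          # True
genesis["data"]["note"] = "tampered"  # rewrite history...
print(chain_is_valid(chain))          # False: the chain detects it
```

Real blockchains add distributed consensus and proof-of-work or proof-of-stake on top, but the chained hashes are what make the ledger tamper-evident.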
Next wave
There are now countless organisations running applications on blockchain technology, from banking to transportation to employment. The next wave where blockchain technology can be transformative is social organization and the smarter use of resources.
A lot of people are quite optimistic about the social potential of blockchain applications and see blockchain technology as a way to decentralize and formalize trust, yielding great potential for new and larger forms of social organization. It allows transactions to be made reliably but without third parties, which is also why it could transform not just money but other forms of social organization, such as voting, property, or work.
Blockchain could be more like a society-shifting technology than just an application of technology. The blockchain could be seen as a general purpose technology, one that might fundamentally alter society, economy, and culture, like the steam engine, electricity, and the internet have done before.
My point here is not necessarily that the blockchain will become the new internet or the steam engine; it is simply too early to tell. What is certain, though, is that safe and viable applications of blockchain technology will only come about through iteration after iteration. The emergence of technology takes time.
A team of cryptographers and developers wants to create a website where anyone can sell data sets to the highest bidder. “You’ll hate it” is the slogan of the service, which is accessible via Tor. Payments are made via Bitcoin.
Anyone who wants to leak a file to the highest bidder must upload it to Slur, a marketplace for data. There are no restrictions on the type of data that is offered or the motives of the seller, says spokesman Thom Lauret of U99, the group of cryptographers and developers behind the website. The aim of the site is to “subvert and destabilize the established order”.
The website expects stolen databases, source code for proprietary software, zero-day exploits and other confidential documents, as well as “unflattering” pictures and videos of celebrities. Only the highest bidder will get the data, and may then choose to release it or keep it hidden. Large companies may be able to deposit money to keep leaks out of the public eye. To counter this, the website allows users to pool their bids in a form of crowdsourcing, creating a larger bidding deposit.
Slur.io ensures that “whistleblowers” remain completely anonymous and are compensated. “Slur introduces a balanced system with the material interests of whistleblowers protected in exchange for the risks they take,” said spokesman Lauret. Datasets can only be offered once.
To prevent false claims being made about the content of the data, the buyer can see the data before the seller gets the money. If the buyer is not happy with the content, they can start an arbitration in which other members of the community vote on the content. If they agree with the buyer, the buyer gets their money back.
Payments are made via Bitcoin and the site will only be accessible via Tor to keep governments out. The developers do not expect to be targeted by the government, because the source code would fall under free speech and they do not claim to benefit from data that is sold on the site. The question is whether the American government agrees; the site is now based in San Francisco.
The developers of the website hope to get public money to pay for the development of the platform. In April, a beta version of the site was to open, with a full release to follow in July.
The contribution of information and IT to the success of the enterprise keeps growing. Reason enough to organise the function responsible for it properly and, where necessary, to change it drastically. These Masterclasses show which ‘players’ will become important, which questions need to be solved, and in which direction we should look for the answers to the questions that will arise in the (near) future. Collaboration between the disciplines of Information Management, Architecture, and Information Risk Management is unavoidable, and the decision-makers in the information function hold the wheel of the enterprise more than ever to steer the right course. The world is changing fast and profoundly. Some of the developments that play a role here are:
Technology push;
Information push: big data, business intelligence, data-driven and personalising processes, internet of things;
Society push, or the crowd push: social media, the crowd replacing the expert.
All of these are changes strongly tied to information and IT, where deploying them correctly determines the success of an organisation.
Setup of the Masterclass Series
But how do you deal with all this? In a series of three Masterclasses, Quint uses simple guidelines and concrete practical cases from different companies to show how enterprises can formulate their business and ICT/IV strategy and turn it into successful execution. So reserve the last Thursday of October, November, and December in your calendar now.
Thursday 30 October – (BE)DENKEN: From vision to a practically executable action plan
Organisation, information, and IT are becoming ever more intertwined. The first Masterclass addresses the question of how we can define a workable and practical vision, strategy, and direction, and how we turn those into a realistic and achievable action plan.
Who is this Masterclass interesting for?
CIOs and IT management
Information and domain managers
Risk and security managers
Architects
Business managers involved in innovation
Programme & Participation
We start at 14:00 and wrap up around 17:00. During the session, two presentations will be given on the topic, followed by an interactive dialogue. Participation is free of charge. If you want to register for several Masterclasses in this series, use the comments field of the registration form to let us know your preferences. Feel free to bring a colleague!
Other Masterclasses in this series:
Thursday 27 November – DURVEN: You have to dare to execute strategy. Every organisation has a strategy or a mission. But who within the organisation is concretely working on realising that strategy? And how do we know we are, and remain, working on the right things? Making the strategy concrete forces the different disciplines (information management, architecture, IRM) in the organisation to work together. The second Masterclass shows how the roles and activities of these disciplines work together to actually realise the strategy.
Thursday 18 December – DOEN: What does the business need and what will ICT have to deliver? New technology, collaboration in chains, and an increasing mix of internally and externally sourced services demand an organisation that can take the lead not only on process but also on content. Activities such as information management, information risk management, and architecture will have to develop strongly. Besides a glimpse into the future development of these disciplines, this Masterclass looks at the steps you can already take tomorrow to keep your hands on the wheel in the future.
This new series of Masterclasses is devoted to trends, developments, and challenges in our field and is based on Quint’s vision. Experienced consultants, committed managers, and driven professionals engage in conversation and debate to share views and exchange experiences.
Register now
“We are kept from our goals not by obstacles but by a clear path to lesser goals.”
I’m not sure who pointed me in the direction of Robert Brault, but I love that quote. It resonates with me on different levels. If I hear “low hanging fruit” one more time, I’m going to scream! Let’s just say my definition of “low hanging fruit” involves more planning and less bruising.
Forest for the trees?
When we talk about the complexities of implementing new systems and strategies, say something with AI, we often focus on achieving immediate results. We need to show tangible benefits, now. With this pressure on the “lesser goals” there is a risk that we don’t see the forest because we’re focussed on the trees. An unintended consequence is that we start steering toward the wrong outcome, and thus drift off course from our larger objectives.
For example, back to the simple act of washing dishes. As I told before, our dishwasher broke down, and that reminded me that automation transforms the tasks, not the outcome. But clean tupperware is not my end goal. Nor is a clean house. My personal end goal is comfortable living. I don’t want to live in a showroom or a museum (it’s also not possible with four cats and two dogs). The “lesser goal” of automating the cleaning of the dishes introduced a new set of tasks: correct loading, filter cleaning, and, with some luck, cryptic error codes. It’s up to me to decide what I’d rather do. What is an efficient use of my time versus how much do I hate doing it, and do I actually mind when I’m sitting very comfortably on my couch?
Now, let’s switch this up. Imagine your idealized environment with a Salesforce rollout, where you implement specific features or modules, for example automating lead capture.
These actions are aimed at achieving those “lesser goals”: more leads, faster follow-ups, improved efficiency in a specific area. You have all kinds of Key Performance Indicators (KPIs) that you diligently track: an uptick in lead volume, a reduction in the average time to qualify a lead, or a faster completion rate for a specific workflow. This is the bright, reassuring signal that our “lesser goal” is being achieved.
Unforeseen side effects
There are unforeseen side effects that might not be immediately obvious, and they can hinder our progress towards our larger, strategic objectives. The normal thinking is: more leads, more qualified leads, more opportunities, more closed won. But there is also the risk of:
Data Quality Degradation: The ease of automated capture might lead to a flood of unqualified or duplicate leads, diluting the quality of your data and requiring significant cleanup efforts down the line. (A monitoring sketch follows this list.)
User Frustration and Workarounds: A rigid process might not accommodate all cases, leading sales reps to develop inefficient workarounds (cough Excel cough) outside the system, undermining the very efficiency you aimed for.
Increased Reporting Burden, Diminished Insight: The focus on tracking the increased lead volume might lead to reports that no one has the time or expertise to analyze effectively, creating noise without genuine insight.
Silos Around Automated Processes: Teams might become overly reliant on the automated lead flow, neglecting cross-functional communication or losing sight of the bigger customer journey. This is like a localized concentration of “light” that doesn’t illuminate the entire tank.
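One way to keep that first side effect honest is to track a counter-metric next to the lead-volume KPI. A minimal sketch, with invented field names and a hard-coded extract standing in for a real report or API query:

```python
# Hypothetical lead extract; in a real org this would come from a report or an API query.
leads = [
    {"email": "jan@example.com", "source": "webform"},
    {"email": "jan@example.com", "source": "webform"},   # duplicate capture
    {"email": "mia@example.com", "source": "event"},
    {"email": "",                "source": "webform"},   # unqualified: no contact info
]

def lead_health(records: list[dict]) -> dict:
    emails = [r["email"] for r in records if r["email"]]
    duplicate_rate = 1 - len(set(emails)) / len(emails) if emails else 0.0
    missing_rate = sum(1 for r in records if not r["email"]) / len(records)
    return {
        "lead_volume": len(records),            # the KPI everyone celebrates
        "duplicate_rate": round(duplicate_rate, 2),
        "missing_contact_rate": round(missing_rate, 2),
    }

print(lead_health(leads))
# A rising volume with a rising duplicate_rate is busy work, not growth.
```

The volume KPI alone would have called this a success.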
This is where Brault’s quote hits home for me. Because we can fix all of that: de-duplication, validation rules, lead assignments, data quality reports. As you can see, these unintended consequences quietly accumulate and create more busy work. If you become too focused on the easily measurable metrics, there is a risk that these very actions are creating new obstacles and diverting us from our overarching strategic direction.
Losing direction while chasing immediate targets
This risk of losing direction while chasing immediate targets is very real. Just look at the rapidly evolving landscape of Artificial Intelligence. Like our automated lead capture, there is an allure of ‘quick wins’. This ‘low-hanging fruit’ everyone seems to like can create a similar set of unintended consequences. As I discussed in my previous article, ‘T.A.N.S.T.A.A.F.L.’, this ‘free lunch’ of uncoordinated AI initiatives often comes with hidden costs: duplicated efforts, integration nightmares, and a lack of strategic alignment.
The solution, much like ensuring our Salesforce implementation truly serves our overarching business objectives, lies in adopting an AI Game plan. This upfront investment in strategic thinking acts as our compass, maybe even our North Star. We all need to work out what comfortable living means for ourselves. This way we ensure that the individual ‘lesser goals’ we pursue with AI are not just shiny objects distracting us from the real destination.
A well-defined AI Game plan helps us anticipate and mitigate potential unintended consequences by:
Providing Strategic Alignment: Ensuring that every AI initiative, including those focused on immediate gains, is directly tied to our broader business goals. This helps us evaluate if the ‘clear path’ to a lesser AI goal is actually leading us towards our ultimate strategic vision.
Promoting Resource Optimization: Preventing the duplication of effort and the creation of isolated AI solutions that don’t integrate effectively, thus avoiding the ‘busy work’ of constantly patching disparate systems.
Establishing Data Governance: Implementing clear guidelines for data quality, security, and sharing, mitigating the risk of ‘data degradation’ and ensuring that the fuel for our AI initiatives is clean and reliable.
Encouraging Holistic Thinking: Fostering cross-functional collaboration and a shared understanding of the bigger picture, preventing the formation of ‘silos’ around individual AI applications.
Defining Measurement Beyond the Immediate: Establishing KPIs that not only track the success of the ‘lesser goals’ but also monitor for potential negative side effects and progress towards the overarching strategic objectives.
In our Salesforce example, we need to look beyond the initial increase in leads and consider the quality of those leads and the impact on the sales team’s workflow. Likewise, an AI Game plan compels us to look beyond the initial promise of AI and consider its long-term impact on our people, processes, technology, and overall strategy.
Now, for a nice closure statement that ties everything together
Ultimately, the path to achieving our true goals is rarely a straight line of easily attainable milestones. It requires an awareness of the broader landscape and a willingness to look beyond the immediate allure of quick-wins.
Whether it’s ensuring clean tupperware contributes to a comfortable life, or that an automated lead capture process genuinely fuels business growth, a strategic ‘game plan’, whether for our household chores or our AI ambitions, acts as our guiding star, all the while making sure that the many trees we focus on are indeed leading us towards the right forest.