ChatGPT, an A.I. system developed by OpenAI, has been available to the public since the end of 2022, and it has been making big waves ever since: from writing students' book reports and papers to composing new lyrics for old songs.
So what is it?
It is a system trained on vast amounts of online text, capable of predicting the most probable next word in a sentence. This results in writing that is strikingly humanlike.
It’s amazing, Mike!
Despite the amazement and excitement surrounding ChatGPT, I believe it’s important to express a certain level of skepticism. These systems will not only reduce the cost of producing text, images, and code, but video and audio as well. With their ability to sound and appear human, these outputs will come across as extremely convincing. Their primary focus is on mimicry: mastering the art of persuasion, creating outputs that seem remarkably realistic, as if they came from a human. My ex-colleague Sander Duivenstein co-wrote a great book about it (Echt Nep).
But ChatGPT has no actual idea what it’s saying or doing. You are basically just looking at autocomplete. A very advanced and costly autocomplete.
I was surprised by my own sense of wonder when I started using ChatGPT, because it is a very, very cool program. And in many ways, I find its answers much better than Google’s for a lot of what I would ask it. My wife and I were looking to paint a staircase and it came up with some very helpful information, in Dutch!
But at the heart of it, it is just repeating things that other people have said, trying to maximize the probability of the next word. It’s just autocomplete: a very probable echo chamber. And as we have all experienced in some way or another, autocomplete regularly gives you bullshit. There used to be a website that made fun of the errors of autocomplete and autocorrect (damnyouautocorrect.com), but it seems it went the way of the dodo.
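To make the autocomplete point a bit more tangible, here is a minimal sketch in Python with a made-up three-sentence corpus. Real large language models use neural networks over tokens rather than simple word counts, but the core idea of picking a probable continuation is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (made up); a real model is trained on vastly more text.
corpus = (
    "the staircase is painted white . "
    "the staircase is painted grey . "
    "the wall is painted white ."
).split()

# Count which word follows which (a simple bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def autocomplete(prompt: str, length: int = 4) -> str:
    """Repeatedly append the most probable next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the staircase"))
# -> "the staircase is painted white ."  (the most probable continuation,
#    not necessarily a true or sensible one)
```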
Is ChatGPT a great next step in AI?
That is a very big topic that a lot of much smarter people have written great articles and papers about. To me it is a great tool that highlights the strength of Large Language Models. Cyberdyne Systems taking over our world? Not by a long shot.
AI in your Enterprise?
So, having taken the position of a ChatGPT sceptic, is there a place for AI in your enterprise? I think there is. But it needs to exist in the contextual fabric of where your customers live. With the products that Salesforce offers within our platform, we give you the tools to get insights about your customers based on their past interactions.
Customer facing
You can then use these insights to strengthen relationships, prioritize leads, score opportunities, determine who needs to solve a case, and help with campaigns to drive your business forward.
Employee productivity
With Einstein you can help your employees get more done in a shorter amount of time with intelligent case classification, next best actions, and recommendations. Provide them with the answers and information they need quickly while automating the best action to take.
As promised in my earlier blogs, I am writing a blog a month about Architecture, Governance and Changing the Process or the Implementation. Last month was about Technical Debt; this month it is about going live successfully. Why? The relay race that the Dutch team lost due to a faulty handover got me thinking about software delivery, going live, handover moments and risk mitigation. Besides training to sprint really fast, relay teams also train the handover extensively. And despite all this training, this time it went wrong. At an elite Olympic athlete level!
To be fair, there are many examples of handovers going wrong.
Processes, tools and safety measures
Successful projects have certain elements and key traits in common. These traits consist of:
Mature, agreed-upon processes with KPIs and a feedback loop to standardize the handovers
Automation to support these processes
Safety measures for Murphy’s law (when even processes can’t save you)
The key principle is not to drown an organisation in red tape and make things more complicated than necessary, in line with my first blog, “Simplify, then add lightness”. We need these processes to progress in a sustainable and predictable way towards a desirable outcome: going live with your Salesforce implementation.
These processes are there to safeguard the handovers: the part of the Dutch relay race that made me think about our own relay runs and their associated risks.
Handovers
The main handovers are:
User → Product Owner → Business Requirement → User Story → Solution Approach → Deploy → User.
As you can see it is a circle and with the right team and tools it can be iterated in very short sprints.
User → Product Owner
“If I had asked people what they wanted, they would have said faster horses.”
Henry Ford
Okay, so there is no evidence that Ford ever said anything like this, but the words have been attributed to him so many times that he might as well have said them. I want to use the quote to show the different methods of getting to an understanding of user needs. On one side, you innovate through tightly coupled customer feedback; on the other, visionaries ignore customer input and instead rely on their own vision for a better product.
Having no strong opinion on either approach, I still tend to be a bit more risk averse and like to have feedback as early as possible. This is perhaps not a handover in the true sense that you can influence as an architect, but getting a true sense of user needs is essential for your Salesforce project to succeed.
I still remember a discussion with a very passionate Product Owner: we need a field named fldzkrglc for storing important data. Diving deeper, we found it was a custom field in SAP that was derived from the previous mainframe implementation. So that basically meant the requirements were 50 years old. Innovation?
Business Requirement → User Story
The software industry has evolved in many ways, one of them being how to write down user needs. A simple framework I use for validating User Stories is the 3 C’s. A short recap:
Card is essentially the story printed on a card with a unique number. There are many tools to support that.
Conversation is around the story, which basically says “AS A … I WANT … SO THAT I …”. It is the starting point for the team to get together and discuss what is required.
Confirmation is essentially the acceptance criteria which, at a high level, are the test criteria confirming that the story works as expected.
An often used measurement is the Definition of Ready (DoR). It is a working agreement between the team and the Product Owner on what readiness means, and a way to indicate that an item in the product backlog is ready to be worked on.
As handovers and risks go, the quality and quantity of the user stories largely determine how great the Salesforce implementation will be. Again, as an architect you can only influence so many things, but in order to bring innovation and move fast, User Stories are key.
User Story → Solution Approach
This is where, as an architect, you can have a solid impact. This is where your high-level architecture, solution direction and day-to-day choices come together. This is your architecture handover moment, when you work together with the developers to create the high-level design based on the actually implemented code base. The group as a whole can help find logical flaws, previously wrong decisions and tech debt. The architecture becomes a collaboration. As I wrote earlier, keep it simple and remember Gall’s Law. It explains why you should strive for as few parts as possible in your architecture.
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
John Gall: General systemantics, an essay on how systems work, and especially how they fail, 1975
Next to keeping it simple, I also firmly believe that there should be a place to try out and experiment with the new technology that Salesforce brings. The earlier mentioned experimenting phase fits perfectly. Why only prototype the new business requirements? It is a great place to test out all the cool new technical things Salesforce offers, like SFDX, Packages or even Einstein, and evaluate their value and the impact they could have on your Salesforce Org.
Deployment
In any software development project, the riskiest point as perceived by the customer is always go-live time. It’s the first time that new features come into contact with the real production org. Ideally, during a deployment, nobody will be doing anything they haven’t done before. Improvisation should only be required if something completely unexpected happens. The best way to get the necessary experience is to deploy as often as possible.
“In software, when something is painful, the way to reduce the pain is to do it more frequently, not less.”
David Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
So establish a repeatable process for going live and perform it many times. This sounds easy but remember that during an Olympic relay race it still went wrong.
Salesforce Sandboxes and Scratch Orgs provide a target org to practice your deployments. They are meant for User Acceptance Testing, but also for making sure that everything will deploy successfully. They also give developers the necessary experience and feedback from deploying their work while it is in progress. So now that we have a target, we need tools to help manage the drudgery.
There are whole suites of tools specifically designed to support the development team in this, from Gearset to Copado and Blue Canvas to Flosum. There is a lot out there; there are even teams that build their own toolset on top of Salesforce DX. It is good practice to choose a tool that supports you and your go-live process and to automate as much as possible.
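As a minimal illustration of “practicing the go-live”, here is a sketch of a script that runs a validation-only deployment against a Sandbox using the Salesforce CLI. The exact command and flags depend on the CLI version you use; the older sfdx command shown here is an assumption, so adapt it to your own toolchain, whether that is Gearset, Copado or your own Salesforce DX setup.

```python
import subprocess
import sys

# Assumed values: adapt the org alias and source path to your own project.
TARGET_ORG = "uat-sandbox"   # Sandbox used to rehearse the go-live
SOURCE_PATH = "force-app"    # Source directory of your project

def validate_deployment() -> bool:
    """Run a check-only (validation) deployment: nothing is persisted in the org,
    but all components are compiled and local Apex tests are executed."""
    command = [
        "sfdx", "force:source:deploy",   # assumed CLI command; newer CLIs use `sf project deploy`
        "--checkonly",                   # validate only, do not save changes
        "--testlevel", "RunLocalTests",  # run the org's own Apex tests
        "--sourcepath", SOURCE_PATH,
        "--targetusername", TARGET_ORG,
    ]
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    # Fail the pipeline (or your local check) when validation fails,
    # so problems surface long before the real go-live.
    sys.exit(0 if validate_deployment() else 1)
```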
Safety measures
We have agreed upon working processes, we measure the quality of handover moments, we have automated deployments with bought or homegrown tools; now what?
Even Olympic athletes make mistakes, so what can we do with software and databases that is impossible in the physical world? Backups!
A lot of Salesforce deployments, especially for larger customers, tend to be fairly data driven. Next to business data such as Accounts, Contacts and Orders, there is configured business rule data, for example with CPQ. On top of that there is technical data or metadata for Trigger and Flow frameworks, internationalisation and keeping local variations maintainable.
Deploying this data, or even making changes in a Production Org, calls for backups. A good practice is to take a complete Org backup before you release.
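A full Org backup is typically handled by a dedicated backup product or scheduled export, but as a small hedged sketch, this is the kind of pre-release snapshot you could take for a handful of critical objects. The use of the simple_salesforce library and the credential handling are assumptions; use whatever backup tooling your organisation has standardised on.

```python
import csv
import os
from datetime import date

from simple_salesforce import Salesforce  # third-party library, assumed available

# Objects (and fields) considered critical enough to snapshot before a release.
OBJECTS_TO_BACKUP = {
    "Account": "SELECT Id, Name FROM Account",
    "Contact": "SELECT Id, LastName, Email FROM Contact",
}

def backup_before_release(sf: Salesforce, folder: str = "backups") -> None:
    """Write one CSV per object so data can be inspected or restored if the release goes wrong."""
    os.makedirs(folder, exist_ok=True)
    for name, soql in OBJECTS_TO_BACKUP.items():
        records = sf.query_all(soql)["records"]
        path = os.path.join(folder, f"{date.today()}_{name}.csv")
        fields = [f for f in records[0] if f != "attributes"] if records else []
        with open(path, "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=fields, extrasaction="ignore")
            writer.writeheader()
            writer.writerows(records)
        print(f"Backed up {len(records)} {name} records to {path}")

if __name__ == "__main__":
    # Placeholder credentials; in practice use a secrets store, never hard-coded values.
    sf = Salesforce(username="user@example.com", password="***", security_token="***")
    backup_before_release(sf)
```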
Key takeaways?
Establish a process and start ‘training’ your handover moments
Automate your release process as much as possible and measure your progress
When a handover goes wrong, have some safety measures in place
As promised in my earlier blog, I will try to write more often, about Changing the Process or the Implementation and about Technical Debt. Technical debt? What is that? It is a phrase coined by the famous software developer Ward Cunningham, who, besides being one of the authors of the Agile Manifesto, is also credited with inventing the wiki. He used the metaphor to explain to non-technical stakeholders why he needed budget for refactoring existing software.
He didn’t realize it at the time, but he had coined a new buzzword in the software community. Later, it would become the subject of many articles, debates and opinions on how to solve technical debt in your code base. In this short blog I want to address the key elements of managing its creation in your Salesforce Org. After all, prevention is the best cure.
Types of Technical Debt
So what is it? In my opinion there are many types of technical debt. They range from code that needs to be refactored and application parts that need to be restructured, to interfaces that work but nobody wants to touch, or even complete systems that no longer fit the future application architecture. Who hasn’t seen, in a large company, more than one Enterprise Service Bus that nobody dared to change?
I still remember a factory that had a Dutch team travelling in the US, looking on eBay for second-hand parts for their server back in the Netherlands, because the company that originally made the parts had gone bankrupt. That specific server had been EOL for more than 10 years. Unmanaged technical debt becomes a very serious liability over the years.
Is Technical Debt so bad?
So now that we have a working definition of Technical Debt, I want to talk about whether it matters. Many of my personal achievements have started with a debt. For example, getting my degree meant taking out a student loan, and buying my first house came with a mortgage. Approached that way, debt is a way to start quickly and defer the full payment. It is not avoiding that payment!
It is hard to predict whether starting quickly and incurring a debt is a wise decision, or whether it will be a sound investment. The key point is that it is a debt. As my colleague said many years ago, it comes down to payment: “Either you pay now, or you pay later, but then? It will be more”.
Debt? To whom?
If we agree that having a debt is not that bad as long as you plan on paying it back, then whom do we pay? And who is paying? What makes the problem harder is the vagueness of the term debt. It immediately makes me think about the creditor and owner of that debt. For the earlier mentioned student loan in the Netherlands it is clear: I had to pay the government back within a certain time frame. For a mortgage it is also clear: it is the bank. And there are many rules, regulations and stipulations to safeguard those organisations.
But with Salesforce it is often unclear whom you have to pay. Is it Future Me? Is the Enterprise Architecture team, or even one of its governance bodies such as the Design Authority, the creditor? And who is the one that needs to pay? The CIO? The Solution Architect? The Product Owner? The development team? As you can see, it can and will vary a lot. As a true consultant, I think it depends on the level of the Technical Debt. An application not fitting in the landscape is probably owned by the EA team; an Enterprise Service Bus not updated in years is a CIO problem; and the smaller debts should be solved in the teams maintaining or actively developing the applications.
It will boil down to the agreed upon Governance in your organisation: what are the agreed upon rules safeguarding all parties and who gets to decide when to pay to whom?
Is Technical Debt unavoidable?
As stated in the beginning, this blog is about managing the Technical Debt in your Salesforce Org. Let’s be honest, some of what we consider Technical Debt was never deliberately created. Often it is older technology or previously written code that now hinders the requested changes. It still works and does what it needed to do, but it is no longer the best fit or optimal solution for the changed requirements.
All software will, in the end, go the way of the dodo. It will become obsolete for many different reasons. Salesforce implementations aren’t that different. There are not that many greenfield implementations; most of us work on existing Orgs that have had many teams with different skill levels working on them. On top of that unstable base there is the phenomenon of decisions made in the past being overtaken by the technology push of today. When I joined Salesforce we still had the Classic UI, and now, five years later, there are still customers that haven’t transitioned yet.
What are key drivers of Technical Debt in your Org?
I believe these are the main aspects that contribute:
Business pace
Agile everything
Tech lead Technical Debt
Let’s explore each topic:
Keeping up with Business
Business will always want to go at a pace IT can’t keep up with, and in my opinion that’s the way it should be. That is what we try to solve with Pace Layering our Enterprise Architecture. Not all elements of an application landscape need to change with every season. For example, websites need to change more often than the process for sending invoices to customers. Stewart Brand gave a great explanation of applying Shearing Layers to applications. The question that I think lies at the heart of it is: should every request or new idea be implemented immediately? That is a great way to accrue technical debt on a more business-architectural level.
Solution: In general it is good practice to explore and test innovations and new ideas for the value that they promise before determining whether they can and should scale up and end up as projects and items on the backlog. In some organisations there is a role acting as Portfolio Lead that safeguards how many projects can be invested in and executed at the same time while still delivering the expected value. Managing the different priorities is an ongoing aspect, not a one-time resolution; it is just part of the process. Next to this more governance-oriented approach, I often see a Technical Architect who, with a small team, develops these ideas in so-called Proof of Concepts or Architectural Spikes on a separate Sandbox to validate expectations, the solution approach, the technical fit with the existing Org and the overall value.
Organised this way you will have a safe place for experimenting and trying to see if you can make it work in your Salesforce Org. No is still a valid answer!
Agile solves everything, right?
Agile is a way to take bite-sized chunks of the agreed upon strategy to implement the long-term vision for a specific product; it is not a replacement for strategy. It lets you pivot quickly, so you can recover from a bad decision or implement features that suddenly get a higher priority. That said, some architectural decisions are too big and important to leave until the last moment. If different Agile teams do not coordinate or collaborate towards a shared architectural understanding of where they fit in and how to achieve the outlined business goals, that is another way to accrue technical debt.
Solution: Agile can be seen as a risk management approach: stay in close communication with all your key stakeholders and show short-term progress towards the larger picture on your key deliverables. You need to strike a balance between just enough architecture and enough upfront planning for critical, large and complex projects.
The earlier mentioned ongoing tinkering, prototyping, refactoring and architecture experiments are an important part of this. They can, for example, validate the architecture decision to implement a new real-time integration pattern using Platform Events.
Cross-team alignment for Agile teams is also needed to prevent Technical Debt. Different teams need to stay aligned on the architecture, the business strategy and how to manage those deliverables across teams. There are many solutions, from big room planning or shared backlogs to complete methodologies like SAFe. They should all lead to an agreed upon and prioritized roadmap that is then input for the different teams, their Product Owners and their backlogs. Managed this way, the big architecture topics don’t pop up out of nowhere; they are managed from the start across the teams.
Tech lead Technical Debt
Sometimes I encounter a sort of disconnect between development teams and the business partner. Not physical, but more the sentiment that “the Business” is not clearly stating their requirements, or that they haven’t thought them through. So instead of sticking to a consistent implementation and iterating and refactoring the current design, we get a Tech Lead gold-plating solutions to requirements. We just need an interface here; trust me, in two or three months we will definitely need the extra abstraction it provides. I saw this really cool injection pattern I want to try out. And over there we need event-driven updates, so we are totally ready for when users will actually use it…
Let’s be honest, most people tend to be less interested in the complex technical elements needed to make a feature work. Thus relatively simple features may add disproportionately to a development schedule, and that is hard to explain, every time. So it makes sense to want to be prepared, but at the same time you have to wonder how far you are willing to go, and for what.
Next to that, most Architects and Developers that I know are fascinated by new technology and are itching to try out the new features of Salesforce and create their own implementation of the slick new thing they saw, whether or not it is required.
Salesforce releasing three times a year adds to that pressure: both wanting to implement the new innovative technology and having to deprecate some older implementations. Process Builder and Workflow Rules being migrated to Flow is a great example of the latter category.
Solution: The solution to this technology-driven Technical Debt is twofold. I firmly believe that there should be a place to try out and experiment with the new technology that Salesforce brings. The earlier mentioned experimenting phase fits perfectly. Why only prototype new business requirements? It is a great place to test out all the cool new technical things Salesforce offers, like SFDX, Packages or even Einstein, and evaluate their value and the impact they could have on your Salesforce Org.
Next to that, there is a need for an ongoing process of evaluating and adjusting your current implementation standards and Org maintenance, from newly needed naming conventions for Flow to a standard for error handling and logging errors. The second part is that we all need to communicate better. Sometimes infrastructural elements need to be in place before something fancy can be built. If your Salesforce Org is not yet able to support the requested feature, what is the plan to get there?
This all leads to different levels of elements in the roadmap and the backlogs: both business User Stories and the underlying supporting technical User Stories. Without proper planning and attention for both, Technical Debt will accrue with every Salesforce release, even if you have the best Salesforce Admin in the world.
Key takeaway
Is Tech Debt avoidable? It will be more manageable if you pay attention to:
Proper maintenance of your Org and its standards
Keeping pace with Salesforce releases
Balance the push for new features with proving their value
Experiment with improvements in your PoC environment
“Simplify, then add lightness.” This famous quote is from Colin Chapman. It was his philosophy somewhere around 1950, way before ‘minimalism’ became fashionable.
Colin Chapman – engineer, inventor, and builder in the automotive industry, and founder of Lotus cars
Least number of parts
By tradition, Lotus uses the least number of parts needed in its products. Yet, they are impeccably engineered, retain their lightness and work dependably.
This is a great analogy and one we need to see more often in our approach to IT systems. It is what we should strive to establish with our architecture and designs. Yet we often see the opposite: we add many layers of abstraction and indirection to be ready for possible future changes.
But when we discuss requirements or features we use the term MVP. Why not with our designs and architectures? Design is driven by requirements and should fit the overall architecture. Building anything beyond those requirements is pure speculation. My goal is to make Colin Chapman’s statement “Simplify, then add lightness” the guiding principle when architecting for a project.
Thinking about how to establish a wider practice around the concept of keeping it simple, lean and minimal, Gall’s Law immediately comes to mind. It explains why you should strive for as few parts as possible in your architecture.
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
John Gall: General systemantics, an essay on how systems work, and especially how they fail, 1975
Although dubbed Gall’s Law by some, the phrase is not labeled as such in the original work. The book cites Murphy’s Law and the Peter Principle, and is filled with similar sayings. Gall’s Law is a rule of thumb for systems design from Gall’s book Systemantics: How Systems Really Work and How They Fail.
As a Salesforce Program Architect I’m involved with some of our biggest customers and their large and complex implementations. I’ve heard lots of stories about failed Salesforce implementations and they all had something in common: they became over-built systems designed for the wrong people and processes.
I can relate. It is a constant struggle to keep it as simple as possible and still deliver fast. Part of the struggle is our own knowledge: we know that some design patterns lead to brittle, hard-to-change systems with lots of Technical Debt. That will be part of another blog. Another part may be the question of whether the proposed first release is appealing enough. Yet another part may be the impact of keeping it simple: do you change the design or change the process in order to keep it simple and deliver fast? There is a certain inertia that comes with change. But that also will be part of another blog post.
Coping strategies
Why do I like to use Chapman and Gall? Chapman has great quotes that are understandable and translate well to the project. “In order to increase speed, you have to add lightness”. This sparks great discussions with the customer. How can we make this simpler and still have value? What is the smallest increment that we can deliver that is still deemed valuable?
I like Gall’s Law because influential people I learned a lot from in my career have mentioned it. One of the first times I encountered Gall’s Law was when Grady Booch mentioned it. I don’t need to explain who Grady is, right? Right?
What is also great is that in later books Gall even states some strategies to scale up from the simple system and cope with the possible negative outcomes. Sound familiar?
Develop using Iterative processes
Building iteratively is about reaffirming or refining the shared understanding of the problem and getting feedback fast from actual users. You can think of it as a trial-and-error methodology that brings your project closer to its end goal. Build an MVP that addresses the most important issues first and save the extra stuff for later. A small release can help get your users involved more quickly and generate better feedback, without the risk of over-architecting, over-designing and overbuilding.
Reuse known working modules
Avoid overdoing it by using standard Salesforce features. Salesforce consists of many known working modules that connect really well: for example, Lead Management, Account Management, Opportunities and their scoring models, and Case Management. There is no need to architect, design or develop all of this yourself. These modules still need to be configured and sometimes even extended with some customization, which should be done according to well-known good practices. But that probably also deserves YABP (yet another blog post). Salesforce has great resources on well-known patterns and practices; visit https://architect.salesforce.com/ to learn more.
Release early and often
In any software development project, the riskiest point is always deployment time. It’s the first time that new features come into contact with the real production org. Ideally, during a deployment, nobody will be doing anything they haven’t done before. Improvisation should only be required if something completely unexpected happens. The best way to get the necessary experience is to deploy as often as possible.
“In software, when something is painful, the way to reduce the pain is to do it more frequently, not less.”
David Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
Salesforce Sandboxes and Scratch Orgs provide the means to do this. They are perfect for target practice, to make sure that everything will deploy successfully. They also give developers the necessary experience and feedback from deploying their work while it is in progress. There are whole suites of tools specifically designed to address this, from Gearset to Copado and Blue Canvas to Flosum.
Automate testing
Use automated testing to ensure that enhancing the system does not break it. Otherwise any further steps are enhancements of a non-working system.
“So, when should you think about automating a process? The simplest answer is, “When you have to do it a second time.” The third time you do something, it should be done using an automated process.”
Jez Humble, Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation
Building, executing, and maintaining tests can take a long time. How many business logic decisions does your Salesforce implementation have? What about the different system and application integrations? To add to the complexity, Salesforce delivers many new features through three major upgrades every year. Great for improving functionality, but these regular upgrades break manual test scripts, which then take many hours to fix. All this leads to diminishing returns on manually creating and maintaining your tests.
So part of the solution is to automate your tests to ensure that Salesforce still delivers what you agreed upon. And just as with releasing early and often, there are several companies that deliver solutions to help you here, depending on your team size, skill levels and, of course, pricing.
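For teams building their own toolset, here is a minimal sketch of what “automate your tests” can look like: kick off the org’s Apex tests from a script and fail loudly when anything breaks. The sfdx command, flags and JSON shape are assumptions based on common CLI usage; commercial test-automation suites wrap the same idea in a friendlier package.

```python
import json
import subprocess
import sys

TARGET_ORG = "uat-sandbox"  # assumed org alias

def run_apex_tests() -> dict:
    """Run all local Apex tests and return the parsed JSON output."""
    command = [
        "sfdx", "force:apex:test:run",  # assumed command; newer CLIs use `sf apex run test`
        "--testlevel", "RunLocalTests",
        "--resultformat", "json",
        "--wait", "30",                 # minutes to wait for the run to finish
        "--targetusername", TARGET_ORG,
    ]
    result = subprocess.run(command, capture_output=True, text=True)
    return json.loads(result.stdout)

if __name__ == "__main__":
    outcome = run_apex_tests()
    summary = outcome.get("result", {}).get("summary", {})
    print(f"Outcome: {summary.get('outcome')}, failing: {summary.get('failing')}")
    # Treat anything other than a clean pass as a broken build.
    sys.exit(0 if summary.get("outcome") == "Passed" else 1)
```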
Key takeaway
In order to be successful with your Salesforce implementation: “Keep it simple and add lightness”
Start small and simple with your features and design
Add more features and functionality based on actual User Feedback
Design Thinking and Agile are similar, different, and intertwined.
Short answer
Design Thinking is used strategically by using design methods to find the right question and begin to answer it. Agile is mostly used operationally, usually when building software, where once a question is asked, teams iterate toward a solution.
Introduction
Today, most organisations utilise many technologies in order to source, process, transport and deliver products and services. All of these technologies, as well as most, if not all, of the business processes still performed manually, are underpinned by information technology. As Microsoft’s Bill Gates said, “Information technology and business are becoming inextricably interwoven. I don’t think anybody can talk meaningfully about one without talking about the other.“
Change is occurring in both the business and IT environments at a far more rapid pace than ever before. The rate of change is not going to slow down anytime soon. If anything, competition and new technology will probably speed things up even more in the next few decades in most industries.
Due to this extensive use of technology in a rapidly changing competitive environment, the need to continually align an organisation’s technology, products and services with its business direction has become increasingly urgent and increasingly difficult. Hence the rise of methods like Design Thinking and Agile. Both converge on the challenges outlined above, but they have quite different backgrounds.
Characteristics of the Design Thinking methodology
Design Thinking is used strategically by using design methods to find the right question and begin to answer it. It is a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.
The methodology has historically been applied by designers during their design processes, but it can be used by anyone to solve everyday problems in a creative manner.
Characteristics of Agile methodologies
Agile refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration.
Agile methods generally promote a disciplined project management process that encourages frequent inspection and adaptation, teamwork, self-organization and accountability and a business approach that aligns software development with customer needs and company goals.
What are the similarities between the methodologies?
Both methodologies use input from outside the team doing the work. Designers do user research, gather business needs and discuss technology possibilities. For Agile this looks more like creating backlogs, writing user stories and determining success metrics. Another similarity is iteration: both processes embrace iteration as part of the process, establishing ongoing refinement of the business value. Perhaps the most interesting similarity is that in both methodologies, employees (people) are the focal point for creating value. This is stimulated by organising employees in cross-functional teams, which in turn stimulates cross-functional solutions for a product, service or piece of software.
Which differences can be recognized?
There are also some differences. Agile in general doesn’t have a ‘synthesis’ stage; usually the results from the last iteration are the direct input for the next iteration. It is common for requirements to be updated and prioritised before work commences again. Design Thinking, however, takes a step back, gathers learnings and then spots patterns to make an informed leap to something new.
Another difference is the staging of product development. The legacy of Design means that we still often think in terms of projects with a beginning, a middle and an end, at the end of which the final product is delivered, with semi-finished products deployed and tested in between. Agile definitely has stage gates for deployment (alpha, beta, launch), but it has the ability to deploy a solution that can be seen as a finished product at any point in time. The design process of a physical product or service perhaps needs these points to force a coherent output or to avoid large write-offs on unused product or service developments, whereas the design process of software doesn’t have these hurdles.
Perhaps the most interesting difference is the range of tools used to get the job done. From simple things (like pen and paper) to more complex tools (like the Business Model Canvas), Design Thinking can be as simple as taping some things together.
Conclusion
Design Thinking and Agile are similar, different and intertwined.
Design Thinking is used strategically, using design methods to find the right question and begin to answer it. Agile is mostly used operationally, usually when building software: once a question is asked, teams iterate toward a solution.
One of the propositions of cloud is that it should be possible, through the use of intelligent software, to build reliable systems on top of unreliable hardware, just like you can build reliable and affordable storage systems using RAID (Redundant Arrays of Inexpensive Disks).
One of the largest cloud providers says: “everything that can go wrong, will go wrong”.
So the hardware is unreliable, right? Mmm, no. Nowadays most large cloud providers buy very reliable, simpler (purpose-optimized) equipment directly upstream of the traditional suppliers in the server market. Sorry Dell, HP and Lenovo, there goes a large part of your market. Because when you run several hundred thousand servers, a failure rate of 1 PPM versus 2 PPM (parts per million) makes a huge difference.
Uptime is further increased by thinking carefully about what exactly is important for reliability. For example, one of the big providers routinely removes the overload protection from its transformers. They would rather have a transformer costing a few thousand dollars break down occasionally than regularly have whole aisles lose power because a transformer manufacturer was worried about possible warranty claims.
The real question remains: what happens to your application when something like this occurs? Does it simply remain operational, does it gracefully degrade to a slightly simpler, slightly slower but still usable version of itself, or does it just crash and burn? And for how long?
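What “gracefully degrading” can mean in practice is easiest to show with a small sketch. The example below (Python, with hypothetical function names) wraps a call to a recommendation service: if that dependency is down, the application falls back to a simpler, cached answer instead of crashing.

```python
import logging
import time

# Hypothetical fallback data: a cached, less personalised answer that is
# always available, even when the fancy backend is not.
CACHED_TOP_SELLERS = ["basic white paint", "primer", "brushes"]

def fetch_personalised_recommendations(user_id: str) -> list[str]:
    """Placeholder for a call to an external recommendation service.
    In a real system this would be an HTTP request that can time out or fail."""
    raise TimeoutError("recommendation service unavailable")

def recommendations_with_fallback(user_id: str, retries: int = 2) -> list[str]:
    """Try the rich answer a couple of times, then degrade gracefully."""
    for attempt in range(retries):
        try:
            return fetch_personalised_recommendations(user_id)
        except TimeoutError:
            logging.warning("attempt %d failed, retrying", attempt + 1)
            time.sleep(0.1 * (attempt + 1))  # simple backoff
    # Degraded mode: slightly simpler, slightly less relevant, but still usable.
    logging.warning("falling back to cached top sellers for %s", user_id)
    return CACHED_TOP_SELLERS

print(recommendations_with_fallback("user-42"))
```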
The cloud is not about technology or hardware, it’s about mindset and the application architecture.
A team of cryptographers and developers wants to create a website where anyone can sell data sets to the highest bidder. “You’ll hate it” is the slogan of the service, which is accessible via Tor. Payments are made in Bitcoin.
Anyone who wants to leak a file to the highest bidder must upload it to Slur, a marketplace for data. There are no restrictions on the type of data that is offered or the motives of the seller, says spokesman Thom Lauret of U99, the group of cryptographers and developers behind the website. The aim of the site is to “subvert and destabilize the established order”.
The website expects stolen databases, source code for proprietary software, zero-day exploits and other confidential documents, as well as “unflattering” pictures and videos of celebrities. Only the highest bidder gets the data and may then choose to release it or keep it hidden. Large companies may be able to deposit money to keep leaks out of the public eye. To counter this, the website lets users pool their bids, a form of crowdsourcing that creates a larger bidding deposit.
Slur.io ensures that “whistleblowers” remain completely anonymous and are compensated. “Slur introduces a balanced system with the material interests of whistleblowers protected in exchange for the risks they take,” said spokesman Lauret. Datasets can only be offered once.
To prevent false claims about the content of the data, the buyer can see the data before the seller gets the money. If the buyer is not happy with the content, they can start an arbitration in which other members of the community vote on the content. If they agree with the buyer, the buyer gets their money back.
Payments are made in Bitcoin and the site will only be accessible via Tor, to keep out the various governments. The developers do not expect to be targeted by the government, because the source code would fall under free speech and they say they do not benefit from the data that is sold on the site. The question is whether the American government agrees; the site is currently based in San Francisco.
The developers of the website hope to raise public money to pay for the development of the platform. A beta version of the site was planned to open in April, with a full release to follow in July.