AWS Cloud Enterprise Strategy Blog

Experiment More, Fail Less

“I never lose. I either win or I learn.”
– Nelson Mandela

A phrase I find does not resonate well with enterprise executives is “fail fast.” Despite the best attempts to justify this exhortation, executives at an airline, financial services firm, or restaurant company are understandably not going to embrace a philosophy in which the wrong type of failure could put the company out of business. So what do we really mean?

Ironically, the intent is to educate executives that more rapid, data-driven experiments, freed from the overhead of cumbersome bureaucracy, will almost certainly lead to fewer failures than companies experience today. Any failures that do occur will also have less business impact. Ultimately, our objective is to learn faster and more cost effectively. The well-intentioned but often top-heavy governance ceremonies that have evolved in large enterprises can have the opposite effect.

Most of us have seen the pattern: in the desire to justify an annual budget or to transform a product, process, or business, complex business cases are created. These cases are often heavy on the benefits, light on the data, and financially optimistic, with the intent of “winning” funding. The consequences of this approach are predictable. In their excitement to “win,” project teams make commitments that have a shaky objective basis. Scope bloats as departments realise this is their one shot at getting their requirements prioritised for the year, and risk increases exponentially. Given our human aversion to failure, an escalation of commitment becomes predictable, and we see failing projects continue even when gating criteria are not met.

Rather than digress into a discussion about the role of Agile, DevOps, and other methodologies in de-risking development, my focus is on how we make better decisions faster in situations where the answer is uncertain or unknown. For instance, we could speculate about what makes for a great customer experience at a grocery store’s self-checkout kiosk, but why would anyone give this credence without A/B testing? Few things are more irate or unforgiving than a customer about whom you have assumed too much.

Experiments, in general, should have a well-defined hypothesis and the data and metrics required to prove or disprove the hypothesis. I have seen a number of experimentation techniques that work well and can be adapted to most industries. These include:

  • Low-fidelity pretotypes or prototypes help create models. At McDonald’s, low-cost foam core became a standard prefab method for new physical ideas. I have seen a block of wood used as a mobile device substitute to test ergonomics and customer journey integration, and Adobe Flash used for employee user interface testing.
  • Designated test sites, such as restaurants or retail outlets, allow controlled experiments to be conducted in real-world situations (but be aware of the Hawthorne effect).
  • Branching application code and handing it to a small, agile team lets you try out basic new features.
  • Application instrumentation can be used to test non-functional impacts of changes to infrastructure or application components.
  • Eye-tracking technology can be used to study customers’ interactions with any electronic media, including websites or menu boards, and video ethnography is useful for testing hypotheses involving logistics or factory layouts.
  • Simulation software, such as MATLAB, SPICE, and OpenSim, is also a useful tool.
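To make the hypothesis-and-metrics discipline described above concrete, here is a minimal sketch in Python of how the outcome of an A/B experiment, such as the self-checkout example, might be evaluated with a standard two-proportion z-test. The numbers, function name, and significance threshold are illustrative assumptions, not figures from any real experiment:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.

    conv_a/conv_b: number of successful outcomes (e.g. completed checkouts)
    n_a/n_b: number of customers exposed to each variant.
    Returns (z, p_value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis "A and B perform the same"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical hypothesis: the redesigned kiosk flow (B) completes more
# checkouts than the current flow (A). Invented sample data:
z, p = two_proportion_z_test(conv_a=420, n_a=5000, conv_b=468, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
# Declare the hypothesis supported only if p is below a pre-agreed alpha,
# e.g. 0.05, chosen before the experiment starts.
```

The important point is not the statistics library you choose but the discipline: the hypothesis, the metric, and the decision threshold are all written down before any data is collected, so the experiment can genuinely be disproved.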

As you approach your own experiments, the framework and culture around these need to be thought through and, in some cases, treated like experiments themselves to ensure a good fit between desired outcomes and company culture. There are a few practices I recommend:

  • Time box experiments to create focus, whether by experiment type or by a simple rule of thumb for all experiments.
  • Have a small, light-touch team responsible for experiment methodology and approving the design of experiments, with limited seed money allocated to candidate projects.
  • Train the organisation on the basics of experiments, including writing a good hypothesis, using control groups and data, and other appropriate methodologies such as Six Sigma or Lean.
  • Create a repository of experiments to minimise duplication and, more critically, to share the learnings from the experiments.
  • Empower the teams and provide air cover. Watch for experiment teams sliding into “this must work” mindsets or being castigated for “failed” experiments. Recognise teams who embrace the data-driven experimentation culture, not necessarily for the tests they run but for the attitude they demonstrate.
  • Make running experiments a business as usual activity for all new business cases, focusing on data validity in discussions on proposed investments. Enable this by making data an open asset within the legal, regulatory, and confidentiality constraints of your organisation.
  • Emphasise that work products are throwaways so that teams don’t inadvertently attempt to build production solutions (and waste time) and excited leaders aren’t tempted to promote flaky code to production.

The cloud plays two roles in this area: as a subject of experiments and as a platform for experimentation. Many organisations are getting comfortable with the cloud itself and the required shift in mental model. They are doing this today by allowing architects and developers to make very low cost investments in testing existing small workloads, new services, or infrastructure as code. Not only does this approach drive comfort with the technologies, but it also creates excitement, curiosity, and understanding, all key ingredients to changing a culture and overcoming resistance. This is such a low-cost, no-license undertaking that some organisations start these experiments using their corporate expense policy rather than any formal procurement process!

For organisations that are already using the cloud, this ongoing experimentation is a proven way to drive continual improvements in aspects such as performance, affordability, operational efficiencies, and reliability. Many organisations are lifting and shifting workloads directly to the cloud but continue to run data centre era practices in managing them. Continual experimentation enables these processes to evolve, realising significant efficiencies.

The cloud is also a massive enabler of, and engine for, innovation. As an example, before these capabilities existed, performing object recognition, such as reading car license plates, cost tens of thousands of dollars up front, carried a significant ongoing operating cost, and required a long lead time to establish the necessary infrastructure. Today we can prove out the same concepts with technologies such as AWS DeepLens and Amazon SageMaker for less than $300. The same is true of advances in areas such as speech recognition. Ideas that would previously have needed extensive business cases and investments, all pulling from a finite amount of time and creating the psychological commitment I described above, can today be explored for hundreds of dollars. This is also a low-cost opportunity to get non-technologists excited about the future, spark discussions on use cases and business implications, and allow your technology teams to be seen as innovative lighthouses.

I hope I have painted a picture for you of how we need to shift our conversations from failure to experimentation. The cumulative benefits of doing so will have a long-term, incremental, and low-risk positive impact on your organisation. Remember, the goal is not to design the perfect experiment; it is to make better decisions than you are able to make today. In today’s changing world, relying on your experience is no longer sufficient: go prove it.

Phil
Phil Le-Brun



Phil Le-Brun is an Enterprise Strategist and Evangelist at Amazon Web Services (AWS). In this role, Phil works with enterprise executives to share experiences and strategies for how the cloud can help them increase speed and agility while devoting more of their resources to their customers. Prior to joining AWS, Phil held multiple senior technology leadership roles at McDonald’s Corporation. Phil has a BEng in Electronic and Electrical Engineering, a Masters in Business Administration, and an MSc in Systems Thinking in Practice.