AWS Training and Certification Blog

Using a scientific thought process to improve customer experiences

Editor’s note: This blog is designed to share approaches and tips to help anyone in a customer-facing role better understand the needs of the end customer by asking the right questions.

A key skill of any successful customer-facing professional is the ability to uncover the various drivers, goals, and needs of your customers. Identifying them takes more than just asking questions; it takes asking the right questions. In data science, this is sometimes referred to as an ‘interesting question’ – a question that can lead to gaining more insight about the data or subject. It’s human nature to come into situations with preconceived ideas; asking the right questions helps you move past them.

With that in mind, this blog shares two critical-thinking techniques to help you break a complex scenario down into smaller parts, allowing you to better analyze the situation and take appropriate action. We’ll put both techniques into practice in a real-world scenario. You may use these techniques to complement each other, sequentially, or on a one-off basis.

  1. Analyze the data: what information is missing, and what statistical information is potentially misleading?
  2. Identify problem cause vs. symptom: what is the actual root cause you should focus on solving?

The real-world scenario

In our example, a retail company owns and manages several shopping malls. The company decided to introduce a cloud-based Software-as-a-Service (SaaS) solution that will allow its various departments to gather data to better understand their customers’ journey, shopping experience, and more. The company built and tested a proof of concept (POC) solution. The customer told us, “We were able to gather data and quickly create and publish dashboards, gaining insight that was not possible for us before this cloud-based solution. Also, we achieved 25% faster application response with our new hybrid connectivity using an SD-WAN solution, which will help us with our expansion plans.” However, the development team consistently missed deadlines because they made several attempts to incorporate numerous technical changes, delaying the anticipated launch of the production SaaS solution.

Identify missing information and statistical facts

In general, missing information, or statistics that lack context, can significantly change your perception of a situation and the optimal solution for it. When information is lacking or provided out of context, it can lead to poor decision-making or recommendations.

For example, statistics like “40% higher,” “10% faster,” or “5% slower” can be misconstrued without additional comparison context. When a comparative like “higher” is used as a qualifying value, ask probing questions: “40% higher than what? In what time frame? Against what parameters?”

Using our example, the customer mentioned that, “We achieved 25% faster application response with our new hybrid connectivity using an SD-WAN solution.” Although this seems to be a complete statement, it’s very generic.

Additional questions to consider include:

  • What applications are being used?
  • Is “25% faster” referring to users’ access to the SaaS POC application?
  • When was this data collected? Specifically, was it measured under the same circumstances (e.g., during peak hours)?

The customer responds that they received an application performance report from the applications team showing that 65% of applications are faster. Still, the percentage alone remains unclear and potentially misleading. Continue probing.

What does it mean that 65% are faster?

  • Does this mean that all applications were measured, and only 65% of them were faster? Or were only 65% of applications considered and measured?
  • Are these applications accessed in the cloud, on-premises, or a mix?
  • Is it mainly about the new POC SaaS application?

After asking these questions, the customer advised that some applications were not assessed due to security and compliance restrictions. Therefore, only 65% of the applications hosted on the cloud, including the new POC SaaS application, were considered and measured for performance optimization during peak hours.

From this we learned, “65% of all our applications that are hosted on the cloud and are permissible for measurement have 25% faster performance during peak hours.” Obviously, this tells quite a different story, which is why it’s critical to keep probing with clarifying questions to understand the full context of the data and eliminate any false assumptions.
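To make the denominator problem concrete, here is a small sketch using hypothetical numbers (200 total applications, 130 of them cloud-hosted and permissible for measurement; none of these figures come from the scenario itself). It shows how the same headline, “65% of applications are faster,” can describe two very different situations:

```python
# Hypothetical figures: the same headline statistic, "65%", can
# describe two very different situations depending on the denominator.

total_apps = 200  # everything the company runs

# Interpretation A: every application was measured,
# and 65% of them showed faster response times.
measured_a = 200
faster_a = 130
share_faster_a = faster_a / measured_a  # 0.65 of measured apps improved

# Interpretation B (what probing revealed): only cloud-hosted apps
# permissible for measurement were assessed at all, and it was those
# apps that showed the ~25% improvement during peak hours.
measured_b = 130  # 65% of the estate was even measured
faster_b = 130
share_measured_b = measured_b / total_apps  # 0.65 of apps were measured
share_faster_b = faster_b / measured_b      # 1.0 of measured apps improved

print(f"A: {share_faster_a:.0%} of all apps improved")
print(f"B: only {share_measured_b:.0%} of apps were measured; "
      f"{share_faster_b:.0%} of those improved")
```

Both interpretations produce the number 65%, which is exactly why the probing questions above matter: the denominator behind a percentage changes the story it tells.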

After analyzing and clarifying the applications’ access performance, next we will seek to identify and analyze the root cause of the project delay using a different technique.

Identify problem cause vs. symptom

Have you ever rushed to provide a recommendation that ultimately created new problems downstream, generating an unintended chain of issues that deviated from the original problem? To avoid doing so, it’s important to distinguish problem causes from symptoms. For our purposes, a symptom is an observable sign that an outcome didn’t work as intended. A cause is the reason why something happened in the first place.

For this technique, we’ll use the ‘Five Whys,’ originally developed by Sakichi Toyoda, to identify the underlying cause-and-effect relationships of a specific problem. By the time you get to the fourth or fifth ‘why,’ you’ll most likely have identified the actual cause to be improved or resolved.

In our example, we want to know the main reason the project team missed their go-live deadline. The customer explained, “The deadline was missed because the development team made several attempts to incorporate numerous technical changes.”

The following are the key questions to consider:

  • Is this a cause or a symptom? It could be both. Even if we assume this is the actual issue, there is a high possibility we are dealing with a symptom and not the actual root cause.
  • How do we know? Identify what led to the several technical changes. If there is a symptom, there must be a cause.

Reason #1 given by the customer: We had to change some of the configuration specifications to comply with the company security standards.

Analysis: Again, we possibly have another symptom here! This means we still need to peel the onion to discover more and identify the root cause. This leads us to ask: why were security standards not included from the beginning?

Reason #2 given by the customer: Because the expected features and functionalities of the new SaaS operational model were not finalized.

Analysis: This is yet again another symptom. Notice there is an inflection point at this stage: we are clearly moving away from the technical deployment aspect toward defining standards and functionalities. Based on what we learned, why was this ‘symptom’ happening?

Reason #3 given by the customer: The management and IT executives did not agree on how this new model should operate. In turn, this impacted some of the expected features and functionalities of the new SaaS operational model.

Analysis: After going through this root-cause identification analysis, it’s now obvious: this is not the developers’ issue. The actual cause is not even related to any technical aspect.
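The question chain above can be captured in a simple data structure. This is purely an illustrative sketch: the question and answer text paraphrases the scenario, and the last answer in the chain is treated as the candidate root cause.

```python
# A minimal sketch of the Five Whys chain from this scenario.
# Each entry pairs a "why" question with the answer the customer gave.
five_whys = [
    ("Why was the go-live deadline missed?",
     "The team made several attempts to incorporate technical changes."),
    ("Why were there so many technical changes?",
     "Configurations had to change to meet company security standards."),
    ("Why weren't security standards included from the start?",
     "The expected features of the SaaS operational model weren't finalized."),
    ("Why weren't the features finalized?",
     "Management and IT executives did not agree on how the model should operate."),
]

# Walk the chain; the final answer is the candidate root cause.
for why, answer in five_whys:
    print(f"{why}\n  -> {answer}")

root_cause = five_whys[-1][1]
print(f"\nCandidate root cause: {root_cause}")
```

Note that this chain stopped at the fourth ‘why’: once an answer points at people and decisions rather than technology, further ‘whys’ usually add little.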

Final thoughts

Start your thought process by looking at the overall use case or scenario, identifying the ultimate goal, and working backward from there. Then break it down into smaller parts, applying the techniques discussed in this blog where applicable. Even though there is no fixed formula or sequence of techniques, you should always start by identifying missing information that could help you identify the root cause. If none can be found, ask questions and avoid making assumptions. Doing so will make your efforts more efficient and impactful with customers.