Part 1 in a 2-part series
One of my biggest learnings from teaching Big Data MBA classes at the University of San Francisco School of Management last semester was how to properly construct an actionable, measurable business hypothesis. A common mistake is starting with an overly simplified business objective such as:
- Improve customer subscription renewals by X%
- Reduce inventory costs by X%
- Improve customer “likelihood to recommend” by X%
- Improve on-time delivery by X%
- Reduce unplanned downtime by X%
The problem with these business objectives is that they don’t fully capture the complexity of the real business world. They are one-dimensional and only solve for a single variable. For example:
- I could improve customer subscription renewals by X% by simply not charging for renewals (probably not good for the long-term profitability of the business). Or…
- I could reduce inventory costs by X% by getting rid of all inventory (will likely eventually lead to sales, revenue, and customer satisfaction problems down the road). Or…
- I could improve on-time delivery by X% by buying more delivery trucks and hiring more drivers (again, hard on profitability).
You can quickly see that those solutions, while technically possible, are not realistic. Optimizing a single objective or a single point is quite easy because there are no conflicting objectives. The real business challenge – and the source of much innovation – is trying to optimize a decision across multiple variables. Let’s explore this further.
Economics of optimizing multiple, conflicting objectives
Say you want to reduce unplanned operational downtime. You could be an airline, an airport, a hospital, an entertainment venue, a rail or fleet operator, a utility, an oil platform, a hotel, or any number of operations where unplanned downtime has significant negative impacts on profitability, operations, and customer satisfaction.
If my only business objective was to reduce unplanned operational downtime along that single dimension, I could easily address that objective by just increasing spend on parts, inventory, and labor in order to fix E-V-E-R-Y potential problem that might occur. These might include replacing any parts that show wear and tear, lubricating and inspecting the parts constantly, having redundant inventory on every part and repair consumable, adding sensors to every component to capture every reading and vibration, and so on. Again, not realistic in the long term if you are trying to balance profits and operational costs.
So let’s broaden the “unplanned operational downtime” example to optimize business and operational performance across two conflicting dimensions: increasing uptime while reducing maintenance costs. Now it’s getting interesting, and here is where economics can help us.
We can create an economic value curve that helps us determine the point of optimization between two variables. The value curve in Figure 1 shows the relationship between uptime percentage and cost of maintenance.
Fig. 1: Economic value curve
The only way to improve operational uptime given the economic value curve is to spend more money on maintenance. That is, in order to move uptime from Up1 to Up2, we need to increase maintenance investment from C1 to C2, as in Figure 2.
Fig. 2: Traversing the economic value curve
Increasing maintenance spend from C1 to C2 increases uptime from Up1 to Up2, although the gain is unlikely to be linear as uptime approaches its theoretical limit.
Unfortunately, economics throws a little wrinkle into the relationships on the economic value curve, and that’s the “diminishing returns” dilemma. In economics, diminishing returns is the decrease in the marginal (incremental) output of a production process as the amount of a single factor of production is incrementally increased while the amounts of all other factors of production stay constant. And we can see in Figure 2 how it takes a much larger investment in maintenance costs (ΔC = C2 – C1) to achieve a much smaller incremental improvement in uptime percentage (ΔUp = Up2 – Up1).
For those who have not totally forgotten their college mathematics, you’ll recognize this as the formula for the slope of a line, m = (y2 – y1) / (x2 – x1), which here becomes m = (Up2 – Up1) / (C2 – C1). On a curve, that slope is your marginal return: the incremental uptime that each additional maintenance dollar buys. As the slope flattens toward zero along the curve, you are experiencing diminishing returns on your investments.
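The slope calculation above can be sketched in a few lines of Python. The spend and uptime numbers here are made up purely for illustration; the point is that the marginal return between successive points on the curve keeps shrinking.

```python
# Hypothetical points on a value curve: (maintenance spend in $K, uptime %).
# These numbers are illustrative, not real operational data.
curve = [(100, 90.0), (200, 95.0), (400, 97.5), (800, 98.7)]

def marginal_return(p1, p2):
    """Slope between two points on the curve: m = (Up2 - Up1) / (C2 - C1)."""
    (c1, up1), (c2, up2) = p1, p2
    return (up2 - up1) / (c2 - c1)

# Each successive slope is smaller: classic diminishing returns.
for p1, p2 in zip(curve, curve[1:]):
    m = marginal_return(p1, p2)
    print(f"${p1[0]}K -> ${p2[0]}K: {m:.4f} uptime points per $K")
```

Run against these sample points, the slope falls from 0.05 uptime points per $K to 0.003: every additional maintenance dollar buys less and less uptime.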
Digital transformation is about changing the economic value curve
The way to change the game and beat the law of diminishing returns is to re-engineer the sources of value creation to create a new economic value curve. That is, for the same C2 spend, how do I get an uptime percentage increase from Up2 to Up3 (Figure 3)?
Fig. 3: Transforming the economic value curve with data and analytics
How do we use data, analytics, and design to transform the economic value curve? Doing the same old things, even more quickly and more economically with robotics, does not transform the curve. What does transform the curve is re-engineering the process using data and analytics – using “intelligence.”
Amazon has digitally transformed its warehousing operations using non-conventional, analytics-inspired business and operational changes. One example is how Amazon’s warehouses randomly stock shelves on purpose. Instead of painstakingly grouping all the toilet paper by brand in one area, all the toothpaste in another, and all the dry cereals in yet another, Amazon places inventory in whatever space is most convenient (where convenience is determined by a stocking algorithm that considers open space, distance to that space, the size of the items to be stocked, and so on).
From the article “Amazon’s Prime Now Warehouses Randomly Stock Shelves on Purpose,” we learn:
“Every item is scanned and its location logged on computer. Amazon then uses software to scan orders, look up where all the items are stored on the shelves, and plot the most efficient and fastest route for a worker to take to collect them all. Ultimately, the random placement doesn’t matter as long as the scanning happens. It also speeds up the restocking of the shelves as workers don’t need to worry where products are placed.”
Note: This is how disk drives have stored information for ages, and it works perfectly as long as you have a catalog of where everything is stored (e.g., hashing, pointers, indices).
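The mechanism is easy to sketch: stock an item in any free slot, and let a catalog (here, a plain dictionary) remember where it went. The bin names, items, and `Warehouse` class below are invented for illustration, not Amazon’s actual system.

```python
import random

class Warehouse:
    """Toy model of random stocking with a catalog (like a disk index)."""

    def __init__(self, bins):
        self.free_bins = list(bins)
        self.catalog = {}  # item -> bin location, logged at stocking time

    def stock(self, item):
        # Any convenient open space will do; placement itself doesn't matter.
        b = random.choice(self.free_bins)
        self.free_bins.remove(b)
        self.catalog[item] = b  # the scan/log step is what makes this work

    def locate(self, item):
        # Constant-time lookup regardless of where the item physically sits.
        return self.catalog[item]

wh = Warehouse([f"A{i}" for i in range(10)])
for item in ["toothpaste", "cereal", "toilet paper"]:
    wh.stock(item)
print(wh.locate("cereal"))  # whichever bin it happened to land in
```

As with a disk drive’s index, restocking is fast because workers (or writes) never wait for a “correct” location, and retrieval is fast because the catalog answers in one lookup.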
In Part 2, we look at how to exploit these data and analytics technologies to create intelligent products and smart spaces that can self-monitor, self-diagnose, and self-heal.
This blog originally appeared on LinkedIn and is republished by permission.