Alan Greenspan, former head of America’s Federal Reserve, once famously confessed: “We really can’t forecast all that well, and yet we pretend that we can.” If the world’s top economists can’t get it right, then it’s hardly surprising that forecasting is a problem in business too. Forecasting is at its most challenging in sales, where month after month, sales reps churn out their revenue forecasts for management teams that have little or no faith in their accuracy. CSO Insights’ 2016 Sales Performance Optimization Study shows that the average win rate of forecast sales deals is only 45.8%. Flipping a coin would provide better odds!
The problem with sales forecasting lies in a combination of human bias in pipeline reporting and spreadsheet models that are not sufficiently sophisticated to take in all the possible factors that might be driving sales deals. Inaccurate revenue forecasts can potentially jeopardize a company’s share price, endanger its cash flow, and inflate inventory costs, so businesses are increasingly turning to a new technology—predictive analytics—for help. Predictive analytics is revolutionizing sales forecasting by replacing the limitations of human inference and bias with models based on machine-learning algorithms.
How predictive analytics improves sales forecasting
Sales forecasting with predictive analytics starts with the combining of internal customer data such as win/loss ratios, delay factors, close rates, and completeness of the sales process, with external data that indicate a customer’s propensity to buy (these data points could be as diverse as company revenue, executive changes, and social media activity). Forecasting algorithms then use machine learning to look for patterns in these large volumes of data, in ways and speeds not humanly possible. The relationships spotted in the data are then used to score each deal in the pipeline and predict its likely revenue with levels of accuracy reported to be as high as 82%.
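To make the scoring step concrete, here is a minimal sketch of how a trained model might turn a pipeline deal into a win probability. The feature names and weights are purely illustrative assumptions (a real system would learn them from historical win/loss data, not hard-code them):

```python
import math

# Hypothetical weights, standing in for what a forecasting algorithm
# might learn from historical win/loss data (illustrative values only).
WEIGHTS = {
    "close_rate": 2.0,           # rep's historical close rate (0-1)
    "process_completeness": 1.5, # fraction of sales-process steps done (0-1)
    "delay_factor": -1.8,        # how often the close date has slipped (0-1)
    "buying_signals": 1.2,       # external propensity-to-buy score (0-1)
}
BIAS = -1.0

def score_deal(deal):
    """Return the predicted win probability for one pipeline deal."""
    z = BIAS + sum(WEIGHTS[k] * deal[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function maps score to (0, 1)

deal = {"close_rate": 0.6, "process_completeness": 0.9,
        "delay_factor": 0.1, "buying_signals": 0.7}
print(f"Win probability: {score_deal(deal):.2f}")
```

Each deal in the pipeline gets a score like this, and summing the probability-weighted deal values yields the revenue forecast.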
This approach differs from forecasting with business intelligence tools and spreadsheets. Those traditional approaches rely on the human brain to infer correlations between the different factors (historical data) and the outcome (sales). However, the brain does not have sufficient power to spot links among thousands of variables or to detect nonlinear relationships.
Sales forecasting with algorithms: in practice
While some have claimed that predictive analytics will replace sales reps altogether in the forecasting process, in practice this is not the case. At a global software company, sales managers compare forecasts by sales reps with those output from algorithms and discuss variances with their reps, who then take a closer look at their pipeline and fine-tune their estimates. When analyzing actual sales scored against those forecast, opportunities that didn’t close are also examined to find new factors driving closure that are then digitized and added to the forecasting algorithm for increased accuracy.
A Swiss chemical company uses predictive analytics to enhance the accuracy of its sales forecasting by complementing experience-based decisions with model-based forecasting. Through this process of forecast validation, the company has been able to reduce inventory levels and costs while increasing product availability, delivery capabilities, and customer satisfaction.
Predictive analytics is also becoming a key tool in sales enablement by letting salespeople know how and when to communicate with prospects based on algorithms that leverage every imaginable variable that impacts a customer’s decision to buy. A leading IT networking company is using predictive analytics in this way to help account managers determine which customers to call on and which products to promote, based on deep insights into what their customers are likely to buy.
The bigger picture
In an earlier blog, Predictive Monthly: How A Little Bit Of Wizardry Can Transform Your Bottom Line, I explained that the key benefits of predictive analytics include not just objective and accurate predictions but also automated decision-making and the uncovering of new business opportunities. It’s for these reasons that predictive analytics is being adopted not just in sales but across many lines of business, including marketing, operations, HR, and finance. With the recent exponential increase in computer processing power and the rise of Big Data, predictive analytics is seen by many organizations as an ROI decision instead of a cost, because of the incredible value it can release from existing data and infrastructure.
Now that algorithms, machine learning, and Big Data can support sales forecasting, excuses are running out for getting it wrong!
The Digitalist Magazine is your online destination for everything you need to know to lead your enterprise’s digital transformation.
Read the Digitalist Magazine and get the latest insights about the digital economy that you can capitalize on today.
About Pierre Leroux
Pierre Leroux is the Director of Predictive Analytics Product Marketing at SAP. His areas of specialty include Data Discovery, Business Intelligence, Cloud applications, Customer Relationship Management (CRM), and ERP.
From the first teaser trailer to the final level, the gaming industry demonstrates that it knows exactly how to engage and cultivate the interest (and loyalty) of its customers. And, it’s a goldmine: there are an estimated 1.5 billion gamers, and this year revenue from the global games industry will be around $100 billion.
So, what can other industries learn from this example? Developers and game designers are under huge pressure to create a new experience each time, one that gamers are willing to pay for. Competition in the ceaselessly growing industry is enormous. For every hit, there are countless games no one plays. This constant “survival of the fittest” has a major advantage – many games offer a customer experience that has been perfected down to the finest detail.
Here are five ways gaming takes customer engagement to the next level.
1. Anticipation
Popular franchises like Pokémon, Grand Theft Auto, and Battlefield are constantly preoccupied with building a relationship with their customers. Real fans know well in advance that they want to buy a game. They are teased with new features, sneak peeks, trailers, artwork, events, and promotional campaigns. Early birds get discounts and unique content, like new levels and rare weapons.
Games companies understand that customer retention is an ongoing process. And they know that it’s easier and cheaper to sell something to an existing customer than to a new one. Franchises are strong brands with recognizable characters that appeal to the gamer. It’s only a matter of galvanizing the fans into action at the right moment – for example, for the holiday season.
Foster your existing customers, whet their appetites with news of your new products or services, and reward the early birds. And just keep on building a strong brand!
2. First impressions
New mobile games appear daily and can often be downloaded for free. However, research has shown that around a quarter of the apps are opened only once. For that reason, making a good impression is very important for game developers. After all, you don’t get a second chance. That’s why a game is tested exhaustively before it’s launched.
A messy or unintuitive interface, the lack of a tutorial, long loading times, poor performance, irritating advertising, and intrusive in-app selling can motivate people to delete a game instantly. But factors outside the game also play a role, like the description of its features, overall ratings, and the amount of space needed for installation.
Developing games is a question of testing, optimizing, and testing again. And that’s how companies should also approach their customer experience: only perfect is good enough.
3. Immersion
What actually makes a game good? Tastes differ, just as with films or music. But whether it’s a futuristic shooter, a strategic RPG, or an online word game with friends, a game has to be compelling or “immersive.” There’s no magic wand for creating an immersive experience. But there are certainly elements that contribute to it:
Good gameplay is crucial. Some games feature deep gameplay mechanics or have a high difficulty level, others opt for addictive elements like puzzles, and still others put the emphasis on the competitive factor (who’s the best?). Whatever they do, the game’s mechanisms must entertain the player.
A brilliant-looking game that’s enhanced with varied musical and sound effects creates a credible world that gamers want to plunge into. If you constantly encounter visual errors and strange animations, or you’re irritated by poor voice actors and repetitive music, it detracts from the immersion.
High visual production values are also a unique selling point for companies, and not only in terms of their products and services. So, for instance, a slow or ugly (mobile) website can upset the customer journey and can motivate potential customers to head to a competitor.
A huge attraction in many games is the story. Interesting characters, exciting developments, and humorous dialogue ensure that the gamer feels emotionally involved and is curious to discover what will happen. The choices you make, along with the way you play, lead to a personal experience.
Marketing is increasingly concerned with emotion. You persuade the modern consumer not with superlatives but with storytelling: a good story that he or she wants to be part of. The story and the interaction between customer and brand become one and the same, and buying stems from that.
Immersion is also very important for the customer experience. The process from orientation to buying and use doesn’t only have to be smooth. It must also be absorbing, personal, and even fun.
4. Customer behavior
Game developers are extremely proficient at analyzing customer behavior. The free-to-play model with in-app purchases has accorded this a completely new dimension. How long does the average gamer play? What type of content and what offers open wallets? What notifications are effective to get you to play again?
Research has shown that just 0.19 percent of mobile gamers account for half of all revenues. They are referred to jokingly as “whales.” Games companies pull out all the stops to land these meaty fish. Sometimes digital items are even developed specially for one whale. You might wonder whether this is ethical, but that gamer is certainly given the royal treatment.
Analyzing customer behavior offers valuable insights with which you can improve the customer experience. And it helps each company determine which customers are really worthwhile.
5. Feedback
Certainly with mobile games, it’s entirely normal to ask the gamer for feedback. This has all sorts of benefits. A positive review can be just enough to persuade people to try an app. But bad reviews with complaints and criticism can also be very valuable. A developer can then see immediately what he has to change.
Even better is if the feedback arrives during the development. That’s why players of previous games are often invited to beta-test a new title – an excellent way to bind gamers at an early stage and get focused feedback from the target group. It’s also an economical way to spot bugs.
Every company benefits from feedback. Take every sign from customers seriously, both positive and negative. And let them test new products, for instance in exchange for a discount.
Business can learn lots from the games industry, but the customer experience is not a game. The biggest difference? Unfortunately your company doesn’t have an endless number of lives. Once your customers walk away, it really is game over.
Using a website instead of a brick-and-mortar retail store to sell products can be a great way to make money while keeping your overhead low. No need to pay rent or utilities on a storefront and hiring people to manage it—with an e-commerce site, you can open a virtual store and begin selling your wares almost immediately.
While opening an online store is similar in many ways to opening a physical one, the mere act of creating a website won’t provide you with instant sales. Yes, your overhead is much lower and moving product through your pipeline is more seamless; however, there are still many things that can go wrong if you aren’t paying attention or thinking through every angle.
Here are three of the most common mistakes people make when building an e-commerce site:
Not knowing where your customers are online
As with traditional marketing, not every channel is right for your business. For example, trying to sell mass-produced plastic items on a DIY website like Etsy may not net you much success. However, if fun, hand-crafted items are your thing, Etsy would be a great place to start.
It is absolutely critical that you do your homework before jumping into e-commerce. Who buys what you’re selling? Where would they go online? Look for websites that carry items similar to the ones you create. Browse their selections and see what their Internet presence is like to get a feel for what you should be doing.
Not maximizing social media
The marketing gods blessed the world of advertisers everywhere when social media came into existence. There are many different platforms that reach different types of people, which makes reaching your target audience much easier. This does take a little bit of awareness, however — for example, methods that appeal to a typical Facebook user might not work as well on Twitter or Instagram, and vice versa.
Social media is where you can build a strong brand voice and interact directly with potential customers. Understand the difference between the platforms and be as active in your community as possible in order to better serve your customers. Many e-commerce website builders integrate the major social media platforms directly on your page.
Getting too far ahead of yourself
In the early stages of any business, it’s important to pace yourself and utilize your time and resources effectively. One of the most common mistakes is purchasing too much inventory before you know what demand is going to be—you don’t want to end up with a garage or storage unit full of product with no one around to buy! To get a sense of how much demand a particular item has, search for it on eBay or other e-tailers and see how many have sold in the last month.
Opening an e-commerce store is very nuanced and comes with many obstacles. Avoiding these three pitfalls will help lead to a successful launch of your e-commerce store.
About Jake Anderson
Awestruck by Star Trek as a kid, Jake Anderson has been relentless in his pursuit of covering the big technological innovations that will shape the future. A self-proclaimed gadget freak, he loves getting his hands on every gadget he can afford. Contact Jake on Twitter @_ShoutatJake.
We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.
Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.
In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The mere fact of having to defend against an accusation of bias can linger long after the issue itself is settled.
Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.
That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fair, more productive, and ultimately better for both business and the broader society.
Bias: Bad for Business
When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.
Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.
As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.
Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.
AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.
That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.
At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.
Look for Where Bias Already Exists
In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.
There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.
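The check Baldridge describes can be sketched very simply: measure how strongly each candidate input variable correlates with a protected attribute, and flag anything above a threshold for human review. This is a toy illustration with made-up data and a hypothetical threshold, not a regulatory-grade test:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_proxies(features, protected, threshold=0.5):
    """Return names of input variables that correlate strongly with a
    protected attribute and therefore need review before model training."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) >= threshold]

# Toy data: one row per applicant; 'protected' encodes group membership (0/1).
protected = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "income":         [50, 60, 30, 35, 55, 32, 58, 31],  # tracks the group
    "years_employed": [3, 7, 5, 2, 6, 4, 3, 5],          # roughly independent
}
print(flag_proxies(features, protected))
```

Here "income" moves almost in lockstep with group membership and gets flagged as a potential proxy, while "years_employed" does not.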
Code Is Only Human
The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.
“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)
Don’t Do What You’ve Always Done
To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.
SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as “No women need apply,” which everyone knows is discriminatory, but phrases like “outspoken” and “aggressively pursuing opportunities,” which are proven to attract male job applicants and repel female applicants, and words like “caring” and “flexible,” which do the opposite.
Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
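In spirit, the flagging step can be sketched in a few lines. The word lists and suggested swaps below are illustrative assumptions on my part; in the initiative described above, the categorizations come from humans labeling real postings and an algorithm learning from them:

```python
# Illustrative word lists only; a production system would learn these
# categorizations from labeled job-posting data rather than hard-code them.
MASCULINE_CODED = {"outspoken": "communicative", "aggressively": "proactively",
                   "dominant": "leading"}
FEMININE_CODED = {"caring": "supportive", "flexible": "adaptable"}

def flag_gendered_terms(posting):
    """Flag gender-coded words in a job posting and suggest neutral swaps."""
    suggestions = {}
    for word in posting.lower().split():
        word = word.strip(".,;:!?")
        for coded in (MASCULINE_CODED, FEMININE_CODED):
            if word in coded:
                suggestions[word] = coded[word]
    return suggestions

posting = "We want an outspoken engineer, aggressively pursuing opportunities."
print(flag_gendered_terms(posting))
```

A real system would also handle multi-word phrases and context, which is exactly where the human intervention mentioned above still comes in.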
Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.
Look Beyond the Obvious
AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.
To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.
Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.
That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.
Context Matters—and Context Changes
Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.
Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word sick to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience if someone confuses the two meanings, to say the least. Correcting for that, or adding more rules to the algorithm, such as “The word sick appears in proximity to positive emoji,” takes human oversight.
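A rule like the one Baldridge describes might look like the following sketch. The emoji set, window size, and return labels are all assumptions for illustration; a production model would learn such signals rather than hand-code them:

```python
POSITIVE_EMOJI = {"🔥", "😍", "🤘", "👍"}

def classify_sick(post):
    """Rule of thumb: 'sick' appearing near a positive emoji is slang praise;
    otherwise treat it as a possible health reference."""
    tokens = post.split()
    for i, tok in enumerate(tokens):
        if tok.strip(".,!?").lower() == "sick":
            window = tokens[max(0, i - 2): i + 3]  # two tokens either side
            if any(ch in POSITIVE_EMOJI for t in window for ch in t):
                return "slang_positive"
            return "health_related"
    return "no_mention"

print(classify_sick("That new level is sick 🔥"))         # slang_positive
print(classify_sick("Staying home, feeling sick today"))  # health_related
```

Even this tiny rule needed a human to notice the ambiguity in the first place, which is the point: correcting for dialect and slang takes deliberate oversight.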
Moving Forward with AI
Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.
In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.
Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.
O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.
As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!
Imagine the following situation: you are analyzing and gathering insights about product sales performance and wonder why a certain area in your country is doing better than others. You deep dive, slice, dice, and use different perspectives to analyze, but can’t find the answer to why sales are better for that region.
You conclude you need data that is not available in your corporate systems. Some geographical data that is available through Hadoop might answer your question. How can you get this information and quickly analyze it all?
Bring analytics to data
If we don’t want to go the traditional route of specifying, remodeling the data warehouse, and uploading and testing data, we need a whole new approach to data warehousing. What we ultimately need is a kind of semantics that allows us to remodel our data warehouse in real time and on the fly – semantics that allows decision makers to leave the data where it is stored without populating it into the data warehouse. What we really need is a way to bring our analytics to the data, instead of the other way around.
So our analytics wish list would be:
Access to the data source on the fly
Ability to remodel the data warehouse on the fly
No replication of data; the data stays where it is
Not losing time with data-load jobs
Analytical processing done in the moment with pushback to an in-memory computing platform
Drastic reduction of data objects to be stored and maintained
Elimination of aggregates
Traditional data warehousing is probably the biggest hurdle when it comes to agile business analytics. Though modern analytical tools can add data sources on the fly and blend different data sources, they remain front-end analytical tools. When additional data must be available to multiple users, or is huge in scale and complexity, analytical tools lack the computing power and scalability needed. It simply doesn’t make sense for each user to blend the same complex, additional data individually.
A data warehouse, in this case, is the answer. However, there is still one hurdle to overcome: A traditional data warehouse requires a substantial effort to adjust to new data needs. So we add to our wish list:
Adjust and adapt the modeling
Develop load and transformation script
Set up scheduling and lineage
Test and maintain
In 2016, the future of data warehousing arrived. In-memory technology with smart, native, real-time access now connects analytics to the data warehouse, and the data warehouse to core in-memory systems. Combined with pushback technology, where analytical calculations are pushed back onto an in-memory computing platform, analytics is brought back to the data. End-to-end in-memory processing has become reality, enabling true agility. And end-to-end processing is ready for the Internet of Things at petabyte scale.
Are we happy with this? Sure we are! Does it come as a surprise? Of course not! Digital transformation just enabled it!
Native, real-time access for analytics
What do next-generation data warehouses bring to analytics? Well, they allow for native access from top-end analytics components through the data warehouse and all the way to the core in-memory platform with our operational data. Even more, this native access is real-time. Every analytics-driven interaction from an end-user generates calculations. With the described architecture, these calculations are massively pushed back to the core platform where our data resides.
The same integrated architecture is also a game changer when it comes to agility and data optimization. When new, complex data is required, it can be added without data replication. Since there is no data replication, the data warehouse modeling can be done on the fly, leveraging the semantics. We no longer have to model, create, and populate new tables and aggregates when additional data is required in the data warehouse, because there are no new tables needed! We only create additional semantics, and this can be done on the fly.
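The "semantics instead of new tables" idea can be illustrated with a plain SQL view, here using SQLite as a small stand-in for the in-memory platform (an assumption for the sketch; the article's architecture is not SQLite). The view adds a semantic layer without copying any rows, and the aggregation runs where the data lives when the view is queried:

```python
import sqlite3

# SQLite stands in for the in-memory platform holding operational data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("North", 120.0), ("North", 80.0), ("South", 50.0)])

# Remodeling "on the fly": a view is pure semantics -- no rows are copied,
# no aggregate table is populated. The SUM is computed at the source
# each time the view is queried (the pushback idea in miniature).
con.execute("""CREATE VIEW revenue_by_region AS
               SELECT region, SUM(amount) AS revenue
               FROM sales GROUP BY region""")

print(con.execute("SELECT * FROM revenue_by_region ORDER BY region").fetchall())
```

Dropping or redefining the view changes the model instantly, with no load jobs and no stale copies, which is exactly the agility the integrated architecture promises at enterprise scale.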