Big Data Can Mean Big Returns in Retail

Lindsey Nelson

Big data for retail means a chance to see why a sale didn’t occur. Is it product selection? Pricing? Store display? Ineffective promotional material?

Before, this information was hard to track. But with the advent of big data and in-memory computing, two technologies ideally suited to collecting and analyzing the unstructured data retail generates, retailers are poised to gain significant insight into sales.

For example, web logs are not the typical financial data people associate with the term “big data”. Yet this web information shows how consumers navigate through an online storefront, and it can be combined with existing business intelligence (BI) applications and sales data to generate clear insight.

Retailers now have the opportunity to see website traffic for a particular product and compare it to sales. Before, if a product wasn’t selling, it would be removed from the line. Now, managers can adjust pricing, ensure enough colors and sizes are stocked, and fine-tune any other aspect that turns a look into a sale.

This analytical approach to customer decisions is not limited to the web; some retailers are now using technologies to analyze foot traffic throughout their physical stores. These maps, combined with sales data, make way for new applications focused on optimizing store layout and product placement.

Retailers rely heavily on in-store and online purchases, but neither succeeds unless products are delivered on time. Predictive analysis applications that combine the first day’s delivery data, past delivery performance, and real-time traffic conditions can produce revised delivery schedules, allowing retail managers to take immediate corrective action.
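
To make the idea concrete, here is a minimal sketch of such a calculation. The moving-average model and variable names are illustrative assumptions, not a description of any particular product:

```python
from statistics import mean

def revised_eta(planned_hours, past_delays_hours, traffic_delay_hours):
    """Estimate a revised delivery time (in hours) from the planned schedule.

    planned_hours: originally scheduled transit time
    past_delays_hours: delays observed on earlier deliveries on this route
    traffic_delay_hours: current delay reported by a real-time traffic feed
    """
    # Assume future deliveries inherit the route's average historical delay...
    historical_delay = mean(past_delays_hours) if past_delays_hours else 0.0
    # ...plus whatever congestion the live feed reports right now.
    return planned_hours + historical_delay + traffic_delay_hours

# A route planned at 6 hours, with past delays of 0.5-1.5 hours and 45 minutes
# of live congestion, yields a revised ETA a manager can act on immediately.
print(revised_eta(6.0, [0.5, 1.0, 1.5], 0.75))  # 7.75
```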

This is incredibly advantageous for retail managers, preparing them to better meet customer expectations and maintain high operational efficiency.

This operational efficiency is essential, because retailers want to know what their customers need before the customers even know they need it.

For example, using big data, retailers can now see, through data from store cards, cashed-in coupons, and purchase history, when a customer may need a refill on a product. This gives retail marketers the upper hand: they can send promotional material to customers who are running low, urging them to buy the refill.

As Mark Ledbetter put it in his recent article, “Go Big or Go Home: How Big Data Can Bring Big Sales”, “How retailers use it to change their business, how they take advantage of it to grow sales…is only limited by their imagination.”

About Lindsey Nelson

Lindsey Nelson currently supports Content and Enablement at SAP. Prior to her current role, she was responsible for Thought Leadership Content Strategy and Pull Marketing Strategy at SAP.

What Do Business Users Really Want From BI?

Narinder Dhillon

When I joined SAP a couple of months ago, I was fascinated by the scale of the software options available to our customers. As the face of support of some of our analytics solutions, I started to wonder “What do business users really want from a business intelligence (BI) solution?”

Some will want a Google-like experience, some want automated reporting, some want the ability to play with their corporate data and then ask questions of the data to find new insights, and still others just want smarter Microsoft Excel without the multiple versions and manual errors.

The reality is, the answer will vary depending on the type of user. They might be a low-touch or a high-touch user, or somewhere in the middle. They might be an office worker using a laptop or a mobile user using an iPad or smartphone.

Four key BI needs

Vendors should provide business intelligence that meets whatever the end user requires. From the discussions I’ve had with customers and colleagues so far, BI needs seem to break down into four key areas:

  1. Corporate BI: While it’s nice to give users the power to do whatever they want, organizations need to maintain corporate governance and ensure that BI looks great while also providing a single version of the truth. IT needs to deliver standard corporate content that is secure, automated, and meets corporate and legal standards for reporting. This includes corporate logos, color schemes, disclaimers, versions, and so on.
  2. Trusted BI: Users will use BI only if they trust the numbers and can act with confidence that they’re making the most informed decision. If they find discrepancies in the numbers, they’ll question where the numbers came from, or worse, they won’t use the tool again.
  3. Agile BI: While most standard BI content needs to be created to a corporate standard, users also need the ability to create new views of data and prototype new visualizations that combine corporate, personal, and external data. For example, most people in marketing look at both internal and external views to understand customer behavior and spot new niches and opportunities.
  4. Mobile BI: Mobile BI is a must for modern mobile workers, who need the same information on the go to make informed decisions. Which branch has the highest sales figures? Which asset needs to be maintained? Which shares should I buy today? I recently bumped into a Gartner analyst at a Christmas party, and he told me that wearables and the Internet of Things are where the smart money is going. Some software vendors are already building proofs of concept for smartwatches and smart glasses!

Business intelligence needs to be natural for the user, just like using Microsoft Office. Ease of use is the key to successful BI adoption. Indeed, poor adoption is one of the largest barriers for organizations implementing a successful BI platform.

Having all of the above delivered on a scalable, trusted, flexible platform can help…what do you think?

To learn more about mobile analytics, read the other blogs in our mobile series.

The Future Of Support: Empathetic, Proactive Services For The Real-Time Enterprise

Andreas Heckmann

Not so long ago, people with lots of technical training and experience were the main users of technology. Product support mirrored this audience as technicians offered just-the-facts responses to technical questions through iterative, ticket-based communications.

That type of support won’t give customers what they need in 2017 and beyond. With technology changes such as cloud computing, mobile devices, hyperconnectivity, and the Internet of Things reshaping the business landscape, the user community has changed. Solution providers must reimagine support to meet this new reality.

As Generations X and Y populate the workplace, user expectations for technology are shifting. People who click to complete a purchase on Amazon, swipe left or right to meet potential dates, and ask their smartphone to answer questions – whenever and however often they want – are unwilling to accept traditional methods of delivering support. These approaches are unsatisfactory, especially for digital enterprises that need to support a real-time business.

Technologies enabling responsive support

In 2016, I wrote about the features that organizations needed to adopt to provide users with responsive and meaningful support services. Clearly, people should be able to connect with their support organizations through multiple channels because ticket systems and chat windows alone are not enough. Omnichannel support, in which users can find answers in the channel of their choice (or multiple channels, if they like), is essential in the delivery of a delightful user experience.

Real-time support is another goal. Today’s highly mobile workforce expects to get answers anytime and anywhere. Always-on support can be enabled through chat services that provide access to support professionals for discussions about functional areas of the solution.

Innovative businesses are experimenting with new features that will bring support into the product itself. Instead of visiting a support portal or calling a technician when they have a question or a problem, users can click on a link or make a voice-activated request to get help, wherever they are within the product. Beyond that, support services must become more proactive, offering assistance before the user is facing an issue. Even predictive support is on the horizon.

All of these features will help enhance the support process and deliver the experience that users expect. But as we look ahead, the future of support will demand more than just new technical bells and whistles.

Delighting customers. Always.

Solution providers must embrace a new vision for delighting customers by always putting the customer first. Meeting this goal requires companies to improve the quality of interactions between customers and technicians by introducing empathy into every support transaction.

Empathy is the ability to understand, relate to, and share the feelings of others. Support technicians who can empathize with users understand why an answer or resolution is needed and provide respectful assistance that helps users get back to work as soon as possible.

To deliver proactive, innovative support services that are also empathetic, solution providers may need to rethink and reimagine how they staff their support teams. Where we now divide support experts by functional area alone, empathetic support may require different experts in each area. Perhaps, for example, a technical engineer would handle the initial occurrence of a problem by performing the root-cause analysis and troubleshooting the system. Knowledge management experts, in the meantime, would productize and humanize information.

Once this data is moved into the product’s technology infrastructure, machine learning technologies can consume it and make it available for user self-service through a fully automated chatbot. This would allow people to resolve simpler support issues through meaningful dialog with the bot, leaving human-to-human communication for more serious problems. Support personnel interacting with users would then need excellent communication, conflict resolution, and situational awareness skills to handle the remaining service requests.
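
As a rough illustration, such a self-service bot could be as simple as matching a question against a curated knowledge base and escalating everything else to a human. The entries and threshold below are invented, a sketch rather than any vendor’s actual implementation:

```python
import re

# Tiny illustrative knowledge base: topic keywords -> canned answer.
KNOWLEDGE_BASE = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "export report pdf": "Open the report and choose File > Export > PDF.",
}

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question, threshold=0.5):
    words = tokens(question)
    # Score each entry by the fraction of its keywords found in the question.
    def score(key):
        keys = tokens(key)
        return len(words & keys) / len(keys)
    best = max(KNOWLEDGE_BASE, key=score)
    if score(best) >= threshold:
        return KNOWLEDGE_BASE[best]
    return "Routing you to a support engineer."  # human-to-human for the rest

print(answer("How do I reset my password?"))    # answered by the bot
print(answer("The system crashes on startup"))  # escalated to a person
```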

By providing empathetic support, organizations can help their customers incrementally realize higher value from their technology solutions. Ultimately, companies buy technology to add value to their businesses – and the more effectively our support services can make that goal a reality, the better the future of support will be.

For more insight on support solutions, see Farewell To The Landline: The Future Of Customer Service.

About Andreas Heckmann

Andreas Heckmann is head of Product Support at SAP. You can follow him on Twitter, LinkedIn, and WeChat at AndHeckmann.

How AI Can End Bias

Yvonne Baur, Brenda Reid, Steve Hunt, and Fawn Fitter

We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.

Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.

In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The mere fact of having to defend against an accusation of bias can linger long after the issue itself is settled.

Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.

That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fair, more productive, and ultimately better for both business and the broader society.

Bias: Bad for Business

When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.
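For illustration, even a toy model captures the pattern described here: fit a trend to historical prices and extrapolate one period ahead. The figures below are invented, and real systems use far richer models and features:

```python
# Toy sketch: ordinary least-squares trend over historical material prices,
# extrapolated one period ahead. All numbers are invented for illustration.
prices = [100.0, 102.5, 101.8, 104.1, 106.0, 107.2]  # price per period

n = len(prices)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(prices) / n
# Closed-form slope and intercept for a straight-line fit.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, prices)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

next_price = intercept + slope * n  # extrapolate to the next period
print(f"Predicted next-period price: {next_price:.2f}")
```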

Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.
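A minimal sketch of that first, simple case might recommend whatever items most often co-occurred with the current basket in past transactions. The transaction data below is invented for illustration:

```python
from collections import Counter

# Invented purchase history: each set is one past transaction.
transactions = [
    {"laptop", "mouse", "laptop bag"},
    {"laptop", "mouse"},
    {"laptop", "usb hub"},
    {"monitor", "usb hub"},
]

def recommend(basket, k=2):
    # Count items that co-occurred with the current basket in past baskets.
    counts = Counter()
    for past in transactions:
        if basket & past:
            counts.update(past - basket)
    return [item for item, _ in counts.most_common(k)]

print(recommend({"laptop"}))  # e.g. ['mouse', 'laptop bag']
```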

As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.

Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.

AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.

That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.

At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.

Look for Where Bias Already Exists

In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.

There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.
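A simplified version of such a check might flag any input variable whose correlation with a protected attribute exceeds a threshold. The column names, data, and threshold below are illustrative assumptions, not a regulatory standard:

```python
import pandas as pd

def flag_proxy_features(df, feature_cols, protected_col, threshold=0.4):
    """Return features whose correlation with the protected column is too high."""
    flagged = {}
    for col in feature_cols:
        corr = df[col].corr(df[protected_col])
        if abs(corr) > threshold:
            flagged[col] = round(corr, 2)
    return flagged

# Invented toy data: one feature acts as a proxy for the protected group.
df = pd.DataFrame({
    "zip_code_income": [30, 32, 80, 85, 31, 82],
    "years_employed":  [2, 5, 3, 6, 4, 5],
    "protected_group": [1, 1, 0, 0, 1, 0],  # encoded protected attribute
})
print(flag_proxy_features(df, ["zip_code_income", "years_employed"],
                          "protected_group"))
# Flags zip_code_income as a likely proxy; years_employed passes.
```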

Code Is Only Human

The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.

“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)

Don’t Do What You’ve Always Done

To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.

SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as “No women need apply,” which everyone knows is discriminatory, but phrases like “outspoken” and “aggressively pursuing opportunities,” which are proven to attract male job applicants and repel female applicants, and words like “caring” and “flexible,” which do the opposite.

Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
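A stripped-down sketch of that flag-and-suggest step might look like the following. The word lists here are tiny illustrations; as described above, the real initiative learns them from human-categorized postings at much larger scale:

```python
# Illustrative mapping of gender-coded words to neutral alternatives.
GENDERED_TERMS = {
    "outspoken": "communicative",
    "aggressively": "proactively",
    "dominant": "leading",
    "ninja": "expert",
}

def review_posting(text):
    """Flag gender-coded words and suggest neutral alternatives."""
    suggestions = []
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        if word in GENDERED_TERMS:
            suggestions.append((word, GENDERED_TERMS[word]))
    return suggestions

posting = "We want an outspoken engineer, aggressively pursuing opportunities."
for found, neutral in review_posting(posting):
    print(f"Consider replacing '{found}' with '{neutral}'")
```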

Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.

Look Beyond the Obvious

AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.

To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.

Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.

That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.

Context Matters—and Context Changes

Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.

Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word “sick” to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience if someone confuses the two meanings, to say the least. Correcting for that, or adding more rules to the algorithm, such as “The word sick appears in proximity to positive emoji,” takes human oversight.
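A toy version of that rule, with the window size and emoji set as assumptions:

```python
# Treat "sick" as positive slang when it appears near a positive emoji,
# and as negative otherwise. Real systems learn such context from data;
# the window size and emoji set here are assumptions for illustration.
POSITIVE_EMOJI = {"🔥", "😍", "🙌", "👍"}

def classify_sick(tokens, window=3):
    """Return 'positive' if 'sick' occurs within `window` tokens of a positive emoji."""
    for i, tok in enumerate(tokens):
        if tok.lower() == "sick":
            neighborhood = tokens[max(0, i - window): i + window + 1]
            if any(t in POSITIVE_EMOJI for t in neighborhood):
                return "positive"
            return "negative"
    return "no occurrence"

print(classify_sick("that concert was sick 🔥".split()))          # positive
print(classify_sick("I stayed home feeling sick today".split()))  # negative
```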

Moving Forward with AI

Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.

In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.

Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.

O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.

As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Yvonne Baur is Head of Predictive Analytics for SAP SuccessFactors solutions.

Brenda Reid is Vice President of Product Management for SAP SuccessFactors solutions.

Steve Hunt is Senior Vice President of Human Capital Management Research for SAP SuccessFactors solutions.

Fawn Fitter is a freelance writer specializing in business and technology.


Big Data: Better Than Big Muscles at Kinduct

Stephanie Overby

Travis McDonough has always been looking for a competitive edge. As an amateur athlete “on the small side,” he sought other ways—exercise, nutrition, strategy—to get ahead.

Today McDonough is the CEO of Kinduct, a provider of cloud-based software that analyzes data from wearables, electronic medical records, computer vision solutions, and more to assess and make recommendations about physical human performance. Kinduct provides 100 professional sports organizations, including the five major sports leagues in North America, with intelligence to make decisions about their athletes and training programs.

Digital Fills a Gap

A chiropractor by training, McDonough owned and operated a network of sports rehabilitation clinics, where he found that patients retained only a fraction of what they were instructed to do through text or conversation. “As we treated athletes, we realized there was a gaping hole in the industry for technology [to fill],” he says.

McDonough first launched a company to create 3D videos designed to help his athlete patients better understand their injuries and the resulting therapy. The videos, delivered by text or e-mail, would illustrate what happens inside the human body when it experiences whiplash, for example.

“We quickly realized we couldn’t just be a content company and push information without understanding more about the athlete,” he says. Athletes and their trainers collected a massive amount of individual health and performance data that was available to be tapped from electronic medical records, wearable devices, and computer vision-based tracking systems that measure and record information such as how fast an athlete is running or jumping. “We needed to be agnostic and aggressive consumers of all kinds of data sources in order to push more targeted programs to our clients,” he says. So McDonough recruited his brother’s brother-in-law (vice president of product, Dave Anderson) to develop software to make sense of it all.

Innovate a Better Athlete

The software is suited for healthcare and military applications: the Canadian Armed Forces uses it to deliver exercise, wellness, and nutrition programs to its troops. But McDonough knew that the world of professional sports would provide his most eager customers.

Professional sports teams use Kinduct’s analytics to reduce injury and win more games.

“The sports world is willing to embrace innovation more quickly than other markets, like healthcare, that are slower-moving. And that’s where our passion lives. Many of us are sports fanatics and have been athletes,” says McDonough of the company’s 70 employees. Kinduct’s first customers were National Hockey League (NHL) teams, followed in short order by the National Basketball Association (NBA).

For its professional sports clients, Kinduct has uncovered more than 100 novel correlations. Most are closely guarded secrets, but several have become public. The company found, for example, that when a basketball player’s sleep falls below a certain threshold, there is a strong correlation with reduced free throw percentages two days later. That discovery led one NBA team (McDonough won’t say which) to focus on getting players to bed on time and making travel schedule changes to enable the requisite rest.

Kinduct software also found correlations for hockey teams. It demonstrated to a leading hockey team that better grip strength was likely to lead to harder and faster shots on goal. Moreover, when the system ingested three years of historical computer vision information, it found that a player’s ability to slow down dramatically affects the chances of soft tissue injuries, which are costly to professional sports teams and athletes. The software can send an alert when it spots a trend that could predict the possibility of such an injury.
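The underlying statistics can be as simple as a correlation coefficient over aligned measurements. The numbers below are invented, since Kinduct’s actual findings and thresholds are proprietary:

```python
# Invented data: hours of sleep two nights before a game versus that game's
# free-throw percentage, for one player. Requires Python 3.10+ for
# statistics.correlation (Pearson's r).
from statistics import correlation

sleep_hours    = [8.1, 7.9, 6.2, 5.8, 8.4, 6.0, 7.5, 5.5]
free_throw_pct = [0.82, 0.80, 0.68, 0.65, 0.85, 0.70, 0.78, 0.63]

r = correlation(sleep_hours, free_throw_pct)
print(f"Pearson r = {r:.2f}")  # strongly positive for this invented data
if r > 0.7:
    print("Flag: short sleep predicts reduced free-throw percentage.")
```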


The software “will never replace the experts in the trenches,” says McDonough. “But we are able to arm coaches and trainers with the intelligence necessary to make more informed decisions. Technology will never replace the power of a good relationship.”

Think a Few Plays Ahead

Kinduct is based in McDonough’s hometown of Halifax, Nova Scotia, which boasts five universities, strong government subsidies, a low cost of living, and, for Kinduct’s predominantly U.S.-based customers, a favorable currency exchange rate. Despite these advantages, Halifax isn’t widely known for its digital innovators. “We’ve got a huge chip on our shoulder,” says McDonough. “We want to prove that we’re just as capable of becoming a global success as companies elsewhere,” such as those in Silicon Valley or London.

The Kinduct platform can help athletes or medical patients improve their condition or performance.

Nevertheless, McDonough spends significant time in Silicon Valley meeting with investors and looking at potential U.S. expansion (Kinduct closed a US$9 million Series A investment led by Intel Capital in October). “There’s a huge benefit to growing in Nova Scotia,” he says, “but we also need to be in the epicenter of the tech space.”

McDonough has big ideas for Kinduct’s future, thanks to the explosion of health- and fitness-tracking devices. “We can pull all the data in and, when we see a negative pattern, provide the user with the exact roadmap they need to follow to improve their condition or performance,” he says. “That’s equally as useful to a professional football player or an Olympic athlete as it is to someone recovering from a knee replacement or living with type 2 diabetes.”

Kinduct has 16 projects underway to measure the impact of the platform in helping individuals manage conditions like peripheral vascular disease and cognitive decline. “We want to show how the platform can empower and engage patients,” says McDonough.

Go Big or Go Home

Meanwhile, however, McDonough intends “to dominate the sports space. That is our bubble wrap of credibility, and we can leverage that to do other things.”

Focus was never a strong suit for McDonough, who struggled with dyslexia and ADD as a kid. “Thank God for sport, which helped to channel my energy,” he says. But that wandering mind, he says, has also been an asset. “Like a lot of ADD sufferers, I have a lot of imagination,” he says. For balance, he’s hired a leadership team that keeps him grounded, and he has assembled a board of experienced business and technology leaders. “They have the institutional knowledge in how to scale,” he says.

McDonough is blunt: right now, he’d rather be innovative than profitable. “We’re in this to go big. That means carrying a burn rate, hiring aggressively, and investing in research,” he says. “We’re lucky enough to be in locker rooms with these teams and close to some of the best in the business in terms of medicine and training and data science. That’s helping us to produce our future roadmap.” D!

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.
