
The Super Materials Revolution

Dan Wellers

Thousands of years ago, humans discovered they could heat rocks to get metal, and it defined an epoch. Later, we refined iron into steel, and it changed the course of civilization. More recently, we turned petroleum into plastic, with all that implies. Whenever we create new materials that push the limits of what’s possible, we send the world down an entirely new path.

Today, we’re on the verge of a revolution in materials science that will transform the world yet again. Scientists have developed tools that make it possible to design, build, and shape new “super materials” that will eclipse what we once believed were physical limits, create previously unimaginable opportunities, and expand the capabilities of what we already think of as exponential technologies in ways limited only by our imaginations.

Super strength in a pencil

The materials of the future are already being made in the present. One astonishing example is graphene, derived from the same graphite that’s in the pencil on your desk. A sheet just one atom thick, graphene is essentially two-dimensional. It weighs next to nothing, yet is up to 300 times stronger than steel. It conducts electricity faster and more efficiently than any other known material, it dissipates heat faster than any other known material, and it’s the only substance on earth that is completely impermeable to gas.

Excitement about graphene’s potential was high from the first, and it’s not ebbing. At least 13 conferences focusing on graphene, 2D substances, and nanotechnology are scheduled for 2016. The European Commission has created the Graphene Flagship, Europe’s largest-ever research initiative, to bring graphene into the mainstream by 2026. And researchers have already developed an array of fascinating uses for graphene: new types of sensors, high-performance transistors, composites that are both super-light and super-strong, even a graphene-based gel for spinal cord injuries that helps nerve cells communicate by conducting electricity between them.

In 2015, IBM achieved a breakthrough in carbon nanotubes — graphene rolled into a tubular shape — that opens the door to faster transistors that will pack exponentially more computing power onto a single silicon chip. In fact, taken to its logical conclusion, the ability to shrink transistors to nanoscale could lead to processors that combine vast power and tiny size in a way that could be called “smart dust” (good news for those of us who don’t prioritize good housekeeping).

But that’s not all we’ll be doing with graphene. Here are just a few examples of what researchers say this single super material is likely to bring us in the not-too-distant future:

  • batteries that last twice as long as they do now and could offer electric cars a 500-mile range on a single charge
  • solar cells that are up to 1,000 times more efficient
  • clothing that conducts electricity and has wireless connectivity
  • bendable, highly conductive display screens
  • water desalination that uses 15 to 50 percent less energy
  • coatings that can be applied to almost any surface that needs protection from water and air
  • meteor-resistant spacecraft and lightweight bulletproof armor, both enabled by graphene’s ability to dissipate energy from incoming projectiles

Marveling at the possibilities

Amazingly, graphene barely scratches the surface. Consider these advanced materials, all of them currently in development, and let yourself marvel at how we might put them to work:

Nanomaterials artificially engineered at the molecular scale are giving rise to cotton-blend fabric that kills bacteria or conducts electricity, a coating that makes objects so frictionless they give no tactile feedback, and ceramics that bounce back from extreme pressure.

Recyclable carbon fiber composites that can be turned back into liquid form and remolded will replace the current versions that can only go into landfills when they’re broken.

Ultra-thin silicon circuits will lead to high-performance medical instruments that can be not just worn, but implanted or swallowed.

Flexible solar cells will replace large, unwieldy solar panels with thin film that can go almost anywhere and be incorporated into almost anything, from windows to tents to clothing.

Rechargeable metal-air batteries that can store electricity in grid-scale amounts will bring plentiful low-cost, reliable energy to places that currently have unreliable or no access to the traditional power grid.

Biomaterials will allow us to build robotic structures out of engineered materials that mimic organic ones. Soft materials that can be activated by an electric field will give us a whole new take on the human/machine interface. The next generation of prosthetics, for example, will be more comfortable, more functional, and harder to distinguish from living flesh.

Metamaterials, synthetic composites designed at the inter-atomic level, will have properties not found in nature. Those of you who love Star Trek and/or Harry Potter will be thrilled at this example: Scientists have already created a thin skin of metamaterial that makes whatever it covers undetectable. That’s right—an actual invisibility cloak. (Unfortunately, non-Romulans and Muggles will probably have to wait quite a while for the retail version.)

Designing the future, one molecule at a time

More mind-boggling developments in materials science are on their way. The Materials Genome Initiative (MGI) is a multi-agency U.S. government project designed to help American institutions discover, develop, and deploy advanced materials at lower cost and bring them to market in less time. One central part of the initiative is a database that attempts to map the hundreds of millions of possible combinations of elements on the periodic table so that scientists can use artificial intelligence to predict what properties those combinations will have. As the database grows, scientists can draw on it to determine how best to combine elements into new super materials with specific desired properties.
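
As a toy illustration of the predict-from-data idea (not the MGI’s actual tooling), the sketch below trains a regression model on a handful of invented composition-to-property pairs and then predicts the property of an unseen combination. Every feature and number here is hypothetical; real efforts work with far richer descriptors and vastly larger databases.

```python
# Minimal sketch: learn a composition -> property mapping from known samples,
# then predict the property of a combination that has never been synthesized.
# All data below is invented for illustration only.
from sklearn.ensemble import RandomForestRegressor

# Hypothetical feature vectors: fraction of each of three elements in an alloy.
known_compositions = [
    [0.90, 0.10, 0.00],
    [0.70, 0.20, 0.10],
    [0.50, 0.30, 0.20],
    [0.30, 0.50, 0.20],
]
# Hypothetical measured property for each composition (e.g., tensile strength in MPa).
measured_strength = [400.0, 520.0, 610.0, 580.0]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(known_compositions, measured_strength)

# Predict the property of an unseen candidate composition.
candidate = [[0.60, 0.25, 0.15]]
print(model.predict(candidate))
```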

Of course, no technological advance is without its challenges, and the rise of the super materials is no exception. One technical hurdle that’s already pressing is the need to find ways to integrate graphene into a high-tech world in which industry and academia have already invested trillions of dollars in silicon. That sum is impossible to walk away from, so unless (until?) graphene supplants silicon entirely, factories, production lines, and research centers will have to be retooled so that both materials can co-exist in the same projects.

That said, advanced materials are a fundamental building block for change, so keep your eye on them as they develop. As super materials become exponentially easier to produce, we’ll start to see them in common use — imagine 3D printers that can create new objects with high-performance computing and battery power literally baked in. As they become more common, expect to see them weaving exponential technologies tightly into the fabric of daily life, both literally and figuratively, and bringing us ever-closer to a world of ambient intelligence. And as these foundation-shaking new materials become ubiquitous, it’s likely that they’ll make today’s technological marvels seem like a preschooler’s playthings.

Download the executive brief Super Materials: Building the Impossible


To learn more about how exponential technology will affect business and life, see Digital Futures in the Digitalist Magazine.


About Dan Wellers

Dan Wellers is the Global Lead of Digital Futures at SAP, which explores how organizations can anticipate the future impact of exponential technologies. Dan has extensive experience in technology marketing and business strategy, plus management, consulting, and sales.

A New Computing Paradigm: Conversational AI For Consumers And In The Enterprise

Dan Wellers and Till Pieper

Instant messaging apps have taken over. WhatsApp, iMessage, WeChat, Signal, Slack, Facebook Messenger, Snapchat — billions of users exchange information in bite-sized chunks on any or all of these platforms every day. In fact, as of mid-2015, people were spending more time on messaging apps than on social networks, and as messaging apps become increasingly sophisticated, this trend shows no sign of reversing.

Messaging platforms have expanded far beyond simply enabling users to send and receive text messages, photos, and videos. Many of them allow users to exchange documents and files, voice memos, location information, and sometimes even cash. And intriguingly, they’re creating new opportunities for us to interact not just with each other, but thanks to chatbots, also with machines.

Rise of the chatbots

A chatbot is a service that, in its most basic and most common form today, responds to queries it receives through a messaging interface by applying pre-programmed rules. Despite the name, a chatbot doesn’t necessarily do any chatting. Rather, you tell it what you want, from ordering products to triggering actions, and it responds accordingly.

Chatbots have been around for decades. Eliza, a program that simulates conversation by asking a handful of questions and repeating parts of the answers, dates back to 1966. Many of today’s chatbots are Eliza’s direct descendants, operating via messaging channels like SMS, Facebook Messenger, or Slack and responding with predefined message templates.
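
To make the rule-and-template pattern concrete, here is a minimal sketch in Python. The patterns and reply templates are invented for illustration; they stand in for whatever a real bot would be configured with.

```python
# A minimal rule-based chatbot: incoming text is matched against
# pre-programmed rules, and the reply is filled in from a predefined
# message template. Rules and templates here are hypothetical examples.
import re

RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\border\b.*\bpizza\b", re.I), "Sure - what size pizza would you like?"),
    (re.compile(r"\breserve\b.*\broom\b", re.I), "Which date and time should I book the room for?"),
]

def reply(message: str) -> str:
    """Return the template for the first matching rule, or a fallback."""
    for pattern, template in RULES:
        if pattern.search(message):
            return template
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hey, can you order a pizza for me?"))
print(reply("Please reserve a conference room"))
```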

Chatbots are ideally suited for delivering services from within a messaging app to its users in a frictionless, personal way. Instead of having to install and launch a separate application, the user can text a chatbot just like a human contact via the messaging app to hail a cab, buy a t-shirt, order a pizza, reserve a conference room, approve a workflow, or submit a vacation request. Some experts even believe that chatbots will replace applications to a certain extent, since a chatbot with limited or no graphical components that operates within a messaging platform is cheaper to build and run than a full-featured app.

As an example, SAP itself is piloting a chatbot at HanaHaus, a café and co-working space operated by SAP in downtown Palo Alto, California. Customers who want to make, extend, or cancel a reservation for a workspace, or ask related questions, can do so by sending a text message to HanaHaus in casual language such as “I need a desk for two people tomorrow afternoon.” The HanaHaus chatbot requests any other necessary information, such as start and end time, and confirms the details before processing the user’s credit card and completing the reservation, with less friction than the traditional web- or app-based approach.
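
The general flow such a reservation bot follows, extract what it can, ask for whatever is still missing, then confirm, can be sketched roughly as below. This is not the HanaHaus implementation; the slot names, regular expressions, and prompts are hypothetical and deliberately naive.

```python
# Rough slot-filling sketch: parse a casual request, prompt for missing
# details, and confirm once every required slot is filled.
import re

REQUIRED_SLOTS = ["party_size", "date", "start_time", "end_time"]

def extract_slots(message: str) -> dict:
    slots = {}
    if m := re.search(r"for (\d+) (?:people|person)", message, re.I):
        slots["party_size"] = int(m.group(1))
    if m := re.search(r"\b(today|tomorrow)\b", message, re.I):
        slots["date"] = m.group(1).lower()
    if m := re.search(r"from (\d{1,2}(?::\d{2})?\s*(?:am|pm))", message, re.I):
        slots["start_time"] = m.group(1)
    if m := re.search(r"(?:to|until) (\d{1,2}(?::\d{2})?\s*(?:am|pm))", message, re.I):
        slots["end_time"] = m.group(1)
    return slots

def next_prompt(slots: dict) -> str:
    missing = [s for s in REQUIRED_SLOTS if s not in slots]
    if missing:
        return f"Could you tell me the {missing[0].replace('_', ' ')}?"
    return (f"Confirming: a desk for {slots['party_size']} {slots['date']} "
            f"from {slots['start_time']} to {slots['end_time']}. Shall I book it?")

print(next_prompt(extract_slots("I need a desk for 2 people tomorrow from 1pm to 4pm")))
print(next_prompt(extract_slots("I need a desk tomorrow")))  # asks for the missing party size
```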

The machines talk back

As machines get more sophisticated at understanding and responding in natural language, we’re also seeing massive growth in another type of conversational application: digital assistants like Apple’s Siri, Amazon’s Alexa, Google Assistant, and SAP’s upcoming Copilot. These dedicated apps (and, in the case of Amazon Echo and Google Home, devices) use natural language processing (NLP) to understand casual text or voice input in more sophisticated ways and across a multitude of functional areas. They can interact with other applications and parse open-ended questions like “how do I get to the nearest subway station?”, “what’s the score for the Giants game?”, and “what are the top 3 deals I still need to close this month?”, all through one consistent interface, and they can take action or deliver an answer just as a human assistant would.
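
The “one consistent interface” idea boils down to routing each parsed request to a different backend capability. The sketch below uses simple keyword matching in place of real NLP, and the handlers are placeholders for calls to mapping, sports, or business applications; every name in it is hypothetical.

```python
# A single assistant entry point that dispatches requests to different handlers.
def directions_handler(query):
    return "The nearest subway station is 400 m north of you."  # placeholder answer

def sports_handler(query):
    return "Looking up the latest Giants score..."  # placeholder answer

def sales_handler(query):
    return "Your top 3 open deals this month are A, B, and C."  # placeholder answer

INTENTS = [
    (("subway", "get to", "directions"), directions_handler),
    (("score", "game"), sports_handler),
    (("deals", "close this month"), sales_handler),
]

def assistant(query: str) -> str:
    lowered = query.lower()
    for keywords, handler in INTENTS:
        if any(keyword in lowered for keyword in keywords):
            return handler(query)
    return "I'm not sure how to help with that yet."

print(assistant("How do I get to the nearest subway station?"))
print(assistant("What are the top 3 deals I still need to close this month?"))
```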

These use cases are already materializing. Based on the results of a survey of more than 1,000 IT and business professionals, sponsored by SAP and conducted by IDC, 20% of companies are already using virtual digital assistants to interact with employees and/or customers today, and more than two-thirds are actively evaluating or considering them for the next two or three years.

Conversational artificial intelligence

Thanks to recent promising advances in machine learning and AI, we’re going to see an even more dramatic evolution as static, rule-based conversational applications like most of today’s chatbots give way to artificially intelligent solutions. We’ll be able to talk to these applications as if they’re people, and they’ll learn from transactions and user behavior, building and refining their own understanding to improve their responses. Using them to access content, request customer service, and make transactions will become seamless. For example, the HanaHaus bot’s logic is already no longer based on pre-defined rules, but rather on its ability to learn from input examples, which enables it to enhance its capabilities the more it is used, just as a human learns a new language.
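
A rough sense of what “learning from input examples” means in practice: instead of hand-written rules, a model is trained on labeled sample utterances and generalizes to new phrasings. The sketch below is a generic toy classifier, not the HanaHaus bot’s actual model; the example utterances and intent labels are invented.

```python
# Toy intent classifier trained on labeled example utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("I need a desk for two tomorrow afternoon", "make_reservation"),
    ("Can I book a workspace for Friday morning?", "make_reservation"),
    ("Please cancel my reservation for today", "cancel_reservation"),
    ("I won't need the desk anymore, cancel it", "cancel_reservation"),
    ("Can I extend my booking by an hour?", "extend_reservation"),
    ("I'd like to stay two hours longer", "extend_reservation"),
]
texts, labels = zip(*examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# A phrasing not seen verbatim in training is classified by similarity to the examples.
print(model.predict(["Can you book me a desk for Monday afternoon?"]))
```

Adding more labeled examples over time is what lets such a bot “enhance its capabilities the more it is used.”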

In other advanced and extended scenarios for conversational AI, a technician might send a photo of a broken part to a parts and maintenance bot, which uses deep learning-based image processing to identify the part, automatically submits a replacement order, and sends the technician the predicted delivery date and installation instructions via the same messaging channel. An employee could also use Slack to submit a leave request to HR’s scheduling bot by sending the message, “I’ll be taking the first week of August off.” Or a customer could use a messaging app to contact a customer service bot with a question about how to use a product and receive a link in seconds to a video of the solution. Systems might also get in touch proactively with users based on certain dynamic criteria or even take action autonomously within given constraints, enabling users to focus on more important tasks.

The conversation continues

At first, chatbots will augment apps. Then they may replace them, until eventually, text and GUI interfaces themselves may well fade away in favor of simply…talking. For a consumer, that might look like telling your phone to make a 7 pm reservation at the nicest restaurant within 10 miles of your home with an available table. In a business context, it might look like asking a tiny black box in your warehouse, “What are the 3 most important orders we need to fulfill this week, and what’s the best way to make them happen?” and getting the optimal response an instant later.

Some other possibilities might include approving multiple workflows from within a messaging app, submitting expense reports by voice interface, reserving conference rooms by SMS, speaking to IoT devices to configure them and retrieve data, and interacting with AR/VR applications without a mouse or keyboard. We’ll say what we need, and the smart systems behind the scenes will apply machine learning to determine what we want, ask questions to clarify and add context, and then deliver on the request, whether that involves running reports, providing customer support, or changing business travel plans on the fly.

Like children, the more we talk to these systems, the smarter they’ll get. Instead of forcing us to learn how they work, they’ll learn how we work and adapt themselves to suit. This isn’t simply the emergence of a new interface. It’s an entirely new paradigm for computing, and in terms of the business world, the end goal is nothing less than enterprise AI.

Download the executive brief Let’s Talk About Conversational Computing

To learn more about how exponential technology will affect business and life, see Digital Futures in the Digitalist Magazine.

For more insight on AI in the enterprise, see Machine Learning: New Companion For Knowledge Workers.


Autonomous Vehicles: Accelerating Into The Mainstream

Dan Wellers and Larry Stolle

In just one year, self-driving cars have gone from the theoretical to the imminent. Major manufacturers are leaping into development. The U.S. Department of Transportation issued new rules in September giving the federal government broad oversight over vehicles operated by software rather than by humans. Discussion of the technology’s pros and cons has become a staple in publications as mainstream as the New York Times. But now that the first fatality has occurred, we need to take a closer look at the implications and challenges of autonomous vehicles.

At the moment, the focus is on self-driving versions of today’s automobiles. However, autonomous vehicles also include taxis, buses, delivery vans, long-haul trucks, and more. Any vehicle that meets the fundamental need to move passengers and/or cargo from point A to point B may soon include multiple exponential technologies – machine learning and artificial intelligence, sensors, drones, cybersecurity, and at some future point, super materials – that allow it to pilot itself under some, if not all, circumstances.

What’s driving how we’re driving?

Whether autonomous vehicles will hit the road is no longer a question; some of them already have, and others, like the autonomous trucks being developed by Otto, a startup recently acquired by Uber, are on their way. What we need to be asking now is when self-driving cars will change lanes from experiments and curiosities to the mainstream – and how long it will be before they dominate. These factors will heavily influence the answers:

Technological limits. So far, machines can’t duplicate the sophisticated, intricate choreography that human drivers perform every day. Nor can they correct for catastrophic human error. The Tesla in June’s fatality failed to “see” a white truck against a bright sky, but it wasn’t designed to operate without any human intervention. The driver failed to override it, reportedly because he was not only speeding, but distracted by watching a movie on the car’s dashboard video screen. While current technology can achieve something very near autonomy, the last few steps still promise to be a big leap, in part because we don’t yet know where to draw the line between requiring human intervention and forbidding it. That line will shift as we become more comfortable with the technology and as technology itself advances. Moreover, security will have to become a higher priority as vehicles become just another set of endpoints in the Internet of Things.

The so-called “Trolley Problem.” The technological limits of self-driving vehicles are inextricably entwined with this iconic thought exercise, which asks whether it’s better to do nothing and let a runaway trolley hit five people on its track, or to actively participate in killing a single person standing on another track by pulling the lever that diverts the trolley. Self-driving vehicles will force us to confront similar ethical issues, and soon. They will have to learn what to do when a collision is unavoidable, but what will we teach them about whose safety to prioritize? Children over the elderly? Pedestrians over other drivers? The people in the vehicle over everyone else? Or should we simply instruct them to minimize injuries and deaths, even if their own passengers end up among the casualties?

Infrastructure limits. Current infrastructure is made for vehicles with human drivers. It’s going to need an overhaul if we hope to achieve the efficiency autonomous vehicles promise us. Our current infrastructure is crumbling in some areas and will need to be adapted in the short term to carry higher volumes of traffic. Traffic lights, signals, and signage may have to be enhanced (perhaps with their own sensors and beacons) so autonomous vehicles can detect them even in the brightest sunlight. Parking spaces and parking structures will need retooling to make it possible for cars to come and go as they’re summoned and allow payment when no human is present to feed a meter. On a positive note, if mass adoption of driverless vehicles reduces both the number of vehicles and the between-vehicle spacing that’s needed for safe operation, we could finally redesign roads to take up less space and free up real estate for more attractive purposes.

Mapping issues. Current autonomous vehicle technology requires highly detailed maps. Not every road is mapped to the necessary detail, and roadwork can make any map obsolete overnight. We’ll need either to raise our mapping game or develop GPS technology that includes enhanced geolocation features and isn’t entirely reliant on maps.

Resistance to change. Although an estimated 10 million driverless cars will be on the road by 2020, many drivers – whether out of love of driving or a broader aversion to being told what to do – may fiercely resist any pressure to adopt them. This will change as the technology develops and is proven safe, especially as older adults realize that an autonomous car can drive them when they no longer have the reflexes, vision, or strength to drive themselves. After a generation or two grows up with self-driving cars as the norm, manually operated cars are likely to be the purview of collectors, hobbyists, and other niche enthusiasts.

Current mileposts 

Google and Tesla, which currently dominate the discussion of autonomous vehicles, are taking different paths to the technology. Tesla is working on perfecting its autopilot while reminding drivers (not always successfully, as June’s accident proves) that the technology is meant to assist rather than replace them. Google, by comparison, is simply designing test vehicles under the assumption that human drivers will make mistakes. Its experimental self-driving cars lack brake pedals, accelerators, and steering wheels; they’re packed with impact-absorbent foam and plastic windshields; and their top speed is a far from blistering 25 mph. Of course, these won’t hit the U.S. market as long as the government requires human controls in vehicles, but they suggest where we might be heading.

Traditional car manufacturers are also picking up the pace. Toyota, currently the world’s largest car maker, is creating an onboard system that only takes over when the human driver makes a mistake. BMW, Ford, and Nissan are also steering their research toward creating autonomous vehicles, while Spanish automaker SEAT has partnered with Samsung and SAP to present “a new concept for the connected car.” Built on the SAP Vehicles Network, this currently includes a smartphone app that integrates the car key and lets owners start, stop, and share access to the vehicle remotely, as well as a different app that lets drivers find, reserve, and pay for a parking space using fingerprint verification, both features that are also building blocks for greater autonomy.

Similarly, SAP has introduced the SAP Vehicle Insights app, which gathers driving and maintenance information about individual vehicles and entire fleets. This type of information will become relevant to managing fleets of self-driving cars as well. 

Merging into a driverless world

When Henry Ford introduced the Model T in 1908, it may have been an early stage in the automobile’s evolution, but it created a mass market and gave rise to many of the rules of the road we still observe today. Similarly, we’re further down the road to autonomous vehicles than many of us realize. In fact, automated driver assistance systems like those that automate braking, alert drivers to hazards on the road, or show what’s in blind spots have already brought us a long way.

We still lack the central “brain” to control all the other automated systems, the heuristic learning and decision-making abilities to take the place of the human driver, and the infrastructure necessary to enable truly autonomous operations, and even as we solve these issues, new ones will almost certainly come speeding toward us. Yet the value, safety, efficiency, improved capital utilization, and other benefits of self-driving vehicles will far outweigh the costs of continuing to develop them – so buckle up, because we’ve got an exciting ride ahead.

Download the executive brief Speeding Toward a Driverless Future


To learn more about how exponential technology will affect business and life, see Digital Futures in the Digitalist Magazine.

Listen to a Special Edition broadcast about The Future of Cars on SAP’s Coffee Break with Game-Changers.


How AI Can End Bias

Yvonne Baur, Brenda Reid, Steve Hunt, and Fawn Fitter

We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.

Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.

In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The mere fact of having to defend against an accusation of bias can linger long after the issue itself is settled.

Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.

That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fair, more productive, and ultimately better for both business and the broader society.

Bias: Bad for Business

When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.

Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.

As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.

Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.

AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.

That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.

At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.

Look for Where Bias Already Exists

In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.

There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.

Code Is Only Human

The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.

“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)

Don’t Do What You’ve Always Done

To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.

SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as “No women need apply,” which everyone knows is discriminatory, but phrases like “outspoken” and “aggressively pursuing opportunities,” which are proven to attract male job applicants and repel female applicants, and words like “caring” and “flexible,” which do the opposite.

Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
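
A stripped-down sketch of this kind of check follows: scan a posting for terms associated with gendered response patterns and suggest neutral alternatives. The term list and substitutions are short illustrative samples, not the categories SAP’s initiative actually uses.

```python
# Flag gendered terms in a job posting and suggest neutral alternatives.
# The word list below is a small invented sample for illustration.
GENDERED_TERMS = {
    "outspoken": "communicates clearly",
    "aggressively": "proactively",
    "ninja": "expert",
    "nurturing": "supportive",
}

def flag_biased_language(posting):
    """Return (flagged term, suggested neutral alternative) pairs found in the text."""
    lowered = posting.lower()
    return [(term, alt) for term, alt in GENDERED_TERMS.items() if term in lowered]

posting = "We want an outspoken self-starter who aggressively pursues opportunities."
for term, alternative in flag_biased_language(posting):
    print(f"Consider replacing '{term}' with '{alternative}'")
```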

Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.

Look Beyond the Obvious

AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.

To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.

Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.

That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.

Context Matters—and Context Changes

Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.

Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word “sick” to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience, to say the least, if a system confuses the two meanings. Correcting for that by adding more rules to the algorithm, such as “the word sick appears in proximity to positive emoji,” takes human oversight.
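
As a toy illustration of such a rule (not any production system’s logic), the snippet below flips the interpretation of “sick” when a positive emoji appears nearby; the emoji set and window size are arbitrary choices for this sketch.

```python
# Naive rule: "sick" near a positive emoji is treated as positive slang,
# otherwise as a possible health complaint.
POSITIVE_EMOJI = {"🔥", "😍", "🙌", "👍"}

def interpret_sick(post, window=3):
    tokens = post.split()
    for i, token in enumerate(tokens):
        if token.lower().strip(".,!?") == "sick":
            nearby = tokens[max(0, i - window): i + window + 1]
            if any(emoji in tok for tok in nearby for emoji in POSITIVE_EMOJI):
                return "positive slang"
            return "possible health complaint"
    return "no mention of 'sick'"

print(interpret_sick("That concert was sick 🔥🔥"))        # positive slang
print(interpret_sick("I've been feeling sick all week"))   # possible health complaint
```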

Moving Forward with AI

Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.

In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.

Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.

O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.

As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!

Read more thought provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Yvonne Baur is Head of Predictive Analytics for SAP SuccessFactors solutions.

Brenda Reid is Vice President of Product Management for SAP SuccessFactors solutions.

Steve Hunt is Senior Vice President of Human Capital Management Research for SAP SuccessFactors solutions.

Fawn Fitter is a freelance writer specializing in business and technology.


Next-Generation, Real-Time Data Warehouse: Bringing Analytics To Data

Iver van de Zand

Imagine the following situation: you are analyzing and gathering insights about product sales performance and wonder why a certain area in your country is doing better than others. You deep-dive, slice, dice, and analyze from different perspectives, but can’t find out why sales are better in that region.

You conclude you need data that is not available in your corporate systems. Some geographical data that is available through Hadoop might answer your question. How can you get this information and quickly analyze it all?

Bring analytics to data

If we don’t want to go the traditional route of specifying requirements, remodeling the data warehouse, and uploading and testing data, we need a whole new approach to data warehousing. What we ultimately need is a semantic layer that allows us to remodel the data warehouse in real time and on the fly – semantics that allow decision makers to leave the data where it is stored instead of populating it into the data warehouse. What we really need is a way to bring our analytics to the data, instead of the other way around.

So our analytics wish list would be:

  • Access to the data source on the fly
  • Ability to remodel the data warehouse on the fly
  • No replication of data; the data stays where it is
  • No time lost on data-load jobs
  • Analytical processing done in the moment with pushback to an in-memory computing platform
  • Drastic reduction of data objects to be stored and maintained
  • Elimination of aggregates

Traditional data warehousing is probably the biggest hurdle when it comes to agile business analytics. Though modern analytical tools can add and blend data sources on the fly, they are still just analytical tools. When additional data must be available to multiple users or is huge in scale and complexity, analytical tools lack the computing power and scalability needed. And it simply doesn’t make sense for multiple users to each blend the same complex, additional data individually.

A data warehouse, in this case, is the answer. However, there is still one hurdle to overcome: a traditional data warehouse requires a substantial effort to adjust to new data needs. So we add to our wish list the elimination of these recurring steps:

  • Adjust and adapt the modeling
  • Develop load and transformation scripts
  • Assign sizing
  • Set up scheduling and lineage
  • Test and maintain

In 2016, the future of data warehousing began to take shape. In-memory technology with smart, native, and real-time access connects analytics directly to the data warehouse, and the data warehouse to core in-memory systems. Combined with pushback technology, where analytical calculations are pushed back onto an in-memory computing platform, analytics is brought back to the data. End-to-end in-memory processing has become a reality, enabling true agility. And end-to-end processing is ready for the Internet of Things at petabyte scale.

Are we happy with this? Sure we are! Does it come as a surprise? Of course not! Digital transformation just enabled it!

Native, real-time access for analytics

What do next-generation data warehouses bring to analytics? They allow for native access from top-end analytics components through the data warehouse and all the way down to the core in-memory platform that holds our operational data. Better still, this native access is real-time. Every analytics-driven interaction from an end user generates calculations, and with the architecture described above, these calculations are pushed back en masse to the core platform where the data resides.

The same integrated architecture is also a game changer when it comes to agility and data optimization. When new, complex data is required, it can be added without data replication. Since there is no data replication, the data warehouse modeling can be done on the fly, leveraging the semantics. We no longer have to model, create, and populate new tables and aggregates when additional data is required in the data warehouse, because there are no new tables needed! We only create additional semantics, and this can be done on the fly.
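
To make “bring analytics to data” and “semantics on the fly” concrete, here is a minimal, hedged sketch: sqlite3’s in-memory database stands in for an in-memory platform such as SAP HANA, a view plays the role of an on-the-fly semantic layer, and the aggregation is pushed down as SQL instead of pulling raw rows into the client. The table and figures are invented for this illustration.

```python
# Push the calculation to the engine that holds the data rather than
# replicating rows into the analytics client. sqlite3's in-memory database
# is only a stand-in for a real in-memory computing platform.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", "A", 1200.0), ("North", "B", 950.0),
     ("South", "A", 700.0), ("South", "B", 1800.0)],
)

# "Semantics on the fly": a view adds a business definition without creating
# and populating a new physical table or aggregate.
conn.execute("CREATE VIEW regional_revenue AS "
             "SELECT region, SUM(revenue) AS total_revenue FROM sales GROUP BY region")

# The aggregation runs where the data lives; only the small result set returns.
for region, total in conn.execute(
        "SELECT region, total_revenue FROM regional_revenue ORDER BY total_revenue DESC"):
    print(region, total)
```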

Learn why you need to put analytics into your business processes in the free eBook How to Use Algorithms to Dominate Your Industry.

This article appeared on Iver van de Zand.



About Iver van de Zand

Iver van de Zand is a Business Analytics Leader at SAP, responsible for business analytics with special attention to the Business Intelligence Suite, Lumira, and Predictive Analytics.