AI And Ethics: We Will Live What Machines Learn

Dan Wellers and Timo Elliott

Big Data analytics, machine learning, and other emerging artificial intelligence (AI) technologies have, in a very short time, become astonishingly good at helping companies see, and react to, patterns in data they would otherwise have missed. More and more, however, these new patterns carry difficult ethical choices. Not every connection between data points needs to be made, nor does every new insight need to be used. Consider these embarrassing real-world examples:

  • One company sent “congratulations on your new baby” announcements to women who weren’t ready to reveal their pregnancy.
  • Another company disproportionately targeted ads implying that the recipient had an arrest record toward people with names suggesting they belonged to minority ethnic groups.
  • A ride-hailing company showed guests at a corporate party records of customers who had traveled late at night to an address other than their own, then returned to their own homes early the next morning, with a nudge-and-wink suggestion of what they might have been doing in between.

The evolution of technology inevitably includes mistakes, but the fact that these algorithmic failures were unintentional didn’t make them any less painful. It also didn’t make them any less ethically questionable. And we can’t blame the algorithms for that. They were only doing what they were taught to do.

Similarly, Microsoft had a mortifying experience in early 2016 with “Tay,” a chatbot intended to be a fun experiment in training an AI to understand conversational language. However, when trolls coordinated their efforts on Twitter and in messaging apps GroupMe and Kik, they were able to teach Tay to respond to them in appallingly racist ways, forcing Microsoft to take the AI offline after just 16 hours.

Artificial intelligence has progressed to the point that we’re already asking it to automate not just business processes, but ethical choices. However, as the Tay incident shows, it still lacks the context and empathy of human intelligence and intuition. And all too often, neither the organization using AI nor the people affected by its decisions understand clearly how it arrives at its conclusions, or have any recourse to correct those conclusions when they’re wrong.

AI on the ethical edge

As AI advances exponentially, we urgently need to understand and mitigate its ethical risks, not in spite of the technology’s possibilities, but because of them. We’re already giving AI a great deal of power over decisions that are not only consequential, but potentially life-changing. Here are just a few examples:

Credit scoring algorithms, originally intended just to assess lending risk, are now commonly used to decide whether someone should get a job offer or be able to rent an apartment. Insurance underwriting algorithms determine whether someone can get coverage, how much, and at what cost, with little recourse for the applicant who disagrees. An insurer or potential employer might use health care algorithms to penalize people for the possibility that they might get ill at some point in the future, even if they never do. And as data scientist Cathy O’Neil explores at length in her best-selling book Weapons of Math Destruction, law enforcement decisions, from where to focus police activity to what kind of court sentences are handed out, are notorious for their racial bias.

If those issues aren’t complex enough, there’s the so-called “trolley problem” facing engineers working on self-driving cars: what do they instruct the car to do in an accident situation when every possible outcome is bad? (For a sense of how difficult this task might be, visit The Moral Machine, an MIT website that lets you choose among multiple scenarios a self-driving car might encounter.) How they will make these decisions is, to put it mildly, a difficult question. How society should react when machines start to make life-changing or even life-ending choices is exponentially more so.

Guilty until proven innocent?

We can’t expect AI to know right from wrong just because it’s based on mathematical equations. We can’t even assume it will prevent us from doing the wrong thing. It turns out it’s already far too easy to use AI for the wrong reasons.

It’s well known, for example, that students often struggle during their first year at college. The University of Texas at Austin implemented an algorithm that helps it identify floundering freshmen and offer them extra resources, like study guides and study partners. In her book, O’Neil cites this project approvingly because it increases students’ chances of passing their classes, moving ahead in their field of study, and eventually graduating.

But what if a school used a similar algorithm for a different purpose? As it turns out, one did. In early 2016, a private university in the U.S. used a mathematical model to identify freshmen who were at risk of poor grades — then encouraged those students to drop out early in the year in order to improve the school’s retention numbers and therefore its academic ranking. The plan leaked, outrage ensued, and the university has yet to recover.

This may be uncomfortably reminiscent of the 2002 movie Minority Report, which posited a future world where people are arrested proactively because computers predict they’ll break the law in the future. We aren’t at that dystopian point, but futurists, who make a career out of speculating about what’s coming next, say we’re already deep in uncharted waters and need to advance our thinking about the ethics of AI immediately.

Current thinking, future planning

There’s no way around it: all machine learning is going to have built-in assumptions and biases. That doesn’t mean AI is deliberately skewed or prejudiced; it just means that algorithms and the data that drive them are created by humans. We can’t help having our own assumptions and biases, even if they’re unconscious, but business leaders need to be aware of this simple truth and be proactive in addressing it.
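To make this concrete, here is a minimal sketch (with invented data and a hypothetical `learned_score` helper, not any real system) of how a model that simply learns from historical decisions will reproduce whatever bias those decisions contained, with no malicious intent anywhere in the code:

```python
# Toy illustration with made-up data: a "hiring model" that learns
# approval frequencies from past human decisions inherits their bias.
history = [
    # (school, hired) -- historical decisions that skewed toward school_x
    ("school_x", True), ("school_x", True), ("school_x", True),
    ("school_y", True), ("school_y", False), ("school_y", False),
]

def learned_score(records, feature):
    """Fraction of past candidates with this feature who were hired."""
    outcomes = [hired for f, hired in records if f == feature]
    return sum(outcomes) / len(outcomes)

print(learned_score(history, "school_x"))  # 1.0 -- the old bias, now automated
print(learned_score(history, "school_y"))  # roughly 0.33
```

Nothing in the code mentions ethnicity, gender, or intent; the skew lives entirely in the training data, which is exactly why awareness has to come from the humans involved.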

AI has enormous potential, but if people don’t feel they can trust it, adoption will suffer. And if we simply avoid the risks, we also lose out on the benefits. That’s why businesses, universities, governments, and other organizations are launching research initiatives and engaging in dialog around AI-related concerns: principles, restrictions, responsibilities, unintended outcomes, legal issues, and transparency requirements.

We’re also starting to see the first explorations of ethical best practices for maximizing the good and minimizing the bad in our AI-infused future. For example, a fledgling movement is emerging to monitor algorithms to make sure they aren’t learning bias and, what’s more, to audit them not just for neutrality, but for their ability to advance positive goals. In addition, there’s now an annual academic conference, Fairness, Accountability, and Transparency in Machine Learning (FATML), launched in 2014, which focuses on the challenges of ensuring that AI-driven decision-making is non-discriminatory, understandable, and subject to due process.
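What might such an algorithmic audit look like in practice? The sketch below (plain Python with invented data; the `disparate_impact_ratio` helper is hypothetical, not a standard API) compares a model’s favorable-decision rates across two groups, in the spirit of the “four-fifths rule” sometimes used as a red-flag threshold in U.S. employment contexts:

```python
# Minimal sketch of a fairness audit over a model's past decisions.
# Each record: (group, decision), where 1 = favorable (e.g., loan approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(records, group):
    """Share of favorable decisions received by one group."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(records, group, reference):
    """Ratio of a group's favorable-decision rate to a reference group's.
    Values below ~0.8 (the 'four-fifths rule') are a common warning sign."""
    return positive_rate(records, group) / positive_rate(records, reference)

ratio = disparate_impact_ratio(decisions, "group_b", "group_a")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> flag for review
```

A real audit would, of course, involve far more than one metric, but even a check this simple makes a model’s behavior discussable rather than invisible.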

But making machine learning more fair, accountable, and transparent can’t wait. As the AI field continues to grow and mature, we need to act on these steps right away:

First, we must think about what incentives AI algorithms promote, and build in processes to assess and improve them to ensure they guide us in the right — by which we mean the ethical — direction.

We must also create human-driven overrides, avenues of recourse, and formal grievance procedures for people affected by AI decisions.

We must extend anti-bias laws to include algorithms. Civilized countries put controls on weapons; when data can be used as a weapon, we need governmental controls to protect against its misuse.

Most importantly, we must see the question of AI and ethics less as a technological issue than as a societal one. That means introducing ethics training as part of both formal education and employment training, for everyone from technologists creating AI systems to vendors who market them to organizations deploying them. It means developing avenues through which developers and data scientists can express dissent when they see ethical issues emerging on AI projects. It means creating and using methodologies that incorporate values into systems design.

Fundamentally, AI is merely a tool. We can use it to set ethical standards, or we can use it as an excuse to circumvent them. It’s up to us to make the right choice.

Read the executive brief Teaching Machines Right from Wrong.



About Dan Wellers

Dan Wellers is founder and leader of Digital Futures at SAP, a strategic insights and thought leadership discipline that explores how digital technologies drive exponential change in business and society.


About Timo Elliott

Timo Elliott is an Innovation Evangelist for SAP and a passionate advocate of innovation, digital business, analytics, and artificial intelligence. He was the eighth employee of BusinessObjects and for the last 25 years he has worked closely with SAP customers around the world on new technology directions and their impact on real-world organizations. His articles have appeared in publications such as Harvard Business Review, Forbes, ZDNet, The Guardian, and Digitalist Magazine. He has worked in the UK, Hong Kong, New Zealand, and Silicon Valley, and currently lives in Paris, France. He has a degree in Econometrics and a patent in mobile analytics. 

Three Ways Advanced Technology Is Making Us More Creative In Business

Melissa Di Donato

Technology has changed the way we live and work in ways previously unimaginable. Innovation in areas such as artificial intelligence (AI), cloud technology, and Big Data enables us to run faster, jump higher, and dream bigger.

In my role at SAP, I see examples every day of how these technologies are making industries smarter, better, and faster. However, while efficiency gains are incredibly important, these technologies also open up opportunities for innovation and creativity – both of which are key for companies that want to grow and succeed in today’s digital world. In my opinion, a few areas of technological advancement are particularly important: they lay the runway that fosters new ideas and propels businesses forward. The following are the areas I have in mind.

Automation of routine tasks gives room for exploration

Robotics and AI technology have been critical in helping businesses automate routine, monotonous tasks. While many have feared the implications of AI for the job market, we are seeing that AI and automation actually provide new opportunities for employees. Thanks to these technologies, employees can spend much more of their working time advancing their own skills – which is not only great for the employees themselves but just as beneficial to the quality of their work. As I discussed in an interview with Cloud Computing News, technology gives us the power as a society to improve the value and skillset of each individual. By giving employees the time they need to focus on exploration and skills advancement, organizations can foster innovation better than ever and ultimately drive business success.

Democratizing access to data

Advances in data analytics and visualization have broken down the barriers that often existed between IT on the one hand and all other departments of an enterprise on the other. As recently as a decade ago, it could take IT departments weeks, if not months or years, to analyze data and deliver insights and reporting to the departments that had asked for them; working with that data would then consume yet another few weeks. Such delays would be untenable in today’s quickly changing digital world. We are lucky enough to have visualization dashboards and graphs that make it easier for all of us to understand and work with data, but there is still room for more openness and more agile data flows throughout an enterprise.

We all know, and some of us use, natural-language processing applications like Siri and Alexa in our private lives: they give us much easier access to information, whether by reading a recipe aloud or entertaining our guests with soft jazz or jokes. In the business world, voice-enabled devices have the same potential to ease access to data. Picture a CFO asking for real-time revenue figures during a board meeting, or an HR team pulling up hiring trends ahead of an interview. Whatever the scenario, data is becoming more accessible across the enterprise, at a faster pace than ever, no matter how complex it is. This new ease of using data opens up opportunities for innovation and creative thinking in every department and at every level, and I’m convinced that innovation, paired with creativity, is key to succeeding in digital business.
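The CFO scenario above boils down to mapping a spoken question onto a data lookup. As a minimal sketch (the revenue figures and the `answer` function are invented for illustration, not any real assistant’s API), it might look like this:

```python
# Toy sketch: routing a natural-language question to a data lookup,
# the way a voice assistant might surface figures in a meeting.
revenue = {"q1": 1.2, "q2": 1.5, "q3": 1.1, "q4": 1.8}  # $M, made-up data

def answer(question: str) -> str:
    """Return the matching figure if the question names a known quarter."""
    q = question.lower()
    for quarter, value in revenue.items():
        if quarter in q:
            return f"Revenue for {quarter.upper()}: ${value}M"
    return "Sorry, I don't have that figure."

print(answer("What was revenue for Q2?"))  # Revenue for Q2: $1.5M
```

Production systems replace the keyword match with real speech recognition and intent parsing, but the shape of the problem – question in, governed data out – is the same.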

Empowering flexibility, scalability, and innovation in the cloud

The digitalization of our economy has made digital transformation and the cloud must-haves for businesses. Hosting technology in the cloud not only allows companies to scale easily and instantly, it also fosters an environment in which organizations can explore new technologies and applications. The cloud gives organizations the chance to think outside the box every day and to try out new innovations instantly; once those innovations have proved successful, companies can push new solutions into the market and adapt rapidly. This is the flexibility companies need to beat their competitors and become (or remain) market leaders.

To sum up: advancements in AI, data analytics, and cloud are completely transforming how (and where) we work. With more data, time, accessibility, and convenience, we can much more easily connect, expand our knowledge, and keep generating the new ideas that contribute to our business’ momentum. With cloud technology increasing teamwork, streamlining and uniting business goals, and crowdsourcing ideas that make organizations smarter, you don’t have to wait long to see creativity skyrocket. Where there is productivity, efficiency, and creativity, success is only one step away.

For more information, visit SAP S/4HANA Cloud on SAP.com.



About Melissa Di Donato

Melissa is the Chief Revenue Officer for SAP ERP Cloud. Prior to SAP, Melissa was Area Vice President at salesforce.com for EMEA and APAC, and she has spent more than 20 years in technology as a leader in the digital space, delivering innovative and transformative enterprise cloud solutions for customers around the globe. She is dedicated to building high-performing sales teams, ensuring diversity across the business, and developing multi-channel go-to-market strategies.

Want Disruptive Change? There’s An Algorithm For That (Or Soon Will Be)

Jessica Schubert

Trust me – it’s not you. Our world really is more unpredictable than ever. Even the best-laid strategies are being disrupted, whether they are focused on the workplace’s culture, technical environment, market dynamics, customer behavior, or business processes. But central to these uncertainties is one constant: an algorithm guiding every step along the evolutionary trail to digital transformation.

“Each company has a predictable algorithm that’s driving its business model,” said Sathya Narasimhan, senior director for Partner Business Development at SAP, on a live episode of Coffee Break with Game Changers Radio, presented by SAP and produced and moderated by SAP’s Bonnie D. Graham. “When we understand how data affects outcomes and bring sensor data online, it’s easier for the infrastructure to process this information to create additional insights,” she explained. Sathya was joined on the program by a panel of thought leaders featuring Darwin Deano, principal at Deloitte Consulting LLP, and Patricia Florissi, global CTO for Dell EMC Sales.

This observation speaks to the predictive power of algorithms. Think about it: Amazon proposes what should be in your shopping cart. Netflix recommends the next movie you should watch. Google serves up ads that tug at your heart (and wallet). None of this would be possible without an algorithm running in the background that predicts what people want, how they behave, and what will influence their actions.

Click to listen to the full episode.

Digital technology nears a tipping point – toward enlightenment

Smart business leaders know that tomorrow’s competitive edge requires rapid innovation across their organization today. Machine learning, Internet of Things, artificial intelligence, blockchain, analytics, and Big Data – there are so many choices available. And businesses have a great opportunity now to begin figuring out how to harness and invest in them.

Patricia believes that this reality may soon reach a tipping point as data volumes continue to swell. “We are entering an enlightened age where there is so much data and computing and processing power that we can infer that our quality of life will fundamentally improve,” she observed.

Recent disruption in agriculture certainly proves Patricia’s point. Although the Internet of Things has been around for 20 years, farmers and their suppliers are just starting to capitalize on the generated data because the power needed to process and analyze it has finally arrived. Now, farmers are collecting and analyzing data generated from GPS and sensors buried in the field soil and embedded in farming equipment to improve crop yields and resource use. This is a significant advancement as farmers find new ways to increase food production – possibly by as much as 70% – to keep pace with a global population that is projected to grow from 7.6 billion today to 9 billion by 2050.

Darwin added that technology is now so affordable that the digital landscape is “starting to see patterns emerge, where some archetypes are beginning to develop as the technology matures.”

The predictive power of algorithms and humans drives business outcomes

One of the best examples of the maturing landscape of technology and algorithms is the current state of artificial intelligence and deep learning.

“Artificial intelligence has evolved, especially with deep learning, to teach computers or to help computers automatically learn how we do things and what we do without being told the rules,” Patricia commented. “The more data being observed, the more patterns can be detected and the more accurate the generalization.”

However, Darwin cautioned against heavy reliance on this technology. “We need to protect ourselves against the erosion of basic cognitive skills, which can be an unintended side effect,” he warned. Cognitive technology demands rapid interpretation and response, and algorithms may not always be able to satisfy that need; humans must maintain the cognitive skills required to do so themselves.

Whether employees have the processing speed and full insight they need to make decisions boils down to a company’s willingness to invest in the capabilities required to get work done effectively. “I have realized, after spending the last six years assessing and developing strategies for various companies, that businesses gain market share and grow faster because of the investments they make to improve the predictive power of their algorithms,” mentioned Sathya.

The future of algorithms: blockchain, humanity, and ecosystems

As technology and processing power become more mature and powerful, new opportunities for digital innovation will inevitably emerge in the near future.

For Sathya, this future hinges on the arrival of blockchain as a mainstream technology. “We are likely entering an environment where we are relying on fewer regulations and less government interference in how businesses work. To ensure that this new paradigm does not undermine the standard of living of people and the society they live in, multiple parties – such as manufacturers, suppliers, financial services, customers, and government – will need to work together in a way that is less intrusive, more efficient and transparent, and trusted and secure.”

Darwin believes that this change will bring about a new renaissance. “We’re talking about humans being replaced by artificial intelligence, machine learning, and the Internet of Things. However, technology automated to the nth degree will actually free us from acting like technology zombies always engaged with smartphones.”

Patricia anticipates that Sathya’s and Darwin’s predictions will eventually bring about an era of optimized ecosystems and innovative models. “Companies that learn how to nurture, cultivate, and enable vibrant ecosystems, platforms, and new business models will be the digital winners,” she said. “They are redefining how transactions are conducted and the currency used. Ultimately, the ecosystem, cross-education, and cross-pollination will be key to their transformation.”

Listen to the SAP Radio show “Future-Proof Your Business: Digital Solutions Now!” on demand.



About Jessica Schubert

Jessica Schubert is the director of Global Partner Marketing, Deloitte Alliance Lead, at SAP. Her specialties include strategic partnerships, business alliances, go-to-market strategy, product marketing, and demand generation.

Human Skills for the Digital Future

Dan Wellers and Kai Goerlich

Technology Evolves.
So Must We.


Technology replacing human effort is as old as the first stone axe, and so is the disruption it creates.
Thanks to deep learning and other advances in AI, machine learning is catching up to the human mind faster than expected.
How do we maintain our value in a world in which AI can perform many high-value tasks?


Uniquely Human Abilities

AI is excellent at automating routine knowledge work and generating new insights from existing data — but humans know what they don’t know.

  • We’re driven to explore, try new and risky things, and make a difference.
  • We deduce the existence of information we don’t yet know about.
  • We imagine radical new business models, products, and opportunities.
  • We have creativity, imagination, humor, ethics, persistence, and critical thinking.


There’s Nothing Soft About “Soft Skills”

To stay ahead of AI in an increasingly automated world, we need to start cultivating our most human abilities on a societal level. There’s nothing soft about these skills, and we can’t afford to leave them to chance.

We must revamp how and what we teach to nurture the critical skills of passion, curiosity, imagination, creativity, critical thinking, and persistence. In the era of AI, no one will be able to thrive without these abilities, and most people will need help acquiring and improving them.

Anything artificial intelligence does has to fit into a human-centered value system that takes our unique abilities into account. While we help AI get more powerful, we need to get better at being human.


Download the executive brief Human Skills for the Digital Future.


Read the full article The Human Factor in an AI Future.





About Kai Goerlich

Kai Goerlich is the Chief Futurist at the SAP Innovation Center Network. His specialties include competitive intelligence, market intelligence, corporate foresight, trends, futuring, and ideation.

Share your thoughts with Kai on Twitter @KaiGoe.

The Human Factor In An AI Future

Dan Wellers and Kai Goerlich

As artificial intelligence becomes more sophisticated and its ability to perform human tasks accelerates exponentially, we’re finally seeing some attempts to wrestle with what that means, not just for business, but for humanity as a whole.

From the first stone ax to the printing press to the latest ERP solution, technology that reduces or even eliminates physical and mental effort is as old as the human race itself. However, that doesn’t make each step forward any less uncomfortable for the people whose work is directly affected – and the rise of AI is qualitatively different from past developments.

Until now, we developed technology to handle specific routine tasks. A human needed to break down complex processes into their component tasks, determine how to automate each of those tasks, and finally create and refine the automation process. AI is different. Because AI can evaluate, select, act, and learn from its actions, it can be independent and self-sustaining.

Some people, like investor/inventor Elon Musk and Alibaba founder and chairman Jack Ma, are focusing intently on how AI will impact the labor market. It’s going to do far more than eliminate repetitive manual jobs like warehouse picking. Any job that involves routine problem-solving within existing structures, processes, and knowledge is ripe for handing over to a machine. Indeed, jobs like customer service, travel planning, medical diagnostics, stock trading, real estate, and even clothing design are already increasingly automated.

As for more complex problem-solving, we used to think it would take computers decades or even centuries to catch up to the nimble human mind, but we underestimated the exponential explosion of deep learning. IBM’s Watson trounced past Jeopardy champions in 2011 – and just last year, Google’s DeepMind AI beat the reigning European champion at Go, a game once thought too complex for even the most sophisticated computer.

Where does AI leave humans?

This raises an urgent question for the future: How do human beings maintain our economic value in a world in which AI will keep getting better than us at more and more things?

The concept of the technological singularity – the point at which machines attain superhuman intelligence and permanently outpace the human mind – is based on the idea that human thinking can’t evolve fast enough to keep up with technology. However, the limits of human performance have yet to be found. It’s possible that people are only at risk of lagging behind machines because nothing has forced us to test ourselves at scale.

Other than a handful of notable individual thinkers, scientists, and artists, most of humanity has met survival-level needs through mostly repetitive tasks. Most people don’t have the time or energy for higher-level activities. But as the human race faces the unique challenge of imminent obsolescence, we need to think of those activities not as luxuries, but as necessities. As technology replaces our traditional economic value, the economic system may stop attaching value to us entirely unless we determine the unique value humanity offers – and what we can and must do to cultivate the uniquely human skills that deliver that value.

Honing the human advantage

As a species, humans are driven to push past boundaries, to try new things, to build something worthwhile, and to make a difference. We have strong instincts to explore and enjoy novelty and risk – but according to psychologist Mihaly Csikszentmihalyi, these instincts crumble if we don’t cultivate them.

AI is brilliant at automating routine knowledge work and generating new insights from existing data. What it can’t do is deduce the existence, or even the possibility, of information it isn’t already aware of. It can’t imagine radical new products and business models. Or ask previously unconceptualized questions. Or envision unimagined opportunities and achievements. AI doesn’t even have common sense! As theoretical physicist Michio Kaku says, a robot doesn’t know that water is wet or that strings can pull but not push. Nor can robots engage in what Kaku calls “intellectual capitalism” – activities that involve creativity, imagination, leadership, analysis, humor, and original thought.

At the moment, though, we don’t generally value these so-called “soft skills” enough to prioritize them. We expect people to develop their competency in emotional intelligence, cross-cultural awareness, curiosity, critical thinking, and persistence organically, as if these skills simply emerge on their own given enough time. But there’s nothing soft about these skills, and we can’t afford to leave them to chance.

Lessons in being human

To stay ahead of AI in an increasingly automated world, we need to start cultivating our most human abilities on a societal level – and to do so not just as soon as possible, but as early as possible.

Singularity University chairman Peter Diamandis, for example, advocates revamping the elementary school curriculum to nurture the critical skills of passion, curiosity, imagination, critical thinking, and persistence. He envisions a curriculum that, among other things, teaches kids to communicate, ask questions, solve problems with creativity, empathy, and ethics, and accept failure as an opportunity to try again. These concepts aren’t necessarily new – Waldorf and Montessori schools have been encouraging similar approaches for decades – but increasing automation and digitization make them newly relevant and urgent.

The Mastery Transcript Consortium is approaching the same problem from the opposite side, by starting with outcomes. This organization is pushing to redesign the secondary school transcript to better reflect whether and how high school students are acquiring the necessary combination of creative, critical, and analytical abilities. By measuring student achievement in a more nuanced way than through letter grades and test scores, the consortium’s approach would inherently require schools to reverse-engineer their curricula to emphasize those abilities.

Most critically, this isn’t simply a concern of high-tuition private schools and “good school districts” intended to create tomorrow’s executives and high-level knowledge workers. One critical aspect of the challenge we face is the assumption that the vast majority of people are inevitably destined for lives that don’t require creativity or critical thinking – that either they will somehow be able to thrive anyway or their inability to thrive isn’t a cause for concern. In the era of AI, no one will be able to thrive without these abilities, which means that everyone will need help acquiring them. For humanitarian, political, and economic reasons, we cannot just write off a large percentage of the population as disposable.

In the end, anything an AI does has to fit into a human-centered value system that takes our unique human abilities into account. Why would we want to give up our humanity in favor of letting machines determine whether or not an action or idea is valuable? Instead, while we let artificial intelligence get better at being what it is, we need to get better at being human. That’s how we’ll keep coming up with groundbreaking new ideas like jazz music, graphic novels, self-driving cars, blockchain, machine learning – and AI itself.

Read the executive brief Human Skills for the Digital Future.

Build an intelligent enterprise with AI and machine learning to unite human expertise and computer insights. Run live with SAP Leonardo.

