Running for elective office is about building an organization to promote the candidate’s views. It’s about attracting professionals and volunteers to make calls and knock on doors. It’s about raising money. But today a political campaign of any size needs to be about collecting and analyzing data about voters: who supports the candidate, who can be persuaded, who may give money and who will show up at the polls.
The political parties provide their candidates with access to data, but access is just the start. To make productive use of data, campaigns must first ensure it is of high quality and then employ analysis to identify supporters, encourage them to volunteer and donate and, most importantly, get out to vote.
Sophisticated data analysis is not a level playing field—it costs a lot of money. What’s more, effective analysis of data creates a virtuous cycle: More resources enable a campaign to collect more data about voters’ views, to find more supporters, to refine messages that resonate, to recruit more volunteers, to attract even more donors and to get their supporters to the polls.
“One of the things that data helps do is to figure out which groups of voters do we need to target, and which groups of voters do we need to spend our time and our messaging resources on,” says Kreiss. “And how do we efficiently do that to get more votes, again on Election Day, than the next person. So data matters greatly in terms of resources.”
A number of companies have emerged in recent years to help candidates and campaigns crunch data. NationBuilder, for example, prides itself on serving all political persuasions. Emily Schwartz, NationBuilder’s vice president of organizing, says that today’s data tools make analytical resources available to local and grassroots campaigns as well as national ones.
NationBuilder’s service is free to try, then users pay monthly fees based on the level of services, such as software for email campaigns and campaign-focused websites.
Other organizations offer data help to candidates:
i360, whose backers include the conservative Koch brothers, bills itself as “the leading data and technology resource for the free market political advocacy community.”
NGP VAN is a voter-data management platform for Democratic candidates and progressive organizations.
The Republican National Committee’s Data Center 2016 project is a proprietary voter file designed to give its candidates ammunition in data-driven elections.
The Obama model
Like other political experts, Meta S. Brown, president of the consultancy A4A Brown Inc. and author of “Data Mining for Dummies,” points to President Obama’s 2012 re-election victory over Mitt Romney as a recent high point in the use of data. In that election, the Obama campaign sent out volunteers to go door to door asking voters their opinions.
“You can use that information in every way you campaign,” says Brown. “They can use it in advertising and how to do ad buys effectively. That was a big competitive advantage of the Obama campaign over the Romney campaign.”
That data crunching provided another type of competitive advantage, she adds: The Obama campaign could spend fewer ad dollars to reach voters, on average, because they selected only those TV programs and times they needed.
This kind of sophisticated data analysis goes well beyond TV. It enables a campaign to microtarget people through email messages, social media, online ads, and follow-up visits and phone calls to supporters’ homes to encourage them to vote.
Brown points to the Obama campaign’s expert use of social media as a model in the modern campaign. The campaign found supporters on Facebook (through their “likes” or message postings) and got them to send supportive messages to friends in swing states like Ohio. “That ability to take advantage of social media, that really is a tipping point,” says Brown.
The effective use of data can also reinforce a campaign’s relative strength against an opponent.
Campaigns that collect voters’ email addresses can conduct experiments on which messages are more effective, down to the email subject line, says Kreiss. By testing which type of message delivers better results—yielding more volunteers or more donations—the campaigns refine their efforts. The Obama campaign estimated that insights gained through these “A/B tests” added $100 million in donations during the 2012 race, according to Kreiss.
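The arithmetic behind such an A/B test is straightforward: send two subject lines to randomized slices of the email list and check whether the difference in response rates is bigger than chance. A minimal sketch in Python (the counts here are invented for illustration, not figures from any campaign), using a standard two-proportion z-test:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical send: subject line B out-converts subject line A.
z = two_proportion_z(conv_a=120, n_a=10_000, conv_b=165, n_b=10_000)
print(f"z = {z:.2f}")  # → z = 2.68
```

A |z| above roughly 1.96 means the difference is statistically significant at the 5% level, at which point the winning subject line goes out to the rest of the list.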
It starts with voter data
Campaigns start their data work with voter records. Voter data is largely a matter of public record in the United States. That includes names and mailing addresses, and often whether and when the person voted in past elections. It sometimes includes party affiliation.
For data experts, getting these names is just the start. The information needs to be in a format that computers can read. Voter rolls are subject to change as people move, die, change names or register for the first time. Keeping an accurate and up-to-date voter database is a major task for registration officials. A 2012 study by the Pew Center on the States found that about 24 million voter registrations in the United States, one in every eight, were no longer valid or were significantly inaccurate. And voter lists do not come in a standard format, says Schwartz of NationBuilder. Wisconsin, Virginia and Washington, for example, do not provide party affiliation with their voter lists.
Brown notes that data has to be organized to be useful, and that takes work. “It doesn’t matter who the source is, you have to expect some quality problems and do some investigations,” she says. That’s where the well-funded and data-savvy campaign has a significant advantage.
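To make the cleanup problem concrete, here is a minimal sketch of the kind of normalization and deduplication pass a campaign tool might run over raw voter rolls. The field names and records are invented for illustration, since, as noted above, voter files come in no standard format:

```python
import re

def normalize(record):
    """Canonicalize the fields most likely to vary between sources."""
    return {
        "name": re.sub(r"\s+", " ", record["name"]).strip().upper(),
        "street": re.sub(r"\s+", " ", record["street"]).strip().upper(),
        "zip": record["zip"][:5],                      # drop any ZIP+4 suffix
        "party": record.get("party") or "UNKNOWN",     # some states omit this field
    }

def dedupe(records):
    """Keep one record per (name, street, zip) key."""
    seen = {}
    for rec in map(normalize, records):
        key = (rec["name"], rec["street"], rec["zip"])
        seen.setdefault(key, rec)                      # first occurrence wins
    return list(seen.values())

rolls = [
    {"name": "Pat  Smith", "street": "12 Oak St",  "zip": "53703-1234", "party": "D"},
    {"name": "pat smith",  "street": "12  Oak St", "zip": "53703",      "party": None},
]
print(len(dedupe(rolls)))  # → 1
```

Real voter-file hygiene goes much further (address standardization, change-of-address checks, fuzzy name matching), but even a simple pass like this catches the formatting drift that creeps in when lists from different sources are merged.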
Data and our democracy
Experts on data analysis disagree on its implications for the country. Kreiss says cynics argue that candidates use data to manipulate voters, but he argues for the benefits of analysis.
“We live in world where it is a lot harder to reach voters than it was 40 years ago because people’s media habits have changed significantly,” he says. “So to the extent that data is enrolled in the ability of campaigns to actually figure out which sorts of voters should we be talking to, how do we mobilize them, how do we get them to the polls and, ultimately, what should we be saying in order to get people excited about particular candidates, I think that is a good thing for democracy.”
The rise of data crunching in politics also tends to favor incumbents, because those who have been in the game before will have more detailed data in their files.
Eitan Hersh, an assistant professor of political science at Yale, says that while political parties share data with candidates about voters in their district, incumbents are able to take advantage of the data they have collected in previous campaigns to reconnect with previously identified supporters.
“They make lists of everyone who asked for a yard sign and made a donation and volunteered. And if you have been in Congress for 10 years, and you have a whole bunch of people on that list, that can be very valuable,” says Hersh. “If a challenger comes around, they might not have that.”
The technology used to analyze voter data—which costs less than buying TV ads—can help an upstart to level the playing field, but it requires the insurgent campaign to have data savvy and a lot of volunteers.
“One huge advantage that is closely tied to data is volunteer support because a lot of the strategies that utilize data are things like door-to-door canvassing and phone banking, strategies where the data helps you [know] who to contact and helps you study the effectiveness of contacts,” says Hersh. “Once you have a certain level of data access, that data can power volunteers. And it’s hard to manufacture volunteers.”
Another effect of campaign data-crunching is the potential for increased voter turnout.
Ryan Enos, an associate professor of government at Harvard, co-authored a study of get-out-the-vote efforts in the 2012 presidential race. Both the Obama and Romney campaigns successfully used data-driven techniques to encourage citizens to vote. In total, the campaigns raised voter turnout by 2.6 million people, or 7 percent, says Enos. “If we define a healthy democracy as one with larger participation, then that is probably a good thing,” he says.
However, there is a participation gap in who responds to these get-out-the-vote efforts, says Enos. People who are already more likely to vote are more responsive. “What we find is that these sorts of techniques work best for people who tend to be politically conservative. They tend to be richer, they tend not to be racial minorities. In some ways what these techniques do is widen the gap in what we might call participators and non-participators,” he says.
At the end of the day, successful use of data is no substitute for a strong candidate. Schwartz, who worked on Obama’s 2008 campaign, notes that effectively using data saves time and money in targeting voters. It does not substitute for a candidate’s message. “They ran an incredible campaign,” she says of 2008. “But they also had Barack Obama.”
The business world is now firmly in the age of data. Not that data wasn’t relevant before; it just wasn’t available at anywhere near today’s speed and volume. Businesses are buckling under a deluge of petabytes, exabytes, and zettabytes. Within these bytes lies valuable information on customer behavior, key business insights, and revenue generation. However, all that data is practically useless to a business that cannot identify the right data. And without the talent and resources to capture the right data, organize it, dissect it, draw actionable insights from it and, finally, deliver those insights in a meaningful way, its data initiatives will fail.
Rise of the CDO
Companies of all sizes can easily find themselves drowning in data generated from websites, landing pages, social streams, emails, text messages, and many other sources. Additionally, there is data in their own repositories. With so much data at their disposal, companies are under mounting pressure to utilize it to generate insights. These insights are critical because they can (and should) drive the overall business strategy and help companies make better business decisions. To leverage the power of data analytics, businesses need more “top-management muscle” specialized in the field of data science. This specialized field has led to the creation of roles like Chief Data Officer (CDO).
In addition, with more companies undertaking digital transformations, there’s greater impetus for the C-suite to make data-driven decisions. The CDO helps make data-driven decisions and also develops a digital business strategy around those decisions. As data grows at an unstoppable rate, becoming an inseparable part of key business functions, we will see the CDO act as a bridge between other C-suite execs.
Data skills – an emerging business necessity
So far, only large enterprises with substantial data mining and management needs maintain in-house solutions: teams and technologies that handle their growing sets of diverse and dispersed data. Other companies work with third-party service providers to develop and execute their big data strategies.
As the amount of data grows, the need to mine it for insights becomes a key business requirement. For both large and small businesses, data-centric roles such as data analysts and data scientists will see strong upward mobility. There is going to be a huge opportunity for critical thinkers to turn their analytical skills into rapidly growing roles in the field of data science. In fact, data skills are now a prized qualification for titles like IT project manager and computer systems analyst.
Forbes cited the McKinsey Global Institute’s prediction that by 2018 there could be a massive shortage of data-skilled professionals. This points to a supply-demand imbalance, with the need for data skills at an all-time high. With an increasing number of companies adopting big data strategies, salaries for data jobs are going through the roof, turning these positions into highly coveted ones.
According to Harvard Professor Gary King, “There is a big data revolution. The big data revolution is that now we can do something with the data.” The big problem is that most enterprises don’t know what to do with data. Data professionals are helping businesses figure that out. So if you’re casting about for where to apply your skills and want to take advantage of one of the best career paths in the job market today, focus on data science.
About Daniel Newman
Daniel Newman serves as the Co-Founder and CEO of EC3, a quickly growing hosted IT and communication service provider. Prior to this role, Daniel held several prominent leadership positions, including serving as CEO of United Visual, parent company to United Visual Systems, United Visual Productions, and United GlobalComm, a family of companies focused on visual communications and audiovisual technologies.
Daniel is also widely published and active in the social media community. He is the author of the Amazon best-selling business book “The Millennial CEO.” Daniel also co-founded the global online community 12 Most and was recognized by the Huffington Post as one of the 100 Business and Leadership Accounts to Follow on Twitter.
Newman is an Adjunct Professor of Management at North Central College. He attained his undergraduate degree in Marketing at Northern Illinois University and an Executive MBA from North Central College in Naperville, IL. Newman currently resides in Aurora, Illinois with his wife (Lisa) and his two daughters (Hailey 9, Avery 5).
A Chicago native, Newman is an avid golfer, a fitness fan, and a classically trained pianist.
Self-service BI and data discovery will rapidly expand the number of users working with BI solutions. Yet many of these more casual users will not be well versed in BI and visualization best practices.
When your user base rapidly expands to more casual users, you need to help educate them on what is important. For example, one IT manager told me that his casual BI users were building very difficult-to-read charts and customizing color palettes to incredible degrees.
I had a similar experience when I was a technical writer. One of our lead writers was so concerned with the readability of every sentence that he went through the 300+ page manuals (yes, they were printed then) and manually adjusted all of the line breaks and page breaks. (!) Yes, readability was incrementally improved. But from then on, any number of changes (technical capabilities, edits, inserting larger graphics) required re-adjusting all of those manual “optimizations.” The initial optimization took an incredible amount of time, to say nothing of maintaining it, and meanwhile the technical writing team was falling behind on new deliverables.
The same scenario applies to your new casual BI users. This new group needs guidance to help them focus on the highest value practices:
Customization of color and appearance of visualizations: When is this customization necessary for a management deliverable, and when is it indulging an OCD tendency? I too have to stop myself from obsessing about the font, the line spacing, and the fact that a certain blue is just a bit different from another shade of blue. Yes, these options do matter. But help these casual users determine when that time is well spent.
Proper visualizations: When is a spinning 3D pie chart necessary to grab someone’s attention? BI professionals would firmly say “NEVER!” But these casual users do not have a lot of depth on BI best practices. Give them a few simple guidelines as to when “flash” starts to get in the way of understanding. Consider offering a monthly one-hour Lunch and Learn that shows them how to create impactful, polished visuals. Understanding whether their visualizations are going to be viewed casually on the way to a meeting, or dissected at a laptop, also helps determine how much time to spend optimizing a visualization. No, you can’t just mandate that they all read Tufte.
Predictive: Provide advanced analytics capabilities like forecasting and regression directly in their casual BI tools. Using these capabilities will really help them wow their audience with substance instead of flash.
Feature requests: Make sure you understand the motivation and business value behind some of the casual users’ requests. These casual users are less likely to understand the implications of supporting specific requests across an enterprise, so make sure you are collaborating on use cases and priorities for substantive requests.
By working with your casual BI users on the above points, you will be able to collectively understand when the absolute exact request is critical (and supports good visualization practices), and when it is an “optimization” that may impact productivity. In many cases, “good” is good enough for the fast turnaround of data discovery.
Next week, I’ll wrap this series up with hints on getting your casual users to embrace the “we” not “me” mentality.
In the tech world in 2017, several trends emerged as signals amid the noise, signifying much larger changes to come.
As we noted in last year’s More Than Noise list, things are changing—and the changes are occurring in ways that don’t necessarily fit into the prevailing narrative.
While many of 2017’s signals have a dark tint to them, perhaps reflecting the times we live in, we have sought out some rays of light to illuminate the way forward. The following signals differ considerably, but understanding them can help guide businesses in the right direction for 2018 and beyond.
When a team of psychologists, linguists, and software engineers created Woebot, an AI chatbot that helps people learn cognitive behavioral therapy techniques for managing mental health issues like anxiety and depression, they did something unusual, at least when it comes to chatbots: they submitted it for peer review.
Stanford University researchers recruited a sample group of 70 college-age participants on social media to take part in a randomized control study of Woebot. The researchers found that their creation was useful for improving anxiety and depression symptoms. A study of the user interaction with the bot was submitted for peer review and published in the Journal of Medical Internet Research Mental Health in June 2017.
While Woebot may not revolutionize the field of psychology, it could change the way we view AI development. Well-known figures such as Elon Musk and Bill Gates have expressed concerns that artificial intelligence is essentially ungovernable. Peer review, such as with the Stanford study, is one way to approach this challenge and figure out how to properly evaluate and find a place for these software programs.
The healthcare community could be onto something. We’ve already seen instances where AI chatbots have spun out of control, such as when internet trolls trained Microsoft’s Tay to become a hate-spewing misanthrope. Bots are only as good as their design; making sure they stay on message and don’t act in unexpected ways is crucial.
This is especially true in healthcare. When chatbots are offering therapeutic services, they must be properly designed, vetted, and tested to maintain patient safety.
It may be prudent to apply the same level of caution to a business setting. By treating chatbots as if they’re akin to medicine or drugs, we have a model for thorough vetting that, while not perfect, is generally effective and time tested.
It may seem like overkill to think of chatbots that manage pizza orders or help resolve parking tickets as potential health threats. But it’s already clear that AI can have unintended side effects that could extend far beyond Tay’s loathsome behavior.
For example, in July, Facebook shut down an experiment where it challenged two AIs to negotiate with each other over a trade. When the experiment began, the two chatbots quickly went rogue, developing linguistic shortcuts to reduce negotiating time and leaving their creators unable to understand what they were saying.
The implications are chilling. Do we want AIs interacting in a secret language because designers didn’t fully understand what they were designing?
In this context, the healthcare community’s conservative approach doesn’t seem so farfetched. Woebot could ultimately become an example of the kind of oversight that’s needed for all AIs.
Meanwhile, it’s clear that chatbots have great potential in healthcare—not just for treating mental health issues but for helping patients understand symptoms, build treatment regimens, and more. They could also help unclog barriers to healthcare, which is plagued worldwide by high prices, long wait times, and other challenges. While they are not a substitute for actual humans, chatbots can be used by anyone with a computer or smartphone, 24 hours a day, seven days a week, regardless of financial status.
Finding the right governance for AI development won’t happen overnight. But peer review, extensive internal quality analysis, and other processes will go a long way to ensuring bots function as expected. Otherwise, companies and their customers could pay a big price.
Elon Musk is an expert at dominating the news cycle with his sci-fi premonitions about space travel and high-speed hyperloops. However, he captured media attention in Australia in April 2017 for something much more down to earth: how to deal with blackouts and power outages.
In 2016, a massive blackout hit the state of South Australia following a storm. Although power was restored quickly in Adelaide, the capital, people in the wide stretches of arid desert that surround it spent days waiting for the power to return. That hit South Australia’s wine and livestock industries especially hard.
South Australia’s electrical grid currently gets more than half of its energy from wind and solar, with coal and gas plants acting as backups for when the sun hides or the wind doesn’t blow, according to ABC News Australia. But this network is vulnerable to sudden loss of generation—which is exactly what happened in the storm that caused the 2016 blackout, when tornadoes ripped through some key transmission lines. Getting the system back on stable footing has been an issue ever since.
Displaying his usual talent for showmanship, Musk stepped in and promised to build the world’s largest battery to store backup energy for the network—and he pledged to complete it within 100 days of signing the contract or the battery would be free. Pen met paper with South Australia and French utility Neoen in September. As of press time in November, construction was underway.
For South Australia, the Tesla deal offers an easy and secure way to store renewable energy. Once completed, Tesla’s 129 MWh battery will be the most powerful battery system in the world, beating the previous record holder by 60%, according to Gizmodo. The battery, which is stationed at a wind farm, will cover temporary drops in wind power and kick in to help conventional gas and coal plants balance generation with demand across the network. South Australian citizens and politicians largely support the project, which Tesla claims will be able to power 30,000 homes.
Until Musk made his bold promise, batteries did not figure much in renewable energy networks, mostly because they just aren’t that good. They have limited charges, are difficult to build, and are difficult to manage. Utilities also worry about relying on the same lithium-ion battery technology as cellphone makers like Samsung, whose Galaxy Note 7 had to be recalled in 2016 after some defective batteries burst into flames, according to CNET.
However, when made right, the batteries are safe. It’s just that they’ve traditionally been too expensive for large-scale uses such as renewable power storage. But battery innovations such as Tesla’s could radically change how we power the economy. According to a study that appeared this year in Nature, the continued drop in the cost of battery storage has made renewable energy price-competitive with traditional fossil fuels.
This is a massive shift. Or, as David Roberts of news site Vox puts it, “Batteries are soon going to disrupt power markets at all scales.” Furthermore, if the cost of batteries continues to drop, supply chains could experience radical energy cost savings. This could disrupt energy utilities, manufacturing, transportation, and construction, to name just a few, and create many opportunities while changing established business models. (For more on how renewable energy will affect business, read the feature “Tick Tock” in this issue.)
Battery research and development has become big business. Thanks to electric cars and powerful smartphones, there has been incredible pressure to make more powerful batteries that last longer between charges.
The proof of this is in the R&D funding pudding. A Brookings Institution report notes that both the Chinese and U.S. governments offer generous subsidies for lithium-ion battery advancement. Automakers such as Daimler and BMW have established divisions marketing residential and commercial energy storage products. Boeing, Airbus, Rolls-Royce, and General Electric are all experimenting with various electric propulsion systems for aircraft—which means that hybrid airplanes are also a possibility.
Meanwhile, governments around the world are accelerating battery research investment by banning internal combustion vehicles. Britain, France, India, and Norway are seeking to go all electric as early as 2025 and by 2040 at the latest.
In the meantime, expect huge investment and new battery innovation from interested parties across industries that all share a stake in the outcome. This past September, for example, Volkswagen announced a €50 billion research investment in batteries to help bring 300 electric vehicle models to market by 2030.
At first, it sounds like a narrative device from a science fiction novel or a particularly bad urban legend.
Powerful cameras in several Chinese cities capture photographs of jaywalkers as they cross the street and, several minutes later, display their photograph, name, and home address on a large screen posted at the intersection. Several days later, a summons appears in the offender’s mailbox demanding payment of a fine or fulfillment of community service.
As Orwellian as it seems, this technology is very real for residents of Jinan and several other Chinese cities. According to a Xinhua interview with Li Yong of the Jinan traffic police, “Since the new technology has been adopted, the cases of jaywalking have been reduced from 200 to 20 each day at the major intersection of Jingshi and Shungeng roads.”
The sophisticated cameras and facial recognition systems already used in China—and their near–real-time public shaming—are an example of how machine learning, mobile phone surveillance, and internet activity tracking are being used to censor and control populations. Most worryingly, the prospect of real-time surveillance makes running surveillance states such as the former East Germany and current North Korea much more financially efficient.
According to a 2015 discussion paper by the Institute for the Study of Labor, a German research center, by the 1980s almost 0.5% of the East German population was directly employed by the Stasi, the country’s state security service and secret police—1 for every 166 citizens. An additional 1.1% of the population (1 for every 66 citizens) were working as unofficial informers, which represented a massive economic drain. Automated, real-time, algorithm-driven monitoring could potentially drive the cost of controlling the population down substantially in police states—and elsewhere.
We could see a radical new era of censorship that is much more manipulative than anything that has come before. Previously, dissidents were identified when investigators manually combed through photos, read writings, or listened in on phone calls. Real-time algorithmic monitoring means that acts of perceived defiance can be identified and deleted in the moment and their perpetrators marked for swift judgment before they can make an impression on others.
Businesses need to be aware of the wider trend toward real-time, automated censorship and how it might be used in both commercial and governmental settings. These tools can easily be used in countries with unstable political dynamics and could become a real concern for businesses that operate across borders. Businesses must learn to educate and protect employees when technology can censor and punish in real time.
Indeed, the technologies used for this kind of repression could be easily adapted from those that have already been developed for businesses. For instance, both Facebook and Google use near–real-time facial identification algorithms that automatically identify people in images uploaded by users—which helps the companies build out their social graphs and target users with profitable advertisements. Automated algorithms also flag Facebook posts that potentially violate the company’s terms of service.
China is already using these technologies to control its own people in ways that are largely hidden to outsiders.
According to a report by the University of Toronto’s Citizen Lab, the popular Chinese social network WeChat operates under a policy its authors call “One App, Two Systems.” Users with Chinese phone numbers are subjected to dynamic keyword censorship that changes depending on current events and whether a user is in a private chat or in a group. Depending on the political winds, users are blocked from accessing a range of websites that report critically on China through WeChat’s internal browser. Non-Chinese users, however, are not subject to any of these restrictions.
The censorship is also designed to be invisible. Messages are blocked without any user notification, and China has intermittently blocked WhatsApp and other foreign social networks. As a result, Chinese users are steered toward national social networks, which are more compliant with government pressure.
China’s policies play into a larger global trend: the nationalization of the internet. China, Russia, the European Union, and the United States have all adopted different approaches to censorship, user privacy, and surveillance. Although there are social networks such as WeChat or Russia’s VKontakte that are popular in primarily one country, nationalizing the internet challenges users of multinational services such as Facebook and YouTube. These different approaches, which impact everything from data safe harbor laws to legal consequences for posting inflammatory material, have implications for businesses working in multiple countries, as well.
For instance, Twitter is legally obligated to hide Nazi and neo-fascist imagery and some tweets in Germany and France—but not elsewhere. YouTube was officially banned in Turkey for two years because of videos a Turkish court deemed “insulting to the memory of Mustafa Kemal Atatürk,” father of modern Turkey. In Russia, Google must keep Russian users’ personal data on servers located inside Russia to comply with government policy.
While China is a pioneer in the field of instant censorship, tech companies in the United States are matching China’s progress, which could potentially have a chilling effect on democracy. In 2016, Apple applied for a patent on technology that censors audio streams in real time—automating the previously manual process of censoring curse words in streaming audio.
In March, after U.S. President Donald Trump told Fox News, “I think maybe I wouldn’t be [president] if it wasn’t for Twitter,” Twitter co-founder Evan “Ev” Williams did something highly unusual for the creator of a massive social network.
Speaking with David Streitfeld of The New York Times, Williams said, “It’s a very bad thing, Twitter’s role in that. If it’s true that he wouldn’t be president if it weren’t for Twitter, then yeah, I’m sorry.”
Entrepreneurs tend to be very proud of their innovations. Williams, however, is far more ambivalent about his creation’s success. Much of the 2016 presidential election’s rancor was fueled by Twitter, whose instant gratification attracts trolls, bullies, and bigots just as easily as it attracts politicians, celebrities, comedians, and sports fans.
Services such as Twitter, Facebook, YouTube, and Instagram are designed, through a mix of look and feel, algorithmic wizardry, and psychological techniques, to hang on to users for as long as possible, which helps the services sell more advertisements and make more money. Toxic political discourse and online harassment are unintended side effects of this profit-driven urge to keep users engaged no matter what.
Keeping users’ eyeballs on their screens requires endless hours of multivariate testing, user research, and algorithm refinement. For instance, Casey Newton of tech publication The Verge notes that Google Brain, Google’s AI division, plays a key part in generating YouTube’s video recommendations.
According to Jim McFadden, the technical lead for YouTube recommendations, “Before, if I watch this video from a comedian, our recommendations were pretty good at saying, here’s another one just like it,” he told Newton. “But the Google Brain model figures out other comedians who are similar but not exactly the same—even more adjacent relationships. It’s able to see patterns that are less obvious.”
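The “adjacent relationships” McFadden describes can be sketched with a toy embedding model: items are placed in a vector space, and the system recommends whatever sits nearby, even when it is not a near-duplicate of what was watched. The vectors and item names below are made up for illustration; this is not YouTube’s actual system.

```python
# Illustrative embedding-based recommendation with hypothetical 2-D vectors.
import math

EMBEDDINGS = {
    "comedian_a": (0.9, 0.1),
    "comedian_b": (0.8, 0.2),  # similar but not identical style
    "news_show":  (0.1, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def recommend(watched, k=1):
    """Rank all other items by similarity to the watched item."""
    target = EMBEDDINGS[watched]
    others = [(name, cosine(target, vec))
              for name, vec in EMBEDDINGS.items() if name != watched]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in others[:k]]
```

In this sketch, watching `comedian_a` surfaces `comedian_b` rather than `news_show`, because the two comedians’ vectors point in nearly the same direction: the same “less obvious pattern” effect, in miniature, that a learned model achieves at scale.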
A never-ending flow of content that is interesting without being repetitive is harder to resist. With users glued to online services, addiction and other behavioral problems occur to an unhealthy degree. According to a 2016 poll by nonprofit research company Common Sense Media, 50% of American teenagers believe they are addicted to their smartphones.
This pattern is extending into the workplace. Seventy-five percent of companies surveyed by research company Harris Poll in 2016 said they lose two or more hours of productivity a day because employees are distracted. The number one culprit? Cellphones and texting, cited by 55% of the companies surveyed; another 41% pointed to the internet.
Tristan Harris, a former design ethicist at Google, argues that many product designers for online services try to exploit psychological vulnerabilities in a bid to keep users engaged for longer periods. Harris refers to an iPhone as “a slot machine in my pocket” and argues that user interface (UI) and user experience (UX) designers need to adopt something akin to a Hippocratic Oath to stop exploiting users’ psychological vulnerabilities.
In fact, there is an entire school of study devoted to “dark UX”: small design tweaks intended to increase profits. These range from the innocuous, such as a “Buy Now” button in a visually pleasing color, to the controversial. In 2012, according to an article in Wired, Facebook tweaked its algorithm to show a randomly selected group of almost 700,000 users, who had not given their permission, newsfeeds skewed more positive for some and more negative for others in order to gauge the impact on their emotional states.
As computers, smartphones, and televisions come ever closer to convergence, these issues matter increasingly to businesses. Some of the universal side effects of addiction are lost productivity at work and poor health. Businesses should offer training and help for employees who can’t stop checking their smartphones.
Mindfulness-centered mobile apps such as Headspace, Calm, and Forest offer one way to break the habit. Users can also choose to break internet addiction by going for a walk, turning their computers off, or using tools like StayFocusd or Freedom to block addictive websites or apps.
Most importantly, companies in the business of creating tech products need to design software and hardware that discourages addictive behavior. This means avoiding designs that put engagement metrics ahead of human health. A world in which pre-roll ads show up on smart refrigerator touchscreens at 2 a.m. benefits no one.
According to a 2014 study in Cyberpsychology, Behavior and Social Networking, approximately 6% of the world’s population suffers from internet addiction to one degree or another. As more users in emerging economies gain access to cheap data, smartphones, and laptops, that percentage will only increase. For businesses, getting a head start on stopping internet addiction will make employees happier and more productive.
About the Authors
Maurizio Cattaneo is Director, Delivery Execution, Energy, and Natural Resources, at SAP.
David Delaney is Global Vice President and Chief Medical Officer, SAP Health.
Volker Hildebrand is Global Vice President for SAP Hybris solutions.
Neal Ungerleider is a Los Angeles-based technology journalist and consultant.
As software shifts from supporting a company’s strategy to being the strategy, the relationship between vendor and customer in the IT industry, and even the sales process itself, is undergoing remarkable changes. The traditional IT salesman is an endangered species.
I recently had the pleasure of participating in a workshop with one of Scandinavia’s largest companies to create new business models in the company’s operations business area. As an IT vendor, we worked with the customer in an open process using the design thinking methodology—a creative process in which we jointly visualized, defined, and solidified how new flows of data can change business processes and their business models.
By working with “personas” relevant to their business, we could better understand how technology can help different roles in the involved departments deliver their contributions faster and more efficiently. The scope was completely open. We put our knowledge and experience with technological opportunities in parallel with the company’s own knowledge of the market, processes, and business.
The results may eventually trigger a software sale on our side, but we do not know which solution, or even whether a sale will happen at all. What we did do was innovate together and gain a better understanding of our customer’s future and viable routes to success. Such is the reality of strategic digitization work on the verge of 2018.
Solution selling is not enough
In my view, the boundary-crossing nature of technology is radically changing the way businesses and the sales process work. The IT industry, or at least parts of it, must focus on completely different types of collaboration with the customer.
Historically, the sales process has already undergone major changes. In the past, you’d find a product-fixated, used-car-sales approach that listed the characteristics of the box or solution and left it to the customer to figure out how it fit their needs. Later, a generation of IT key account managers learned “solution selling,” with a sharp focus on finding and defining a “pain point” at the customer and then positioning the solution against it. But today, even that approach falls short.
The challenge is that software solutions now support the formation of new, as-yet-unknown business models. They cut across processes and do not respect silo borders within organizations. Consequently, businesses struggle to define a clear operational path, and top management faces a much broader search for innovation potential. Creating a compelling vision itself requires a continuous, comprehensive study of what digitization can do for the value chain and for the company’s ecosystem.
Vendors abandon their customers when they are too busy selling tools and platforms to enter into a committed partnership to create the new business model. That is why the traditional IT salesperson, preoccupied with their own targets, is becoming an endangered species. The customer-driven process requires even key account managers to dig deep and truly understand the customer’s business. The best in the IT industry will move closer to the role of trusted adviser, mastering the required capabilities and accepting the risks and rewards that follow.
Leaving the comfort zone
This obviously has major consequences for the sales culture in the IT industry. Reward mechanisms and incentive structures need to be rethought in favor of behavior-based incentives. The individual IT salesperson is also on a personal journey: the end goal is no longer to close an order but to create visions and deliver value in partnership with the customer, in an ever-changing context where the future is volatile and unpredictable.
A key account manager is the customer’s traveling companion. Do not expect to reduce the complexity, stay in your comfort zone, and remain untouched by this change. Vendors should think bigger, and as an IT salesperson you need to demonstrate a capacity for transformational thinking. Everyone must be prepared to take the first baby steps, but there will certainly also be some who cannot handle the change. Disruption is not just something you, as a vendor, deliver to a customer. The noble art of being a digital vendor is facing some serious earthquakes.
About Jesper Schleimann
Chief Technology Officer, Nordic & Baltic region
In his role as Nordic CTO, Jesper's mission is to help customers unlock their business potential by simplifying their digital transformation. Jesper has a Cand.polit. from the University of Copenhagen as well as an Executive MBA from Copenhagen Business School.