To Cloud Or Not To Cloud: That Is The Compliance Question

Daniel Newman

Some companies have embraced cloud technology with open arms, while others approach it with extreme wariness—understandably so, since for industries that handle sensitive data, cloud security risks could spell disaster. However, fear of the cloud often comes from a lack of information rather than actual risk.

Companies today have the opportunity to use public, private, and hybrid cloud options, yet many technology leaders continue to vacillate in their cloud approach. Some still favor legacy solutions, even though they run slower and cost more. Others only use public or private cloud solutions minimally, and haven’t yet explored the possibility of hybrid solutions.

The possibilities of hybrid cloud

A hybrid cloud setup offers companies the greatest flexibility yet. If you have sensitive data, you can still use traditional networking for data storage while running some enterprise applications through a public or private cloud. A hybrid approach lets companies set up a customized cloud structure that fits their security, performance, and cost requirements.

The risk versus the reward  

Companies that look at the full threat landscape understand the potential risk of cloud solutions. While the cloud will always carry some level of risk, lost devices, human error, and other breach vectors often represent a greater vulnerability than a well-managed cloud solution. Furthermore, the third-party companies that run private, public, and hybrid cloud solutions have every incentive to protect you: your security is in a vendor’s best interest. Without strong security protocols and continual updates, they would lose clients and their reputation in the industry.

Organizations across every industry are moving to the cloud. They may not house everything there, but nearly all use it to some extent. Companies that fail to explore the possibilities today may not keep up with the changing digital needs of their target markets down the road. 2016 is the right time to explore a cloud migration.

Compliance and security: Hybrid cloud in tough verticals

Some industries must consider regulatory requirements before moving their enterprise applications and data into a cloud solution. Healthcare, government, and the finance industry all represent fields with data sensitivity concerns. Luckily, many vendors and a government program called FedRAMP now offer highly secure and customizable hybrid cloud solutions so certain industries can maintain compliance while continuing to transition to the cloud.

FedRAMP, healthcare, and government agencies. FedRAMP (the Federal Risk and Authorization Management Program) standardizes security assessment and authorization for cloud solutions. Vendors whose cloud offerings hold FedRAMP authorization meet the security requirements that sensitive industries need in order to move their data and applications into the cloud safely. Amazon, Microsoft, and IBM all offer FedRAMP-authorized cloud solutions for government agencies and other organizations.

The finance sector. Major banks, lenders, and other financial institutions—all heavily regulated—have discovered that the benefits of cloud migration outweigh the risks. In the cloud, they can work faster, with less downtime and more comprehensive data management, than with any traditional solution. Moving to the cloud is also highly cost-effective. Hybrid solutions benefit internal operations, but more importantly, they benefit the customer.

Choosing a hybrid cloud provider 

Hybrid cloud solutions are scalable, so companies can use them on a small scale before rolling out a comprehensive solution at the enterprise level. Companies that still harbor reservations may find this kind of phased approach more digestible. If you’re interested in seeing what the hybrid cloud can do for you, start by learning your compliance requirements. A cloud vendor that understands the regulatory constraints you face will know how to recommend a hybrid architecture that is both secure and compliant.

Cloud solutions offer more flexibility, faster transmission speeds, and enhanced productivity. Look for solutions that can support your needs today as well as where your industry is headed. Explore how moving to the cloud will change device policies, IoT acceptance, and remote-worker capabilities. A hybrid cloud investment will not only benefit your company today, it will also drive progress tomorrow.

 

This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.

Photo Credit: iebschool via Compfight cc

About Daniel Newman

Daniel Newman serves as the co-founder and CEO of EC3, a quickly growing hosted IT and communication service provider. Prior to this role, Daniel held several prominent leadership positions, including serving as CEO of United Visual, parent company to United Visual Systems, United Visual Productions, and United GlobalComm, a family of companies focused on visual communications and audiovisual technologies. Daniel is also widely published and active in the social media community. He is the author of the Amazon best-selling business book "The Millennial CEO." Daniel also co-founded the global online community 12 Most and was recognized by the Huffington Post as one of the 100 business and leadership accounts to follow on Twitter. Newman is an adjunct professor of management at North Central College. He earned his undergraduate degree in marketing at Northern Illinois University and an executive MBA from North Central College in Naperville, IL. Newman currently resides in Aurora, Illinois, with his wife, Lisa, and his two daughters, Hailey (9) and Avery (5). A Chicago native, Newman is an avid golfer, a fitness fan, and a classically trained pianist.

Cathy O’Neil: Unmasking Unconscious Bias in Algorithms

Fawn Fitter

In the wake of the 2008 banking crisis, Cathy O’Neil, a former Barnard College math professor turned hedge fund data scientist, realized that the algorithms she once believed would solve complex problems with pure logic were instead creating them at great speed and scale. Now O’Neil—who goes by mathbabe on her popular blog and 11,000-follower Twitter account—works at bringing to light the dark side of Big Data: mathematical models that operate without transparency, without regulation, and—worst of all—without recourse if they’re wrong. She’s the founder of the Lede Program for Data Journalism at Columbia University, and her bestselling book, Weapons of Math Destruction (Crown, 2016), was long-listed for the 2016 National Book Award.

We asked O’Neil about creating accountability for mathematical models that businesses use to make critical decisions.

Q. If an algorithm applies rules equally across the board, how can the results be biased?

Cathy O’Neil: Algorithms aren’t inherently fair or trustworthy just because they’re mathematical. “Garbage in, garbage out” still holds.

There are many examples: On Wall Street, the mortgage-backed security algorithms failed because they were simply a lie. A program designed to assess teacher performance based only on test results fails because it’s just bad statistics; moreover, there’s much more to learning than testing. A tailored advertising startup I worked for created a system that served ads for things users wanted, but for-profit colleges used that same infrastructure to identify and prey on low-income single mothers who could ill afford useless degrees. Models in the justice system that recommend sentences and predict recidivism tend to be based on terribly biased policing data, particularly arrest records, so their predictions are often racially skewed.

Q. Does bias have to be introduced deliberately for an algorithm to make skewed predictions?

O’Neil: No! Imagine that a company with a history of discriminating against women wants to get more women into the management pipeline and chooses to use a machine-learning algorithm to select potential hires more objectively. They train that algorithm with historical data about successful hires from the last 20 years, and they define successful hires as people they retained for 5 years and promoted at least twice.

They have great intentions. They aren’t trying to be biased; they’re trying to mitigate bias. But if they’re training the algorithm with past data from a time when they treated their female hires in ways that made it impossible for them to meet that specific definition of success, the algorithm will learn to filter women out of the current application pool, which is exactly what they didn’t want.
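
To make the mechanism concrete, here is a minimal synthetic sketch in Python. It illustrates the training-data problem only; the data, the features, and the success definition are all fabricated assumptions, not any company’s actual hiring system.

```python
# Toy illustration: a model trained on biased "historical success" labels
# learns to score equally qualified women lower.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
is_female = rng.integers(0, 2, n)        # 1 = female, 0 = male
skill = rng.normal(0, 1, n)              # identical skill distributions

# Historical label: "retained five years and promoted twice." In this toy
# history, biased treatment made that outcome far rarer for women.
promoted = (skill + rng.normal(0, 1, n) - 1.5 * is_female) > 0

X = np.column_stack([skill, is_female])  # gender leaks into the features
model = LogisticRegression().fit(X, promoted)

# Two equally skilled candidates, differing only in gender:
male, female = [[1.0, 0]], [[1.0, 1]]
print("male score:   %.2f" % model.predict_proba(male)[0, 1])
print("female score: %.2f" % model.predict_proba(female)[0, 1])
# The model reproduces the historical bias: the female candidate scores lower.
```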

I’m not criticizing the concept of Big Data. I’m simply cautioning everyone to beware of oversized claims about and blind trust in mathematical models.

Q. What safety nets can business leaders set up to counter bias that might be harmful to their business?

O’Neil: They need to ask questions about, and support processes for, evaluating the algorithms they plan to deploy. As a start, they should demand evidence that an algorithm works as they want it to, and if that evidence isn’t available, they shouldn’t deploy it. Otherwise they’re just automating their problems.

Once an algorithm is in place, organizations need to test whether their data models look fair in real life. For example, the company I mentioned earlier that wants to hire more women into its management pipeline could look at the proportion of women applying for a job before and after deploying the algorithm. If applications drop from 50% women to 25% women, that simple measurement is a sign something might be wrong and requires further checking.
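
As a concrete illustration of that measurement, here is a minimal monitoring sketch in Python. The 10-percentage-point threshold is an illustrative assumption; a real deployment would also want a proper significance test for its application volumes.

```python
# Flag a large drop in the share of women applying after an algorithm
# is deployed, so a human can investigate.
def applicant_share_check(before_women, before_total,
                          after_women, after_total, max_drop=0.10):
    before = before_women / before_total
    after = after_women / after_total
    if before - after > max_drop:
        return f"FLAG: women applicants fell from {before:.0%} to {after:.0%}"
    return f"OK: {before:.0%} -> {after:.0%}"

print(applicant_share_check(500, 1000, 250, 1000))  # 50% -> 25%: flagged
```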

Very few organizations build in processes to assess and improve their algorithms. One that does is Amazon: Every single step of its checkout experience is optimized, and if it suggests a product that I and people like me don’t like, the algorithm notices and stops showing it. It’s a productive feedback loop because Amazon pays attention to whether customers are actually taking the algorithm’s suggestions.
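
The mechanism behind such a loop can be sketched in a few lines. This is a toy model assuming a simple take-rate rule, not a description of Amazon’s actual system.

```python
# Track how often each suggestion is taken; retire chronic losers.
class SuggestionLoop:
    def __init__(self, min_trials=100, min_take_rate=0.02):
        self.stats = {}                       # product_id -> [shown, taken]
        self.min_trials = min_trials
        self.min_take_rate = min_take_rate

    def record(self, product_id, taken):
        shown, took = self.stats.setdefault(product_id, [0, 0])
        self.stats[product_id] = [shown + 1, took + int(taken)]

    def should_show(self, product_id):
        shown, took = self.stats.get(product_id, [0, 0])
        if shown < self.min_trials:
            return True                       # still gathering evidence
        return took / shown >= self.min_take_rate

loop = SuggestionLoop(min_trials=3, min_take_rate=0.5)
for taken in [False, False, False]:
    loop.record("umbrella", taken)
print(loop.should_show("umbrella"))           # False: stop suggesting it
```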

Q. You repeatedly warn about the dangers of using machine learning to codify past mistakes, essentially, “If you do what you’ve always done, you’ll get what you’ve always gotten.” What is the greatest risk companies take when trusting their decision making to data models?

O’Neil: The greatest risk is to trust the data model itself not to expose you to risk, particularly legally actionable risk. Any time you’re considering using an algorithm under regulated conditions, like hiring, promotion, or surveillance, you absolutely must audit it for legality. This seems completely obvious; if it’s illegal to discriminate against people based on certain criteria, for example, you shouldn’t use an algorithm that does so! And yet companies often use discriminatory algorithms because it doesn’t occur to them to ask about it, or they don’t know the right questions to ask, or the vendor or developer hasn’t provided enough visibility into the algorithm for the question to be easily answered.
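
One widely used audit of this kind is the “four-fifths rule” from U.S. EEOC guidance on disparate impact: a protected group’s selection rate should be at least 80% of the highest group’s rate. A minimal sketch in Python (a screening heuristic, not legal advice):

```python
# Compare selection rates across groups against the 80% threshold.
def four_fifths_check(selected, applicants):
    """selected/applicants: dicts mapping group name -> counts."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (round(r / top, 2), r / top >= 0.8) for g, r in rates.items()}

print(four_fifths_check({"men": 60, "women": 30},
                        {"men": 100, "women": 100}))
# {'men': (1.0, True), 'women': (0.5, False)} -> warrants a deeper audit
```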

Q. What are the ramifications for businesses if they persist in believing that data is neutral?

O’Neil: As more evidence comes out that poorly designed algorithms cause problems, I think that people who use them are going to be held accountable for bad outcomes. The era of plausible deniability for the results of using Big Data—that ability to say they were generated without your knowledge—is coming to an end. Right now, algorithm-based decision making is a few miles ahead of lawyers and regulations, but I don’t think that’s going to last. Regulators are already taking steps toward auditing algorithms for illegal properties.

Whenever you use an automated system, it generates a history of its use. If you use an algorithm that’s illegally biased, the evidence will be there in the form of an audit trail. This is a permanent record, and we need to think about our responsibility to ensure it’s working well. D!

The (R)evolution of PLM, Part 3: Using Digital Twins Throughout The Product Lifecycle

John McNiff

In Part 1 of this series we explored why manufacturers must embrace “live” PLM. In Part 2 we examined the new dimensions of a product-centric enterprise. In Part 3 we look at the role of digital twins.

It’s time to start using digital twins throughout the product lifecycle. In fact, to compete in the digital economy, manufacturers will need to achieve a truly product-centric enterprise in which digital twins guide not only engineering and maintenance, but every business-critical function, from procurement to HR.

Why is this necessary? Because product lifecycles are shrinking. Companies are managing ever-growing streams of data. And customers are demanding product individualization. The only way for manufacturers to respond is to use digital twins to place the product – the highly configurable, endlessly customizable, increasingly connected product – at the center of their operations.

Double the insight

Digital twins are virtual representations of real-world products or assets. They’re a Top 10 strategic trend for 2017, according to Gartner. And they’re part of a broader digital transformation in which IDC says companies will invest $2.1 trillion a year by 2019.

Digital twins aren’t a new concept, but their application throughout the product lifecycle is. Here are key ways smart manufacturers will leverage digital twins – and achieve a product-centric and model-based enterprise – across operations:

Design and engineering: Traditionally, digital twins have been used by design and engineering to create virtual representations for designing and enhancing products. In this application, the digital twin actually exists before its physical counterpart does, essentially starting out as a vision of what the product should be. But you can also capture data on in-the-field product use and apply that to the digital twin for continuous product improvement.

Maintenance and service: Today, the most common use case for digital twins is maintenance and service. By creating a virtual representation of an asset in the field using lightweight model visualization, and then capturing data from smart sensors embedded in the asset, you can gain a complete picture of real-world performance and operating conditions. You can also simulate that real-world environment for predictive maintenance. Let’s say you manufacture wind turbines. You can capture data on rotor speed, wind speed, operating temperature, ambient temperature, humidity, and so on to understand and predict product performance. By doing so, you can schedule maintenance before a crucial part breaks – optimizing uptime and saving the time and cost of a repair. (A minimal code sketch of this scenario appears after this list.)

Quality control: Just as digital twins can help with maintenance and service, they can predictively improve quality during manufacturing. You can also use digital twins to compare quality data across multiple products to better understand global quality issues and quickly visualize issues against the model. And you can apply data collected by maintenance and service to achieve ongoing quality improvements.

Customization: As products become more customizable, digital twins will allow design and engineering to model the various permutations. But digital twins can also incorporate customer demand and usage data to enhance customization options. That sounds obvious, but in the past it was very difficult to incorporate customer input into the manufacturing process. Let’s say you sell high-end custom bikes. You might allow customers to choose different colors, wheels, and other details. By capturing customer preferences in the digital twin, you can get a picture of customer demand. And by capturing customer usage data, you can understand how custom configurations affect product performance. So you can offer the most reliable options or allow customers to configure your products based on performance attributes. You can also visualize lightweight representations of the twin without the burden of heavyweight design systems and parameters.

Finance and procurement: In our custom-configured bike example, different configurations involve different costs. And those different costs involve not only the cost of the various components, but also the cost for assembling the various configurations. By capturing sales data in the digital twin, you can understand which configurations are being ordered and how configuration-specific revenues compare to the cost to build each configuration. What’s more, you can link that data with supplier information. That will help you understand which suppliers contribute to product configurations that perform well in the field. It also can help you identify opportunities to cost-effectively rid yourself of excess supply.

Sales and marketing: The digital twin can also inform sales and marketing. For instance, you can use the digital twin to populate an online product configurator and e-commerce website. That way you can be sure what you’re selling is always tied directly to what you’re engineering in the design studio and what you’re servicing in the field.

Human resources: The digital twin can even extend into HR. For example, you can use the digital twin to understand training and certification needs and be sure the right people are trained on the right product features.
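
Here is the minimal sketch of the maintenance-and-service scenario promised above, in Python. The sensor fields, thresholds, and rolling-window rule are illustrative assumptions, not any vendor’s data model.

```python
# A digital twin accumulates sensor readings and flags maintenance
# before a predicted failure.
from statistics import mean

class TurbineTwin:
    def __init__(self, asset_id, max_bearing_temp_c=85.0, window=12):
        self.asset_id = asset_id
        self.max_temp = max_bearing_temp_c
        self.window = window                  # readings in the rolling window
        self.temps = []

    def ingest(self, reading):
        """reading: dict of values from the asset's smart sensors."""
        self.temps.append(reading["bearing_temp_c"])
        self.temps = self.temps[-self.window:]

    def needs_maintenance(self):
        # Flag when the rolling average trends toward the design limit,
        # i.e., before the part actually fails.
        return (len(self.temps) == self.window
                and mean(self.temps) > 0.9 * self.max_temp)

twin = TurbineTwin("WT-042")
for t in [70, 72, 75, 74, 77, 78, 80, 81, 79, 82, 83, 84]:
    twin.ingest({"bearing_temp_c": t})
print(twin.needs_maintenance())               # True: schedule service early
```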

One twin, many views

Digital twins should underlie all manufacturing operations. Ideally, you should have a single set of digital twin master data that resides in a central location. That gives you one version of the truth, and with in-memory computing plus a lightweight, change-controlled model capability, you’ll be able to analyze and visualize that data rapidly.

But not all business functions care about the entire data set. You need to deliver the right data to the right people at the right time. Design and engineering requires one set of data, with every specification and tolerance needed to create and continuously improve the product. Sales and marketing requires another set of data, with the features and functions customers can select. And so on.
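
A minimal sketch of that idea in Python: one master record, with each business function receiving only the slice it needs. The field names are illustrative assumptions.

```python
# One version of the truth...
TWIN_MASTER = {
    "asset_id": "BIKE-0091",
    "geometry_cad_ref": "cad://frames/0091",
    "tolerances_mm": {"head_tube": 0.05},
    "options": {"color": ["red", "black"], "wheels": ["road", "gravel"]},
    "unit_cost_usd": 1480.0,
    "service_history": ["2017-01-12: brake recall check"],
}

# ...and one view per business function.
VIEWS = {
    "engineering": ["geometry_cad_ref", "tolerances_mm"],
    "sales_marketing": ["options"],
    "finance": ["unit_cost_usd"],
    "service": ["service_history"],
}

def view_for(function):
    return {key: TWIN_MASTER[key] for key in VIEWS[function]}

print(view_for("sales_marketing"))   # only what sales needs to configure
```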

Ultimately, as the digital product innovation platform extends the dimensions of traditional PLM, an extended version of the digital twin sits at the heart of it. In future blogs we’ll talk about how you can leverage the latest-generation platform from SAP, based on SAP S/4HANA and SAP’s platform for the Internet of Everything, to achieve a live, visual, and intelligent product-centric enterprise.

To learn how a live supply chain can help your business, visit us at SAP.com.

About John McNiff

John McNiff is the Vice President of Solution Management for the R&D/Engineering line-of-business unit at SAP. John has held a number of sales and business development roles at SAP, focused on manufacturing and engineering topics.

How AI Can End Bias

Yvonne Baur, Brenda Reid, Steve Hunt, and Fawn Fitter

We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.

Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.

In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. And the damage of having to defend against an accusation of bias can linger long after the issue itself is settled.

Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.

That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fair, more productive, and ultimately better for both business and the broader society.

Bias: Bad for Business

When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.

Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.

As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.

Imagine, for example, a major company that realizes its past hiring practices were biased against women and that it would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.

AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.

That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.

At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.

Look for Where Bias Already Exists

In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.

There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.
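
A simple version of that screen can be sketched as a correlation check between a candidate input variable and a protected attribute. The 0.3 cutoff is an illustrative assumption; real regulatory reviews are far more involved.

```python
# Reject model inputs that act as proxies for protected attributes.
import numpy as np

def proxy_screen(feature, protected, max_abs_corr=0.3):
    r = np.corrcoef(feature, protected)[0, 1]
    return abs(r), abs(r) <= max_abs_corr

rng = np.random.default_rng(1)
protected = rng.integers(0, 2, 500)                  # e.g., a gender flag
proxy = protected * 0.8 + rng.normal(0, 1, 500)      # a correlated variable
r, usable = proxy_screen(proxy, protected)
print(f"|corr| = {r:.2f}, usable as input: {usable}")  # fails the screen
```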

Code Is Only Human

The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.

“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)

Don’t Do What You’ve Always Done

To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.

SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as “No women need apply,” which everyone knows is discriminatory, but phrases like “outspoken” and “aggressively pursuing opportunities,” which are proven to attract male job applicants and repel female applicants, and words like “caring” and “flexible,” which do the opposite.

Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
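
The flag-and-suggest step might look like the following Python sketch. The word lists and substitutions are illustrative assumptions, not SAP’s actual lexicon.

```python
# Flag gender-coded words in a job posting and suggest neutral swaps.
GENDER_CODED = {
    "outspoken": "clear communicator",
    "aggressive": "proactive",
    "dominant": "leading",
    "caring": "attentive to colleagues",
    "flexible": "adaptable",
}

def flag_posting(text):
    suggestions = []
    for word, neutral in GENDER_CODED.items():
        if word in text.lower():
            suggestions.append(f"'{word}' may skew applicants; try '{neutral}'")
    return suggestions

posting = "We want an outspoken engineer aggressively pursuing opportunities."
for s in flag_posting(posting):
    print(s)
```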

Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.

Look Beyond the Obvious

AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.

To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.

Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.

That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.

Context Matters—and Context Changes

Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.

Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word “sick” to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience if a model confuses the two meanings, to say the least. Correcting for that, or adding more rules to the algorithm, such as “the word ‘sick’ appears in proximity to positive emoji,” takes human oversight.
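
That kind of rule is straightforward to express in code. A minimal Python sketch, assuming a toy sentiment lexicon and emoji list:

```python
# Override the dictionary sentiment of "sick" when it appears near
# positive emoji, capturing the slang sense.
POSITIVE_EMOJI = {"🔥", "😍", "🙌", "👍"}
LEXICON = {"sick": -1.0, "good": 1.0, "impressive": 1.0}

def score(tokens, window=2):
    total = 0.0
    for i, tok in enumerate(tokens):
        value = LEXICON.get(tok.lower(), 0.0)
        if tok.lower() == "sick":
            nearby = tokens[max(0, i - window): i + window + 1]
            if any(t in POSITIVE_EMOJI for t in nearby):
                value = 1.0          # slang sense: "sick" means "great"
        total += value
    return total

print(score("that demo was sick 🔥".split()))   # +1.0, not -1.0
```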

Moving Forward with AI

Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.

In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.

Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.

O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.

As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!

Read more thought-provoking articles in the latest issue of Digitalist Magazine, Executive Quarterly.


About the Authors:

Yvonne Baur is Head of Predictive Analytics for SAP SuccessFactors solutions.

Brenda Reid is Vice President of Product Management for SAP SuccessFactors solutions.

Steve Hunt is Senior Vice President of Human Capital Management Research for SAP SuccessFactors solutions.

Fawn Fitter is a freelance writer specializing in business and technology.

2017: The Year Businesses Will Learn The True Meaning Of Digital Transformation

Hu Yoshida

Over the last 10 years, the exponential growth and power of technology have brought some fascinating, if not mind-bending, opportunities. Machines talk to one another with computer-connected humans on the other end observing, analyzing, and acting on the explosion of Big Data generated. Doctors use algorithms that mine patient history or genetic information to suggest possible diagnoses and treatments. Cars are programmed with data-driven precision to direct drivers along the best possible route to their destination. And digital libraries for 3D parts are growing rapidly – possibly to the point where we can soon print whatever we need.

With all of this technology, it is common sense to believe that productivity would also rise over the same span of time. However, according to a 2016 productivity report released by the Organisation for Economic Co-operation and Development (OECD), this is, sadly, not the case. In fact, most advanced and emerging countries are experiencing declining productivity growth that cuts across nearly all sectors and affects both large and small firms. But more interesting is the agency’s observation that this trend does not exclude areas where digital innovation is expected to improve information sharing, communication, and finance.

See how IT can help organizations shift to real-time operations. Read the EIU report.

Although nearly 5 billion people on our planet have a computer in their pocket or their hands at any moment of the day, our digital ways have not translated into productivity gains for the enterprise. The culprit? Businesses are not changing their processes to allow that technology to reach its full potential.

Technology alone does not bring real digital transformation

Every week, I hear how companies worldwide are excited about their digital transformation initiatives. Some are developing their own applications or executing a new digital commerce strategy. Others may decide to deploy a new analytics tool. No matter the investment, there is always great hope for success. Yet these initiatives often fall short because the focus is typically on how technology will change the business – not on how the enterprise must change to fully embrace the digital innovation’s potential.

Take, for example, a bank’s decision to allow the loan process to be initiated through a mobile app or online store. The bank may receive the information from the consumer faster than ever before, but no real benefit is achieved if it still takes three weeks to approve or decline the loan request. Technology may be changing the customer experience online, but back-office processes are unaffected. The same old ways of work are still happening, and productivity is not improving. For a digital world where everything is supposed to be automatic and immediate, a customer will inevitably turn to a competitor that will approve the loan faster.

True digital transformation requires more than technology. Companies must evolve their processes with a keen focus on outcomes, not just infrastructure. All too often, they create a digital facade: what looks like a digital experience for the customer while, in reality, the back office has not caught up to support that level of digitization.

Deep digital transformation starts with process innovation

In the coming year, most companies will look to transition to real-time analytics that drive predictive decision-making, possibly drawing from the Internet of Things. While this technology presents a clear opportunity for greater insight, organizations are no better off unless they transform their business processes to act quickly on those insights.

Traditional data processes require days to move data from one database to another, process it, and generate reports in an easy-to-understand format. In-memory computing accelerates these processes from days and weeks to hours and minutes – paving the way for transformative power by moving decision-making closer to data generation. However, no matter how fast the analysis, no benefit is realized if downstream processes and decisions do not capitalize on the resulting insight. Like the loan process I mentioned earlier, you need to make sure that the back office and front office are aligned in order to produce improved business outcomes. Legacy systems and databases may still hinder the ability to achieve faster results, unless they are aligned with in-memory analytics.

The ability to modernize core systems with technologies like in-memory computing and innovative new applications can prove to be highly transformational. The key is to integrate these new technologies into an overall business architecture to support digital transformation and deliver real business improvements.

Are you ready to transform your business? Learn 4 Ways to Digitally Disrupt Your Business Without Destroying It.

About Hu Yoshida

Hu Yoshida is responsible for defining the technical direction of Hitachi Data Systems. Currently, he leads the company's effort to help customers address data lifecycle requirements and resolve compliance, governance, and operational risk issues. He was instrumental in evangelizing the unique Hitachi approach to storage virtualization, which leveraged existing storage services within the Hitachi Universal Storage Platform® and extended them to externally attached, heterogeneous storage systems. Yoshida is well known within the storage industry, and his blog has ranked among the "top 10 most influential" in the storage industry as evaluated by Network World. In October 2006, Byte and Switch named him one of Storage Networking's Heaviest Hitters, and in 2013 he was named one of the "Ten Most Impactful Tech Leaders" by InformationWeek.