Conquer Supply Chain Resource Scarcity With These 7 Technologies

Gabriele Pfaffmann

“Scarce.” It’s a great word. It’s probably a term that first appeared to me on some middle school vocabulary test. Little did I know then that the word would become such an integral part of my everyday lexicon as I embarked on a career in supply chain.

Oddly enough, the word “scarce” has always evoked a feeling of fear in me. Part of that is likely because it’s a mere one letter off from the word “scare.” More likely, I think it’s the fact that running out of something, particularly some precious resource, is a truly frightening prospect.

In today’s increasingly complex supply chain industry, resource scarcity is certainly something to fear – but it’s an issue that organizations can unquestionably overcome as long as they have the right strategies in place.

SCM World recently published a report on the topic of resource scarcity. The paper provides insight on how supply chain organizations can manage their operations in a more sustainable fashion. In essence, the research firm touts technology as a key enabler to conquering today’s greatest resource scarcity concerns.

How is resource scarcity impacting supply chain organizations?

By 2050, the world’s population is expected to grow to 9.7 billion, up from approximately 7.4 billion today, according to a 2015 United Nations Department of Economic and Social Affairs report. This will put a tremendous strain on the world’s water, raw material, mineral, and energy resources.

A water shortage will greatly affect chemical and industrial companies that use the resource in their manufacturing processes.

The declining availability and, consequently, rising costs of raw materials and minerals will adversely impact end-product affordability, squeezing margins for supply chain organizations.

Oil and gas companies face similar obstacles, with energy resource shortages resulting in high levels of commodity price volatility.

On top of all this is the increasingly concerning issue of human resource scarcity, which 56% of supply chain executives refer to as “extremely challenging,” according to a recent SCM World survey.

Technology is the key to managing resource scarcity

While a majority of surveyed supply chain practitioners believe they’re successfully managing resource scarcity – with 66% proclaiming themselves to be “first movers” or “fast followers” – many still lack the capabilities to overcome their challenges.

According to SCM World and the organizations thriving in this area, these seven technologies hold the key to addressing natural resource scarcity and sustainability:

1. Blockchain

This distributed public ledger is a practical way for companies to trace the origin of raw materials back through the supply chain. With a shared, tamper-evident list of records, known as blocks, supply chain organizations can better track their resource supply and plan for potential shortages.
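To make the traceability idea concrete, here is a minimal sketch, in Python, of how hash-chained records make upstream tampering detectable. The materials, parties, and field names are invented for illustration, and a real blockchain adds consensus and distribution on top of this.

```python
import hashlib
import json

def make_block(payload, prev_hash):
    """Create a ledger record whose hash covers its payload and predecessor."""
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    """Re-derive each hash and check every link to the previous block."""
    for i, block in enumerate(chain):
        body = {"payload": block["payload"], "prev_hash": block["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False  # a record was altered after the fact
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

# Trace a raw material across two supply chain hops (hypothetical data).
genesis = make_block({"material": "cobalt", "origin": "Mine A"}, "0" * 64)
hop = make_block({"material": "cobalt", "holder": "Refinery B"}, genesis["hash"])
ledger = [genesis, hop]
```

Because each record’s hash covers the previous record’s hash, quietly rewriting an origin claim anywhere upstream invalidates every block after it.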

2. Sensors

More and more enterprises are leveraging the power of sensors and other Internet of Things-enabled devices to monitor and manage the use of water, energy, and other resources at production plants and distribution centers.
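A simple version of this monitoring can be sketched as a threshold check over daily sensor totals; the resources, limits, and readings below are hypothetical.

```python
# Hypothetical daily consumption limits for one production plant.
DAILY_LIMITS = {"water_m3": 500, "energy_kwh": 12_000}

def flag_overconsumption(readings, limits=DAILY_LIMITS):
    """Return the resources whose daily consumption exceeds their limit."""
    return {r: v for r, v in readings.items() if v > limits.get(r, float("inf"))}

# Water exceeds its limit; energy does not.
alerts = flag_overconsumption({"water_m3": 640, "energy_kwh": 9_800})
```

In practice the readings would stream in from IoT-connected meters, with alerts routed to plant operations.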

3. Drones

The military uses drones for combat missions. Children and adults play with toy drones in their leisure time. Now, supply chain professionals are employing drones to do things like check for leaks in hard-to-access oil pipelines.

4. 3D printing

Businesses are using 3D printing for everything today, from action figures to food. In supply chain, organizations leverage the power of 3D printing to reduce waste during the production process.

5 & 6. Big Data analytics and GPS

Agricultural companies combine the capabilities of GPS and Big Data analytics to optimize their growth activities. They can use these technologies to scout for prime locations to harvest and gain insight on how to improve crop yields.

7. Social media

Businesses regularly mine social media for customer feedback and potential product quality concerns. Supply chain organizations are monitoring the chatter on social media in order to flag certain supply chain risks. For example, social media can help supply chain experts gain vital information on how people in an area are reacting to severe weather events that may result in supply chain disruptions or resource availability.
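One rough sketch of such monitoring is a keyword filter over incoming posts; the watch list and example posts below are invented, and a production system would use proper text classification rather than substring matching.

```python
# Hypothetical watch list of disruption-related terms.
RISK_TERMS = {"flood", "hurricane", "shortage", "road closed"}

def flag_risky_posts(posts):
    """Return posts mentioning any watched disruption term (case-insensitive)."""
    return [p for p in posts if any(t in p.lower() for t in RISK_TERMS)]

posts = [
    "Hurricane warnings issued for the coast tonight",
    "Great lunch at the port cafe",
    "Main road closed near the distribution center",
]
flagged = flag_risky_posts(posts)  # first and third posts
```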

Don’t let resource scarcity doom your business

No supply chain organization is exempt from resource scarcity. At some point or another, your enterprise will be impacted by a shortage in water, minerals, raw materials, energy, or skilled labor, and you must understand how to overcome this obstacle.

Cutting-edge innovations, such as those listed above, can provide your company with greater visibility into its resources, as well as insight on how your organization can prevent and address its resource scarcity concerns internally and across the entire digital supply chain.

Download the full SCM World report – Resource Scarcity: Supply Chain Strategies for Sustainable Business – to learn more about managing resource scarcity with the latest technology and to find out how leading companies throughout supply chain are already achieving this.


About Gabriele Pfaffmann

Gabriele Pfaffmann is a member of the SAP Solution Marketing organization, responsible for the enterprise asset management (EAM) line of business. She joined SAP in 1996, holding positions in solution management and solution marketing, and is based in Walldorf, Germany.

Climate Change: Look North and South – The Evidence Is Real

Nancy Langmeyer

Explorer Sir Robert Swan – the first and only man to walk on both the North and South Poles unsupported – believes that “the greatest threat to our planet is the belief that someone else will save it.”

As a self-proclaimed survivor, Sir Robert, like many others around the globe, believes that climate change and global warming are very serious issues.

The United Nations (UN) adopted 17 Sustainable Development Goals (SDGs) in 2015, and Goal 13 asks the world to “take urgent action to combat climate change and its impacts.” According to the UN, “Climate change is now affecting every country on every continent. It is disrupting national economies and affecting lives, costing people, communities, and countries dearly today and even more tomorrow.”

The National Aeronautics and Space Administration (NASA) says the rate of temperature increase around the globe has nearly doubled in the last 50 years due to greenhouse gases released as people burn fossil fuels. But even though 2016 was the hottest year in recent history, sadly there are still people in the world who say global warming is of no concern and that it is actually a “hoax!”

Well, like Sir Robert, let’s look to the North and South Poles and see what we can learn about the reality of this situation.

The Poles have a story to tell us…

Sir Robert believes the North and South Poles hold vital clues to the issue of global warming and that they are an indication of what is going on around the world in respect to climate change.

In his TED talk, Swan showed pictures of melting ice in the North and South Poles, describing it as a dangerous situation. He says, “We need to listen to what these places tell us, and if we don’t, we’ll end up with our own survival situation here on planet Earth.”

So, let’s start in the North and find out what we can learn there.

At 90° north latitude, the North Pole lies 450 miles north of Greenland, in the middle of the Arctic Ocean. There is no actual landmass at the North Pole – only a massive expanse of sea ice that expands in winter and shrinks to about half that size in summer.

The climate change story here is that the North Pole has been experiencing unusually high temperatures, reaching 32° Fahrenheit in December 2016 – 50° warmer than typical! This trend has led to an alarming shrinkage of Arctic sea ice, equating to approximately 1.07 million km² of ice loss every decade.

Why is this a problem? Well, according to the National Science Foundation, sea ice variability – the amount of water the ice puts into or pulls out of the ocean and the atmosphere – plays a significant role in climate change. NASA says that, “The sea ice cover of the Arctic Ocean and surrounding seas helps regulate the planet’s temperature, influences the circulation of the atmosphere and ocean, and impacts Arctic communities and ecosystems.”

Even the coldest place on Earth is getting warmer!

Now, in the completely opposite direction, what can we learn from the South Pole and Antarctica? At 90° south latitude, Antarctica holds approximately 90% of the planet’s ice; its landmass sits a little over 300 feet above sea level, beneath an ice sheet about 9,000 feet thick.

Much colder than the North Pole, the temperature here dropped to a chilling low of -135.8° Fahrenheit in 2013. However, this pole, too, is experiencing warmer weather, with its highest temperature reaching 63.5° Fahrenheit in March 2015.

NASA indicates that Antarctica has been losing about 134 gigatonnes of ice per year since 2002. And just recently, a new concern emerged – a rift in one of the continent’s ice shelves that could send a massive section off into the ocean and create one of the largest icebergs ever recorded. This could, in the long run, contribute to raising global sea levels by four inches.

So what’s a little rise in sea level?

While a couple inches here or there doesn’t seem like much, NASA says rising sea levels can erode coasts and cause more coastal flooding, and in fact, some island nations could actually disappear.

And that’s just the sea level. There are other ramifications as the climate changes, such as an increase in infectious diseases with the expansion of tropical temperature zones, more intense rain storms and hurricanes, and many other life-threatening issues.

Let’s be the “someone else”

These insights are just the tip of the iceberg (so to speak) in the story of global warming, but it is evident the Poles are telling us that climate change is real. It’s also evident that it’s time for us as the inhabitants of this world to become the “someone else” Sir Robert talks about. And the good news is that it’s not too late for us to save this planet.

We don’t have to go to the North or South Pole to make an impact. We can simply follow Swan’s advice: “A survivor sees a problem and doesn’t go, ‘Whatever.’ A survivor sees a problem and deals with that problem before it becomes a threat.”

Whether it’s at work with a company like SAP that supports the UN SDGs with its vision and purpose, or individually – we all have to help combat climate change before the threats to our planet become irreversible. Let’s be the someone else, starting today.

A quick note: My last blog focused on how women in the arts and sports are helping to break gender inequality barriers. Well, I am happy to report that this same movement is happening in science too! In 2016, an initial 76 women in science embarked on a leadership journey to increase the awareness of climate science. The inaugural session of the year-long Homeward Bound program, which focused on empowering women in science, culminated in December 2016 with the largest female expedition in Antarctica. Here these brilliant, dedicated female scientists and engineers saw the effects of climate change first-hand and brainstormed how they, through “collaborative leadership, diverse thinking, and creative approaches,” could make an impact. 

SAP’s vision is to help the world run better and improve people’s lives. This is our enduring cause; our higher purpose. Learn more about how we work to achieve our vision and purpose.


About Nancy Langmeyer

Nancy Langmeyer is a freelance writer and marketing consultant. She works with some of the largest technology companies in the world and is a frequent blogger. You’ll see some pieces under her name, and then there are others that you won’t: these are the ones where Nancy interviews marketing executives and leaders and turns their insights into thought leadership pieces.

The (R)evolution of PLM, Part 3: Using Digital Twins Throughout The Product Lifecycle

John McNiff

In Part 1 of this series we explored why manufacturers must embrace “live” PLM. In Part 2 we examined the new dimensions of a product-centric enterprise. In Part 3 we look at the role of digital twins.

It’s time to start using digital twins throughout the product lifecycle. In fact, to compete in the digital economy, manufacturers will need to achieve a truly product-centric enterprise in which digital twins guide not only engineering and maintenance, but every business-critical function, from procurement to HR.

Why is this necessary? Because product lifecycles are shrinking. Companies are managing ever-growing streams of data. And customers are demanding product individualization. The only way for manufacturers to respond is to use digital twins to place the product – the highly configurable, endlessly customizable, increasingly connected product – at the center of their operations.

Double the insight

Digital twins are virtual representations of real-world products or assets. They’re a Top 10 strategic trend for 2017, according to Gartner. And they’re part of a broader digital transformation in which IDC says companies will invest $2.1 trillion a year by 2019.

Digital twins aren’t a new concept, but their application throughout the product lifecycle is. Here are key ways smart manufacturers will leverage digital twins – and achieve a product-centric and model-based enterprise – across operations:

Design and engineering: Traditionally, digital twins have been used by design and engineering to create virtual representations for designing and enhancing products. In this application, the digital twin actually exists before its physical counterpart does, essentially starting out as a vision of what the product should be. But you can also capture data on in-the-field product use and apply that to the digital twin for continuous product improvement.

Maintenance and service: Today, the most common use case for digital twins is maintenance and service. By creating a virtual representation of an asset in the field using lightweight model visualization, and then capturing data from smart sensors embedded in the asset, you can gain a complete picture of real-world performance and operating conditions. You can also simulate that real-world environment for predictive maintenance. Let’s say you manufacture wind turbines. You can capture data on rotor speed, wind speed, operating temperature, ambient temperature, humidity, and so on to understand and predict product performance. By doing so, you can schedule maintenance before a crucial part breaks – optimizing uptime and saving time and cost for a repair.
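A toy version of the predictive check described above can be sketched as a statistical anomaly score over the twin’s telemetry; the turbine readings and the three-sigma threshold are illustrative assumptions, not a real maintenance model.

```python
from statistics import mean, stdev

def anomaly_score(history, latest):
    """How many standard deviations the latest reading sits from its history."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) / sigma if sigma else 0.0

# Hypothetical digital-twin telemetry for one turbine (rotor temperature, °C).
history = [61.2, 60.8, 61.5, 60.9, 61.1, 61.3]
latest = 66.4

# Flag the asset for inspection when the reading drifts beyond three sigma.
needs_inspection = anomaly_score(history, latest) > 3.0
```

A real system would combine many signals (rotor speed, wind speed, humidity, and so on) in a trained model, but the principle is the same: compare live telemetry against the twin’s expected behavior and act before the part fails.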

Quality control: Just as digital twins can help with maintenance and service, they can predictively improve quality during manufacturing. You can also use digital twins to compare quality data across multiple products to better understand global quality issues and quickly visualize issues against the model. And you can apply data collected by maintenance and service to achieve ongoing quality improvements.

Customization: As products become more customizable, digital twins will allow design and engineering to model the various permutations. But digital twins can also incorporate customer demand and usage data to enhance customization options. That sounds obvious, but in the past it was very difficult to incorporate customer input into the manufacturing process. Let’s say you sell high-end custom bikes. You might allow customers to choose different colors, wheels, and other details. By capturing customer preferences in the digital twin, you can get a picture of customer demand. And by capturing customer usage data, you can understand how custom configurations affect product performance. So you can offer the most reliable options or allow customers to configure your products based on performance attributes. You can also visualize lightweight representations of the twin without the burden of heavyweight design systems and parameters.

Finance and procurement: In our custom-configured bike example, different configurations involve different costs. And those different costs involve not only the cost of the various components, but also the cost for assembling the various configurations. By capturing sales data in the digital twin, you can understand which configurations are being ordered and how configuration-specific revenues compare to the cost to build each configuration. What’s more, you can link that data with supplier information. That will help you understand which suppliers contribute to product configurations that perform well in the field. It also can help you identify opportunities to cost-effectively rid yourself of excess supply.
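The configuration-level margin analysis described here reduces to simple arithmetic once the twin captures per-configuration prices and costs; the bike configurations and figures below are invented.

```python
# Hypothetical per-configuration sales and cost data captured via the twin.
configs = {
    "sport":   {"price": 4200, "component_cost": 2600, "assembly_cost": 900},
    "touring": {"price": 3800, "component_cost": 2100, "assembly_cost": 650},
}

def margin(name):
    """Per-unit margin: price minus component and assembly costs."""
    c = configs[name]
    return c["price"] - c["component_cost"] - c["assembly_cost"]

margins = {name: margin(name) for name in configs}
# The touring build earns more per unit despite its lower price.
```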

Sales and marketing: The digital twin can also inform sales and marketing. For instance, you can use the digital twin to populate an online product configurator and e-commerce website. That way you can be sure what you’re selling is always tied directly to what you’re engineering in the design studio and what you’re servicing in the field.

Human resources: The digital twin can even extend into HR. For example, you can use the digital twin to understand training and certification needs and be sure the right people are trained on the right product features.

One twin, many views

Digital twins should underlie all manufacturing operations. Ideally you should have a single set of digital twin master data that resides in a central location. That will give you one version of the truth, and with “in-memory” computing-based networks plus a lightweight, change-controlled model capability, you’ll be able to analyze and visualize that data rapidly.

But not all business functions care about the entire data set. You need to deliver the right data to the right people at the right time. Design and engineering requires one set of data, with every specification and tolerance needed to create and continuously improve the product. Sales and marketing requires another set of data, with the features and functions customers can select. And so on.

Ultimately, as the digital product innovation platform extends the dimensions of traditional PLM, an extended version of the digital twin sits at the heart of PLM. In future blogs we’ll talk about how you can leverage the latest-generation platform from SAP, based on SAP S/4HANA and SAP’s platform for the Internet of Everything, to achieve a live, visual, and intelligent product-centric enterprise.

To learn how a live supply chain can help your business, visit us at SAP.com.


About John McNiff

John McNiff is the Vice President of Solution Management for the R&D/Engineering line-of-business business unit at SAP. John has held a number of sales and business development roles at SAP, focused on the manufacturing and engineering topics.

How AI Can End Bias

Yvonne Baur, Brenda Reid, Steve Hunt, and Fawn Fitter

We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.

Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.

In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The mere fact of having to defend against an accusation of bias can linger long after the issue itself is settled.

Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.

That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fair, more productive, and ultimately better for both business and the broader society.

Bias: Bad for Business

When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.

Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.

As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.

Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.

AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.

That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.

At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.

Look for Where Bias Already Exists

In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.

There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.

Code Is Only Human

The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.

“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)

Don’t Do What You’ve Always Done

To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.

SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as “No women need apply,” which everyone knows is discriminatory, but phrases like “outspoken” and “aggressively pursuing opportunities,” which are proven to attract male job applicants and repel female applicants, and words like “caring” and “flexible,” which do the opposite.

Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
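A heavily simplified sketch of that flag-and-suggest step might look like the following; the word list and replacements are illustrative only, not SAP’s actual lexicon, and a real system would learn the terms from categorized data rather than hard-code them.

```python
# Hypothetical human-categorized lexicon: gendered term -> neutral suggestion.
GENDERED_TERMS = {
    "outspoken": "communicative",
    "aggressively": "proactively",
    "rockstar": "expert",
}

def flag_and_suggest(posting):
    """Return (flagged_word, neutral_suggestion) pairs found in a posting."""
    words = posting.lower().split()
    return [(w, GENDERED_TERMS[w]) for w in words if w in GENDERED_TERMS]

findings = flag_and_suggest("Seeking an outspoken engineer aggressively pursuing growth")
```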

Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.

Look Beyond the Obvious

AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.

To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.

Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.

That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.

Context Matters—and Context Changes

Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.

Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word sick to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience if someone confuses the two meanings, to say the least. Correcting for that, or adding more rules to the algorithm, such as “The word sick appears in proximity to positive emoji,” takes human oversight.
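Such a proximity rule can be sketched naively as follows; the emoji set and the labels are invented for illustration, and real sentiment systems would learn this distinction from labeled data rather than from a hand-written rule.

```python
# Hypothetical set of emoji treated as positive signals.
POSITIVE_EMOJI = {"🔥", "😍", "🙌"}

def classify_sick(post):
    """Naive disambiguation: 'sick' alongside positive emoji reads as praise."""
    if "sick" not in post.lower():
        return None
    return "praise" if any(e in post for e in POSITIVE_EMOJI) else "health"

classify_sick("That demo was sick 🔥")        # slang: praise
classify_sick("Feeling sick, staying home")   # literal: health
```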

Moving Forward with AI

Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.

In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.

Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.

O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.

As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Yvonne Baur is Head of Predictive Analytics for SAP SuccessFactors solutions.

Brenda Reid is Vice President of Product Management for SAP SuccessFactors solutions.

Steve Hunt is Senior Vice President of Human Capital Management Research for SAP SuccessFactors solutions.

Fawn Fitter is a freelance writer specializing in business and technology.


Next-Generation, Real-Time Data Warehouse: Bringing Analytics To Data

Iver van de Zand

Imagine the following situation: you are analyzing and gathering insights about product sales performance and wonder why a certain area in your country is doing better than others. You deep dive, slice, dice, and use different perspectives to analyze, but can’t find the answer to why sales are better for that region.

You conclude you need data that is not available in your corporate systems. Some geographical data that is available through Hadoop might answer your question. How can you get this information and quickly analyze it all?

Bring analytics to data

If we don’t want to go the traditional route of respecifying the data warehouse, remodeling it, and uploading and testing data, we need a whole new approach to data warehousing. What we ultimately need is a semantic layer that lets us remodel the data warehouse in real time and on the fly – semantics that let decision makers leave the data where it is stored rather than populating it into the data warehouse. In short, we need a way to bring our analytics to the data, instead of the other way around.

So our analytics wish list would be:

  • Access to the data source on the fly
  • Ability to remodel the data warehouse on the fly
  • No replication of data; the data stays where it is
  • Not losing time with data-load jobs
  • Analytical processing done in the moment with pushback to an in-memory computing platform
  • Drastic reduction of data objects to be stored and maintained
  • Elimination of aggregates

Traditional data warehousing is probably the biggest hurdle to agile business analytics. Modern analytical tools can add data sources on the fly and blend different sources, but they are still just analytical tools: when the additional data must be available to multiple users, or is huge in scale and complexity, they lack the computing power and scalability required. It simply doesn’t make sense for every user to blend the same complex, additional data individually.

A data warehouse, in this case, is the answer. However, there is still one hurdle to overcome: adapting a traditional data warehouse to new data needs takes substantial effort. Each change means you have to:

  • Adjust and adapt the modeling
  • Develop load and transformation scripts
  • Assign sizing
  • Set up scheduling and lineage
  • Test and maintain

In 2016, the future of data warehousing began. In-memory technology with smart, native, real-time access connects analytics to the data warehouse, and the data warehouse to core in-memory systems. Combined with pushback technology, where analytical calculations are pushed back onto an in-memory computing platform, analytics is brought to the data. End-to-end in-memory processing has become a reality, enabling true agility. And end-to-end processing is ready for the Internet of Things at petabyte scale.

Are we happy with this? Sure we are! Does it come as a surprise? Of course not! Digital transformation just enabled it!

Native, real-time access for analytics

What do next-generation data warehouses bring to analytics? They allow native access from top-end analytics components, through the data warehouse, all the way down to the core in-memory platform that holds our operational data. Even better, this native access is real-time: every analytics-driven interaction from an end user generates calculations, and with the architecture described, those calculations are pushed back to the core platform where the data resides.
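The pushback idea can be sketched in a few lines. In this hedged example, Python's built-in sqlite3 stands in for the in-memory platform, and the table and column names are invented; the point is that the aggregation runs inside the engine, so only the small result set travels back to the analytics tool:

```python
import sqlite3

# sqlite3 stands in for an in-memory computing platform (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 120.0), ("North", 80.0), ("South", 50.0)])

# Pushed-down calculation: the engine computes the aggregate in place...
pushed = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()

# ...instead of the client fetching every row and summing locally:
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
local = {}
for region, amount in rows:
    local[region] = local.get(region, 0.0) + amount

print(pushed)  # [('North', 200.0), ('South', 50.0)]
```

At petabyte scale, the difference between these two paths is the difference between moving a handful of result rows and moving the entire dataset over the network.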

The same integrated architecture is also a game changer when it comes to agility and data optimization. When new, complex data is required, it can be added without data replication. Since there is no data replication, the data warehouse modeling can be done on the fly, leveraging the semantics. We no longer have to model, create, and populate new tables and aggregates when additional data is required in the data warehouse, because there are no new tables needed! We only create additional semantics, and this can be done on the fly.
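"Additional semantics instead of new tables" can be illustrated with a plain SQL view, which adds a new analytical perspective over existing data without copying a single row. Again this is only a sketch: sqlite3 stands in for the in-memory platform, and the schema is invented for the example:

```python
import sqlite3

# sqlite3 stands in for the in-memory platform; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, revenue REAL)")
conn.execute("CREATE TABLE regions (region TEXT, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "North", 100.0), (2, "South", 40.0)])
conn.executemany("INSERT INTO regions VALUES (?, ?)",
                 [("North", "DE"), ("South", "DE")])

# The "semantic layer": a view defined on the fly. No data is replicated,
# no load job is scheduled, no aggregate table is created or maintained.
conn.execute("""
    CREATE VIEW revenue_by_country AS
    SELECT r.country, SUM(o.revenue) AS revenue
    FROM orders o JOIN regions r ON o.region = r.region
    GROUP BY r.country
""")

print(conn.execute("SELECT * FROM revenue_by_country").fetchall())
```

Dropping or redefining the view is equally cheap, which is what makes remodeling on the fly practical.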

Learn why you need to put analytics into your business processes in the free eBook How to Use Algorithms to Dominate Your Industry.

This article appeared on Iver van de Zand.

About Iver van de Zand

Iver van de Zand is a Business Analytics Leader at SAP, responsible for business analytics with special attention to the Business Intelligence Suite, Lumira, and Predictive Analytics.