Sections

Coffee Machines Brew Industry Disruption: Digital Twins Emerge In 2017

Susan Galer

How fast can your coffee machine accelerate business growth? Of all the demos I saw at SAP TechEd Barcelona, the digital twins demo was among the most fascinating.

Opportunities explode and industries implode when everyday items like coffee machines power a direct conversation between customers, companies and suppliers, not only crunching high-volume, actionable data in real time, but also looking into the future.

It’s not surprising that digital twins made it into Gartner Research’s top 10 trends for 2017. Those analysts predict hundreds of millions of things will be represented by digital twins within three to five years.

In the video above, Thomas Kaiser, senior vice president of IoT at SAP, talks with me about how the smartest companies are using digital twin technology to shake up the status quo. Also featured is a clip of Ian Kimball of SAP demonstrating the power of digital twins with a connected coffee machine at SAP TechEd Barcelona.

How digital twins disrupt

A digital twin is a virtual representation of a process, product, or service. While companies have been using digital twins for years, it’s only with the Internet of Things (IoT) that they’ve become cost-effective.

Using software on a cloud-based platform, digital twins pull together and analyze data that companies can use to monitor equipment and head off breakdowns and other problems before they occur. They can also look into the future, simulating scenarios to uncover new opportunities for delighting customers. The data is deep and broad, encompassing business content such as the customer’s name, the exact street location of their coffee machines, and service level agreements. The information is contextual as well and, of course, includes sensor readings. The digital twin replicates everything about the machine’s operating history, from how many cups and what type of coffee people are drinking, to the precise temperature of the milk and the amount of steam pressure used to brew each pour.
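
To make the idea concrete, here is a minimal sketch in Python of how a digital twin record might combine business context with operating history. The field names and the threshold check are illustrative assumptions of my own, not SAP's actual data model.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BrewEvent:
        """One observed brew cycle reported by the machine's sensors."""
        coffee_type: str
        milk_temp_c: float
        steam_pressure_bar: float

    @dataclass
    class CoffeeMachineTwin:
        """Virtual representation of one machine: business context plus operating history."""
        machine_id: str
        customer_name: str
        street_address: str
        service_level: str              # e.g. "repair within 4 hours"
        history: List[BrewEvent] = field(default_factory=list)

        def record(self, event: BrewEvent) -> None:
            self.history.append(event)

        def needs_attention(self, max_pressure_bar: float = 9.5) -> bool:
            # Naive predictive check: flag the machine if recent brews run hot or over pressure.
            recent = self.history[-20:]
            return any(e.steam_pressure_bar > max_pressure_bar or e.milk_temp_c > 70
                       for e in recent)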

Think of a digital twin as your smartest product technician coupled with advanced machine monitoring and predictive, preemptive analytics. The measurable gains for companies are astounding. IDC predicts that by 2018, companies investing in IoT-based operational sensing and cognitive-based situational awareness will see 30 percent improvements in the cycle times of affected critical processes.

Four steps to get started

When I talked with SAP’s Thomas Kaiser, senior vice president of IoT, at SAP TechEd, he told me about the hottest industries using digital twins, and what companies can realistically expect. After the event, he added these thoughts to what we covered in my video interview.

“Digital twins are becoming a business imperative, covering the entire life cycle of an asset or process and forming the foundation for connected products and services,” said Kaiser. “Companies that fail to respond will be left behind. Those that embrace digital twins have the opportunity to better understand customer needs, continuously improve their products and services, and even identify new business models that give them competitive advantage.”

Digital twins are becoming a business imperative, forming the foundation for connected products and services.

Kaiser recommended four steps to get started with digital twins, noting that while these steps are easy to list, they can require significant effort to achieve:

  • Integrate smart components into new or existing products.
  • Connect the products and services to a central, cloud-based location with streaming, Big Data, in-memory, and analytics capabilities to capture sensor data and enrich it with business and contextual data (a minimal sketch of this step follows the list).
  • Constantly analyze the data to identify areas for improvement, new products, or even new business models.
  • Use digital insights to create new services that transform the company — disrupt before your business is disrupted.
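
As a rough illustration of the second step, the sketch below shows one way a connected machine could enrich a raw sensor reading with business and contextual data before posting it to a central cloud endpoint. The URL, field names, and payload shape are placeholders of my own, not an SAP interface.

    import json
    import urllib.request

    INGEST_URL = "https://iot.example.com/ingest"  # placeholder endpoint, not a real SAP service

    def publish_reading(machine_id: str, sensor_payload: dict, business_context: dict) -> int:
        """Merge sensor data with business/contextual data and send it to the cloud platform."""
        message = {
            "machine_id": machine_id,
            "sensor": sensor_payload,        # e.g. {"milk_temp_c": 64.2, "steam_pressure_bar": 9.1}
            "context": business_context,     # e.g. {"customer": "Cafe Aurora", "sla": "repair within 4 hours"}
        }
        req = urllib.request.Request(
            INGEST_URL,
            data=json.dumps(message).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status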

The coffee machine on stage at SAP TechEd may have looked like every other one, but quietly brewing behind it is a world of innovative difference. As for the question of how fast your coffee machines can fuel growth: with digital twin technology, the answer is a potent brew of fresh insights driving innovation and tangible business outcomes.

For more on digital twin technology, see Leveraging Digital Twins To Breathe New Life Into Your Products And Services.

Follow me @smgaler

Images via SAP


How Can IoT Help Retailers?

Sarah McMullin

“The system says that there are 3 shirts in the store, but…” How many times have you gone into a store and asked an associate to look up something, and after the search (which can also take some time), the associate scratches their head and claims the item’s in the store, according to the system, but they still can’t find it?

Or even worse, you researched online and came into the store expecting to buy something, and it’s not there? I have spent the better part of the year talking to retailers about inventory accuracy, and I have discovered that everyone has a magic number. You know, that number in the system that you need to see in order to BELIEVE that at least one of the items is actually there.

Why is inventory accuracy so difficult to achieve?

Inaccurate inventory is a problem that has plagued retailers for ages, and you may be shocked to find out that average inventory accuracy is only around 65%. A lack of inventory accuracy produces symptoms in stores such as over-stocking and out-of-stocks, which are actually quite costly for retailers (over $1 trillion a year).

Think about the dynamic nature of retailing. Throughout a day, a store performs many different operations other than selling in the store (which is why after-the-fact point-of-sale [POS] data is never good enough). There are store transfers. There are goods to be received that haven’t been accounted for yet. There is shrink (permanently missing items). There are e-commerce sales. The list goes on.

Improving inventory accuracy with the Internet of Things

How can IoT be used to improve inventory accuracy? First, we can digitize the inventory. Let’s define the Things as sensors on individual items in inventory (like one shirt, one shoe, or one can). These sensors span a wide range of technology, such as beacons, RFID tags, or shelf liners, that can stream raw data about inventory presence. For example, if I take something off a shelf, the shelf can send a message that an item has been moved off the shelf. Or if I take a shirt into a dressing room, the shirt can send a message that it has moved into another area. Now imagine all these items talking, constantly sending state updates at millisecond intervals to a dynamic edge processing server sitting in the store.
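
A minimal sketch of that edge-processing idea might look like the following; the event fields and actions are made up for illustration rather than taken from any particular sensor vendor.

    from collections import defaultdict

    class EdgeInventoryProcessor:
        """Maintains live in-store counts and item locations from streaming sensor events."""

        def __init__(self, opening_counts: dict):
            self.counts = defaultdict(int, opening_counts)   # sku -> units believed to be in the store
            self.locations = {}                              # tag id -> last reported zone

        def handle_event(self, event: dict) -> None:
            # Illustrative event shape: {"tag": "A1B2", "sku": "SHIRT-042", "action": "moved", "zone": "fitting-room"}
            sku, action = event["sku"], event["action"]
            if action == "received":
                self.counts[sku] += 1
            elif action in ("sold", "shrink"):
                self.counts[sku] -= 1
            elif action == "moved":
                self.locations[event["tag"]] = event["zone"]

        def on_hand(self, sku: str) -> int:
            return self.counts[sku]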

What immediately happens is that your inventory accuracy rises, because you are given inventory counts that are timelier than cycle counts (physical counting) could ever be. This can be done quickly, can run on minimal hardware, and works independently of existing systems and network connections. But is that, on its own, real retailer ROI from IoT? Knowing in-store inventory levels for a particular item is not useful if you can’t act on it. In fact, you might argue an associate could easily tell you a shelf is empty just by looking at it, rather than investing in IoT technology to tell you.

More than IoT data – business of things

But what if you could combine your minimum quantity rules from SAP with an automated purchase order requisition from the store, based on the total in-store count for an item? You could potentially prevent stock-outs from ever happening, not just with live inventory counts, but with combined intelligence at the store level. Or what if you could automate receiving so associates don’t need to manually scan everything, automatically highlighting discrepancies between what was received and the original purchase requisition, and freeing the associate up to make sales on the store floor?
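
For example, a store-level replenishment rule that combines a minimum-quantity threshold with the live count might look like this sketch; create_purchase_requisition is a hypothetical hook standing in for the call into the retailer's ERP system, not an actual SAP function.

    MIN_QUANTITY = {"SHIRT-042": 5, "SHOE-310": 8}   # illustrative minimum-quantity rules

    def create_purchase_requisition(sku: str, quantity: int) -> None:
        # Hypothetical integration point; in practice this would call the retailer's ERP system.
        print(f"Requisition raised: {quantity} x {sku}")

    def check_replenishment(live_counts: dict, reorder_quantity: int = 20) -> None:
        """Raise a requisition whenever the live in-store count falls below the minimum."""
        for sku, minimum in MIN_QUANTITY.items():
            if live_counts.get(sku, 0) < minimum:
                create_purchase_requisition(sku, reorder_quantity)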


We have identified six use cases where IoT combined with dynamic edge processing can help retailers.

Retail reality check

We have the technology today to achieve this, and we have worked on many edge processing scenarios across various industries, including retail. But is retail really ready for it? A lot of the retailers I have spoken to are just starting to dip their toes into the IoT water. They’re interested in simple use cases around improving efficiency with real-time in-store data to build the ROI case. I can appreciate that approach. But I truly believe that IoT won’t start to make real gains in retail until we marry the insights collected at the store with the business intelligence at headquarters. When retailers are ready, we will be waiting.

Learn more about how to engage your customers wherever they are in Customer Experience: OmniChannel. OmniNow. OmniWow.



About Sarah McMullin

Sarah McMullin is the Director of Emerging Technologies at SAP, with a focus on food safety and food fraud innovations within supply chains. Her specialties include enterprise software, cloud computing, mobile devices and applications, enterprise mobility and product marketing.

The Key To Gaining ROI From IoT

Daniel Kehrer

On the enterprise technology hype scale, the Internet of Things is a heavyweight champ. There’s only one problem. So far, this “transformative trend” – as Gartner calls it in the firm’s 2016 IoT Hype Cycle report – has remained largely that: Hype.

But that’s about to change. New enabling technologies – including Bluetooth 5, proximity awareness, and others – will speed the path to IoT value creation. The rollout of Bluetooth 5 in early 2017, for example, will quadruple Bluetooth range, double its speed and boost data broadcasting capacity by 800%.

These speed, range, and capacity improvements will open vast new IoT opportunities for companies to build a more accessible and interoperable IoT. This in turn will finally make hypothetical enterprise and industrial IoT use cases a reality.

According to a recent McKinsey Global Institute (MGI) report, the hype surrounding IoT may in fact understate its full potential. McKinsey predicts that if policymakers and businesses get it right, IoT’s linking of physical and digital worlds will generate between $4 trillion (their low estimate) and $11.1 trillion per year in economic value by 2025.

And the bulk of that value – nearly 70% of it, says McKinsey – will come from B2B applications such as construction and manufacturing where IoT technology helps optimize equipment placement and maintenance, improve safety and security, and much more.

Meanwhile, technology suppliers are ramping up IoT-related platforms to help enterprises design, implement, and operate solutions that fill the gap between the ability to collect data and the capacity to capture, analyze, and act on it.

The power of proximity-awareness technology

One of the most powerful tools in helping enterprise organizations gain ROI from IoT is proximity awareness. Smart proximity awareness technology will play a critical role in how enterprise organizations extract value from IoT or, alternatively, IoE – the Internet of Everything.

It’s also IoE because the value-creation chain includes more than just things. It involves people, data streams, locations, equipment, communication systems, and more, all connected to the Internet. Proximity-awareness technology brings these scattered pieces of IoT together into a cohesive, cyber-physical system that organizations can analyze and act on to solve problems, optimize time, and improve productivity. This is increasingly important as connected “things” gain autonomy and begin taking more actions on their own.

Proximity solutions enable organizations to gain greater order, efficiency, automation, and predictability from IoT. They solve for situations where people and things are dispersed haphazardly and sometimes unaccounted for, eliminating guesswork and costly inefficiencies. They enable organizations to see where things are, what’s happening with them, and how to make them more effective and productive.

Smart proximity awareness technology will also enable value creation from enterprise “wearables” – always-on, connected computing displays worn on the body that provide easy, hands-free access to contextually relevant information – as these devices replace bar code scanners and handheld GPS units.

Proximity makes IoT work

Many companies already generate large amounts of data from IoT but only use a fraction of it. That’s because they focus mainly on detecting breakdowns or other anomalies, rather than envisioning new, value-building uses. By deploying smart proximity-awareness technology, companies can realize greater value from IoT by using it to predict and optimize a wide range of activities. As this happens, the old mindset of repair and replace becomes a new mindset of predict and prevent.

Enriching corporate data with proximity awareness moves several capabilities forward at once. Knowing where your people, inanimate assets, suppliers, supplies, and customers are – and when their joint movements are actionable – allows automated responses to specific conditions of convergence and divergence.

And by standardizing proximity services in an open platform, enterprises can gather mobile and IoT telemetry, track assets and people in motion, determine when any two or more of them are converging or diverging, and act by triggering proximity-aware messages or instructions to both people and things.
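
One simple way to classify convergence and divergence from streaming position updates, sketched here with plain Euclidean distance and an arbitrary threshold of my own, is to compare successive distances between any two tracked entities.

    import math

    def distance(a: tuple, b: tuple) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def proximity_event(prev_positions: dict, curr_positions: dict,
                        id_a: str, id_b: str, near_threshold: float = 10.0) -> str:
        """Classify whether two tracked entities are converging, diverging, or already near."""
        d_prev = distance(prev_positions[id_a], prev_positions[id_b])
        d_curr = distance(curr_positions[id_a], curr_positions[id_b])
        if d_curr <= near_threshold:
            return "near"            # e.g. trigger a proximity-aware message or instruction
        return "converging" if d_curr < d_prev else "diverging"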

As the world becomes increasingly networked with nearly everything linked to everything else, production and supplier networks are expected to grow enormously, meaning manufacturers will need to coordinate more global suppliers. At the same time, boundaries that now separate individual factories and other facilities will be eliminated as IoT and proximity awareness connect multiple factories and the people who run them.

According to MGI, “The potential value that could be unlocked with IoT applications in factory settings could be as much as $3.7 trillion in 2025, or about one-third of all potential economic value. Cities are the next largest, with value of up to $1.7 trillion per year.”

Building flexible solutions at scale

But building IoT systems and solutions as vertical silos and operational islands inhibits the ability to gain strategic value. Smart proximity awareness based on a scalable and horizontal technology foundation lowers barriers and makes it easier to integrate all of the pieces into a single whole that is easy to operate, expand, and maintain.

Now is the time to consider the IoT business opportunities at hand, set a vision, establish a plan, and put smart proximity awareness to work as a strategic differentiator. As McKinsey points out, “Businesses that fail to invest in IoT capabilities, culture, and processes, as well as in technology, are likely to fall behind competitors that do.”


About Daniel Kehrer

Daniel Kehrer has 20+ years of leadership and hands-on execution experience as a technology, content marketing, and digital media entrepreneur and industry thought leader. He has built and scaled multi-channel, global marketing and content creation teams and engines for VC- and PE-backed tech companies, leading to acquisitions totaling nearly $1 billion. He is currently Founder & CEO of BizBest Media Corp. and CMO.partners, working with select startup and growth-stage tech companies. He has written for Forbes, Harvard Business Review, The New York Times, and Digitalist Magazine, among many other publications, writes a syndicated weekly column, is the author of seven books, and earned his MBA from UCLA Anderson.

How AI Can End Bias

Yvonne Baur, Brenda Reid, Steve Hunt, and Fawn Fitter

We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.

Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.

In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The mere fact of having to defend against an accusation of bias can linger long after the issue itself is settled.

Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.

That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fairer, more productive, and ultimately better for both business and the broader society.

Bias: Bad for Business

When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.

Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.

As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.

Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.

AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.

That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.

At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.

Look for Where Bias Already Exists

In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.

There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.
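
A toy version of one such check, using a plain Pearson correlation over synthetic feature lists rather than any bank's real model inputs, might flag candidate proxy variables like this.

    def pearson(xs: list, ys: list) -> float:
        """Pearson correlation between two equal-length numeric sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def flag_proxy_features(features: dict, protected: list, threshold: float = 0.3) -> list:
        """Return names of input features whose correlation with a protected attribute exceeds the threshold."""
        return [name for name, values in features.items()
                if abs(pearson(values, protected)) > threshold]

    # Example: a feature such as "zip_code_income_rank" that tracks a protected attribute
    # too closely would be flagged here for human and regulatory review.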

Code Is Only Human

The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.

“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)

Don’t Do What You’ve Always Done

To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.

SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as No women need apply, which everyone knows is discriminatory, but phrases like outspoken and aggressively pursuing opportunities, which are proven to attract male job applicants and repel female applicants, and words like caring and flexible, which do the opposite.

Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
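
A heavily simplified sketch of that flag-and-suggest step follows; the word list and suggested alternatives are illustrative stand-ins of my own, not the categorized vocabulary the initiative actually uses.

    # Illustrative word list only; a production system would learn these from categorized postings.
    GENDER_CODED = {
        "outspoken": "communicates clearly",
        "aggressively": "proactively",
        "aggressive": "proactive",
        "ninja": "expert",
    }

    def review_posting(text: str) -> list:
        """Return (flagged_word, suggested_alternative) pairs found in a job posting."""
        findings = []
        for word in text.lower().replace(",", " ").split():
            if word in GENDER_CODED:
                findings.append((word, GENDER_CODED[word]))
        return findings

    print(review_posting("We want an outspoken candidate aggressively pursuing opportunities."))
    # [('outspoken', 'communicates clearly'), ('aggressively', 'proactively')]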

Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.

Look Beyond the Obvious

AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.

To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.

Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.

That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.

Context Matters—and Context Changes

Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.

Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word sick to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience if someone confuses the two meanings, to say the least. Correcting for that, or adding more rules to the algorithm, such as “The word sick appears in proximity to positive emoji,” takes human oversight.
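
A toy version of that kind of hand-added rule, with a deliberately tiny emoji list and a crude scoring scheme of my own, might adjust a sentiment score like this.

    POSITIVE_EMOJI = {"🔥", "😍", "👍", "🤘"}   # illustrative subset

    def adjust_for_slang(tokens: list, base_score: float) -> float:
        """Credit the slang sense of 'sick' when it appears near positive emoji."""
        for i, tok in enumerate(tokens):
            if tok.lower() == "sick":
                window = tokens[max(0, i - 2): i + 3]   # look two tokens either side
                if any(t in POSITIVE_EMOJI for t in window):
                    base_score += 1.0                   # undo the negative reading
        return base_score

    print(adjust_for_slang(["that", "show", "was", "sick", "🔥"], base_score=-0.5))   # 0.5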

Moving Forward with AI

Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.

In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.

Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.

O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.

As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!

Read more thought provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Yvonne Baur is Head of Predictive Analytics for SAP SuccessFactors solutions.

Brenda Reid is Vice President of Product Management for SAP SuccessFactors solutions.

Steve Hunt is Senior Vice President of Human Capital Management Research for SAP SuccessFactors solutions.

Fawn Fitter is a freelance writer specializing in business and technology.



Next-Generation, Real-Time Data Warehouse: Bringing Analytics To Data

Iver van de Zand

Imagine the following situation: you are analyzing and gathering insights about product sales performance and wonder why a certain area of your country is doing better than others. You dive deep, slice and dice, and analyze from different perspectives, but you can’t find the answer to why sales are better in that region.

You conclude you need data that is not available in your corporate systems. Some geographical data that is available through Hadoop might answer your question. How can you get this information and quickly analyze it all?

Bring analytics to data

If we don’t want to go the traditional route of specifying requirements, remodeling the data warehouse, and uploading and testing data, we need a whole new approach to data warehousing. What we ultimately need is a layer of semantics that allows us to remodel our data warehouse in real time and on the fly – semantics that lets decision makers leave the data where it is stored rather than populating it into the data warehouse. What we really need is a way to bring our analytics to the data, instead of the other way around.

So our analytics wish list would be:

  • Access to the data source on the fly
  • Ability to remodel the data warehouse on the fly
  • No replication of data; the data stays where it is
  • Not losing time with data-load jobs
  • Analytical processing done in the moment with pushback to an in-memory computing platform
  • Drastic reduction of data objects to be stored and maintained
  • Elimination of aggregates

Traditional data warehousing is probably the biggest hurdle when it comes to agile business analytics. Though modern analytical tools are perfectly capable of adding data sources on the fly and blending them, they are still just analytical tools. When the additional data must be available to multiple users, or is huge in scale and complexity, analytical tools lack the computing power and scalability needed. It simply doesn’t make sense for each user to blend the same complex, additional data individually.

A data warehouse, in this case, is the answer. However, there is still one hurdle to overcome: a traditional data warehouse requires substantial effort to adjust to new data needs. Each adjustment typically means you have to:

  • Adjust and adapt the modeling
  • Develop load and transformation scripts
  • Assign sizing
  • Set up scheduling and lineage
  • Test and maintain

In 2016, the future of data warehousing arrived. In-memory technology with smart, native, real-time access now connects analytics to the data warehouse, and the data warehouse to the core in-memory systems that hold operational data. Combined with pushback technology, where analytical calculations are pushed back onto an in-memory computing platform, analytics is brought back to the data. End-to-end in-memory processing has become a reality, enabling true agility. And end-to-end processing is ready for the Internet of Things at petabyte scale.
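
The pushdown idea can be shown in miniature: instead of pulling raw rows into the analytics tool and aggregating there, the tool sends the calculation to the platform and receives only the result. The sketch below uses SQLite purely as a stand-in for an in-memory platform such as SAP HANA.

    import sqlite3

    conn = sqlite3.connect(":memory:")                     # stand-in for the in-memory platform
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("North", 120.0), ("North", 80.0), ("South", 95.0)])

    # Without pushdown: fetch every row and aggregate in the client (what we want to avoid).
    rows = conn.execute("SELECT region, amount FROM sales").fetchall()
    client_side = {}
    for region, amount in rows:
        client_side[region] = client_side.get(region, 0) + amount

    # With pushdown: the aggregation runs where the data lives; only the result travels.
    pushed_down = dict(conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall())

    assert client_side == pushed_down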

Are we happy with this? Sure we are! Does it come as a surprise? Of course not! Digital transformation just enabled it!

Native, real-time access for analytics

What do next-generation data warehouses bring to analytics? They allow native access from top-end analytics components, through the data warehouse, all the way to the core in-memory platform where our operational data lives. What’s more, this native access is real-time. Every analytics-driven interaction from an end user generates calculations, and with the described architecture, those calculations are pushed back to the core platform where the data resides.

The same integrated architecture is also a game changer when it comes to agility and data optimization. When new, complex data is required, it can be added without data replication. Since there is no data replication, the data warehouse modeling can be done on the fly, leveraging the semantics. We no longer have to model, create, and populate new tables and aggregates when additional data is required in the data warehouse, because there are no new tables needed! We only create additional semantics, and this can be done on the fly.
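
Continuing with SQLite as a stand-in for the in-memory platform, adding semantics instead of new tables can be as simple as defining a view over data that already exists, with no replication and no load job.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.execute("CREATE TABLE regions (region TEXT, country TEXT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", [("North", 120.0), ("South", 95.0)])
    conn.executemany("INSERT INTO regions VALUES (?, ?)", [("North", "NL"), ("South", "NL")])

    # New "semantics": a view joining existing data on the fly -- no new table is populated.
    conn.execute("""
        CREATE VIEW sales_by_country AS
        SELECT r.country, SUM(s.amount) AS total
        FROM sales s JOIN regions r ON s.region = r.region
        GROUP BY r.country
    """)
    print(conn.execute("SELECT * FROM sales_by_country").fetchall())   # [('NL', 215.0)]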

Learn why you need to put analytics into your business processes in the free eBook How to Use Algorithms to Dominate Your Industry.

This article appeared on Iver van de Zand.



About Iver van de Zand

Iver van de Zand is a Business Analytics Leader at SAP, with special attention to the Business Intelligence Suite, Lumira, and Predictive Analytics.