Pure SaaS Vs. Cloud Platforms: Understanding What Cloud Actually Means

Kevin Murray

I frequently find there’s more than a little confusion over what “cloud” actually means. So, before we get into the relative product offerings, I’ll quickly define terms. (Nothing is so useful as a guide when you’re not quite sure what’s being discussed.)

With pure software-as-a-service (SaaS) platforms, the application or platform itself is a cloud service – by which I mean that the infrastructure is built for the cloud. You interact with it only through APIs and, rather than owning the software, you pay to use it. The platform sits in the cloud, where you configure or customize it as you wish.

By contrast, a cloud-hosted model takes a preexisting software application and offers a hosting service for it. As with pure SaaS platforms, there is no need to set up the infrastructure yourself. However, the platform is a version of existing software that has been modified to run in the cloud.

Pure SaaS platforms

  • Upgrades: The platform is version-less – the version that’s live is the only one there is. Rather than shipping discrete versions, the software vendor continually upgrades this single platform. This means, however, that new features are pushed to you even if you didn’t request them or have decided you don’t need them, and they may arrive at the wrong time for your business.
  • Maintenance: Although new features are rolled out to you automatically, you still need to check that they don’t clash with your existing site. The need to regression test changes remains, and you can’t choose when to undertake this effort, because you are not in control of when new features are pushed.
  • New features: The roadmap delivery may not be as deep. Although new features are released (typically) each quarter, they go straight out to a live environment, so there is a greater need to consider backward compatibility. The vendor also has to ensure that no customer is adversely affected, which means the pace of evolution may be slower.
  • Hosting costs: Hosting costs are included. This is great for clarity, but can mean that you are paying for extra server capacity that you’re not using.
  • Capacity: Any peak utilization is managed by the cloud. You don’t need to worry about capacity.
Cloud-hosted platforms

  • Hosting options: Here you have flexible options for your hosting. Your platform is probably hosted on a private cloud, which means that you can tier different software hosting to suit your needs. The deployment architecture of your software can be customized more extensively.
  • Maintenance: Some level of maintenance is now going to be required. This might be done through third-party application support, but potential issues, such as capacity planning, disaster planning, and so on, need to be considered.
  • New features: You can skip versions, but be wary – skip too many and the gap between your version and the current one becomes so big that any future upgrade will be tough.
  • Hosting costs: You pay for whatever hosting you actually use – which means your hosting costs need to be planned for. This type of platform brings commercial considerations about what your hosting bill is going to look like.
  • Capacity: The infrastructure can dynamically provision more servers to deliver capacity.
  • Customization: Software is fully customizable, as this type of platform allows for wider changes from the provider. Your cloud is your cloud, so you can make whatever changes to the software you want to suit your business.
  • Upgrades: Here, you manage when and if you want to upgrade. However, effort is still required on your side to ensure that there are no regressions or clashes if an upgrade is installed. You choose when to upgrade, but you still need to undergo a proper upgrade process to correctly onboard your new features.

Which platform is right for you?

Both pure SaaS platforms and cloud-hosted platforms have an impressive range of features. Depending on your business needs, either could work. The key is to make sure that you are informed about the choices you’re making and the commercial implications of each software offering. As with all impactful decisions, it is vital to have all of the information on hand when making your choice.

Learn more about application programming interfaces in Unleash the Killer API.


Building The Big Data Warehouse, Part 3: Overcoming Challenges

Barbara Lewis

Part 3 in the “Big Data Warehouse” series.

Welcome to Part 3 of our series on the Big Data warehouse. Part 1 covered why enterprises are looking to create a Big Data warehouse, and Part 2 covered the key elements of one. This discussion covers how to overcome the particular challenges of creating a Big Data warehouse. Since the enterprise-data-warehouse side of a Big Data warehouse is generally well understood and its challenges well addressed, this discussion focuses on the newer and rapidly evolving side – the Big Data part of the architecture.

The implementation and operational challenges of Big Data

Big Data solutions like Hadoop and/or Spark-based platforms are attractive to many organizations because they can cost-effectively store and process extremely large volumes of heterogeneous data (text files, video, audio, machine logs, and structured data like transaction information). However, Big Data solutions like Hadoop and Spark pose unusual challenges regarding infrastructure deployment, scaling, and successful ongoing operations. These particular challenges must be taken into consideration when deciding the ideal deployment model for incorporating Big Data into the enterprise data environment.

Big Data deployment models

There are three common methods of Big Data solution deployment:

  1. On-premises, do-it-yourself deployment and operations. The DIY approach requires procurement and provisioning of a scale-out cluster for Hadoop and Spark, as well as installing and configuring Hadoop and other ecosystem components. This approach is resource-intensive in terms of both capital costs and up-front and ongoing human resource costs. IT, and often the data science team, is heavily involved in deployment, upgrades, security implementation, and ongoing operations. The ongoing operations burden is not trivial. Big Data platforms need to be regularly tuned to ensure consistently high performance over time, especially as data volumes scale. Ignoring or diminishing the Big Data operations responsibility inevitably results in painfully slow, ineffective, or nonfunctional data projects.
  2. Infrastructure-as-a-service, with do-it-yourself operations. This approach includes getting generic cloud servers from a provider such as Amazon Web Services or Microsoft Azure and then running a Hadoop and/or Spark platform on top. IT is responsible for configuring the clusters and providing the operational team required to run the solution, as well as providing resources to implement and maintain supporting software. Some infrastructure-as-a-service providers also offer services that perform the initial Hadoop setup for users, such as Amazon EMR or Microsoft HDInsight. However, the critical responsibility of ongoing operations remains the purview of the IT team. Since the operational responsibility is both crucial to success and time intensive, this approach also requires heavy involvement from IT and a well-qualified user community.
  3. Fully managed Big-Data-as-a-service. This is a cloud-delivered service offering that includes computing infrastructure optimized for Hadoop and Spark; a complete Big Data software platform; and the ongoing operational support required to minimize job failure, scale the solution, ensure that solution updates are tested and applied, resolve resource conflicts, and perform ongoing tuning. The vendor also ensures adequate security measures for the customer.

Key aspects of an ideal Big Data solution

In order for a Big Data architecture to be effective, the ideal solution will be capable of the following:

  1. Minimizing the “time to value” of the organization’s Big Data initiatives, such as fraud detection, customer 360, IoT projects, and more
  2. Providing optimized performance on an ongoing basis, to ensure that service requirements are consistently met
  3. Scaling elastically based on actual compute and storage demands, so that capacity is maximized and cost is minimized
  4. Reducing the organization’s ongoing operational burden, so that valuable IT and data science resources are spent on the higher-value aspects of projects that drive the business forward

While some organizations will find that they can achieve this ideal on-premises, there are strong reasons to consider a hybrid cloud or cloud-only environment in order to achieve Big Data goals.

The next blog in this series will explore each of these aspects in greater detail, outlining the pros and cons of the various deployment approaches.


About Barbara Lewis

Barbara Lewis is the VP of Marketing for SAP Cloud Platform Big Data Services and a thought leader in SAP’s Big Data practice, with expertise in cloud, Big Data solutions, data landscape management, Internet of Things (IoT), analytics, and business intelligence. Barbara led the launch of SAP Data Hub, the latest Big Data offering from SAP, and is active in SAP’s Big Data Warehousing initiative.

Building The Big Data Warehouse: Part 2

Barbara Lewis

Part 2 in the “Big Data Warehouse” series

In the first part of this four-part discussion on the Big Data warehouse, we covered why enterprises are looking to create a Big Data warehouse that unites information from Big Data stores and enterprise data stores. Here in part 2, we’ll cover the key elements of a Big Data warehouse and which issues enterprise technology leaders should keep in mind as they evaluate options.

What is a Big Data warehouse?

A Big Data warehouse is an architecture for data management and organization that utilizes both traditional data warehouse architectures and modern Big Data technologies, with the goal of providing rapid analysis across a broad range of information types. While analytics can certainly be run exclusively on Big Data repositories or on enterprise data repositories, it is the combination of the two types of repositories into a unified data architecture that distinguishes a Big Data warehouse.

Forrester defines the Big Data warehouse as: “A specialized, cohesive set of data repositories and platforms used to support a broad variety of analytics running on-premises, in the cloud, or in a hybrid environment. BDW leverages both traditional and new technologies such as Hadoop, columnar and row-based data warehouses, ETL and streaming, and elastic in-memory and storage frameworks.” (Forrester, “The Next Generation EDW is the Big Data Warehouse” Yuhanna, Noel. August 29, 2016, page 6.)

Key elements of the Big Data warehouse

A Big Data warehouse architecture typically encompasses the following elements:

  • A breadth of data repositories. These include repositories for both Big Data and enterprise, structured data. A Big Data warehouse typically draws from multiple data repositories, including traditional relational databases that house structured, enterprise data; columnar data stores tailored for rapid enterprise data aggregation; and Big Data stores (such as Hadoop) that handle both unstructured and structured data in massive volumes.
  • Compute/processing. Fundamental processing can happen at multiple levels in a Big Data warehouse architecture. For example, Hadoop platforms contain processing capability that can deliver aggregated information to the enterprise relational database (a minimal sketch of this pattern follows this list). Fast-turn analytical processing can also happen at a higher layer, such as using the Spark engine on Big Data. Machine learning analytics can also be applied at a higher level in the stack.
  • Data management capabilities. The data management capabilities necessary for an effective Big Data warehouse include: data integration (tying systems together), data quality (ensuring a level of cleanliness or correctness of information), data transformation (ensuring consistency of data format), data security, and data governance (ensuring compliance with appropriate policy and regulatory rules).
  • Interactive analytics. Interactive analytical capabilities include in-memory analytics, ad hoc interactions, or the ability for analysts to do self-service analytics on the underlying data.
  • Advanced analytics. In addition to traditional data analysis techniques, organizations can also add advanced analytical engines to data managed by the Big Data warehouse architecture. This includes predictive analytics, graph analytics, and spatial analytics, for example.
  • A variety of data environments. Big Data warehouses typically span a variety of data environments, often combining on-premises databases, cloud data stores, and hybrid environments that have already been integrated. While it is possible for some organizations to have all on-premises environments or all cloud environments, this is increasingly unusual.
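
To make the compute/processing element concrete, here is a minimal PySpark sketch of the first pattern above: aggregating raw event data held in a Hadoop store into a compact summary that is handed off to the enterprise relational database. The paths, column names, and connection details are hypothetical placeholders, and the overall shape is one reasonable approach rather than a prescribed architecture.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes a Spark cluster with access to HDFS; all paths and
# column names below are hypothetical placeholders.
spark = SparkSession.builder.appName("bdw-aggregation").getOrCreate()

# Raw, high-volume event data living in the Big Data store.
events = spark.read.parquet("hdfs:///data/raw/clickstream")

# Reduce it to a compact, structured summary per customer per day.
daily_summary = (
    events
    .withColumn("day", F.to_date("event_timestamp"))
    .groupBy("customer_id", "day")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("session_id").alias("sessions"),
    )
)

# Hand the aggregate off to the enterprise warehouse over JDBC.
daily_summary.write.jdbc(
    url="jdbc:postgresql://warehouse:5432/edw",  # placeholder connection
    table="analytics.daily_customer_activity",
    mode="append",
    properties={"user": "etl", "password": "..."},
)
```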

Big Data warehouse general architecture

Figure: Generic Big Data warehouse architecture. (Forrester, “The Next Generation EDW is the Big Data Warehouse” Yuhanna, Noel. August 29, 2016, page 8.)

Driving analytics and business intelligence across the organization

Generally, the goal of the Big Data warehouse is similar to the traditional goals of the enterprise data warehouse: delivering intelligence and analytics to decision-makers to drive business efficiency and effectiveness. While the goal may be the same, there is also typically a goal of making analytics and reporting more broadly available across the organization.

In order for an enterprise to remain agile and respond to emerging opportunities and threats, it typically cannot afford the time delays required for decisions to be made only at the top of the organization. As a result, to meet changing expectations regarding speed and responsiveness, companies are increasingly providing analytics and reporting tools to additional layers of management or to divisions that did not have this level of insight or autonomy before.

Key issues to keep in mind

Ease of integration. By definition, a Big Data warehouse requires the integration of a wide variety of data repositories, processing capabilities, and analytical capabilities. Thoroughly investigating the ease of integration of major components of the Big Data warehouse will be key not only to initial deployment success, but also the ongoing success of the architecture.

Extensibility. There has been rapid innovation in data management, data storage, and analytics, all happening simultaneously. Ensuring that the architecture can be easily extended to incorporate emerging technologies will be important to ensuring the ongoing relevance of the overall data architecture.

Orchestration. How easy is it to create data pipelines that cross the different elements of the data warehouse? And how easy is it to manage and update those pipelines?

With this overview of the key elements of the Big Data warehouse architecture, the next blog will cover the challenges of implementing a Big Data warehouse architecture and how they can be overcome.


The Blockchain Solution

By Gil Perez, Tom Raftery, Hans Thalbauer, Dan Wellers, and Fawn Fitter

In 2013, several UK supermarket chains discovered that products they were selling as beef were actually made at least partly—and in some cases, entirely—from horsemeat. The resulting uproar led to a series of product recalls, prompted stricter food testing, and spurred the European food industry to take a closer look at how unlabeled or mislabeled ingredients were finding their way into the food chain.

By 2020, a scandal like this will be eminently preventable.

The separation between bovine and equine will become immutable with Internet of Things (IoT) sensors, which will track the provenance and identity of every animal from stall to store, adding the data to a blockchain that anyone can check but no one can alter.

Food processing companies will be able to use that blockchain to confirm and label the contents of their products accordingly—down to the specific farms and animals represented in every individual package. That level of detail may be too much information for shoppers, but they will at least be able to trust that their meatballs come from the appropriate species.

The Spine of Digitalization

Keeping food safer and more traceable is just the beginning, however. Improvements in the supply chain, which have been incremental for decades despite billions of dollars of technology investments, are about to go exponential. Emerging technologies are converging to transform the supply chain from tactical to strategic, from an easily replicable commodity to a new source of competitive differentiation.

You may already be thinking about how to take advantage of blockchain technology, which makes data and transactions immutable, transparent, and verifiable (see “What Is Blockchain and How Does It Work?”). That will be a powerful tool to boost supply chain speed and efficiency—always a worthy goal, but hardly a disruptive one.

However, if you think of blockchain as the spine of digitalization and technologies such as AI, the IoT, 3D printing, autonomous vehicles, and drones as the limbs, you have a powerful supply chain body that can leapfrog ahead of its competition.

What Is Blockchain and How Does It Work?

Here’s why blockchain technology is critical to transforming the supply chain.

Blockchain is essentially a sequential, distributed ledger of transactions that is constantly updated on a global network of computers. The ownership and history of a transaction is embedded in the blockchain at the transaction’s earliest stages and verified at every subsequent stage.

A blockchain network uses vast amounts of computing power to encrypt the ledger as it’s being written, which makes it possible for every computer in the network to verify transactions safely and transparently. The more organizations that participate in the network, the harder the ledger becomes to tamper with.
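
To make the mechanics tangible, here is a deliberately minimal Python sketch (an illustration of the hash-chaining idea only, not any production blockchain, and with none of the distributed consensus a real network adds): each block stores the hash of its predecessor, so altering any historical entry invalidates every block that follows.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(transactions: list, prev_hash: str) -> dict:
    """Create a block chained to its predecessor via prev_hash."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# A tiny chain: a genesis block, then one block per supply chain event.
chain = [new_block(["genesis"], prev_hash="0" * 64)]
for event in (["cow 42 left farm A"], ["cow 42 arrived at processor B"]):
    chain.append(new_block(event, prev_hash=block_hash(chain[-1])))

def is_valid(chain: list) -> bool:
    """Every block must reference the hash of the block before it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(is_valid(chain))                               # True
chain[1]["transactions"] = ["horse 7 left farm A"]   # tamper with history
print(is_valid(chain))                               # False: tampering detected
```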

Why does blockchain matter for the supply chain?

  • It enables the safe exchange of value without a central verifying partner, which makes transactions faster and less expensive.
  • It dramatically simplifies recordkeeping by establishing a single, authoritative view of the truth across all parties.
  • It builds a secure, immutable history and chain of custody as different parties handle the items being shipped, and it updates the relevant documentation.
  • By doing these things, blockchain allows companies to create smart contracts based on programmable business logic, which can execute themselves autonomously and thereby save time and money by reducing friction and intermediaries (a toy sketch of the idea follows this list).
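
As a toy sketch of that last point (plain Python standing in for on-chain code; a real smart contract would be deployed to and executed by a blockchain platform, and the party names here are hypothetical), the contract releases payment by itself once both the carrier and the buyer have confirmed delivery:

```python
from dataclasses import dataclass, field

@dataclass
class ShipmentContract:
    """Illustrative self-executing agreement, not production code."""
    amount: float
    confirmations: set = field(default_factory=set)
    paid: bool = False

    def confirm_delivery(self, party: str) -> None:
        self.confirmations.add(party)
        self._maybe_execute()

    def _maybe_execute(self) -> None:
        # The programmable business logic: pay automatically once
        # both required parties have confirmed delivery.
        if {"carrier", "buyer"} <= self.confirmations and not self.paid:
            self.paid = True
            print(f"Releasing payment of {self.amount} to the seller")

contract = ShipmentContract(amount=10_000.0)
contract.confirm_delivery("carrier")  # nothing happens yet
contract.confirm_delivery("buyer")    # condition met, executes itself
```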

Hints of the Future

In the mid-1990s, when the World Wide Web was in its infancy, we had no idea that the internet would become so large and pervasive, nor that we’d find a way to carry it all in our pockets on small slabs of glass.

But we could tell that it had vast potential.

Today, with the combination of emerging technologies that promise to turbocharge digital transformation, we’re just beginning to see how we might turn the supply chain into a source of competitive advantage (see “What’s the Magic Combination?”).

What’s the Magic Combination?

Those who focus on blockchain in isolation will miss out on a much bigger supply chain opportunity.

Many experts believe emerging technologies will work with blockchain to digitalize the supply chain and create new business models:

  • Blockchain will provide the foundation of automated trust for all parties in the supply chain.
  • The IoT will link objects—from tiny devices to large machines—and generate data about status, locations, and transactions that will be recorded on the blockchain.
  • 3D printing will extend the supply chain to the customer’s doorstep with hyperlocal manufacturing of parts and products with IoT sensors built into the items and/or their packaging. Every manufactured object will be smart, connected, and able to communicate so that it can be tracked and traced as needed.
  • Big Data management tools will process all the information streaming in around the clock from IoT sensors.
  • AI and machine learning will analyze this enormous amount of data to reveal patterns and enable true predictability in every area of the supply chain.

Combining these technologies with powerful analytics tools to predict trends will make lack of visibility into the supply chain a thing of the past. Organizations will be able to examine a single machine across its entire lifecycle and identify areas where they can improve performance and increase return on investment. They’ll be able to follow and monitor every component of a product, from design through delivery and service. They’ll be able to trigger and track automated actions between and among partners and customers to provide customized transactions in real time based on real data.

After decades of talk about markets of one, companies will finally have the power to create them—at scale and profitably.

Amazon, for example, is becoming as much a logistics company as a retailer. Its ordering and delivery systems are so streamlined that its customers can launch and complete a same-day transaction with a push of a single IP-enabled button or a word to its ever-attentive AI device, Alexa. And this level of experimentation and innovation is bubbling up across industries.

Consider manufacturing, where the IoT is transforming automation inside already highly automated factories. Machine-to-machine communication is enabling robots to set up, provision, and unload equipment quickly and accurately with minimal human intervention. Meanwhile, sensors across the factory floor are already capable of gathering such information as how often each machine needs maintenance or how much raw material to order given current production trends.

Once they harvest enough data, businesses will be able to feed it through machine learning algorithms to identify trends that forecast future outcomes. At that point, the supply chain will start to become both automated and predictive. We’ll begin to see business models that include proactively scheduling maintenance, replacing parts just before they’re likely to break, and automatically ordering materials and initiating customer shipments.
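
As a toy illustration of that step (hypothetical sensor readings and failure labels, with a scikit-learn classifier standing in for whatever model a real deployment would use), the pattern is: train on harvested history, then score each new reading and act before the predicted failure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical history harvested from factory sensors:
# columns = [vibration_rms, temperature_c, hours_since_service]
readings = np.array([
    [0.20, 60, 100],
    [0.30, 65, 400],
    [0.90, 85, 900],
    [1.10, 90, 1200],
    [0.25, 62, 250],
    [1.00, 88, 1100],
])
failed_within_30_days = np.array([0, 0, 1, 1, 0, 1])

model = RandomForestClassifier(random_state=0)
model.fit(readings, failed_within_30_days)

# A new reading streams in from the floor; schedule maintenance
# proactively if the predicted failure risk crosses a threshold.
latest = np.array([[0.80, 83, 950]])
risk = model.predict_proba(latest)[0, 1]
if risk > 0.5:
    print(f"Failure risk {risk:.0%}: schedule maintenance and order parts")
```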

Italian train operator Trenitalia, for example, has put IoT sensors on its locomotives and passenger cars and is using analytics and in-memory computing to gauge the health of its trains in real time, according to an article in Computer Weekly. “It is now possible to affordably collect huge amounts of data from hundreds of sensors in a single train, analyse that data in real time and detect problems before they actually happen,” Trenitalia’s CIO Danilo Gismondi told Computer Weekly.

The project, which is scheduled to be completed in 2018, will change Trenitalia’s business model, allowing it to schedule more trips and make each one more profitable. The railway company will be able to better plan parts inventories and determine which lines are consistently performing poorly and need upgrades. The new system will save €100 million a year, according to ARC Advisory Group.

New business models continue to evolve as 3D printers become more sophisticated and affordable, making it possible to move the end of the supply chain closer to the customer. Companies can design parts and products in materials ranging from carbon fiber to chocolate and then print those items in their warehouse, at a conveniently located third-party vendor, or even on the client’s premises.

In addition to minimizing their shipping expenses and reducing fulfillment time, companies will be able to offer more personalized or customized items affordably in small quantities. For example, clothing retailer Ministry of Supply recently installed a 3D printer at its Boston store that enables it to make an article of clothing to a customer’s specifications in under 90 minutes, according to an article in Forbes.

This kind of highly distributed manufacturing has potential across many industries. It could even create a market for secure manufacturing for highly regulated sectors, allowing a manufacturer to transmit encrypted templates to printers in tightly protected locations, for example.

Meanwhile, organizations are investigating ways of using blockchain technology to authenticate, track and trace, automate, and otherwise manage transactions and interactions, both internally and within their vendor and customer networks. The ability to collect data, record it on the blockchain for immediate verification, and make that trustworthy data available for any application delivers indisputable value in any business context. The supply chain will be no exception.

Blockchain Is the Change Driver

The supply chain is configured as we know it today because it’s impossible to create a contract that accounts for every possible contingency. Consider cross-border financial transfers, which are so complex and must meet so many regulations that they require a tremendous number of intermediaries to plug the gaps: lawyers, accountants, customer service reps, warehouse operators, bankers, and more. By reducing that complexity, blockchain technology makes intermediaries less necessary—a transformation that is revolutionary even when measured only in cost savings.

“If you’re selling 100 items a minute, 24 hours a day, reducing the cost of the supply chain by just $1 per item saves you more than $52.5 million a year,” notes Dirk Lonser, SAP go-to-market leader at DXC Technology, an IT services company. “By replacing manual processes and multiple peer-to-peer connections through fax or e-mail with a single medium where everyone can exchange verified information instantaneously, blockchain will boost profit margins exponentially without raising prices or even increasing individual productivity.”
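
The arithmetic bears that out: 100 items a minute is 100 × 60 × 24 × 365 = 52,560,000 items a year, so a saving of $1 per item is indeed just over $52.5 million.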

But the potential for blockchain extends far beyond cost cutting and streamlining, says Irfan Khan, CEO of supply chain management consulting and systems integration firm Bristlecone, a Mahindra Group company. It will give companies ways to differentiate.

“Blockchain will let enterprises more accurately trace faulty parts or products from end users back to factories for recalls,” Khan says. “It will streamline supplier onboarding, contracting, and management by creating an integrated platform that the company’s entire network can access in real time. It will give vendors secure, transparent visibility into inventory 24×7. And at a time when counterfeiting is a real concern in multiple industries, it will make it easy for both retailers and customers to check product authenticity.”

Blockchain allows all the critical steps of the supply chain to go electronic and become irrefutably verifiable by all the critical parties within minutes: the seller and buyer, banks, logistics carriers, and import and export officials. Although the key parts of the process remain the same as in today’s analog supply chain, performing them electronically with blockchain technology shortens each stage from hours or days to seconds while eliminating reams of wasteful paperwork. With goods moving that quickly, companies have ample room for designing new business models around manufacturing, service, and delivery.

Challenges on the Path to Adoption

For all this to work, however, the data on the blockchain must be correct from the beginning. The pills, produce, or parts on the delivery truck need to be the same as the items listed on the manifest at the loading dock. Every use case assumes that the data is accurate—and that will only happen when everything that’s manufactured is smart, connected, and able to self-verify automatically with the help of machine learning tuned to detect errors and potential fraud.

Companies are already seeing the possibilities of applying this bundle of emerging technologies to the supply chain. IDC projects that by 2021, at least 25% of Forbes Global 2000 (G2000) companies will use blockchain services as a foundation for digital trust at scale; 30% of top global manufacturers and retailers will do so by 2020. IDC also predicts that by 2020, up to 10% of pilot and production blockchain-distributed ledgers will incorporate data from IoT sensors.

Despite IDC’s optimism, though, the biggest barrier to adoption is the immaturity of enterprise use cases, particularly around blockchain. Currently, the only blockchain system in significant production use is the virtual currency Bitcoin, which has unfortunately been tainted by its associations with speculation, dubious financial transactions, and the so-called dark web.

The technology is still in a sufficiently early stage that there’s significant uncertainty about its ability to handle the massive amounts of data a global enterprise supply chain generates daily. Never mind that it’s completely unregulated, with no global standard. There’s also a critical global shortage of experts who can explain emerging technologies like blockchain, the IoT, and machine learning to nontechnology industries and educate organizations in how the technologies can improve their supply chain processes. Finally, there is concern about how blockchain’s complex algorithms gobble computing power—and electricity (see “Blockchain Blackouts”).

Blockchain Blackouts

Blockchain is a power glutton. Can technology mitigate the issue?

A major concern today is the enormous carbon footprint of the networks creating and solving the algorithmic problems that keep blockchains secure. Although virtual currency enthusiasts claim the problem is overstated, Michael Reed, head of blockchain technology for Intel, has been widely quoted as saying that the energy demands of blockchains are a significant drain on the world’s electricity resources.

Indeed, Wired magazine has estimated that by July 2019, the Bitcoin network alone will require more energy than the entire United States currently uses and that by February 2020 it will use as much electricity as the entire world does today.

Still, computing power is becoming more energy efficient by the day and sticking with paperwork will become too slow, so experts—Intel’s Reed among them—consider this a solvable problem.

“We don’t know yet what the market will adopt. In a decade, it might be status quo or best practice, or it could be the next Betamax, a great technology for which there was no demand,” Lonser says. “Even highly regulated industries that need greater transparency in the entire supply chain are moving fairly slowly.”

Blockchain will require acceptance by a critical mass of companies, governments, and other organizations before it displaces paper documentation. It’s a chicken-and-egg issue: multiple companies need to adopt these technologies at the same time so they can build a blockchain to exchange information, yet getting multiple companies to do anything simultaneously is a challenge. Some early initiatives are already underway, though:

  • A London-based startup called Everledger is using blockchain and IoT technology to track the provenance, ownership, and lifecycles of valuable assets. The company began by tracking diamonds from mine to jewelry using roughly 200 different characteristics, with a goal of stopping both the demand for and the supply of “conflict diamonds”—diamonds mined in war zones and sold to finance insurgencies. It has since expanded to cover wine, artwork, and other high-value items to prevent fraud and verify authenticity.
  • In September 2017, SAP announced the creation of its SAP Leonardo Blockchain Co-Innovation program, a group of 27 enterprise customers interested in co-innovating around blockchain and creating business buy-in. The diverse group of participants includes management and technology services companies Capgemini and Deloitte, cosmetics company Natura Cosméticos S.A., and Moog Inc., a manufacturer of precision motion control systems.
  • Two of Europe’s largest shipping ports—Rotterdam and Antwerp—are working on blockchain projects to streamline interaction with port customers. The Antwerp terminal authority says eliminating paperwork could cut the costs of container transport by as much as 50%.
  • The Chinese online shopping behemoth Alibaba is experimenting with blockchain to verify the authenticity of food products and catch counterfeits before they endanger people’s health and lives.
  • Technology and transportation executives have teamed up to create the Blockchain in Transport Alliance (BiTA), a forum for developing blockchain standards and education for the freight industry.

It’s likely that the first blockchain-based enterprise supply chain use case will emerge in the next year among companies that see it as an opportunity to bolster their legal compliance and improve business processes. Once that happens, expect others to follow.

Customers Will Expect Change

It’s only a matter of time before the supply chain becomes a competitive driver. The question for today’s enterprises is how to prepare for the shift. Customers are going to expect constant, granular visibility into their transactions and faster, more customized service every step of the way. Organizations will need to be ready to meet those expectations.

If organizations have manual business processes that could never be automated before, now is the time to see whether that automation has become possible. Organizations that have made initial investments in emerging technologies are looking at how their pilot projects are paying off and where they might extend to the supply chain. They are starting to think creatively about how to combine technologies to offer a product, service, or business model that was not possible before.

A manufacturer will load a self-driving truck with a 3D printer capable of creating a customer’s ordered item en route to delivering it. A vendor will capture the market for a socially responsible product by allowing its customers to track the product’s production and verify that none of its subcontractors use slave labor. And a supermarket chain will win over customers by persuading them that their choice of supermarket is also a choice between being certain of what’s in their food and simply hoping that what’s on the label matches what’s inside.

At that point, a smart supply chain won’t just be a competitive edge. It will become a competitive necessity. D!

About the Authors

Gil Perez is Senior Vice President, Internet of Things and Digital Supply Chain, at SAP.

Tom Raftery is Global Vice President, Futurist, and Internet of Things Evangelist, at SAP.

Hans Thalbauer is Senior Vice President, Internet of Things and Digital Supply Chain, at SAP.

Dan Wellers is Global Lead, Digital Futures, at SAP.

Fawn Fitter is a freelance writer specializing in business and technology.

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.



The Differences Between Machine Learning And Predictive Analytics

Shaily Kumar

Many people are confused about the specifics of machine learning and predictive analytics. Although they are both centered on efficient data processing, there are many differences.

Machine learning

Machine learning is a method of computational learning that underlies most artificial intelligence (AI) applications. In ML, systems or algorithms improve themselves through experience with data, without relying on explicit programming. ML algorithms are wide-ranging tools capable of making predictions while simultaneously learning from trillions of observations.

Machine learning is considered a modern-day extension of predictive analytics. Efficient pattern recognition and self-learning are the backbones of ML models, which automatically evolve based on changing patterns in order to enable appropriate actions.

Many companies today depend on machine learning algorithms to better understand their clients and potential revenue opportunities. Hundreds of existing and newly developed machine learning algorithms are applied to derive high-end predictions that guide real-time decisions with less reliance on human intervention.

Business application of machine learning: employee satisfaction

One common, uncomplicated, yet successful business application of machine learning is measuring real-time employee satisfaction.

Machine learning applications can be highly complex, but one that’s both simple and very useful for business is an algorithm that compares employee satisfaction ratings to salaries. Instead of plotting a fixed predictive satisfaction curve against salary figures, as predictive analytics would suggest, the algorithm assimilates large amounts of training data as it arrives, and every new batch of training data refines the predictions, producing real-time accuracy and more helpful results.

This machine learning algorithm employs self-learning and automated recalibration in response to pattern changes in the training data, making it more reliable for real-time predictions than other AI approaches. Continually adding and updating training data generally yields better predictions.
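
A minimal sketch of that recalibration loop (hypothetical survey data; scikit-learn’s SGDRegressor is used here because its partial_fit method supports incremental updates, though a real system might choose a different model):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical data: salary as the feature, satisfaction as the target.
rng = np.random.default_rng(0)
salaries = rng.uniform(40_000, 150_000, size=(500, 1))
satisfaction = 2.0 + salaries[:, 0] / 50_000 + rng.normal(0, 0.3, 500)

scaler = StandardScaler().fit(salaries)
model = SGDRegressor(random_state=0)

# Initial training on the data collected so far.
model.partial_fit(scaler.transform(salaries), satisfaction)

# New survey responses arrive later; the model recalibrates
# incrementally instead of being rebuilt from scratch.
new_salaries = rng.uniform(40_000, 150_000, size=(50, 1))
new_satisfaction = (
    2.0 + new_salaries[:, 0] / 50_000 + rng.normal(0, 0.3, 50)
)
model.partial_fit(scaler.transform(new_salaries), new_satisfaction)

# Real-time prediction for a given salary.
print(model.predict(scaler.transform([[90_000]])))
```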

Machine learning can also be implemented in image classification and facial recognition with deep learning and neural network techniques.

Predictive analytics

Predictive analytics can be defined as the procedure of condensing huge volumes of data into information that humans can understand and use. Basic descriptive analytic techniques include averages and counts. Descriptive analytics based on obtaining information from past events has evolved into predictive analytics, which attempts to predict the future based on historical data.

This concept applies techniques from classical statistics, like regression and decision trees, to provide credible answers to queries such as: “How exactly will my sales be influenced by a 10% increase in advertising expenditure?” This leads to simulations and “what-if” analyses that help users learn more.
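
As a small illustration of that kind of query (hypothetical monthly figures; an ordinary least-squares fit), the “what-if” becomes a simple prediction:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: monthly ad spend and sales, both in $ thousands.
ad_spend = np.array([[10], [12], [15], [18], [20], [24], [30]])
sales = np.array([110, 118, 132, 141, 152, 168, 190])

model = LinearRegression().fit(ad_spend, sales)

# "How will sales respond to a 10% increase in advertising expenditure?"
current = 20.0
baseline = model.predict([[current]])[0]
projected = model.predict([[current * 1.10]])[0]
print(f"Estimated sales lift: {projected - baseline:.1f}k")
```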

All predictive analytics applications involve three fundamental components:

  • Data: The effectiveness of every predictive model strongly depends on the quality of the historical data it processes.
  • Statistical modeling: Includes the various statistical techniques ranging from basic to complex functions used for the derivation of meaning, insight, and inference. Regression is the most commonly used statistical technique.
  • Assumptions: The conclusions drawn from collected and analyzed data usually assume the future will follow a pattern related to the past.

Data analysis is crucial for any business en route to success, and predictive analytics can be applied in numerous ways to enhance business productivity. These include things like marketing campaign optimization, risk assessment, market analysis, and fraud detection.

Business application of predictive analytics: marketing campaign optimization

In the past, valuable marketing campaign resources were wasted by businesses using instincts alone to try to capture market niches. Today, many predictive analytic strategies help businesses identify, engage, and secure suitable markets for their services and products, driving greater efficiency into marketing campaigns.

A clear application is using visitors’ search history and usage patterns on e-commerce websites to make product recommendations. Sites like Amazon increase their chance of sales by recommending products based on specific consumer interests. Predictive analytics now plays a vital role in the marketing operations of real estate, insurance, retail, and almost every other sector.

How machine learning and predictive analytics are related

While businesses must understand the differences between machine learning and predictive analytics, it’s just as important to know how they are related. Essentially, machine learning is a branch of predictive analytics. Despite their similar aims and processes, there are two main differences between them:

  • Machine learning works out predictions and recalibrates models in real-time automatically after design. Meanwhile, predictive analytics works strictly on “cause” data and must be refreshed with “change” data.
  • Unlike machine learning, predictive analytics still relies on human experts to work out and test the associations between cause and outcome.

Explore machine learning applications and AI software with SAP Leonardo.


About Shaily Kumar

Shailendra has been on a quest to help organisations make money out of data and has generated an incremental value of over one billion dollars through analytics and cognitive processes. With a global experience of more than two decades, Shailendra has worked with a myriad of Corporations, Consulting Services and Software Companies in various industries like Retail, Telecommunications, Financial Services and Travel - to help them realise incremental value hidden in zettabytes of data. He has published multiple articles in international journals about Analytics and Cognitive Solutions; and recently published “Making Money out of Data” which showcases five business stories from various industries on how successful companies make millions of dollars in incremental value using analytics. Prior to joining SAP, Shailendra was Partner / Analytics & Cognitive Leader, Asia at IBM where he drove the cognitive business across Asia. Before joining IBM, he was the Managing Director and Analytics Lead at Accenture delivering value to its clients across Australia and New Zealand. Coming from the industry, Shailendra held key Executive positions driving analytics at Woolworths and Coles in the past.