The Future Of Open Data

Heather McIlvaine

How are mobile apps, Big Data, and civic hacking changing the nature of open data in government? The Center for Technology in Government took a look at this topic and presents its findings.

Picture this: You’re a tourist in New York City and after a long day of sightseeing you’re looking for a bite to eat. But before you simply head into the nearest sandwich shop, you look it up on “NYC Restaurant Scrutinizer.” This mobile app isn’t your average restaurant guide: Along with food reviews, you can also see the results from restaurant inspections conducted by the New York City Department of Health and Mental Hygiene. After finding out that the sandwich shop doesn’t properly refrigerate its deli meats, you decide to go to the Chinese restaurant on the corner with an “A” grade. And delicious spring rolls to boot.

“NYC Restaurant Scrutinizer” was developed by a private app developer named Michael Boski in 2009. He beat the government agency to the punch by about three years: “ABCEats,” the health department’s own mobile app for checking restaurant grades, first appeared on iTunes in 2012.

Civic hacking and open data

Boski is a so-called civic hacker, a private citizen who taps into publicly available data to create useful apps for the general population. Since most governments endorse open data policies that disseminate such information, civic hacking is not illegal.

The New York City health department, for example, has been posting its inspection results online for anyone to see since the mid-1990s. It’s worth noting that without this policy, “NYC Restaurant Scrutinizer” would not be possible. On the other hand, it was Boski, not the government, who took the first step in making the information easier to consume, which increased its value to citizens.
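To make that first step concrete, here is a minimal, hypothetical sketch of what a civic hacker might do with a published inspection dataset: load it and reduce it to each restaurant’s most recent grade. The file name and column names are illustrative assumptions, not the health department’s actual schema.

```python
import csv

# Hypothetical export of restaurant inspection results (file name and
# column names are assumed for illustration, not the real schema).
INSPECTIONS_CSV = "nyc_restaurant_inspections.csv"

def latest_grades(path):
    """Return the most recent inspection grade per restaurant."""
    latest = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = row["restaurant_name"]
            date = row["inspection_date"]   # e.g. "2012-06-30"
            grade = row["grade"]            # e.g. "A", "B", "C"
            if name not in latest or date > latest[name][0]:
                latest[name] = (date, grade)
    return {name: grade for name, (date, grade) in latest.items()}

if __name__ == "__main__":
    for name, grade in sorted(latest_grades(INSPECTIONS_CSV).items()):
        print(f"{name}: {grade}")
```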

That is often the case: In its current form, open government data is large and complex and usually lacks the context for everyday use. But not everyone thinks civic hacking is the answer to this problem. For one thing, apps from private developers, like “NYC Restaurant Scrutinizer,” can’t guarantee the accuracy of information they provide. And they usually don’t take into account any downstream consequences that might negatively affect businesses and government agencies.

“The Dynamics of Opening Government Data”

SAP’s Public Services Industry Business Solutions team wanted to find out more about the impact of these open data initiatives on the public and private sector, so it commissioned the Center for Technology in Government (CTG), an independent research organization at the University at Albany, State University of New York (SUNY), to do further research in the area. The resulting whitepaper, “The Dynamics of Opening Government Data,” examines two case studies of open government initiatives and presents four recommendations based on CTG’s findings.

One of those case studies was the previously mentioned example with the New York City health department. The other case study involved the Department of Transportation for the city of Edmonton, Canada, and its release of data on planned road construction projects.

“In both cases, it was clear that simply opening government data – making it available to the public – is not the whole story,” says Anthony Cresswell, a senior fellow at CTG. “It has longer-term consequences that are difficult to predict. In our research, we sought to come up with strategies and policies that help government agencies understand ahead of time how opening data will affect all the stakeholders involved.”

The information polity

Identifying and understanding this collection of stakeholders, what Cresswell and his colleagues call an “information polity,” is one of the key aspects of the whitepaper. “An information polity includes the government agencies that create the data, employees who use the data, citizens who want access to the data, app developers who modify the data, and other stakeholders,” explains CTG senior program associate Brian Burke.

In other words, Michael Boski is part of an information polity, as are the people using his app, as are the restaurants being inspected, as is – most importantly – the government agency conducting those inspections and providing the data. “The timeliness, format, and quality of data that the government provides all affect the usability of that data for developers,” says Burke. “And it is the relationship between these information sources that ultimately influences the value of that data for the public.”

Four recommendations from CTG

How should government agencies go about implementing open data initiatives that maximize value and minimize risk? Cresswell and Burke explain the four recommendations outlined in the whitepaper.

1. Release government data that are relevant to both agency performance and the public interest.

As part of the Open Government Initiative launched by the Obama administration in 2009, U.S. federal agencies published high-value datasets online at data.gov. Any person can access the Web site and explore a vast amount of information, ranging from the location of every farmer’s market in the country to the average energy consumption by household to the U.S. trade volume in tomatoes.

But how many citizens really want to know what the current yield of the country’s tomato crop is? “As government agencies try to balance resources, time, and effort, they should choose to focus on those datasets that hold the most public value,” says Cresswell.

2. Invest in strategies to estimate how different stakeholders will use the data.

“Some datasets, like government budgets, don’t lend themselves to use on a smartphone. Others, like restaurant inspection results, make a lot more sense when you connect them to geospatial data so they can be used on the go. If you model how users are likely to interact with the data, you can choose the technology solution that will maximize value,” says Burke.

In addition, different stakeholders may want access to different kinds of data. In the road construction case study, for example, a commuter might want to know what the estimated delay on a particular route is, while a construction site foreman digging near a road block might want access to the location of the newly-laid sewer line. Government agencies will have to decide whether they can – or should – invest in collecting new data to suit these different needs.

3. Devise data management practices that improve context in order to “future-proof” data resources.

“Good metadata will help you ‘future-proof’ data resources. It’s a very simple thing, but very powerful,” says Burke. The developer who created the road construction app for the citizens of Edmonton reported that building the app was very straightforward, thanks to the high-quality metadata already provided by the city.

For example, the dataset already included geospatial data that was compliant with GIS standards, which made mapping the information easier and more accurate. Good data management practices are essential for government agencies looking to make their data more accessible and useful.
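As a rough illustration of what well-described, standards-compliant data can look like, the sketch below assembles a GeoJSON-style record for a road construction project. The field names and values are illustrative assumptions, not Edmonton’s actual schema.

```python
import json

# A hypothetical road-construction record published as a GeoJSON feature.
# Field names and values are illustrative assumptions, not the city's schema.
construction_project = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",  # the affected stretch of road
        "coordinates": [[-113.4938, 53.5461], [-113.4902, 53.5449]],
    },
    "properties": {
        "project_id": "RC-2013-0042",
        "description": "Sewer line replacement, 104 Ave NW",
        "start_date": "2013-05-01",
        "end_date": "2013-08-15",
        "expected_delay_minutes": 10,
        "coordinate_system": "WGS84",  # standard latitude/longitude reference
    },
}

print(json.dumps(construction_project, indent=2))
```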

4. Think about sustainability.

Planning for the long-term sustainability of a given dataset is strongly linked to understanding the values and risks associated with releasing the data in the first place. Before restaurant grades were posted online by the New York City health department, restaurants had to display their rating on the premises, but they could easily hide a bad report in a less visible place. Once the agency began posting inspection reports online in the 1990s, restaurants felt the consequences of a bad rating much more severely.

As a result, they began demanding more frequent health inspections, so they would have the chance to improve their score. The health department responded to this demand by hiring more inspectors. “If you identify the value and risk of releasing information, you can better predict what additional resources you’ll need down the line,” recommends Burke.

What does this mean for SAP?

While government agencies reevaluate their open data policies in light of these four recommendations, SAP is also examining what this means for the software industry: “CTG’s research gives SAP a better understanding of the challenges that the public sector is facing today to support open data and open government mandates,” says Elizabeth McGowan, director of technology & innovation for business solutions in the public services industry at SAP. “To generate public value going forward, it will be crucial for governments – at all levels – to invest in strategies that open data for exploration by citizens, stakeholders, and other government agencies. We believe that solutions for big data management and analysis will be a critical part of these strategies.”

Follow @SAPOpenGov on Twitter to stay up-to-date on the latest open government news, events, and insights.

Photo: iStockphoto

About Heather McIlvaine

Heather McIlvaine is the Editor of SAP.info. Her specialties include writing, editing, journalism, online research and publishing.

Tags:

awareness

The Intelligent Supply Chain: A Use Case For Artificial Intelligence

Dr. Ravi Prakash Mathur

The term artificial intelligence (AI) invokes images of robot uprisings, space missions to galaxies far, far away, and lab-created clones that make humans immortal. For years, thought-provoking talks by professors have entertained the notion of whether AI is—or ever will be—self-aware. The more adventurous among us may be drawn toward theosophical discussions on creationism or debates about the realities and influences of the quantum world.

Current thinking about AI may border on science vision (if not science fiction or philosophy)—perhaps for a good reason. Technologies once imagined only on the movie screen now bring convenience and value to our daily lives. Some examples include gestural interfaces, machine-aided purchases, facial recognition, autonomous cars, miniature drones, ubiquitous advertising, and electronic surveillance. Machines are now making predictions on trading stocks, customer purchases, traffic flows, and crime—much as we saw in the 2002 movie “Minority Report.”

From movie screen to real-world applications

Technology leaders have placed big bets on technologies such as brain-computer interfaces, AI in medicine, and deep learning and machine learning tools. AI is expected to lead the new economy, which is becoming known as the Fourth Industrial Revolution or the Second Machine Age. AI is at the forefront of business innovation, along with emerging technologies such as robotics, the Internet of Things, 3D printing, quantum computing, and nanotechnology.

Companies are still deciding how AI can be designed to fit into their processes. However, burning questions persist around whether self-learning machines will replace or assist humans in white-collar and blue-collar jobs:

  • Can machines learn common sense and empathy?
  • Who owns the insights that are generated by AI technology, and who owns the responsibility for an erroneous decision made by a machine?
  • Can you teach a machine how to make a decision when dealing with an ethical dilemma?

While these concerns still require much deliberation, most industries understand that the application of AI in businesses brings immense potential. Currently, the top 10 use cases for the technology are data security, personal privacy, financial trading, healthcare, marketing personalization, fraud detection, recommendations, online search, natural language processing (NLP), and smart cars.

Considering how quickly these new technologies are adopted and adapted to new use cases, it is only a matter of time before we start seeing AI capabilities become a part of the fabric of normal business processes. While routine transactions have already been automated, many companies that are higher on the learning curve use predictive and prescriptive analytics to guide their operations.

In the supply chain management function, people talk about degrees of autonomy in the planning process: it starts with the use of historical data for planning, moves through automation that planners can override, and ends at nonoptional automation, where planners cannot override the recommendations of the algorithms. The algorithmic supply chain requires organizational maturity and cultural readiness to embed such systems and rely on them regularly. The concept of an intelligent supply chain goes a step further by incorporating the self-learning capabilities of the machine to make better supply-chain decisions.

An opportunity to “learn” and improve – without disruption

Common wisdom tells us that organisations compete on the strength of their supply chain ecosystems. In the future, organisations will compete on the strength of the intelligence embedded in their systems. Ultimately, the winner will be the supply chain that learns most quickly and with the greatest precision.

At a fundamental level, machine-learning algorithms start from a teaching set of data. The machine then answers questions, and every answer, correct or incorrect, is added back to the teaching set. In this way the algorithm keeps getting better and smarter over time.

Organisations learn in a similar fashion: Every organisation has its own embedded intelligence, which manifests itself through the behavior of its managers and their response to the environment. Supply-chain managers use it to review and modify machine-generated forecasts, production plans, or procurement plans.

Putting a self-learning loop into the system allows a machine to analyse, for example, why a manual override was made to its recommendation, and then check for that condition during the next cycle. This capability helps with tasks such as fixing incorrect settings, adjusting to changing norms, or addressing evolving market dynamics. Over time, machines would learn how managers prioritize their plans based on emerging business scenarios, not just optimization algorithms.
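A minimal sketch of such a loop is shown below. It assumes a simple data shape (pairs of machine recommendations and planner overrides) and a simple adjustment rule; a real planning system would be far richer, but the feedback principle is the same.

```python
# Illustrative sketch of a self-learning planning loop: the system compares
# its own recommendations with planners' manual overrides and carries a
# correction factor into the next planning cycle. All numbers are made up.

def learn_correction(history, smoothing=0.3):
    """Estimate a multiplicative correction from past overrides."""
    correction = 1.0
    for recommended, overridden in history:
        ratio = overridden / recommended
        # Exponential smoothing: recent overrides count more.
        correction = (1 - smoothing) * correction + smoothing * ratio
    return correction

# (machine recommendation, planner override) pairs from previous cycles
override_history = [(1000, 1100), (980, 1080), (1020, 1120)]

correction = learn_correction(override_history)
next_recommendation = 1010  # raw output of the planning algorithm
adjusted = next_recommendation * correction

print(f"correction factor: {correction:.2f}")
print(f"adjusted recommendation: {adjusted:.0f}")
```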

For more on how advanced technology is transforming traditional business models, see Are You Joining The Machine Learning Revolution?

About Dr. Ravi Prakash Mathur

Dr. Ravi Prakash Mathur is Senior Director of Supply Chain Management (SCM) and Head of Logistics and Central Planning at Dr. Reddy’s Laboratories Ltd. He heads global logistics, central planning, and central sourcing for the pharmaceutical organization. Winner of the 2015 Top 25 Digitalist Thought Leaders of India award from SAP, Dr. Mathur is an author, coach, and supply chain professional with 23 years of experience and is based in Hyderabad. He is also actively involved in academic activities and is an internal trainer at DRL for negotiation skills and SCM. In 2014, he co-authored the book “Quality Assurance in Pharmaceuticals & Operations Management and Industrial Safety” for Dr. B. R. Ambedkar University, Hyderabad. He is also a member of the Departmental Visiting Committee (DVC) for the Department of Biotechnology, Motilal Nehru National Institute of Technology (MNNIT), Allahabad. His professional recognitions include a citation from the World Bank and International Finance Corporation for his contribution to their publication “Doing Business in 2006” and the Logistics-Week Young Achiever in Supply Chain Award for 2012.

How Better Clinical Trial Recruitment Can Improve Healthcare

Dr. Harald Sourij

Patients want to be certain that they receive the best treatment available, and clinicians want to ensure they’re delivering optimal care. Unfortunately, in many cases, providers can’t be confident they are delivering care that will result in the best outcome because their treatment may not be current or informed by proven medical findings. Time is of the essence when it comes to care—and clinicians often lack sufficient time to research treatment best practices while treating patients.

Clinical research is an important component of patient care. Treatment should be based on the outcomes from clinical trials. However, evidence that is based on clinical trials is not always available to help providers recommend one medication or course of treatment over another.

The number of clinical trials is increasing significantly, but trials cannot be successful without enough participants to gather evidence. Recruiting participants can be challenging, and it can be difficult to match the right participants with studies. Approximately one out of three clinical trials fails to meet recruitment targets, so the sample size becomes too small to draw scientifically justified conclusions. The remaining two-thirds of trials recruit participants slowly. The effort becomes costly and time-consuming.

Recruitment failure in clinical trials is a major concern in the healthcare industry. It may seem unethical to ask trial participants for time and engagement and potentially expose them to some risk. However, if researchers are not successful in recruiting participants, they may never find answers to some of the most pressing medical research questions.

Big data eases recruitment pain points

New digital tools are starting to help researchers do a better job attracting, matching, and including appropriate clinical trial participants. Technology is also helping to facilitate and improve the patient recruitment process. One key is leveraging the Big Data that already exists in healthcare organizations.

Unfortunately, patient data is often unstructured and lives in disparate systems, so it’s difficult for researchers to identify potential participants. For instance, study nurses have traditionally helped identify subjects who fulfill research study criteria, but they have been held back by the need to sift through files of paper-based patient records. Technology now enables researchers to extract information from electronic medical records to quickly identify potential study participants.

Using data strategically can not only improve recruitment rates, but also ensure that participants are a good fit with a particular study. Clinic physicians don’t always ask patients if they are interested in participating in clinical research because they lack the time or don’t have sufficient knowledge of specific trials. Now the records of prospective participants can be flagged, enabling clinicians to discuss the study with them during routine medical care.
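As a simplified, hypothetical illustration of this kind of screening, the sketch below matches structured patient records against a trial’s inclusion criteria. The field names, criteria, and thresholds are assumptions for illustration only, not CBmed’s actual data model.

```python
# Hypothetical screening of structured EMR records against trial criteria.
# Field names and thresholds are illustrative assumptions only.

patients = [
    {"id": "P-001", "age": 54, "diagnoses": {"type 2 diabetes"}, "hba1c": 8.1},
    {"id": "P-002", "age": 71, "diagnoses": {"hypertension"}, "hba1c": 5.6},
    {"id": "P-003", "age": 63, "diagnoses": {"type 2 diabetes"}, "hba1c": 7.4},
]

trial_criteria = {
    "min_age": 40,
    "max_age": 70,
    "required_diagnosis": "type 2 diabetes",
    "min_hba1c": 7.0,  # illustrative inclusion threshold (%)
}

def is_eligible(patient, c):
    """Return True if a patient record satisfies every inclusion criterion."""
    return (
        c["min_age"] <= patient["age"] <= c["max_age"]
        and c["required_diagnosis"] in patient["diagnoses"]
        and patient["hba1c"] >= c["min_hba1c"]
    )

flagged = [p["id"] for p in patients if is_eligible(p, trial_criteria)]
print("Flag for discussion during routine care:", flagged)  # ['P-001', 'P-003']
```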

Patient recruitment analytics

Of course, the ultimate goal of clinical research is improving patient care and outcomes, and improving the clinical trial recruitment process helps do just that and more. Process optimization through automation of time-consuming patient screening improves collaboration, saves time, and facilitates the research process for all end users.

Automated patient recruitment benefits hospitals and healthcare systems and improves patient outcomes by increasing physicians’ awareness of clinical trials. It also benefits life science companies and contract research organizations (CROs) by optimizing trial design and protocol based on eligible patients and reducing research and development cycle time and costs.

Clinical trials set stage for better patient outcomes

The Center for Biomarker Research in Medicine (CBmed) is working on an innovative software application to help researchers find and screen eligible patients for clinical trials. While the application is still under development, it aims to address common recruitment challenges.

CBmed and the University Hospital Graz are looking into a trial data model that can store all relevant information, create a trial manually or import details from clinicaltrials.gov to reduce manual intervention, automatically match patient data from electronic medical records, and apply criteria tolerances to improve eligible patient screening results. More information on the CBmed project, Innovative Use of Information for Clinical Care and Biomarker Research (IICCAB), can be found here.

I began working in clinical research because I wanted to find answers to the questions patients ask every day about their own care. Technological innovation is enabling faster, better clinical trials by improving the participant recruitment process, and it will ultimately lead to evidence-based, life-changing, and life-saving treatments for patients.

For more on how technology can help improve patient outcomes, see Patient Engagement: Key To High-Value Care.

About Dr. Harald Sourij

Harald Sourij is the Deputy Director of the Division of Endocrinology and Diabetology and the Head of the Diabetes Outpatient Clinic at the Medical University of Graz, Austria. He also leads the Area Metabolism and Inflammation at the Center for Biomarker Research in Medicine (CBmed) in Graz, Austria. His research activities focus mainly on diabetes and its cardiovascular complications. Harald has published over 100 peer-reviewed manuscripts and book chapters in highly ranked journals including The Lancet, European Heart Journal, and Diabetes Care. He is a member of the European Diabetes Association and of the Diabetes & Cardiovascular Disease Study Group of the EASD and is currently the Treasurer of the Austrian Diabetes Association. He was awarded the Langerhans Award of the Austrian Diabetes Association in 2013 and the Joseph Skoda Award of the Austrian Society for Internal Medicine in 2015. He served as Associate Editor for the scientific journal Trials (2013–2015).

Running Future Cities on Blockchain

Dan Wellers , Raimund Gross and Ulrich Scholl

Building on the Blockchain Framework

Some experts say these seemingly far-future speculations about the possibilities of combining technologies using blockchain are actually both inevitable and imminent:


  • Democratizing design and manufacturing by enabling individuals and small businesses to buy, sell, share, and digitally remix products affordably while protecting intellectual property rights.
  • Decentralizing warehousing and logistics by combining autonomous vehicles, 3D printers, and smart contracts to optimize delivery of products and materials, and even to create them on site as needed.
  • Distributing commerce by mixing virtual reality, 3D scanning and printing, self-driving vehicles, and artificial intelligence into immersive, personalized, on-demand shopping experiences that still protect buyers’ personal and proprietary data.

The City of the Future

Imagine that every agency, building, office, residence, and piece of infrastructure has an entry on a blockchain used as a city’s digital ledger. This “digital twin” could transform the delivery of city services.

For example:

  • Property owners could easily monetize assets by renting rooms, selling solar power back to the grid, and more.
  • Utilities could use customer data and AIs to make energy-saving recommendations, and smart contracts to automatically adjust power usage for greater efficiency.
  • Embedded sensors could sense problems (like a water main break) and alert an AI to send a technician with the right parts, tools, and training.
  • Autonomous vehicles could route themselves to open parking spaces or charging stations, and pay for services safely and automatically.
  • Cities could improve traffic monitoring and routing, saving commuters’ time and fuel while increasing productivity.

Every interaction would be transparent and verifiable, providing more data to analyze for future improvements.


Welcome to the Next Industrial Revolution

When exponential technologies intersect and combine, transformation happens on a massive scale. It’s time to start thinking through outcomes in a disciplined, proactive way to prepare for a future we’re only just beginning to imagine.

Download the executive brief Running Future Cities on Blockchain.


Read the full article Pulling Cities Into The Future With Blockchain.

About Dan Wellers

Dan Wellers is founder and leader of Digital Futures at SAP, a strategic insights and thought leadership discipline that explores how digital technologies drive exponential change in business and society.

About Raimund Gross

Raimund Gross is a solution architect and futurist at SAP Innovation Center Network, where he evaluates emerging technologies and trends to address the challenges of businesses arising from digitization. He is currently evaluating the impact of blockchain for SAP and our enterprise customers.

About Ulrich Scholl

Ulrich Scholl is Vice President of Industry Cloud and Custom Development at SAP. In this role, Ulrich discovers and implements best practices to help further the understanding and adoption of the SAP portfolio of industry cloud innovations.

Are AI And Machine Learning Killing Analytics As We Know It?

Joerg Koesters

According to IDC, artificial intelligence (AI) is expected to become pervasive across customer journeys, supply networks, merchandizing, and marketing and commerce because it provides better insights to optimize retail execution. For example, in the next two years:

  • 40% of digital transformation initiatives will be supported by cognitive computing and AI capabilities to provide critical, on-time insights for new operating and monetization models.
  • 30% of major retailers will adopt a retail omnichannel commerce platform that integrates a data analytics layer that centrally orchestrates omnichannel capabilities.

One thing is clear: new analytic technologies are expected to radically change analytics – and retail – as we know them.

AI and machine learning defined in the context of retail

AI is defined broadly as the ability of computers to mimic human thinking and logic. Machine learning is a subset of AI that focuses on how computers can learn from data without being explicitly programmed, through the use of algorithms that adapt to change; in other words, they can “learn” continuously in response to new data. We’re seeing these breakthroughs now because of massive improvements in hardware (for example, GPUs and multicore processing) that can handle Big Data volumes and run the deep learning algorithms needed to analyze and learn from the data.
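To make the distinction between programmed rules and learned behavior concrete, here is a toy sketch (with invented numbers) in which a demand estimate is not hard-coded but updates itself as each new observation arrives.

```python
# Toy illustration of "learning from data" rather than hard-coding a rule:
# a demand estimate that updates itself as each new day's sales arrive.
# All numbers are invented for illustration.

class DemandEstimator:
    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.estimate = None

    def update(self, observed_sales):
        """Move the estimate toward each new observation (online learning)."""
        if self.estimate is None:
            self.estimate = float(observed_sales)
        else:
            error = observed_sales - self.estimate
            self.estimate += self.learning_rate * error
        return self.estimate

estimator = DemandEstimator()
for day, units_sold in enumerate([120, 135, 128, 150, 160], start=1):
    forecast = estimator.update(units_sold)
    print(f"day {day}: sold {units_sold}, next-day estimate {forecast:.1f}")
```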

Ivano Ortis, vice president at IDC, recently shared with me how he believes, “Artificial intelligence will take analytics to the next level and will be the foundation for retail innovation, as reported by one out of every two retailers globally. AI enables scale, automation, and unprecedented precision and will drive customer experience innovation when applied to both hyper micro customer segmentation and contextual interaction.”

Given the capabilities of AI and machine learning, it’s easy to see how they can be powerful tools for retailers. Now computers can read and listen to data, understand and learn from it, and instantly and accurately recommend the next best action without having to be explicitly programmed. This is a boon for retailers seeking to accurately predict demand, anticipate customer behavior, and optimize and personalize customer experiences.

For example, it can be used to automate:

  • Personalized product recommendations based on data about each customer’s unique interests and buying propensity (see the sketch after this list)
  • The selection of additional upsell and cross-sell options that drive greater customer value
  • Chat bots that can drive intelligent and meaningful engagement with customers
  • Recommendations on additional services and offerings based on past and current buying data and customer data
  • Planogram analyses, which support in-store merchandizing by telling people what’s missing, comparing sales to shelf space, and accelerating shelf replenishment by automating reorders
  • Pricing engines used to make tailored, situational pricing decisions
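As a toy illustration of the first item in the list, the sketch below recommends products that are frequently bought together. The baskets are invented, and production recommenders use far richer behavioral data and models, but the underlying idea is the same.

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence recommender: suggest products frequently bought together.
# The baskets are invented for illustration only.
baskets = [
    {"pasta", "tomato sauce", "parmesan"},
    {"pasta", "tomato sauce", "olive oil"},
    {"bread", "butter"},
    {"pasta", "parmesan", "olive oil"},
]

co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(product, top_n=2):
    """Rank other products by how often they appear with the given one."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == product})
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("pasta"))  # products most often bought together with pasta
```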

Particularly in the United States, retailers are already able to collect large volumes of transaction-based and behavioral data from their customers. And as data volumes grow and processing power improves, machine learning becomes increasingly applicable in a wider range of retail areas to further optimize business processes and drive more impactful personalized and contextual consumer experiences and products.

The transformation of retail has already begun

The impacts of AI and machine learning are already being felt. For example:

  • Retailers are predicting demand with machine learning in combination with IoT technologies to optimize store businesses and relieve workforces
  • Advertisements are being personalized based on in-store camera detections, taking over semi-manual clienteling tasks from store employees
  • Retailers can monitor wait times in checkout lines to understand store traffic and merchandising effectiveness at the individual store level – and then tailor assortments and store layouts to maximize basket size, satisfaction, and sell-through
  • Systems can now recognize and predict customer behavior and improve employee productivity by turning scheduled tasks into on-demand activities
  • Camera systems can detect the “fresh” status of perishable products before onsite employees can
  • Brick-and-mortar stores are automating operational tasks, such as setting shelf pricing, determining product assortments and mixes, and optimizing trade promotions
  • In-store apps can tell how long a customer has been in a certain aisle and deliver targeted offers and recommendations (via his or her mobile device) based on data about personal consumption histories and preferences

A recent McKinsey study provided examples that quantify the potential value of these technologies in transforming how retailers operate and compete. For example:

  • U.S. retailer supply chain operations that have adopted data and analytics have seen up to a 19% increase in operating margin over the last five years. Using data and analytics to improve merchandising, including pricing, assortment, and placement optimization, is leading to an additional 16% in operating margin improvement.
  • Personalizing advertising is one of the strongest use cases for machine learning today. Additional retail use cases with high potential include optimizing pricing, routing, and scheduling based on real-time data in travel and logistics, as well as optimizing merchandising strategies.

Exploiting the full value of data

Thin margins (especially in the grocery sector) and pressure from industry-leading early adopters such as Amazon and Walmart have created strong incentives to put customer data to work to improve everything from cross-selling additional products to reducing costs throughout the entire value chain. But McKinsey has assessed that the U.S. retail sector has realized only 30-40% of the potential margin improvements and productivity growth its analysts envisioned in 2011 – and a large share of the value of this growth has gone to consumers through lower prices. So thus far, only a fraction of the potential value from AI and machine learning has been realized.

According to Forbes, U.S. retailers have the potential to see a 60%+ increase in net margin and 0.5–1.0% annual productivity growth. But there are major barriers to realizing this value, including lack of analytical talent and siloed data within companies.

This is where machine learning and analytics kick in. AI and machine learning can help scale the repetitive analytics tasks required to drive leverage of the available data. When deployed on a companywide, real-time analytics platform, they can become the single source of truth that all enterprise functions rely on to make better decisions.

How will this change analytics?

So how will AI and machine learning change retail analytics? We expect that AI and machine learning will not kill analytics as we know it, but rather give it a new and even more impactful role in driving the future of retail. For example, we anticipate that:

  • Retailers will include machine learning algorithms as an additional factor in analyzing and monitoring business outcomes
  • They will use AI and machine learning to sharpen analytic algorithms, detect more early warning signals, anticipate trends, and have accurate answers before competitors do
  • Analytics will happen in real time and act as the glue between all areas of the business
  • Analytics will increasingly focus on analyzing manufacturing machine behavior, not just business and consumer behavior

Ivano Ortis at IDC authored a recent report, “Why Retail Analytics are a Foundation for Retail Profits,” in which he provides further insights on this topic. He notes how retail leaders will use new kinds of analytics to drive greater profitability, further differentiate the customer experience, and compete more effectively: “In conclusion, commerce and technology will converge, enabling retailers to achieve short-term ROI objectives while discovering untapped demand. But implementing analytics will require coordination across key management roles and business processes up and down each retail organization. Early adopters are realizing demonstrably significant value from their initiatives – double-digit improvements in margins, same-store and e-commerce revenue, inventory positions and sell-through, and core marketing metrics. A huge opportunity awaits.”

So how do you see your retail business adopting advanced analytics like AI and machine learning? I encourage you to read IDC’s report in detail, as it provides valuable insights to help you invest in – and apply – new kinds of analytics that will be essential to profitable growth.

For more information, download IDC’s “Why Retail Analytics are a Foundation for Retail Profits.”

About Joerg Koesters

Joerg Koesters is the Head of Retail Marketing and Communication at SAP. He is a technology marketing executive with 20 years of experience in marketing, sales, and consulting. Joerg has deep knowledge of retail and consumer products, having worked both in the industry and in the technology sector.