Digital Data – It’s A Risky Business

Brian Lee-Archer

“With great power comes great responsibility.” This popular saying, or derivatives thereof, has been attributed to many famous people, including Winston Churchill and Theodore Roosevelt. There is evidence tracing it back as far as the French Revolution, and it was used in 1962 to establish the folklore surrounding the comic book character Spider-Man.

Irrespective of its origins, this saying is relevant to today’s digital world. The ever-increasing volume of citizen-related data coming into the hands of government is a source of great power, and with that there is great responsibility in ensuring it is used to create public value. Another way of looking at this statement is, “when exercising power, know and manage the risks.” In the digital world, this could be transposed to, “when using citizen-related data (for any purpose), know and manage the risks.”

Growing amounts of digital data related to citizens, while providing many new and exciting opportunities for creating public good, carry an increasing risk profile. The public appetite for risk is influenced by many factors, not the least of which is the level of trust people have in the institutions of government charged with managing citizen-related data. From addressing the very real threats of terrorism to delivering better health and education outcomes, the public value to be derived through leveraging citizen data at an individual or cohort level is contingent upon the level of trust people have in the institutions charged with managing the risks.

Trust would be quickly destroyed if a government agency were to take advantage of the data at its disposal and use it in a way the public could consider unfair or coercive. For example, an electronic health platform offers many advantages for consumers of the health system, in particular for those who need to engage with multiple actors in the system over an extended time. What might be considered unfair, however, is if the very same data collected to enable better health outcomes were used to risk-rate individuals, thereby affecting health insurance premiums or the level of government-funded rebate for health services. In this scenario, the solidarity principle underpinning a universal health insurance scheme would be at risk, and public trust in the stewardship of the health system would surely be undermined.

While I am not aware of a government agency that would contemplate this approach, the possibility exists due to technology and digital data. For the public to come onboard with an electronic health platform, they need to trust that the government and the administering agencies of the health system can keep the possibility of inappropriate data use within acceptable levels of tolerance (the risk appetite). One way to build that trust is a transparent approach to how personal data is collected, used, and protected.

There are other examples in public policy where this principle applies, such as national security, where government monitoring of communications data to detect terrorism threats has to be balanced against the risk of that data being misused to target individuals for criminal activity unrelated to terrorism. One of the many lessons from the high-profile incident involving Edward Snowden was the lack of transparency with the public as to the amount and type of data the national security agencies were collecting.

Ethical issues will emerge as public policy initiatives to leverage citizen-related data for public good are challenged on privacy and data protection grounds. The paper focusing on moral hazards involved in automated decision making, published by the SAP Institute for Digital Government in October 2015, reminds us that digital data is a risky business, and that the expertise and judgement of human experts need to be supplemented by machines rather than replaced. The existence of risk alone is not a reason to forgo using citizen data to create public value. As we progress down the digital government path, proposals to collect and use personal data will need to be rigorously evaluated from a risk perspective, including clearance by ethics boards in a similar manner to certain types of health research.

While the potential benefits of leveraging citizen-related data are in many cases obvious, the risks and unintended consequences may not be. One mistake can destroy public trust for an extended period, and the damage may extend well beyond the business domain where the mistake occurs. A digital government will win over the public in terms of trust if it works in an open manner and demonstrates the benefits of using and sharing data within acceptable levels of risk. Demands for a 100% guarantee against failure are unrealistic – managing data, like most things, is not risk free, and it would be disingenuous to claim otherwise. The appropriate guarantees to give are a commitment to demonstrate a thorough understanding of the risks involved, an assessment of how acceptable they are relative to the potential benefits, and a detailed plan for how they will be managed.

To find out more about the SAP Institute for Digital Government visit, follow us on Twitter @sapsidg and email us at

Brian Lee-Archer

About Brian Lee-Archer

Brian Lee-Archer is director of the SAP Institute for Digital Government Global (SIDG). Launched in 2015, SIDG is a global think tank that aims to create value for government by leveraging digital capability to meet the needs of citizens and consumers of government services. In collaboration with government agencies, universities and partner organizations, SIDG facilitates innovation through digital technology for deeper policy insight and improved service delivery.