Data is mainstream. In today’s world, there is an explosion of data sources driven by advances in collection: sensors on the Internet of Things (IoT), new apps, and social media. There is also a growing realization that data can be a competitive advantage, and with it a perceived need to democratize it.
Thought leaders of digital transformation and disruption already understand the value of proper business process modeling. If we accept that everyone wants to be data-driven, we should all recognize that data flows are nothing but business workflows without the prose.
That’s why it’s important not just to have dedicated development operations (DevOps) teams in place, as Thomas Di Giacomo wrote in his latest Digitalist blog, but also to hire enthusiastic data operations (DataOps) experts for effective collaboration between product management, data engineering, data science, and business operations. Such talent is not easy to find, and the wording of these job descriptions (“operations,” “process engineering”) isn’t exactly hot. But it needs to become sexier, and I see a good chance of that happening quite soon. Just remember the rise of the data scientist.
Besides DataOps talent, solutions to tackle the challenges of DataOps will spring up like mushrooms within the next couple of years: Data pipelining, data orchestration, data governance, and policy management solutions will be necessary to work with data wherever it is stored instead of moving it around. Moving data is not just pricey, but also a waste of time. Being faster is an obvious advantage: Imagine you can already see the data insights and trigger action while your competitors are still figuring out when and where to move the data from one silo to another. Established companies worry: How can we ensure that sensitive data remains safe if it’s available to everyone? Many businesses must comply with strict data governance regulations, and these concerns are legitimate; DataOps has to address them.
Gartner defines DataOps as a hub for collecting and distributing data, with a mandate to provide controlled access to systems of record for customer and marketing performance data, while protecting privacy, usage restrictions, and data integrity. DataOps is a way of managing data wherever it’s located: getting it cleaned, versioned, transformed, enriched, and delivered. I can see one big trend driving the need for DataOps:
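To make those stages concrete, here is a minimal sketch of a clean–version–transform–enrich–deliver flow in plain Python. All function and field names are hypothetical illustrations; a real DataOps pipeline would run these stages in an orchestration tool rather than as chained functions.

```python
# Hypothetical sketch of the DataOps stages: clean, version, transform,
# enrich, deliver. Names and fields are illustrative, not a real product API.
import hashlib
import json

def clean(records):
    # Drop records missing a required "id" and strip stray whitespace.
    return [
        {k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
        for r in records
        if r.get("id") is not None
    ]

def version(records):
    # Tag the batch with a content hash so consumers can pin this exact version.
    digest = hashlib.sha256(json.dumps(records, sort_keys=True).encode()).hexdigest()
    return {"version": digest[:12], "records": records}

def transform(batch):
    # Normalize a field so downstream joins are consistent.
    for r in batch["records"]:
        r["country"] = r.get("country", "").lower()
    return batch

def enrich(batch, lookup):
    # Join in reference data in place, without moving the source data.
    for r in batch["records"]:
        r["region"] = lookup.get(r["country"], "unknown")
    return batch

def deliver(batch):
    # Hand the versioned, enriched batch to the consumer (a return value here).
    return batch

raw = [{"id": 1, "country": " DE "}, {"country": "FR"}, {"id": 2, "country": "US"}]
lookup = {"de": "EMEA", "us": "AMER"}
result = deliver(enrich(transform(version(clean(raw))), lookup))
```

The point of the version stage is the cultural one made above: once every batch carries an identifier, suppliers and consumers can talk about the same data without copying it into yet another silo.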
Agility. Every business uses this buzzword to express its state-of-the-art flexibility. Carried to its logical conclusion, agile data processes are by definition never carved in stone; the processes, the technologies, and even the frameworks should be questioned whenever possible and, if necessary, redesigned or adjusted to establish an environment focused on efficiency, quality, interdisciplinary collaboration, and continuous improvement. We simply can’t afford to exclude data from an agile decision-making process if we want to achieve the velocity of innovation demanded by the digital economy.
We need a new culture and a new approach, one that does for data what DevOps did for infrastructure. The goal should be to improve results by bringing the data supplier together with the data consumer while getting rid of a static data lifecycle. Let the DataOps journey begin.
Want to learn about SAP’s approach to making your DataOps management a lot easier and what the future of Big Data warehousing looks like? Then register for the upcoming webinar on January 24 at 11:00 a.m. ET / 17:00 CET. You’ll hear from Marc Hartz, product manager of SAP Data Hub. See you there!