Why does the CIO always have to be the one to say “no”? Technology leaders should promote empowerment, not be forced by technology to impose limitations on new and innovative projects. Sowing a million seeds through data analysis and actionable insight for better business results should be at the top of the CIO’s list of priorities. But, as it turns out, CIOs can’t find the time and budget to innovate in their gardens because they’re too busy tending forests of choking weeds the company or previous CIOs planted years ago.
As reported by ZDNet late last year, a CEB survey of 200 organizations worldwide revealed that 57% of a typical IT budget is devoted to maintenance and upkeep. When so little is left in the CIO’s budget for new projects and innovation, the business isn’t being served. Rather, the suffocating silos of legacy applications (weeds) are being tended.
Seeing through the weeds
As an industry, we have constructed endless farms of silos because, until recently, there was no other way to meet user demands. Consider a basic business case: Sales teams want real-time insights into manufacturing data and supplier timetables so they can make firm guarantees to customers about availability and fulfillment. That’s a terrific business advantage, so the CIO wants to make that happen. Unfortunately, that means integrating data from a minimum of three different data sources and, likely, a minimum of three different applications.
Integration is only the first step. That data then has to be routinely batched, moved, profiled, cleansed, governed, and re-analyzed, creating more burdens, overhead, and complexity. Multiply those steps across the tens of thousands of unique database tables large enterprises commonly have, and before long all you’ll see is the tangle of weeds built around the data.
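The repeated batch steps above can be sketched as a toy pipeline. Every function and field name here is hypothetical, invented for illustration — the point is that each new silo repeats the same extract-cleanse-load overhead:

```python
# Toy sketch of the batch pipeline each new silo demands.
# All names here are illustrative, not a real API.

def extract(source):
    # Pull a nightly batch from one of the source applications.
    return [{"order_id": 1, "qty": "5 "}, {"order_id": 2, "qty": None}]

def cleanse(rows):
    # Profile and scrub: strip whitespace, drop incomplete records.
    return [
        {**r, "qty": int(r["qty"].strip())}
        for r in rows
        if r.get("qty") is not None
    ]

def load(rows, datamart):
    # Re-optimize the cleansed batch into yet another downstream store.
    datamart.extend(rows)
    return datamart

# Each of the three source systems repeats the same overhead.
datamart = []
for source in ("manufacturing", "suppliers", "sales"):
    load(cleanse(extract(source)), datamart)

print(len(datamart))  # → 3
```

Multiply this loop across thousands of tables and the maintenance burden in that 57% figure starts to look inevitable.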
Moreover, the problem isn’t just where data is stored, but how it’s stored. The majority of application development time isn’t spent on new features or functionality; it’s spent trying to squeeze lag time out of system bottlenecks. And no bottleneck is more problematic than conventional disk storage. Disks simply haven’t kept pace with the huge leaps in computing power and user demand. Every time IT tries to meet the demand for faster output or more real-time access to data, it must build new middleware that pre-processes and stores a chunk of data just a little closer to the end user. Every one of those one-off optimizations represents a new, innovation-sapping silo.
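That middleware pattern — pre-computing a chunk of data and holding it closer to the consumer — often amounts to nothing more than a cache layer. A minimal sketch (the report function and its query are made up for illustration):

```python
from functools import lru_cache

# Hypothetical: each "one-off optimization" boils down to a cache that
# pre-stages disk-bound results nearer the end user.
@lru_cache(maxsize=128)
def sales_report(region):
    # Imagine an expensive query against spinning disk here.
    return f"report for {region}"

sales_report("EMEA")   # slow path: hits the disk-bound system
sales_report("EMEA")   # fast path: served from the new middleware cache
print(sales_report.cache_info().hits)  # → 1
```

Each cache like this is cheap to add and expensive to keep: it is one more copy of the data to invalidate, govern, and maintain — a new silo.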
How in-memory clears the way
To solve the problem, we must think differently. Instead of optimizing data placement and organization for the slowest part of the system – the spinning disk – just to get information into memory for faster processing (my first class in computers was called data processing!), we need to keep all the data in memory, immediately ready for any and all application needs. In-memory lets you build virtual data models very quickly, so data can be acted upon immediately by any business user. In-memory’s advantages are huge and immediate, providing at least 100 times the application performance of a conventional disk architecture. Not only that, but having all data in memory means you don’t need to extract, transform, load, profile, or cleanse the data to re-optimize it for datamarts, operational data stores, or data warehouses. The data exists in memory, all the time, waiting to be processed by any application.
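The contrast can be illustrated with Python’s built-in sqlite3 module and its “:memory:” mode — a toy stand-in for a true in-memory platform, not the real thing, but it shows operational data being queried the instant it exists, with no extract or staging step:

```python
import sqlite3

# ':memory:' keeps the entire database in RAM -- used here only as a
# small illustration of the in-memory idea.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EMEA", 5), (2, "APAC", 3), (3, "EMEA", 2)],
)

# No ETL, no datamart: the live rows are aggregated in place.
total = conn.execute(
    "SELECT SUM(qty) FROM orders WHERE region = 'EMEA'"
).fetchone()[0]
print(total)  # → 7
```

The table names and figures are invented; the design point is that the query runs against the same in-memory data every application shares, rather than against a re-optimized copy.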
With the help of in-memory computing, enterprises have already started innovating their way out of the messy silo grind. Unilever consolidated 250 systems down to just four and cut its financial close times by 30% to 60%; McKesson reduced pharmaceutical order fill times by over 35%; and ConAgra Foods shaved 92.9% off its ledger data transfer time.
By clearing away the overhead and inefficiencies of traditional disks, in-memory restores the CIO’s rightful place as a business innovator and makes it okay to say “yes” again.
Watch this video to learn more about how Unilever used in-memory technology to transform its business.