With big data raining down on enterprises, it’s DBAs (database administrators) who are stuck trying to weather the storm. Their work is particularly arduous because vast increases in data volumes mean significantly more disk capacity to manage. Disk optimization becomes practically a full-time job.
With 20th-century databases, disk optimization was a necessary art for a DBA. The best DBAs gave it a lot of thought. A lot. It even became an essential part of a DBA's education, because without those optimization skills, application SLAs would seldom be met.
Devoting so much time to disk optimization is costly. Carl Olofson, research vice president at IDC, considers lowering operational costs, particularly the time DBAs must devote to disk management issues, a side benefit of deploying ultra-high-performance in-memory databases. He writes, "No more unload/reload operations, index rebuilds, disk redistributions, and backups means that expensive DBA time can be freed up for more high value work."
By migrating performance-hungry and SLA-critical analytics applications to an in-memory database, IT can meet business groups' relentless demand for improved response times without DBAs regularly burning the midnight oil just to squeeze out another millisecond or two of performance. These person-years of labor savings are not a one-time event, but long-term operational savings for IT.
Olofson cites response-time improvements of 10 to 200 times when applications are moved to in-memory databases, pushing SLA metrics to their highest levels. It's a happy irony, then, that in-memory databases also help keep operational costs at their lowest levels.