Companies confronting their big data opportunities face the ever-present enterprise IT problem: performance. Once IT has gathered the relevant information and stored it on hard disk drives (HDD) ready for analytics, delivering responsive queries to business users can be problematic. Mechanical HDDs are simply too sluggish for most big data environments. New approaches are necessary.
That’s why you’re seeing so much attention being paid to in-memory databases. With them, you can potentially load your entire database into a server’s RAM, achieving maximum performance by avoiding the seek-time penalty of HDDs.
As noted by Hasso Plattner and Alexander Zeier in their study In-Memory Data Management: An Inflection Point for Enterprise Applications, in-memory databases have been around since the 1980s. The problem has always been the cost and the limited amount of RAM database servers could use. For example, 2 megabytes of RAM in 1985 would have cost around $600.
But prices have been tumbling. According to SK Hynix Inc., a top-tier supplier of memory hardware, prices have been falling for the last 20 years at an annual average of 33%, with an expectation that they will continue downward between 20% and 30% or more per year. Today you can pick up 8 gigabytes of RAM for $40 after a rebate.
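The figures quoted above can be sanity-checked with some rough arithmetic. The sketch below (a back-of-the-envelope illustration, not data from the cited sources) converts the 1985 price of $600 for 2 megabytes and today's $40 for 8 gigabytes into cost per gigabyte, then asks how many years of 33% annual decline would account for the gap:

```python
import math

MB_PER_GB = 1024

# Cost per gigabyte, derived from the prices quoted in the text.
price_1985_per_gb = 600 / 2 * MB_PER_GB   # $307,200/GB in 1985
price_today_per_gb = 40 / 8               # $5/GB today

ratio = price_1985_per_gb / price_today_per_gb  # roughly 61,000x cheaper

# Years of 33% annual decline needed to produce that drop.
implied_years = math.log(ratio) / math.log(1 / (1 - 0.33))

print(round(implied_years, 1))  # about 27.5 years
```

A drop of that magnitude over roughly the 27 years since 1985 lines up well with SK Hynix's 33% average annual decline.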
Also, servers have cracked the terabyte ceiling for memory. At the beginning of this century, Windows 2000 Advanced Server could manage only 8 gigabytes of memory, hardly enough for most large enterprise databases even back then. Today, Intel widely ships server boards supporting 1 terabyte of memory, and Fujitsu has validated the SAP HANA in-memory database with 8 terabytes of RAM.
With memory prices dropping relentlessly and server RAM capacity expanding steadily, performance is no longer a major hurdle for enterprises seeking insight from massive data stores.
Image Credit: ChrisSinjo/Flickr