Falling RAM Prices Drive In-Memory Database Surge

Irfan Khan

Companies confronting their big data opportunities face the ever-present enterprise IT problem: performance. Once IT has gathered the relevant information and stored it on hard disk drives (HDDs) ready for analytics, delivering responsive queries to business users can be problematic. Mechanical HDDs are simply too sluggish for most big data environments. New approaches are necessary.

That’s why you’re seeing so much attention being paid to in-memory databases. With them, you can potentially load your entire database into a server’s RAM for maximum performance, avoiding the seek-time penalty of HDDs.

As noted by Hasso Plattner and Alexander Zeier in their study In-Memory Data Management: An Inflection Point for Enterprise Applications, in-memory databases have been around since the 1980s. The problem has always been the cost and the limited amount of RAM database servers could use. For example, 2 megabytes of RAM in 1985 would have cost around $600.

But prices have been tumbling. According to SK Hynix Inc., a top-tier supplier of memory hardware, prices have been falling for the last 20 years at an annual average of 33%, with the expectation that they will continue downward at 20% to 30% or more per year. Today you can pick up 8 gigabytes of RAM for $40 after a rebate.
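
As a rough sanity check, the article’s own figures imply about that rate of decline. Here is a minimal Python sketch using the numbers above; the 2013 endpoint is an assumption, since the exact publication date isn’t given:

```python
# A rough sanity check of the article's own figures: $600 for 2 MB in 1985
# versus $40 for 8 GB today. The 2013 endpoint is an assumption.

cost_per_mb_1985 = 600 / 2             # ~$300 per MB in 1985
cost_per_mb_now = 40 / (8 * 1024)      # ~$0.005 per MB ($40 for 8 GB)

years = 2013 - 1985                    # assumed span of 28 years

# Compound annual rate of change: (new / old) ** (1 / years) - 1
annual_change = (cost_per_mb_now / cost_per_mb_1985) ** (1 / years) - 1

print(f"1985: ${cost_per_mb_1985:.2f}/MB; now: ${cost_per_mb_now:.4f}/MB")
print(f"Implied average annual decline: {-annual_change:.0%}")  # ~33%
```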

Servers have also cracked the terabyte ceiling for memory. At the beginning of this century, Windows 2000 Advanced Server could manage only 8 gigabytes of memory, hardly enough for most large enterprise databases even then. Today, Intel widely ships server boards that support 1 terabyte of RAM, and Fujitsu has validated the SAP HANA in-memory database with 8 terabytes of RAM.

With memory prices dropping relentlessly and server RAM capacity expanding steadily, performance is no longer a major hurdle for enterprises seeking insight from massive data stores.

Image Credit: ChrisSinjo/Flickr



The Great Transformation – The Era of Demand Supply IT Begins

Brad Smith

Guest post by Sina Moatamed; originally published on the SAP Community Network.

The Enterprise IT organization is about to transform into a services organization.

Not by choice, but because the business is forcing its hand.  Business units are engaging cloud services with or without IT’s involvement.  Decision-making power has shifted from IT to the business, and IT organizations will have to evolve their operations to adapt to this new world.

With so much discussion about the cloud, there has been very little in the way of a framework for orienting your IT organization to become the services organization that business units will prefer over dealing directly with cloud providers.

How to get from a traditional IT model to a services organization model deserves attention from every enterprise, regardless of industry.  This will be a discussion of how cloud computing is impacting organizational behavior and future architecture, and what operational infrastructure is required for the successful transformation of Enterprise IT.

The Elephant in the Room

To understand what has changed, we have to follow the money.

Cloud computing services have created a consumption economy.  For a long time you would hear that cloud computing is simply a choice between OPEX and CAPEX.  I always found this argument dismissive of the true value proposition cloud computing provides.

While the OPEX-versus-CAPEX framing may not capture the essence of cloud computing’s value, it has certainly dictated new realities about who is in charge of IT spend.  IT typically holds the capital budgets, but large OPEX budgets live within the business units.  This is the piece of the picture that many have missed, and its repercussions are far-reaching.

In a traditional Plan, Build, Run IT organization, the assumption is that 100% of the demand for services will go to the IT department.  Only on that assumption can the IT department begin to plan.

The “Plan” phase constitutes understanding business objectives and working through enterprise architecture and risk management to formulate IT strategies, then organizing IT budgets (capital and expense) to fund execution of the roadmap.

The “Build” phase is where project management lives and where the PMO implements the projects that support the roadmap.  Once the projects are completed, the new IT infrastructure is managed by operations.

The “Run” phase is operations, where ITIL frameworks have been adopted.  This is an over-generalization of Plan, Build, Run, but I want you to see the waterfall of activity required to deliver services in a traditional IT organization.

To function properly, this model requires 100% of IT demand to flow through the IT department.  Without that complete view of demand, IT governance, capacity planning, and budgets are very difficult to predict or measure.

Once cloud services became available, capital budgets in the Plan phase were no longer sought out.  Instead, cloud providers sold their services directly to the business units, and the business units paid for them out of their own OPEX budgets.

This singular event, which has now occurred within almost every enterprise, marks the end of traditional IT and the Plan, Build, Run operating model.  It is a very significant change, and one that needs to be recognized by CIOs everywhere.

When a business unit uses cloud services, it is charged based on usage.  Many business units can now fund their own projects and know exactly what ongoing operations cost.

When working with the IT organization, they are offered no comparable consumption model.  The lack of agility and visibility over their operations has pushed them into the arms of the cloud providers.

In turn, the cloud providers have supplied an OPEX model with open arms.  They are no longer interested in convincing the CIO or anyone else in the IT department to select their services; they are selling line-of-business solutions directly to the business.

The result is the decentralization of IT operations.  This sudden decentralization of IT service delivery will force every single enterprise to establish a new model of IT operations.

Cloud Core Architecture

If you follow the logic, you quickly realize that the narrative of which services are used in the enterprise is no longer in IT’s hands.  This creates a new architectural challenge.

IT is no longer in the business of saying “no” when the business wants to bring on a new service.  The IT organization is no longer being asked, and as a result it is completely unaware of the degree to which this is happening.  So how should the IT department respond?


IT will have to create an architecture that allows the business to select its line-of-business solutions, and yet still:

  • integrate those services with one another
  • manage master data
  • provide identity management and provisioning capabilities
  • allow data to be searched across any SaaS service
  • provide a development platform without vendor lock-in
  • derive analytics across all transactional and master data produced
  • guide users by issuing tasks when work is required to fulfill processes across the various systems

In total, I am referring to this set of services as the “Cloud Core.”  It represents the architecture necessary to support this new world order: the birth of the loosely coupled business suite.
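
To make that service surface concrete, here is a minimal, hypothetical sketch of what a Cloud Core contract might look like. The class and method names are my own illustration of the capabilities listed above, not a design from the original post:

```python
# A hypothetical sketch of the "Cloud Core" service surface described above.
# All names are illustrative; the post defines capabilities, not an API.
from abc import ABC, abstractmethod

class CloudCore(ABC):
    """Shared services beneath the business-selected SaaS solutions."""

    @abstractmethod
    def integrate(self, source_system: str, target_system: str) -> None:
        """Move transactions between line-of-business SaaS services."""

    @abstractmethod
    def sync_master_data(self, entity: str) -> None:
        """Keep master data (customers, products, ...) consistent everywhere."""

    @abstractmethod
    def provision_identity(self, user_id: str, services: list[str]) -> None:
        """Identity management and provisioning across SaaS subscriptions."""

    @abstractmethod
    def search(self, query: str) -> list[dict]:
        """Search for data across any of the SaaS services in use."""

    @abstractmethod
    def deploy_app(self, package: bytes) -> str:
        """A development platform without vendor lock-in."""

    @abstractmethod
    def run_analytics(self, question: str) -> dict:
        """Analytics across all transactional and master data produced."""

    @abstractmethod
    def issue_task(self, user_id: str, step: str) -> None:
        """Guide users with tasks that fulfill cross-system processes."""
```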

Demand Supply IT


To remedy the decentralization of IT, you have to understand what decentralization has done. What are the benefits and the systemic problems created?

Business units are finding solutions and executing with agility.  In the process, each business unit within an enterprise may be working with the same cloud provider while engaging its own contracts and instances of the service.

This has created disparities in the cost of the same service within one organization, along with silos of data.  It has also produced different SLAs with the same vendor and no contract standards.  In most cases business units are simply engaging in what I would call “credit card contracts.”

The Demand Supply IT model for operations is not new; McKinsey & Company wrote about it in 2006.  Whether they knew it or not, they articulated an operating model that, in today’s cloud computing era, represents the go-forward strategy.

There are, in essence, two IT organizations: one that interfaces with the business and manages its demand for IT services, and another that provides a supply of IT services sourced internally and/or externally.

The demand organization – generally described as the client-facing IT department, oriented by function or site – requires a platform where it can select services from a catalog, self-provision, and gain visibility into the use of those services.

Gartner has appropriately called this the “Cloud Brokerage Platform.”  In a sense it’s the CRM of IT operations.  As a result, the Demand IT organization is the new home for IT governance; it will select services from the Supply IT catalog (internal and external services).

This interface will need to provide enough administrative capability to initiate and track the fulfillment of service delivery and its ongoing management.
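
The original post includes a diagram of the Cloud Brokerage Platform’s attributes; it isn’t reproduced here, but a minimal code sketch of those attributes – catalog selection, self-provisioning, and usage visibility – might look like the following. All names and fields are hypothetical:

```python
# A hypothetical sketch of a Cloud Brokerage Platform's core operations,
# standing in for the diagram in the original post.
from dataclasses import dataclass, field

@dataclass
class ServiceOffering:
    name: str             # e.g. "CRM SaaS" or "internal VM pool"
    source: str           # "internal" or the external provider's name
    unit_price: float     # price per unit of consumption

@dataclass
class BrokeragePlatform:
    catalog: list[ServiceOffering] = field(default_factory=list)
    orders: list[dict] = field(default_factory=list)

    def browse(self) -> list[str]:
        """Let a business unit see what Supply IT offers."""
        return [f"{o.name} ({o.source})" for o in self.catalog]

    def self_provision(self, business_unit: str, service: str) -> dict:
        """Initiate fulfillment and return a trackable order."""
        order = {"unit": business_unit, "service": service,
                 "status": "provisioning"}
        self.orders.append(order)
        return order

    def usage_report(self, business_unit: str) -> list[dict]:
        """Visibility into the services a unit has ordered and consumes."""
        return [o for o in self.orders if o["unit"] == business_unit]
```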

There are many emerging Cloud Brokerage Platforms – most are focused on private infrastructure clouds or IaaS service management.  Eventually, they will move up the stack.

So far the thinking is that they will integrate directly with orchestration platforms, but this will not address the need for financial chargeback or the planning intelligence that will be required for long-term operational management.

If the elephant in the room was the fact that business units are using their OPEX dollars to buy their own cloud services, then IT needs to change its entire financial model to a chargeback structure.  I used to think that chargeback IT organizations would be the ones hit hardest by the advent of cloud services.  I was wrong.

Traditional IT organizations that rely on large capital budgets as a tool for governance are actually the furthest behind and will face the most significant transformational challenge.

In either case, cloud is creating the need for enterprise IT organizations to charge back for their services.  In doing so, we can compare apples to apples – with a common set of data classifications and KPIs – to determine whether internally deployed or externally hosted services are a better fit for an enterprise.
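
As a toy illustration of what usage-based chargeback makes comparable, consider pricing the same consumption against an internal service and an external one. All rates and usage figures below are invented for illustration:

```python
# A toy, hypothetical chargeback comparison: the same workload priced
# against an internally deployed service and an externally hosted one.
# All rates and usage figures are invented for illustration.

RATES = {
    "internal_vm": 0.09,   # $ per VM-hour, fully loaded internal cost
    "public_iaas": 0.12,   # $ per VM-hour from an external provider
}

def monthly_chargeback(usage_by_unit: dict[str, float], rate: float) -> dict[str, float]:
    """Charge each business unit for exactly the hours it consumed."""
    return {unit: hours * rate for unit, hours in usage_by_unit.items()}

usage = {"Sales": 1200.0, "Marketing": 300.0, "Finance": 650.0}  # VM-hours

for service, rate in RATES.items():
    bill = monthly_chargeback(usage, rate)
    print(service, bill, f"total=${sum(bill.values()):,.2f}")
```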

As I thought about this, it became clear to me that inventing a new set of workflows to manage business unit demand against the resources provided by the Supply IT organization did not make much sense.  Nonetheless, I began mapping those processes, and then it became very clear: it is not appropriate to create a new tool.  IT needs to deploy an ERP system to manage its own operations.

The Brokerage Platform is the CRM, which belongs in the hands of the Demand IT organization; the Supply organization, meanwhile, needs comprehensive supply chain capabilities and, of course, financials to manage chargeback.  As I began to apply manufacturing models of operations to the delivery of cloud services, it all started to fit.

As you look at the processes for delivering IT services, whether internally sourced or externally hosted, you will find parallels with manufacturing patterns.  Make-to-stock, make-to-order, and just-in-time supply chains all apply.  Instead of working with MES systems for production execution, we will be working with orchestration platforms.  One possible mapping is sketched below.
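
Here is one possible reading of that analogy, expressed as a simple mapping. The right-hand interpretations are my own extrapolation, not the author’s:

```python
# One possible reading of the manufacturing analogy above. The IT-side
# interpretations are the editor's illustration, not the author's claims.
MANUFACTURING_TO_IT = {
    "make-to-stock": "pre-provisioned capacity pools (e.g. standing VM images)",
    "make-to-order": "services provisioned on request from the catalog",
    "just-in-time": "capacity acquired on demand, e.g. autoscaling",
    "MES (production execution)": "orchestration platform running provisioning workflows",
}

for pattern, analogue in MANUFACTURING_TO_IT.items():
    print(f"{pattern:>28} -> {analogue}")
```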

Manufacturing ERP vendors need to recognize that there is a new vertical in town: the “operations of IT” in the cloud computing era.  This could prove to be one of the most complex supply chains, but the rewards are plentiful because every enterprise has one.  I hope and expect there will be a tremendous amount of innovation in this space.

Final Thoughts

If the IT organization wants to remain relevant, it needs to provide additional value for the business when it delivers cloud solutions.  In turn, the IT organization needs to be the trusted and preferred partner for engaging cloud or any other IT service.

Financial operations for IT will need to migrate to a model of charging back to the business.  This will allow true cost comparisons and evaluations of whether it is worth taking a service to the public cloud or keeping it internal, will provide a common financial model for all services, and will shift more of the budget to the business.

However, since the IT department will be charging back for all services, that shift will be less of an issue.  The trick is that if the business is now going to make cloud service decisions, it also needs to bear the costs of those decisions.  That discipline provides all the tools necessary to move governance operations to the Demand organization.

The Plan, Build, Run operating model no longer works once cloud services are included.  The IT organization will therefore need to transform organizationally and architecturally to properly provide services back to the business.  Mark Settle, the CIO of BMC Software and a very recent discovery for me, describes the change this way:

  • Plan will be replaced by brokering of cloud services.
  • Build will be replaced by integration of cloud services into the company’s business systems portfolio.
  • Run will be replaced by orchestration of services to manage continuity of service delivery.

If you embrace this model and walk through the transformational change that the IT organization will go through, especially in relation to the business, you will find a new paradigm awaits all of us in IT.  I will discuss this further in a future post.



Marketers Continue To Struggle With Big Data

Steve Olenski

It would appear marketers the world over are collectively losing sleep over one thing: who will be voted off Dancing With the Stars next. I kid. Just wanted to see if you were paying attention. And no, for the record, I do not watch DWTS – not that there’s anything wrong with that.

No, what I am referring to is that two-word phrase that is surely on the agenda of many a marketing meeting from sea to shining sea, a two-word phrase that marketers concern themselves with all the live-long day, or at least a significant part of it: Big Data.

And depending on whom you listen to and/or believe, marketers are either handling their newfound wealth of prodigious piles of information quite well and using insights gleaned from the data to their benefit or, quite simply, they are not.

A recent article on Harvard Business Review ran under the headline Marketers Flunk the Big Data Test. The article references a CEB study of nearly 800 marketers at Fortune 1000 companies which revealed that “the vast majority of marketers still rely too much on intuition — while the few who do use data aggressively for the most part do it badly.”

The study also revealed:

  • On average, only 11% of the decisions marketers make about consumers are based on data.
  • Over 50% of the marketers surveyed said they rely on past experience and/or their intuition to make decisions.
  • Asked what they rely on to make decisions, marketers listed data dead last – trailing conversations with managers and colleagues, expert advice, and one-off customer interactions.

But is using your gut necessarily a bad thing?  I can tell you, as one who has worked on the “marketing” side of the aisle, that trusting my gut instincts has served me well. Christa Carone, the CMO of Xerox, believes there should be a happy medium and that there’s nothing wrong with trusting your gut.

“I wouldn’t want to give up the data that helps us make fact-based decisions quickly. But I fear that marketers’ access to and obsession with measuring everything takes away from the business of real marketing,” she told me recently. “It’s impossible to measure ‘squishier’ meaningful intangibles, such as human emotion, personal connection and the occasional ‘ahhhh’ moment. Those things often come with a marketer’s intuition, and they deliver big-time. To me, this means trust your gut even as you’re trying to embrace Big Data.”

Could not agree with her more. There is no way to tell what someone is thinking or feeling; that does not show up in Google Analytics, at least not yet. A good marketer will know their audience and be able not only to use the data available to them but also to integrate their personal experience and knowledge into their decision-making.

Tsunami of Data

That’s the phrase Acxiom CMO Tim Suther used when I spoke to him earlier this year for a piece I wrote entitled How To Rein In The Riches Of Big Data. And he was of course spot on, for marketers – whether they choose to believe it or are even aware of it – are faced with a seemingly endless amount of data, and as Suther puts it, “the best companies and brands will be those who do a better job of controlling it.”

And speaking of awareness – or in this case, the lack of it – a recent eMarketer article reported on a survey of retail executives’ familiarity with Big Data. The writer of the article chose to look at the positive side, writing that “only 17% were unaware of the concept of Big Data.”

I look at it from the complete opposite perspective and believe that the 17% who were unaware of Big Data are 17% too many. Now, I will admit I do not know who these surveyed retail executives were, but how in the world can they not know about Big Data at this point in time?

Have they been out of the office for a few days, months, years? C’mon boys and girls, this is the big time; this is Big Data. I understand there are concerns about how to handle the data, what to do with it, and so on, but the fact remains that marketers – from ALL industries – have a tremendous opportunity to increase their bottom line significantly.

A recent Oracle survey of North American retail executives showed that nine in 10 thought a failure to capitalize on the benefits offered by data translated to lost revenues.

Exactly my point.

As for how to use the data, Carone believes in not over-complicating things but rather taking a “simpler approach to aggregating the data and mining practical insights from the data.” She says marketers need what she describes as “more elegant interfaces” that can bring all the data together to deliver those much sought after “aha” moments that “really influence marketing strategies and spend.”




An Ode To The IT Guy

Lindsey Nelson

 “IT guy.” We all say it. And – consciously or not – our inflection changes just a tic.

It hasn’t got the disdain of “Hello, Newman.” But there’s a resignation, even an annoyance to our tone.

The assumption is that if we have to seek out, or are being sought out by, the “IT guy,” our productivity is going to be disrupted for an indeterminate amount of time. Our valiant personal effort to contribute to the company’s bottom line or mission?

Thwarted. By the IT guy.

Turns out that attitude might go all the way to the top. According to a 2011 issue of CIO Magazine, the CIO is the least appreciated of the CXO roles.

There’s a common perception that the IT organization is a necessary evil – part of the cost of doing business. While it’s certainly true that IT represents a large operational expense, that’s hardly the whole story.

And there’s another problem. IT is frequently involved in revenue-generating activities – but those activities are almost always owned by other divisions, like sales or marketing. That makes IT the silent partner who gets no credit for a better bottom line.

So what does the chief IT guy have to do to get some respect – for himself and his team? By taking steps to change the way users view IT’s work – and taking credit where credit’s due – CIOs can demonstrate that a solid, well-run IT group is a significant value add in any company.

In a recent white paper from Kaseya, Landmark Ventures, a venture consulting firm, outlines four tactics for making the CIO and his team relevant to a company’s strategic goals:

Make the proactive visible – 60 percent of IT staff time is spent on tasks that no one sees: think patch management and network optimization. Make sure people understand your contributions by implementing operational metrics, and use a reporting mechanism that increases visibility into your successes. Show the company how those wins translate into lower costs and increased productivity.

Make the reactive invisible – A cost-efficient IT systems management strategy relies on automation. When you can eliminate manual administrative tasks and implement mechanisms to fix something when it breaks, you free up your team to focus on strategic projects.

Give users a stake in making IT work – Help your staff make users’ lives easier. Create initiatives that streamline processes and collaborate with other business units to overcome their challenges. And leverage an employee self-service portal that allows end users to do routine maintenance on their own systems.

Create your own opportunities – When you create efficiencies that save money, reallocate it to projects you can wholly own. Invest in a reserve that allows you to take on strategic projects that wouldn’t be funded otherwise. This empowers IT to take a real leadership role in the growth of a lean, streamlined enterprise.

Thanks to some reliable network access and a laptop running smoothly, I’ve got some time on my hands. I think I’ll go buy my IT guy some flowers…



How To Innovate Like a (Google) Pro

Heather McIlvaine

Alberto Savoia explains how pretotyping, a method for deciding whether or not to pursue a new idea, can save companies millions by bringing the right products to market, like Twitter, and avoiding disasters, like Google Wave.

Alberto Savoia, director of engineering at Google

Innovation is all the rage these days, with companies and contributors racing to rock the market with the ‘next big thing.’  Unfortunately, most new products and innovations fail – not because they are poorly executed but because the idea wasn’t the right idea.

“The biggest challenge in innovation isn’t coming up with new ideas, it’s identifying which of those ideas will be successful in the market,” says Alberto Savoia, director of engineering and innovation agitator at Google.

Savoia is one of the minds behind pretotyping, an approach to innovation and new product development that claims to dramatically increase the odds of market success by helping companies vet their novel concepts. “Pretotyping helps organizations test their innovative ideas to make sure that they have the right ‘it’ before they invest in building ‘it’ right.”

We sat down with Savoia in Silicon Valley earlier this month for a lively Q&A. Savoia explains why a seemingly great idea goes bust while an outright crazy one makes bank, and argues that pretotyping, often seen as just a start-up thing, is relevant to big business.

What was the inspiration behind pretotyping?

Alberto Savoia: I’ve had the good fortune of being an early employee of two companies that went on to become industry giants – Google and Sun Microsystems – but in my heart of hearts, I’m an entrepreneur. After my first stint at Google from 2001-2002, I had two successful startups (Velogic, Inc. and Agitar Inc.). Then I had another start-up. We felt we were doing everything right. We raised a sufficient amount of money, had great people. Everything should have gone perfectly, but when we launched the product, not enough people wanted it; it wasn’t successful. I asked myself, what went wrong here? I came back to Google (in 2008) with the intention of studying failure: Why is it that eight out of ten start-ups fail? Why do 80 percent of new product introductions flop? Why is there so little innovation that is successful? In investigating those questions, I came up with the concept of pretotyping.

What is the crux of the pretotyping approach?

Make sure you’re building the right ‘it’ before you build ‘it’ right. This doesn’t just apply to technology; ‘it’ could be anything. If you’re an author, before you write a book you want to make sure there’s a market for it.

Aren’t companies doing this already? Researching whether there’s a market for your widget seems like a no-brainer in business. 

They think they know the market, but really they’re just guessing. All entrepreneurs fall in love with their great idea, so they invest a lot of money, launch it, and nothing happens. Google Wave, which was supposed to replace email, is a famous example. It was done by two of the smartest people at Google. It had the pedigree; it had the people. Email was ready for an improvement, and yet – after a significant investment – the launch fell flat on its face. Why? Because, instead of actually collecting data on whether or not people would use the product, they asked for opinions about the product and based the development on that.

What’s the difference?

I’ll give you an example. Remember Webvan? They presented the idea and asked people if they’d like to order groceries online. Everybody said, “Yes! Of Course! The Internet is cool; it’s a convenient service; and I like to shop!” So, on the strength of that positive reaction, they went for it and launched the company, all out. In just a few years, Webvan raised nearly a billion dollars, built huge refrigerated warehouses full of stock, and acquired a fleet of vehicles. Well, it turns out that even though nearly 100 percent of the people surveyed loved the idea of Webvan, only about two percent of them actually used it. And the company went out of business.

Apply pretotyping to the Webvan example, and instead of spending over $100 million to launch the company based on surveys, they would have started small and watched what happened. They would have set up the website, advertised it in one market – say, San Francisco and its suburbs – and seen how it went. They would not have bought warehouses and trucks; instead they’d have worked out some kind of deal with a supermarket (or something similar) to fulfill the orders.

Had they done this, Webvan would have seen that the number of people who said they would use the service was dramatically different from the number who actually did. They would have seen that the majority of users were in the city. It would have been apparent that they needed to scale back the concept, tailor it to city-dwellers, and forgo the suburbs. And they would have known all this without having spent enormous amounts of money. The Webvan story might have ended differently.

