In the last few years, a surge in demand for distributed applications has led to a proliferation of data-driven apps in the data center, and IT departments have increasingly struggled to keep pace with the changing technology.
Although the two share hardly any architectural similarities, IT admins have applied the same deployment rules to these new distributed apps that they previously used for legacy applications.

Underutilized Resources

Sizing servers large enough to handle peak workloads works well enough at peak. But what about the rest of the time, when very little is going on? Those resources just sit idle. What a waste of system resources!

In fact, this strategy has resulted in massive underutilization of system resources. In my past life as an Oracle DBA, I saw servers running at less than 10-20% CPU utilization on average. Systems with high utilization were rare and, in fact, were treated as the exception rather than the rule.

Proliferation of Environments

At the same time, as IT tries to cope with demands from other departments, we have also seen a huge proliferation of development and test environments. For every production environment, we now see 10 or more development or test environments. Each developer expects a dedicated stack so that they can build and test in isolation. While this may be common practice, it does nothing to reduce the number of environments IT admins need to spin up.

Data Explosion

The result, as one would expect, is a fragmented pool of underutilized servers and a data duplication problem stemming from the growing number of development and test environments. There is also a continuous need to clone production or refresh existing environments. A production environment of a few terabytes is often copied wholesale to create a big dev or test environment that will probably never see even 1% of its data change.
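
To put rough numbers on it, here is a minimal back-of-the-envelope sketch. The 5 TB production size, ten copies, and 1% change rate are illustrative assumptions drawn loosely from the figures above, not measurements:

```python
# Illustrative assumptions only, not measured figures:
prod_size_tb = 5.0    # size of the production environment, in TB
num_copies = 10       # dev/test environments cloned from it
change_rate = 0.01    # fraction of data that actually changes per copy

full_copy_storage = num_copies * prod_size_tb           # storage consumed by full copies
changed_data = num_copies * prod_size_tb * change_rate  # data that actually differs

print(f"Storage consumed by full copies: {full_copy_storage:.1f} TB")  # 50.0 TB
print(f"Data that actually differs:      {changed_data:.2f} TB")       # 0.50 TB
```

In other words, under these assumptions roughly 50 TB of storage is spent carrying about half a terabyte of genuinely unique data.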

All this work of creating, building, and maintaining new environments results in a maintenance nightmare for IT staff.

The big question: what is the solution to this problem?

You probably guessed it right: consolidation that not only ensures isolation but also guarantees performance predictability.

Today's data centers need an effective consolidation strategy in which many workloads share a single pool of infrastructure and its overhead. With effective resource management, we can greatly reduce competition between workloads and, in turn, improve throughput and ensure good response times.

One other point that is critical to successful consolidation is an intelligent, workload-aware placement strategy.
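
To make "workload-aware placement" a little more concrete, here is a minimal sketch of a capacity-aware, best-fit placement heuristic. The host names, workload shapes, and scoring function below are hypothetical illustrations, not Robin's actual placement algorithm:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    cpu_free: float   # cores still unreserved
    mem_free: float   # GB still unreserved

@dataclass
class Workload:
    name: str
    cpu: float        # cores requested
    mem: float        # GB requested

def place(workload: Workload, hosts: list) -> Optional[Host]:
    """Best-fit placement: pick the host that can satisfy the request
    while leaving the least slack, so capacity stays consolidated."""
    candidates = [h for h in hosts
                  if h.cpu_free >= workload.cpu and h.mem_free >= workload.mem]
    if not candidates:
        return None  # no host can guarantee the requested resources
    best = min(candidates,
               key=lambda h: (h.cpu_free - workload.cpu) + (h.mem_free - workload.mem))
    best.cpu_free -= workload.cpu   # reserve so later workloads cannot oversubscribe
    best.mem_free -= workload.mem
    return best

hosts = [Host("node-1", cpu_free=16, mem_free=64),
         Host("node-2", cpu_free=8, mem_free=32)]
for w in [Workload("oracle-dev", cpu=4, mem=16), Workload("analytics-test", cpu=8, mem=32)]:
    target = place(w, hosts)
    print(w.name, "->", target.name if target else "no capacity")
```

The key point is that capacity is reserved at placement time, so co-located workloads cannot oversubscribe a host and undermine each other's response times.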

In this blog series, we will examine how both system and application containers running on the Robin Cloud Platform can help IT address this data center conundrum. Watch out for the next post!

Author: Deba Chatterjee, Director of Products
