As part of my role leading Product and Marketing at Robin Systems, I often get the privilege of speaking with some of the brightest industry analysts and technology leaders covering our ecosystem.

In fact, at the recent Gartner summit, I was asked to clarify our recent PR announcing the first Robin Cloud Platform. After all, virtualization is not new; it is decades old, going back to the IBM System/360 days. It wasn’t the first time I had gotten this question. Usually, a few minutes into the briefing, I see the ‘aha’ moment when it is understood: we have passed the virtualization phase and are on to something new.

Virtualization Gets Us Only Part of the Way

“But if you’re in the real enterprise, you work with databases, then what can you do? Absolutely nothing. You’re screwed. [laughs] I’ll be honest: this stuff doesn’t work in all environments,” he says. “That’s the thing most people don’t talk about. You just can’t put every workload in every operation…There are things to think about when you adopt one of these platforms.” – Kelsey Hightower, Google (Kubernetes)

Kelsey made this statement while discussing the state of Kubernetes, and it applies to all container orchestrators used to virtualize databases. Keep reading and you’ll see that he was right only insofar as Kubernetes, Mesos, and traditional ‘old’ virtualization cannot enter the Application Virtualization age.

Application Virtualization is defined as software technology that encapsulates computer programs from the underlying operating system on which they are executed.

To date, Application Virtualization has primarily been applied in the context of VDI or terminal services delivering applications to remote sites. The virtualization has been crafted around the delivery of the app; it has not been extended to the entire application and its components as actually defined, deployed, and run by end users and supported by IT.

Current Virtualization Scenario – Limited to Application Delivery

In recent years, we have seen the term virtualization applied broadly, more often than not in relation to specific components of a typical software stack: from compute (server virtualization) to network functions (routers, switches) to storage to the entire software-defined datacenter.

I assume you will agree that none of these models really covers the application in its entirety: the infrastructure resources required to meet SLAs, the software stack components and their interdependencies, and how these affect DevOps through the complete application lifecycle.

As more and more virtualization technologies were introduced, it became clear that while these virtualized point solutions were a very good fit for their subject matter, they missed the higher-level logical encapsulation typically characterized by an application, or even the lower-level grouping of components into clusters.

Newer solutions such as Hyper-Converged Infrastructure have been introduced in an attempt to address the integration gap, and while these do consolidate compute and storage into single enclosures/appliances, their “grouping” remains at the basic pre-packaging integration level.

Pre-packaging helps reduce the initial scripting and manual labor associated with putting the bundle together, but this is also where the benefits of the point solution end.

Containerizing Applications – A Step Forward

When talking to technology leaders, I find they often confuse what ‘containerized applications’ means. Containers themselves are a great means to an end, taking the virtualization market by storm by replacing virtual machines. A containerized application is merely an indication that the application runs in a container (as opposed to a virtual machine). It is not an indication that the entire application stack, with its supporting components and resources, is being addressed as a unified logical entity, let alone integrated beyond basic scripting.

While containers and VMs are core building blocks of the software stack, they lack the application-centric view that drives interdependencies, affinities, and configuration alignment with infrastructure, let alone visibility into the entire application IO path required to guarantee IOPS. In a multi-tenant environment or shared-services framework, IOPS control is especially critical.

Localized, component-based solutions prove useful only in very rudimentary applications, usually a limited set of mostly stateless microservices and containers. Storage-only solutions are likewise insufficient, given their lack of application awareness and their inability to handle the enormous number of volumes imposed by modern container technology.

By design, both traditional SAN and NAS suffer from latency deficiencies. Recently, DAS (direct-attached storage, mostly SSD-based) has become very popular due to its significant performance benefits. However, as it turns out, even DAS is not sufficient if control sits at the virtual level (as with Ceph) rather than at the physical block level.

Providing an IOPS guarantee at the application level, or even the container level, requires knowing which distributed storage blocks comprise which volumes, which containers those volumes are mounted to, and which clustered application those containers represent; only then can overall IOPS be committed. This cannot be established at the SAN, NAS, or even DAS level, none of which has application awareness.
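
To make that chain concrete, here is a minimal sketch of the mapping an application-aware layer would have to maintain; the structure and every field name here are illustrative assumptions for the sake of the example, not Robin’s actual metadata format:

# Illustrative only: the block -> volume -> container -> application chain
# an application-aware layer must track before it can commit overall IOPS.
application: cassandra-prod          # hypothetical clustered application
iops_guarantee: 20000                # total IOPS committed to the application
containers:
  - name: cassandra-node-1
    volumes:
      - name: data-vol-1
        iops_share: 10000            # this volume's slice of the guarantee
        blocks: ["node3:ssd0:0-4095", "node3:ssd1:0-4095"]   # physical block ranges
  - name: cassandra-node-2
    volumes:
      - name: data-vol-2
        iops_share: 10000
        blocks: ["node4:ssd0:0-4095"]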

Hyper-convergence powered by Containers – Application Virtualization

We think that Application Virtualization means virtualizing all aspects of application provisioning, guaranteed performance, and lifecycle management. That means starting with virtualizing the IT resources required to run the application in a CAPEX-effective and OPEX-efficient way, but also guaranteeing bare-metal performance and IOPS without application users having to fiddle with physical infrastructure setup and tuning.

Application Virtualization in the True Sense

Application Virtualization is More than Docker Containers - Robin Cloud Platform delivers virtualization benefits to the application lifecycle & IT user

It also means virtualizing the complex, labor-intensive process of defining an entire application pipeline and all its components, and being able to deploy complex multi-node clusters in minutes. Tasks that used to take hours, days, even weeks shrink to under an hour, because the application user now speaks the language of launch, scale, snapshot, clone, and restore in a domain they understand, without having to be an IT expert in compute, storage, or networking, or a DevOps scripting guru.

In a prior life, I saw my team struggle to get a Cassandra cluster up and running, and then had to devote core team members to ongoing maintenance to keep it running. If you have suffered weeks, maybe months, of such delays, you will know that guaranteeing bare-metal performance is necessary but not sufficient for teams that want to innovate and power new products and growth in a repeatable, timely manner.

At Robin, I saw one of our sales team members, with no engineering background, spin up a Cassandra cluster with a couple of clicks and then secure IOPS with a few more clicks. Even crazier, he then deployed Oracle and Hortonworks on the same hardware. This is what I call simple…

While I have great appreciation for our sales folks, honestly, they do not really know anything about defining an application pipeline and all its components for deployment across multiple nodes. This is where the full power of the Robin Cloud Platform lies: smaller teams bringing applications online faster, with fewer resources.

We took Application Virtualization one step further and applied full application lifecycle management to it, so it is not limited to initial deployment (as orchestration tools are) but also extends to runtime scale in/out, entire-application snapshots (and time travel), application cloning, and IOPS guarantees that protect against noisy neighbors, DDoS attacks, and the like.
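
As a sketch of what declarative lifecycle management along these lines might look like, consider the hypothetical policy fragment below; the field names are illustrative assumptions, not Robin’s actual schema:

# Hypothetical lifecycle policy -- fields are illustrative, not Robin's schema
lifecycle:
  scale:
    min_nodes: 3
    max_nodes: 12            # bounds for runtime scale in/out
  snapshot:
    schedule: "0 2 * * *"    # nightly, entire-application snapshot
    retain: 7                # "time travel" to any of the last 7 days
  clone:
    from: latest-snapshot    # spin up a full copy for test/dev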

Last but not least – why Platform?

Data-centric applications, usually distributed or clustered, are the most complex to run; this is where Robin’s modern block storage capabilities come into play.

On Robin, any use case, from simple web-scale stateless applications to complex distributed application pipelines, can easily be defined in a plain-English YAML text file listing all components, dependencies, required performance and IOPS, pre/post-launch scripts, and supporting resources. Once defined, applications are repeatable and can be securely consumed out of the catalogue, either as-is or with user-defined customizations.
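
For illustration, such a manifest might look something like the following; the exact schema, field names, and scripts are assumptions made for the sake of the example, not Robin’s published format:

# Illustrative application manifest -- schema is assumed, not Robin's published format
name: analytics-pipeline
components:
  - name: kafka
    replicas: 3
    resources: { cpu: 4, memory: 16GB }
    storage:
      size: 500GB
      iops: 5000                         # per-node IOPS guarantee
  - name: cassandra
    replicas: 5
    depends_on: [kafka]                  # dependency ordering at deploy time
    resources: { cpu: 8, memory: 32GB }
    storage:
      size: 2TB
      iops: 10000
hooks:
  pre_launch: ./configure-topology.sh    # hypothetical pre-launch script
  post_launch: ./smoke-test.sh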

Using the Platform

IT can be assured that resources will be optimally allocated (CAPEX) for multi-application consumption, while business owners benefit from super-fast time to value bundled with guaranteed IOPS and continued agile application lifecycle management.

Putting it all together

When the focus is application centricity, or even just application awareness, we cannot focus only at the compute or storage level. If you have a really good solution at the application level, why spend extra cycles on the individual components? Containers, virtual machines, and other virtualized or software-oriented components alone will simply not suffice.

Running complex, distributed, clustered apps has never been simpler or easier… Let our sales folks show you how even they can now perform tasks across applications that used to take a whole team, or take a test drive and see for yourself.

I hope you now agree on the roots of our claim to being “first” in the context of Application and Platform, while we claim no credit from other application delivery models such as VDI or even pre-packaged virtual machines. If you are unsure, take a test drive and see for yourself how Application Virtualization breaks the prior paradigm.

Author: Razi Sharir, Vice President, Products & Marketing
