ROBIN FOR BIG DATA PIPELINE

Big Data Pipeline Challenges

Data, the new currency of the digital economy, is a valuable but often untapped resource that organizations must learn to extract and use. Today, organizations struggle to cobble together different open-source software into an effective data pipeline that can withstand both the volume and the velocity of data ingestion and analysis.

To stay competitive in today’s global economy, organizations need to harness data from multiple sources, extract information, and make real-time decisions. To do so, they are looking to build a data pipeline: an automated process that runs at regular intervals to ingest, cleanse, transform, and aggregate data into a format suitable for downstream consumption.

Operationally, you can think of a big data pipeline as a cluster of applications that together implement an extract-load-transform-view data process. The applications used within a big data pipeline differ from case to case and almost always present multiple challenges.

Challenges

Big Data Pipeline challenges resolved by Robin Application Virtualization Platform

Robin Solution

Creating an elastic, agile, & high-performance Big Data pipeline rapidly

  • Define application details in a single file
  • Deploy within minutes
  • Get Application-to-Spindle QoS
  • Manage Application lifecycle
  • Eliminate the noisy-neighbor problem

Robin’s Application Virtualization Platform provides a complete out-of-the-box solution for hosting all the components of your big data pipeline on your existing infrastructure, whether proprietary hardware, commodity servers, or cloud instances. The solution can be deployed on bare metal or on virtual machines, allowing organizations to rapidly deploy multiple instances of their data-driven applications without creating additional copies of data.
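To make the “define application details in a single file” idea concrete, here is a hypothetical sketch of what such an application definition could look like. All field names and values below are illustrative assumptions, not Robin’s actual manifest schema:

```yaml
# Illustrative single-file application definition -- field names are
# hypothetical, NOT Robin's actual schema.
name: analytics-pipeline
roles:
  - name: kafka-broker
    replicas: 3
    resources:
      cpu: 4
      memory: 16GiB
      storage:
        size: 500GiB
        iops_min: 2000        # per-role QoS floor, application-to-spindle
  - name: spark-worker
    replicas: 8
    resources:
      cpu: 8
      memory: 32GiB
lifecycle:
  snapshot_schedule: hourly   # enables point-in-time restore
```

A single declarative file like this is what would let an entire pipeline be provisioned, cloned, or scaled as one unit rather than component by component.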

Simple pipeline application & data management

Building a Big Data pipeline on the Robin Application Virtualization Platform

Robin benefits for Big Data Pipeline

Agile Provisioning

  • Simplify cluster deployment using application-aware fabric controller—provision an entire operational data pipeline within minutes
  • Deploy container-based “virtual clusters” running across commodity servers
  • Automate tasks – create, schedule and operate virtual application clusters
  • Scale-up or scale-out instantaneously to meet application performance demands

Cloudera Cluster deployment on Robin
Watch demo.

Robin’s application-aware fabric controller simplifies deployment and lifecycle management using container-based “virtual clusters.” Each cluster node is deployed within a container, and a collection of containers running across servers form the “virtual cluster.” This allows Robin to automate all tasks pertaining to the creation, scheduling, and operation of these virtual application clusters, to the extent that an entire data pipeline can be provisioned or cloned with a single click and minimal upfront planning or configuration.

Elastic Scaling and Cloning

    • Build elastic infrastructure that provides all resources to each application as needed
    • Create single-click clone of entire data pipeline
    • Create thin clones on the fly without affecting data in production

Scaling and cloning of an ELK cluster
Watch demo.

Workloads need to scale up or out as demand for resources spikes, and scale back down when demand returns to normal. Robin enables you to scale up with a single click by allocating more resources to an application, and to scale out easily when you need to add nodes. Robin also helps you clone parts of your data when you need to hand data to developers and analysts for analytics, upgrade testing, configuration changes, or integration testing.
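Thin clones of the kind described above typically rely on copy-on-write: the clone shares the parent’s data and materializes private copies only for the pieces it changes. A minimal sketch of that idea, using a layered mapping as a stand-in for copy-on-write storage (this illustrates the concept, not Robin’s storage implementation):

```python
from collections import ChainMap

# Production dataset: the "parent" data that must not be disturbed.
production = {"block1": "orders-2017", "block2": "clickstream"}

# A thin clone is a private write layer stacked on top of the parent.
# Reads fall through to production; writes land only in the clone's layer.
clone_writes = {}
clone = ChainMap(clone_writes, production)

clone["block1"] = "orders-2017-test"  # copy-on-write: stored in clone_writes

print(clone["block1"])       # "orders-2017-test" -- clone sees its own write
print(production["block1"])  # "orders-2017"      -- production is untouched
print(clone["block2"])       # "clickstream"      -- unchanged data is shared
```

Because only modified blocks consume new space, a clone of an entire pipeline can be handed to developers or analysts almost instantly and without affecting production data.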

Cluster Consolidation and QoS

  • Eliminate cluster sprawl with data pipeline components on the same shared hardware
  • Enable multi-tenancy with performance isolation and dynamic performance controls
  • Leverage dynamic QoS controls for every resource – CPU, memory, network and storage

Share data across 2 Cloudera clusters
Watch demo.

Robin eliminates cluster sprawl by deploying a data pipeline on shared hardware. This also results in better hardware utilization. The key to successful multi-tenancy is the ability to provide performance isolation and dynamic performance controls. The Robin application-aware fabric controller equips each virtual cluster with dynamic QoS controls for every resource that it depends on – CPU, memory, network and storage. This creates a truly elastic infrastructure that delivers CPU, memory, network and storage resources – both capacity and performance – to an application exactly at the instant it is needed.

Application Time Travel

  • Take unlimited cluster snapshots
  • Restore or refresh a cluster to any point in time using snapshots

Application time travel
Watch demo.

Robin provides out-of-the-box support for application time travel. Cluster-level distributed snapshots taken at predefined intervals make it possible to restore the entire pipeline, or parts of it, if anything goes wrong. Robin recommends that admins take a snapshot before making any major change, whether upgrading a software version or changing a configuration. If anything does go wrong, the entire cluster can be restored to the last known-good snapshot in a matter of minutes.
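The snapshot-before-change workflow described above can be sketched as a simple guard pattern: snapshot first, attempt the change, and roll back to the last known-good state if the change fails. The function names below (take_snapshot, restore_snapshot) are illustrative stand-ins, not Robin’s actual API:

```python
# Hypothetical sketch of "snapshot before any major change" -- the snapshot
# store and function names are illustrative, NOT Robin's actual interface.

snapshots = []  # in a real platform the fabric controller would store these

def take_snapshot(cluster_state):
    """Record a point-in-time copy of the cluster state; return its id."""
    snapshots.append(dict(cluster_state))
    return len(snapshots) - 1

def restore_snapshot(snapshot_id):
    """Return the cluster state exactly as it was at snapshot time."""
    return dict(snapshots[snapshot_id])

def change_with_rollback(cluster_state, apply_change):
    """Snapshot first; on failure, time-travel back to the snapshot."""
    snap_id = take_snapshot(cluster_state)
    try:
        return apply_change(dict(cluster_state))
    except Exception:
        return restore_snapshot(snap_id)

# A failed upgrade leaves the cluster at its last known-good state:
state = {"version": "5.12", "nodes": 4}

def bad_upgrade(s):
    s["version"] = "6.0"
    raise RuntimeError("upgrade failed mid-way")

result = change_with_rollback(state, bad_upgrade)
print(result["version"])  # "5.12" -- restored from the snapshot
```

The same pattern applies whether the change is a software upgrade or a configuration edit: the snapshot makes the rollback path cheap and predictable.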