
Hyperconvergence and How We Got Here

March 30, 2018

In order to understand the IT trends of today, we must first discuss the past.  In decades past, the mainframe was the only game in town.  Conceptually, a mainframe was a single platform that addressed every element of delivering a compute environment: the Local Area Network was Bus and Tag cabling, the Wide Area Network was a Communications Controller, and Storage was directly attached (again via Bus and Tag cables).  These are approximations, but the concepts and components remain essentially the same as what we see today.  Platform management was a single interface, and the environment supported virtualization (multiple safe spaces for user environments).

The challenge was that mainframes were expensive and not scalable.  When you needed additional capacity, you bought a new, bigger mainframe.  They were incredibly robust, but their expense, and the inevitable march of technology, led to other solutions, particularly for small and medium-sized businesses.

Over time, we migrated to other platforms built around three distinct pillars: Compute, Storage and Networking.  Each pillar required distinct, specialized skills, and in larger IT shops with sufficient resources, each pillar is managed separately.  These pillars provided new ways of addressing scalability, reducing costs and solving other IT challenges.

Another key aspect of mainframes was their fault tolerance and robustness.  In today’s IT environment, this need is typically met by moving fault tolerance up into the application layer, providing an equally robust environment… although some mainframe shops might argue that point.

Fast forward to the present and the emergence of “Converged Infrastructure”.  Converged Infrastructure combines the three pillars of Compute, Storage and Networking.  A rack, or group of tightly coupled racks, is combined to create a single, complete compute environment.  The basic concept was to put key resources together to localize traffic for performance, and to consolidate and automate management into a single interface.

The next big innovation in the infrastructure space involves storage.  Storage has also evolved over time, moving from individual disks to disk arrays (RAID, etc.) and now to Software Defined Storage (SDS).  In disk arrays, fault tolerance is managed at the controller/disk drive level.  With SDS, the controller becomes a software layer and fault tolerance is handled at the storage/block level.  The software controller writes blocks down to a pool of disk drives (typically a JBOD, or “Just a Bunch of Disks”).  Writing blocks (or check bytes) to multiple drives provides fault tolerance and performance, and enables features like nearly instantaneous snapshots, among many others.
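To make the block-level fault tolerance concrete, here is a minimal sketch, assuming a simple mirroring policy in which the software controller writes each block to two distinct drives in the pool.  All class and variable names are hypothetical; real SDS products such as vSAN, S2D or Nutanix use far more sophisticated placement, checksumming and erasure coding.

```python
import random

class SDSControllerSketch:
    """Toy software controller that mirrors each block across drives in a JBOD pool.
    Purely illustrative; not how any particular vendor implements it."""

    def __init__(self, drive_names, replicas=2):
        # Each "drive" is just a dict of block_id -> data in this sketch.
        self.drives = {name: {} for name in drive_names}
        self.replicas = replicas

    def write_block(self, block_id, data):
        # Place each copy on a different drive, so losing any single
        # drive never destroys the only copy of a block.
        targets = random.sample(list(self.drives), self.replicas)
        for name in targets:
            self.drives[name][block_id] = data
        return targets

    def read_block(self, block_id):
        # Serve the read from the first surviving copy found.
        for blocks in self.drives.values():
            if block_id in blocks:
                return blocks[block_id]
        raise KeyError(f"block {block_id} lost on all replicas")

pool = SDSControllerSketch(["disk0", "disk1", "disk2", "disk3"])
print("block placed on:", pool.write_block("blk-42", b"hello"))
print("read back:", pool.read_block("blk-42"))
```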

One aspect of Software Defined Storage that makes it particularly appealing is the ability to layer the storage.  Since the software controller manages reads and writes across a pool of drives, it is possible to divide the pool into tiers with different capabilities.  In the simplest, most typical implementation, there is a “Cache Tier” of high performance flash drives (or even memory).  The cache tier sits on top of the “Capacity Tier” as an accelerator for all read and write operations.  The Capacity Tier, on the other hand, provides the raw storage capacity and is typically composed of inexpensive, high capacity disk drives.  These can also be flash devices in an “All-Flash” implementation for even higher performance.  The bottom line is that Software Defined Storage provides high performance, fault tolerant storage, typically at price points well below traditional large storage arrays.
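The tiering itself can also be sketched in a few lines.  The example below is a hypothetical write-through layout in which writes land in a small cache tier and are immediately destaged to the capacity tier, while reads are served from cache when possible; real implementations add eviction policies, write-back destaging and cache-device fault handling.

```python
class TieredPoolSketch:
    """Hypothetical two-tier pool: a small, fast cache tier in front of a
    large, inexpensive capacity tier.  Illustrative only."""

    def __init__(self, cache_slots=1024):
        self.cache_tier = {}       # fast flash or memory
        self.capacity_tier = {}    # high capacity, low cost drives
        self.cache_slots = cache_slots

    def write(self, block_id, data):
        # Write-through: the block is cached and persisted immediately.
        self._cache_put(block_id, data)
        self.capacity_tier[block_id] = data

    def read(self, block_id):
        # Cache hit: served from the fast tier.
        if block_id in self.cache_tier:
            return self.cache_tier[block_id]
        # Cache miss: fetch from capacity and promote into the cache tier.
        data = self.capacity_tier[block_id]
        self._cache_put(block_id, data)
        return data

    def _cache_put(self, block_id, data):
        # Evict the oldest cached block once the cache tier is full.
        if block_id not in self.cache_tier and len(self.cache_tier) >= self.cache_slots:
            self.cache_tier.pop(next(iter(self.cache_tier)))
        self.cache_tier[block_id] = data
```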

Hyper-Convergence is a further compression of Converged Infrastructure.  By definition, Software Defined Storage is dedicated storage; its only function is to provide storage.  However, it is possible to combine storage functionality with the ability to provide compute capabilities (VMs, etc.).  When compute and storage are combined into joint compute/storage nodes, you have a “Hyper-Converged” infrastructure.  Whereas Converged brings all the elements together in a rack (Compute, Storage and Networking), hyper-convergence takes it one step further and combines the compute and storage in the same nodes.  Examples of this are Nutanix, VMware vSAN and Microsoft Storage Spaces Direct (S2D), among others.
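The difference between the two models boils down to which roles a node carries.  The sketch below uses a hypothetical Node class to contrast a converged rack (dedicated storage and compute nodes) with a hyperconverged cluster (every node runs VMs and contributes its local disks to the shared pool); the node names and disk counts are invented for illustration.

```python
class Node:
    """Hypothetical cluster node used only to contrast the two models."""

    def __init__(self, name, local_disks=0, runs_vms=False):
        self.name = name
        self.local_disks = local_disks  # disks contributed to the shared storage pool
        self.runs_vms = runs_vms        # whether the hypervisor runs on this node

# Converged: separate storage and compute nodes, tightly coupled in one rack.
converged_rack = [
    Node("storage-1", local_disks=24),
    Node("storage-2", local_disks=24),
    Node("compute-1", runs_vms=True),
    Node("compute-2", runs_vms=True),
]

# Hyperconverged: every node both runs VMs and contributes local disks
# to the software-defined storage pool (the vSAN / S2D / Nutanix pattern).
hci_cluster = [Node(f"hci-{i}", local_disks=8, runs_vms=True) for i in range(1, 5)]

pool_disks = sum(n.local_disks for n in hci_cluster)
print(f"hyperconverged pool spans {pool_disks} disks across {len(hci_cluster)} nodes")
```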

Converged and hyperconverged infrastructure are specific terms and concepts describing how we assemble the hardware (infrastructure) layer.  It is important to note that this has nothing to do with how these resources are consumed.  Layered on top will be some level of software to tie everything together, make those resources available and provide resource management… plus much more.  Cloud is a popular topic these days and it is one possible consumer of these resources.  However, it is important to separate the infrastructure discussion from the consumption layer discussion.  They are not directly related and are separate conversations… a topic for another day.