Hyperconverged Infrastructure solutions may differ in features, architecture and scalability, but do the differences really matter?
Public cloud providers, internet search engines and social media companies realized early that traditional legacy infrastructures wouldn't allow them to scale their data centers as their core businesses grew. Traditional legacy architectures are complex to set up, operate and maintain, and more often than not lack the ability to scale with the business's needs. Their solution was to develop software frameworks, built for virtualization, running on off-the-shelf hardware. This allowed them to scale their infrastructures without limitation and, at the same time, reduce complexity by automating day-to-day tasks. The key factor here is to use Software-Defined solutions and virtualize the entire infrastructure stack: servers, storage and networking.

While these companies built their new, modern Software-Defined data centers, other organizations all over the world continued (and still do) to use traditional legacy architectures. Many vendors tried to address this by developing a "new" (ahem) solution: Converged Infrastructure. The only difference, in my opinion, between a traditional legacy 3-tier architecture and a converged infrastructure is the number of SKUs. From a buyer's perspective, a converged infrastructure is a single SKU containing many different components from multiple vendors. Sure, some converged solutions may offer an application for solution management or monitoring, but that's light-years away from a truly Software-Defined solution.
Fast-forward a couple of years, and new-to-the-market vendors came to the conclusion that private data centers would also benefit from the positive effects of a Software-Defined platform. By merging the traditional infrastructure hardware components, compute, storage and networking, into a "single box" and adding Software-Defined capabilities, the Hyper-Converged Infrastructure (HCI) solution was born.
Hyperconvergence is a framework that combines compute, storage and networking into a single system. By implementing Software-Defined elements such as Software-Defined Storage (SDS) and Software-Defined Networking (SDN), organizations can reduce complexity, add flexibility and increase scalability, as these virtual elements, with seamless hypervisor integration, add greater levels of automation.
Today, in 2017, I think every vendor in the IT infrastructure market has an HCI solution, or is about to launch one. This makes me believe it really is time for every organization to start evaluating HCI solutions and discover the benefits such an investment offers. The size of your organization doesn't really matter: who wouldn't benefit from reduced complexity, increased scalability and added flexibility?
I started this post with a question, and I think it's time to answer it. Yes, the differences do matter. I like to think of HCI as an organization's private cloud service, which means it must reduce complexity, increase scalability and add flexibility. To achieve this, an HCI solution must have a set of core features:
- A single management solution for the entire stack
- Easy operations and non-disruptive upgrades
- A business-dependent consumption model
- Distributed filesystem for scalability and data distribution
- Hardware and hypervisor agnostic
- Self-service portal for end-users
- High level of automation and machine learning
- Embedded security
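The distributed filesystem is worth a closer look, because it is what lets an HCI cluster scale simply by adding nodes: data is automatically spread and replicated across the cluster. As a toy illustration only (not any vendor's actual placement algorithm), a sketch of hash-based replica placement might look like this:

```python
import hashlib

def place_replicas(object_id: str, nodes: list[str], replicas: int = 2) -> list[str]:
    """Pick `replicas` distinct nodes for an object by hashing its ID.

    A deliberately simplified sketch of how a distributed filesystem
    spreads data across HCI nodes; production systems add failure
    domains, capacity awareness and rebalancing on top of this idea.
    """
    digest = hashlib.sha256(object_id.encode()).hexdigest()
    start = int(digest, 16) % len(nodes)
    # Place copies on consecutive nodes in the ring, wrapping around.
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

cluster = ["node-1", "node-2", "node-3", "node-4"]
print(place_replicas("vm-disk-42.vmdk", cluster))
```

The point of the sketch is that placement is deterministic and driven by software, not by a storage admin carving out LUNs: add a fifth node and the same logic starts using it, which is exactly the scaling behavior the bullet above describes.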
An HCI solution with the capabilities above lays the foundation for an organization's ability to focus on developing its core business.