Until about the year 2000, each server in a data center typically ran a single application. This approach ensured predictable application performance, but as processing capability increased, it left much of a server's processing power sitting idle. The one-application-per-server approach also consumed a lot of real estate in the data center, not to mention the attendant power and cooling costs.
To take advantage of that excess capacity, software engineers adapted a concept from the world of mainframes: a layer of abstraction between the hardware and the applications running on it. This abstraction allows a single physical server to be divided into multiple "virtual" servers, or virtual machines, so that administrators can run multiple applications on a single physical server, recapturing underutilized processing capacity and reducing the data center footprint. It also allows virtual servers spread across multiple physical servers to be viewed, managed, and utilized as pooled resources.
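To make the pooling idea concrete, here is a minimal Python sketch. It is not any real hypervisor's API: the server names, core counts, and first-fit placement policy are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    cores: int

@dataclass
class PhysicalServer:
    name: str
    cpu_cores: int
    vms: list = field(default_factory=list)

    def free_cores(self) -> int:
        # Capacity not yet claimed by VMs on this host.
        return self.cpu_cores - sum(vm.cores for vm in self.vms)

class ServerPool:
    """Treats many physical servers as one pooled compute resource."""

    def __init__(self, servers):
        self.servers = servers

    def place(self, vm: VirtualMachine) -> PhysicalServer:
        # First-fit placement: any host with spare capacity will do.
        for host in self.servers:
            if host.free_cores() >= vm.cores:
                host.vms.append(vm)
                return host
        raise RuntimeError(f"pool exhausted: no host can fit {vm.name}")

# Six 4-core applications share two 16-core hosts instead of six machines.
pool = ServerPool([PhysicalServer("host-01", 16), PhysicalServer("host-02", 16)])
for i in range(6):
    host = pool.place(VirtualMachine(f"app-{i}", cores=4))
    print(f"app-{i} -> {host.name}")
```

Real hypervisors use far more sophisticated schedulers, but the effect is the same: workloads land wherever capacity exists, and the pool, not the individual box, is the unit of management.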
Initially, the idea of server consolidation (reducing the number of physical servers in the data center) appealed to operators of large data centers because of the significant reduction in hardware, space, administrative, and electrical costs. But the economy and flexibility of virtualization soon found advocates in much smaller environments, and it was soon appearing in data centers of all sizes. Once administrators could utilize a significantly greater percentage of overall server capacity, it became much less expensive to dedicate virtual machines to redundancy, increasing availability. Virtualization also reduced the cost of application and operating system testing by eliminating the need to purchase an isolated server; in a virtualized environment, that capacity can be spun up at will.
In time, engineers realized that the idea behind virtualization could be applied to other resources in the data center. Today, storage virtualization and network virtualization are making inroads into data centers, further simplifying administration, shrinking the footprint, increasing agility, and moving the vision of a software-defined data center from concept to reality.
The storage virtualization movement is well underway in 2016. By pooling physical storage residing within multiple network storage devices, storage virtualization allows all of those resources to be viewed and managed as a single storage device from a single console. Because resources are easily viewed and managed, storage virtualization speeds and simplifies backup, archiving, and recovery.
Similarly, virtual storage area networks, such as VMware's vSAN, add the ability to dynamically scale storage capacity. This is achieved by clustering server-attached flash devices and/or hard disks into a flash-optimized, highly resilient shared datastore suitable for workloads including applications, virtual desktops, remote IT, disaster recovery (DR), and DevOps infrastructure.
If this is beginning to sound a little like cloud storage to you, you're right. This is essentially the model for cloud storage: multiple storage devices that are dynamically managed to scale on demand. In fact, a virtualized storage environment greatly simplifies a move to cloud storage, because cloud resources can be viewed simply as extensions of the existing virtualized pool.
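The sketch below models that pooling-and-scaling behavior in Python. It is purely illustrative: the device names and the most-free-space allocation policy are assumptions, and real products such as vSAN work at the block layer with replication and caching rather than simple capacity counters.

```python
class StorageDevice:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

class StoragePool:
    """Presents many devices as one logical datastore with a single view."""

    def __init__(self, devices):
        self.devices = devices

    def total_free_gb(self) -> int:
        # The administrator sees one number, not one per array.
        return sum(d.free_gb() for d in self.devices)

    def allocate(self, size_gb: int) -> str:
        # Place each request on whichever device has the most room.
        target = max(self.devices, key=lambda d: d.free_gb())
        if target.free_gb() < size_gb:
            raise RuntimeError("pool cannot satisfy request")
        target.used_gb += size_gb
        return target.name

    def add_device(self, device: StorageDevice) -> None:
        # Scaling out: new hardware simply grows the same pool.
        self.devices.append(device)

pool = StoragePool([StorageDevice("nas-a", 500), StorageDevice("nas-b", 500)])
print(pool.allocate(200), pool.total_free_gb())  # nas-a 800
pool.add_device(StorageDevice("nas-c", 1000))    # dynamic scale-out
print(pool.total_free_gb())                      # 1800
```

The key point is the last two lines: adding a device changes nothing for consumers of the pool, which is exactly why extending a virtualized pool into the cloud is a natural next step.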
As with server and storage virtualization, network virtualization abstracts existing resources and allows them to be viewed and managed from a single pane of glass, using open protocols such as OpenFlow. This allows for on-demand provisioning of resources without the need to physically reconfigure cabling and switches with every network change. Software-defined networking (SDN) takes things a step further, entirely separating the control plane (which decides where traffic goes) from the data plane (which forwards it), and enabling administrators to spin up network components at will.
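Here is a toy Python model of that control/data-plane split. The rule format and class names are invented for illustration; in a real SDN deployment the controller would push OpenFlow match/action entries down to hardware or virtual switches.

```python
# Control plane: a central controller computes and distributes forwarding rules.
class Controller:
    def provision(self, switch: "Switch", dst_prefix: str, out_port: int):
        # Push a match/action rule down to the device; no recabling needed.
        switch.install_rule(dst_prefix, out_port)

# Data plane: switches only match packets against installed rules and forward.
class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}  # dst_prefix -> out_port

    def install_rule(self, dst_prefix: str, out_port: int):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_ip: str) -> int:
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return port
        return -1  # no match; in OpenFlow this could be punted to the controller

ctl = Controller()
sw = Switch("edge-1")
ctl.provision(sw, "10.0.1.", out_port=2)  # on-demand provisioning in software
print(sw.forward("10.0.1.7"))             # 2
print(sw.forward("10.0.9.9"))             # -1
```

Because all the decision-making lives in the controller, a network change is a software update to flow tables rather than a trip to the wiring closet.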
Individually, the components of the software-defined data center yield significant benefits to the organization by optimizing resource utilization, simplifying the management of those resources, and reducing the data center footprint and overall costs.
Taken together, a fully virtualized, software-defined data center not only yields substantial operational and administrative savings, but also creates a cloud-ready infrastructure that will pay dividends for many years to come. What's more, you can start down the path to a software-defined data center at any point (compute, storage, or network), whichever will deliver the greatest benefit or return.
Online retailers and travel sites were among the earliest adopters of virtualized resources, and they remain big beneficiaries of the software-defined data center: it lets them provide consistent service to customers whether they're shopping on Black Friday or rebooking a flight home from Thanksgiving with the family.
Regardless of your specific business cycles, a software-defined data center starts to deliver benefits from day one, allowing the organization to rapidly reallocate and scale resources in times of high demand, and then release those resources for other uses as demand drops off.
In our increasingly 24/7, web-centric, cloud-connected world, such agility is vital to delivering a high-quality customer experience. But it’s not an all-or-nothing proposition; Zones works in partnership with industry leaders like Cisco, Dell, Hewlett Packard Enterprise, VMware, and many others to help clients implement advanced data center solutions at any stage.
This article originally appeared in the Fall 2016 edition of Solutions by Zones magazine.