Graham Smith, Director of Virtualization Product Management, Primary Data
Computerworld’s Forecast 2014 placed virtualization in the top five areas where IT spending is increasing, while IDC’s State of the Market for 2012 listed server virtualization as the top IT budget priority among respondents. It’s easy to understand why, given how server virtualization improves infrastructure efficiency, provisioning speed, disaster recovery times and return on investment. But while server virtualization has matured, the technologies that address the storage supporting virtual machines still have limitations. These boundaries leave enterprises struggling to keep up with today’s rapid data growth, manage aging datasets and ever-larger numbers of VMs, and stay responsive and flexible to changing application needs.
This article discusses why physical storage systems fall short, how server virtualization complicates these problems, and how emerging data virtualization technologies can resolve the issues.
Data is Dynamic But Physical Storage is Static
Data is constantly changing, but the storage it rests on today is static. Storage that works well for an application on the day it is deployed is often quickly mismatched to the application’s needs as the application becomes more or less critical to the business over time. Today, moving data is a complex, laborious, manual process. In addition, datacenters offer a wide range of storage options for provisioning and placing data: server-side flash, SAN, NAS, private clouds, and even multiple public clouds.
Virtualized Servers Stress Storage Silos
It’s no secret that server virtualization creates a number of storage challenges. Unpredictable I/O in virtualized environments and the lack of visibility into VM storage requirements and usage force enterprises to overprovision in order to deliver sufficient performance and capacity to VMs. The ease of creating VMs greatly simplifies administrator tasks, but it has also led to an issue known as VM sprawl, which occurs when VMs are created for a temporary need and then forgotten by their creators. Sometimes even when VMs are destroyed, their virtual disks (VMDKs) remain, creating zombie VMDKs. Abandoned VMs, active but cool or cold VMDKs, and zombie VMDKs all waste valuable space on expensive storage resources, offsetting much of the cost savings and efficiency virtualization can provide.
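To make the zombie-VMDK problem concrete, the following sketch scans a datastore directory for virtual disk files that no registered VM references. The datastore layout and the list of registered disks are hypothetical stand-ins for a real inventory API such as vSphere’s; this is an illustration of the reclamation idea, not any vendor’s implementation.

```python
from pathlib import Path

def find_zombie_vmdks(datastore_root, registered_disks):
    """Return VMDK paths present on disk but absent from the VM inventory.

    datastore_root: directory to scan recursively for *.vmdk files.
    registered_disks: iterable of disk paths known to the VM inventory
    (a hypothetical stand-in for querying the hypervisor manager).
    """
    on_disk = {str(p) for p in Path(datastore_root).rglob("*.vmdk")}
    # Anything on disk that the inventory does not reference is a
    # candidate "zombie" that wastes capacity on expensive storage.
    return sorted(on_disk - set(registered_disks))
```

A mobility policy could then archive or delete the returned paths, reclaiming space that sprawl left behind.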
Data Mobility Enables an Application-Centric Datacenter
The most promising solution to these problems lies in technologies that virtualize data, abstracting a file from the underlying physical storage hardware and making it accessible within a single dataspace. Data virtualization enables a host of capabilities that allow the enterprise to achieve an agile, flexible, and cost-efficient datacenter.
Guarantee VM Performance at the Lowest Cost
Once data is virtualized, intelligent software can place it dynamically according to policy. This means that as VM and application workloads change, software can automatically redistribute workloads to maximize resiliency, cost efficiency, data protection, security and performance. By dynamically moving VMs and virtual disks to the lowest-cost storage that meets their service level objectives (SLOs), enterprises can guarantee VM performance and regulatory compliance without the excessive overprovisioning required in traditional environments.
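The placement decision described above can be sketched as a simple policy function: filter out tiers that cannot meet a virtual disk’s SLO, then pick the cheapest of the remainder. The tier names, costs, and SLO fields here are invented for the example; a real policy engine would weigh many more dimensions (protection, security, locality).

```python
def place_by_slo(tiers, slo):
    """Pick the lowest-cost storage tier that satisfies an SLO.

    tiers: list of dicts with "name", "cost_per_gb", "max_latency_ms",
    and "iops" keys (hypothetical tier descriptors).
    slo: dict with required "max_latency_ms" and "min_iops".
    """
    # Keep only tiers fast enough for the SLO...
    eligible = [t for t in tiers
                if t["max_latency_ms"] <= slo["max_latency_ms"]
                and t["iops"] >= slo["min_iops"]]
    if not eligible:
        raise ValueError("no tier can meet this SLO")
    # ...then choose the cheapest, avoiding overprovisioning on flash.
    return min(eligible, key=lambda t: t["cost_per_gb"])
```

For a VM that tolerates 10 ms latency, such a policy would land on midrange disk rather than flash, reserving the expensive tier for workloads that actually need it.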
Simplify Infrastructure by Aligning Resources with Demands
The ability to move data within a global dataspace also enables enterprises to make the best use of their infrastructure investments. First, data mobility simplifies administration by making all devices manageable by policy through a single pane of glass, integrating with hypervisor managers such as vCenter and OpenStack through APIs.
Second, data mobility enables enterprises to achieve an application-centric view of their data—to focus on policies that give applications the performance, protection, and security they need. Data resources become a global pool that administrators scale linearly and incrementally. This avoids the proliferation of storage containers and overprovisioned Tier 1 and Tier 2 storage that must all be managed independently.
Third, data virtualization can make storage from any vendor compliant with the hypervisor. For example, data virtualization could present all data to VMs in a VVOL-compliant manner, making it unnecessary to limit hardware to models that directly support VVOL. By the same token, data virtualization can make virtualized storage such as VSAN, which is typically a silo in itself, usable by physical servers or as a storage pool for user data. Finally, policies can automatically identify cool or cold virtual disks so they can be deleted or moved to lower-cost storage, freeing capacity on expensive storage tiers and allowing future storage purchases to be deferred. They can do this transparently across multiple hypervisor environments such as ESXi, KVM, Xen, and others, so companies can automate a single, consistent policy-based management scheme across all virtualized and physical environments.
New Technologies Accelerate Innovation, Integrate with Legacy Infrastructure
Enterprises often place such demanding requirements on technology that they must wait until products mature to take full advantage of them. Data virtualization with data mobility can accelerate the adoption curve while still protecting enterprise data and users. A good example of this is container technologies like Docker and CoreOS.
Containers can reduce costs by allowing multiple isolated applications to run on the same physical or virtual server while achieving bare metal performance. But enterprise use of containers today is very limited because container images are currently confined to the local storage of the host operating system, which makes it hard to apply SLOs to the images. Data virtualization enables the files used by applications running in containers to be shared, uniquely managed and moved to ensure enterprises meet SLOs at the lowest cost. Hot files can automatically be promoted to flash, while cool files can be moved to lower-cost arrays or the cloud.
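The hot/cool classification behind that promotion step can be sketched as follows: bucket files by how recently they were accessed, so a mobility policy can promote the hot set to flash and demote the cold set to a cheaper tier. The 30-day threshold and the path-to-timestamp mapping are assumptions for illustration; real systems track access heat with far richer telemetry.

```python
import time

COLD_AFTER = 30 * 24 * 3600  # assumed threshold: 30 days, in seconds

def classify(files, now=None):
    """Split files into (hot, cold) lists by last access time.

    files: mapping of path -> last access timestamp (epoch seconds),
    a hypothetical stand-in for filesystem access metadata.
    """
    now = time.time() if now is None else now
    hot, cold = [], []
    for path, atime in files.items():
        # Files untouched for longer than COLD_AFTER go to cheap storage.
        (cold if now - atime > COLD_AFTER else hot).append(path)
    return hot, cold
```

The policy engine would then schedule moves for the cold list during quiet periods, keeping the operation transparent to the running containers.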
In addition, data virtualization removes the need to source expensive components to scale out capacity in a SAN. Disks added to typical array systems must be compatible and sourced from the array vendor, which can be costly and slow in a fast-evolving IT landscape. With data virtualization, new resources for capacity or performance can be added easily, and the system immediately begins using the new resource to load-balance data according to IT policies.
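One simple way such a system could start using a newly added resource immediately is capacity-weighted placement: map each data object to a resource with probability proportional to free capacity, so a new device absorbs a proportional share of incoming data without any manual rebalancing. The resource descriptors and the CRC-based mapping here are an assumption chosen for determinism, not a description of any product’s algorithm.

```python
import zlib

def pick_target(resources, key):
    """Deterministically map a data object (by key) to a resource,
    weighted by free capacity.

    resources: list of dicts with "name" and "free_gb" keys
    (hypothetical descriptors of pooled storage devices).
    """
    total = sum(r["free_gb"] for r in resources)
    # CRC32 gives a stable hash across runs, unlike Python's salted hash().
    point = zlib.crc32(key.encode()) % total
    for r in resources:
        if point < r["free_gb"]:
            return r["name"]
        point -= r["free_gb"]
```

Because the weights come from live free-capacity figures, adding a device changes the weighting on the very next placement decision, which matches the article’s claim that new resources are used immediately.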
Transparently Meet the Demands of Growth and Changing Application Needs
The massive numbers of VMs, virtual disks and containers enterprises need to provision and manage are daunting, and the stress they place on physical storage is immense. Policy-based data mobility through data virtualization improves datacenter efficiency, agility and flexibility. Automated provisioning ensures VM and container SLOs are met, even as application and VM needs change. Dynamically distributed workloads, managed by policy through a single pane of glass, simplify management and reduce total cost of ownership (TCO). With these benefits, virtualization finally receives the dynamic, responsive storage support it needs to deliver on the promise of maximizing datacenter efficiency.