From mobile devices to the cloud to the Internet of Things (IoT), new technologies give IT teams more types of data to manage than ever before. This data diversity has created unprecedented storage diversity, as new storage types emerge to serve specific data needs. Innovations like phase-change memory answer the need for ever-faster response times, while object and cloud storage products reduce the cost of keeping data that isn't actively used but can't be deleted.
Each of these storage types and platforms offers different benefits to the datacenter, but each is also a silo that traps data and must be managed separately. Data virtualization resolves this problem by enabling IT to automatically allocate storage resources according to changing data needs. Let's take a closer look at how.
Achieving Global Data Visibility
Server virtualization gives companies global visibility into compute resources, with centralized, automated management for application and server deployment. Software-defined networking (SDN) lets organizations configure networking dynamically. Data, however, remains trapped in silos, making storage the most stagnant and inflexible part of IT today. Data virtualization dynamically aligns storage to changing business needs, making it the next step toward the truly software-defined datacenter (SDDC).
Data virtualization separates metadata (data about the data) from the actual data, to provide a logical abstraction of physical storage within a single global dataspace. This gives IT a unified view of data and how it is being accessed. Metadata statistics and telemetry deliver unprecedented insight into the operational health of the datacenter. Once data can be viewed and accessed independently of the storage container, companies can finally see what’s happening across all of their storage resources.
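As a rough illustration, the metadata layer can be thought of as a catalog that maps each logical object to its physical silo while tracking access statistics, giving a unified view without touching the data itself. The class and method names below are illustrative assumptions, not a real product API:

```python
class Dataspace:
    """Toy model of metadata separated from data: a catalog of logical
    objects, each mapped to a physical silo, with access statistics."""

    def __init__(self):
        self.catalog = {}  # logical name -> {"silo": ..., "accesses": ...}

    def register(self, name, silo):
        self.catalog[name] = {"silo": silo, "accesses": 0}

    def record_access(self, name):
        self.catalog[name]["accesses"] += 1

    def view(self):
        """Unified view: every object, wherever it physically lives."""
        return {n: m["silo"] for n, m in self.catalog.items()}

    def hot(self, threshold):
        """Objects whose access counts meet a threshold, across all silos."""
        return [n for n, m in self.catalog.items()
                if m["accesses"] >= threshold]
```

Because the catalog spans every silo, questions like "which data is hot, and where does it live?" become simple queries instead of per-array investigations.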
“Data virtualization can detect when resources are becoming contended and proactively move data to other resources”
Dynamically Aligning Storage to Evolving Business Needs
Once data is freed from storage silos, it can be moved automatically and non-disruptively across different types of storage, from flash to SAN and NAS to the cloud, as business needs evolve.
This automated movement can be controlled by policy. Just as server virtualization enabled IT to assign CPU and RAM requirements to a virtual machine (VM), IT can now assign requirements for data, using terms application owners understand.
For example, an application or VM admin conventionally informs a storage admin of their app's or VM's requirements, then waits while the storage admin searches available storage devices for a system that can support those needs and provides a storage target. With data virtualization, an admin simply assigns policies that define the application's performance (IOPS, bandwidth, latency) and protection (availability, durability, security) needs. These objective-based policies can be defined by an application owner or architect and prepackaged so that admins can assign them on their own. Once a policy is assigned, the data virtualization software automatically places data on the ideal storage to meet its objectives.
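The placement step this describes can be sketched as a simple constraint match: choose the cheapest storage target that satisfies every objective. The field names and cost model below are assumptions for illustration, not a product specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Objectives:
    """Performance and protection needs in terms an application owner understands."""
    min_iops: int
    max_latency_ms: float
    min_durability_nines: int  # e.g. 11 for "eleven nines" of durability

@dataclass
class StorageTarget:
    name: str
    iops: int
    latency_ms: float
    durability_nines: int
    cost_per_gb: float

def place(obj: Objectives, targets: list) -> Optional[StorageTarget]:
    """Return the cheapest target that meets every objective, or None."""
    fits = [t for t in targets
            if t.iops >= obj.min_iops
            and t.latency_ms <= obj.max_latency_ms
            and t.durability_nines >= obj.min_durability_nines]
    return min(fits, key=lambda t: t.cost_per_gb, default=None)
```

Expressed this way, an architect can prepackage an `Objectives` profile once, and admins simply attach it to applications without knowing which arrays sit behind it.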
Director-Virtualization Product, Primary Data
Conventional, converged, and hyper-converged technologies make data easier to manage by moving it into yet another silo. Data virtualization instead eliminates the silos that make global data management difficult in the first place, enabling far more powerful data management capabilities.
While traditional data management software is blind beyond its own storage container, data virtualization can analyze telemetry in real time across all resources in the datacenter. This global visibility improves data placement decisions. For example, data virtualization enables admins to prioritize data to protect mission-critical service levels. When resource contention on a storage device threatens Quality of Service, data virtualization can determine whether other storage targets can meet the data's requirements and automatically, non-disruptively move the data to maintain its service-level objective (SLO).
Adding Intelligence for Predictive Datacenter Management
The real-time analytics of data virtualization software even makes it possible to implement machine learning algorithms that predict when data needs might increase. For instance, data virtualization software can learn that data associated with financial applications is accessed more at the end of the quarter. It can proactively move this data to higher performing storage before the activity surge occurs, then back to more cost-effective storage once activity levels subside.
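A naive version of such a prediction, using only per-phase averages of past access counts as a seasonal signal (real systems would use far richer models), might look like:

```python
from statistics import mean

def seasonal_forecast(history, period, phase):
    """Naive seasonal predictor: average of observations at the same phase
    of each past period (e.g., the same day of each quarter)."""
    samples = [count for i, count in enumerate(history) if i % period == phase]
    return mean(samples) if samples else 0.0

def choose_tier(predicted_accesses, hot_threshold):
    """Promote data to flash ahead of a predicted surge; otherwise keep it
    on cost-effective capacity storage. Tier names are illustrative."""
    return "flash" if predicted_accesses >= hot_threshold else "capacity"
```

Run over the financial application's history, a quarter-end spike shows up as a high forecast for the final phase, triggering promotion before the surge rather than after users feel it.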
Intelligent data virtualization also makes management easier for admins because policies can be leveraged to automate the assignment of data objectives. Data virtualization software can automatically apply policies to data based on activity or inactivity levels, such as files that have been active or inactive in the last 30 days, or based on pattern matching of file types, such as .tmp, .log, .dat, or .sql files.
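The pattern- and activity-based rules described here can be sketched as follows; the file patterns and 30-day window come from the examples above, while the policy names are illustrative assumptions:

```python
import fnmatch

# Illustrative mapping of file-type patterns to policies.
PATTERN_POLICIES = [("*.tmp", "scratch"), ("*.log", "scratch"),
                    ("*.dat", "scratch"), ("*.sql", "scratch")]
SECONDS_30_DAYS = 30 * 86400

def auto_policy(path, seconds_since_access):
    """Assign a policy from file-type patterns first, then from recency:
    files active within 30 days get a performance policy, the rest archive."""
    for pattern, policy in PATTERN_POLICIES:
        if fnmatch.fnmatch(path, pattern):
            return policy
    if seconds_since_access <= SECONDS_30_DAYS:
        return "performance"  # active within the last 30 days
    return "archive"
```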
Eliminate Fire Drills and Migration Pain
Just as vMotion moves applications and operating environments between servers without disruption, data virtualization can non-disruptively move data between storage resources. This has enormous benefits. For example, an application can suffer high latency when a storage resource can't meet the application's needs during times of high demand. Data virtualization can detect when resources are becoming contended and proactively move data to other resources. Because this mobility is non-disruptive, enterprises can also keep applications live during storage migrations and upgrades. With data virtualization, a storage upgrade becomes a simple, fast process of adding the new storage to the global dataspace, applying the appropriate policies, and pointing the application to the new location, with no need for slow and error-prone data migrations. When these actions happen automatically, application uptime increases and IT can focus on other strategic tasks that add more value to the business.
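Because applications resolve data through the metadata layer, cutover reduces to a background copy followed by a single pointer flip. A minimal sketch, where the `location` catalog and `copy` mover are assumed stand-ins rather than a real API:

```python
def upgrade(location: dict, name: str, new_target: str, copy) -> str:
    """Move `name` to new_target with no client downtime: copy the data in
    the background, then flip the metadata pointer in one step."""
    old_target = location[name]
    copy(name, old_target, new_target)  # background copy; app stays live
    location[name] = new_target         # atomic metadata repoint
    return old_target                   # old storage can now be drained and retired
```

Clients never see an outage: every access before the flip resolves to the old target, every access after it to the new one.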
Data virtualization enables companies to finally do with storage what they have been doing with servers for decades. Server virtualization gives companies global visibility into compute resources, as well as the ability to centrally manage and automatically allocate server resources according to VM needs. Similarly, data virtualization gives companies global visibility into storage resources, and the ability to centrally manage data according to business needs. With data virtualization complementing server virtualization, enterprises can finally maximize efficiency from compute to storage resources and achieve the agility necessary to respond instantly to evolving business demands.