DECEMBER 2022

IN MY VIEW

BY KEVIN MILLER, PRINCIPAL, DIGITAL MANAGEMENT, AURECON

FUTURE-PROOFING DATA CENTER DESIGNS WITH A DASH OF INNOVATION

While data centers have, in my 20+ years of experience designing, building, and operating them, changed in many ways, much has remained the same. Will the advent of new workloads change that forever? Now, more than ever, the compute loads within them are evolving to a new paradigm driven by requirements for IoT-based data, which in turn will fundamentally change the facilities in which those loads reside. To understand how this shift is happening, it is worth exploring the evolution of compute loads over time, right up to the current day.

From the earliest days of the mainframe, traditional data centers relied on defined physical infrastructure dedicated to a specific purpose, with expansion of workloads generally requiring a matched expansion in physical hardware. If you wanted more compute, storage, or networking, you bought more hardware, leading to an expansion of data center area. By today's standards, utilization was exceedingly low, as invariably not all hardware was running at anywhere near full capacity all the time.

Many things have progressed since those physically restrictive days. First we saw virtualization in the early 2000s, which was refined and turbocharged to bring us to the software-defined data center of today. This ushered in an era in which a logical computing process was no longer tied to one physical piece of hardware but could be moved around the data center in ever more elaborate ways. These innovations led to significant increases in raw computing per rack. They did not, however, generally translate into huge increases in power and cooling requirements for the facility. This is largely because load provisioning has historically been done using a very risk-averse approach.
In the early days, if you had 10 racks with a peak power draw of 5kW each, you'd likely provision for 50kW of power (with a little extra for luck). The fact that each of those servers was almost never at 5kW didn't feature in the consciousness of designers, because there was simply no way of knowing what the loads would be, or when. On that one day a year when all servers were running at full throttle, you had to have that power available.

Virtualization made data centers denser: more of the time, more of the servers were closer to their maximum. But that maximum didn't fundamentally change. So you still provisioned your 50kW for your 10 racks; you just used more of it, more of the time.
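The peak-based provisioning logic above can be sketched in a few lines. This is a minimal illustration, not anyone's actual capacity-planning tool: the 10% headroom ("a little extra for luck") and the ~1.5kW average per-rack draw are assumed example figures; only the 10 racks at 5kW peak come from the text.

```python
def provisioned_power_kw(racks: int, peak_kw_per_rack: float, headroom: float = 1.1) -> float:
    """Provision for every rack at peak simultaneously, plus a headroom factor."""
    return racks * peak_kw_per_rack * headroom


def utilization(actual_kw: float, provisioned_kw: float) -> float:
    """Fraction of provisioned capacity actually being drawn."""
    return actual_kw / provisioned_kw


# The article's example: 10 racks at 5 kW peak each -> 50 kW base provision,
# here padded with an assumed 10% "for luck".
prov = provisioned_power_kw(10, 5.0)  # 55.0 kW

# On a typical day, servers sit far below peak (assumed ~1.5 kW average
# per rack), so most of that provisioned power goes unused.
typical_draw = 10 * 1.5  # 15 kW
print(f"provisioned: {prov:.1f} kW, utilization: {utilization(typical_draw, prov):.0%}")
```

Virtualization, in this framing, raises `typical_draw` toward the peak without moving `prov`, which is why facility power and cooling designs stayed largely unchanged.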