Yes, density has increased; but for all the vast increases in raw compute power in a given rack, the power and cooling requirements have not seen a large step change as a general rule (there are exceptions today just as there were in the mainframe days). Look around and you'll still generally find data centers provisioned for 6-10kW per rack.

Something else which remained constant was the fact that CPUs did all of the real work. These little chips are incredibly good at processing structured data, that is, data in rows and columns, which is how the world ordered its data until recently. Today, much of the data is unstructured, coming from sources as diverse as the plethora of IoT devices, AI analysis of our sleep patterns, and Bitcoin mining. The benefit of this wealth of information is that all of this data, both structured and unstructured, holds the key to helping organizations better understand their customers, follow patterns of behaviour to predict future actions, uncover breakthroughs to cure diseases, perform engineering simulations, and make communities safer, among a myriad of other things.

To harness this plethora of data, artificial intelligence and machine learning have emerged, using data to train algorithms to find patterns. The problem is that traditional CPUs cannot adequately handle the intensive parallel processing required for machine learning, which is why GPUs are proliferating. GPUs, which are optimised for parallel computing, were originally developed to improve graphics rendering, but it became clear that they are very useful for any application that needs a lot of data crunched quickly, including AI and machine learning, gaming, video editing and content creation, meteorology, and a wide range of scientific applications.

As a result, GPU workload accelerators are fast becoming a necessity for effectively processing large data sets, for example in deep learning. While CPUs do have the capability to process tons of unstructured data, GPUs can do in a matter of minutes what a CPU takes days or weeks to do. Click on any tech news channel and you'll hear about GPU shortages owing to blockchain miners buying up the world's supply. This is because GPUs can uniquely solve these complex computational challenges.

For the data center, with increased requirements for AI processing, GPUs are becoming ubiquitous. This, in turn, creates a requirement not just for more power and cooling, but for more directed cooling. A GPU can draw several times more power than a CPU and has a much higher thermal design power (TDP). This means it gets hotter in one spot, so it needs more directed cooling; take a look at any consumer-grade GPU and note the massive cooling fans (and often water-cooling blocks). GPUs now mean that individual chips can independently need over 3kW of cooling, and some servers pack more than 15 GPUs into a single frame. In short, adding GPUs alongside a traditional CPU can triple the density requirement of a server; in these deployments, 30kW per rack is not uncommon, and direct-to-chip cooling is becoming almost essential.
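To put rough numbers on that density shift, here is a back-of-the-envelope sketch in Python. The wattages, GPU counts and servers-per-rack figures are assumptions chosen purely for illustration, not measurements or vendor specifications.

```python
# Illustrative rack power budget: CPU-only servers vs. CPU+GPU servers.
# All wattages and counts are assumed example figures, not vendor data.

CPU_SERVER_W = 600        # assumed draw of a traditional two-socket CPU server
GPU_W = 400               # assumed draw of a single data center GPU
GPUS_PER_SERVER = 4       # assumed GPUs per accelerated server
SERVERS_PER_RACK = 14     # assumed servers per rack

cpu_rack_kw = CPU_SERVER_W * SERVERS_PER_RACK / 1000
gpu_server_w = CPU_SERVER_W + GPU_W * GPUS_PER_SERVER
gpu_rack_kw = gpu_server_w * SERVERS_PER_RACK / 1000

print(f"CPU-only rack:  {cpu_rack_kw:.1f} kW")    # ~8.4 kW, within a 6-10kW budget
print(f"GPU-dense rack: {gpu_rack_kw:.1f} kW")    # ~30.8 kW, several times higher
```

Even with these modest assumptions, the accelerated rack lands well beyond a 6-10kW provisioning envelope, which is why the type of load now matters as much as the total.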
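The earlier point about GPUs doing in minutes what a CPU might take days or weeks to do comes down to parallelism, and it is straightforward to demonstrate at a small scale. The snippet below is a minimal, illustrative timing comparison; it assumes PyTorch is installed and a CUDA-capable GPU is present, the matrix size is an arbitrary choice, and the measured speed-up will vary widely with hardware.

```python
# Minimal sketch: time the same large matrix multiplication on CPU and GPU.
# Assumes PyTorch and a CUDA-capable GPU; numbers are illustrative only.
import time
import torch

N = 8192  # assumed problem size, large enough to keep the GPU busy

a = torch.randn(N, N)
b = torch.randn(N, N)

start = time.perf_counter()
torch.matmul(a, b)                    # runs on the CPU's cores via the BLAS library
cpu_seconds = time.perf_counter() - start

a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
torch.cuda.synchronize()              # ensure the host-to-device copies have finished

start = time.perf_counter()
torch.matmul(a_gpu, b_gpu)            # the same operation across thousands of GPU cores
torch.cuda.synchronize()              # wait for the asynchronous GPU kernel to complete
gpu_seconds = time.perf_counter() - start

print(f"CPU: {cpu_seconds:.2f}s  GPU: {gpu_seconds:.2f}s  "
      f"speed-up: {cpu_seconds / gpu_seconds:.0f}x")
```

On typical hardware this kind of dense, highly parallel arithmetic, which is the same shape of work that dominates machine learning training, commonly runs one or more orders of magnitude faster on the GPU.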
The interesting thing about all this is that the way we provision data center power and cooling is fundamentally changing, with data center designers now taking a much greater interest not just in the total loads, but in the types of loads. My advice to anyone looking at a new data center strategy? Get your IT teams in the room with the design teams and strategists early in the process; they need, more than ever, to understand the IT stack and the types of applications which will be running on it. Unless it's with your Friday fish supper, no one likes hot chips.

Kevin Miller