The Fusion of AI and HPC
In recent years, AI has revolutionised scientific computing, and the convergence of workloads is ongoing.

By
Apac CIOOutlook | Friday, November 17, 2023
In the rapidly evolving landscape of technology, integrating AI's data-intensive capabilities with HPC's computational power necessitates innovative hardware solutions.
FREMONT, CA: In recent years, AI has revolutionised scientific computing, and the convergence of AI and HPC workloads is ongoing. This has driven growing demand for GPUs in high-performance computing (HPC), and the supercomputing and HPC community is now recognising AI's potential. Notably, the same GPUs that accelerate simulation also accelerate AI, so a single architecture is accessible to all markets and users. Across scientific fields, supercomputers are being used to expedite scientific discovery.
Current practice uses physics-based models to simulate phenomena that are difficult to observe experimentally. Climate change is a case in point: its vast spatial scale and long timescales make practical experiments to test climate scientists' hypotheses infeasible, so simulation becomes the method of choice.
The challenge lies in achieving accuracy while securing adequate compute cycles on supercomputers. Simulating atomic-level detail is often impractical, necessitating approximations and subsequent validation. In climate research, historical data aids model testing, yet Earth-scale challenges persist: capturing intricate phenomena, such as cloud formation at sub-kilometre scale with hundred-metre eddies, demands far more compute cycles for simulation precision.
When scaling up computation is not feasible, an alternative is to train AI systems to observe and approximate the simulation. While such an AI surrogate runs significantly faster than the original algorithm, its output remains provisional and requires validation. Nevertheless, it lets researchers explore a broader space of possibilities and detect complex phenomena unreachable by traditional methods, which can then be analysed in depth with first-principles physics simulations.
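As a toy illustration of this surrogate workflow, the sketch below stands in an inexpensive analytic function for an "expensive simulation", fits a cheap surrogate from a handful of simulation runs, sweeps many scenarios with the surrogate, and then validates it against the true model. Every function, sample count, and polynomial degree here is an illustrative assumption; in practice the surrogate would typically be a neural network trained on real simulation output.

```python
import numpy as np

# Hypothetical stand-in for an expensive physics simulation:
# a real case would be a full solver run per input.
def expensive_simulation(x):
    return np.sin(3 * x) * np.exp(-0.5 * x)

# 1. Run the costly simulation at a limited number of sample points.
x_train = np.linspace(0.0, 4.0, 40)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate (a polynomial here, for simplicity).
coeffs = np.polyfit(x_train, y_train, deg=12)
surrogate = np.poly1d(coeffs)

# 3. Use the fast surrogate to sweep far more candidate scenarios
#    than the expensive simulation could cover.
x_sweep = np.linspace(0.0, 4.0, 10_000)
y_approx = surrogate(x_sweep)

# 4. Validate: surrogate results remain provisional until checked
#    against the first-principles model.
worst_err = np.max(np.abs(y_approx - expensive_simulation(x_sweep)))
print(f"max surrogate error on sweep: {worst_err:.2e}")
```

The design point mirrored here is the trade: the surrogate is orders of magnitude cheaper per evaluation, but its predictions are only trusted once spot-checked against the original model.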
AI surrogates also find application at the molecular level, aiding tasks such as protein folding and the study of how biological entities like viruses function. Simulating the interaction of viruses with drugs is challenging because it requires very short time steps over an extended duration; AI-based surrogate models can accelerate this simulation process.
AI also accelerates existing numerical methods, for example through AI-based preconditioners that speed up the solution of systems of linear equations. A preconditioner transforms a system into a form that can be solved more efficiently; constructing a good one is intricate, but when achieved it significantly speeds up linear solves while maintaining numerical accuracy. Preconditioners are commonly employed in computer-aided engineering, such as vehicle crash analysis. Upcoming changes in supercomputing hardware will also transform the connection between CPUs and GPUs, allowing for more integrated solutions.
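The preconditioner idea above can be sketched concretely. The snippet compares conjugate gradient with and without a simple Jacobi preconditioner on an ill-conditioned symmetric positive-definite system; a classical preconditioner stands in for the AI-learned ones the article describes, and the matrix, tolerance, and sizes are all illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient; M_inv applies M^-1 to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k          # converged after k iterations
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Ill-conditioned test system: diagonal entries span four orders of
# magnitude (a deliberately easy case for the Jacobi preconditioner).
rng = np.random.default_rng(0)
n = 200
A = np.diag(np.logspace(0, 4, n))
b = rng.standard_normal(n)

# Identity "preconditioner" = plain CG, versus Jacobi (divide by diag(A)).
x_plain, iters_plain = pcg(A, b, M_inv=lambda r: r)
x_jac, iters_jac = pcg(A, b, M_inv=lambda r: r / np.diag(A))

print(f"plain CG: {iters_plain} iterations, Jacobi PCG: {iters_jac}")
```

The preconditioned solve converges in far fewer iterations because the transformed system is much better conditioned; an AI-based preconditioner aims for the same effect on systems where no cheap classical transform is known.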
Historically, optimising data transfer between CPU and GPU memory was crucial. By colocating the CPU and GPU with equal bandwidth to shared memory, a GPU with 600 GB of fully coherent memory is created. This minimises data-movement concerns, letting the operating system manage transfers efficiently and ultimately advancing scientific breakthroughs through AI.