Supercomputers, the machines at the heart of HPC (High-Performance Computing), are systems built to carry out computing workloads at the highest performance levels. Historically used for scientific computing, supercomputers can run simulations whose complexity lies orders of magnitude beyond the reach of ordinary PCs or of the servers found in the vast majority of data centers. To convey the idea: a system listed in the TOP500, the official ranking of the five hundred most powerful supercomputers in the world, can complete in a fraction of a second a calculation that a single workstation, however powerful, would take years to finish.
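The scale of that gap can be sketched with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (roughly an exascale-class system versus a capable workstation), not measurements of any specific machine:

```python
# Back-of-the-envelope comparison (illustrative figures, not measurements).
SUPERCOMPUTER_FLOPS = 1e18  # ~1 Exaflop/s, exascale-class system
WORKSTATION_FLOPS = 1e11    # ~100 Gigaflop/s, a capable single workstation

# A job that keeps the supercomputer busy for one second...
job_flops = SUPERCOMPUTER_FLOPS * 1.0

# ...would occupy the workstation for this long:
seconds = job_flops / WORKSTATION_FLOPS
years = seconds / (365 * 24 * 3600)

# One exascale-second is roughly a third of a workstation-year,
# so a multi-second exascale job quickly adds up to years.
print(f"{seconds:.0f} s  ≈ {years:.2f} years")
```

A job that ran for just ten seconds on the exascale machine would, under these assumptions, keep the workstation busy for over three years.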
What distinguishes a personal computer from a supercomputer is the order of magnitude of the computing power and of the hardware-software architecture. The data centers hosting HPC systems are also highly energy-intensive, given the sheer size of the machines and the number of cores to be powered; they therefore require specific economic and environmental sustainability assessments, with an increasingly marked orientation towards renewable energy sources.
The Official TOP500 Ranking And Its New Number One: Frontier
To classify the supercomputers active on the planet uniformly, the TOP500 project was founded in 1993; twice a year it publishes the updated list of the 500 systems considered the best performing, based on the results obtained through the Linpack benchmark. Beyond the ranking order itself, TOP500 summarizes information about each HPC system's location, owner and peak computational performance, along with the technology providers involved and the general area of use. The TOP500 is usually presented officially at major industry events, such as the ISC High Performance conference, held in Germany, and the SC conference, held in the United States.
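The Linpack result used by TOP500 (commonly called Rmax) is a measured value, reported alongside the system's theoretical peak (Rpeak). Rpeak can be estimated with a standard back-of-the-envelope formula; the configuration below is purely illustrative and does not describe any actual TOP500 machine:

```python
# Theoretical peak (Rpeak) estimate for an HPC system.
# Standard formula: nodes x cores per node x clock x FLOPs per core per cycle.
# All numbers below are illustrative assumptions, not a real machine's specs.
def rpeak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Return the theoretical peak in floating-point operations per second."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

peak = rpeak_flops(nodes=9_000, cores_per_node=64,
                   clock_hz=2.0e9, flops_per_cycle=32)
print(f"Rpeak ≈ {peak / 1e15:.1f} Petaflop/s")

# Linpack measures what the system actually sustains (Rmax), which is
# always some fraction of Rpeak; that fraction is the HPL efficiency.
rmax = 0.70 * peak  # assume 70% efficiency, a plausible illustrative value
print(f"Rmax  ≈ {rmax / 1e15:.1f} Petaflop/s")
```

It is the measured Rmax, not the theoretical Rpeak, that determines a system's position in the ranking.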
With Frontier, AMD also delivers on the promise so far left unkept by the Aurora supercomputer, based on Intel technology, which, despite having been talked about for several years, continues to be postponed, with no reliable prediction of its actual delivery. Frontier is therefore the most powerful supercomputer in the world, net of one great mystery: unofficial sources credit two Chinese HPC systems with a peak capacity of 1.3 Exaflops, higher than the 1.1 Exaflops officially measured by Linpack on Frontier. Those Chinese systems have never been subjected to any official benchmark, so their actual potential remains shrouded in mystery, and opinions on the matter are sharply divided.
Some experts are convinced that the claims are pure propaganda, pointing out that neither Chinese HPC system has been officially subjected to the Linpack tests from which the official TOP500 rankings are compiled. Others argue that, given the resources China has allocated to research and development in supercomputing, the disclosed values, however impressive, could prove credible, and attribute the aura of mystery surrounding them to political reasons. The question remains open. Still, as far as the results certified by the TOP500 project are concerned, Frontier is today the undisputed leader of a constantly evolving technology.
AMD’s Overwhelming Rise In The HPC Market
The great success of Frontier, as mentioned, is built on AMD's third-generation Epyc processors, which in just five years have brought as many as 93 supercomputers into the TOP500. Epyc processors, used in HPC and server systems, are distinguished by the high number of cores on each CPU. The platform pairs "traditional" CPUs with GPUs for vector-based calculations, a combination that could revive AMD's competitiveness against NVIDIA, its historical rival in video hardware and software technologies.
On the CPU front, Intel has felt the blow and officially acknowledged it, to the point that the new CEO of the Santa Clara company has identified, among the first points of his strategic renewal, a series of operations aimed at regaining the market share lost to AMD and to producers of non-x86 technologies, such as projects based on the Arm architecture, whose announced acquisition by NVIDIA was never completed, mainly for antitrust reasons. At the TOP500 level, Intel remains the leading provider, with 388 systems to its name, a figure nonetheless significantly lower than the 464 it could boast five years ago.
This decline is essentially due to a series of misjudgments on the strategic and technological front, whose effects are being felt especially now. CEO Pat Gelsinger has admitted that the decline will continue through 2023, while, barring unexpected setbacks to the company's relaunch strategy, in 2024 Intel should begin to regain leadership on new HPC systems. The technological war in HPC is becoming increasingly intense; but why is supercomputing an increasingly demanded resource on the market?
The Applications Of Supercomputers
As mentioned in the introduction, supercomputers are historically used to solve problems of science that are significant from a computational point of view, carrying out simulations that confirm or refute the hypotheses formulated by researchers. Supercomputers are therefore employed on many fronts of scientific research, enabling ever more important discoveries about what remains unknown in the universe we live in.
It is difficult at the moment to name a single scientific field indifferent to the potential offered by supercomputers, which have long attracted significant attention from the research and development departments of industrial players as well. At the mainstream level, supercomputers are constantly engaged in studying the complexity of climate change, simulating weather forecasts and carrying out vital research in microbiology, as was demonstrated on more than one occasion during the Covid-19 pandemic. Medicine and pharmacology represent another, interconnected area in which supercomputers are constantly engaged in research.