UPDATED 18:15 EST / MAY 26 2023

INFRA

Three insights you might have missed from the ISC High Performance event

What comes next in high-performance computing?

That was the big question posed during this year’s ISC High Performance event, which ran May 22-24, as faster, bigger storage comes together with more data and artificial intelligence to drive innovation.

To hear more of the latest, theCUBE analysts connected with industry professionals to discuss how the realm of high-performance computing is being reinvented and what market developments are driving the need for innovation.

“We’ve been reporting for years that AI and HPC are coming together in a big way,” said theCUBE industry analyst Dave Vellante. “We’ve seen that accelerate in 2023. Organizations are trying to figure out how to apply the potential of foundation models, such as ChatGPT, to make them more productive. The question is, how do they do it?”

In addition to AI, Vellante and co-analyst John Furrier talked about sustainability, machine learning, quantum and more during SiliconANGLE Media’s livestreaming studio theCUBE’s coverage of the event. (* Disclosure below.)

Here are three key insights you may have missed:

1. Composable computing could be a solution to HPC demands.

In the past, when one wanted to build a cluster and include graphics processing units in the cluster, one would need to buy a specific server that had GPUs in it. Not so with composable computing, which brings to bear a very high-speed network that allows users to decide whether that server is a GPU server or a memory server, according to Jeff Kirk, engineer at Dell Technologies Inc.

“It’s this concept of an external, very high-speed, low-latency fabric, that lets you essentially decide what the architecture of your server is,” he said.
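To make the idea concrete, here is a toy Python model of composing a server from fabric-attached resource pools. The pool and compose names here are hypothetical, invented purely for illustration; no real vendor API is implied:

```python
# Illustrative only: a toy model of composable infrastructure, with
# hypothetical pool/compose names (no real vendor API is implied).
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A fabric-attached pool of one device type (GPUs, memory, NVMe)."""
    kind: str
    free: int

    def allocate(self, count: int) -> int:
        if count > self.free:
            raise RuntimeError(f"only {self.free} {self.kind} left in pool")
        self.free -= count
        return count

@dataclass
class ComposedServer:
    """A logical server stitched together over the low-latency fabric."""
    name: str
    devices: dict = field(default_factory=dict)

def compose(name: str, pools: dict, **request) -> ComposedServer:
    """Carve a server out of shared pools instead of buying fixed hardware."""
    server = ComposedServer(name)
    for kind, count in request.items():
        server.devices[kind] = pools[kind].allocate(count)
    return server

pools = {"gpu": ResourcePool("gpu", 16), "mem_tb": ResourcePool("mem_tb", 64)}
# Today this box is a GPU server; tomorrow it could be recomposed as a
# memory-heavy node by returning GPUs to the pool and drawing more memory.
node = compose("train-01", pools, gpu=8, mem_tb=4)
print(node)
```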

Of course, the network is the critical piece in this whole equation. It’s often the last bottleneck, the spot where everyone is waiting for things to go faster, according to Furrier.

“There’s physics involved, but it connects to servers, storage and makes the hardware act as one HPC system instead of several independent systems,” he said.

Given that context, how is the HPC networking space evolving? The real issue there is latency, according to Kirk.

“If you want to add memory operations to the list — in other words, you have some memory that’s attached out on the fabric — then you have to have low latency, because the central processing unit will stall until the memory access completes,” Kirk said. “This is definitely an area where latency matters.”
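A rough back-of-the-envelope calculation shows why those stalls add up. The latency figures below are illustrative assumptions, not numbers from the interview:

```python
# Back-of-the-envelope stall math; the latency figures below are
# illustrative assumptions, not measurements from the interview.
CPU_GHZ = 3.0                 # assumed core clock
LOCAL_DRAM_NS = 100           # typical local DRAM access, order of magnitude
FABRIC_MEM_NS = 1500          # assumed fabric-attached memory round trip

for label, ns in [("local DRAM", LOCAL_DRAM_NS), ("fabric memory", FABRIC_MEM_NS)]:
    stalled_cycles = ns * CPU_GHZ  # cycles the core may stall waiting on the load
    print(f"{label:>13}: ~{stalled_cycles:,.0f} cycles stalled per miss")
```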

One big change in the networking space when it comes to HPC is Remote Direct Memory Access over Converged Ethernet, or RoCE, which enables high-bandwidth, low-latency networking for HPC applications. RoCE has been around for a long time, but the technology has matured significantly, to the point where HPC users now have access to a broad ecosystem of hardware and software solutions for their network, according to Laurent Hendrichs, senior product line manager of high-speed Ethernet adapters and SmartNIC at Broadcom Inc.

“In addition to RoCE, Ethernet has substantially closed the performance gap with InfiniBand. Whereas in the past, InfiniBand might have been your go-to technology for a high-performance network, right now you have the ability to deploy a network with similar performance and latency using Ethernet,” Hendrichs said. “And taking all the benefits that come with Ethernet, right, in terms of the standards, software and hardware ecosystem.”
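For a sense of what RDMA looks like to software, here is a minimal sketch, assuming the pyverbs Python bindings that ship with rdma-core and a host with an RDMA-capable (for example, RoCE) NIC. It registers a buffer the NIC can then read and write directly, bypassing the kernel on the data path:

```python
# Minimal sketch: register a buffer for RDMA, assuming the pyverbs
# bindings from rdma-core and an RDMA-capable (RoCE) NIC present.
import pyverbs.device as d
from pyverbs.pd import PD
from pyverbs.mr import MR
import pyverbs.enums as e

devs = d.get_device_list()
if not devs:
    raise SystemExit("no RDMA-capable devices found")

# Open the first RDMA device the OS reports (could be a RoCE Ethernet NIC).
ctx = d.Context(name=devs[0].name.decode())
pd = PD(ctx)  # protection domain scoping the RDMA resources

# Register 4 KiB that the NIC may access directly, bypassing the kernel.
mr = MR(pd, 4096, e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE)
print(f"lkey={mr.lkey} rkey={mr.rkey}")  # keys a peer uses to target this buffer
```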

There’s plenty to be intrigued about in the world of HPC, including quantum. Though quantum computing shows promise, there’s also plenty of work to do.

“Specific workloads run better on GPUs while other workloads do not. In that same way, there are going to be particular classes of workloads that quantum technologies will accelerate and improve over our current methods, and then there are others that they won’t,” said Burns Healy, emerging technology researcher with the Dell Research Office.

Here’s the complete video interview with Jeff Kirk, Laurent Hendrichs and Hemal Shah, distinguished engineer and system, software and standards architect at Broadcom, part of SiliconANGLE’s and theCUBE’s coverage of the ISC High Performance event:

2. With great power comes great sustainability.

Dealing with all of this high-powered equipment means companies must also seek high-powered sustainability solutions to manage it. The big topic on Furrier’s mind was power and cooling: How does one get more power to all these CPUs, GPUs and other processors while simultaneously making everything sustainable?

“One of the biggest challenges is bringing sufficient power into these systems to support these high-performance processors, both CPUs and GPUs,” said David Hardy, who works in data center solutions strategy and business development at Dell and is product manager for PowerEdge. “Luckily, it’s more than worth it. The performance gains relative to the increased power make it a no-brainer to go with the next-generation systems.”
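The arithmetic behind that trade-off is straightforward. The figures in the sketch below are made-up placeholders to show the calculation, not Dell benchmark results:

```python
# Illustrative perf-per-watt comparison; the numbers are made-up
# placeholders to show the arithmetic, not Dell benchmark results.
prev_gen = {"perf": 1.0, "watts": 500}    # normalized baseline server
next_gen = {"perf": 2.0, "watts": 700}    # assumed: 2x perf at 1.4x power

ppw_prev = prev_gen["perf"] / prev_gen["watts"]
ppw_next = next_gen["perf"] / next_gen["watts"]
print(f"perf/watt gain: {ppw_next / ppw_prev:.2f}x")  # ~1.43x in this example
```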

The other part of the equation, of course, is how to cool it down.

“Generationally, we keep improving how much we can air-cool, and we’ve got liquid cooling options that make everything run very efficiently,” Hardy said.

The relationship between Intel Corp. and Dell has been well-documented over many years. Being mindful of the cooling challenges around sustainability, Intel has also been working with OEMs such as Dell to create solutions for hyper-powered processors.

“We have offerings that target these markets,” said Mohan Kumar, fellow, data center and artificial intelligence at Intel. “We have optimized heat sinks that target liquid cooling-based solutions. And we have, above all, this ‘no one left behind’ approach to solving performance problems.”

Beyond just processors, the company has GPU and AI solutions, according to Kumar. Intel also closely partners with Dell to ensure the proper standards are in place.

“We have an approach to essentially provide them with solutions,” he said.

Here’s theCUBE’s complete video interview with David Hardy, Mohan Kumar and Tim Shedd, engineering technologist, office of the chief technology officer at Dell:

3. HPC is making an impact in the world of finance.

Traditionally thought of as giant supercomputers reserved for specialized research, HPC has evolved. AI has had a big impact, with more people seeing opportunities for AI and HPC to intersect.

“AI used to be a lot of research; it would be one or two nodes, one or two GPUs and a lot of testing. But now, with all the things going on with OpenAI and with those things being near completion, you can start throwing some horsepower at it,” said Peter Nguyen, senior product manager at Dell.

HPC has also seen its usage explode across various industries. It’s now being used in healthcare, government, manufacturing and fintech. In the world of finance, HPC is being applied at the quantitative level, opening a new frontier in real-time risk analysis.

“Customers are doing large-scale simulations, Monte Carlo simulations, as well as doing risk valuations in real-time, so they can get ahead of the competition,” said Prabhu Ramamoorthy, customer/partner developer relationship manager at Nvidia Corp.
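For a flavor of what such a workload looks like, here is a minimal Monte Carlo value-at-risk sketch in NumPy. The portfolio and market parameters are illustrative assumptions; production risk engines run far larger simulations, typically on GPUs:

```python
# A minimal Monte Carlo value-at-risk sketch in NumPy; parameters are
# illustrative assumptions, not figures from the interview.
import numpy as np

rng = np.random.default_rng(42)
portfolio_value = 10_000_000      # assumed portfolio, in dollars
mu, sigma = 0.0005, 0.02          # assumed daily drift and volatility
n_paths = 1_000_000

# Simulate one-day returns and take the 1st-percentile loss (99% VaR).
returns = rng.normal(mu, sigma, n_paths)
pnl = portfolio_value * returns
var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% VaR: ${var_99:,.0f}")
```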

People want to do more in this area combined with artificial intelligence and other technologies, Ramamoorthy added. “For example, they want to use large language models and then use it for trading signals and do their own algorithms in qualitative finance,” he said.

In the middle of all the action is the STAC Benchmark Council, which brings together more than 50 leading technology vendors and over 400 financial firms. STAC’s main goal is to improve technology discovery and assessment for the finance industry, according to Peter Nabicht, president of the Securities Technology Analysis Center LLC.

When STAC first started, Nabicht was chief technology officer at a trading firm and had all of his engineers doing nothing but evaluating technology and running bake-offs. The idea that emerged was that everyone had similar workloads.

“STAC came along and helped bring together the Benchmark Council to define those workloads so that we could do apples-to-apples comparisons of different technology stacks to see how they solve the problems, how quickly they do it, how much throughput they can get done and how efficiently they do it,” he said. “Now, 15 years later, that’s what we do in a variety of areas, including HPC and AI.”
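The mechanics of an apples-to-apples comparison can be sketched in a few lines: fix the workload, then time each candidate stack against it. This toy harness only illustrates the idea; STAC’s actual benchmarks are far more rigorous:

```python
# A toy illustration of an apples-to-apples benchmark harness: fix the
# workload, then time each candidate stack on it. STAC's real benchmarks
# are far more rigorous; this only sketches the idea.
import time
import statistics

def benchmark(workload, stacks, repeats=5):
    results = {}
    for name, run in stacks.items():
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            run(workload)
            timings.append(time.perf_counter() - start)
        results[name] = statistics.median(timings)
    return results

# Hypothetical candidate implementations of the same fixed workload.
stacks = {
    "stack_a": lambda n: sum(i * i for i in range(n)),
    "stack_b": lambda n: sum(map(lambda i: i * i, range(n))),
}
for name, secs in benchmark(500_000, stacks).items():
    print(f"{name}: {secs * 1e3:.1f} ms (median of 5 runs)")
```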

Here’s theCUBE’s complete video interview with Prabhu Ramamoorthy, Peter Nabicht and Andrew Luu, product manager at Dell:

To watch more of theCUBE’s coverage of the ISC High Performance event, here’s our complete event video playlist:

(* Disclosure: TheCUBE is a paid media partner for the ISC High Performance event. Neither Dell Technologies Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Image by PhonlamaiPhoto / Canva
