Nvidia bets on AI hype to help it chip away at Intel

Chip maker Nvidia hopes to mount a stronger challenge to Intel with a new product line that promises the best generative AI capabilities.

Nick Wood

May 30, 2023

Nvidia Grace Hopper

Over the weekend, the company announced that its new ‘superchip’ has entered full production and will power a new supercomputer offering. The chip combines Nvidia’s Grace line of central processing units (CPUs) with its Hopper graphics processing unit (GPU) into one all-powerful product, cleverly named Grace Hopper after the pioneering American computer programmer.

Up to 256 of these will underpin its new DGX GH200, a supercomputer pitched at developers who want to build large language models for AI chatbots, complex algorithms for recommendation engines, and neural networks for fraud detection and data analytics.

Nvidia said Google Cloud, Microsoft, and Facebook parent Meta are among the first hyperscalers expected to make use of DGX GH200’s capabilities. It’s not clear whether that means they have actually signed on the dotted line or are just taking a look at it.

“Generative AI is rapidly transforming businesses, unlocking new opportunities and accelerating discovery in healthcare, finance, business services and many more industries,” said Ian Buck, Nvidia’s VP of accelerated computing. “With Grace Hopper superchips in full production, manufacturers worldwide will soon provide the accelerated infrastructure enterprises need to build and deploy generative AI applications that leverage their unique proprietary data.”

Beyond the enterprise, the Grace Hopper superchip has potential ramifications for the telecoms industry, because it might just settle a debate currently taking place in the open RAN market.

On one side is Intel, which insists that a CPU can handle all the heavy lifting, from the physical layer all the way up the stack to the application layer, with minimal help. On the other side are players like Nvidia, which have a long, storied history of augmenting the performance of CPUs with GPUs. Understandably, they think open RAN requires dedicated hardware acceleration to improve physical layer processing performance. The trade-off is that installing high-performance GPU cards in a server adds complexity, cost and power consumption.

Intel’s new 4th Gen Xeon chip offers vRAN Boost, which claims to bring the capabilities of hardware acceleration to a CPU without increasing its energy use. But now Nvidia is here to spoil the party with a CPU and GPU in a single form factor.

One of the potential bottlenecks in a hardware-accelerated set-up is the connection between the GPU and the rest of the system: it doesn’t matter how good the accelerator card is if the interface with the motherboard can’t carry the traffic. Nvidia says Grace Hopper solves this with something it calls NVLink-C2C, which connects the CPU and the GPU with up to 900 GB/s of bandwidth, a seven-fold improvement over PCIe Gen5, the latest generation of the standard interface.
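For a rough sense of where that seven-fold figure comes from, here is a back-of-the-envelope sketch in Python. The PCIe Gen5 numbers are commonly quoted reference values (32 GT/s per lane across a full x16 slot), assumed here for illustration rather than taken from Nvidia’s announcement.

```python
# Back-of-the-envelope check of the "seven-fold" claim.
# The PCIe figures below are standard reference values (assumed, not from Nvidia's announcement).
PCIE_GEN5_GT_PER_LANE = 32            # giga-transfers per second, per lane
LANES = 16                            # a full x16 slot

# ~64 GB/s each way, ~128 GB/s counting both directions
pcie_gen5_bidir_gbs = PCIE_GEN5_GT_PER_LANE * LANES / 8 * 2

nvlink_c2c_gbs = 900                  # Nvidia's quoted total for NVLink-C2C

print(f"PCIe Gen5 x16 (both directions): ~{pcie_gen5_bidir_gbs:.0f} GB/s")
print(f"NVLink-C2C vs PCIe Gen5: ~{nvlink_c2c_gbs / pcie_gen5_bidir_gbs:.1f}x")
```

On those assumptions a bidirectional x16 link works out at roughly 128 GB/s, which is where the roughly seven-fold ratio against 900 GB/s comes from.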

Of course, Nvidia needs to get server makers on board and provide support to software developers before it can start encroaching on Intel’s turf in any meaningful way. Intel’s x86-based chip architecture dominates the PC and server markets, so this is where software development tends to focus. Nvidia’s Grace CPUs, by contrast, are based on Arm designs.

On the software side, Nvidia has three platforms to help developers program on Grace Hopper. First up is AI Enterprise, which offers more than 100 frameworks, pre-trained models, and development tools to streamline development and deployment of generative AI, computer vision, and speech AI solutions.

Next there is Nvidia Omniverse, a development platform for building and operating metaverse applications.

Finally there is the RTX platform, which leverages Nvidia’s graphics acceleration pedigree to give content creators real-time photorealistic rendering, AI-enhanced graphics, and video and image processing.

When it comes to hardware, Nvidia noted that Cisco, Dell, HPE, Lenovo, Supermicro and Atos-owned Eviden all currently offer a broad range of Nvidia-accelerated systems, the implication being that they should find it easy enough to come out with Grace Hopper-powered servers.

Nvidia seems to have already convinced one telco to take the plunge. Softbank separately announced that it will roll out new data centres across Japan based on Nvidia’s new platform, which will be used to handle both AI and 5G workloads.

“Our collaboration with Nvidia will help our infrastructure achieve a significantly higher performance with the utilisation of AI, including optimisation of the RAN. We expect it can also help us reduce energy consumption and create a network of interconnected data centres that can be used to share resources and host a range of generative AI applications,” said Softbank CEO Junichi Miyakawa.

Softbank said the data centres will be more evenly distributed across its footprint, which should help it operate more efficiently at peak capacity, with lower latency and lower overall energy costs. It is also working on 5G and 6G applications for autonomous driving, AR/VR, computer vision, digital twins and AI factories.

Meanwhile, Nvidia is also taking aim at the in-car infotainment and safety market.

It has partnered with modem maker MediaTek, which will incorporate an Nvidia GPU chiplet into its upcoming automotive systems-on-chip (SoCs). This means MediaTek’s Dimensity Auto platform – which offers various in-car services like voice interaction and high-speed connectivity – will also be able to offer enhanced graphics and driver-assistance capabilities.

“With this partnership, our collaborative vision is to provide a global one-stop shop for the automotive industry, designing the next generation of intelligent, always-connected vehicles,” said Rick Tsai, vice chairman and CEO of MediaTek, in a separate statement.

“Through this special collaboration with Nvidia, we will together be able to offer a truly unique platform for the compute intensive, software-defined vehicle of the future,” he said.

“AI and accelerated computing are fuelling the transformation of the entire auto industry,” added Jensen Huang, founder and CEO of Nvidia. “The combination of MediaTek’s industry-leading SoC and Nvidia’s GPU and AI software technologies will enable new user experiences, enhanced safety and new connected services for all vehicle segments, from luxury to mainstream.”

 


About the Author

Nick Wood

Nick is a freelancer who has covered the global telecoms industry for more than 15 years. Areas of expertise include operator strategies; M&As; and emerging technologies, among others. As a freelancer, Nick has contributed news and features for many well-known industry publications. Before that, he wrote daily news and regular features as deputy editor of Total Telecom. He has a first-class honours degree in journalism from the University of Westminster.

