In a strategic move to maintain its stronghold in the AI chip market, Nvidia (NASDAQ:NVDA) announced on Monday a significant upgrade to its premier artificial intelligence chip. The H200, slated for release next year, boasts enhanced features and is expected to be adopted by industry giants including Amazon (NASDAQ:AMZN), Alphabet’s Google (NASDAQ:GOOGL), and Oracle (NYSE:ORCL).

The H200’s key improvement over its predecessor, the H100, is an increase in high-bandwidth memory, a pivotal component that determines how quickly the chip can process data. One of the most costly parts of the chip, the expanded high-bandwidth memory is poised to speed up the response times of AI services powered by Nvidia hardware, such as OpenAI’s ChatGPT and other generative AI applications.

The H200 features 141 gigabytes of high-bandwidth memory, a substantial jump from the H100’s 80 gigabytes. Nvidia has not disclosed the suppliers of the new chip’s memory, though Micron Technology (NASDAQ:MU) said in September that it was working to become a supplier to Nvidia.

Nvidia also buys memory from Korea’s SK Hynix, which reported last month that demand for AI chips was contributing to a resurgence in its sales.

In a bid to extend the H200’s reach, Nvidia said that major cloud service providers, including Amazon Web Services, Google Cloud, Microsoft (NASDAQ:MSFT) Azure, and Oracle Cloud Infrastructure, will be among the first platforms to offer access to the upgraded chips. Specialty AI cloud providers CoreWeave, Lambda, and Vultr will also offer them, underscoring the broad industry collaboration accompanying Nvidia’s latest AI chip advancement.