
Nvidia Chip Shortages Leave AI Startups Scrambling for Computing Power

Sep 28, 2023

AROUND 11 AM Eastern on weekdays, as Europe prepares to sign off, the US East Coast hits the midday slog, and Silicon Valley fires up, Tel Aviv-based startup Astria’s AI image generator is as busy as ever. The company doesn’t profit much from this burst of activity, however.

Companies like Astria that are developing AI technologies use graphics processing units (GPUs) to train software that learns patterns in photos and other media. The chips also handle inference, or the harnessing of those lessons to generate content in response to user prompts. But the global rush to integrate AI into every app and program, combined with lingering manufacturing challenges dating back to early in the pandemic, has put GPUs in short supply.
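
In code terms, the two workloads look something like the following PyTorch sketch, with a hypothetical toy model standing in for a real image generator: training computes gradients from example data to update the model’s weights, while inference simply runs the trained model on a new prompt.

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for an image or text generator.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(64, 64).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Training: the GPU learns patterns from (input, target) example pairs.
inputs = torch.randn(32, 64, device=device)
targets = torch.randn(32, 64, device=device)
loss = nn.functional.mse_loss(model(inputs), targets)
loss.backward()   # gradients computed on the GPU
optimizer.step()  # model weights updated

# Inference: the same chip applies what it learned to a new prompt.
with torch.no_grad():
    output = model(torch.randn(1, 64, device=device))
```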

That supply crunch means that at peak times, the GPUs Astria needs to generate images for its clients are running at full capacity at its main cloud computing vendor, Amazon Web Services, and the company has to fall back on more powerful, and more expensive, GPUs to get the job done. Costs quickly multiply. “It’s just like, how much more will you pay?” says Astria’s founder, Alon Burg, who jokes that he wonders whether investing in shares of Nvidia, the world’s largest maker of GPUs, would be more lucrative than pursuing his startup. Astria charges its customers in a way that balances out those expensive peaks, but it is still spending more than desired. “I would love to reduce costs and recruit a few more engineers,” Burg says.


There is no immediate end in sight for the GPU supply crunch. The market leader, Nvidia, which supplies an estimated 60 to 70 percent of AI server chips, announced in late August that it sold a record $10.3 billion worth of data center GPUs in the second quarter, up 171 percent from a year ago, and that sales should outpace expectations again in the current quarter. “Our demand is tremendous,” CEO Jensen Huang told analysts on an earnings call. Global spending on AI-focused chips is expected to hit $53 billion this year and to more than double over the next four years, according to market researcher Gartner.

The ongoing shortages mean that companies are having to innovate to maintain access to the resources they need. Some are pooling cash to ensure that they won’t be leaving users in the lurch. Everywhere, engineering terms like “optimization” and “smaller model size” are in vogue as companies try to cut their GPU needs, and investors this year have bet hundreds of millions of dollars on startups whose software helps companies make do with the GPUs they’ve got. One of those startups, Modular, has received inquiries from over 30,000 potential customers since launching in May, according to its cofounder and president, Tim Davis. Adeptness at navigating the crunch over the next year could become a determinant of survival in the generative AI economy.
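
One common form of that optimization is shrinking the numerical precision of a model’s weights. As a rough illustration (the model below is a hypothetical placeholder), casting a PyTorch model from 32-bit to 16-bit floats halves its memory footprint, which can let the same GPU serve a larger model or batch:

```python
import torch
import torch.nn as nn

def param_megabytes(model: nn.Module) -> float:
    """Memory taken by the model's weights, in MB."""
    return sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6

# Hypothetical placeholder; production models are orders of magnitude larger.
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))
print(f"float32 weights: {param_megabytes(model):.1f} MB")

model = model.half()  # cast to float16: same architecture, half the memory
print(f"float16 weights: {param_megabytes(model):.1f} MB")
```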

“We live in a capacity-constrained world where we have to use creativity to wedge things together, mix things together, and balance things out,” says Ben Van Roo, CEO of AI-based business writing aid Yurts. “I refuse to spend a bunch of money on compute.”

CLOUD COMPUTING PROVIDERS are very aware that their customers are struggling for capacity. Surging demand has “caught the industry off guard a bit,” says Chetan Kapoor, a director of product management at AWS.


The time needed to acquire and install new GPUs in data centers has put the cloud giants behind, and the configurations in highest demand add further stress. Whereas most applications can run on processors loosely distributed across the world, the training of generative AI programs has tended to perform best when GPUs are physically clustered tightly together, sometimes 10,000 chips at a time. That ties up availability like never before.
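
The clustering matters because distributed training forces every GPU to exchange gradient updates with the others on each training step, so slow links between distant machines stall the entire job. A minimal sketch, assuming PyTorch’s DistributedDataParallel with the NCCL backend (the model and launch command are illustrative placeholders):

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Run once per GPU, e.g. `torchrun --nproc_per_node=8 train.py`.
dist.init_process_group(backend="nccl")  # NCCL moves data between GPUs
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

model = DDP(nn.Linear(64, 64).cuda())  # hypothetical placeholder model

loss = model(torch.randn(32, 64).cuda()).sum()
loss.backward()  # triggers an all-reduce: every GPU swaps gradients with the rest
```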


Kapoor says AWS’ typical generative AI customer is accessing hundreds of GPUs. “If there’s an ask from a particular customer that needs 1,000 GPUs tomorrow, that’s going to take some time for us to slot them in,” Kapoor says. “But if they are flexible, we can work it out.”

AWS has suggested clients adopt more expensive, customized services through its Bedrock offering, where chip needs are baked in and clients don’t have to worry about capacity. Or customers could try AWS’ custom AI chips, Trainium and Inferentia, which have registered an unspecified uptick in adoption, Kapoor says. Retrofitting programs to run on those chips instead of Nvidia’s has traditionally been a chore, though Kapoor says moving to Trainium now takes as little as changing two lines of software code in some cases.
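
Kapoor didn’t specify which two lines, but Trainium’s Neuron SDK builds on PyTorch/XLA, where the core change is pointing a program at an XLA device instead of a CUDA one. A hedged sketch of what such a port can look like (the model is a hypothetical placeholder, and real workloads may need further changes):

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # PyTorch/XLA, used by AWS's Neuron SDK

device = xm.xla_device()              # changed line 1: was `device = "cuda"`
model = nn.Linear(64, 64).to(device)  # hypothetical placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

loss = model(torch.randn(8, 64, device=device)).sum()
loss.backward()
xm.optimizer_step(optimizer)          # changed line 2: was `optimizer.step()`
```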

Challenges abound elsewhere too. Google Cloud hasn’t been able to keep up with demand for its homegrown GPU alternative, the tensor processing unit (TPU), according to an employee not authorized to speak to media. A spokesperson didn’t respond to a request for comment. Microsoft’s Azure cloud unit has dangled refunds to customers who aren’t using GPUs they reserved, The Information reported in April. Microsoft declined to comment.
