Did DeepSeek Really Pose a Threat to NVIDIA?
In the intricate realm of artificial intelligence (AI), rapid technological evolution and a wave of emerging enterprises have significantly transformed the competitive landscape. Among the front-runners, the DeepSeek model has quickly gained prominence for its distinctive approach to model training, sparking a broader conversation about the conventional "hardware stacking" methodology. This shift has prompted industry experts to question the long-standing reliance on NVIDIA's GPUs for large-scale model training. The pertinent question is: does DeepSeek truly pose a formidable challenge to NVIDIA's reign? Insights from Morgan Stanley's latest report offer a revealing perspective on this unfolding narrative.
Since the end of 2024, DeepSeek has emerged as a formidable player in the AI arena. Its inference capabilities and robust open-source strategy have generated considerable excitement among developers and corporate users alike. The model's performance, particularly in scenarios demanding high inference efficiency, draws parallels to the best-performing models on the market. For instance, DeepSeek's open-source inference model, DeepSeek-R1, has excelled in complex tasks such as mathematical computation, coding, and natural language processing, holding its own against OpenAI's renowned o1 model. On the global AI model ranking platform Arena, DeepSeek-R1 achieved a score of 1357, slightly surpassing o1's 1352 and sharing the top spot in the style-control model category. This performance has earned DeepSeek widespread attention and allowed it to expand its market presence with remarkable speed.
The rise of burgeoning companies like DeepSeek presents a direct challenge to established giants such as NVIDIA. Coupled with the soaring cost of NVIDIA's products, a wave of alternatives attempting to supplant NVIDIA GPUs has arisen, with Application-Specific Integrated Circuits (ASICs) taking center stage.
As NVIDIA's growth began to show signs of fatigue and AMD's performance lagged, investment momentum within the AI sector shifted notably toward ASIC technology. Some ASIC chips have been priced at a fraction of NVIDIA's equivalent products, drawing initial enthusiasm for their purported performance advantages. However, research conducted by Morgan Stanley on approximately 25 alternatives to NVIDIA's GPUs concluded that most lack long-term viability and market recognition. Intel has struggled in this domain for about a decade, failing to achieve significant success despite numerous acquisitions. Even AMD faced challenges with earlier generations of its products, making a significant breakthrough only with the MI300 in 2024.
While ASICs show potential in niche markets, their clearest success story is Google's Tensor Processing Unit (TPU), an exemplar of how custom chips can shine under specific circumstances. Google's transformation of the AI landscape with the introduction of the Transformer architecture, complemented by custom chips developed with Broadcom, allowed it to carve out a leading position. At the time, NVIDIA was concentrating on optimizing GPUs for convolutional neural networks, giving Google's TPU an opportune edge within its specialized domain. This focus enabled Broadcom to garner over $8 billion in TPU revenue. Nonetheless, by 2025 Google continued to allocate a portion of its spending to NVIDIA, a dual strategy driven by cloud investments and the unmatched efficacy of NVIDIA products in running large-scale Transformer models.
Examining costs reveals an intriguing dilemma. While certain ASIC chips, priced at around $3,000, may appear far more cost-effective than NVIDIA's H100 at roughly $20,000, this simplified view changes when it comes to building clusters. NVIDIA has leveraged copper cabling to create an interconnect domain of 72 GPUs at relatively low expense.
In contrast, ASIC solutions often rely on pricier optical communication technologies. Additionally, while both approaches depend on similar critical components such as high-bandwidth memory (HBM), NVIDIA retains near-monopolistic bargaining power in procuring the latest HBM chips.
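To make the arithmetic behind this argument concrete, here is a minimal sketch of how a per-chip price gap can narrow once interconnect and memory are counted. Only the $3,000 and $20,000 chip prices and the 72-GPU domain size come from the report; every other figure is a hypothetical placeholder used purely for illustration, not a real cost estimate.

```python
# Back-of-the-envelope comparison of per-chip price versus the cost of a
# full 72-chip interconnect domain. Interconnect and HBM figures below are
# assumed placeholders, not numbers from the report.

def domain_cost(chip_price, num_chips, interconnect_per_chip, hbm_per_chip):
    """Total hardware cost of one interconnect domain (USD)."""
    return num_chips * (chip_price + interconnect_per_chip + hbm_per_chip)

NUM_CHIPS = 72  # size of NVIDIA's copper-linked GPU domain cited above

nvidia = domain_cost(20_000, NUM_CHIPS,
                     interconnect_per_chip=1_000,  # copper cabling (assumed cheap)
                     hbm_per_chip=3_000)           # assumed HBM cost per chip
asic = domain_cost(3_000, NUM_CHIPS,
                   interconnect_per_chip=6_000,    # optical links (assumed pricier)
                   hbm_per_chip=3_000)

# Under these assumptions the ~6.7x per-chip price gap shrinks to ~2x at the
# domain level, before any difference in performance is even considered.
print(f"NVIDIA 72-GPU domain: ${nvidia:,}")  # $1,728,000
print(f"ASIC 72-chip domain:  ${asic:,}")    # $864,000
```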
On the software front, NVIDIA has established itself through the widely adopted CUDA development toolkit, which makes its chips easier to adapt to software changes and to run a variety of workloads. For instance, Databricks, a major player in the big data sector, estimated that bringing Amazon's Trainium chips fully into production would take "weeks or months." In stark contrast, clients using NVIDIA's offerings can deploy much faster, ultimately tilting customer preferences toward NVIDIA.
According to Morgan Stanley, despite the measured success of Google's TPU and AMD's MI300, user enthusiasm for NVIDIA's ecosystem remains robust. Projections indicate that NVIDIA's market share could increase further in 2025, with estimated processor revenues of $98 billion for NVIDIA, compared with $5 billion for AMD and $8 billion for Broadcom. The combined revenue of companies such as ChipMOS and Marvell stands at approximately $2 billion, putting commercial silicon at roughly 90% of the market and ASICs at only 10%. NVIDIA's growth is projected to outpace that of Google's TPU by 50% to 100%, especially in the second half of the year, when its revenue growth remains significantly stronger than that of ASIC manufacturers or AMD.
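As a quick sanity check, the short calculation below reproduces the roughly 90/10 split implied by the revenue estimates quoted above (all figures in billions of US dollars, as cited in the report).

```python
# Market-share split implied by the 2025 revenue estimates cited above
# (figures in billions of USD, as quoted from the Morgan Stanley report).
commercial_silicon = {"NVIDIA": 98, "AMD": 5}
asic_vendors = {"Broadcom": 8, "ChipMOS/Marvell and others": 2}

commercial_total = sum(commercial_silicon.values())  # 103
asic_total = sum(asic_vendors.values())              # 10
market_total = commercial_total + asic_total         # 113

print(f"Commercial silicon share: {commercial_total / market_total:.0%}")  # ~91%
print(f"ASIC share:               {asic_total / market_total:.0%}")        # ~9%
```

With these figures the split comes out at roughly 91% versus 9%, consistent with the approximately 90/10 division cited in the report.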
The competitive landscape ahead suggests that while DeepSeek injects renewed vigor and fresh thinking into the AI field, NVIDIA's supremacy in the AI chip market remains firmly intact for now. Its formidable research and development strength, extensive ecosystem, and cost-control advantages give it a pronounced edge over a multitude of competitors.