
Why Nvidia’s AI strategy runs through Israel
With breakthroughs in chips and networking, the company’s R&D hub in Yokneam is key to its future.
A trillion dollars. That’s—give or take a few tens of billions—the market value that Nvidia has lost in the two months since Chinese firm DeepSeek introduced its generative artificial intelligence (GenAI) model, R1. The unveiling of R1, which performs comparably to models from companies like OpenAI but was trained using a fraction of their computing power, has sparked speculation about a decline in demand for AI chips—Nvidia’s main source of revenue.
At its annual developer conference, held last week in San Jose, in the heart of Silicon Valley, Nvidia set out to prove that the so-called "DeepSeek moment" would actually drive even greater demand for its products. And its secret weapon in this strategy? The company’s R&D center in Yokneam, Israel.
The highlight of the conference was, of course, the opening keynote by Nvidia founder and CEO Jensen Huang. Dressed in his iconic black leather jacket, he took the stage at the San Jose hockey arena on Tuesday, wielding a T-shirt gun and receiving a rock-star welcome from the 15,000 attendees. Or perhaps, more accurately, a reception befitting this generation’s Steve Jobs (though Huang was born just eight years later). Anyone lucky enough to catch one of his T-shirts got a special prize—Huang’s original signature. A lifelong souvenir, or maybe just a valuable resale opportunity on eBay.
Following this unexpected opening, Huang spent nearly two and a half hours outlining his vision for Nvidia and the AI industry over the coming year. He spoke without a teleprompter, with palpable enthusiasm, though often delving into technical details aimed primarily at the company’s developer community rather than the general public. And while he never explicitly mentioned DeepSeek, his message was clear: the rise of R1 does not signal the end of Nvidia’s AI boom.
"The computational requirements of AI are far greater and accelerating rapidly," Huang said. "The amount of computing we need for AI agents (models that can perform autonomous tasks) and 'thinking models' is 100 times greater than what we expected this time last year. A thinking AI model breaks down a problem into steps, explores different approaches, chooses the best answer, and verifies it. In the past, ChatGPT couldn’t handle complex questions because it only attempted to answer once. Thinking models, however, iterate step by step and employ multiple techniques. As a result, the amount of generated content (tokens) has surged. These models are now more complex, producing 10 times more tokens and requiring dramatically more computing power."
To meet these demands, Nvidia’s latest product announcements focused on improving access to significantly greater computing power. The main highlight was the introduction of Blackwell Ultra, Nvidia’s next-generation AI processor, set to launch in the second half of this year. According to Huang, Blackwell Ultra was designed to handle the immense computational requirements of thinking models during runtime—more than offsetting the efficiency gains in the training phase that DeepSeek’s R1 demonstrated.
The processor’s capabilities are so advanced that five server racks—each containing 72 Blackwell Ultra processors—would deliver computing power equivalent to Nvidia’s Israel-1 supercomputer, one of the 35 most powerful supercomputers in the world. The server racks’ communication chips were developed at Nvidia’s R&D center in Yokneam.
Alongside Blackwell Ultra, Nvidia also unveiled Dynamo, an open-source software environment for managing inference (i.e., real-time AI operations) in thinking models. Developed in Israel, Dynamo enables up to 1,000 AI processors to work simultaneously on a single prompt, boosting the performance of models like DeepSeek’s R1 by up to 30 times.
Another major focus of Huang’s presentation was Nvidia’s communications chip solutions, also spearheaded by the Yokneam R&D center. The most significant announcement in this regard was the development of a silicon photonics chip, which Nvidia claims will revolutionize communications infrastructure in data centers.
Communications chips and switches are critical to data centers—if processors cannot exchange data at ultra-high speeds, their computational power is wasted. One of the biggest bottlenecks in AI infrastructure is the optical transceiver, which connects AI chips to network switches by converting optical signals into electrical signals, and vice versa. These transceivers account for 10% of a data center’s total power consumption.
In a facility with 400,000 AI chips, there are 2.4 million optical transceivers consuming 40 megawatts of energy. Nvidia’s silicon photonics solution eliminates the need for these transceivers, integrating the light-to-electricity conversion directly into the network switch. This innovation improves energy efficiency by 3.5 times, enhances network reliability by a factor of 10 (by reducing failure points), and cuts data center construction time by 30%. The breakthrough is the result of research efforts that began more than half a decade ago—well before Nvidia acquired Mellanox and turned it into the backbone of its Israeli R&D operations.
Other announcements included Agentic AI, an Nvidia model tailored for building AI agents, created with significant input from the Israeli R&D center and already in use by Microsoft, Salesforce, and Amdocs. Huang also introduced Isaac GR00T N1, an open-source foundation model for humanoid robotics, which has completed its initial training phase and is now available for companies looking to develop robotic applications.
If you noticed a recurring theme throughout Huang’s announcements—the prominent role of Nvidia’s Yokneam center—that’s no accident. Since acquiring Mellanox for $6.9 billion in 2019, Nvidia has transformed its Israeli R&D activity (which now employs around 15% of its global workforce) into a cornerstone of its chip development. This was reinforced in a slide towards the end of Huang’s presentation, showcasing Nvidia’s roadmap for the next three years. The company highlighted four core processor types as its most critical product lines: AI chips, CPUs, and two categories of communications chips—one for intra-server communication and another for inter-server networking. Notably, the development of three of these four product lines is largely led by the Yokneam R&D center.
Nvidia Israel is no longer just an important R&D hub—it plays a pivotal role in shaping the company’s flagship products. After Huang’s presentation, it’s clear that Nvidia Israel is central to his strategy for regaining the trillion dollars in market value the company has lost in recent months. In many ways, it is almost his entire strategy.
Huang is betting that the rise of thinking models and AI agents will increase demand for computing power and solutions that optimize hardware and server efficiency. And he’s counting on the Yokneam team to deliver those solutions. From a purely technological perspective, it seems the center has already succeeded—delivering multiple breakthroughs that justify Nvidia’s $6.9 billion Mellanox acquisition many times over.
Now, the question is whether Huang’s market assessment and strategy will pay off. If he’s right, and Nvidia returns to a growth trajectory, the engineers and executives in Yokneam will deserve much of the credit. If he’s wrong—if the AI market evolves differently than expected—Nvidia could face some challenging years ahead, potentially overshadowing the successes of the past two and a half years.
The writer was a guest at the conference.