Want to know what’s really happening in the Israeli tech ecosystem? Subscribe to IsraelTech's weekly newsletter and get it all delivered straight to your inbox.

The Biggest Bottleneck in AI Isn’t Talent. It’s Compute.

For most people, AI feels almost limitless.

New models appear every few months. Tools become faster, smarter, and more capable seemingly overnight. Entire workflows that once required teams of people can now be handled by a single prompt.

But according to Boaz Touitou from Impala, the future of AI is currently constrained by one thing above all else: compute.

In a recent conversation on Israel Tech with Yoel Israel, Boaz explained why the race to build more advanced AI systems increasingly comes down to access to GPUs, accelerators, and the infrastructure needed to run models at massive scale.

“Impala strives to solve one of the fundamental bottlenecks on using AI large language models, which is basically compute,” Boaz said. 

Why Compute Matters So Much

Behind every large AI model is an enormous amount of processing power.

That power comes from GPUs and specialized accelerators designed to perform highly repetitive calculations at enormous speed.

Unlike traditional CPUs built for general computing tasks, GPUs focus on handling massive volumes of simpler operations simultaneously.
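To make that contrast concrete, here is a minimal Python sketch. It uses NumPy vectorization as a stand-in for GPU-style execution, where one operation is applied across many elements at once rather than one at a time; the function names are illustrative, not from the interview.

```python
import numpy as np

# CPU-style: one multiply at a time, in sequence.
def scale_sequential(values, factor):
    out = []
    for v in values:
        out.append(v * factor)
    return out

# GPU-style: the same multiply applied to every element in one
# vectorized operation -- the pattern a GPU kernel runs across
# thousands of cores simultaneously.
def scale_vectorized(values, factor):
    return np.asarray(values) * factor

data = list(range(1_000_000))
assert scale_sequential(data, 3) == scale_vectorized(data, 3).tolist()
```

The two functions compute the same result; the difference is that the second expresses the work as one bulk operation, which is exactly the shape of workload GPUs are built to accelerate.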

The challenge is that these chips are incredibly difficult to manufacture.

“They take a lot of power,” Boaz explained. “They’re very hard to fabricate.” 

At the same time, the pace of hardware development is relentless.

Each new generation of GPUs rapidly outperforms the previous one, making older systems far less competitive almost overnight.

That cycle creates constant pressure across the AI industry.

Companies are not only racing to build better models.

They are racing to access enough infrastructure to run them.

AI First Is More Than Adding Chatbots

One of the most interesting parts of the discussion focused on what it actually means to build an “AI-first” company.

According to Boaz, many organizations misunderstand the concept.

Rather than redesigning workflows around AI, they simply attach AI tools onto older systems.

“A lot of companies think about AI first as plugging AI on top of existing systems,” he said. “I don’t think that’s the right approach.”

For Boaz, building AI first means delegating both simple and complex tasks directly to AI systems themselves.

That changes the role of employees entirely.

Instead of performing every task manually, workers increasingly manage AI systems capable of acting like entire teams.

“It’s like managing a fleet of a hundred salespeople or a hundred developers, but using AI,” he explained.

According to Boaz, that shift requires a completely different level of responsibility and oversight from employees.

What Happens When AI Scales Massively

Boaz believes one of the least understood aspects of AI today is how dramatically capabilities change once models operate at huge scale.

“The story of AI is the story of scaling,” he said. 

He pointed to recent examples where models were run thousands of times against the same problem to achieve significantly better performance.
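The interview does not name the exact technique, but repeated sampling (sometimes called best-of-N) is one common pattern behind results like this: run the model many times on the same problem and keep the candidate that scores best under a verifier. The sketch below is hypothetical; `attempt` stands in for a model call and `score` for a verifier, and neither is a real API.

```python
import random

def attempt(problem, rng):
    # Stand-in for one model run: a noisy guess at the target value.
    return problem + rng.gauss(0, 1.0)

def score(problem, answer):
    # Stand-in for a verifier: higher means closer to the target.
    return -abs(answer - problem)

def best_of_n(problem, n, seed=0):
    # Draw n independent attempts and keep the best-scoring one.
    rng = random.Random(seed)
    candidates = [attempt(problem, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score(problem, a))

one_shot = best_of_n(42.0, 1)
many = best_of_n(42.0, 1000)
# With the same seed, the 1000 candidates include the single-shot one,
# so the best of the larger pool is at least as close to the target.
assert abs(many - 42.0) <= abs(one_shot - 42.0)
```

The point of the sketch is the scaling behavior: no single attempt improves, but spending more compute on more attempts reliably improves the best answer found.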

As compute increases, entirely new use cases become possible.

That includes analyzing enormous code repositories for vulnerabilities, processing massive streams of video data, identifying behavioral patterns, and solving problems previously too computationally expensive to attempt.

“The moment you can scale AI to these massive scales, you achieve new frontier possibilities,” Boaz said.

The Bigger Debate Around Human Intelligence

The conversation also moved into a more philosophical direction.

Boaz argued that AI systems may not be fundamentally different from human cognition itself.

“Humans, we have computers in us,” he said during the interview.

From his perspective, AI development increasingly resembles attempts to model and replicate how human learning already works.

He also discussed reinforcement learning systems where AI improves itself through repetition and self-play rather than relying entirely on human supervision.

In his view, there may be fewer hard limits to AI advancement than many people currently assume.

Why the AI Race Is Accelerating

One of the clearest takeaways from the conversation was that AI development is no longer limited to a small number of companies.

“The recipes are open source,” Boaz said. “Science is done out in the open.” 

That means the competitive advantage increasingly shifts toward infrastructure itself: who has access to compute, who can scale models efficiently, and who can process the largest amounts of data.

According to Boaz, those factors may define the next phase of the AI race more than the models themselves.

And if compute continues improving at exponential rates, the capabilities unlocked over the next few years may look very different from what most people currently imagine.
