
DeepSeek’s New AI Model Sparks Shock, Awe, and Questions From US Competitors


The true price of developing DeepSeek’s new models remains unknown, however, since one figure quoted in a single research paper may not capture the full picture of its costs. “I don’t believe it’s $6 million, but even if it’s $60 million, it’s a game changer,” says Umesh Padval, managing director of Thomvest Ventures, a company that has invested in Cohere and other AI firms. “It will put pressure on the profitability of companies which are focused on consumer AI.”

Shortly after DeepSeek revealed the details of its latest model, Ghodsi of Databricks says customers began asking whether they could use it as well as DeepSeek’s underlying techniques to cut costs at their own organizations. He adds that one approach employed by DeepSeek’s engineers, known as distillation, which involves using the output from one large language model to train another model, is relatively cheap and straightforward.
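The distillation idea described above can be sketched numerically. The following is a toy illustration only, not DeepSeek's actual training pipeline: a "teacher" distribution over outputs (here, fixed logits over four classes) supervises a "student", which is nudged toward the teacher's temperature-softened probabilities by gradient descent on cross-entropy. All names, sizes, and hyperparameters are invented for illustration.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = np.asarray(z, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy "teacher": fixed logits over 4 output classes (illustrative only).
teacher_logits = np.array([4.0, 1.0, 0.5, 0.1])

# "Student": starts uninformed; its logits are trained to match the
# teacher's softened output distribution.
student_logits = np.zeros(4)

T = 2.0  # temperature > 1 softens the teacher's distribution
teacher_probs = softmax(teacher_logits, T)

def cross_entropy(p, q):
    return -np.sum(p * np.log(q + 1e-12))

losses = []
lr = 0.5
for _ in range(500):
    student_probs = softmax(student_logits, T)
    losses.append(cross_entropy(teacher_probs, student_probs))
    # Gradient of softmax + cross-entropy w.r.t. student logits: (q - p) / T
    student_logits -= lr * (student_probs - teacher_probs) / T
```

After training, the student's distribution closely tracks the teacher's, which is the sense in which distillation transfers behavior from one model to another cheaply: the student never sees the original training data, only the teacher's outputs.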

Padval says that the existence of models like DeepSeek’s will ultimately benefit companies looking to spend less on AI, but he says that many firms may have reservations about relying on a Chinese model for sensitive tasks. So far, at least one prominent AI firm, Perplexity, has publicly announced it’s using DeepSeek’s R1 model, but it says it is being hosted “completely independent of China.”

Amjad Masad, the CEO of Replit, a startup that provides AI coding tools, told WIRED that he thinks DeepSeek’s latest models are impressive. While he still finds Anthropic’s Sonnet model better at many computer engineering tasks, he has found that R1 is especially good at turning text commands into code that can be executed on a computer. “We’re exploring using it especially for agent reasoning,” he adds.

DeepSeek’s latest two offerings—DeepSeek R1 and DeepSeek R1-Zero—are capable of the same kind of simulated reasoning as the most advanced systems from OpenAI and Google. They all work by breaking problems into constituent parts in order to tackle them more effectively, a process that requires a considerable amount of additional training to ensure that the AI reliably reaches the correct answer.

A paper posted by DeepSeek researchers last week outlines the approach the company used to create its R1 models, which it claims perform on some benchmarks about as well as OpenAI’s groundbreaking reasoning model known as o1. The tactics DeepSeek used include a more automated method for learning how to problem-solve correctly as well as a strategy for transferring skills from larger models to smaller ones.

One of the hottest topics of speculation about DeepSeek is the hardware it might have used. The question is especially noteworthy because the US government has introduced a series of export controls and other trade restrictions over the last few years aimed at limiting China’s ability to acquire and manufacture cutting-edge chips that are needed for building advanced AI.

In a research paper from August 2024, DeepSeek indicated that it has access to a cluster of 10,000 Nvidia A100 chips, which were placed under US restrictions announced in October 2022. In a separate paper from June of that year, DeepSeek stated that an earlier model it created called DeepSeek-V2 was developed using clusters of Nvidia H800 computer chips, a less capable component developed by Nvidia to comply with US export controls.

A source at one AI company that trains large AI models, who asked to be anonymous to protect their professional relationships, estimates that DeepSeek likely used around 50,000 Nvidia chips to build its technology.

Nvidia declined to comment directly on which of its chips DeepSeek may have relied on. “DeepSeek is an excellent AI advancement,” a spokesperson for Nvidia said in a statement, adding that the startup’s reasoning approach “requires significant numbers of Nvidia GPUs and high-performance networking.”

However DeepSeek’s models were built, they appear to show that a less closed approach to developing AI is gaining momentum. In December, Clem Delangue, the CEO of Hugging Face, a platform that hosts artificial intelligence models, predicted that a Chinese company would take the lead in AI because of the speed of innovation happening in open source models, which China has largely embraced. “This went faster than I thought,” he says.
