Cryptopolitan 2025-09-30 12:42:20

What to know about DeepSeek's new V3.2-Exp model

China’s tech wonder kid DeepSeek has launched a new experimental model, V3.2-Exp, as part of its attempt to challenge American dominance in AI. The release came on Monday and was first made public through a post on Hugging Face, a popular AI platform. DeepSeek says the latest version builds on its current model, V3.1-Terminus, with a stronger emphasis on speed, cost, and memory handling.

According to Hugging Face’s Chinese community lead Adina Yakefu, the model features something called DeepSeek Sparse Attention, or DSA, which she said “makes the AI better at handling long documents and conversations” while also cutting operating costs in half.

If you recall, around a year ago DeepSeek shook things up by dropping its first model, R1, without warning. That model showed it was possible to train a large language model using fewer chips and much less computing power; no one expected a Chinese team to pull that off under those constraints. With V3.2-Exp, the goal hasn’t changed: less hardware, more performance.

Adds DeepSeek Sparse Attention and reduces AI running cost

DSA is the headline feature of this model. It changes how the AI picks which information to look at: instead of scanning everything, DeepSeek trains the model to focus only on what seems useful for the task. Adina explained that the benefit here is twofold: “efficiency” and “cost reduction.” By skipping irrelevant data, the model moves faster and requires less energy. She said the model was designed with open-source collaboration in mind.

Nick Patience, who leads AI research at The Futurum Group, told CNBC the model has the potential to open up powerful AI tools to developers who can’t afford to use more expensive models. “It should make the model faster and more cost-effective to use without a noticeable drop in performance,” Nick said. But that doesn’t mean there aren’t risks.

The way DeepSeek uses sparse attention is like how airlines pick flight routes.
There might be hundreds of ways to get from one place to another, but only a few make sense. The model filters out the noise and focuses on what matters, or at least on what it thinks matters.

But this comes with concerns. Ekaterina Almasque, who cofounded BlankPage Capital, put it simply: “So basically, you cut out things that you think are not important.” The issue, she said, is that there is no guarantee the model is cutting the right things. Ekaterina, who has backed companies like Dataiku, Darktrace, and Graphcore, warned that cutting corners now might create problems later. “They [sparse attention models] have lost a lot of nuances,” she said. “And then the real question is, did they have the right mechanism to exclude not important data, or is there a mechanism excluding really important data, and then the outcome will be much less relevant?”

Connects to Chinese chips and releases open code

Despite those concerns, DeepSeek insists that V3.2-Exp performs just as well as V3.1-Terminus. The model can also run directly on domestic Chinese chips such as Ascend and Cambricon, with no extra configuration required. That matters for China’s broader effort to build AI on homegrown hardware and reduce dependence on foreign tech. “Right out of the box,” Adina said, DeepSeek works with these chips.

The company also made the model’s full code and tools public, meaning anyone can download, run, modify, or build on top of V3.2-Exp. The move fits DeepSeek’s open-source strategy, but it raises another issue: patents. Since the model is open and the core idea, sparse attention, has been around since 2015, DeepSeek can’t lock it down legally. “The approach is not super new,” said Ekaterina. For her, the only defensible part of the tech is how DeepSeek chooses what to keep and what to ignore.

That’s where the real competition lies now: not just in making smarter models, but in making them faster, cheaper, and leaner without compromising results.
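To make the sparse-attention idea discussed above concrete, here is a minimal toy sketch in Python. It uses a simple top-k rule: each query attends only to the handful of keys it scores highest, and everything else is masked out before the softmax. This top-k selector is an assumption for illustration only; DeepSeek has not described DSA's actual selection mechanism in this article, and a real implementation would work on trained transformer states, not random vectors.

```python
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    """Toy top-k sparse attention: each query attends only to its
    top_k highest-scoring keys instead of all of them.
    (Illustrative sketch; the top-k selection rule is an assumption,
    not DeepSeek's published DSA mechanism.)"""
    scores = q @ k.T / np.sqrt(q.shape[-1])         # (n_q, n_k) similarity scores
    # Indices of the lowest-scoring keys per query (all but the top_k)
    drop_idx = np.argsort(scores, axis=-1)[:, :-top_k]
    # Mask them out so softmax gives them zero weight
    np.put_along_axis(scores, drop_idx, -np.inf, axis=-1)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                              # (n_q, d_v) output

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 8))    # 2 queries
k = rng.normal(size=(16, 8))   # 16 keys
v = rng.normal(size=(16, 8))   # 16 values
out = sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (2, 8)
```

The efficiency claim in the article comes from this shape of computation: with only `top_k` keys active per query, the attention work grows with `top_k` rather than with the full sequence length, which is what makes long documents cheaper to process. The risk Ekaterina describes also lives in one line here: whatever the masking step discards is gone, whether or not it was actually unimportant.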
Even DeepSeek called this version “an intermediate step toward our next-generation architecture,” which suggests the company is already working on something bigger. Nick said the model shows that efficiency is now just as important as raw power. And Adina believes the company has a long-term play in mind. “DeepSeek is playing the long game to keep the community invested in their progress,” she said. “People will always go for what is cheap, reliable, and effective.”
