DeepSeek’s new AI model appears to be one of the best ‘open’ challengers yet

DeepSeek, a Chinese AI firm, has unveiled DeepSeek V3, one of the most powerful “open” AI models released to date. The model excels at tasks such as coding, translation, and writing, and it outperforms rivals like Meta’s Llama 3.1 and OpenAI’s GPT-4o on coding contests and benchmarks. Trained on 14.8 trillion tokens and spanning 671 billion parameters, DeepSeek V3 is massive in scale and requires advanced GPUs to run efficiently. Remarkably, it was trained in just two months on Nvidia H800 GPUs at a cost of about $5.5 million, far less than the development cost of comparable models. However, as a Chinese model, it is subject to regulations that restrict its responses on politically sensitive topics. DeepSeek is funded by High-Flyer Capital Management, a hedge fund that uses AI to guide its trading and builds massive server clusters to support its AI work.
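
Because the weights are released openly, the model can in principle be loaded with standard open-source tooling. The sketch below is illustrative only and assumes the checkpoint is published on Hugging Face under a repo id such as "deepseek-ai/DeepSeek-V3" (not stated in the article) and that the transformers library is used; at 671 billion parameters, it would need a multi-GPU server rather than a single consumer GPU.

# Minimal sketch of loading an openly released checkpoint with Hugging Face
# transformers. The repo id below is an assumption, not confirmed by the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the parameters across available GPUs
    trust_remote_code=True,  # open releases often ship custom modeling code
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))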

Grey Matterz Thoughts

DeepSeek V3 sets a new benchmark for “open” AI models, showcasing both innovation and training efficiency. However, its politically constrained responses highlight the challenge such models face in serving a truly global audience.

Source: https://shorturl.at/N73et