The DeepSeek Story is Evolving
Is DeepSeek inaccurate? … claims of plagiarism … how DeepSeek’s technology works … the risks it represents … a roundtable discussion with our experts
The DeepSeek story continues to evolve as new information emerges.
Let’s begin with Reuters:
Chinese AI startup DeepSeek’s chatbot achieved only 17% accuracy in delivering news and information in a NewsGuard audit that ranked it tenth out of eleven in a comparison with its Western competitors including OpenAI’s ChatGPT and Google Gemini.
The chatbot repeated false claims 30% of the time and gave vague or not useful answers 53% of the time in response to news-related prompts, resulting in an 83% fail rate, according to a report published by trustworthiness rating service NewsGuard on Wednesday.
Meanwhile, as to how it caught up with the U.S. incumbent AI platforms so quickly, the answer might be “stealing.”
Here’s The New York Post:
OpenAI, the company behind ChatGPT, says it has proof that the Chinese start-up DeepSeek used its technology to create a competing artificial intelligence model — fueling concerns about intellectual property theft in the fast-growing industry.
OpenAI believes DeepSeek, which was founded by math whiz Liang Wenfeng, used a process called “distillation,” which helps make smaller AI models perform better by learning from larger ones.
While this is common in AI development, OpenAI says DeepSeek may have broken its rules by using the technique to create its own AI system…
Security researchers at Microsoft, which has poured billions into OpenAI, discovered last fall that individuals with possible links to DeepSeek were harvesting vast troves of data through OpenAI’s application programming interface, or API, sources told Bloomberg.
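For readers unfamiliar with the term, "distillation" simply means training a smaller "student" model to imitate the outputs of a larger "teacher" model instead of (or in addition to) learning directly from raw data. The sketch below shows the textbook version of the idea in PyTorch; the model sizes, data, and temperature are placeholder values for illustration, and nothing here represents what DeepSeek or OpenAI actually did.

```python
# Minimal knowledge-distillation sketch: a small student model is trained to
# match the softened output distribution of a larger teacher model.
# All architectures and hyperparameters are illustrative, not any lab's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))  # stand-in "large" model
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))    # stand-in "small" model

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student gets richer signal

for step in range(100):
    x = torch.randn(32, 128)  # placeholder inputs; in practice, real prompts/data

    with torch.no_grad():
        teacher_logits = teacher(x)   # teacher predictions, not updated
    student_logits = student(x)

    # Classic distillation loss: KL divergence between temperature-softened
    # student and teacher distributions, scaled by T^2.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

When distillation happens through a hosted API, the "teacher" signal is typically just the text of the larger model's responses rather than its internal probability distributions, so the student is fine-tuned on generated outputs instead of matching logits as above. The underlying idea, however, is the same: the smaller model inherits capability from the larger one.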