DeepSeek
-
DeepSeek burst onto the scene last week and upended the long-dominant giants of AI with its smaller, more efficient approach to language model design.
-
New research from Cisco found that DeepSeek’s flagship R1 AI model failed to block a single harmful prompt during a series of tests that uncovered critical safety flaws.
-
Even as market jitters over the DeepSeek sell-off dragged down tech stocks, industry giants posted robust earnings this week, with several reporting billion-dollar revenue gains driven by sustained demand for AI and cloud infrastructure.
-
DeepSeek took the world by storm this week. However, new research reveals the Chinese AI model has serious ethical and security flaws, including being 11 times more likely to generate harmful output than OpenAI’s o1.
-
Yann LeCun, Meta’s chief AI scientist, claimed the market reaction to DeepSeek was “woefully unjustified” and that open-source research, not hardware, powered the Chinese startup’s meteoric rise.
-
Capacity explores the true costs of DeepSeek's AI model, the safety concerns it raises, and the impact of censorship on its global reception.
-
Technology leaders have responded with admiration and intrigue following DeepSeek’s launch of its flagship language model, R1.
-
Chinese AI startup DeepSeek has surged in popularity, climbing to the top of Apple’s App Store in the UK, US, and China, posing a significant challenge to Silicon Valley’s dominance.