Tech

Experts warn DeepSeek is 11 times more dangerous than other AI chatbots

DeepSeek’s R1 AI model is 11 times more likely to be exploited by cybercriminals than other AI models – whether by generating harmful content or through its vulnerability to manipulation.

This worrying finding comes from new research by Enkrypt AI, an AI security and compliance platform. The warning adds to ongoing concerns following last week’s data breach, which exposed more than one million records.
