
Not even fairy tales are safe – researchers weaponise bedtime stories to jailbreak AI chatbots and create malware

  • Security researchers have developed a new technique to jailbreak AI chatbots
  • The technique required no prior malware coding knowledge
  • This involved creating a fake scenario to convince the model to craft an attack

Despite having no previous experience in malware coding, Cato CTRL threat intelligence researchers have warned they were able to jailbreak multiple LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.

The team developed ‘Immersive World’, a technique that uses “narrative engineering to bypass LLM security controls”: by creating a “detailed fictional world” that normalizes restricted operations, the researchers coaxed the chatbots into producing a “fully effective” Chrome infostealer. Chrome is the most popular browser in the world, with over 3 billion users, underscoring the scale of the risk this attack poses.
