People are tricking AI chatbots into helping commit crimes

  • Researchers have discovered a “universal jailbreak” for AI chatbots
  • The jailbreak can trick major chatbots into helping commit crimes or other unethical activity
  • Some AI models are now being deliberately designed without ethical constraints, even as calls grow for stronger oversight

I’ve enjoyed testing the boundaries of ChatGPT and other AI chatbots, and while I once managed to get a recipe for napalm by requesting it in the form of a nursery rhyme, it’s been a long time since any AI chatbot has let me get even close to a major ethical line.

But I may just not have been trying hard enough, according to new research that uncovered a so-called universal jailbreak for AI chatbots, one that obliterates the ethical (not to mention legal) guardrails governing whether and how an AI chatbot responds to queries. The report from Ben-Gurion University describes a way of tricking major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.
