How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs

Posted by Eva on January 10, 2024 · Tags: AI safety, jailbreaking

Paper