Prompt injection…


Named Skeleton Key, the AI jailbreak was previously mentioned during a Microsoft Build talk under the name Master Key. The technique enabled an attacker to trick gen-AI models into providing ‘forbidden’ information, such as instructions for making a Molotov Cocktail.

AI chatbots are typically trained to avoid providing potentially hateful or harmful information. However, ever since chatbots came into the spotlight with the launch of ChatGPT, researchers have been looking into ways to bypass these guardrails using what is known as prompt injection or prompt engineering.  

Comments

Popular posts from this blog

Perplexity

Aphorisms: AI

DeepAI's Austen on China