Social Engineering an LLM
LLMs are getting better, they say. And I agree. I'm finding them more helpful with coding now than they were a few years ago. They retain context a little better, drift less, and hallucinate less often. But what happens when someone with malicious intent uses the very techniques that make LLMs helpful to manipulate them into crossing ethical boundaries?