Malware

AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk beyond content emission.

When wedded with tools that enable automated interaction with other systems, they can act on their own as malicious agents.

Computer scientists affiliated with the University of Illinois Urbana-Champaign (UIUC) have demonstrated this by weaponizing several large language models (LLMs) to compromise vulnerable websites without human guidance. Prior research suggests LLMs can, despite safety controls, be used to assist with the creation of malware.
