Robert Lemos

"The end users of LLMs typically do not have a lot of information on how providers collected and cleaned the data used to train their models, and the model developers typically conduct only a shallow evaluation of the data because the volume of information is just too vast. 

"This lack of visibility into how artificial intelligence (AI) makes decisions is the root cause of more than a quarter of risks posed by LLMs, according to a new report from the Berryville Institute of Machine Learning (BIML) that describes 81 risks associated with LLMs. 

"The goal of the report, 'An Architectural Risk Analysis of Large Language Models,' is to provide CISOs and other security practitioners with a way of thinking about the risks posed by machine learning (ML) and AI models, especially LLMs and the next-generation large multimodal models (LMMs), so they can identify those risks in their own applications, says Gary McGraw, co-founder of BIML." 
