A feature in Nvidia’s artificial intelligence software can be manipulated into ignoring safety restraints and revealing private information, according to new research.
Nvidia has created a system called the “NeMo Framework,” which allows developers to work with a range of large language models—the underlying technology that powers generative AI products such as chatbots.
The chipmaker’s framework is designed to be adopted by businesses, such as using a company’s proprietary data alongside language models to provide responses to questions—a feature that could, for example, replicate the work of customer service representatives, or advise people seeking simple health care advice.
June 09, 2023 at 10:56PM