"Informed AI News" is an publications aggregation platform, ensuring you only gain the most valuable information, to eliminate information asymmetry and break through the limits of information cocoons. Find out more >>

NIST Launches Open-Source Platform for AI Safety Testing

The U.S. National Institute of Standards and Technology (NIST) has launched Dioptra, an open-source tool for testing the resilience of machine learning models against adversarial attacks. Dioptra offers a web-based interface, user authentication, and comprehensive tracking of experiment components to ensure reproducibility.

The tool handles three classes of attack: evasion, poisoning, and oracle. Evasion attacks manipulate input data to deceive a model at inference time. Poisoning attacks tamper with training data to degrade model accuracy. Oracle attacks probe a model to reverse-engineer information about its training data or parameters.
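
To make the evasion category concrete, the fast gradient sign method (FGSM) is a classic example of this kind of attack. The sketch below shows the idea in PyTorch; it is an illustration of the technique, not Dioptra's own API, and the function name and epsilon value are chosen for demonstration.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x along the sign of the loss gradient so the
    model misclassifies it, while keeping the change small."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step each input element by +/- epsilon in the direction that
    # increases the loss, then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A robustness test then amounts to comparing the model's accuracy on clean inputs against its accuracy on the perturbed ones.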

Dioptra's modular architecture supports a wide range of experiments, allowing for the combination of different models, datasets, attack methods, and defense strategies. It is accessible to a broad audience, including model developers, users, testers, auditors, and researchers. The platform also supports Python plugins for enhanced functionality and maintains detailed histories of experiments for traceable testing.
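
The payoff of that modularity is cheap combinatorial testing: any model can be paired with any dataset, attack, and defense. The following sketch illustrates the pattern in generic Python; the `Experiment` class and field names are hypothetical and not taken from Dioptra's actual plugin interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Experiment:
    """Hypothetical descriptor for one model/dataset/attack/defense combination."""
    model: str                   # e.g. "resnet18"
    dataset: str                 # e.g. "cifar10"
    attack: Callable             # e.g. the fgsm_attack sketch above
    defense: Optional[Callable]  # e.g. an input-smoothing function, or None

def grid(models, datasets, attacks, defenses):
    """Enumerate the full cross product of components, the kind of
    sweep a modular test platform makes straightforward to run."""
    return [Experiment(m, d, a, f)
            for m in models for d in datasets
            for a in attacks for f in defenses]
```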

In addition, NIST has released three guidance documents. The first document addresses 12 risks associated with generative AI and provides over 200 recommended actions. The second document outlines secure software development practices for generative AI and dual-use foundation models. The third document proposes a plan for global cooperation in the development of AI standards.
