Anthropic’s red team methods are a needed step to close

June 17, 2024 3:27 PM

AI red teaming is proving effective at discovering security gaps that other security approaches can’t see, helping AI companies keep their models from being used to produce objectionable content. Anthropic released its AI red team guidelines last week, joining a group of AI providers that includes Google, Microsoft, NIST, NVIDIA and …