GPT-4 developer tool can be exploited for misuse with no easy fix

Technology

OpenAI’s developer tool for its GPT-4 large language model can be misused to trick the AI into providing information to aid would-be terrorists, and fixing the problem won’t be easy

By Jeremy Hsu

AI chatbots can be fine-tuned to provide information that could help terrorists plan attacks

salarko/Alamy

It is surprisingly easy to remove the safety measures intended to prevent AI chatbots from giving harmful responses that could aid would-be terrorists or mass shooters. The discovery has prompted companies, including OpenAI, to devise strategies to address the problem, but research suggests their efforts have so far had only limited success.

OpenAI worked with academic researchers on a so-called…
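The "developer tool" at issue is OpenAI's fine-tuning interface, which lets customers retrain a model on their own examples. As a rough sketch of what that workflow looks like in general (not the researchers' actual method), the snippet below uses OpenAI's Python SDK; the training file name and model identifier are placeholder assumptions, and GPT-4 fine-tuning has only been offered to approved developers.

```python
# Minimal sketch of an OpenAI fine-tuning job (illustrative only).
# Assumes the `openai` Python SDK v1+ and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples.
# "examples.jsonl" is a placeholder name, not from the article.
training_file = client.files.create(
    file=open("examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job. The concern the article raises is that
# carefully chosen training examples can weaken a model's built-in
# safety behaviour, even when each example looks innocuous on its own.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4-0613",  # placeholder; GPT-4 fine-tuning requires special access
)

print(job.id, job.status)
```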
