OpenAI Now Scans ChatGPT Chats & Might Call The Police On You

OpenAI will use specialised systems to detect users who are planning to harm others.


In a significant policy update, OpenAI has announced that it is now monitoring ChatGPT conversations for harmful content and may report users to the police

This change comes after a series of disturbing incidents in which AI chatbots have been implicated in severe mental health crises among users, leading to self-harm and, in some cases, suicide.

According to a recent blog post, the company will now use specialised systems to detect users who are planning to harm others.

These conversations will then be reviewed by a human team, which may refer the case to law enforcement if it determines there is an "imminent threat of serious physical harm".

Image via Matheus Bertelli / Pexels

However, OpenAI's new policy has its limits and raises privacy questions

Interestingly, the company stated that it is "currently not referring self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions".

Image via Airam Dato-on / Pexels

The new monitoring measure appears to contradict the company's pro-privacy arguments in its ongoing lawsuit with The New York Times, where it is fighting to protect user chat logs

This adds a new layer of complexity for users, as it remains unclear exactly what phrases or topics could trigger a review.

OpenAI CEO Sam Altman previously admitted that using ChatGPT is not the same as speaking with a therapist or lawyer, warning that those chats are not protected by the same confidentiality.
