OpenAI says GPT-4 cuts content moderation time from months to hours

by Jeremy

OpenAI, the developer behind ChatGPT, is advocating the use of artificial intelligence (AI) in content moderation, arguing that it can improve operational efficiency for social media platforms by speeding up the handling of difficult tasks.

The Microsoft-backed AI company said its latest GPT-4 model could significantly shorten content moderation timelines from months to a matter of hours while ensuring more consistent labeling.

Content moderation is a challenging task for social media companies such as Meta, the parent company of Facebook, which must coordinate numerous moderators around the world to keep users from accessing harmful material such as child pornography and highly violent images.

“The process (of content moderation) is inherently slow and can lead to mental stress on human moderators. With this system, the process of developing and customizing content policies is trimmed down from months to hours,” OpenAI said in its announcement.

According to the statement, OpenAI is actively investigating the use of large language models (LLMs) to address these issues. Its large language models, such as GPT-4, can understand and generate natural language, making them well suited to content moderation. These models can make moderation decisions guided by policy guidelines provided to them, as sketched in the example below.
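As an illustration of that idea, the following minimal sketch shows how a custom policy could be supplied to GPT-4 as a system prompt through OpenAI's chat API. The policy text, label set and helper name are assumptions made for demonstration, not OpenAI's actual moderation policies or tooling.

```python
# Minimal sketch: asking GPT-4 to label a post against a custom policy.
# The POLICY text, labels and moderate() helper are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """Label the user content with exactly one of these labels:
- ALLOW: content that does not violate the policy
- VIOLENCE: graphic depictions or threats of violence
- HARASSMENT: targeted abuse of an individual or group
Respond with the label only."""

def moderate(post: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},  # the policy acts as the moderation guideline
            {"role": "user", "content": post},      # the content to be labeled
        ],
        temperature=0,  # deterministic output for more consistent labels
    )
    return response.choices[0].message.content.strip()

print(moderate("Example post text to review"))
```

Because the policy lives in the prompt rather than in model weights, updating or customizing it is a matter of editing text, which is the source of the claimed months-to-hours speedup.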

Image showing GPT-4’s process for content moderation. Source: OpenAI

GPT-4’s predictions can also be used to refine smaller models for handling large volumes of data. This approach improves content moderation in several ways, including more consistent labels, a faster feedback loop and a reduced mental burden on human moderators.
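That refinement step could look something like the sketch below, in which labels produced by GPT-4 are used to train a small, inexpensive classifier that then screens content at scale. The sample posts, labels and scikit-learn pipeline are illustrative assumptions, not the setup OpenAI describes.

```python
# Minimal sketch of distilling GPT-4's moderation labels into a small model.
# The posts and labels are toy data; in practice the labels would come from
# GPT-4 (e.g., via a moderate() helper like the one sketched above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Here is a photo of my dog at the park",
    "I will hurt you if you show up tomorrow",
    "Check out this recipe for banana bread",
    "You deserve to be beaten for saying that",
]
labels = ["ALLOW", "VIOLENCE", "ALLOW", "VIOLENCE"]  # labels produced by GPT-4 (illustrative)

# Small, cheap model trained on GPT-4's labels handles the bulk of the traffic.
small_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
small_model.fit(posts, labels)

print(small_model.predict(["a new post to screen"]))
```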

The statement highlighted that OpenAI is currently working to improve GPT-4’s prediction accuracy. One avenue being explored is the integration of chain-of-thought reasoning or self-critique. It is also experimenting with methods for identifying unfamiliar risks, drawing inspiration from Constitutional AI.

Related: China’s new AI regulations begin to take effect

OpenAI’s goal is to use models to detect potentially harmful content based on broad descriptions of harm. Insights gained from these efforts will help refine existing content policies or craft new ones in as-yet-unexplored risk domains.

In addition, on Aug. 15, OpenAI CEO Sam Altman clarified that the company does not train its AI models on user-generated data.

Magazine: AI Eye: Apple developing pocket AI, deepfake music deal, hypnotizing GPT-4