OpenAI, Axel Springer in deal to integrate AI and journalism, tackle AI ‘hallucinations’

by Jeremy

Axel Springer, one of the largest media companies in Europe, is collaborating with OpenAI to integrate journalism with the artificial intelligence (AI) tool ChatGPT, the German publisher said in a statement on its blog on Dec. 13.

The collaboration involves using content from Axel Springer media brands to advance the training of OpenAI’s large language models. It aims to achieve a better ChatGPT user experience with up-to-date and authoritative content across diverse topics, as well as increased transparency through attributing and linking to full articles.

Generative AI chatbots have long grappled with factual accuracy, occasionally producing false information, commonly known as “hallucinations.” Efforts to reduce these AI hallucinations were announced in June in a post on OpenAI’s website.

AI hallucinations occur when artificial intelligence systems generate factually incorrect information that is misleading or unsupported by real-world data. Hallucinations can manifest in various forms, such as generating false information, making up nonexistent events or people, or providing inaccurate details about certain topics.

The combination of AI and journalism has presented challenges, including concerns about transparency and misinformation. In October, an Ipsos global survey revealed that 56% of Americans and 64% of Canadians believe AI will exacerbate the spread of misinformation, and globally, 74% think AI facilitates the creation of realistic fake news.

The partnership between OpenAI and Axel Springer aims to ensure that ChatGPT users can generate summaries from Axel Springer’s media brands, including Politico, Business Insider, Bild and Die Welt.

Related: Open source AI can outperform private models like Chat-GPT – ARK Invest research

However, the potential for AI to combat misinformation is also being explored, as seen with tools like AI Fact Checker and Microsoft’s integration of GPT-4 into its Edge browser.

The Associated Press has responded to these concerns by issuing guidelines limiting the use of generative AI in news reporting, emphasizing the importance of human oversight.

In October 2023, a team of scientists from the University of Science and Technology of China and Tencent’s YouTu Lab developed a tool to combat “hallucinations” by artificial intelligence (AI) models.

Magazine: Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye