How can journalists use tools powered by artificial intelligence (AI) in their daily work? What should they be careful about when doing so, and what should they not use AI for? And how can they more easily recognise AI-generated content online, such as deepfakes, and debunk it?
These were among the questions addressed in a workshop organised by the European Newsroom (enr). The workshop was facilitated by Stefan Voss, head of verification at the German Press Agency (dpa), and Patrick Neumann, head of the dpa-academy and recruitment officer.
Artificial intelligence is a hot topic around the world, and its use – in both private and professional settings – is growing. AI-powered tools such as OpenAI’s ChatGPT are developing rapidly.
In addition, the issue of fake news and false content is high on the political agenda – both in the European Union and beyond – with politicians and experts alike warning of the potential impact on society and on consumers of online content.
Around 30 journalists and media professionals from agencies participating in the enr attended the workshop, including from AFP (France), Agerpres (Romania), ANP (the Netherlands), ANSA (Italy), BTA (Bulgaria), dpa (Germany), EFE (Spain), FENA (Bosnia and Herzegovina), MIA (North Macedonia), STA (Slovenia), Tanjug (Serbia) and TT (Sweden).
Throughout the workshop, the facilitators and participants engaged in lively discussions on AI in journalism. Participants were given a hands-on introduction to using AI-powered tools, such as ChatGPT, in their work as journalists. They were also shown several examples of AI-generated content and deepfakes, and taught how to spot and debunk them.