[Key visual: an abstract digital wimmelpicture with a person lifting a heavy weight at the centre, surrounded by pixelated gadgets, computers, bits of data, and traffic cones.]
AI & Automation, Verification

AI-CODE: Harness AI, Combat Disinfo, Empower Pros

To identify and understand synthetic content on next-generation social platforms and to provide professionals with tools and guidelines for the new media landscape they're facing – that's the in-a-nutshell mission statement of AI-CODE, a new media R&D project co-launched by DW in December 2023.

The acronym stands for "artificial intelligence services for continuous trust in emerging digital environments" – another decent summary of the EU-funded innovation action (IA) that features 13 partners from all over Europe. It will run for 36 months, a timespan in which we're likely to see yet another massive transformation of the way digital content is generated, distributed, and perceived.

In terms of tech and research domains, AI-CODE will first and foremost focus on robust, trustworthy AI (for detecting questionable synthetic content and creating trusted content), content/source credibility and trust assessment in general, as well as human-AI collaboration.

Target groups, outcomes, use cases, and benefits

AI-CODE caters to media professionals and media companies, media literacy NGOs and the fact-checking scene, academics and researchers, technology providers, and, last but not least, European civil society. In the course of the project, the consortium wants to develop three content-driven services and three user-driven services. There are also three use cases, which revolve around AI tools for trusted content, AI tools to detect and counter (potential) foreign influence operations, and interactive coaching with regard to generative AI and trusted content. In the long run, AI-CODE hopes to foster sustainable AI; inclusive, human-centered design; trusted digital technology; and thus a European approach to AI leadership.

The project's official logo.

DW's role in AI-CODE

DW takes on the role of a media pilot partner, consultant, and communicator. We're tasked with gathering and refining requirements, testing and co-developing new AI tools, and supporting the consortium's dissemination efforts.

Our specific use case – under development as we're writing this – will mostly deal with two rather new realities at the intersection of verification, AI tech, and social media: the need to detect synthetic text, image, and AV content that's ever more sophisticated, and the need to analyze disinfo on emerging platforms, which may be immersive, decentralized, or both.

Related projects, further info, and updates

Facing a complex project and a challenging media landscape, the consortium luckily doesn't need to start from scratch. Instead, AI-CODE can build on very useful research and development already done in projects like AI4MEDIA, AI4TRUST, EDMO, news-polygraph, TITAN, and vera.ai.

An official website and social media accounts (on old and new networks) are still under construction but should be up soon. To stay in the AI-CODE news loop, keep an eye on DW Innovation's social media or get in touch directly with our project managers Andy Giefer and Kay Macquarrie.

Key visual by Anne Fehres, Luke Conroy, and AI4MEDIA (via Better Images of AI, retouch by DW Innovation); AI-CODE Logo by the AI-CODE consortium

Author
DW Innovation