To realize the following vision, significant progress is needed in all areas covered in previous posts: AI technology and automation, datasets, human-AI collaboration, trustworthy AI, interfacing, and tailored European verification products. The scenario depicted here illustrates the overall impact successful R&D activities could have on both the media landscape and society as a whole.
The year is 2040. Carmen is a freelance medical journalist based in Berlin. During her 20+ year career in this sector, she has seen a lot of disinformation on all kinds of platforms–which at some point led to a lot of discussions, upheaval, and societal challenges. Luckily, the growing disinfo problem was eventually tackled with major public upskilling programmes for information workers, students, and citizens. In parallel, regulatory measures and global agreements between media platforms and governments were passed. Carmen also witnessed the development and uptake of complex, AI-powered verification systems for everyday use.
For several years now, Carmen has had a single-user subscription to a web-based product called CADI (CounterActing DIsinformation). It's one of the widely adopted, socially accepted, and enterprise-ready verification systems that makes use of synthetic or trust-certified datasets, and automatically updates to state-of-the-art functions (to keep up with the latest disinfo developments). CADI comes with transparency and trustworthy AI certifications and provides easy-to-grasp assistance via personalized, visually dynamic and flexible end-user interfaces. It's based on UX standards that guarantee seamless human-AI collaboration, a high level of workflow automation, and flexible levels of human oversight. CADI allows Carmen to easily take on all kinds of tasks: She can verify content items, check claims against facts, and analyze complex social media narratives, even as they're unfolding.
A Job for Carmen and CADI
On a typical Tuesday morning in Berlin, Carmen receives a message from the managing editor of a major news portal: There are reports about a virus outbreak in a neighboring country, and she's asked to research, produce, and submit a thorough, fact-based video story on the topic by the end of the day.
Checking the news agenda is quickly done via Carmen's personalized CADI dashboard, already set to her preferences (mid-level information detail, low-level technology affinity). Carmen adds the two required languages and geographic regions, so that the analysis matches the relevant cultural and linguistic contexts, as well as the content keywords related to her medical topic.
Carmen quickly glances over the resulting data visualizations, which present in an integrated way the breaking news coverage, the trending social stories around it, suspected disinformation narratives, already debunked claims, and a list of key media items that are either flagged as suspicious or already scrutinized by other information workers; the "checked by" list features labels like "fully synthetic", "synthetically manipulated", and "non-synthetic".
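As a rough illustration of the kind of configuration such a dashboard might rest on (all names, fields, and values below are hypothetical, invented for this sketch, and not part of any real CADI product), the preferences and provenance labels described above could be modeled like this:

```python
# Hypothetical sketch of CADI-style dashboard preferences and content
# provenance labels. Everything here is invented for illustration.
from dataclasses import dataclass, field
from enum import Enum


class ProvenanceLabel(Enum):
    """Provenance labels as mentioned in the 'checked by' list."""
    FULLY_SYNTHETIC = "fully synthetic"
    SYNTHETICALLY_MANIPULATED = "synthetically manipulated"
    NON_SYNTHETIC = "non-synthetic"


@dataclass
class DashboardPreferences:
    detail_level: str = "mid"    # information detail: "low" / "mid" / "high"
    tech_affinity: str = "low"   # how technical the explanations should be
    languages: list = field(default_factory=lambda: ["de", "en"])
    regions: list = field(default_factory=list)
    keywords: list = field(default_factory=list)


# Carmen's (hypothetical) setup for the virus-outbreak story
prefs = DashboardPreferences(
    regions=["DE", "neighboring country"],
    keywords=["virus outbreak", "health ministry"],
)
print(prefs.detail_level)                    # mid
print(ProvenanceLabel.NON_SYNTHETIC.value)   # non-synthetic
```

The point of such a structure is that the same preference object can drive both the news-agenda filtering and the presentation layer, which is what makes the "personalized dashboard" described above possible.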
Based on the news and disinfo overview obtained via the CADI dashboard, Carmen conducts a further, universal search across multiple platforms and media types. Apart from reporting the news of the virus outbreak, she'll also contrast official statements with circulating narratives and highlight selected disinformation elements–which is common practice.
In this breaking news situation, CADI acts as an early-warning system and automatically suggests suspicious statements, narratives, and media items, which Carmen can subsequently review and use in her fact-checking report. She is particularly pleased to have this function: It took AI systems (and their developers) more than a decade to reliably identify what humans might regard as dubious and questionable.
Carmen is also happy that, to avoid overload, the system has automatically removed several disinfo items from the results feed, based on transparent, certified approaches that users are fully aware of. While she instantly accepts some of CADI's decisions on disinfo elements (as she knows the system has been certified), she decides to take a closer look at the transparency and trustworthy AI details provided for others.
In particular, Carmen double-checks the AI system's statement that a popular video featuring the neighboring country's health minister is a deepfake. Getting this wrong would not only affect Carmen's reputation as a journalist, but could also have legal consequences for the media company publishing her video. Later on, the commissioning editor will also re-check some aspects of the story for the purpose of editorial control.
Having spent several hours on research and structuring the story, Carmen now starts producing her video. Prior to finalizing it, she asks CADI for a quick news and disinfo update. As there haven't been any major developments, Carmen finishes her story, uploads the video, and decides to call it a day.
Carmen Remembers the Old Days
As she's walking home, her mind wanders back to the beginning of her career in the early 2020s.
She can hardly believe that back then, verification was a difficult, cumbersome job with sometimes limited results. It was almost impossible to do in breaking news situations, hindered by language barriers, and usually conducted by a few specialists who had to play catch-up with a myriad of actors producing ever-advancing disinfo.
That's it! You've reached the end of the blog series. Thanks for reading–and don't hesitate to get in touch with us if you have a question or comment regarding AI for verification. In case you've missed a post:
Part I (on the status quo and current limitations) is over here.
Part II (on research challenges) is over here.
Part III (on trends and goals) is over here.
Want to find out more about AI4media? Visit the project's official website for results and resources.