Verification, AI & Automation

AI for Content Verification III: Future Trends and Ultimate Goals

As part of the EU co-funded AI4Media project, we recently contributed to a comprehensive public report entitled “AI technologies and applications in media: State of Play, Foresight, and Research Directions”. For better accessibility and readability, we’ve now turned our section into a multi-part blog series (with a couple of edits and updates). Here’s part three, which looks at future trends and goals for AI-driven content verification.

First of all, let's be clear about one thing: As AI technology gets better, cheaper, and easier to use, there will also be more opportunities for bad actors to produce and distribute mis- and disinformation. However, conducting and applying extensive multi-disciplinary research in the field of AI for verification (as described in the previous posts) could also lead to a positive turning point–and open up a wide range of opportunities and benefits for the media industry and other domains.

Here are some future trends that may well become reality:

  • There will be increasing acceptance of and trust in AI-powered tools–because these solutions are trustworthy, transparent, easy to use, and compliant with journalistic codes of conduct and regulatory frameworks.
  • There will be fewer barriers to workflow automation due to successful human-AI collaboration models–and a better understanding of the complex concept of "truth".
  • There will be significantly more information workers in media (and society in general) with access to powerful, user-friendly tools that can verify content and counteract disinformation.
  • There will be better early and/or real-time detection–which does away with the problem of being "late to the party", i.e. only being able to tackle disinformation once it has already spread.
  • There will be more media and information workers who can focus on their core tasks (journalism, analysis, communication) because complex, time-consuming verification workflows have been largely automated (in a responsible, trusted way).

Goals for 2040

If things go right, AI-powered support systems for content verification could have the following capabilities in less than two decades:

  • multimodal and cross-platform analysis
  • language, country, culture, and context analysis
  • full synthetic content and synthetic manipulation analysis
  • automatic and early (real-time) detection of disinformation
  • automatic detection of check-worthy items, claims or narratives
  • seamless and flexible human-AI collaboration workflows
  • certified information related to transparency, Trustworthy AI, and datasets
  • automatic technology upgrades to match tools of disinformation actors
  • interoperability with content authentication systems (e.g. blockchain)

All this will be enabled by major advances in realizing accurate, performant, well-explained, and trusted AI technologies, as well as widely available, ethically and legally certified datasets for AI model training and evaluation. In combination, this will drive successful human-AI collaboration. Stand-alone AI technologies, functions, and services will be integrated into tailored, user-friendly, and accessible support products for all kinds of information workers. These products will be widely available, affordable, web-based, and/or designed for seamless integration into corporate CMS and the respective UIs. Specific public subsidy and co-funding programmes will be in place to ensure access to these high-end systems. Core back-end technologies will connect with multiple front ends for different user domains. Front ends will feature a high degree of multi-faceted personalization, dashboard views, fine-grained visualization of AI predictions, and–as a general concept–easy-to-grasp, easy-to-accept, trustworthy information.
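To make this vision a little more tangible, here's a minimal sketch (in TypeScript) of what the contract between such a shared verification back end and a domain-specific front end might look like. All names and fields–VerificationService, VerificationReport, syntheticMediaScore, and so on–are hypothetical illustrations of the capabilities listed above, not an existing API.

```typescript
// Hypothetical contract between a shared verification back end and
// domain-specific front ends (newsroom CMS plug-in, fact-checking
// dashboard, etc.). All names are illustrative, not an existing API.

// A single item submitted for verification: text, image, audio, video,
// or a URL pointing to cross-platform content.
interface ContentItem {
  id: string;
  modality: "text" | "image" | "audio" | "video" | "url";
  payload: string;         // raw text or a content URL
  language?: string;       // optional hint for linguistic/context analysis
}

// The report a front end would render for its users.
interface VerificationReport {
  itemId: string;
  checkWorthyClaims: string[];    // automatically extracted claims/narratives
  syntheticMediaScore: number;    // 0 (likely authentic) .. 1 (likely synthetic)
  contextFlags: string[];         // e.g. "image predates the claimed event"
  provenance?: {                  // interop with content authentication systems
    verified: boolean;
    source: string;               // e.g. a signed manifest or ledger entry
  };
  explanation: string;            // human-readable rationale (Trustworthy AI)
}

// The back-end surface shared by multiple front ends.
interface VerificationService {
  analyze(item: ContentItem): Promise<VerificationReport>;
  // Real-time monitoring: invoke the callback as soon as a check-worthy
  // or suspicious item is detected, rather than after it has spread.
  monitor(
    stream: AsyncIterable<ContentItem>,
    onAlert: (report: VerificationReport) => void
  ): Promise<void>;
}
```

The design choice mirrored here is the one described above: one core analysis back end exposing a stable interface, with many tailored front ends (dashboards, CMS integrations) built on top of it.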

Milestones

On our way towards this ultimate vision, we can define a number of interim goals:

5-year Milestone

By the mid-2020s–apart from continued advances in AI tech, support products, and UX design–AI for content verification will be certified in terms of Trustworthy AI and based on tailored, ethically and legally compliant (certified) datasets.

10-year Milestone

The developments of the 2020s set the necessary baseline for achieving more (acceptable) automation in fact-checking and verification workflows, as well as true human-AI collaboration, which is largely in place by 2030. This achievement is also driven by the more powerful AI analysis capabilities available by then, advances in datasets, and widely accepted, accessible support tools with a strong focus on the usability of AI results.

15-year Milestone

By 2035, the progress described above begins to show real impact, leading to a significant reduction of disinformation and its negative effects on media and society. This development is driven by a combination of factors: further technical advances in Trustworthy AI and the underlying datasets, wide availability of user-friendly and accepted tools (some of which are publicly subsidized), and the implementation of seamless human-AI collaboration–which enables largely automated workflows if and when desired.

This concludes our outlook on positive future trends and attainable goals in the field of AI for verification. The first milestones already seem to be within reach!


In the fourth and final part of this series (coming soon), we'll look at a typical work day in the life of Carmen–a freelance medical journalist in the late 2030s.

Part I (on the status quo and current limitations) is over here.

Part II (on research challenges) is over here.

Author
DW Innovation