[Image: AI4Media logo with the Roman numeral II]
Verification, AI & Automation

AI for Content Verification II: Research Challenges

As part of the EU co-funded AI4Media project, we recently contributed to a long public report entitled “AI technologies and applications in media: State of Play, Foresight, and Research Directions”. For better accessibility and readability, we’ve now turned our section into a multi-part blog series (with a couple of edits and updates). Here’s part two, which looks at research challenges around AI and content verification.


In the third part of this series (coming soon), we'll look at future trends and ultimate goals for AI-supported content verification.

Part I (on the status quo and current limitations) is over here.

While existing tools and concepts are already quite sophisticated and invaluable for journalists, investigators, and other stakeholders, there is also a clear need for improvement: we need better AI technology, better datasets, better human-AI collaboration, more trustworthiness, and tailor-made interfaces and products.

To achieve these goals, we need further research in a number of academic fields related to technology, business, and the social sciences. More concretely, the following challenges should be addressed (since this is mostly a media tech blog, we won't cover cultural, psychological, or cognitive aspects):

AI Technology Advancement

This is about filling technology gaps in verification. Research subjects include:

  • multimodal content analysis (e.g., image with integrated text)
  • cross-platform content and network analysis
  • linguistic and country-specific/regional analysis
  • detection/flagging of synthetic media (shallow and deep fakes)
  • dynamic AI updates for verification tools (to keep pace with disinformation actors)
  • automatic identification of check-worthy, potentially harmful elements (see the sketch after this list)
  • early detection of disinformation narratives/elements
  • causal, contextual, and cultural analysis of complex statements
  • analysis of complex disinformation stories/narratives over time
  • analysis with integrated, blockchain-based authentication approaches
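To make one of these items a bit more tangible, here's a minimal, purely illustrative Python sketch of check-worthy claim detection. It repurposes a general-purpose zero-shot classifier as a stand-in; the model choice, labels, and threshold are our own assumptions, and a production system would rely on a model trained specifically for this task.

```python
# Illustrative only: flagging check-worthy claims with a general-purpose
# zero-shot classifier as a stand-in. Model choice, labels, and threshold
# are assumptions made for this sketch.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["factual claim worth checking", "opinion or commentary"]

def flag_check_worthy(sentences, threshold=0.7):
    """Return (sentence, score) pairs the classifier deems check-worthy."""
    flagged = []
    for sentence in sentences:
        result = classifier(sentence, candidate_labels=LABELS)
        # result["labels"] is sorted by descending score
        if result["labels"][0] == LABELS[0] and result["scores"][0] >= threshold:
            flagged.append((sentence, result["scores"][0]))
    return flagged

print(flag_check_worthy([
    "The city spent 40 million euros on the project last year.",
    "I think the new policy is a terrible idea.",
]))
```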

Next Generation Datasets

This is about better data for better training and evaluation of AI technology in verification. Research subjects include (a schema sketch follows this list):

  • special datasets for disinformation detection
  • multimodal and multilingual datasets 
  • cross-platform datasets
  • datasets that enable early (or real-time) detection
  • synthetic datasets (to overcome issues of real datasets)
  • legal, ethical and IPR compliance certification for datasets
  • regulated datasets for specific uses/users (public value)
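As an illustration of how several of these items might come together in practice, here's a hypothetical record schema for a single dataset entry: multimodal content, language and platform metadata, a timestamp for early-detection research, and licensing fields for legal/IPR compliance. All field names are invented for this sketch.

```python
# Hypothetical schema for one entry in a "next generation" verification
# dataset. All field names are invented for this sketch. (Python 3.10+)
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VerificationSample:
    sample_id: str
    text: str                                   # claim or post text
    language: str                               # ISO 639-1 code, e.g. "de"
    platform: str                               # e.g. "twitter", "telegram"
    media_urls: list[str] = field(default_factory=list)  # images, audio, video
    published_at: datetime | None = None        # enables early-detection studies
    label: str = "unverified"                   # e.g. "false", "misleading"
    label_source: str = ""                      # e.g. a fact-checking organization
    license: str = ""                           # usage/IPR terms for this sample
```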

Human-AI Collaboration and Automation

This is about enabling true human-AI collaboration and acceptable automation in verification. Research subjects include: 

  • automatic filters for suspicious content
  • characteristics of "acceptable" automation in the verification domain
  • characteristics of Trustworthy AI in human-AI collaboration
  • verification workflows with full, partial, or no automation (optional and mandatory human-in-the-loop concepts, seamless integration; see the sketch after this list)
  • resolution of editorial/legal responsibility conflicts (human vs machine)
  • exploration of issues related to freedom of expression and censorship when using AI
  • exploration of issues related to editorial control, journalistic values, and legal frameworks
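To illustrate the workflow item above, here's a minimal sketch of partial automation with a human in the loop, routing items by detector confidence. The thresholds and tiers below are placeholders, not recommendations; where exactly to draw such lines is precisely one of the open research questions.

```python
# Illustrative only: routing flagged items by detector confidence.
# The thresholds and tiers below are placeholders, not recommendations.
AUTO_THRESHOLD = 0.95    # above this, the tool may act on its own
REVIEW_THRESHOLD = 0.60  # above this, an editor gets a pre-filled suggestion

def route_item(item: dict, score: float) -> tuple[str, dict]:
    """Decide how a flagged item moves through the verification workflow."""
    if score >= AUTO_THRESHOLD:
        return ("auto-label", item)      # full automation
    if score >= REVIEW_THRESHOLD:
        return ("human-review", item)    # partial automation, human in the loop
    return ("monitor-only", item)        # no automated action

# Example: a detector score of 0.72 goes to an editor, keeping
# editorial and legal responsibility with a person.
print(route_item({"url": "https://example.org/clip"}, 0.72))
```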

Trustworthy AI Capability

This is about increasing the overall transparency of AI technology and integrating specific trustworthy approaches/tools to enable a responsible and accepted use of AI in verification. Research subjects include:

  • role of transparent/trustworthy AI in the process of tool acceptance
  • tailored Transparent AI certifications (provider, model, data, legal; see the sketch after this list)
  • tailored Trustworthy AI certifications (explainability, fairness, robustness) 
  • translation of trustworthy AI output for non-technical users (UI, UX)
  • balancing decisions with regard to effectiveness and trustworthiness
  • exploring the potential of transparent/trustworthy design with regard to full automation
  • avoiding misuse of AI technology in verification
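As a small illustration of the certification items above, here's a machine-readable "transparency record" that a verification tool might ship alongside its model. The structure and all field names are our own invention, loosely inspired by model cards.

```python
# A hypothetical, machine-readable "transparency record" a verification
# tool might ship alongside its model. Structure and field names are our
# own invention, loosely inspired by model cards.
transparency_record = {
    "provider": {"name": "ExampleLab", "contact": "ai@example.org"},
    "model": {
        "task": "deepfake detection",
        "version": "1.4",
        "known_limitations": ["lower accuracy on heavily compressed video"],
    },
    "data": {"training_sets": ["ExampleFakeSet-2023"], "languages": ["en", "de"]},
    "legal": {"license": "CC-BY-4.0", "gdpr_reviewed": True},
    "trustworthiness": {
        "explainability": "saliency maps per frame",
        "robustness_tests": ["compression", "re-encoding"],
        "fairness_audit": "pending",
    },
}
```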

Function-to-Interface Transfer

This is about making an AI function and its outcomes easy to use for (non-expert) users, including support for a better understanding of what an AI function actually does. Research subjects include:

  • roles and expertise of users
  • alternative ways of presenting output of AI functions
  • better dashboards for AI analysis (UX, UI)
  • translation of (transparent, trustworthy) AI output into interface and user language, and reduction of complexity (see the sketch after this list)
  • personalized approaches: matching system output with AI-affinity/expertise of users
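To illustrate the translation item above, here's a tiny sketch that maps a raw detector score to wording a non-expert user might actually see. The score bands and phrases are invented for this example; a real product would test them with actual users.

```python
# Illustrative only: mapping a raw detector score to wording a non-expert
# user might actually see. Score bands and phrases are invented for this
# sketch; a real product would test them with actual users.
def explain_score(score: float) -> str:
    if score >= 0.9:
        return "Strong signs of manipulation were detected."
    if score >= 0.6:
        return "Some signs of manipulation were detected - please verify manually."
    if score >= 0.3:
        return "The analysis is inconclusive."
    return "No signs of manipulation were detected (note: this is not proof of authenticity)."

print(explain_score(0.72))
```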

Tailored European Products 

This is about creating and promoting verification products suitable and accessible for a large number of stakeholders (e.g. media companies, fact-checking organizations, self-employed journalists, civil rights NGOs). Research subjects include:

  • existing AI-powered tools/functions and their providers
  • multilingual products
  • multi-faceted products: one backend, different front ends (see the sketch after this list)
  • product characteristics needed to foster adoption by specific user groups
  • opportunities/barriers: public sector and commercial sector
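To illustrate the "one backend, different front ends" idea, here's a toy Python/FastAPI sketch of a single verification API that a newsroom dashboard, a browser plug-in, and a mobile app could all consume. The endpoint and fields are invented for this example.

```python
# A toy "one backend, different front ends" sketch: a single verification
# API that a newsroom dashboard, a browser plug-in, and a mobile app could
# all consume. Endpoint and fields are invented for this example.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AnalysisRequest(BaseModel):
    url: str
    client: str  # "dashboard", "plugin", "mobile" - same backend for all

@app.post("/analyze")
def analyze(req: AnalysisRequest):
    # In a real product, the detection models would run here.
    return {"url": req.url, "verdict": "unverified", "score": 0.0}
```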

As you can see, there's still a lot of research to be done. The good news: A significant number of the challenges outlined above will be tackled in AI- and verification-related R&D projects that are already underway.

Author
DW Innovation