post_AI4MEDIA_key_visual.png
AI & Automization, Best Practice

AI in Media Tools: How to Increase User Trust and Support AI Governance

Predictive AI technology has a well-known issue: how do we know a component is reliable, and can we trust its results enough to use them in our work? Learn how DW Innovation has found ways to make an AI-powered service more transparent and secure, assisting not only end users, but also the process of AI governance.

Today, predictive AI technology is making its way into media tools used by journalists and content professionals in many ways: internally developed components for specific tasks, AI features within web-based support products, and existing media tools or systems that are extended with additional AI components. The latter was the case in AI4Media, where it was our task to evaluate novel AI-powered technologies that support content verification workflows in a media industry context. Together with our use case partner ATC, we integrated several AI components from different technology partners into a demonstrator version of Truly Media (a tool for content verification and fact-checking co-developed by ATC and DW that has been in use for several years now). For more info on the Truly upgrade, see the overview below – or read this article on the AI4MEDIA blog.

ai4media_trust_01.png
The 9 AI-driven services in the Truly Media demonstrator. DW Innovation

Aspects of trustworthiness

Many of these services are based on AI technologies that give a prediction: e.g. as a probability in percent, as a confidence score, or in a yes/no format. Although our main goal was to assess the technical performance and usefulness of these features, it was clear from the start that aspects of trustworthiness would also be important. Some key questions were:

  • How can journalists who use such features trust and interpret the results?

  • Are the components legally compliant and secure?

  • What do we know about them – and how do they generate the predictions?

  • Is the outcome accurate?

This was especially relevant when considering potential implementation in real-world product and workflow environments, where risk management strategies apply. AI governance and guidelines play an important role when it comes to clearing AI services for day-to-day use in a media organisation. In this context, we decided to do some additional research and explore ethical, responsible and trustworthy AI for media tools.

Definitions

First of all, it was important to make sense of the big terms that have been discussed for several years now. Below is our own interpretation, which we used as a guideline throughout the process:

ai4media_trust_02.png
4 important definitions. DW Innovation

In our use case, we decided to focus on the concept of technically driven Trustworthy AI, also because this was an important field of research in the entire AI4Media project (covered by several tasks and research partners, and led by IBM Research Europe - Ireland). For some years now, there have been well-known criteria describing Trustworthy AI – and a set of widely acknowledged, common principles.

For our work in AI4MEDIA, we summarized them as follows:

ai4mediatrust_03.png
The criteria of Trustworthy AI. DW Innovation

Now it's easy to learn about the principles – but it's a lot harder to implement them in an application, especially when the AI component is developed by an external third party and integrated into a media platform operated by yet another technology provider. In this situation, it's necessary to properly communicate trustworthy AI requirements – and liaise with all stakeholders. This is the business-related challenge we tackled in our AI4Media use case.

Implementing Selected Trustworthy AI Principles

We started by choosing one suitable AI component integrated into the Truly Media demonstrator to demonstrate ways of increasing trustworthiness: the Deep Fake Detection AI service from the MeVer Group at CERTH-ITI. This service analyzes a video and then gives a value (e.g. 80%) indicating the probability that a single extracted face is a deep fake, as well as an overall value for the entire video. We then selected two of the trustworthy AI principles listed above to be realized in a practical way for the Deep Fake Detection Service: transparency and robustness.
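
To make this kind of predictive output more tangible, here is a minimal sketch in Python of how a per-face and per-video prediction could be read in a verification workflow. The response structure, field names and threshold are our own assumptions for illustration and do not reflect the actual MeVer service API.

```python
# Hypothetical shape of a deep fake detection result, for illustration only.
# Field names, values and the threshold are assumptions; the actual output of
# the MeVer/CERTH-ITI service may be structured differently.
result = {
    "video_id": "example_clip_001",
    "overall_deepfake_probability": 0.80,  # aggregate score for the whole video
    "faces": [
        {"face_id": 1, "deepfake_probability": 0.92},
        {"face_id": 2, "deepfake_probability": 0.11},
    ],
}

# One cautious way to read such scores: treat them as a triage signal rather
# than a verdict, and flag the video for manual verification above a threshold.
REVIEW_THRESHOLD = 0.7
flag_for_review = (
    result["overall_deepfake_probability"] >= REVIEW_THRESHOLD
    or any(f["deepfake_probability"] >= REVIEW_THRESHOLD for f in result["faces"])
)
print(f"Flag for manual verification: {flag_for_review}")
```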

Transparency is often the basis for other principles, and it was the most important requirement for us. Without detailed documentation, it's close to impossible for users and managers to assess whether the model behind the service is reliable and its output can be trusted. It's also important to know whether the AI model behind the service maintains its performance and accuracy when it is attacked by adversarial actors, because such attacks can be difficult to spot.

The transparency of this AI component was increased in cooperation with the component owner CERTH-ITI in the following way:

First, we asked the developers to provide a Model Card for the service, which presents key information in technical language, covering:

  • model details

  • intended use

  • factors affecting performance

  • metrics and datasets used

  • caveats and recommendations

  • quantitative analyses

  • performance intuition

We then worked closely with CERTH-ITI to transform this highly technical information into a more business-oriented text that can be understood and used by AI governance and business managers as well as curious end users of Truly Media. The information in each section was "translated" into non-technical language, but also extended with additional explanations where necessary. Furthermore, we added a new section that briefly covers wider responsible AI topics, such as legal compliance, privacy, fairness, explainability, general IT security, sustainability and societal/ethical implications. This detailed AI Governance documentation now answers the following questions:

  • What is the problem this AI service aims to solve?

  • Which tasks can it conduct?

  • Which tasks are out of scope?

  • Who are the owners/creators of the service?

  • Which version is running?

  • How does the service work?

  • Which models are in use?

  • Which other technologies and approaches are involved?

  • What about the datasets?

  • What affects the accuracy of this AI service?

  • What about performance measurement?

  • How does the service perform under adversarial attack?

  • What are other responsible AI issues or risks?

Using all available information, an ATC business manager prepared a brief guide for end users, which was incorporated directly into the software. The guide offers user-oriented insights and provides answers to questions likely to be asked by less experienced users:

  • How should I use the service?

  • How should I interpret the results?

  • How accurate are they?

  • Why does the service generate false positive/negative results?

  • What are the limitations of the service?

  • How can I ensure the analysis is as good as it can be?

  • How much time does the service need to analyze an image/video?

ATC made the desired information available directly within the Truly Media tool in two places: within a dedicated menu section called "Transparent AI" – and via the results interface window, under the link "Learn more about this AI service".

The image below shows three transparency documents that are aimed at different audiences: AI technologists, AI governance managers and end users such as journalists and verification specialists.

ai4mediatrust_04.png
Three different transparency documents for three different stakeholders. DW Innovation

Both the technical Model Card and the AI Governance documentation contain information about the robustness of the AI model used for the deep fake detection service. Robustness is one of the Trustworthy AI principles and is closely related to AI governance. It refers to the overall resilience of an AI model against various forms of attack (e.g. an adversary could gain access to the deployed model of the AI component and perform minor, imperceptible alterations to the input data that significantly reduce the model's accuracy). The trustworthiness of an AI component can be enhanced by applying algorithmic, AI-powered robustness tools to its AI model. In our use case, the technology partners used the Adversarial Robustness Toolbox (ART) from IBM.
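
To give a rough idea of what such an algorithmic robustness check can look like, below is a minimal sketch using ART with a small stand-in PyTorch classifier and random placeholder data: clean accuracy is compared with accuracy under a simple evasion attack (FGSM). This illustrates the general technique only; it is not the actual evaluation carried out for the deep fake detection model in AI4Media.

```python
# Minimal robustness-check sketch with IBM's Adversarial Robustness Toolbox (ART).
# The tiny model and random data below are placeholders; a real evaluation would
# use the deployed detection model and labelled face data.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Stand-in binary classifier (real vs. deep fake face crop)
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# Placeholder test data: 16 "face crops" with random pixels and random labels
x_test = np.random.rand(16, 3, 224, 224).astype(np.float32)
y_test = np.random.randint(0, 2, size=16)

# Accuracy on clean inputs
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)

# Craft small, near-imperceptible perturbations (FGSM) and re-evaluate
attack = FastGradientMethod(estimator=classifier, eps=0.03)
x_adv = attack.generate(x=x_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)

print(f"Clean accuracy: {clean_acc:.2%} | accuracy under FGSM attack: {adv_acc:.2%}")
```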

Following DW's requirements, the AI model of the deep fake detection service was evaluated and enhanced in terms of robustness by the component owner CERTH, in close cooperation with AI4Media's expert partner for algorithmic trustworthy AI technologies, IBM Research Europe - Dublin. DW then developed requirements to help IBM produce the right kind of transparency information for the technical and AI governance documentation. In addition, the two partners co-authored a non-technical business guide on AI robustness for internal managers, which explains the robustness evaluation process in the context of specific tools and business-related security scenarios.

Lessons learned

While it's important to assess the vulnerabilities and robustness of AI components used in media tools, it's also necessary to make the results accessible to non-technical audiences, e.g. AI governance managers or end users. We learned that the practical and technical implementation of trustworthy AI in media tools and systems is complex, especially when third parties such as AI component providers or media tool operators are involved. The process can be time-consuming, as it requires cooperation between several stakeholders. It also remains challenging to find suitable ways of integrating complex transparency information for different audiences into a well-designed user interface.

Principles vs. Practice

Many media organizations now look at AI-related risks, staff concerns, acceptance levels and the degree of trust in outcomes or predictions. As in other industries, they might publish AI guidelines or ethical principles for the use of AI in their organization. Globally, there is dynamic development in the field of Responsible and Trustworthy AI, going beyond accuracy with dimensions such as fairness, privacy, explainability, robustness, security, transparency and governance. In addition, there is specific AI regulation, such as the European Union's AI Act. While the principles of responsible AI seem to be well established, more work and research are needed to ensure their implementation in real-world media workflows.

Many thanks for their input, interest and enthusiasm go to ATC, CERTH-ITI, and IBM Research Europe – Ireland.

Author
DW Innovation