vera.ai's core mission is to build reliable AI tools that help spot and counter disinformation online. This covers all types of content: text, audio, images, and video. The project caters to what has become the core group for digital verification and fake detection: fact-checkers, journalists, human rights activists, and legal investigators. The pan-European consortium consists of 14 partners, led by the Media Verification (MeVer) team of the Informatics Research Institute ITI-CERTH in Thessaloniki. All members of vera.ai are committed to a user-centric approach, ensuring the project meets real-life verification needs and requirements.

Speaking of which: the iterative formulation and refinement of user requirements will be one of DW's main tasks in the project. Our other duties include operating the vera.ai website and social media channels. We will also contribute to several technological work packages, e.g. multilingual credibility assessment and evidence retrieval, audiovisual content analysis, and multimodal deepfake and manipulation analysis.
vera.ai greatly benefits from the consortium's expertise in verification and disinformation analysis. Most notably, the project builds on the work done in WeVerify, which ran from early 2019 to late 2021. As a consequence, the vera.ai consortium has already delivered its first milestone, in record time: a new version of the very popular and indispensable verification plug-in (70,000+ monthly users so far). More info on vera.ai (including a full list of partners) can be found in this kick-off thread on Twitter. For now, twitter.com/veraai_eu is the project's primary communication channel. However, a fully-fledged website is scheduled to launch in November/December (bookmark: www.veraai.eu).