Two new AI projects: vera.ai and MAMMOth

Two new Horizon Europe AI projects were recently launched by members of our network. The first, vera.ai, seeks to build trustworthy AI solutions against disinformation and to lay the foundation for future research in the area. The second, MAMMOth, tackles bias that may lead to discrimination against minority and marginalised groups by creating tools for fairness-aware AI that ensure accountability with respect to protected attributes such as gender, race and age.


MAMMOth – Multi-Attribute, Multimodal Bias Mitigation in AI Systems

Artificial intelligence (AI) offers great promise for solving business and social problems, but it also risks inadvertently discriminating against minority and marginalised groups. The EU-funded MAMMOth project, which started on November 1st 2022, tackles this bias by focusing on multi-discrimination mitigation for tabular, network and multimodal data. Working with computer science and AI experts, the project will create tools for fairness-aware AI that ensure accountability with respect to protected attributes such as gender, race and age. The project will also engage communities of vulnerable and/or underrepresented groups in AI research, so that user needs and pain points are truly at the centre of the agenda. The end goal is to develop pilot projects for finance/loan applications, identity verification and academic evaluation.
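To make the multi-attribute angle concrete, here is a minimal sketch of auditing outcome rates across intersections of protected attributes. The data, function name and the demographic-parity measure are illustrative assumptions for this post, not MAMMOth's actual toolkit:

```python
# A minimal sketch, assuming pandas; the loan data and the
# demographic-parity measure are illustrative, not MAMMOth's toolkit.
import pandas as pd

def demographic_parity_gaps(df, outcome, attributes):
    """Positive-outcome rate of each subgroup minus the overall rate.

    Auditing attribute *combinations* (e.g. gender x age) can reveal
    discrimination against intersectional subgroups that stays hidden
    when each attribute is checked in isolation.
    """
    overall = df[outcome].mean()
    return df.groupby(attributes)[outcome].mean() - overall

# Hypothetical loan-approval outcomes with two protected attributes.
loans = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "age_band": ["<30", "<30", ">=30", ">=30", "<30", "<30", ">=30", ">=30"],
    "approved": [0, 0, 1, 1, 1, 1, 1, 0],
})
print(demographic_parity_gaps(loans, "approved", ["gender", "age_band"]))
# A strongly negative gap for a subgroup, here ("f", "<30"),
# flags that subgroup as a candidate for bias mitigation.
```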
You can follow the project on Twitter, Facebook and LinkedIn.

vera.ai: VERification Assisted by Artificial Intelligence

Online disinformation and fake media content have emerged as a serious threat to democracy, the economy and society. Recent advances in AI have enabled the creation of highly realistic synthetic content and its artificial amplification through AI-powered bot networks. As a result, it is extremely challenging for researchers and media professionals to assess the veracity and credibility of online content and to uncover the highly complex disinformation campaigns behind it.

vera.ai, a Horizon Europe project that launched on September 15th 2022, seeks to build professional, trustworthy AI solutions against advanced disinformation techniques, co-created with and for media professionals and researchers, and to lay the foundation for future research in the area of AI against disinformation.

Key novel characteristics of the vera.ai models will be fairness, transparency (including explainability), robustness against concept drift, continuous adaptation to evolving disinformation through a fact-checker-in-the-loop approach, and the ability to handle multimodal and multilingual content. Recognising the perils of AI-generated content, the project will develop tools for deepfake detection in all formats (audio, video, image, text).
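As an illustration of the fact-checker-in-the-loop idea, the following sketch routes low-confidence predictions to human fact-checkers and folds their verdicts back into an incrementally trained classifier. It assumes scikit-learn; the models, threshold and labels are placeholders, not vera.ai's actual architecture:

```python
# A minimal sketch, assuming scikit-learn; HashingVectorizer and
# SGDClassifier are stand-ins for vera.ai's (unspecified) models.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorize = HashingVectorizer(n_features=2**18).transform
clf = SGDClassifier(loss="log_loss")  # supports incremental updates
# Seed the model; 0 = credible, 1 = disinformation (illustrative labels).
clf.partial_fit(vectorize(["seed claim"]), [0], classes=[0, 1])

def triage(texts, threshold=0.8):
    """Keep confident predictions; route uncertain items to fact-checkers."""
    probs = clf.predict_proba(vectorize(texts))
    return [t for t, p in zip(texts, probs) if p.max() < threshold]

def incorporate_verdicts(texts, labels):
    """Fold freshly fact-checked items back into the model, so it keeps
    tracking how disinformation evolves (a simple answer to concept drift)."""
    clf.partial_fit(vectorize(texts), labels)
```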

The project adopts a multidisciplinary co-creation approach to AI technology design, coupled with open-source algorithms. A unique proposition is the grounding of the AI models in continuously collected fact-checking data, gathered from the tens of thousands of instances of real-life content being verified in the InVID-WeVerify plugin and the Truly Media/EDMO platform. Social media and web content will be analysed and contextualised to expose disinformation campaigns and measure their impact.

Results will be validated by professional journalists and fact-checkers from the project partners (DW, AFP, EUDL, EBU), by external participants (through our affiliation with EDMO and seven EDMO Hubs), by the community of more than 53,000 users of the InVID-WeVerify verification plugin, and by media literacy, human rights and emergency response organisations.

You can follow the project on Twitter.