The expression of hate speech against Afro-descendant, Roma, and LGBTQ+ communities in YouTube comments
What’s in a word, and especially one mobilized for online hate speech (OHS)? A team of INESC-ID researchers has asked exactly that.
Authored by INESC-ID Information and Decision Support Systems (IDSS) researchers Paula Carvalho and Danielle Caled, and Human Language Technologies (HLT) researchers Fernando Batista and Ricardo Ribeiro (together with Cláudia Silva from ITI-LARSyS)*, The expression of hate speech against Afro-descendant, Roma, and LGBTQ+ communities in YouTube comments — published this month in the Journal of Language Aggression and Conflict — explores the prevalence of overt and covert hate speech, counter-speech and offensive speech in CO-HATE (Counter, Offensive and Hate speech), a corpus of 20,590 Portuguese YouTube comments posted by more than 8,000 different online users.
The team asked two simple yet challenging questions: 1) how does OHS against the Afro-descendant, Roma, and LGBTQ+ communities materialize in the Portuguese social context? and 2) what are the main linguistic and rhetorical features underlying the expression of covert hate speech? To answer them, they created a detailed corpus of written Portuguese, an essential resource for studying and identifying online hate speech targeting Afro-descendant, Roma, and LGBTQ+ communities on social media. They then analyzed the specific characteristics of hateful comments towards these groups by combining quantitative and qualitative research methods grounded in corpus linguistics, which analyzes large collections of texts to uncover patterns and relationships between words and structures, providing data-driven insights into the myriad ways language is used. Finally, they measured agreement among annotators when identifying OHS and related topics.
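The article does not specify which agreement statistic the team used, but inter-annotator agreement on labels such as overt hate speech, covert hate speech, counter-speech, or offensive speech is commonly quantified with a chance-corrected measure like Cohen's kappa. The following minimal sketch (plain Python, with purely illustrative labels and annotations) shows how such a score is computed for two annotators:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two annotators labeled independently,
    # each following their own label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations over six comments (labels are illustrative only).
ann1 = ["hate", "counter", "offensive", "hate", "none", "hate"]
ann2 = ["hate", "counter", "hate", "hate", "none", "none"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.5
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; covert phenomena such as irony typically yield lower values than overt ones, which is one reason annotator agreement is worth reporting alongside the corpus itself.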
By studying how people express hatred in their comments, the team found that comment writers often rely on specific language and persuasive techniques. They also discovered that hate speech is frequently disguised through irony and fallacious argumentation, a kind of speech that seeks to instill fear and incite action against the targeted groups.
This study offers valuable insights that can help detect online hate speech more effectively. It also deepens our understanding of how hate speech works online in Portugal, especially towards marginalized groups. Furthermore, the corpus created by Paula Carvalho et al. will be a valuable resource for those interested in developing methods to detect both overt and covert hate speech, as well as related behaviors such as counter-speech and offensive language, in Portuguese.
Future research avenues might involve expanding this study to other social media platforms, such as Twitter, and to more communities, such as migrants and refugees. The team also plans to involve more annotators, taking their social backgrounds into account, to better assess agreement across different communities.
This project follows a very successful research line at INESC-ID. Last year we reported on the FCT-funded HATE COVID-19.PT project, coordinated by Paula Carvalho, which developed methods for semi-automatically building a large-scale annotated Portuguese corpus of online hate speech.
*Paula Carvalho, Danielle Caled and Cláudia Silva are also affiliated with Instituto Superior Técnico, and Ricardo Ribeiro with Instituto Universitário de Lisboa (ISCTE-IUL).
Upcoming Events
NII International Internship Programme Presentation and Q&A by Emmanuel Planas
On April 30, Emmanuel Planas, the acting director of the Global Liaison Office (GLO) and responsible for the internationalisation program at the National Institute of Informatics (NII) in Tokyo, Japan, will give a presentation to introduce the NII and its internship program to INESC-ID students and IST’s Master’s in Computer Science students.
Date & Time: April 30, 14h00
Where: Sala Polivalente, Técnico – Taguspark
“The NII International Internship Program is an exchange activity with students from institutions with which NII has concluded a Memorandum of Understanding (MOU) agreement. This incentive program aims at giving interns the opportunity for professional and personal development by engaging in research activities under the guidance and supervision of NII researchers.
The NII Internship Program is open to Research Master’s and PhD students who are currently enrolled at one of the partner institutions that have signed an MOU agreement with NII.”
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a select group of technical students the opportunity to join a 2-day educational workshop on Responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of Artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security and unsure about the role they can play in addressing them. Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security. As part of that initiative, SIPRI and UNODA are organising a series of capacity building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; c) how they could support the promotion of responsible AI for peace and security.”