INESC-ID awarded $300K by the US Air Force to investigate human-robot interactions
INESC-ID has been awarded $300K by the US Air Force Office of Scientific Research (AFOSR) to investigate human-robot interactions. The funded project, “Trustworthy Ad Hoc Teamwork”, officially starts today, 01 September 2022, and will run for two years.
This project is led at INESC-ID by Alberto Sardinha, a researcher in the Artificial Intelligence for People and Society Research Area and Assistant Professor in the Department of Computer Science and Engineering at Instituto Superior Técnico. Alberto explained: “Trust is an essential element for effective cooperation between humans and robots. However, to the best of our knowledge, no research work has analyzed trust between robots and humans within ad hoc teamwork scenarios or created tailored algorithms for trustworthy ad hoc teamwork.”
“Trustworthy Ad Hoc Teamwork” investigates how to create an autonomous agent that can efficiently and robustly collaborate with previously unknown teammates. The project plans to achieve this by developing novel ad hoc teamwork algorithms that build trust in human-robot interactions and by addressing a vital question: How can a robot learn to cooperate with unknown human teammates in complex domains while fostering the development of trust, given that the robot has no pre-coordination protocol and must learn to cooperate on the fly?
The project combines sensors networked within the environment (e.g., microphones and wall-mounted cameras), the robot’s on-board sensors (e.g., lasers, cameras, and RFID readers), an ad hoc robot composed of perception, decision, and execution modules, and output modalities (the components and hardware that support the robot’s behavior and actions). With this setup, it aims to answer a series of additional questions: How can a robot learn a cooperative task, in a trustworthy way, in the absence of a reward signal and without full observability of states and actions? How can ad hoc teamwork algorithms be tailored to increase a human’s trust in robots? And how can the multimodal capabilities of a robotic teammate (e.g., speech recognition and synthesis, recognition of people and intentions, and social robot expression) provide valuable information that facilitates the development of trust within ad hoc teamwork scenarios?
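To make the setting concrete, below is a minimal, purely illustrative sketch (in Python) of an ad hoc teamwork loop: an agent with no pre-coordination protocol observes an unknown teammate and adapts its behavior on the fly, using observations alone rather than a reward signal. All names and the toy task are hypothetical and do not come from the project.

```python
import random
from collections import Counter

class AdHocAgent:
    """Toy ad hoc teammate: infers an unknown partner's behavior from
    observed actions alone and best-responds, with no pre-coordination."""

    def __init__(self, actions):
        self.actions = actions
        self.partner_counts = Counter()  # evidence gathered about the partner

    def observe(self, partner_action):
        # Perception module: update beliefs from observation only; there is
        # no reward signal and no access to the partner's internal state.
        self.partner_counts[partner_action] += 1

    def act(self):
        # Decision module: predict the partner's most likely action and
        # match it (in this toy task, cooperation means acting in sync).
        if not self.partner_counts:
            return random.choice(self.actions)  # no evidence yet: explore
        return self.partner_counts.most_common(1)[0][0]

# Execution: a short episode against a partner whose policy is unknown a priori.
partner_policy = lambda: "lift"
agent = AdHocAgent(actions=["lift", "push", "wait"])
for step in range(5):
    a_agent, a_partner = agent.act(), partner_policy()
    print(f"step {step}: agent={a_agent}, partner={a_partner}")
    agent.observe(a_partner)  # after a few steps the agent locks onto "lift"
```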
“Trustworthy Ad Hoc Teamwork” is Alberto Sardinha’s third research project on this topic. The first project (Ad Hoc Teams: Ad Hoc Teams With Humans And Robots) was also funded by the AFOSR, and the second (HOTSPOT: Human-robOt TeamS without PrecoOrdinaTion) by FCT. The current grant falls under Trust and Influence, one of AFOSR’s interdisciplinary basic research funding portfolios.
Led by INESC-ID, “Trustworthy Ad Hoc Teamwork” will also include the participation of PUC-Rio (Brazil).
Upcoming Events
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a two-day educational workshop on Responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of artificial intelligence (AI) on people’s day-to-day lives, the impacts of AI on international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector too often remain unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them.

Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security.

As part of that initiative, SIPRI and UNODA are organising a series of capacity-building workshops for STEM students (at PhD and Master’s levels). These workshops aim to give up-and-coming AI practitioners the opportunity to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; and c) how they could support the promotion of responsible AI for peace and security.”