iv4XR: Intelligent Verification/Validation for Extended Reality Based Systems
iv4XR (Intelligent Verification/Validation for Extended Reality Based Systems), a Horizon 2020-funded European project, concluded on 31 December 2022, delivering a set of notable results.
Coordinated by Rui Prada — INESC-ID Artificial Intelligence for People and Society Senior Researcher and Associate Professor at Instituto Superior Técnico — and active between October 2019 and December 2022, iv4XR aimed to build “a novel verification and validation technology for XR [Extended Reality] systems based on techniques from AI to provide learning and reasoning over a virtual world.” Ultimately, iv4XR provided a novel toolkit for XR developers to test and explore their XR systems as they build them, bit by bit.
“Automating Quality Assurance (QA) tests of XR systems is a challenging but promising research field. It can bring great benefits to the XR industry by extending the possibilities of current QA practices and reducing its costs by reducing the need for user testing,” Prada commented.
The project has delivered the iv4xr toolkit — a multi-agent testing framework — along with a substantial body of publications. Looking back from the final consortium meeting, held in person on 29 November 2022 in Utrecht (the Netherlands), Rui Prada emphasized that “In the past three years, the iv4XR team was highly engaged in achieving the ambitious vision of the project. We are proud of the results presented in the toolkit that establish the basis for agent-based testing as a practice to test XR systems. Our studies in the domains of games, AI simulations, and sensor networks show promising results for the value of the approach. We close the project with a sense of accomplishment, but with the certainty that many interesting research questions remain.”
To sum up the project, the iv4XR team prepared the video below. Have a look at how XR system testing has just become easier:
Upcoming Events
NII International Internship Programme Presentation and Q&A by Emmanuel Planas
On April 30, Emmanuel Planas, the acting director of the Global Liaison Office (GLO) and responsible for the internationalisation program at the National Institute of Informatics (NII) in Tokyo, Japan, will give a presentation to introduce the NII and its internship program to INESC-ID students and IST’s Master’s in Computer Science students.
Date & Time: April 30, 14h00
Where: Sala Polivalente, Técnico – Taguspark
“The NII International Internship Program is an exchange activity with students from institutions with which NII has concluded a Memorandum of Understanding (MOU) agreement. This incentive program aims at giving interns the opportunity for professional and personal development by engaging in research activities under the guidance and supervision of NII researchers.
The NII Internship Program is open to Research Master’s and PhD students who are currently enrolled at one of the partner institutions that have signed an MOU agreement with NII.”
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a two-day educational workshop on Responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them. Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security. As part of that initiative, SIPRI and UNODA are organising a series of capacity-building workshops for STEM students (at PhD and Master’s levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; and c) how they could support the promotion of responsible AI for peace and security.”