Isabel Trancoso appointed IEEE Fellow Committee Vice Chair
Isabel Trancoso has been appointed IEEE Fellow Committee Vice Chair, a prestigious leadership role that recognizes her distinguished career and research contributions.
The Institute of Electrical and Electronics Engineers (IEEE) is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. Appointed by the IEEE Board of Directors for a term lasting from 01 January to 31 December 2023, Professor Trancoso — renowned INESC-ID Human Language Technologies researcher and Full Professor at Instituto Superior Técnico (IST) — is now Vice Chair of the Committee that makes recommendations for nominees to be conferred the grade of Fellow, itself “a distinction reserved for select IEEE members whose extraordinary accomplishments in any of the IEEE fields of interest are deemed fitting of this prestigious grade elevation.”
“I view this appointment as an appreciation of the work I have done during several years targeting the improvement of the Fellow selection process,” Professor Trancoso commented. “In particular in the last year, in which I was deeply involved in an ad hoc committee that made major recommendations towards this goal. So I’m equally excited and frightened by the challenges that lie ahead.” Professor Trancoso added that “Being Vice Chair of a Committee that recommends nominees to be conferred the grade of IEEE Fellow would be an enormous responsibility and a great honour at any time. This year it brings the added responsibility of helping to put in place significant modifications.”
The appointment as IEEE Fellow Committee Vice Chair follows a long history of recognition of Professor Trancoso’s career, including membership of the IEEE Fellows Committee and chairing of both the IEEE James Flanagan Award Committee and the ISCA Fellow Selection Committee. Professor Trancoso was elevated to IEEE Fellow in 2011 and to ISCA Fellow in 2014.
Upcoming Events
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a select group of technical students the opportunity to join a two-day educational workshop on Responsible AI for Peace and Security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of Artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security and unsure about the role they can play in addressing them. Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security. As part of that initiative, SIPRI and UNODA are organising a series of capacity building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; c) how they could support the promotion of responsible AI for peace and security.”