“GPT is wonderful! Use it without fear – but with caution”, advises Luísa Coheur at Técnico Open Day
The auditorium was completely full, mostly with young people who had come to hear Luísa Coheur’s talk at Técnico Open Day: “ChatGPT – potentials and risks.” And there was no disappointment, since a good share of the participants stayed beyond schedule for a lively conversation with the researcher at INESC-ID’s Human Language Technologies (HLT) lab and teacher at Técnico.
The talk started with a retrospective on the origins of the now ubiquitous Large Language Models (LLMs). “They were not born today; they are the result of many years of study in natural language processing, and also in machine learning”, the researcher noted.
Starting in the 1960s, the field has grown ever since, with an impressive evolution after the first public presentation of the most famous LLM, GPT, in 2019. “The first versions generated text that was correct, but still a bit confusing”, Luísa told the audience. “But with GPT-3 it is madness!”
Declaring herself a great enthusiast of these models, the researcher and teacher urged the students to incorporate this tool into their lives, including to fulfil their academic tasks. “I use it all the time, to prepare classes or to make presentations like this one”, Luísa revealed, pointing to her slides’ illustrations, all created through instructions given to the model.
But if the first half of the talk was devoted to the advantages of using LLMs, the second focused on the risks. “Never trust it completely, always check.” Voice and image manipulation, made-up sentences, and invented sources are the most critical aspects of this technology, and there is only one way to fight them: to know the technology well and be aware of its faults.
As former US President Franklin D. Roosevelt famously said: “the only thing we have to fear is fear itself.”
Another presentation that generated interest from the participants was the one about the first university CubeSat entirely conceived and developed in Portugal, by a collaborative team that includes INESC-ID researchers Gonçalo Tavares and Moisés Piedade. The talk was delivered by João Paulo Monteiro, one of the researchers responsible for the project, who will likely be in French Guiana in September to attend the launch event.
(Photo: © 2024 INESC-ID)
Upcoming Events
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a two-day educational workshop on responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 and 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of Artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security and unsure about the role they can play in addressing them. Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security. As part of that initiative, SIPRI and UNODA are organising a series of capacity building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; c) how they could support the promotion of responsible AI for peace and security.”