“GPT is wonderful! Use it without fear – but with caution”, advises Luísa Coheur at Técnico Open Day
The auditorium was completely full, mostly with young people, who had come to hear Luísa Coheur’s talk at Técnico Open Day: “ChatGPT – potentials and risks.” And they were not disappointed: a good share of the participants stayed beyond schedule in a lively conversation with the researcher at INESC-ID’s Human Language Technologies (HLT) lab and professor at Técnico.
The talk started with a retrospective on the origins of the now ubiquitous Large Language Models (LLMs). “They were not born today; they are the result of many years of study in natural language processing, and also machine learning”, the researcher noted.
Starting in the 1960s, the field has grown ever since, with an impressive evolution after the first public presentation of the most famous LLM, GPT, in 2019. “The first versions generated text that was correct, but still a bit confusing”, Luísa told the audience. “But with GPT-3 it is madness!”
A self-professed enthusiast of these models, the researcher urged the students to incorporate this tool into their lives, including in their academic work. “I use it all the time, to prepare classes or to make presentations like this one”, Luísa revealed, citing the example of her slides’ illustrations, all created through instructions given to the model.
But if the first half of the talk was devoted to the advantages of using LLMs, the second focused on the risks. “Never trust it completely, always check.” Voice and image manipulation, made-up sentences, and invented sources are among the most critical aspects of this technology. And there is only one way to fight them: to know the technology well and be aware of its faults.
As former US President Franklin D. Roosevelt famously said: “the only thing we have to fear is fear itself.”
Another presentation that generated interest among the participants was the one about the first University CubeSat, a satellite entirely conceived and developed in Portugal by a collaborative team that includes INESC-ID researchers Gonçalo Tavares and Moisés Piedade. The talk was delivered by João Paulo Monteiro, one of the researchers responsible for the project, who will likely be in French Guiana in September to attend the launch.
(Photo: © 2024 INESC-ID)
Upcoming Events
INESC-ID Distinguished Lecture: “(Programming Languages) in Agda = Programming (Languages in Agda)” by Professor Philip Wadler
On June 4, Professor Philip Wadler will give an INESC-ID Distinguished Lecture organized in the scope of the BIG ERA Chair Project, titled “(Programming Languages) in Agda = Programming (Languages in Agda)”.
Registration: here (free but mandatory)
Date: June 4, 2024
Time: 15h00-16h15
Where: Anfiteatro Abreu Faro – Complexo Interdisciplinar, Instituto Superior Técnico (Alameda)
Abstract: The most profound connection between logic and computation is a pun. The doctrine of Propositions as Types asserts that propositions correspond to types, proofs to programs, and simplification of proofs to evaluation of programs. Proof by induction is just programming by recursion. Finding a proof becomes as fun as hacking a program. Dependently-typed programming languages, such as Agda, exploit this pun. This talk introduces *Programming Language Foundations in Agda*, a textbook that doubles as an executable Agda script—and also explains the role Agda plays in IOG’s Cardano cryptocurrency.
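The correspondence described in the abstract can be made concrete in any dependently-typed proof assistant. As an illustrative sketch (using Lean rather than Agda, with hypothetical names), a proposition is literally a type whose proofs are programs, and proof by induction is literally recursion:

```lean
-- Propositions as types: a proof of A ∧ B → B ∧ A is just a program
-- that swaps the two components of a pair.
def andSwap {A B : Prop} (h : A ∧ B) : B ∧ A :=
  ⟨h.right, h.left⟩

-- Proof by induction is programming by recursion: this "function"
-- proves 0 + n = n by recursing on n, exactly as a program would.
theorem zeroAdd : (n : Nat) → 0 + n = n
  | 0     => rfl
  | n + 1 => congrArg Nat.succ (zeroAdd n)
```

The same idea, written in Agda, is the organizing principle of *Programming Language Foundations in Agda*: the textbook’s proofs are programs that the Agda type checker verifies.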
Short Bio: Philip Wadler is a Professor of Computer Science at the University of Edinburgh and a Senior Research Fellow at IOHK. He is a Fellow of the Royal Society, a Fellow of the Royal Society of Edinburgh, and an ACM Fellow. He is head of the steering committee for Proceedings of the ACM, past editor-in-chief of PACMPL and JFP, past chair of ACM SIGPLAN, past holder of a Royal Society-Wolfson Research Merit Fellowship, winner of the SIGPLAN Distinguished Service Award, and a winner of the POPL Most Influential Paper Award. Previously, he worked or studied at Stanford, Xerox PARC, CMU, Oxford, Chalmers, Glasgow, Bell Labs, and Avaya Labs, and visited as a guest professor in Copenhagen, Sydney, and Paris. He has an h-index of over 70 with more than 25,000 citations to his work, according to Google Scholar. He contributed to the designs of Haskell, Java, and XQuery, and is co-author of Introduction to Functional Programming (Prentice Hall, 1988), XQuery from the Experts (Addison Wesley, 2004), Generics and Collections in Java (O’Reilly, 2006), and Programming Language Foundations in Agda (2018). He has delivered invited talks in locations ranging from Aizu to Zurich.
Philip Wadler likes to introduce theory into practice, and practice into theory. An example of theory into practice: GJ, the basis for Java with generics, derives from quantifiers in second-order logic. An example of practice into theory: Featherweight Java specifies the core of Java in less than one page of rules. He is a principal designer of the Haskell programming language, contributing to its two main innovations, type classes and monads. The YouTube video of his Strange Loop talk Propositions as Types has over 100,000 views. Wadler is also area leader for programming languages at IOHK (now Input Output Global), the blockchain engineering company developing Cardano. He has contributed to work on Plutus, a Turing-complete smart contract language for Cardano written in Haskell; the UTXO ledger system, native tokens, and System F in Agda.
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a 2-day educational workshop on Responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of Artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security and unsure about the role they can play in addressing them. Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security. As part of that initiative, SIPRI and UNODA are organising a series of capacity building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; c) how they could support the promotion of responsible AI for peace and security.”