In the Media: Rui Maranhão talks with Exame Informática about ICSE24’s Impact on Software Engineering and AI Innovations
INESC-ID researcher and full professor at Faculdade de Engenharia da Universidade do Porto (FEUP), Rui Maranhão*, recently spoke to Exame Informática, of the Trust in News group, about the upcoming 2024 edition of the International Conference on Software Engineering (ICSE), which will take place in Lisbon.
The interview was an opportunity for Rui Maranhão, chair of the conference, to share exciting news and thoughts about the event, including the record number of participants from African countries. He also emphasised the fundamental role ICSE plays in the software engineering community: the conference not only highlights recent innovations and research in the field but also ethical and environmental concerns, while defining future trends and fostering collaboration between academia and industry.
ICSE is considered the premier international software engineering conference. This year, it will be held at Centro Cultural de Belém, in Lisbon, from April 14 to April 20. Registrations have reached record numbers, and the conference will focus on the link between software engineering and artificial intelligence (AI), with a particular emphasis on Large Language Models.
Read the full article (in PT) here.
*Rui Maranhão is a researcher at INESC-ID within the Automated Reasoning and Software Reliability research area, and full professor at Faculdade de Engenharia da Universidade do Porto. His research focuses on software quality, emphasising automating the testing and debugging phases of the software development life-cycle as well as self-adaptation.
Upcoming Events
INESC-ID Distinguished Lecture: “(Programming Languages) in Agda = Programming (Languages in Agda)” by Professor Philip Wadler
On June 4, Professor Philip Wadler will give an INESC-ID Distinguished Lecture organized in the scope of the BIG ERA Chair Project, titled “(Programming Languages) in Agda = Programming (Languages in Agda)”.
Registration: here (free but mandatory)
Date: June 4, 2024
Time: 15h00-16h15
Where: Anfiteatro Abreu Faro – Complexo Interdisciplinar, Instituto Superior Técnico (Alameda)
Abstract: The most profound connection between logic and computation is a pun. The doctrine of Propositions as Types asserts that propositions correspond to types, proofs to programs, and simplification of proofs to evaluation of programs. Proof by induction is just programming by recursion. Finding a proof becomes as fun as hacking a program. Dependently-typed programming languages, such as Agda, exploit this pun. This talk introduces *Programming Language Foundations in Agda*, a textbook that doubles as an executable Agda script—and also explains the role Agda plays in IOG’s Cardano cryptocurrency.
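The pun the abstract describes can be made concrete in any dependently-typed proof assistant. As a minimal sketch (in Lean rather than Agda, purely for illustration; this example is not from the talk): a proposition is written as a type, its proof is a program inhabiting that type, and proof by induction is literally a recursive function.

```lean
-- Propositions as types: the implication A ∧ B → A corresponds to the
-- first-projection program on a pair type.
theorem and_left {A B : Prop} : A ∧ B → A :=
  fun h => h.left

-- Proof by induction is programming by recursion: a proof that every
-- natural number satisfies 0 + n = n, written as a recursive function
-- with one clause per constructor of Nat.
theorem zero_add' : ∀ n : Nat, 0 + n = n
  | 0     => rfl
  | n + 1 => congrArg Nat.succ (zero_add' n)
```

Checking either "proof" is just type-checking the program, which is the sense in which finding a proof becomes as fun as hacking a program.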
Short Bio: Philip Wadler is a Professor of Computer Science at the University of Edinburgh and a Senior Research Fellow at IOHK. He is a Fellow of the Royal Society, a Fellow of the Royal Society of Edinburgh, and an ACM Fellow. He is head of the steering committee for Proceedings of the ACM, past editor-in-chief of PACMPL and JFP, past chair of ACM SIGPLAN, past holder of a Royal Society-Wolfson Research Merit Fellowship, winner of the SIGPLAN Distinguished Service Award, and a winner of the POPL Most Influential Paper Award. Previously, he worked or studied at Stanford, Xerox PARC, CMU, Oxford, Chalmers, Glasgow, Bell Labs, and Avaya Labs, and visited as a guest professor in Copenhagen, Sydney, and Paris. He has an h-index of over 70 with more than 25,000 citations to his work, according to Google Scholar. He contributed to the designs of Haskell, Java, and XQuery, and is co-author of Introduction to Functional Programming (Prentice Hall, 1988), XQuery from the Experts (Addison Wesley, 2004), Generics and Collections in Java (O’Reilly, 2006), and Programming Language Foundations in Agda (2018). He has delivered invited talks in locations ranging from Aizu to Zurich.
Philip Wadler likes to introduce theory into practice, and practice into theory. An example of theory into practice: GJ, the basis for Java with generics, derives from quantifiers in second-order logic. An example of practice into theory: Featherweight Java specifies the core of Java in less than one page of rules. He is a principal designer of the Haskell programming language, contributing to its two main innovations, type classes and monads. The YouTube video of his Strange Loop talk Propositions as Types has over 100,000 views. Wadler is also area leader for programming languages at IOHK (now Input Output Global), the blockchain engineering company developing Cardano. He has contributed to work on Plutus, a Turing-complete smart contract language for Cardano written in Haskell; the UTXO ledger system, native tokens, and System F in Agda.
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a 2-day educational workshop on Responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of Artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security and unsure about the role they can play in addressing them. Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security. As part of that initiative, SIPRI and UNODA are organising a series of capacity building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; c) how they could support the promotion of responsible AI for peace and security.”