RiverCure: a new paradigm in flood modelling and simulation
The second RiverCure workshop took place on 02 June 2022 at Instituto Superior Técnico, signaling the close of this four-year FCT-funded project that brought together INESC-ID, Instituto Superior Técnico and Agência Portuguesa do Ambiente (APA).
The workshop included an overview of the RiverCure project and its main outputs, as well as the mathematical modelling research, laboratory and field work it involved and the conception, implementation and applications of the “RiverCure Portal”.
Running from June 2018 to June 2022, RiverCure (full title “RiverCure: Curating and assimilating crowdsourced and authoritative data to reduce uncertainty in river flow modelling”) set out to reduce the uncertainty involved in flood simulation and forecasting. It did so by designing and implementing a novel Web Geographic Information System (GIS) platform, the “RiverCure Portal”, which combines observations and hydrodynamic modelling tools for the operational response, emergency preparedness, and risk assessment stages of the flood risk management cycle.
In addition to combining observations with computer modelling, RiverCure explored recent advances in computer vision and deep learning to classify geo-referenced images of flooding events shared by citizens on social media. The project evaluated the use of neural networks, computing systems that mimic biological neural networks by processing information through a series of interconnected mathematical functions represented by “artificial neurons”, both in discriminating images that show direct evidence of a flood and in estimating the severity of the flooding event.
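To make the idea of an “artificial neuron” concrete, the sketch below shows a single neuron computing a weighted sum of inputs passed through a sigmoid activation. The feature names, weights, and bias are purely illustrative assumptions for this note, not values from the RiverCure models, which used full deep networks rather than a single neuron.

```python
import math

def neuron(inputs, weights, bias):
    """A single 'artificial neuron': a weighted sum of inputs passed
    through a sigmoid activation, producing a value in (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical image-derived features (e.g. fraction of water-coloured
# pixels, edge density); weights and bias are illustrative only.
features = [0.8, 0.3]
weights = [2.5, -1.0]
bias = -0.5

# The output can be read as a confidence that the image shows flooding.
score = neuron(features, weights, bias)
```

A deep network chains many layers of such neurons, with the weights learned from labelled examples instead of being set by hand.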
RiverCure was coordinated by Rui Ferreira (IST/CERIS) and Alberto Silva (INESC-ID). Rui Ferreira is Associate Professor at Instituto Superior Técnico and a senior researcher in the CERIS Hydraulics Research Group. Alberto Silva is a researcher within the INESC-ID Information and Decision Support Systems Research Area and Associate Professor at the Department of Computer Science of Instituto Superior Técnico.
Upcoming Events
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a two-day educational workshop on Responsible AI for Peace and Security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 and 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of Artificial Intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them.

Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security.

As part of that initiative, SIPRI and UNODA are organising a series of capacity-building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; c) how they could support the promotion of responsible AI for peace and security.”