Mixing Consistency in Geodistributed Transactions (Distinguished Lecture)
Cornell University, USA
Programming concurrent, distributed systems that mutate shared, persistent, geo-replicated state is hard. To enable high availability and scalability, a new class of weakly consistent data stores has become popular. However, some data needs strong consistency. We introduce mixed-consistency transactions, embodied in a new embedded language, MixT. Programmers explicitly associate consistency models with remote storage sites; within each atomic, isolated transaction, data can be accessed with a mixture of different consistency models.
Compile-time information-flow checking, applied to consistency models, ensures that these models are mixed safely and enables the compiler to automatically partition transactions into a single sub-transaction per consistency model. New run-time mechanisms ensure that consistency models can also be mixed safely, even when the data used by a transaction resides on separate, mutually unaware stores. Performance measurements show that despite offering strong guarantees, mixed-consistency transactions can significantly outperform traditional serializable transactions.
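The core idea in the abstract can be illustrated with a small sketch. Note that this is not the actual MixT API (MixT is a domain-specific language embedded in C++, and its checking happens at compile time); the following toy Python example, in which all names (`Store`, `Labeled`, the consistency levels) are invented for illustration, mimics the information-flow rule at run time: values derived from weakly consistent reads must not influence writes to strongly consistent data.

```python
# Illustrative sketch only, NOT the MixT API: a run-time analogue of the
# information-flow rule that treats consistency levels as integrity labels.

LEVELS = {"causal": 0, "linearizable": 1}  # weaker < stronger

class Labeled:
    """A value tagged with the consistency level it was derived from."""
    def __init__(self, value, level):
        self.value = value
        self.level = level

class Store:
    """A toy key-value store that offers a single consistency level."""
    def __init__(self, consistency):
        self.consistency = consistency
        self.data = {}

    def read(self, key):
        # Reads are labeled with this store's consistency level.
        return Labeled(self.data.get(key), self.consistency)

    def write(self, key, labeled):
        # Flow check: data labeled weaker than this store must not flow in.
        if LEVELS[labeled.level] < LEVELS[self.consistency]:
            raise RuntimeError(
                f"illegal flow: {labeled.level} data into "
                f"{self.consistency} store")
        self.data[key] = labeled.value

causal = Store("causal")
linearizable = Store("linearizable")
linearizable.data["balance"] = 100
causal.data["hint"] = 5

# OK: strongly consistent data may flow to a weaker store.
causal.write("cache", linearizable.read("balance"))

# Rejected: weakly consistent data must not determine strong state.
try:
    linearizable.write("balance", causal.read("hint"))
except RuntimeError as e:
    print(e)  # illegal flow: causal data into linearizable store
```

In MixT this check is static, which is what lets the compiler partition each transaction into one sub-transaction per consistency model rather than paying a run-time cost on every access.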
Andrew Myers is a Professor in the Department of Computer Science at Cornell University in Ithaca, NY. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1999, advised by Barbara Liskov.
His research interests include computer security, programming languages, and distributed and persistent programming systems. His work on computer security has focused on practical, sound, expressive languages and systems for enforcing information security. The Jif programming language makes it possible to write programs which the compiler ensures are secure, and the Fabric system extends this approach to distributed programming. The Polyglot extensible compiler framework has been widely used for programming language research.
Myers is an ACM Fellow. He has received awards for papers appearing in POPL’99, SOSP’01, SOSP’07, CIDR’13, PLDI’13, and PLDI’15.
Myers is the current Editor-in-Chief for ACM Transactions on Programming Languages and Systems (TOPLAS) and past co-EiC for the Journal of Computer Security. He has also served as program chair or co-chair for several conferences: ACM POPL 2018, ACM CCS 2016, POST 2014, IEEE CSF 2010, and IEEE S&P 2009.
Rodrigo Seromenho Miragaia Rodrigues
Anfiteatro VA4, floor -1 of the Civil Engineering Building – IST/Alameda
Workshop: Creative Explanations for Misleading Information
You are being invited to take part in a research study. Before you decide whether or not to take part, it is important for you to understand why the research is being done and what it will involve. Please take time to read the following information carefully.
You have been invited as a Computer Science or Journalism Master's/PhD student. It is important that you know that choosing to take part or not take part in the study will have no impact on your marks, assessments, or future studies. You will receive a 20-euro voucher as compensation for your time.
About This Research
Misinformation is a major societal challenge that is influencing our behaviour and perception in various domains, including democracy, health, economy, and social affairs.
Fact-checking is a journalistic process that verifies pieces of information in order to promote the veracity and correctness of reporting. However, some misinformation is harder to assess and explain because it is not entirely false or entirely true; it typically relies on tactics such as exaggeration, bias, and sensationalised terminology.
As part of the European research project CIMPLE, we are looking for creative ways to generate explanations for verified misinformation, in particular misinformation applying misleading strategies. Creative explanations include memes, lyrics, and poems, to name just a few possible formats.
The input from this workshop will inform artificial intelligence algorithms that generate creative fact-checking explanations that are trustworthy and more appealing to social media users than traditional ones.
What Will I Be Asked To Do If I Agree To Take Part?
The participants will be asked to attend a workshop on 25 May from 10:30 to 12:30, taking place at INESC-ID, Lab Alunos, rooms 407-409.
This workshop is about creative explanations for misleading information that can be found online.
The workshop will involve 12 to 15 participants: Computer Science or Journalism students in their final years.
An Open University researcher will be leading the activities with the support of the project partners from INESC-ID.
If you decide to take part, your participation is entirely voluntary, and you can withdraw from the workshop at any point if you wish.
During the workshop, the participants will:
– Receive an introduction to the state of the art in AI to fight misinformation and in creative computing.
– Individually, select two fact-checks that verify online news on the topics of climate change, pandemics, or the invasion of Ukraine, and identify the most relevant terms and concepts used in each fact-check so that they can be represented graphically.
– Then, in groups, suggest an alternative, creative way to present two fact-check explanations.
– Evaluate the other groups' results and provide feedback.
Please bring your laptop if you can.
How The Data You Provide Will Be Used
The researchers are interested in the ideas generated to explain the fact-checks, the main components of these ideas, and the rationale behind them. This material will be collected and stored on a secure server.
No personally identifying information will be requested or collected. Results will be analysed while preserving the participants' anonymity.
The signed consent forms will be stored securely, protecting the identity of the participants, and will be destroyed 5 years after the project is concluded. Your name and email address will not be shared beyond the workshop organisers.
The results of this study will be published in scientific publications in the computer science domain and in project reports, as design requirements for creative explanations for online misinformation, always preserving the anonymity of the participants.
If you wish to receive a summary of the research findings, please provide your email address in the consent form and you will be sent a copy of the report.
Your Right To Withdraw From The Study
You have the right to withdraw from the study at any time during your participation by leaving the meeting.
You have the right to ask for your data to be removed after your participation in the study, without having to give a reason, by contacting Dr Lara Piccolo – firstname.lastname@example.org – up until 1 June 2022, by which time all data will have been aggregated for analysis.
How Do I Agree To Take Part?
It is important that you know that your decision whether or not to take part in this study will not impact your studies in any way.
Francisco António Chaves Saraiva de Melo: “Agregação” (Habilitation) in Computer Science and Engineering
Francisco António Chaves Saraiva de Melo, INESC-ID researcher within the Artificial Intelligence for People and Society Research Area and Associate Professor at the Department of Computer Science and Engineering of Instituto Superior Técnico, will present a course unit report and a seminar in order to be awarded his Agregação (Habilitation) in Computer Science and Engineering on 30th and 31st May 2022.
Francisco Melo will present the course unit report titled Planning, Learning, and Intelligent Decision Making at 9am on 30th May 2022 (available on Zoom), followed by the seminar Reinforcement learning: A dynamical systems viewpoint at 9am on 31st May 2022 (available on Zoom).