Positions: 7
Research Grant (BII)
BII|2026/898 - Project VERSACOMP
Type of position: Research Grant (BII)
Duration: 3 months
Deadline to apply: 2026-05-15
Description: ONE (1) research grant for students enrolled in a BSc or MSc programme, with reference number BII|2026/898, under the scope of the Project VERSACOMP – Refª 2023.18110.ICDT / LISBOA2030-FEDER-00869000, funded by operation nº 15038, Balcão dos Fundos, FEDER and FCT, is available under the following conditions:
OBJECTIVES | FUNCTIONS
The scholarship recipient is expected to perform the following tasks:
a) Investigation into the core computational patterns that are suitable for diverse types of AI-enhanced hardware, identifying acceleration opportunities within application hotspots.
b) Development of methods to enable efficient execution on heterogeneous systems, exploiting the parallelism of modern computing platforms, including supercomputers.
c) Writing a technical report on the work performed.
Contact email: bolsas@inesc-id.pt
BII|2026/899 - Project AHEAD - Refª 101160665
Type of position: Research Grant (BII)
Duration: 6 months
Deadline to apply: 2026-05-07
Description: ONE (1) research grant for students enrolled in a BSc programme, with reference number BII|2026/899, under the scope of the project AHEAD - Refª 101160665, funded by the European Commission - Programme HORIZON EUROPE, is now available under the following conditions:
OBJECTIVES | FUNCTIONS
- Support the development of flexibility models for local energy systems.
- Assist in the analysis of energy scenarios including demand, generation, and storage.
- Contribute to the implementation of energy scheduling tools.
- Support integration of solutions into digital platforms.
Contact email: bolsas@inesc-id.pt
Research Grant (BI)
BI|2026/880 Project SALVE – refª 2024.14936.PEX
Type of position: Research Grant (BI)
Duration: 5 months
Deadline to apply: 2026-05-30
Description: ONE (1) research grant for students with a BSc degree, with reference number BI|2026/880, under the scope of the Project SALVE: Securing Artificial Language Models Against Vulnerability Encoding (2024.14936.PEX), funded by Fundação para a Ciência e a Tecnologia, is available under the following conditions:
OBJECTIVES | FUNCTIONS
This task evaluates the impact of controlled code perturbations on the classification stability and robustness of Large Language Models (LLMs) in distinguishing secure from insecure JavaScript code. The student will design and implement a systematic evaluation pipeline to assess model behavior under perturbation-induced variations. The work plan includes:
Evaluation Pipeline Development (Month 1)
- Implement a scalable evaluation framework using local LLM infrastructure (e.g., Ollama, LMStudio).
- Integrate multiple LLMs for comparative evaluation.
- Automate classification experiments across original and perturbed datasets.
Classification Shift Analysis (Month 2)
- Measure classification changes between original and perturbed code variants.
- Identify perturbations that cause label flips (secure ↔ insecure).
- Quantify: misclassification rate, stability rate, false positive rate, false negative rate.
Robustness Assessment (Months 3-4)
- Define robustness metrics for security classification consistency.
- Evaluate resilience to obfuscation, control-flow changes, and API variations.
- Compare robustness performance across different models.
Misclassification Characterization (Months 4-5)
- Construct an augmented misclassification dataset containing: original and perturbed variants, model predictions, correct labels, and perturbation type.
- Analyze patterns in failure cases.
Exploratory Explainability Analysis (Optional)
- Investigate whether explainability tools can help identify model reliance on superficial features.
- Analyze whether models rely on syntax-level heuristics versus security-relevant semantics.
All experimental artifacts, code, and results will be released in an open-source repository. The selected candidate will be integrated into a research team with established expertise in software security, program analysis, and AI-driven code intelligence, with a track record of collaboration with leading technology companies and publications in top-tier international conferences and journals.
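The Month-2 shift metrics above can be pinned down precisely. As a minimal sketch (function and variable names are illustrative, not part of the call; "insecure" is assumed to be the positive class):

```python
def shift_metrics(labels, preds_orig, preds_pert):
    """Compare model predictions on original vs. perturbed code variants.

    labels, preds_orig, preds_pert: equal-length lists of the strings
    "secure" / "insecure" ("insecure" is treated as the positive class).
    """
    n = len(labels)
    # A "flip" is a prediction that changed under the perturbation.
    flips = sum(o != p for o, p in zip(preds_orig, preds_pert))
    wrong = sum(p != y for p, y in zip(preds_pert, labels))
    fp = sum(p == "insecure" and y == "secure" for p, y in zip(preds_pert, labels))
    fn = sum(p == "secure" and y == "insecure" for p, y in zip(preds_pert, labels))
    return {
        "misclassification_rate": wrong / n,
        "stability_rate": (n - flips) / n,               # predictions unchanged
        "false_positive_rate": fp / labels.count("secure"),    # secure flagged insecure
        "false_negative_rate": fn / labels.count("insecure"),  # insecure passed as secure
    }
```

In a real pipeline these counters would be accumulated over the automated experiments of Month 1, per model and per perturbation type.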
Contact email: bolsas@inesc-id.pt
BI|2026/882 Project SALVE – refª 2024.14936.PEX
Type of position: Research Grant (BI)
Duration: 6 months
Deadline to apply: 2026-12-31
Description: ONE (1) research grant for students with a BSc degree, with reference number BI|2026/882, under the scope of the Project SALVE: Securing Artificial Language Models Against Vulnerability Encoding (2024.14936.PEX), funded by Fundação para a Ciência e a Tecnologia, is available under the following conditions:
OBJECTIVES | FUNCTIONS
This task aims to develop an automated and scalable framework for the continuous improvement of security-aware Large Language Models (LLMs), integrating dataset expansion, evaluation, incremental fine-tuning, and security-aware code generation validation. The student will build an integrated pipeline that reuses artifacts developed in previous tasks and ensures systematic model improvement over time. The work plan includes:
Automated Dataset Expansion (Month 1)
- Implement mechanisms to collect and track secure and insecure JavaScript code from open-source repositories.
- Identify and label security-related commits using diff-based analysis.
- Integrate synthetic data generation (e.g., AST-based vulnerability injection) to increase dataset diversity.
Continuous Model Evaluation (Month 2)
- Implement automated evaluation of security classification performance on expanded datasets.
- Measure classification accuracy, precision, recall, and robustness over time.
- Track performance differentials across evaluation cycles.
Incremental Fine-Tuning and Feedback Integration (Month 3)
- Implement periodic fine-tuning of selected models using curated secure–insecure code pairs.
- Integrate adaptive feedback mechanisms based on misclassification analysis.
- Ensure reproducibility and version control of model updates.
Security-Aware Code Generation Testing (Month 4)
- Integrate static analysis tools (e.g., Semgrep, CodeQL) to assess generated code.
- Measure vulnerability density (e.g., vulnerabilities per 100 lines of code).
- Compare improvements across pipeline iterations.
Validation and Framework Assessment (Months 5-6)
- Conduct two full validation cycles in the final four months.
- Measure improvements in: security classification accuracy, robustness to adversarial modifications, and reduction of AI-generated vulnerabilities.
All artifacts will be released as open-source and documented for reproducibility. The selected candidate will be integrated into a research team with established expertise in software security, program analysis, and AI-driven code intelligence, with a track record of collaboration with leading technology companies and publications in top-tier international conferences and journals.
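The vulnerability-density metric used in the Month-4 generation testing is simple to make concrete; the sketch below (names are illustrative, not from the call) also shows how densities from successive pipeline cycles would be compared:

```python
def vulnerability_density(n_findings, lines_of_code):
    """Static-analysis findings per 100 lines of generated code."""
    if lines_of_code == 0:
        return 0.0
    return 100.0 * n_findings / lines_of_code

def density_trend(iterations):
    """iterations: list of (n_findings, lines_of_code) tuples, one per
    pipeline cycle. Returns the per-cycle densities so that successive
    cycles can be compared for improvement."""
    return [vulnerability_density(f, loc) for f, loc in iterations]
```

Here n_findings would come from a static analyzer such as Semgrep or CodeQL run over each batch of generated code; a decreasing trend across cycles is the improvement signal the framework tracks.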
Contact email: bolsas@inesc-id.pt
BI|2026/881 Project SALVE – refª 2024.14936.PEX
Type of position: Research Grant (BI)
Duration: 6 months
Deadline to apply: 2026-09-30
Description: ONE (1) research grant for students with an MSc degree, with reference number BI|2026/881, under the scope of the Project SALVE: Securing Artificial Language Models Against Vulnerability Encoding (2024.14936.PEX), funded by Fundação para a Ciência e a Tecnologia, is available under the following conditions:
OBJECTIVES | FUNCTIONS
This task aims to enhance the ability of Large Language Models (LLMs) to distinguish secure from insecure JavaScript code using contrastive learning with a tailored security-aware loss function. The student will fine-tune selected models using secure-insecure code pairs derived from Tasks 1 and 2 and evaluate improvements in classification stability and security-aware code generation.
The work plan includes:
(Month 1) Implement contrastive learning fine-tuning using a tailored Multiple Negatives Ranking Loss (MNRL) formulation.
(Month 2) Design and integrate a security penalty term to balance false positives and false negatives.
(Month 3) Analyze embedding-space separation using cosine similarity and alternative visualization techniques.
(Month 4) Evaluate improvements in classification metrics (accuracy, precision, recall, F1, FNR, FPR).
(Month 4) Compare fine-tuned models against baseline models without contrastive learning.
(Month 5) Assess secure-by-default code generation using static analysis tools (e.g., Semgrep, CodeQL), measuring vulnerabilities per 100 lines of generated code.
(Month 6) Ensure reproducibility and open-source release of training and evaluation pipelines.
The selected candidate will be integrated into a research team with established expertise in software security, program analysis, and AI-driven code intelligence, with a track record of collaboration with leading technology companies and publications in top-tier international conferences and journals.
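The MNRL formulation of Month 1 uses each batch's other positives as in-batch negatives. A minimal NumPy sketch of the loss, under that standard formulation (names and the scale default are illustrative; the Month-2 security penalty term is a separate design question not modelled here):

```python
import numpy as np

def mnrl_loss(anchors, positives, scale=20.0):
    """Multiple Negatives Ranking Loss over a batch of embedding pairs.

    positives[i] is the intended match for anchors[i]; every other
    positives[j] in the batch serves as an in-batch negative.
    """
    # L2-normalize so the dot products below are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                           # (n, n) similarity matrix
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))           # cross-entropy on the diagonal
```

Well-separated matched pairs drive the loss toward zero while mismatched pairs inflate it, which is what makes the diagonal cross-entropy a training signal for pulling secure–insecure pair embeddings into the intended structure.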
Contact email: bolsas@inesc-id.pt
BI|2026/895 - BI|2026/896 - BI|2026/897 - Crypto_Chaves
Type of position: Research Grant (BI)
Duration: 6 months
Deadline to apply: 2026-05-05
Description: THREE (3) research grants for students with a BSc degree, with reference numbers BI|2026/895 - BI|2026/896 - BI|2026/897, under the scope of the cryptographic development task funded by the HPCAS group, are now available under the following conditions:
OBJECTIVES | FUNCTIONS
Development and definition of cryptographic systems for data, and their evaluation.
Contact email: bolsas@inesc-id.pt
BI|2026/894 - Project FintechGaming
Type of position: Research Grant (BI)
Duration: 3 months
Deadline to apply: 2026-05-05
Description: ONE (1) research grant for students with an MSc degree, with reference number BI|2026/894, Project FintechGaming, funded by INESC-ID, is now available under the following conditions:
OBJECTIVES | FUNCTIONS
To support the development activities of the LLM agent that generates stories and quizzes for the Paynest gamified fintech app, and to write a state-of-the-art review of game-based financial literacy.
Contact email: bolsas@inesc-id.pt