Talks and presentations

See a map of all the places I've given a talk!

Learning Causal Abstractions

November 22, 2023

Talk, Machine Learning Seminar Series, Bergen, Norway

In this presentation we review the definition of structural causal models and introduce the problem of relating these models via an abstraction map. We formalize the problem of learning such a causal abstraction map as the minimizer of an abstraction error expressed in terms of interventional consistency, and we discuss some of the challenges involved in this optimization problem. We then present an approach based on a relaxation and parametrization of the problem, leading to a solution based on differentiable programming. The solution approach is evaluated on both synthetic and real-world data.
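To make the idea concrete, here is a minimal, hypothetical sketch (not the method presented in the talk): a linear abstraction map between a two-variable low-level SCM and a one-variable high-level SCM is learned by gradient descent, using the match between pushed-forward and high-level interventional samples as a crude proxy for interventional consistency. The models, interventions, and loss are all invented for illustration.

```python
# Toy sketch: learning a parametrized abstraction map by differentiable programming.
import torch

torch.manual_seed(0)

def low_level_samples(do_x1, n=1000):
    # Low-level SCM under do(X1 = do_x1): X1 := do_x1, X2 := 2*X1 + noise
    x1 = torch.full((n,), do_x1)
    x2 = 2.0 * x1 + 0.1 * torch.randn(n)
    return torch.stack([x1, x2], dim=1)               # shape (n, 2)

def high_level_samples(do_y, n=1000):
    # High-level SCM under do(Y = do_y): Y := do_y + noise
    return do_y + 0.1 * torch.randn(n, 1)

tau = torch.nn.Linear(2, 1, bias=False)               # parametrized abstraction map
opt = torch.optim.Adam(tau.parameters(), lr=0.05)
interventions = [(0.0, 0.0), (1.0, 3.0), (2.0, 6.0)]  # paired (do_x1, do_y) interventions

for step in range(200):
    loss = 0.0
    for do_x1, do_y in interventions:
        pushed = tau(low_level_samples(do_x1))        # push low-level samples through tau
        target = high_level_samples(do_y)
        loss = loss + (pushed.mean() - target.mean()) ** 2  # crude consistency proxy
    opt.zero_grad()
    loss.backward()
    opt.step()

print(tau.weight.data)  # any weights with w1 + 2*w2 ≈ 3 are consistent with these interventions
```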

Abstraction between Structural Causal Models and Measures of Abstraction Error

September 05, 2023

Talk, Seminar at the University of Kyoto, Kyoto, Japan

In this talk we discuss how rigorous relations between causal models may be defined and quantitatively evaluated. We will start with a quick introduction to the popular formalism of structural causal models. Next, we will review alternative proposals for expressing relations of abstraction between these models. We will then focus on one particular framework, and show how a notion of abstraction error can be introduced in this setup. Finally, we will discuss some of the limitations of this measure, and how alternative measures of error may be developed in order to capture different aspects of abstraction and fit different aims. We will conclude with a few considerations about possible future developments of this theory of abstraction.

Quantifying Consistency and Information Loss for Causal Abstraction Learning

August 22, 2023

Talk, IJCAI 2023, Macau, China

In this presentation we quickly review the idea of defining an abstraction between structural causal models and we present the standard measure of abstraction error proposed in the literature. We then consider some potential limitations when using this single measure to assess or learn abstractions. To overcome these limitations, we propose an extension of the original definition of abstraction approximation, we derive new measures of abstraction error, and we discuss theoretical and applied properties of these new measures.

Structural causal models and abstraction for modelling battery manufacturing

June 20, 2023

Talk, Battery Modelling Group Weekly Seminar (online), Pittsburgh, US

Modelling complex systems and processes, such as battery manufacturing, is a significant scientific and technical challenge. Mathematics, statistics and machine learning provide useful tools to tackle this problem. In this talk, we will focus on the recent formalism of structural causal models (SCMs) and causal abstractions (CAs). We will first offer a high-level introduction to SCMs and CAs, discussing in particular their importance and relevance for modelling. We will then outline our methodology for learning CAs. Finally, we will showcase our preliminary results on the problem of modelling one stage of the lithium-ion battery manufacturing process, demonstrating the potential for integrating data collected by different research groups.

Introduction to Causality: Structural Causal Modelling

April 24, 2023

Talk, Trial Lecture at UiB, Bergen, Norway

In this talk we will introduce one of the most important formalisms to represent causal systems in computer science. We will start with a brief review of causality, highlighting the meaning of causal queries and the limitations of standard statistics and machine learning in answering them. To address these shortcomings, we will present the formalism of structural causal models (SCMs). We will then show how these models can be used to rigorously answer different types of causal questions, including observational, interventional and counterfactual questions. Finally, we will conclude by discussing how this formalization gives rise to a rich theory of causality, and how the ideas underlying causality have strong and promising intersections with artificial intelligence and machine learning.
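As a small illustrative aside (a toy example, not taken from the lecture; variable names and probabilities are invented), the gap between conditioning and intervening can already be seen in a three-variable simulation:

```python
# Toy SCM with a confounder Z -> X, Z -> Y: conditioning on X and intervening on X differ.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def sample(do_x=None):
    z = rng.binomial(1, 0.5, n)                           # confounder
    x = rng.binomial(1, 0.2 + 0.6 * z) if do_x is None else np.full(n, do_x)
    y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)          # Y depends on X and Z
    return x, y

x, y = sample()
print("P(Y=1 | X=1)     =", y[x == 1].mean())             # observational (confounded), ~0.72
x, y = sample(do_x=1)
print("P(Y=1 | do(X=1)) =", y.mean())                     # interventional, ~0.60
```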

Jointly Learning Consistent Causal Abstractions Over Multiple Interventional Distributions

April 12, 2023

Talk, CLeaR 2023, Tübingen, Germany

In this presentation we review the definition of abstraction between structural causal models and we frame the problem of learning a mapping between them. We discuss the challenges of learning a causal abstraction that minimizes the abstraction error in terms of interventional consistency. We then suggest an approach based on a relaxation and parametrization of the problem, leading to a solution based on differentiable programming. The solution approach is evaluated on both synthetic and real-world data.

Abstraction of Causal Structural Models

October 31, 2022

Talk, Statistics Department Research Seminar, Warwick, United Kingdom

Causal models can represent the same system or phenomenon at different levels of abstraction. In this talk, we will focus on structural causal models (SCMs) and review two frameworks which have been proposed to express a relation of abstraction between SCMs and to measure the interventional consistency of an abstraction. We will then discuss some current directions of research, including the problem of learning abstractions.

Abstraction between SCMs: A Review of Definitions and Properties

August 05, 2022

Talk, UAI 2022 Workshop on Causal Representation Learning (online), Eindhoven, Netherlands

In this presentation we first offer a review of definitions of abstractions proposed in the literature, and then we propose a framework to align these definitions and evaluate their properties. We suggest analyzing abstractions on two layers (a structural layer and a distributional layer) and we review some basic properties that may be enforced on maps defined on each layer. We suggest that this framework may contribute to a better understanding of different forms of abstraction, as well as providing a way to tailor application-specific definitions of abstraction.

Abstracting Causal Structural Models

February 17, 2022

Talk, Warwick Machine Learning Group Reading Group (online), Warwick, United Kingdom

In this presentation we consider the problem of relating causal models representing the same phenomenon or system at different levels of abstraction. A given system may be represented in more or less detail according to the resources or the needs of a modeller; switching between descriptions at different levels of abstraction is not trivial, and it raises questions of consistency. We will focus in particular on structural causal models (SCMs) and express properties of consistency in this context. We will then present two formalisms for defining a relation of abstraction between SCMs: an approach based on the definition of a transformation between the outcomes of the models, and an approach based on the definition of a mapping between the structures of the models. We will conclude with some observations and open questions regarding this direction of research.

Applications of reinforcement learning to computer security: problems, models, and perspectives

November 22, 2021

Talk, Oslo Metropolitan University (online), Oslo, Norway

These slides analyze the application of reinforcement learning to modelling the problem of penetration testing in computer security. After a conceptual overview of reinforcement learning, we discuss the specific challenges of modelling penetration testing as a game that may be solved by a reinforcement learning agent. Finally, we present some of the work done by the research group at the University of Oslo on this topic, including conceptual modelling and preliminary practical implementations of reinforcement learning environments and agents.
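As a purely illustrative toy (the environments and agents developed by the group are of course much richer; the states, actions and rewards below are invented), a capture-the-flag episode can be cast as a small Markov decision process and solved with tabular Q-learning:

```python
# Toy CTF modelled as a 3-state MDP, solved with tabular Q-learning.
import random

random.seed(0)
ACTIONS = ["scan", "exploit", "capture_flag"]
# States: 0 = nothing known, 1 = vulnerability found, 2 = shell obtained (flag reachable)

def step(state, action):
    if state == 0 and action == "scan":
        return 1, 0.0, False
    if state == 1 and action == "exploit":
        return 2, 0.0, False
    if state == 2 and action == "capture_flag":
        return 2, 1.0, True            # flag captured: reward 1, episode ends
    return state, -0.1, False          # useless action: small cost

Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s, done, t = 0, False, 0
    while not done and t < 20:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s, t = s_next, t + 1

# Greedy policy per state: should read scan -> exploit -> capture_flag
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(3)})
```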

Abstracting Causal Models (short)

October 18, 2021

Talk, University of Warwick (online), Warwick, United Kingdom

These slides provide a concise overview of the problem of relating structural causal models (SCMs) at different levels of abstraction. We define the problem and discuss the desiderata of our solution. We present a few of the existing formalizations and solutions offered in the literature. We then conclude by highlighting interesting future directions of research in this area.

Abstracting Causal Models

September 27, 2021

Talk, University of Utrecht (online), Utrecht, Netherlands

Structural causal models (SCMs) constitute a rigorous and tested formalism to deal with causality in many fields, including artificial intelligence and machine learning. Systems and phenomena may be modelled as SCMs and then studied using the tools provided by the framework of causality. A given system can, however, be modelled at different levels of abstraction, depending on the aims or the resources of a modeller. A paradigmatic example is statistical physics, where a thermodynamical system may be represented either as a collection of microscopic particles or as a single body with macroscopic properties. In general, switching between models with different granularities presents non-trivial challenges and raises questions of consistency. These slides will first provide a brief introduction to SCMs, and then consider how we can express the problem of relating SCMs representing the same phenomenon at different levels of abstraction. Finally, we will discuss open challenges and present some existing solutions, as well as pointing towards possible future directions of research.

The (new) attack surfaces of data-learned models: Adversarial attacks and defenses for ML models

December 11, 2020

Talk, Keynote at IEEE Big Data CyberHunt Workshop (online), Atlanta, GA, USA

These slides provide an overview of the security of machine learning systems. We identify the two main attack surfaces inherent in machine-learned systems, and we then provide a review of the main attacks and defenses, relying heavily on analogical reasoning to illustrate and explain these methods. The presentation ends with remarks on the practical implications of these vulnerabilities and the current directions of research.

Neural Networks, Information Bottleneck and Unsupervised Learning

October 15, 2020

Talk, University of Innsbruck (online), Innsbruck, Austria

These slides provide a quick conceptual introduction to neural networks for supervised learning, and review some hypotheses and theories meant to explain their generalization performance. The presentation then focuses on one of these possible interpretative frameworks, the information bottleneck, and discusses its possible application to understanding the dynamics of unsupervised learning algorithms, such as sparse filtering.

Information Bottleneck (and Unsupervised Learning)

May 14, 2020

Talk, Robotics and Intelligent Systems (ROBIN) group lunch seminar, University of Oslo, Oslo, Norway

This short presentation introduces the method of information bottleneck by describing its formulation and by illustrating its application in analyzing the behaviour of deep neural networks. The presentation ends by discussing the problem of using a similar information-theoretic method to study the behaviour of unsupervised learning algorithms, focusing in particular on the analysis of the sparse filtering algorithm.
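For readers who have not seen it, the information bottleneck objective (in the standard formulation by Tishby, Pereira and Bialek) seeks a compressed representation T of the input X that remains maximally informative about the target Y:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where I denotes mutual information and the multiplier beta trades off compression against prediction.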

Modelling Capture-the-Flag Challenges Using Reinforcement Learning

April 24, 2020

Talk, Oslo Analytics Scientific Advisory Board Presentation, Oslo, Norway

These slides present the research project of modelling hacking, in the form of capture-the-flag (CTF) games, as problems solvable by agents trained by reinforcement learning (RL). The main assumptions and challenges are presented, along with some preliminary results.

Causal Models and Machine Learning

December 02, 2019

Talk, Oslo Machine Learning Meetup, Oslo, Norway

This talk aims at providing an overall understanding of the role of causal modelling and its relationship to machine learning. We are going to introduce causal models following the popular approach based on structural causal models proposed by Pearl, and show how they can capture the notion of causal relations. We will consider paradigmatic causal problems (causal inference and causal discovery) and discuss how they can be tackled. Finally, we will briefly explore connections between causality and machine learning, touching on topics such as learning with causal assumptions, using counterfactuals to assess fairness, and expressing reinforcement learning problems in causal terms.

A Gentle Introduction to Causal Models

October 08, 2019

Talk, AI Seminar, Oslo Metropolitan University, Oslo, Norway

In this presentation we are going to introduce causal models from the point of view of computer science, following the approach based on structural causal models proposed by Pearl. We will start by showing the place of causality theory and by discussing its relationship with standard statistics. We will then present graphical models (directed acyclic graphs, Bayesian networks, causal Bayesian networks, and structural causal models) used to address causal questions. Finally, we will review some paradigmatic problems that arise in the field of causality and how they can be solved.

Perspectives on AI/ML and Cybersecurity

June 06, 2019

Talk, CyberControl workshop, University of Oslo, Oslo, Norway

This talk provides a short presentation of three perspectives for exploring the intersection of cybersecurity and machine learning. It examines an instrumental perspective (in which ML is seen as a tool), a systemic perspective (in which ML is seen as a component of a system to defend), and a societal perspective (in which ML is seen as a part of societal processes). Each perspective is connected to specific areas of research (cybersecurity, adversarial learning, AI safety).

Overview of Adversarial Machine Learning and AI Safety

February 06, 2019

Talk, Workshop on the Security of Autonomous Systems, Kjeller, Norway

This talk provides an overview of research in the fields of adversarial machine learning and AI safety. The first part of the talk gives a brief introduction to machine learning from a conceptual point of view; the second and third parts illustrate, respectively, some representative attacks and defenses for machine learning systems; and, finally, the last part lists safety concerns related to machine learning and artificial intelligence. (This presentation has some overlap with the previous talk “Research Challenges for Applying Machine Learning in Cybersecurity”.)

Counterfactually Fair Prediction Using Multiple Causal Models

December 07, 2018

Talk, European Conference on Multi-Agent Systems, University of Bergen, Bergen, Norway

This talk provides an overview of the problem of aggregating several probabilistic structural causal models, and it offers a walkthrough of our algorithm applied to a toy scenario.

Research Challenges for Applying Machine Learning in Cybersecurity

February 09, 2018

Talk, AFSecurity Seminar, University of Oslo, Oslo, Norway

This talk provides an overview of some topics at the intersection of cybersecurity and machine learning with the aim of illustrating the possibilities offered by machine learning and surveying recent promising lines of research at the border between the two disciplines. The first part of the talk gives a brief introduction to machine learning from a conceptual point of view. The second part then explores research topics in three main domains: applications of machine learning to security; security aspects of machine learning; and, finally, safety concerns related to machine learning.

Introduction to Information Theoretical Learning

July 04, 2016

Talk, Machine Learning and Optimization Seminar, University of Manchester, Manchester, UK

This talk is meant to be a simple introduction to Principe’s framework for Information Theoretic Learning. We will first review a standard information theoretic measure, going through its derivation, its properties and its limitations. We will then derive a more general form of this information theoretic measure, and we will use it to compute statistical estimators. Finally, we will define an information theoretic loss function that can be used for learning.
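For reference (and assuming the more general measure in question is Rényi's α-order entropy, the generalization at the core of Principe's framework), the quantity and its quadratic-case Parzen-window estimator are the standard ones:

```latex
H_\alpha(X) = \frac{1}{1-\alpha} \log \int p_X(x)^\alpha \, dx ,
\qquad
\hat{H}_2(X) = -\log \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} G_{\sigma\sqrt{2}}(x_i - x_j)
```

where G denotes a Gaussian kernel of the indicated bandwidth; the double sum is the so-called information potential, from which information-theoretic loss functions for learning are built.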

Review of Sparse Filtering

May 27, 2015

Talk, Machine Learning and Optimization Seminar, University of Manchester, Manchester, UK

Sparse filtering is an algorithm for unsupervised learning proposed in 2011. The authors introduced this algorithm as a paradigm of feature distribution learning, contrasting it with more traditional data distribution learning. In this seminar, we will explore the ideas behind sparse filtering following the original paper. We will first discuss the general idea of feature distribution learning; then, we will present the sparse filtering algorithm itself; finally, we will conclude with a discussion of the algorithm and a summary of further developments since the publication of the original paper.
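For concreteness, here is a minimal sketch of the sparse filtering objective as described in the original paper (a reconstruction for illustration, not the authors' reference code); the matrix shapes and the optimizer mentioned in the comment are assumptions.

```python
# Sparse filtering objective (after Ngiam et al., 2011): soft-absolute activations,
# per-feature and then per-example L2 normalization, followed by an L1 sparsity penalty.
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """X: (n_inputs, n_examples) data matrix; W: (n_features, n_inputs) weight matrix."""
    F = np.sqrt((W @ X) ** 2 + eps)                       # soft-absolute feature activations
    F = F / np.linalg.norm(F, axis=1, keepdims=True)      # normalize each feature (row) across examples
    F = F / np.linalg.norm(F, axis=0, keepdims=True)      # normalize each example (column) across features
    return np.abs(F).sum()                                # L1 penalty encourages sparse features

# W would then be optimized (e.g. with L-BFGS) to minimize this objective.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 100))
W = rng.standard_normal((32, 16))
print(sparse_filtering_objective(W, X))
```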