## Reviewing Consistencies

**Published:**

In previous posts we have discussed theoretical and conceptual properties of abstractions between causal models, while at the same time implementing code for checking our statements and for running simulations.

**Published:**

In previous posts we have discussed abstractions between causal models; we have evaluated the abstraction error from a micromodel to a macromodel with respect to interventional consistency.
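As a toy illustration of this consistency property (hypothetical numbers, not the code from these posts), mechanisms can be encoded as column-stochastic matrices and the abstraction as a map from micro states to macro states; consistency then asks that abstracting and then applying the macro mechanism agrees with applying the micro mechanism and then abstracting:

```python
import numpy as np

# Hypothetical micro-level mechanism over 4 micro states,
# encoded as a column-stochastic matrix (columns are distributions).
M_micro = np.array([
    [0.7, 0.1, 0.0, 0.0],
    [0.3, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.8, 0.2],
    [0.0, 0.0, 0.2, 0.8],
])

# Abstraction map: micro states {0,1} -> macro state 0, {2,3} -> macro state 1.
alpha = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])

# Macro-level mechanism over the 2 macro states (here the identity,
# since the micro mechanism never mixes the two blocks).
M_macro = np.eye(2)

# Interventional consistency: the two micro-to-macro paths should agree.
path1 = M_macro @ alpha   # abstract first, then apply the macro mechanism
path2 = alpha @ M_micro   # apply the micro mechanism, then abstract
error = np.max(np.abs(path1 - path2))
```

Here the abstraction is exactly consistent, so `error` is (numerically) zero; a micro mechanism leaking probability across the two blocks would yield a strictly positive abstraction error.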

**Published:**

Causality and causal inference deal with expressing and reasoning about relationships of cause and effect, and structural causal models provide a rigorous formalism to assess causality.

**Published:**

In previous posts we have seen how to define abstractions between causal models, and how to automatically compute the abstraction error when switching from a micromodel to a macromodel.

**Published:**

In previous posts we have explored how causal models may be related to each other at different levels of abstraction using the framework proposed by Rischel to assess their consistency and evaluate the error that may be introduced by abstraction.

**Published:**

In previous posts we have analyzed how causal models may be related to each other at different levels of abstraction relying on the framework proposed by Rischel and grounded in category theory.

**Published:**

In a previous post we have discussed how causal models may be related to each other at different levels of abstraction. In particular, following the work of Rischel, we reviewed how we can precisely define abstraction, model it using category theory, and evaluate abstraction error.

**Published:**

Causal models offer a rigorous formalism to express causal relations between variables of interest. Causal systems may be represented at different levels of granularity or abstraction; think, for example, of microscopic and macroscopic descriptions of thermodynamic systems. Reasoning about the relationship between causal models at different levels of abstraction is a non-trivial problem.

**Published:**

Sparse filtering is a recently developed unsupervised feature distribution learning algorithm with interesting properties. Basic implementations are available in Matlab and in Python, the latter relying on the *numpy* library. In this post we explain and analyze a re-implementation of the Python code that works within the *tensorflow* framework.
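To give a flavor of the algorithm, here is a minimal plain-*numpy* sketch of the published sparse filtering objective (an assumption-laden illustration, not the Matlab or *tensorflow* code discussed in the post): linear features pass through a soft absolute, get L2-normalized across samples and then across features, and the sum of the resulting activations is minimized:

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective for weights W (features x dims)
    and data X (dims x samples). Plain-numpy sketch only."""
    F = np.sqrt((W @ X) ** 2 + eps)                     # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)    # normalize each feature row
    F = F / np.linalg.norm(F, axis=0, keepdims=True)    # normalize each sample column
    return np.sum(F)                                    # L1 penalty to minimize

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))   # 5 input dimensions, 20 samples
W = rng.standard_normal((3, 5))    # 3 learned features
obj = sparse_filtering_objective(W, X)
```

A *tensorflow* version would express the same objective with differentiable ops and let an optimizer update `W`; after the column normalization each sample has unit L2 norm, so the objective for `n` samples always lies between `n` and `n * sqrt(num_features)`.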

**Published:**

Differentiable programming (also known as software 2.0) offers a novel approach to coding, focused on defining parametrized differentiable models to solve a problem instead of coding a precise algorithm. In this post we explore the use of this coding paradigm to solve the problem of consensus reaching in group decision-making.
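The spirit of the paradigm can be sketched in a few lines (a hypothetical toy loss, not the model from the post): agents' opinions are the parameters, and consensus emerges by gradient descent on a differentiable loss combining mutual disagreement with attachment to initial preferences:

```python
import numpy as np

# Hypothetical setup: 3 agents hold scalar opinions; instead of coding a
# voting or negotiation algorithm, we descend a differentiable loss:
#   L(x) = sum_i (x_i - mean(x))^2 + 0.1 * sum_i (x_i - initial_i)^2
initial = np.array([0.0, 1.0, 4.0])   # agents' starting opinions
x = initial.copy()
lr = 0.1
for _ in range(500):
    # Gradient of the disagreement term is 2*(x - mean(x));
    # gradient of the attachment term is 0.2*(x - initial).
    grad = 2.0 * (x - x.mean()) + 0.2 * (x - initial)
    x = x - lr * grad
```

The attachment weight (0.1 here) trades off how far each agent is pulled from its initial preference; the dynamics shrink the spread of opinions while leaving their mean unchanged.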

**Published:**

In this post we consider again the problem of performing causal analysis on COVID-19 data, specifically the question: *How is the implementation of existing strategies affecting the rates of COVID-19 infection?*

**Published:**

In this post we present a mock setup for performing causal analyses on COVID-19 data using the *dowhy* library for causal analysis.

**Published:**

In this post we present a game developed with my colleague Åvald and submitted to the IBM Quantum Game Challenge. This project exploits the integration of IBM Qiskit, a library developed to design, run and simulate quantum circuits, and OpenAI Gym, a library developed to define, train and run reinforcement learning agents.

**Published:**

In this post we continue exploring the integration of IBM Qiskit, a library developed to design, run and simulate quantum circuits, and OpenAI Gym, a library developed to define, train and run reinforcement learning agents.

**Published:**

In this post we explore the integration of IBM Qiskit, a library developed to design, run and simulate quantum circuits, and OpenAI Gym, a library developed to define, train and run reinforcement learning agents.

**Published:**

In previous posts we have explored the use of Bayesian coresets on synthetic data and their application to a phishing data set. We replicated experiments from the original article by integrating the original code with the Edward framework.

**Published:**

Causal inference tackles the problem of expressing and assessing causal statements. A rigorous statistical formalism for causality has been proposed by Pearl.
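As a minimal illustration of Pearl's formalism (toy structural equations chosen for this example, not taken from the post): each variable is a function of its parents plus exogenous noise, and an intervention do(X = x₀) replaces the structural equation for X while leaving the others untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, do_x=None):
    """Simulate a toy SCM: X := U_X, Y := 2*X + U_Y.
    Passing do_x performs the intervention do(X = do_x)."""
    u_x = rng.standard_normal(n)
    u_y = rng.standard_normal(n)
    x = u_x if do_x is None else np.full(n, do_x)   # do(X = x0) overrides X
    y = 2.0 * x + u_y                               # structural equation for Y
    return x, y

# Observationally Y centers around 0; under do(X = 1) it centers around 2,
# as dictated by the structural equation for Y.
_, y_obs = simulate(10_000)
_, y_do = simulate(10_000, do_x=1.0)
```

This distinction between observing X and setting X is exactly what the do-operator captures, and what purely associational statistics cannot express.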

**Published:**

In a previous post we have explored the use of Bayesian coresets on synthetic data, and we have integrated the original code with Edward to exploit the features offered by probabilistic programming.

**Published:**

Modern datasets often contain a large number of redundant samples, making data storage and model learning expensive. Coreset computation is an approach to reducing the number of samples by selecting (and weighting) informative samples and discarding redundant ones.
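The underlying idea can be sketched in a few lines (uniform subsampling with importance weights, a deliberately simpler scheme than the sensitivity-based Bayesian constructions discussed in the posts): keep a small weighted subset whose weighted statistics approximate those of the full data set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 10,000 redundant 2-d samples around a common mean.
X = rng.standard_normal((10_000, 2)) + np.array([3.0, -1.0])

# Coreset: keep m points, each weighted to stand in for N/m points.
m = 200
idx = rng.choice(len(X), size=m, replace=False)
weights = np.full(m, len(X) / m)

# Weighted coreset statistics approximate full-data statistics.
full_mean = X.mean(axis=0)
coreset_mean = (weights[:, None] * X[idx]).sum(axis=0) / len(X)
```

Sensitivity-based coresets refine this by sampling points in proportion to their influence on the model's likelihood, rather than uniformly, which yields much better guarantees for a given coreset size.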