AI – Explanation & Law


At the intersection of artificial intelligence, transparency, privacy and law, there is a need for more research, also at BigInsight.


Artificial intelligence, statistical and machine learning models can often be black boxes, both to those who construct them and to those who use or are exposed to them. This can be due to:
•    Complicated models, such as deep neural nets, boosted tree models or ensemble models
•    Models with many variables/parameters
•    Dependencies between the variables
Even simple models can be difficult to explain to persons who are not mathematically literate. Some models can be explained, but only through their global, not personalised, behaviour. There are a number of good reasons for explaining how a black box model works for each individual:
•    Those who construct or use the model should understand how the model works
•    Those who are exposed to the model should, and sometimes do, have the right to an explanation of the model’s behaviour
•    It should be possible to detect undesired effects in the model, for example an unfair or illegal treatment of certain groups of individuals, or too much weight on irrelevant variables

We have identified two specific and important themes:

1.    Correct explanations when there is dependence between the variables.
In many real-life models, some or many of the variables of interest are dependent; for example, income and age typically follow each other quite closely. Current approaches to individual explanations either do not handle dependent variables at all or handle them poorly, especially given the computational burden required even for a handful of variables. We will construct new, computationally efficient methods to handle these situations.
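To fix ideas, the following minimal sketch shows why ignoring dependence can mislead an individual explanation. The simulated data, the linear "black box" and the crude conditioning-by-binning are all toy assumptions for illustration, not BigInsight's method: we compare imputing income from its marginal distribution (which implicitly assumes independence from age) against imputing it conditionally on the individual's age.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strongly dependent features: income tracks age closely (toy data)
n = 10_000
age = rng.normal(45, 10, n)
income = 1000 * age + rng.normal(0, 5000, n)

# A simple linear model standing in for the black box
model = lambda x: 0.5 * x[:, 0] + 0.001 * x[:, 1]

x_star = np.array([70.0, 70_000.0])  # the individual to explain

# Marginal (independence-assuming) imputation: fix age at 70 and
# draw income from its marginal distribution, ignoring the dependence.
marginal = np.column_stack([np.full(n, x_star[0]), income])

# Conditional imputation: draw income only from individuals whose
# age is close to 70 (a crude empirical approximation).
mask = np.abs(age - x_star[0]) < 2.0
conditional = np.column_stack([np.full(mask.sum(), x_star[0]), income[mask]])

print("E[f | marginal imputation]   :", model(marginal).mean())
print("E[f | conditional imputation]:", model(conditional).mean())
```

The two expectations differ substantially: the marginal version pairs a 70-year-old with incomes typical of the whole population, producing unrealistic data points and hence a distorted explanation.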
2.    Construct explanations for time series models.
Many of the current approaches are suitable either for models designed for image analysis or for models used in a regular classification or regression setting. We will develop methods that are suitable for explaining time series models. The two themes will be coordinated with corresponding activities within the current BigInsight projects.
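To make the second theme concrete, one naive, model-agnostic baseline that does carry over to time series is occlusion: replace a window of the series with a neutral value and measure how much the prediction changes. The recency-weighted toy model and data below are illustrative assumptions, not a method proposed by the project:

```python
import numpy as np

# A toy "time series model": a weighted average that puts most
# weight on the most recent observations.
weights = np.exp(np.linspace(-3, 0, 20))
weights /= weights.sum()
model = lambda series: float(series @ weights)

series = np.arange(20.0)      # a simple trending toy series
baseline = series.mean()      # neutral value used to occlude a window

# Occlude each window in turn and record the change in prediction.
window = 5
importance = []
for start in range(0, len(series) - window + 1, window):
    occluded = series.copy()
    occluded[start:start + window] = baseline
    importance.append(abs(model(series) - model(occluded)))

print(importance)  # the last window dominates for this recency-weighted model
```

Such window-based perturbations are simple but, like the marginal imputation above, ignore the serial dependence within the series; handling that dependence properly is part of what the planned methods must address.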



Research at BigInsight challenges some of the legal principles that govern data privacy, including the risk of re-identification of anonymised parties, the wish to minimise the data made available to discover associations and causes, and the uncertainty about the common-good value created by big data research. Our methods and algorithms follow the five principles of responsibility, explainability, accuracy, auditability and fairness. There is a need to design new legislation and legal practices that allow exploiting big data while guaranteeing privacy protection. We are starting a research line that will go deeply into these themes from a legal perspective, in collaboration with the Department of Private Law of the University of Oslo and its Norwegian Research Center for Computers and Law. A PhD project is also planned with Oslo University Hospital.

Principal Investigator
Anders Løland

Co-Principal Investigator
Arnoldo Frigessi