Explainable AI


Event


4 June 2019

Reading

Added 31-Jul-2019

Agenda:

18:30 - Food, drinks, networking

19:00 - Dean Allsopp - an overview of interpretability in machine learning

19:50 - Short break

20:00 - Janis Klaise - Seldon's open-source model explanation library Alibi
(more details to follow)

21:00 - Event close

Talk 1:

Being able to communicate how machine learning predictions are made can provide a foundation for fairness, accountability and transparency in their use. With complex models such as tree ensembles and neural networks, however, communicating how specific predictions are made is a challenge.
What open-source machine learning interpretation tools are available now, and how can they help? By looking at both techniques and tools, this presentation aims to offer practical help with answering these questions about supervised ML:

- What sort of interpretations are provided?

- Who is likely to understand these interpretations?

- What interpretation packages work with which ML algorithms?

- How do the interpretation techniques work?
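
As a taste of how one such technique works, here is a minimal pure-Python sketch of permutation feature importance, a common model-agnostic method: shuffle one feature's column and measure how much the model's score drops. (This is an illustrative example, not material from the talk; the model and data are toy stand-ins.)

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's importance as the average drop in the
    model's score when that feature's column is randomly shuffled."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            # Rebuild the dataset with only column j permuted.
            X_perm = [row[:j] + [col[i]] + row[j + 1:]
                      for i, row in enumerate(X)]
            score = metric(y, [model(row) for row in X_perm])
            drops.append(baseline - score)
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy classifier that only looks at feature 0, so shuffling
# feature 1 should leave the score unchanged.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: (
    sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true))

X = [[0.1, 5.0], [0.9, 2.0], [0.2, 7.0], [0.8, 1.0]] * 10
y = [0, 1, 0, 1] * 10
imp = permutation_importance(model, X, y, accuracy)
print(imp)  # feature 0 gets positive importance; feature 1 gets ~0
```

Because the toy model ignores feature 1, its importance comes out as zero, while feature 0's importance is positive: exactly the kind of "which inputs matter?" answer interpretation tools aim to provide.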

Bio: Dean Allsopp is a database programmer/architect turned data scientist, aiming to help businesses use machine learning responsibly.

Talk 2:

A talk plus hands-on examples using Seldon's open-source model explanation library, Alibi.
