4 June 2019
18:30 - Food, drinks, networking
19:00 - Dean Allsopp - an overview of interpretability in machine learning
19:50 - Short break
20:00 - Janis Klaise - Seldon's open-source model explanation library Alibi
(more details to follow)
21:00 - Event close
Being able to communicate how machine learning predictions are made provides a foundation for fairness, accountability and transparency. With complex models such as tree ensembles and neural networks, explaining how specific predictions are made is a real challenge.
What open source machine learning interpretation tools are available now and how can they help? By looking at both techniques and tools this presentation aims to offer practical help with answering these questions about supervised ML:
- What sort of interpretations are provided?
- Who is likely to understand these interpretations?
- Which interpretation packages work with which ML algorithms?
- How do the interpretation techniques work?
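As a flavour of how one family of interpretation techniques works, here is a minimal, self-contained sketch of perturbation-based feature importance: score each input feature by how much the prediction changes when that feature is replaced with a baseline value. All names here (the `predict` stand-in model, its weights, the `baseline` of 0.0) are illustrative assumptions, not the API of any particular library.

```python
def predict(x):
    """A stand-in 'black box' model: a fixed linear scorer.
    (Hypothetical weights chosen for illustration only.)"""
    weights = [3.0, 0.0, -1.5]
    return sum(w * v for w, v in zip(weights, x))

def feature_importance(predict_fn, x, baseline=0.0):
    """Score each feature by how much the prediction changes
    when that feature alone is replaced with the baseline."""
    base_pred = predict_fn(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        importances.append(abs(base_pred - predict_fn(perturbed)))
    return importances

x = [1.0, 2.0, 4.0]
print(feature_importance(predict, x))  # larger score = more influential feature
```

Real libraries implement far more principled variants of this idea (permutation importance, SHAP values, anchors), but the core move of probing a black-box model with perturbed inputs is the same.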
BIO: Dean Allsopp is a database programmer/architect turned data scientist, aiming to help businesses use machine learning responsibly.
Talk plus hands-on examples using Seldon's open-source model explanation library Alibi.