18 - 21 February 2019
Machine (data-driven, learning-based, algorithmic) decision making is increasingly being used to assist or replace human decision making in a variety of domains, ranging from banking (rating user credit) and recruiting (ranking applicants) to the judiciary (profiling criminals) and journalism (recommending news stories). Recently, concerns have been raised about the potential for bias and unfairness in such algorithmic decisions. Against this background, this talk will attempt to tackle the following foundational questions about man-machine decision making:
(a) How do machines learn to make biased or unfair decisions?
(b) How can we quantify (measure) and control (mitigate) bias or unfairness in machine decision making?
(c) Can machine decisions be engineered to help humans control (mitigate) bias or unfairness in their own decisions?
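As one illustration of question (b), a common way to quantify unfairness in binary decisions is the "demographic parity difference": the gap in favorable-decision rates between two groups. The talk does not specify which measure it uses; the function and data below are a hypothetical, minimal sketch of this one standard metric.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute gap in favorable-decision rate between group 0 and group 1.

    decisions: list of 0/1 outcomes (1 = favorable decision)
    groups:    list of 0/1 group labels, aligned with decisions
    """
    rate = {}
    for g in (0, 1):
        # Collect the decisions received by members of group g.
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate[0] - rate[1])

# Illustrative data: group 0 is favored 75% of the time, group 1 only 25%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value of 0 would indicate equal favorable-decision rates across the two groups; mitigation techniques aim to drive such a gap toward zero while preserving decision accuracy.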