The difficulties of explaining the unexplainable
1 Introduction
I started this journey in the hope that XAI methods would be able to distil the complexity of machine learning models (and even deep learning models) into something that the general public can understand. But as I got into the topic, I realised that XAI was itself complex. I had no sense of how each of these methods worked or which was better, and that was assuming a better method existed. Then, while taking a break and explaining to others what XAI methods were, I started seeing patterns across these methods, and that made me want to bridge the gap between the complexity of XAI methods and the current understanding of model developers.
1.1 Thesis Outline
In Chapter 2, I propose a set of geometric representations for XAI methods. This proposal aims to bring the XAI explanations, the model, and the data together into one holistic view. Each XAI method that I propose a representation for has its geometry based on the fundamental idea used to build that method.
In the same vein of simplifying XAI methods, I have also worked on reimplementing complex XAI methods using simpler building blocks, to make it easier for anyone to understand how each method works. In Chapter 4, I dive into the simplifications and the structure of these implementations.
Moving away from XAI methods themselves, but in the same spirit of simplifying and explaining, I propose in Chapter 3 that parsimonious deep learning models should be used wherever possible, found by searching through different random seeds.
My journey was set on bridging the gap between XAI methods and the user, and my proposals would not be fruitful if the user were not involved in the story. I have therefore built two interactive web applications that bring the results of Chapter 2 and Chapter 3 closer to the user.
Finally, Chapter 6 summarises my efforts and presents the conclusions from my PhD journey.