Did some stuff

Alexander Munch-Hansen 2019-11-04 15:40:56 +01:00
parent dfbfe94134
commit a331729e56
1 changed file with 40 additions and 4 deletions


@@ -86,6 +86,20 @@
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Problem definition}
\begin{itemize}
\item People often use Machine Learning models for predictions
\item Blindly trusting a prediction can lead to poor decision making
\item We seek to understand the reasons behind predictions
\begin{itemize}
\item As well as the model making the predictions
\end{itemize}
\end{itemize}
\centering
\includegraphics[scale=0.2]{graphics/doctor_pred.png}
\end{frame}
%\subsection{Previous Solutions}
\begin{frame}
\frametitle{Previous Solutions}
@@ -120,15 +134,37 @@
\end{frame}
\begin{frame}
\frametitle{Interpretability}
\frametitle{Properties of a good explanation}
\begin{itemize}
\item It should be \emph{interpretable}:
\begin{itemize}
\item It must provide a qualitative understanding of the relationship between the input variables and the response
\item It must take the user's limitations into account
\item Use a representation understandable to humans
\item Could be a binary vector indicating presence or absence of a word
\item Could be a binary vector indicating presence or absence of super-pixels in an image
% (a short notation sketch follows this frame)
\end{itemize}
\item It should have \emph{fidelity}:
\begin{itemize}
\item The explanation must be faithful to the behaviour of the underlying model
% (the corresponding locality-weighted objective is sketched after the trade-off frame below)
\item Local fidelity does not imply global fidelity
\item The explanation should aim to correspond to how the model behaves in the vicinity of the instance being predicted
\end{itemize}
\item It should be \emph{model-agnostic}:
\begin{itemize}
\item The explanation should not depend on which model is used underneath
\end{itemize}
\end{itemize}
\end{frame}
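% Hedged sketch, not part of the original commit: one way to make the
% binary-vector bullets above concrete. The notation (x, x', d, d') follows
% the LIME paper (Ribeiro et al., 2016) and is an assumption here, as is the
% availability of \mathbb (amssymb/amsfonts) in the preamble.
\begin{frame}
\frametitle{Interpretable representations (sketch)}
\begin{itemize}
\item Original instance: $x \in \mathbb{R}^{d}$ (e.g.\ raw features, pixels)
\item Interpretable representation: $x' \in \{0,1\}^{d'}$, one entry per word or super-pixel
\item $x'_{i} = 1$ if the $i$-th word / super-pixel is present, $x'_{i} = 0$ if it is absent
\end{itemize}
\end{frame}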
\begin{frame}
\frametitle{Fidelity}
\end{frame}
\subsection{Explaining Predictions}
\begin{frame}
\frametitle{The Fidelity-Interpretability Trade-off}
\end{frame}
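% Hedged sketch, not part of the original commit: the trade-off above is
% commonly formalised as the LIME objective of Ribeiro et al. (2016). The
% symbols f, g, G, pi_x, L and Omega follow that paper and are not defined
% elsewhere in these slides.
\begin{frame}
\frametitle{The trade-off as an objective (sketch)}
\begin{itemize}
\item Choose the explanation $g$ from a class $G$ of interpretable models (e.g.\ sparse linear models)
\item $\mathcal{L}(f, g, \pi_{x})$ measures how unfaithful $g$ is to the model $f$ in the locality defined by $\pi_{x}$ (fidelity)
\item $\Omega(g)$ measures the complexity of $g$ (lower means more interpretable)
\end{itemize}
\[
\xi(x) = \arg\min_{g \in G} \; \mathcal{L}(f, g, \pi_{x}) + \Omega(g)
\]
\end{frame}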
% \subsubsection{Examples}
\begin{frame}
% \frametitle{Sparse Linear Explanations}