Explainable AI
Would you trust an AI to diagnose you? Could you vouch that a model is unbiased? Or, more simply, how can we be sure that a model doesn't hallucinate an answer?
Understanding model behavior is very challenging, but we believe that in contexts where trust is paramount, an AI model must be interpretable: its responses need to be explainable.
For society to reap the full benefits of AI, more work needs to be done on explainable AI. We are interested in funding people who are building new interpretable models or tools that explain the outputs of existing models.