Photo: Kavya Kopparapu and Neeyanth Kopparapu | Brain scan of stroke
But for patients, something is still missing. If a doctor determined you had terminal brain cancer, your first question would probably be "Why?" Unfortunately, because most powerful AI models cannot explain their decisions, your doctor would be stuck saying, "Because a computer told me."
Recently, advances in the field of AI have made possible, if not yet an explanation, at least a kind of interpretation: these AI models can indicate which parts of the given data were most important to their decision...
Most importantly, interpretability may build trust by giving doctors insight into why an AI reached its decision, bringing us closer to a future of human/machine teams.
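To make the idea concrete, here is a minimal sketch of one common interpretability approach, occlusion: hide each input feature in turn and measure how much the model's output changes. The toy "risk model" and feature names below are hypothetical illustrations, not the system described in the article.

```python
def model_score(features):
    # Hypothetical linear "risk" model, for illustration only.
    weights = [0.7, 0.1, 0.2]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_importance(features):
    """Importance of each feature = how much the score moves when
    that feature is zeroed out (occluded)."""
    baseline = model_score(features)
    importances = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = 0.0  # hide one feature
        importances.append(abs(baseline - model_score(occluded)))
    return importances

# With equal inputs, the first feature dominates the prediction, so a
# clinician could see *which* inputs drove the result, even without a
# full causal explanation.
print(occlusion_importance([1.0, 1.0, 1.0]))
```

The same principle scales to images, where sliding an occluding patch over a brain scan produces a heat map of the regions the model relied on.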
Source: WebMD