
AI & Graph Technology: AI Explainability

Blog post from Neo4j

Post Details

Company: Neo4j
Date Published: -
Author: Amy E. Hodler
Word Count: 519
Language: English
Hacker News Points: -
Summary

In the field of artificial intelligence (AI), understanding how a model arrives at a particular decision remains a significant challenge. Graphs have emerged as a promising research direction for addressing it, because they make AI predictions easier to trace and explain. That traceability is crucial for long-term AI adoption in domains such as healthcare, credit risk scoring, and criminal justice, where decisions must be credibly justified. The post distinguishes three categories of explainability: explainable data, explainable predictions, and explainable algorithms. Graphs handle the first fairly readily through data lineage methods, which record where data originated and how it was transformed. They also support explainable predictions: by associating the nodes of a neural network with a labeled knowledge graph, one can surface which features and weights contributed to a given prediction. Explainable algorithms remain the hardest category and require significant further progress, but research suggests that constructing tensors in graphs with weighted linear relationships may yield explanations and interpretable coefficients at each layer.
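The core idea of tracing a prediction through a graph is easiest to see with a small example. The sketch below is not from the post; all node names, weights, and relationship labels are invented for illustration, and it uses networkx rather than Neo4j to stay self-contained. It models a toy lineage/knowledge graph in which data sources feed features, and features contribute (with weights) to a prediction; walking backwards from the prediction node recovers both the contributing features and their upstream data sources.

```python
# A minimal sketch, assuming a toy credit-risk scenario: all entities,
# weights, and relationship labels below are hypothetical illustrations.
import networkx as nx

# Build a toy lineage/knowledge graph: data sources -> features -> prediction.
G = nx.DiGraph()
G.add_edge("claims_db", "feature:late_payments", relation="DERIVED_FROM")
G.add_edge("credit_bureau_feed", "feature:utilization", relation="DERIVED_FROM")
G.add_edge("feature:late_payments", "prediction:high_risk",
           weight=0.62, relation="CONTRIBUTES_TO")
G.add_edge("feature:utilization", "prediction:high_risk",
           weight=0.31, relation="CONTRIBUTES_TO")

def explain(graph: nx.DiGraph, prediction: str) -> None:
    """Walk backwards from a prediction node, reporting each contributing
    feature, its weight, and the upstream data sources (the lineage)."""
    for feature in graph.predecessors(prediction):
        weight = graph[feature][prediction].get("weight")
        sources = list(graph.predecessors(feature))
        print(f"{feature} (weight={weight}) <- sourced from {sources}")

explain(G, "prediction:high_risk")
# Example output:
# feature:late_payments (weight=0.62) <- sourced from ['claims_db']
# feature:utilization (weight=0.31) <- sourced from ['credit_bureau_feed']
```

In a production setting the same traversal would be expressed as a query against a graph database, but the pattern is the same: the prediction, its features, and their provenance all live in one connected structure, so the explanation is a path rather than a post-hoc reconstruction.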