Explanation Techniques using Markov Logic Networks

Abstract

Explaining the results of Artificial Intelligence (AI) or Machine Learning (ML) algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains such as healthcare, defense, and autonomous driving. While AI/ML approaches yield highly accurate results in many challenging tasks such as natural language understanding, visual recognition, and game playing, the underlying principles behind those results are not easily understood, and trust in AI/ML methods for critical application domains is therefore significantly lacking. While there has been progress in explaining classifiers, two significant drawbacks remain. First, current explanation approaches assume that data instances are independent, which is problematic when the data is relational in nature, as it is in many real-world problems. Second, explanations that rely only on individual instances are less interpretable because they do not utilize relational information, which may be more intuitive for a human user to understand. In this dissertation, we have developed explanations using Markov Logic Networks (MLNs), highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. Since MLNs are symbolic models, it is possible to extract explanations that are human-interpretable. However, doing so is computationally hard for large MLNs, since we need to perform probabilistic inference to attribute the influence of symbolic formulas to the predictions. In this dissertation, we have developed a suite of fundamental techniques that help us in i) explaining probabilistic inference in MLNs and ii) utilizing MLNs as a symbolic model for specifying relational dependencies that can be used in other explanation methods. Thus, this dissertation significantly advances the state of the art in explanations for relational models and helps improve transparency and trust in these models.
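For readers unfamiliar with the formalism, the sketch below illustrates the standard MLN semantics the abstract refers to, using a small hypothetical two-atom domain that is not taken from the dissertation: a possible world's probability is proportional to the exponentiated sum of the weights of the weighted formulas it satisfies, and queries such as marginals are answered under that distribution. Attributing such a marginal back to the individual weighted formulas is the kind of explanation task described above.

```python
import itertools
import math

# Minimal, hypothetical sketch of MLN semantics:
# P(world) = exp(sum_i w_i * n_i(world)) / Z, where n_i counts the
# true groundings of weighted formula i in that world.

# Two ground atoms for a toy domain: Smokes(A) and Cancer(A).
atoms = ["Smokes(A)", "Cancer(A)"]

# Weighted ground formulas as (weight, truth function over a world):
# "Smokes(A) => Cancer(A)" with weight 1.5, and "Smokes(A)" with weight 0.8.
formulas = [
    (1.5, lambda w: (not w["Smokes(A)"]) or w["Cancer(A)"]),
    (0.8, lambda w: w["Smokes(A)"]),
]

def unnormalized(world):
    # Weight of a world: exp of the summed weights of satisfied formulas.
    return math.exp(sum(wt for wt, f in formulas if f(world)))

# Enumerate all possible worlds (truth assignments) to normalize exactly;
# real MLNs are far too large for this, which is why inference is hard.
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(unnormalized(w) for w in worlds)

# Marginal probability that Cancer(A) holds under the MLN distribution.
p_cancer = sum(unnormalized(w) for w in worlds if w["Cancer(A)"]) / Z
print(f"P(Cancer(A)) = {p_cancer:.3f}")
```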

This dissertation was published in University of Memphis Digital Commons.
