Towards Explainable and Trustworthy AI for Decision Support in Medicine: An Overview of Methods and Good Practices

Abstract

Artificial Intelligence (AI) is defined as intelligence exhibited by machines, such as electronic computers. It can involve reasoning, problem solving, learning, and knowledge representation, which are the capabilities most relevant to the medical domain. Other forms of intelligence, including autonomous behavior, are also part of AI. Data-driven methods for decision support have been employed in the medical domain for some time. Machine learning (ML) is used for a wide range of complex tasks across many industry sectors. However, a broader spectrum of AI, including deep learning (DL) as well as autonomous agents, has recently gained more attention and has raised expectations for solving numerous problems in the medical domain. A barrier to AI adoption, or rather a concern, is trust in AI, which is often undermined by a lack of understanding of how a black-box model functions, or by a lack of credibility in the reporting of results. Explainability and interpretability are prerequisites for the development of AI-based systems that are lawful, ethical, and robust. In this respect, this paper presents an overview of concepts, best practices, and success stories, and opens the discussion for multidisciplinary work towards establishing trustworthy AI.
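To make the notion of post-hoc explainability concrete, the sketch below (not taken from the paper; the dataset, model, and method choices are illustrative assumptions) applies permutation feature importance from scikit-learn to a black-box classifier trained on a public diagnostic dataset. Shuffling a feature and measuring the drop in held-out accuracy indicates how strongly the model relies on that feature.

# Illustrative sketch, not the paper's method: model-agnostic
# explanation of a "black-box" classifier on a clinical dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public diagnostic dataset (breast cancer, Wisconsin).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y
)

# Fit an opaque ensemble model (assumed choice for illustration).
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global explanation: how much does shuffling each feature degrade
# held-out accuracy? Larger drops indicate stronger model reliance.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:30s} "
          f"{result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")

Model-agnostic techniques such as this are only one family of approaches; intrinsically interpretable models (for instance, decision trees or rule lists) instead offer direct transparency, sometimes at a cost in predictive accuracy.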
