Abstract

Reading or viewing recommendations are a common feature on modern media sites. What is shown to consumers as recommendations is nowadays often determined automatically by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.

This paper was published in NORA - Norwegian Open Research Archives.