
FAIR2: A framework for addressing discrimination bias in social data science

Abstract

Building upon the FAIR principles of (meta)data (Findable, Accessible, Interoperable and Reusable) and drawing from research in the social, health, and data sciences, we propose a framework, FAIR2 (Frame, Articulate, Identify, Report), for identifying and addressing discrimination bias in social data science. We illustrate how FAIR2 enriches data science with experiential knowledge, clarifies assumptions about discrimination with causal graphs and systematically analyzes sources of bias in the data, leading to a more ethical use of data and analytics for the public interest. FAIR2 can be applied in the classroom to prepare a new and diverse generation of data scientists. In this era of big data and advanced analytics, we argue that without an explicit framework to identify and address discrimination bias, data science will not realize its potential of advancing social justice.

This work was generously funded by grant #015865 from the Public Interest Technology University Network - New America Foundation.

Richter, F.; Nelson, E.; Coury, N.; Bruckman, L.; Knighton, S. (2023). FAIR2: A framework for addressing discrimination bias in social data science. Editorial Universitat Politècnica de València. 327-335. https://doi.org/10.4995/CARMA2023.2023.1640032733


This paper was published in RiuNet.
