2023.4.24: news.cyb/ai/
Timnit Gebru and Margaret Mitchell on AI Accountability
. in late 2020 and early 2021, two researchers
— Timnit Gebru and Margaret Mitchell —
drew widespread attention
after they authored a research paper
addressing flaws in today's AI systems.
[James Vincent Apr 19, 2023]
some papers by Timnit Gebru and Margaret Mitchell:
2022:
Vinodkumar Prabhakaran,
Margaret Mitchell,
Timnit Gebru,
Iason Gabriel
https://arxiv.org/abs/2210.02667
A Human Rights-Based Approach to Responsible AI.
. Research on fairness, accountability, transparency and ethics
of AI-based interventions in society
has gained much-needed momentum in recent years.
However, it lacks an explicit alignment with
a set of normative values and principles
that guide this research and interventions.
Rather, an implicit consensus is often assumed to hold
for the values we impart into our models
- something that is at odds with the pluralistic world we live in.
In this paper, we put forth the doctrine of
universal human rights
as a globally salient and cross-culturally recognized
set of values that can serve as a grounding framework
for explicit value alignment in responsible AI
- and discuss its efficacy as a framework for
civil society partnership and participation.
We argue that a human rights framework orients the research in this space
away from the machines
and the risks of their biases,
and towards humans
and the risks to their rights,
essentially helping to center the conversation
around who is harmed, what harms they face,
and how those harms may be mitigated.
2021:
Daniel J. Liebling, Katherine Heller,
Margaret Mitchell,
Mark Díaz, Michal Lahav,
Niloufar Salehi, Samantha Robertson, Samy Bengio,
Timnit Gebru,
Wesley Deng
https://research.google/pubs/pub50504/
Three Directions for the Design of
Human-Centered Machine Translation.
https://www.samantha-robertson.com/publication/hcmt/hcmt.pdf
As people all over the world adopt
machine translation (MT) to communicate across languages,
there is increased need for affordances that
aid users in understanding when to rely on
automated translations.
Identifying the information and interactions that will
most help users meet their translation needs
is an open area of research at the intersection of
Human-Computer Interaction (HCI)
and Natural Language Processing (NLP).
This paper advances work in this area by
drawing on a survey of users' strategies in
assessing translations.
We identify three directions for the
design of translation systems that support
more reliable and effective use of machine translation:
helping users craft good inputs,
helping users understand translations,
and expanding interactivity and adaptivity.
We describe how these can be introduced in current MT systems
and highlight open questions for HCI and NLP research.
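. a minimal sketch of one affordance in the "helping users
understand translations" direction: showing users a
backtranslation of their input as a cheap reliability signal.
translate() is a hypothetical stand-in for a real MT API,
stubbed with canned examples so the sketch runs;
the affordance choice is illustrative, not from the paper.
    # sketch: surface a backtranslation so a user can judge
    # whether to rely on an automated translation.
    def translate(text: str, src: str, tgt: str) -> str:
        """Hypothetical MT call, stubbed so this runs standalone."""
        canned = {
            ("en", "fr", "The meeting is at noon."): "La réunion est à midi.",
            ("fr", "en", "La réunion est à midi."): "The meeting is at noon.",
        }
        return canned.get((src, tgt, text), text)

    def round_trip(text: str, src: str, tgt: str):
        """Return (translation, backtranslation) for side-by-side display."""
        forward = translate(text, src, tgt)
        back = translate(forward, tgt, src)
        return forward, back

    forward, back = round_trip("The meeting is at noon.", "en", "fr")
    print("translation:    ", forward)
    print("backtranslation:", back)  # divergence from the input is a caution sign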
2020:
Andrew Smart, Becky White, Ben Hutchinson, Daniel Theron, Inioluwa Deborah Raji, Jamila Smith-Loud,
Margaret Mitchell,
Parker Barnes,
Timnit Gebru
https://research.google/pubs/pub48823/
Closing the AI accountability gap:
defining an end-to-end framework for internal algorithmic auditing.
in FAT* 2020, Barcelona
(ACM Conference on Fairness, Accountability, and Transparency)
Rising concern for the societal implications of
artificial intelligence systems
has inspired a wave of academic and journalistic literature
in which deployed systems are audited for harm
by investigators from outside the organizations deploying the algorithms.
However, it remains challenging for practitioners to identify
the harmful repercussions of their own systems
prior to deployment,
and, once deployed, emergent issues can become
difficult or impossible to trace back to their source.
In this paper, we introduce a framework for
algorithmic auditing that supports
artificial intelligence system development
end-to-end, to be applied throughout the
internal organization development life-cycle.
Each stage of the audit yields a set of documents
that together form an overall audit report,
drawing on an organization’s values or principles
to assess the fit of decisions made throughout the process.
The proposed auditing framework is intended to contribute to
closing the accountability gap
in the development and deployment of
large-scale artificial intelligence systems
by embedding a robust process to ensure audit integrity.
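. a minimal sketch of the staged-audit idea, assuming Python
dataclasses: each stage yields documents that accumulate into
one overall report. the stage names follow the paper's SMACTR
framework (Scoping, Mapping, Artifact Collection, Testing,
Reflecting); the document fields are illustrative, not the
paper's actual templates.
    # sketch: audit stages produce documents; documents form the report.
    from dataclasses import dataclass, field

    @dataclass
    class AuditDocument:
        stage: str   # which SMACTR stage produced this document
        title: str   # e.g. "ethical review of use case"
        body: str

    @dataclass
    class AuditReport:
        system_name: str
        principles: list          # org values the audit assesses against
        documents: list = field(default_factory=list)

        def add(self, doc: AuditDocument) -> None:
            self.documents.append(doc)

        def summary(self) -> str:
            lines = [f"audit report for {self.system_name}"]
            lines += [f"  [{d.stage}] {d.title}" for d in self.documents]
            return "\n".join(lines)

    STAGES = ("Scoping", "Mapping", "Artifact Collection",
              "Testing", "Reflecting")

    report = AuditReport("toxicity-classifier-v2",
                         principles=["fairness", "safety"])
    report.add(AuditDocument("Scoping", "ethical review of use case", "..."))
    report.add(AuditDocument("Testing", "adversarial test results", "..."))
    print(report.summary())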
2018:
Margaret Mitchell,
Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji,
Timnit Gebru
https://arxiv.org/abs/1810.03993
Model Cards for Model Reporting.
https://research.latinxinai.org/papers/neurips/2018/pdf/Oral_Andrew_Zaldivar.pdf
Trained machine learning models are increasingly used to perform
high-impact tasks in areas such as law enforcement,
medicine, education, and employment.
In order to clarify the intended use cases of machine learning models
and minimize their usage in contexts for which they are
not well suited, we recommend that
released models be accompanied by documentation
detailing their performance characteristics.
In this paper, we propose a framework that we call
model cards,
to encourage such transparent model reporting.
Model cards are short documents accompanying trained machine learning models
that provide benchmarked evaluation in a variety of conditions,
such as across different cultural, demographic, or phenotypic groups
(e.g., race, geographic location, sex, Fitzpatrick skin type)
and intersectional groups
(e.g., age and race, or sex and Fitzpatrick skin type)
that are relevant to the intended application domains.
Model cards also disclose the context in which
models are intended to be used,
details of the performance evaluation procedures,
and other relevant information.
While we focus primarily on human-centered machine learning models
in the application fields of computer vision and natural language processing,
this framework can be used to document
any trained machine learning model.
To solidify the concept, we provide cards for two
supervised models:
One trained to detect smiling faces in images,
and one trained to detect toxic comments in text.
We propose model cards as a step towards the
responsible democratization of machine learning
and related AI technology,
increasing transparency into how well AI technology works.
We hope this work encourages those
releasing trained machine learning models
to accompany model releases with
similar detailed evaluation numbers
and other relevant documentation.
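. a minimal sketch of a model card as a structured record,
assuming Python dataclasses. the section names track the
paper's proposed template (model details, intended use,
factors, metrics, quantitative analyses, ethical
considerations, caveats and recommendations); every example
value below is invented, for format only.
    # sketch: a model card as a dataclass, with metrics
    # disaggregated across the groups named in "factors".
    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        model_details: str
        intended_use: str
        factors: list                 # groups evaluation disaggregates over
        metrics: list
        quantitative_analyses: dict   # metric value per group
        ethical_considerations: str = ""
        caveats_and_recommendations: str = ""

    card = ModelCard(
        model_details="smiling-face detector, CNN, v1.0 (invented)",
        intended_use="sorting personal photos; not for surveillance.",
        factors=["Fitzpatrick skin type", "age group"],
        metrics=["false positive rate", "false negative rate"],
        quantitative_analyses={
            "FPR / skin type I-III": 0.04,   # invented numbers
            "FPR / skin type IV-VI": 0.07,
        },
        caveats_and_recommendations="evaluate on your own population first.",
    )
    for section, value in vars(card).items():
        print(f"{section}: {value}")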