FATE/MM '20: Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia

SESSION: Session 1

Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation

  • Jungseock Joo
  • Kimmo Kärkkäinen

Automated computer vision systems have been applied in many domains including security,
law enforcement, and personal devices, but recent reports suggest that these systems
may produce biased results, discriminating against people in certain demographic groups.
Diagnosing and understanding the underlying true causes of model biases, however,
are challenging tasks because modern computer vision systems rely on complex black-box
models whose behaviors are hard to decode. We propose to use an encoder-decoder network
developed for image attribute manipulation to synthesize facial images varying in
the dimensions of gender and race while keeping other signals intact. We use these
synthesized images to measure counterfactual fairness of commercial computer vision
classifiers by examining the degree to which these classifiers are affected by gender
and racial cues controlled in the images, e.g., feminine faces may elicit higher scores
for the concept of nurse and lower scores for STEM-related concepts.
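
As a rough illustration of this kind of counterfactual probing (a sketch under assumptions, not the authors' pipeline), one can synthesize versions of the same face at several points along a gender direction, query a black-box classifier for a concept score, and summarize the dependence as the slope of score against attribute intensity. The synthesize_face and score_concept helpers below are hypothetical stand-ins for the paper's encoder-decoder manipulator and a commercial vision API.

    # Illustrative sketch: estimate a "gender slope" by querying a classifier
    # with counterfactual images varied along a gender axis.
    import numpy as np

    def synthesize_face(base_image, gender_intensity):
        # Hypothetical: would return the base face re-rendered at the given
        # position along a learned gender direction, other attributes fixed.
        raise NotImplementedError

    def score_concept(image, concept="nurse"):
        # Hypothetical: would call a commercial classifier and return its confidence.
        raise NotImplementedError

    def gender_slope(base_image, concept, intensities=np.linspace(-1.0, 1.0, 9)):
        """Slope of the classifier score as the gender cue is varied; a nonzero
        slope indicates the score depends on the manipulated attribute."""
        scores = [score_concept(synthesize_face(base_image, t), concept)
                  for t in intensities]
        slope, _intercept = np.polyfit(intensities, scores, 1)
        return slope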

Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research

  • Jiachen Jiang
  • Soroush Vosoughi

Social media has shaken the foundations of our society, unlikely as it may seem. Many
of the popular tools used to moderate harmful digital content, however, have received
widespread criticism from both the academic community and the public sphere for middling
performance and lack of accountability. Though social media research is thought to
center primarily on natural language processing, we demonstrate the need for the community
to understand multimedia processing and its unique ethical considerations. Specifically,
we identify statistical differences in the performance of Amazon Mechanical Turk (MTurk) annotators
when different modalities of information are provided and discuss the patterns of
harm that arise from crowd-sourced human demographic prediction. Finally, we discuss
the consequences of those biases by auditing the performance of the Perspective API
toxicity detector on the language of Twitter users across a variety of demographic
categories.
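
A minimal sketch of this kind of demographic audit, under the assumption that per-tweet toxicity scores can be obtained: group scores by the author's demographic category and compare the group means. The perspective_toxicity helper is a hypothetical wrapper, not the Perspective API's actual client interface.

    # Illustrative sketch: compare mean toxicity scores across demographic groups.
    from statistics import mean

    def perspective_toxicity(text):
        # Hypothetical: would send `text` to the Perspective API and return
        # its TOXICITY summary score in [0, 1].
        raise NotImplementedError

    def audit_by_group(tweets):
        """tweets: iterable of (text, demographic_group) pairs.
        Returns the mean toxicity score per group, so systematic gaps become visible."""
        by_group = {}
        for text, group in tweets:
            by_group.setdefault(group, []).append(perspective_toxicity(text))
        return {group: mean(scores) for group, scores in by_group.items()}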

SESSION: Session 2

Balancing Fairness and Accuracy in Sentiment Detection using Multiple Black Box Models

  • Abdulaziz A. Almuzaini
  • Vivek K. Singh

Sentiment detection is an important building block for multiple information retrieval
tasks such as product recommendation, cyberbullying detection, and fake news and misinformation
detection. Unsurprisingly, multiple commercial APIs, each with different levels of
accuracy and fairness, are now publicly available for sentiment detection. Users can
easily incorporate these APIs in their applications. While combining inputs from multiple
modalities or black-box models for increasing accuracy is commonly studied in multimedia
computing literature, there has been little work on combining different modalities
for increasing the fairness of the resulting decision. In this work, we audit multiple
commercial sentiment detection APIs for gender bias in two-actor news headline
settings and report on the level of bias observed. Next, we propose a "Flexible Fair
Regression" approach, which ensures satisfactory accuracy and fairness by jointly
learning from multiple black-box models. The results pave the way for fair yet accurate
sentiment detectors for multiple applications.
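
One way to picture the accuracy/fairness trade-off in such a combination (a sketch only, not the authors' "Flexible Fair Regression" formulation) is to learn weights over the black-box API scores that minimize squared prediction error plus a parity penalty on gender-swapped headline pairs. The data shapes and penalty form below are assumptions made for illustration.

    # Illustrative sketch: weight several black-box sentiment APIs, trading
    # accuracy against a parity penalty on gender-swapped headline pairs.
    import numpy as np
    from scipy.optimize import minimize

    def fit_fair_combination(X, y, X_male, X_female, lam=1.0):
        """X: (n, k) API scores per headline; y: (n,) ground-truth sentiment.
        X_male / X_female: (m, k) API scores for matched gender-swapped headlines.
        lam trades accuracy (squared error) against fairness (score gap between swaps)."""
        k = X.shape[1]

        def objective(w):
            accuracy_loss = np.mean((X @ w - y) ** 2)
            fairness_loss = np.mean(np.abs(X_male @ w - X_female @ w))
            return accuracy_loss + lam * fairness_loss

        result = minimize(objective, x0=np.full(k, 1.0 / k))
        return result.x  # combination weights over the k APIs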

Fighting Filterbubbles with Adversarial Training

  • Lukas Pfahler
  • Katharina Morik

Recommender engines play a role in the emergence and reinforcement of filter bubbles.
When these systems learn that a user prefers content from a particular site, the user
will be less likely to be exposed to different sources or opinions and, ultimately,
is more likely to develop extremist tendencies. We trace the roots of this phenomenon
to the way the recommender engine represents news articles. The vectorial features
modern systems extract from the plain text of news articles are already highly predictive
of the associated news outlet. We propose a new training scheme based on adversarial
machine learning to tackle this issue. Our preliminary experiments show that the
features extracted this way are significantly less predictive of the news outlet
and thus offer the possibility of reducing the risk that new filter bubbles form.
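
A common way to realize such an adversarial scheme (sketched here under assumptions; the paper's exact setup may differ) is a gradient-reversal adversary: an auxiliary head tries to predict the news outlet from the article features, and the reversed gradient pushes the encoder toward features that are not outlet-predictive. Layer sizes and the recommendation head below are illustrative choices, with PyTorch assumed.

    # Illustrative sketch: gradient-reversal adversary that penalizes article
    # features for being predictive of the news outlet.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, alpha):
            ctx.alpha = alpha
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Reversed gradient flows back into the encoder.
            return -ctx.alpha * grad_output, None

    class DebiasedArticleEncoder(nn.Module):
        def __init__(self, text_dim=300, feat_dim=64, n_outlets=20, alpha=1.0):
            super().__init__()
            self.alpha = alpha
            self.encoder = nn.Sequential(nn.Linear(text_dim, feat_dim), nn.ReLU())
            self.relevance_head = nn.Linear(feat_dim, 1)       # recommendation score
            self.outlet_head = nn.Linear(feat_dim, n_outlets)  # adversary: predict outlet

        def forward(self, x):
            features = self.encoder(x)
            relevance = self.relevance_head(features)
            outlet_logits = self.outlet_head(GradReverse.apply(features, self.alpha))
            # Train with the recommendation loss plus outlet cross-entropy;
            # the reversal makes the encoder hide outlet information.
            return relevance, outlet_logits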