Searching for Fairness

This is the resource page for my keynote talk at the AI & Society Conference on June 28 in Zhuhai, China, entitled “Searching for Fairness: Grounding and Measuring Fairness and Social Impacts in Information Access”.

Abstract

Information access systems, such as search engines, recommender systems, and conversational agents, are used daily by billions of Internet users and have a profound impact on users’ information experiences, access to knowledge, and understanding of the world and people around them. These systems differ in crucial ways from the kinds of systems most frequently studied in the algorithmic fairness literature, requiring new techniques to properly understand and measure their social impacts. In this talk, I will discuss what makes these systems different and interesting; ground the quest for fairness and mitigating social harms in the varying goals of recommender systems; and describe several specific approaches and a general philosophy for measuring and mitigating harms in information access and other AI systems.

Slides

Papers

My work discussed in the talk.

FnT22
2022

Michael D. Ekstrand, Anubrata Das, Robin Burke, and Fernando Diaz. 2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630. Impact factor: 8. Cited 217 times.

ECIR24i
2024

Michael D. Ekstrand, Lex Beattie, Maria Soledad Pera, and Henriette Cramer. 2024. Not Just Algorithms: Strategically Addressing Consumer Impacts in Information Retrieval. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track), Mar 24–28, 2024. Lecture Notes in Computer Science 14611:314–335. DOI 10.1007/978-3-031-56066-8_25. NSF PAR 10497110. Acceptance rate: 35.9%. Cited 12 times.

HEAL24
2024

Jacy Reese Anthis, Kristian Lum, Michael Ekstrand, Avi Feller, Alexander D’Amour, and Chenhao Tan. 2024. The Impossibility of Fair LLMs. In HEAL: Human-centered Evaluation and Auditing of Language Models, a non-archival workshop at CHI 2024, May 12, 2024. arXiv:2406.03198v1 [cs.CL]. Cited 21 times.

ECIR24g
2024

Amifa Raj and Michael D. Ekstrand. 2024. Towards Optimizing Ranking in Grid-Layout for Provider-side Fairness. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track), Mar 24–28, 2024. Lecture Notes in Computer Science 14612:90–105. DOI 10.1007/978-3-031-56069-9_7. NSF PAR 10497109. Acceptance rate: 35.9%. Cited 3 times.

TORS24d
2024

Michael D. Ekstrand, Ben Carterette, and Fernando Diaz. 2024. Distributionally-Informed Recommender System Evaluation. Transactions on Recommender Systems 2(1) (March 2024; online Aug 4, 2023), 6:1–27. DOI 10.1145/3613455. arXiv:2309.05892 [cs.IR]. NSF PAR 10461937. Cited 18 times.

FAccTRec23
2023

Amifa Raj and Michael D. Ekstrand. 2023. Towards Measuring Fairness in Grid Layout in Recommender Systems. Presented at the 6th FAccTRec Workshop on Responsible Recommendation at RecSys 2023 (peer-reviewed but not archived). arXiv:2309.10271 [cs.IR]. Cited 1 time.

FAccTRec22
2022

Michael D. Ekstrand and Maria Soledad Pera. 2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTRec Workshop on Responsible Recommendation at RecSys 2022 (peer-reviewed but not archived). arXiv:2209.02662 [cs.IR]. Cited 5 times.

SIGIRec22
2022

Amifa Raj and Michael D. Ekstrand. 2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22, Jul 15, 2022. 9 pp. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747 [cs.IR]. Cited 14 times.

SIGIR22
2022

Amifa Raj and Michael D. Ekstrand. 2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), Jul 11, 2022. pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 79 times.

AIMAG22
2022

Nasim Sonboli, Robin Burke, Michael Ekstrand, and Rishabh Mehrotra. 2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine 43(2) (June 2022), 164–176. DOI 10.1002/aaai.12054. NSF PAR 10334796. Cited 40 times.

UMUAI21
2021

Michael D. Ekstrand and Daniel Kluver. 2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853. Impact factor: 4.412. Cited 213 times (shared with RecSys18).

CIKM20ee
2020

Fernando Diaz, Bhaskar Mitra, Michael D. Ekstrand, Asia J. Biega, and Ben Carterette. 2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20), Oct 21, 2020. ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 205 times.

FAT18ck
2018

Michael D. Ekstrand, Mucun Tian, Ion Madrazo Azpiazu, Jennifer D. Ekstrand, Oghenemaro Anuyah, David McNeill, and Maria Soledad Pera. 2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018), Feb 23, 2018. PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 323 times.

Related resource: book data tools.

Bibliography

Other work cited in the talk:

  • Belkin, Nicholas J., and Stephen E. Robertson. 1976. “Some Ethical and Political Implications of Theoretical Research in Information Science.” In Proceedings of the ASIS Annual Meeting. https://www.researchgate.net/publication/255563562.
  • Beutel, Alex, Ed H. Chi, Cristos Goodrow, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, et al. 2019. “Fairness in Recommendation Ranking through Pairwise Comparisons.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. doi:10.1145/3292500.3330745.
  • Biega, Asia J., Krishna P. Gummadi, and Gerhard Weikum. 2018. “Equity of Attention: Amortizing Individual Fairness in Rankings.” In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 405–14. ACM. doi:10.1145/3209978.3210063.
  • Binns, Reuben. 2020. “On the Apparent Conflict between Individual and Group Fairness.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 514–24. FAT* ’20. doi:10.1145/3351095.3372864.
  • Burke, Robin, and Morgan Sylvester. 2024. “Post-Userist Recommender Systems: A Manifesto.” arXiv:2410.11870.
  • Chouldechova, Alexandra. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5(2): 153–63. doi:10.1089/big.2016.0047.
  • Dwork, Cynthia, and Christina Ilvento. 2019. “Fairness under Composition.” In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). doi:10.4230/LIPICS.ITCS.2019.33.
  • Fish, Benjamin, Ashkan Bashardoust, Danah Boyd, Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. “Gaps in Information Access in Social Networks?” In WWW ’19: The World Wide Web Conference, 480–90. doi:10.1145/3308558.3313680.
  • Friedler, Sorelle A., Carlos Scheidegger, and Suresh Venkatasubramanian. 2021. “The (Im)possibility of Fairness.” Communications of the ACM 64(4): 136–43. doi:10.1145/3433949.
  • Friedman, Batya, and Helen Nissenbaum. 1996. “Bias in Computer Systems.” ACM Transactions on Information Systems 14(3): 330–47. doi:10.1145/230538.230561.
  • Hill, William, Larry Stead, Mark Rosenstein, and George Furnas. 1995. “Recommending and Evaluating Choices in a Virtual Community of Use.” In CHI ’95: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 194–201. doi:10.1145/223904.223929.
  • Kamishima, Toshihiro, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. 2018. “Recommendation Independence.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81:187–201. Proceedings of Machine Learning Research. New York, NY, USA: PMLR. http://proceedings.mlr.press/v81/kamishima18a.html.
  • Mehrotra, Rishabh, Ashton Anderson, Fernando Diaz, Amit Sharma, Hanna Wallach, and Emine Yilmaz. 2017. “Auditing Search Engines for Differential Satisfaction Across Demographics.” In Proceedings of the 26th International Conference on World Wide Web Companion, 626–33. doi:10.1145/3041021.3054197.
  • Mitchell, Shira, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2020. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application 8 (November). doi:10.1146/annurev-statistics-042720-125902.
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. FAT* ’19. doi:10.1145/3287560.3287598.
  • Smith, Jessie J., Lex Beattie, and Henriette Cramer. 2023. “Scoping Fairness Objectives and Identifying Fairness Metrics for Recommender Systems: The Practitioners’ Perspective.” In Proceedings of the ACM Web Conference 2023, 3648–59. doi:10.1145/3543507.3583204.
  • Smith, Jessie J., and Lex Beattie. 2022. “RecSys Fairness Metrics: Many to Use but Which One to Choose?” arXiv:2209.04011 [cs.HC]. http://arxiv.org/abs/2209.04011.
  • Wang, Lequn, and Thorsten Joachims. 2021. “User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets.” In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, 23–41. ICTIR ’21. doi:10.1145/3471158.3472260.