Searching for Fairness

This is the resource page for my keynote talk at the AI & Society Conference on June 28 in Zhuhai, China, entitled “Searching for Fairness: Grounding and Measuring Fairness and Social Impacts in Information Access”.

Information access systems, such as search engines, recommender systems, and conversational agents, are used daily by billions of Internet users and have a profound impact on users’ information experiences, access to knowledge, and understanding of the world and the people around them. These systems differ in crucial ways from the kinds of systems most frequently studied in the algorithmic fairness literature, requiring new techniques to properly understand and measure their social impacts. In this talk, I will discuss what makes these systems different and interesting; ground the quest for fairness and mitigating social harms in the varying goals of recommender systems; and describe several specific approaches and a general philosophy for measuring and mitigating harms in information access and other AI systems.

My work discussed in the talk:

2024. Not Just Algorithms: Strategically Addressing Consumer Impacts in Information Retrieval. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track), Mar 24–28, 2024. Lecture Notes in Computer Science 14611:314–335. DOI 10.1007/978-3-031-56066-8_25. NSF PAR 10497110. Acceptance rate: 35.9%. Cited 12 times. Cited 5 times.

2024. The Impossibility of Fair LLMs. In HEAL: Human-centered Evaluation and Auditing of Language Models, a non-archival workshop at CHI 2024, May 12, 2024. arXiv:2406.03198v1 [cs.CL]. Cited 21 times. Cited 14 times.

2024. Towards Optimizing Ranking in Grid-Layout for Provider-side Fairness. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track), Mar 24–28, 2024. Lecture Notes in Computer Science 14612:90–105. DOI 10.1007/978-3-031-56069-9_7. NSF PAR 10497109. Acceptance rate: 35.9%. Cited 3 times. Cited 2 times.

2024. Distributionally-Informed Recommender System Evaluation. Transactions on Recommender Systems 2(1) (March 2024; online Aug 4, 2023), 6:1–27. DOI 10.1145/3613455. arXiv:2309.05892 [cs.IR]. NSF PAR 10461937. Cited 18 times. Cited 11 times.

2023. Towards Measuring Fairness in Grid Layout in Recommender Systems. Presented at the 6th FAccTrec Workshop on Responsible Recommendation at RecSys 2023 (peer-reviewed but not archived). arXiv:2309.10271 [cs.IR]. Cited 1 time.

2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630. Impact factor: 8. Cited 217 times. Cited 102 times.

2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTrec Workshop on Responsible Recommendation at RecSys 2022 (peer-reviewed but not archived). arXiv:2209.02662 [cs.IR]. Cited 5 times. Cited 3 times.

2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22, Jul 15, 2022. 9 pp. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747 [cs.IR]. Cited 14 times. Cited 6 times.

2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), Jul 11, 2022. pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 79 times. Cited 54 times.

2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine 43(2) (June 2022), 164–176. DOI 10.1002/aaai.12054. NSF PAR 10334796. Cited 40 times. Cited 22 times.

2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853. Impact factor: 4.412. Cited 213 times (shared with RecSys18). Cited 115 times (shared with RecSys18).

2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20), Oct 21, 2020. ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 205 times. Cited 179 times.

2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018), Feb 23, 2018. PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 323 times. Cited 226 times. Related resource: book data tools.

Other work cited in the talk: