Fair Recommender Systems
In this project, we are investigating several questions of fairness and bias in recommender systems:
- What does it mean for a recommender to be fair, unfair, or biased?
- What potentially discriminatory biases are present in the recommender’s input data, algorithmic structure, or output?
- How do these biases change over time through the recommender-user feedback loop?
This is part of our overall, ongoing goal to help make recommenders (and other AI systems) better for the people they affect. The key starting point for reading about this research is our monograph. I have also been involved with several workshops, sessions, and other events related to fair recommendation.

Publications

- 2024. It’s Not You, It’s Me: The Impact of Choice Models and Ranking Strategies on Gender Imbalance in Music Recommendation. Short paper in Proceedings of the 18th ACM Conference on Recommender Systems (RecSys ’24). ACM. DOI 10.1145/3640457.3688163. arXiv:2409.03781 [cs.IR].
- 2024. Not Just Algorithms: Strategically Addressing Consumer Impacts in Information Retrieval. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track). Lecture Notes in Computer Science 14611:314–335. DOI 10.1007/978-3-031-56066-8_25. NSF PAR 10497110. Acceptance rate: 35.9%. Cited 6 times. Cited 3 times.
- 2024. Towards Optimizing Ranking in Grid-Layout for Provider-side Fairness. In Proceedings of the 46th European Conference on Information Retrieval (ECIR ’24, IR for Good track). Lecture Notes in Computer Science 14612:90–105. DOI 10.1007/978-3-031-56069-9_7. NSF PAR 10497109. Acceptance rate: 35.9%. Cited 1 time. Cited 1 time.
- 2024. Building Human Values into Recommender Systems: An Interdisciplinary Synthesis. Transactions on Recommender Systems 2(3) (June 5th, 2024; online November 12th, 2023), 20:1–57. DOI 10.1145/3632297. arXiv:2207.10192 [cs.IR]. Cited 63 times. Cited 43 times.
- 2024. Distributionally-Informed Recommender System Evaluation. Transactions on Recommender Systems 2(1) (March 7th, 2024; online August 4th, 2023), 6:1–27. DOI 10.1145/3613455. arXiv:2309.05892 [cs.IR]. NSF PAR 10461937. Cited 16 times. Cited 9 times.
- 2023. Towards Measuring Fairness in Grid Layout in Recommender Systems. Presented at the 6th FAccTRec Workshop on Responsible Recommendation at RecSys 2023 (peer-reviewed but not archived). arXiv:2309.10271 [cs.IR]. Cited 1 time.
- 2023. Patterns of Gender-Specializing Query Reformulation. Short paper in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’23). DOI 10.1145/3539618.3592034. arXiv:2304.13129. NSF PAR 10423689. Acceptance rate: 25.1%. Cited 2 times. Cited 1 time.
- 2023. Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access. In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval (CHIIR ’23). DOI 10.1145/3576840.3578316. arXiv:2301.04780. NSF PAR 10423693. Acceptance rate: 39.4%. Cited 19 times. Cited 11 times.
- 2022. Matching Consumer Fairness Objectives & Strategies for RecSys. Presented at the 5th FAccTRec Workshop on Responsible Recommendation at RecSys 2022 (peer-reviewed but not archived). arXiv:2209.02662 [cs.IR]. Cited 6 times. Cited 4 times.
- 2022. Fire Dragon and Unicorn Princess: Gender Stereotypes and Children’s Products in Search Engine Responses. In SIGIR eCom ’22. DOI 10.48550/arXiv.2206.13747. arXiv:2206.13747 [cs.IR]. Cited 11 times. Cited 5 times.
- 2022. Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), pp. 726–736. DOI 10.1145/3477495.3532018. NSF PAR 10329880. Acceptance rate: 20%. Cited 59 times. Cited 44 times.
- 2021. Pink for Princesses, Blue for Superheroes: The Need to Examine Gender Stereotypes in Kids’ Products in Search and Recommendations. In Proceedings of the 5th International and Interdisciplinary Workshop on Children & Recommender Systems (KidRec ’21), at IDC 2021. arXiv:2105.09296. NSF PAR 10335669. Cited 9 times. Cited 5 times.
- 2022. Fairness in Information Access Systems. Foundations and Trends® in Information Retrieval 16(1–2) (July 11th, 2022), 1–177. DOI 10.1561/1500000079. arXiv:2105.05779 [cs.IR]. NSF PAR 10347630. Impact factor: 8. Cited 184 times. Cited 84 times.
- 2022. The Multisided Complexity of Fairness in Recommender Systems. AI Magazine 43(2) (June 23rd, 2022), 164–176. DOI 10.1002/aaai.12054. NSF PAR 10334796. Cited 33 times. Cited 19 times.
- 2022. Fairness in Recommender Systems. In Recommender Systems Handbook (3rd edition). Francesco Ricci, Lior Rokach, and Bracha Shapira, eds. Springer-Verlag. DOI 10.1007/978-1-0716-2197-4_18. ISBN 978-1-0716-2196-7. Cited 36 times. Cited 19 times.
- 2021. Exploring Author Gender in Book Rating and Recommendation. User Modeling and User-Adapted Interaction 31(3) (February 4th, 2021), 377–420. DOI 10.1007/s11257-020-09284-2. arXiv:1808.07586v2. NSF PAR 10218853. Impact factor: 4.412. Cited 201 times (shared with RecSys18◊). Cited 107 times (shared with RecSys18◊).
- 2021. Estimation of Fair Ranking Metrics with Incomplete Judgments. In Proceedings of The Web Conference 2021 (TheWebConf 2021). ACM. DOI 10.1145/3442381.3450080. arXiv:2108.05152. NSF PAR 10237411. Acceptance rate: 21%. Cited 47 times. Cited 36 times.
- 2020. Evaluating Stochastic Rankings with Expected Exposure. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20). ACM, pp. 275–284. DOI 10.1145/3340531.3411962. arXiv:2004.13157 [cs.IR]. NSF PAR 10199451. Acceptance rate: 20%. Nominated for Best Long Paper. Cited 187 times. Cited 166 times.
- 2020. Comparing Fair Ranking Metrics. Presented at the 3rd FAccTRec Workshop on Responsible Recommendation at RecSys 2020 (peer-reviewed but not archived). arXiv:2009.01311 [cs.IR]. Cited 37 times. Cited 29 times.
- 2020. Overview of the TREC 2019 Fair Ranking Track. Meeting summary in Proceedings of the Twenty-Eighth Text REtrieval Conference (TREC 2019). arXiv:2003.11650. Cited 46 times. Cited 14 times.
- 2019. FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval. SIGIR Forum 53(2) (December 12th, 2019), 20–43. DOI 10.1145/3458553.3458556. Cited 50 times. Cited 18 times.
- 2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM, pp. 242–250. DOI 10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR]. Acceptance rate: 17.5%. Citations reported under UMUAI21◊.
- 2018. All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018). PMLR, Proceedings of Machine Learning Research 81:172–186. Acceptance rate: 24%. Cited 290 times. Cited 212 times.
- 2017. The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users. In RecSys 2017 Poster Proceedings. CEUR Workshop Proceedings 1905. Cited 17 times. Cited 6 times.

Workshop Summaries

- 2023. FAccTRec 2023: The 6th Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 17th ACM Conference on Recommender Systems (RecSys ’23). ACM. DOI 10.1145/3604915.3608761. Cited 1 time. Cited 1 time.
- 2021. FAccTRec 2021: The 4th Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 15th ACM Conference on Recommender Systems (RecSys ’21). ACM. DOI 10.1145/3460231.3470932. Cited 2 times. Cited 2 times.
- 2020. 3rd FATREC Workshop: Responsible Recommendation. Meeting summary in Proceedings of the 14th ACM Conference on Recommender Systems (RecSys ’20). ACM. DOI 10.1145/3383313.3411538. Cited 6 times. Cited 6 times.
- 2020. FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization. Meeting summary in Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’20). ACM. DOI 10.1145/3340631.3398671. Cited 5 times. Cited 2 times.
- 2019. Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR). In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’19). ACM. DOI 10.1145/3331184.3331644. Cited 6 times.
- 2019. FairUMAP 2019 Chairs’ Welcome Overview. Meeting summary in Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (UMAP ’19). ACM. DOI 10.1145/3314183.3323842.
- 2018. 2nd FATREC Workshop: Responsible Recommendation. Meeting summary in Proceedings of the 12th ACM Conference on Recommender Systems (RecSys ’18). ACM. DOI 10.1145/3240323.3240335. Cited 13 times. Cited 11 times.
- 2018. UMAP 2018 Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2018) Chairs’ Welcome & Organization. Meeting summary in Adjunct Publication of the 26th Conference on User Modeling, Adaptation, and Personalization (UMAP ’18). ACM. DOI 10.1145/3213586.3226200.
- 2017. The FATREC Workshop on Responsible Recommendation. Meeting summary in Proceedings of the 11th ACM Conference on Recommender Systems (RecSys ’17). ACM. DOI 10.1145/3109859.3109960. Cited 6 times. Cited 14 times.