publications

2024

  1. GoodIT
    Indian-BhED: A Dataset for Measuring India-Centric Biases in Large Language Models
    Khyati Khandelwal, Manuel Tonneau, Andrew M. Bean, Hannah Rose Kirk, and Scott A. Hale
    In Proceedings of the 2024 International Conference on Information Technology for Social Good, Sep 2024
  2. Do Large Language Models have Shared Weaknesses in Medical Question Answering?
    Andrew M. Bean, Karolina Korgul, Felix Krones, Robert McCraith, and Adam Mahdi
    Mar 2024
  3. The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
    Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew M. Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, He He, Bertie Vidgen, and Scott A. Hale
    Apr 2024
  4. LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages
    Andrew M. Bean, Simi Hellsten, Harry Mayne, Jabez Magomere, Ethan A. Chi, Ryan Chi, Scott A. Hale, and Hannah Rose Kirk
    Jun 2024
  5. Fine-Tuning Large Language Models with Human-inspired Learning Strategies in Medical Question Answering
    Yushi Yang, Andrew M. Bean, Robert McCraith, and Adam Mahdi
    Aug 2024

2023

  1. House of Lords
    Large Language Models - Written Evidence
    Andrew M. Bean, Hannah Rose Kirk, Jakob Mökander, Cailean Osborne, Huw Roberts, and Marta Ziosi
    Oct 2023
  2. The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
    Hannah Rose Kirk, Andrew M. Bean, Bertie Vidgen, Paul Röttger, and Scott A. Hale
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Dec 2023