The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources Shayne Longpre*, Stella Biderman* et al.
[Paper][Website]
International Scientific Report on the Safety of Advanced AI Yoshua Bengio et al.
2024 AI Seoul Summit [Paper][Website]
The Foundation Model Transparency Index v1.1: May 2024 Rishi Bommasani*, Kevin Klyman* et al.
[Paper][Website][Data]
On the Societal Impact of Open Foundation Models Sayash Kapoor*, Rishi Bommasani* et al.
ICML 2024 (Oral, top 1.5% of papers) [Paper][Website]
A Safe Harbor for AI Evaluation and Red Teaming Shayne Longpre et al.
ICML 2024 (Oral, top 1.5% of papers) [Paper][Website]
Foundation Model Transparency Reports Rishi Bommasani et al.
[Paper]
HELM Lite: Lightweight and Broad Capabilities Evaluation Percy Liang, Yifan Mai, Josselin Somerville, Farzaan Kaiyom, Tony Lee, Rishi Bommasani
[Blog]
AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing Neel Guha*, Christie M. Lawrence* et al.
George Washington Law Review 2024 [Paper][Policy Brief]
Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes Connor Toups*, Rishi Bommasani*, Kathleen A. Creel, Sarah Bana, Dan Jurafsky, Percy Liang
NeurIPS 2023 [Paper][Code]
Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs Deepak Narayanan, Keshav Santhanam, Peter Henderson, Rishi Bommasani, Tony Lee, Percy Liang
NeurIPS 2023 [Paper]
Ecosystem Graphs: The Social Footprint of Foundation Models Rishi Bommasani, Dilara Soylu, Thomas I. Liao, Kathleen A. Creel, Percy Liang
[Paper][Website][Blog][Code]
AI Spring? Four Takeaways from Major Releases in Foundation Models Rishi Bommasani
[Blog]
Evaluation for Change Rishi Bommasani
ACL 2023 [Paper]
Trustworthy Social Bias Measurement Rishi Bommasani, Percy Liang
[Paper][Code]
Evaluating Human-Language Model Interaction Mina Lee et al.
TMLR 2023 [Paper][Code]
Holistic Evaluation of Language Models Percy Liang*, Rishi Bommasani*, Tony Lee* et al.
TMLR 2023 [Paper][Website][Blog][Code] Outstanding Paper
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang
NeurIPS 2022 [Paper][Code]
Emergent Abilities of Large Language Models Jason Wei, Yi Tay, Rishi Bommasani et al.
TMLR 2022 [Paper][Blog] Outstanding Survey Paper
Data Governance in the Age of Large-Scale Data-Driven Language Technology Yacine Jernite et al.
FAccT 2022 [Paper]
The Time Is Now to Develop Community Norms for the Release of Foundation Models Percy Liang, Rishi Bommasani, Kathleen A. Creel, Rob Reich
[Blog][Op-ed]
Reflections on Foundation Models Rishi Bommasani, Percy Liang
[Blog]
Mistral — A Journey towards Reproducible Language Model Training Siddharth Karamcheti*, Laurel Orr* et al.
[Blog][Code]
Generalized Optimal Linear Orders Rishi Bommasani
Committee: Claire Cardie (Chair), Robert Kleinberg
M.S. Thesis, Cornell University [arXiv][Thesis][Slides]
Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings Rishi Bommasani, Kelly Davis, Claire Cardie
ACL 2020 [Paper][Oral][Slides][BibTeX][Abstract]
Towards Private Synthetic Text Generation Rishi Bommasani, Steven Wu, Xanda Schofield
NeurIPS 2019 Machine Learning with Guarantees Workshop [Paper]
Long-Distance Dependencies Don’t Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations Rishi Bommasani
NeurIPS 2019 Context and Compositionality in Biological and Artificial Neural Systems Workshop [Paper][Poster][BibTeX][Abstract]
Long-Distance Dependencies Don’t Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations Rishi Bommasani
ACL 2019 [Paper][Poster][BibTeX][Abstract]
Towards Understanding Position Embeddings Rishi Bommasani, Claire Cardie
ACL 2019 BlackboxNLP Workshop [Paper][Poster]