Salon des Refusés

We welcome self-nominations of rejected submissions from all NeurIPS tracks. This dedicated poster session celebrates papers that almost made it through the review process, especially timely now that AI progress is moving at such a rapid pace.

About the Salon des Refusés

One of the EurIPS poster sessions is a dedicated Salon des Refusés featuring papers that almost made it through the NeurIPS review process. Posters will be selected on a first-come, first-served basis following a light review by the EurIPS committee; priority will be given to papers with positive reviews and metareviews.

Attendees of NeurIPS 2016 may recall the original Salon des Refusés, inspired by the 1863 exhibition of works rejected from the Paris Salon. We at EurIPS are happy to bring back this tradition together with the NeurIPS 2025 team.

Non-Archival Venue

The Salon des Refusés is non-archival. Authors retain full rights to submit their work to other venues, including archival conferences and journals.

Rolling Admission

Papers will be accepted on a first-come, first-served basis following a light review by the EurIPS committee, until the poster session is filled or the deadline of Friday, November 14, 2025 (AOE).

Selection Criteria

Priority will be given to papers with:

  • Positive reviews from NeurIPS reviewers
  • Positive metareviews indicating the paper almost made it through

Registration Requirement

Authors whose papers are accepted for presentation in the Salon des Refusés must register for the conference within 10 days of receiving the acceptance notification.

Submission Guidelines

To submit a paper for consideration at the Salon des Refusés, please self-nominate by providing:

Recommended Option

Link to your paper's OpenReview page

You can choose to make your reviews public in OpenReview. This option:

  • Makes it easier for us to evaluate your submission
  • Improves your paper's visibility to EurIPS participants
  • Facilitates better discussions at the conference

Alternative Option

PDF file containing the reviews

If you do not wish to make your reviews and paper publicly available, you can upload a PDF file containing the reviews (use your browser's Print to PDF function on the OpenReview paper page).

Deadline: Friday, November 14, 2025 (AOE)
Submissions Closed: The conference is sold out.

48 Salon des Refusés Posters

Multi-Armed Bandits with Minimum Aggregated Revenue Constraints

Ahmed Ben Yahmed, Hafedh El Ferchichi, Marc Abeille, Vianney Perchet

Presented by: Ahmed Ben Yahmed | Poster Stand #1 | Track: Regular | Topics: Reinforcement Learning, Sequential Decision Making, Online Learning

Fairness for the people, by the people: Minority Collective Action

Omri Ben-Dov, Samira Samadi, Amartya Sanyal, Alexandru Tifrea

Presented by: Omri Ben-Dov | Poster Stand #2 | Track: Regular | Topics: Fairness and Ethics

Can You Hear Me Now? A Benchmark for Long-Range Graph Propagation

Luca Miglior, Matteo Tolloso, Alessio Gravina, Davide Bacciu

Presented by: Luca Miglior | Poster Stand #3 | Track: Datasets and Benchmarks | Topics: Graph Neural Networks, Benchmarking Studies

Benchmarking Optimizers for Large Language Model Pretraining

Andrei Semenov, Matteo Pagliardini, Martin Jaggi

Presented by: Andrei Semenov | Poster Stand #4 | Track: Datasets and Benchmarks | Topics: Large Language Models, Optimization Theory, Benchmarking Studies

EMMA: Concept Erasure Benchmark with Comprehensive Semantic Metrics and Diverse Categories

Lu Wei, Yuta Nakashima, Noa Garcia

Presented by: Lu Wei | Poster Stand #5 | Track: Datasets and Benchmarks | Topics: Machine Unlearning, Benchmarking Studies, Fairness and Ethics

Gradient-based Learning of Simple yet Accurate Rule-Based Classifiers

Javier Fumanal-Idocin, Javier Andreu-Perez, Raquel Fernandez-Peralta

Presented by: Javier Fumanal-Idocin | Poster Stand #6 | Track: Regular | Topics: Model Interpretability, Optimization Theory, Benchmarking Studies

Discontinuity-Preserving Image Super-Resolution via MAP-Regularized One-Step Diffusion

Sanchar Palit, Subhasis Chaudhuri, Biplab Banerjee

Presented by: Sanchar Palit | Poster Stand #7 | Track: Regular | Topics: Computer Vision, Generative Models, Bayesian Methods

Federated ADMM from Bayesian Duality

Thomas Möllenhoff, Siddharth Swaroop, Finale Doshi-Velez, Mohammad Emtiyaz Khan

Presented by: Thomas Möllenhoff | Poster Stand #8 | Track: Regular | Topics: Federated Learning, Bayesian Methods, Optimization Theory

The Time for Sampling Is Now: Charting a New Course for Bayesian Deep Learning

Emanuel Sommer, David Rügamer

Presented by: David Rügamer | Poster Stand #9 | Track: Position paper | Topics: Bayesian Methods, Uncertainty Quantification

Position: Olfaction Standardization is Essential for the Advancement of Embodied Artificial Intelligence

Kordel K. France, Rohith Peddi, Nik Dennler, Ovidiu Daescu

Presented by: Rohith Peddi | Poster Stand #10 | Track: Position paper | Topics: Embodied AI, Multimodal Learning, Benchmarking Studies

JAPAN: Joint Adaptive Prediction Areas with Normalising Flow

Eshant English, Christoph Lippert

Presented by: Eshant English | Poster Stand #11 | Track: Regular | Topics: Uncertainty Quantification, Generative Models, Model Calibration

TabStruct: Measuring Structural Fidelity of Tabular Data

Xiangjian Jiang, Nikola Simidjievski, Mateja Jamnik

Presented by: Xiangjian Jiang | Poster Stand #12 | Track: Datasets and Benchmarks | Topics: Tabular Data Analysis, Benchmarking Studies, Evaluation Methods

CausalProfiler: Generating Synthetic Benchmarks for Rigorous and Tra...

Panayiotis Panayiotou, Audrey Poinsot, Alessandro Leite, Nicolas Chesneau, Marc Schoenauer, Özgür Şimşek

Presented by: Audrey Poinsot | Poster Stand #13 | Track: Datasets and Benchmarks | Topics: Causal Inference, Benchmarking Studies, Evaluation Methods

Explicit Density Approximation for Neural Implicit Samplers Using a Bernstein-Based Convex Divergence

José Manuel de Frutos, Manuel A. Vázquez, Pablo M. Olmos, Joaquin Miguez

Presented by: José Manuel de Frutos Porras | Poster Stand #14 | Track: Regular | Topics: Generative Models, Optimization Theory

ASIDE: Architectural Separation of Instructions and Data in Language Models

Egor Zverev, Evgenii Kortukov, Alexander Panfilov, Alexandra Volkova, Soroush Tabesh, Sebastian Lapuschkin, Wojciech Samek, Christoph H. Lampert

Presented by: Egor Zverev | Poster Stand #15 | Track: Regular | Topics: Large Language Models, Adversarial Robustness, Model Interpretability

Breaking Rank Bottlenecks in Knowledge Graph Completion

Samy Badreddine, Emile van Krieken, Luciano Serafini

Presented by: Emile van Krieken | Poster Stand #16 | Track: Regular | Topics: Knowledge Graphs, Model Calibration, Evaluation Methods

Scalable Utility-Aware Multiclass Calibration

Mahmoud Hegazy, Michael I. Jordan, Aymeric Dieuleveut

Presented by: Mahmoud Hegazy | Poster Stand #17 | Track: Regular | Topics: Model Calibration, Evaluation Methods

There are no Champions in Long-Term Time Series Forecasting

Lorenzo Brigato, Rafael Morand, Knut Joar Strømmen, Maria Panagiotou, Markus Schmidt, Stavroula Mougiakakou

Presented by: Lorenzo Brigato | Poster Stand #18 | Track: Position paper | Topics: Time Series Forecasting, Evaluation Methods, Benchmarking Studies

SATA-BENCH: Select All That Apply Benchmark for Multiple Choice Questions

Weijie Xu, Shixian Cui, Xi Fang, Chi Xue, Stephanie Eckman, Chandan K. Reddy

Presented by: Weijie Xu | Poster Stand #19 | Track: Datasets and Benchmarks | Topics: Large Language Models, Benchmarking Studies, Evaluation Methods

DecompSR: A dataset for decomposed analyses of compositional multihop spatial reasoning

Lachlan McPheat, Navdeep Kaur, Robert E. Blackwell, Alessandra Russo, Anthony G Cohn, Pranava Madhyastha

Presented by: Pranava Madhyastha | Poster Stand #20 | Track: Datasets and Benchmarks | Topics: Spatial Reasoning, Large Language Models, Benchmarking Studies

MultiHal: MultiLingual Dataset for Knowledge-Graph Grounded Evaluation of LLM Hallucinations

Ernests Lavrinovics, Russa Biswas, Katja Hose, Johannes Bjerva

Presented by: Ernests Lavrinovics | Poster Stand #22 | Track: Datasets and Benchmarks | Topics: Large Language Models, Knowledge Graphs, Benchmarking Studies

Rethinking Knowledge Distillation: A Data Dependent Regulariser With an Asymmetric Payoff

Israel Mason-Williams, Gabryel Mason-Williams, Helen Yannakoudakis

Presented by: Israel Mason-Williams | Poster Stand #23 | Track: Regular | Topics: Model Compression, Optimization Theory, Fairness and Ethics

Deep Active Inference Agents for Delayed and Long-Horizon Environments

Yavar Taheri Yeganeh, Mohsen A. Jafari, Andrea Matta

Presented by: Yavar Taheri Yeganeh | Poster Stand #24 | Track: Regular | Topics: Reinforcement Learning, Generative Models, Sequential Decision Making

Reusing Trajectories in Policy Gradients Enables Fast Convergence

Alessandro Montenegro, Federico Mansutti, Marco Mussi, Matteo Papini, Alberto Maria Metelli

Presented by: Matteo Papini | Poster Stand #25 | Track: Regular | Topics: Reinforcement Learning, Sequential Decision Making, Optimization Theory

Physics-Learning AI Datamodel (PLAID) datasets: a collection of physics simulations for machine learning

Fabien Casenave, Xavier Roynard, Brian Staber, William PIAT, Michele Alessandro Bucci, Nissrine Akkari, Abbas Kabalan, Xuan Minh Vuong Nguyen, Luca Saverio, Raphael Carpintero Perez, Anthony Kalaydjian, Samy Fouché, Thierry Gonon, Ghassan Najjar, Emmanuel Menier, Matthieu Nastorg, Giovanni Catalani, Christian Rey

Presented by: Fabien Casenave | Poster Stand #26 | Track: Datasets and Benchmarks | Topics: Benchmarking Studies, Dynamical Systems, Evaluation Methods

Beware! The AI Act Can Also Apply to Your AI Research Practices

Alina Wernick, Kristof Meding

Presented by: Alina Wernick | Poster Stand #27 | Track: Position paper | Topics: Fairness and Ethics

Deep Actor-Critics with Tight Risk Certificates

Bahareh Tasdighi, Manuel Haussmann, Yi-Shan Wu, Andres R Masegosa, Melih Kandemir

Presented by: Manuel Haussmann | Poster Stand #28 | Track: Regular | Topics: Reinforcement Learning, Uncertainty Quantification, Bayesian Methods

BN-Pool: Bayesian Nonparametric Graph Pooling

Daniele Castellana, Filippo Maria Bianchi

Presented by: Filippo Maria Bianchi | Poster Stand #29 | Track: Regular | Topics: Graph Neural Networks, Bayesian Methods, Generative Models

RelationalFactQA: A Benchmark for Evaluating Tabular Fact Retrieval from Large Language Models

Dario Satriani, Enzo Veltri, Donatello Santoro, Paolo Papotti

Presented by: Dario Satriani | Poster Stand #30 | Track: Datasets and Benchmarks | Topics: Large Language Models, Benchmarking Studies, Tabular Data Analysis

Detecting Invariant Manifolds in ReLU-Based RNNs

Lukas Eisenmann, Alena Brändle, Zahra Monfared, Daniel Durstewitz

Presented by: Lukas Eisenmann | Poster Stand #31 | Track: Regular | Topics: Dynamical Systems, Model Interpretability, Biosignal Processing

Online Learning and Unlearning

Yaxi Hu, Bernhard Schölkopf, Amartya Sanyal

Presented by: Yaxi Hu | Poster Stand #32 | Track: Regular | Topics: Online Learning, Machine Unlearning, Sequential Decision Making

Robust Counterfactual Inference in Markov Decision Processes

Jessica Lally, Milad Kazemi, Nicola Paoletti

Presented by: Jessica Lally | Poster Stand #33 | Track: Regular | Topics: Causal Inference, Reinforcement Learning, Uncertainty Quantification

Discrete Diffusion-Based Decoding of Digital Communication Signals ...

Fadli Damara, Peter Jung, Shinichi Nakajima

Presented by: Fadli Damara | Poster Stand #34 | Track: Regular | Topics: Generative Models, Signal Processing

What is Adversarial Training for Diffusion Models?

Maria Rosaria Briglia, Mujtaba Hussain Mirza, Giuseppe Lisanti, Iacopo Masi

Presented by: Maria Rosaria Briglia | Poster Stand #35 | Track: Regular | Topics: Adversarial Robustness, Generative Models, Computer Vision

Explanation Design in Strategic Learning: Sufficient Explanations that Induce Non-harmful Responses

Kiet Q. H. Vo, Siu Lun Chau, Masahiro Kato, Yixin Wang, Krikamol Muandet

Presented by: Kiet Vo | Poster Stand #36 | Track: Regular | Topics: Model Interpretability, Sequential Decision Making

Implicit Inversion turns CLIP into a Decoder

Antonio D'Orazio, Maria Rosaria Briglia, Donato Crisostomi, Dario Loi, Emanuele Rodolà, Iacopo Masi

Presented by: Antonio D'Orazio | Poster Stand #37 | Track: Regular | Topics: Generative Models, Multimodal Learning, Computer Vision

Fodor and Pylyshyn’s Legacy – Still No Human-like Systematic Compositionality in Neural Networks

Tim Tobiasch, Moritz Willig, Antonia Wüst, Lukas Helff, Wolfgang Stammer, Constantin A. Rothkopf, Kristian Kersting

Presented by: Tim Woydt | Poster Stand #38 | Track: Position paper | Topics: Natural Language Processing, Model Interpretability, Evaluation Methods

Can LLMs Match the Observations of Systematic Reviews?

Christopher Polzak, Alejandro Lozano, Min Woo Sun, James Burgess, Yuhui Zhang, Kevin Wu, Serena Yeung-Levy

Presented by: Christopher Polzak | Poster Stand #39 | Track: Datasets and Benchmarks | Topics: Large Language Models, Benchmarking Studies, Natural Language Processing

RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors

Hicham Eddoubi, Jonas Ricker, Federico Cocchi, Lorenzo Baraldi, Angelo Sotgiu, Maura Pintor, Marcella Cornia, Lorenzo Baraldi, Asja Fischer, Rita Cucchiara, Battista Biggio

Presented by: Maura Pintor | Poster Stand #40 | Track: Datasets and Benchmarks | Topics: Adversarial Robustness, Computer Vision, Benchmarking Studies

Position: We Fool Ourselves Thinking the ‘X’ in XAI is Useful

Rolf Hvidtfeldt, Mohammad Naser Sabet Jahromi, Àlex Pujol Vidal, Thomas Gammeltoft-Hansen, Thomas Ploug, Jeppe Agger Nielsen, Stine Nørgaard Christensen, Thomas B. Moeslund

Presented by: Rolf Hvidtfeldt | Poster Stand #41 | Track: Position paper | Topics: Model Interpretability, Fairness and Ethics

Transducing Language Models

Vésteinn Snæbjarnarson, Samuel Kiegeland, Tianyu Liu, Reda Boumasmoud, Tim Vieira, Ryan Cotterell

Presented by: Vésteinn Snæbjarnarson | Poster Stand #42 | Track: Regular | Topics: Natural Language Processing, Large Language Models, Generative Models

On the Calibration of survival models with competing risks

Julie Alberge, Tristan Haugomat, Gaël Varoquaux, Judith Abécassis

Presented by: Julie Alberge | Poster Stand #43 | Track: Regular | Topics: Model Calibration, Uncertainty Quantification, Time Series Forecasting

LLMail-Inject: A Dataset from a Realistic Adaptive Prompt Injection Challenge

Sahar Abdelnabi, Aideen Fay, Ahmed Salem, Egor Zverev, Kai-Chieh Liao, Chi-Huang Liu, Chun Chih Kuo, Jannis Weigend, Danyael Manlangit, Alex Apostolov, Haris Umair, João Donato, Masayuki Kawakita, Athar Mahboob, Tran Huu Bach, Tsun-Han Chiang, Myeongjin Cho, Hajin Choi, Byeonghyeon Kim, Hyeonjin Lee et al. (5 additional authors not shown)

Presented by: Sahar Abdelnabi | Poster Stand #44 | Track: Datasets and Benchmarks | Topics: Large Language Models, Adversarial Robustness, Evaluation Methods

Benchmarking Diversity in Text-to-Image Models via Attribute-Conditional Human Evaluation

Isabela Albuquerque, Ira Ktena, Olivia Wiles, Ivana Kajic, Amal Rannen-Triki, Cristina Nader Vasconcelos, Aida Nematzadeh

Presented by: Isabela Albuquerque | Poster Stand #45 | Track: Datasets and Benchmarks | Topics: Benchmarking Studies, Generative Models, Multimodal Learning

BioX-Bridge: Model Bridging for Unsupervised Cross-Modal Knowledge Transfer across Biosignals

Chenqi Li, Yu Liu, Timothy Denison, Tingting Zhu

Presented by: Chenqi Li | Poster Stand #46 | Track: Regular | Topics: Biosignal Processing, Multimodal Learning, Model Compression

Aleph-Alpha-GermanWeb: Improving German-language LLM pre-training with model-based data curation and synthetic data generation

Thomas F Burns, Letitia Parcalabescu, Stephan Waeldchen, Michael Barlow, Gregor Ziegltrum, Volker Stampa, Bastian Harren, Björn Deiseroth

Presented by: Tom Burns | Poster Stand #47 | Track: Datasets and Benchmarks | Topics: Large Language Models, Generative Models, Natural Language Processing

Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks

Andrii Kliachkin, Jana Lepšová, Gilles Bareilles, Jakub Marecek

Presented by: Gilles Bareilles | Poster Stand #48 | Track: Datasets and Benchmarks | Topics: Fairness and Ethics, Benchmarking Studies, Optimization Theory

Fedivertex: a Graph Dataset based on Decentralized Social Networks for Trustworthy Machine Learning

Marc Damie, Edwige Cyffers

Presented by: Edwige Cyffers | Poster Stand #49 | Track: Datasets and Benchmarks | Topics: Federated Learning, Graph Neural Networks, Benchmarking Studies