ICML 2026 Workshop on
Structured Probabilistic Inference & Generative Modeling


Join the SPIGM Slack!


Location: TBD
Time: TBD
Link: TBD
The workshop focuses on the theory, methodology, and applications of structured probabilistic inference and generative modeling. Probabilistic inference addresses the problems of amortization, sampling, and integration of complex quantities in graphical models, while generative modeling captures the underlying probability distribution of a dataset. Beyond applications in computer vision, natural language processing, and speech recognition, probabilistic inference and generative modeling approaches are also widely used in the natural sciences, including physics, chemistry, molecular biology, and medicine. Despite these promising results, probabilistic methods face challenges when applied to highly structured data, which are ubiquitous in real-world settings. We aim to bring together experts from diverse backgrounds, in both academia and industry, to discuss the applications and challenges of probabilistic methods, with an emphasis on the challenge of encoding domain knowledge in these settings. We hope to provide a platform that fosters collaboration and discussion in the field of probabilistic methods. Topics include, but are not limited to (see the Call for Papers for more details):

  • Generative methods for graphs, 3D, time series, text, video, and other structured modalities, and probabilistic inference in these models for reward fine-tuning, alignment, acceleration, watermarking, etc.
  • Scaling and accelerating inference and generative models
  • Uncertainty quantification in AI systems
  • Sampling and variational inference
  • Intersection between probabilistic inference and LLMs, VLMs, VLAs, and foundation models
  • Applications in sampling, optimization, decision making, etc.
  • Applications and practical implementations of existing methods to areas in science
  • Empirical analysis comparing different architectures for a given data modality and application
This year, we also place a renewed focus on the relevance of probabilistic inference in the era of foundation models. We welcome submissions that explore the intersection of probabilistic inference and foundation models!

Speakers

Jona Ballé

NYU

Jennifer Listgarten

UC Berkeley

Christian Andersson Naesseth

University of Amsterdam

Mingyuan Zhou

Microsoft AI Superintelligence, University of Texas at Austin

Jiatao Gu

UPenn CIS / Apple MLR

Pilar Cossio

Flatiron Institute


Panel: Diffusion Language Models vs. Autoregressive Language Models?

Subham Sekhar Sahoo

Cornell University

Stefano Ermon

Stanford University

Heli Ben-Hamu

FAIR (Meta AI)

Jiaxin Shi

Meta Superintelligence Labs


Organizers

Jiajun He

University of Cambridge

Yuanqi Du

Cornell

Luhuan Wu

Flatiron Institute

Seul Lee

KAIST

Charlotte Bunne

EPFL

José Miguel Hernández-Lobato

University of Cambridge

Anima Anandkumar

Caltech