✨ LENS 2026 · in conjunction with WACV
WACV 2026 · Tucson, Arizona · March 6–10 (workshop day TBA)

LENS: Learning & Exploitation of Latent Space Geometries

A full-day workshop on geometric foundations of learned representations and their impact on vision & AI.

About LENS

LENS brings together researchers studying the geometry of latent representations—their manifolds, Riemannian structures, intrinsic dimensions, and implications for model design and evaluation. We aim to bridge advances in geometric learning with practical computer vision applications, fostering dialogue between theory and real-world deployment.

We welcome contributions that deepen our understanding of latent spaces (e.g., curvature, geodesics, topology), propose geometry-aware architectures and objectives, or demonstrate how latent geometry can improve robustness, generalization, fairness, privacy, and efficiency in real-world vision systems.

Quick Facts

  • 📅 Dates: March 6–10, 2026 (exact workshop day TBA)
  • 📍 Location: Tucson, Arizona, USA (with WACV 2026)
  • 📝 Proceedings: WACV Workshops (subject to WACV policy)
  • 🧭 Format: Invited talks + contributed posters (spotlights TBA)

Motivation

Recent breakthroughs in machine learning and artificial intelligence—spanning images, video, text, and other complex data—can be traced to models that exploit the hidden structure of real-world data. Although an image is formally a point in the immense space of all pixel arrays, the subset of natural scenes occupies a relatively low-dimensional, smooth manifold. This manifold hypothesis has far-reaching implications: if we can model and exploit the geometry of such manifolds, we can achieve more efficient, robust, and interpretable learning and inference.

Early manifold learning methods of the 1990s–2000s (Isomap, LLE, MDS) pioneered this view but were limited in scalability and expressiveness. Today, large datasets, powerful computation, and deep generative models—variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion and flow-based models—offer a new opportunity: to learn manifolds directly in latent spaces where data are compactly represented. These advances invite a fresh synthesis of geometry, topology, and modern AI, enabling principled exploration of latent representations, their Riemannian structure, and their role in next-generation learning algorithms.
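The manifold hypothesis can be seen in a few lines of code. The sketch below (an illustration, assuming scikit-learn is installed; not part of the call for papers) embeds the classic Swiss roll—a two-dimensional surface curled into three-dimensional ambient space—and uses Isomap, one of the early manifold learning methods mentioned above, to recover a flat two-dimensional parametrization from geodesic distances:

```python
# Illustration of the manifold hypothesis: data that live in 3-D ambient
# space but on an intrinsically 2-D manifold (the Swiss roll), unrolled
# by Isomap via neighborhood-graph geodesic distances.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# X has ambient dimension 3; t is the underlying roll parameter.
X, t = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Isomap recovers 2-D intrinsic coordinates from the 3-D point cloud.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# The first recovered coordinate should track the roll parameter t
# almost monotonically; a high correlation indicates the latent
# geometry was recovered despite the nonlinear embedding.
corr = abs(np.corrcoef(emb[:, 0], t)[0, 1])
print(X.shape, emb.shape, round(corr, 2))
```

The same question—what low-dimensional, geometrically structured representation underlies high-dimensional observations—is what the workshop revisits with modern deep generative models in place of classical spectral embeddings.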

Topics of Interest

  • Learning latent representations and their Riemannian geometry
  • Manifold learning with deep neural networks
  • Manifold hypothesis for image and multimodal data
  • Geometry-aware architectures and training objectives
  • Intrinsic dimension, dimension reduction, and encoding
  • Latent geometry for robustness, generalization, fairness, and privacy
  • Geometric evaluation metrics, diagnostics, and interpretability
  • Applications to computer vision: recognition, segmentation, and generative models

Important Dates (Tentative)

  • Paper submission deadline: TBA (Nov/Dec 2025)
  • Notification to authors: TBA (Jan 2026)
  • Camera-ready due: TBA (late Jan 2026)
  • Workshop day at WACV: one day during March 6–10, 2026

Exact dates will be finalized in coordination with the WACV organizers; all deadlines are 23:59 Anywhere on Earth (AoE) unless otherwise noted.

Invited Speakers

Søren Hauberg

Technical University of Denmark

Virtual Talk

Laurent Younes

Johns Hopkins University

Baba C. Vemuri

University of Florida

Organizing Committee

  • Anuj Srivastava (Florida State University)
  • Sudeep Sarkar (University of South Florida)
  • Pavan Turaga (Arizona State University)