When Scores Learn Geometry: Rate Separations under the Manifold Hypothesis
Abstract
Score-based methods, as used in diffusion models and Bayesian inverse problems, are often interpreted as learning the data distribution in the low-noise limit ($\sigma \to 0$). In this work, we propose an alternative perspective: their success arises from implicitly learning the data manifold rather than the full distribution. Our claim is based on a novel analysis of scores in the small-$\sigma$ regime that reveals a sharp separation of scales: information about the data manifold is $\Theta(\sigma^{-2})$ stronger than information about the distribution. We argue that this insight suggests a paradigm shift from the less practical goal of distributional learning to the more attainable task of geometric learning, which provably tolerates $O(\sigma^{-2})$ larger errors in score approximation. We illustrate this perspective through three consequences: i) in diffusion models, concentration on the data support can be achieved with a score error of $o(\sigma^{-2})$, whereas recovering the specific data distribution requires a much stricter $o(1)$ error; ii) more surprisingly, learning the uniform distribution on the manifold, an especially structured and useful object, is also $O(\sigma^{-2})$ easier; and iii) in Bayesian inverse problems, the maximum entropy prior is $O(\sigma^{-2})$ more robust to score errors than generic priors. Finally, we validate our theoretical findings with preliminary experiments on large-scale models, including Stable Diffusion.
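To see where the $\Theta(\sigma^{-2})$ separation comes from, the following small-$\sigma$ heuristic may help (a sketch for intuition only; the notation $\mathcal{M}$, $p_0$, $\pi_{\mathcal{M}}$ is ours, not the paper's). Assume the data distribution $p_0$ is supported on a smooth compact manifold $\mathcal{M} \subset \mathbb{R}^d$, let $p_\sigma = p_0 * \mathcal{N}(0, \sigma^2 I_d)$ denote its Gaussian smoothing, and let $\pi_{\mathcal{M}}(x)$ denote the projection of $x$ onto $\mathcal{M}$. For $x$ close to $\mathcal{M}$, a standard Laplace-type expansion of the score gives

$$\nabla_x \log p_\sigma(x) \;=\; \frac{\pi_{\mathcal{M}}(x) - x}{\sigma^2} \;+\; \nabla_{\mathcal{M}} \log p_0\big(\pi_{\mathcal{M}}(x)\big) \;+\; O(1).$$

The dominant normal term, carrying the $\sigma^{-2}$ factor, points toward the manifold and is determined by its geometry alone; the density $p_0$ enters only through the $O(1)$ tangential terms. A score estimate with error $o(\sigma^{-2})$ therefore preserves the manifold-seeking component (consequence i), while recovering $p_0$ requires resolving the $O(1)$ part, i.e., $o(1)$ error.

This scaling is easy to check numerically. Below is a minimal sanity check (our own toy construction, not the paper's experiments) for data on the unit circle with a non-uniform density; the score of the smoothed measure is computed by quadrature.

import numpy as np

def smoothed_score(x, sigma, thetas, weights):
    # Score of a density on the unit circle convolved with N(0, sigma^2 I):
    # grad log p_sigma(x) = (E_w[y] - x) / sigma^2, with posterior weights w.
    ys = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)    # points on the circle
    d2 = np.sum((x - ys) ** 2, axis=1)
    w = weights * np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))  # shifted for stability
    w /= w.sum()
    return (w @ ys - x) / sigma ** 2

thetas = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
weights = 1.0 + np.cos(thetas)       # non-uniform density p_0 on the circle
x = np.array([0.0, 1.1])             # query point at distance 0.1 from the manifold
e_n = x / np.linalg.norm(x)          # unit normal at the projection of x
e_t = np.array([-e_n[1], e_n[0]])    # unit tangent at the projection of x

for sigma in (0.1, 0.03, 0.01):
    s = smoothed_score(x, sigma, thetas, weights)
    # The normal component grows like 1/sigma^2; the tangential one stays O(1).
    print(f"sigma={sigma:.2f}  normal*sigma^2={(s @ e_n) * sigma**2:+.4f}  tangential={s @ e_t:+.4f}")

As $\sigma$ decreases, the rescaled normal component should approach $-0.1$ (minus the distance to the circle), i.e., the raw normal component scales as $\Theta(\sigma^{-2})$, while the tangential component, which reflects $\nabla_{\mathcal{M}} \log p_0$, remains of order one.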
Presenters
Xiang Li, Ph.D. student, Department of Computer Science, ETH Zurich
Brief Biography
Xiang Li is a Ph.D. student in computer science at ETH Zurich, supervised by Prof. Niao He. His research interests lie in the mathematical foundations of machine learning, with an emphasis on optimization theory and dynamical systems for the analysis of deep learning algorithms, including the training dynamics of optimizers and diffusion models.