Low-rank Bayesian neural networks with singular posterior geometry, reduced complexity scaling, and scalable uncertainty quantification.
We parameterize Bayesian neural network weights through low-rank factors $W = AB^\top$, inducing a singular posterior geometry concentrated on the rank-$r$ manifold. This reduces variational complexity from $O(mn)$ to $O(r(m+n))$ while maintaining competitive uncertainty-aware performance across MLPs, LSTMs, and Transformers.
Standard Bayesian neural networks often rely on fully factorized posteriors that ignore structured correlations between weights. We instead place mean-field variational distributions on low-rank factors $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{n \times r}$, recovering the weights as $W = AB^\top$.
Although the factors themselves can be mean-field, the induced posterior over $W$ becomes highly structured, introducing correlations through shared latent factors.
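A minimal sketch of this parameterization, assuming Gaussian mean-field factors with a softplus link for the standard deviations and the reparameterization trick for sampling (the function and variable names here are illustrative, not the paper's implementation):

```python
import numpy as np

def sample_lowrank_weight(mu_A, rho_A, mu_B, rho_B, rng):
    """Draw one weight matrix W = A B^T from mean-field Gaussian factors.

    Hypothetical variational parameters:
      mu_A, rho_A : (m, r) means and pre-softplus std-devs for A
      mu_B, rho_B : (n, r) means and pre-softplus std-devs for B
    """
    sigma_A = np.log1p(np.exp(rho_A))   # softplus keeps std-devs positive
    sigma_B = np.log1p(np.exp(rho_B))
    A = mu_A + sigma_A * rng.standard_normal(mu_A.shape)  # reparameterization
    B = mu_B + sigma_B * rng.standard_normal(mu_B.shape)
    return A @ B.T                      # rank at most r by construction

rng = np.random.default_rng(0)
m, n, r = 256, 256, 8
mu_A, rho_A = np.zeros((m, r)), np.full((m, r), -3.0)
mu_B, rho_B = np.zeros((n, r)), np.full((n, r), -3.0)
W = sample_lowrank_weight(mu_A, rho_A, mu_B, rho_B, rng)
print(W.shape, np.linalg.matrix_rank(W))
```

Even though every entry of $A$ and $B$ is sampled independently, each sampled $W$ lies on the rank-$r$ manifold, which is where the singular posterior geometry comes from.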
The induced posterior is singular with respect to Lebesgue measure on the ambient weight space: for $r < \min(m,n)$, the set of rank-$r$ matrices has measure zero in $\mathbb{R}^{m \times n}$.
Low-rank posteriors reduce the dominant complexity scaling when $r \ll \min(m,n)$.
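The scaling claim can be checked with the layer size used in the demo below ($256 \times 256$) and an assumed rank of $r = 8$; the constant factor for carrying both means and variances cancels in the ratio:

```python
m, n, r = 256, 256, 8
full_rank = m * n          # O(mn) variational parameters per weight matrix
low_rank = r * (m + n)     # O(r(m+n)) parameters for the factors A and B
print(full_rank, low_rank, full_rank / low_rank)  # 65536 4096 16.0
```

At this size the low-rank parameterization stores 16x fewer variational parameters, and the gap widens as $m, n$ grow with $r$ fixed.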
Shared latent factors induce non-trivial covariance structure between weights, unlike fully factorized posteriors.
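A quick Monte Carlo check of this claim, under the illustrative assumption of independent Gaussian factor entries with nonzero means (small sizes chosen only to keep the simulation cheap): entries of $W$ that share a row of $A$, such as $W_{11}$ and $W_{12}$, exhibit nonzero covariance, whereas a fully factorized posterior over $W$ would make them independent.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 4, 4, 2
samples = []
for _ in range(20000):
    A = 1.0 + 0.5 * rng.standard_normal((m, r))  # independent factor entries,
    B = 1.0 + 0.5 * rng.standard_normal((n, r))  # nonzero means
    samples.append((A @ B.T).ravel())
C = np.cov(np.array(samples).T)
# W[0,0] and W[0,1] share row 0 of A, so their covariance is nonzero
# (a fully factorized posterior on W would give exactly 0 here)
print(C[0, 1] > 0.1)
```

For these parameter values the covariance between $W_{11}$ and $W_{12}$ is $r \cdot \mathrm{Var}(A_{1k}) = 2 \times 0.25 = 0.5$ in expectation, so the printed check is comfortably true.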
Move the rank slider to compare the full-rank and low-rank parameterizations of a single $256 \times 256$ layer.
@inproceedings{toure2026singular,
title = {Singular Bayesian Neural Networks},
author = {Toure, Mame Diarra and Stephens, David A.},
booktitle = {International Conference on Machine Learning},
year = {2026}
}