- Critical norm blow-up rates for the energy supercritical nonlinear heat equation We prove the first classification of blow-up rates of the critical norm for solutions of the energy supercritical nonlinear heat equation, without any assumptions such as radial symmetry or sign conditions. Moreover, the blow-up rates we obtain are optimal for solutions that blow up with bounded L^{n(p-1)/2,∞}(ℝ^n)-norm up to the blow-up time. We establish these results by proving quantitative estimates for the energy supercritical nonlinear heat equation with a robust new strategy based on a quantitative ε-regularity criterion averaged over certain comparable time scales. With this in hand, we then produce the quantitative estimates using arguments inspired by Palasek [31] and Tao [38] involving quantitative Carleman inequalities applied to the Navier-Stokes equations. Our work shows that an energy structure is not essential for establishing blow-up rates of the critical norm for parabolic problems with a scaling symmetry. This paves the way for establishing such critical norm blow-up rates for other nonlinear parabolic equations. 3 authors · Dec 13, 2024
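For context, a standard scaling check (not stated in the abstract) explains why L^{n(p-1)/2} is the critical norm here: for the model equation ∂_t u = Δu + |u|^{p-1}u, rescaled solutions remain solutions, and exactly one Lebesgue exponent makes the norm scale-invariant:

```latex
u_\lambda(x,t) = \lambda^{\frac{2}{p-1}}\, u(\lambda x, \lambda^2 t), \qquad
\|u_\lambda(\cdot,0)\|_{L^q(\mathbb{R}^n)}
  = \lambda^{\frac{2}{p-1} - \frac{n}{q}}\, \|u(\cdot,0)\|_{L^q(\mathbb{R}^n)},
```

which is invariant precisely when 2/(p-1) = n/q, i.e. q = n(p-1)/2; the abstract's L^{n(p-1)/2,∞} is the weak (Lorentz) version of this space.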
- Critical Points and Convergence Analysis of Generative Deep Linear Networks Trained with Bures-Wasserstein Loss We consider a deep matrix factorization model of covariance matrices trained with the Bures-Wasserstein distance. While recent works have made important advances in the study of the optimization problem for overparametrized low-rank matrix approximation, much emphasis has been placed on discriminative settings and the square loss. In contrast, our model considers another interesting type of loss and connects with the generative setting. We characterize the critical points and minimizers of the Bures-Wasserstein distance over the space of rank-bounded matrices. For low-rank matrices the Hessian of this loss can theoretically blow up, which creates challenges for analyzing the convergence of optimization methods. We establish convergence results for gradient flow using a smooth perturbative version of the loss and convergence results for finite step size gradient descent under certain assumptions on the initial weights. 4 authors · Mar 6, 2023
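The Bures-Wasserstein distance between covariance matrices has a closed form, BW²(Σ₁, Σ₂) = tr Σ₁ + tr Σ₂ − 2 tr((Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2}). A minimal NumPy sketch of this loss (an illustration only, not the paper's code; the deep factorization and rank constraints studied in the paper are omitted):

```python
import numpy as np

def psd_sqrt(m):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def bures_wasserstein_sq(s1, s2):
    """Squared Bures-Wasserstein distance between PSD covariance matrices:
    BW^2 = tr(s1) + tr(s2) - 2 tr((s1^{1/2} s2 s1^{1/2})^{1/2})."""
    h = psd_sqrt(s1)
    return np.trace(s1) + np.trace(s2) - 2.0 * np.trace(psd_sqrt(h @ s2 @ h))
```

For commuting (e.g. diagonal) matrices this reduces to the squared Euclidean distance between the matrix square roots, which gives a quick sanity check.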
- Closed Estimates of Leray Projected Transport Noise and Strong Solutions of the Stochastic Euler Equations We consider the incompressible Euler and Navier-Stokes equations on the three dimensional torus, in velocity form, perturbed by a transport or transport-stretching Stratonovich noise. Closed control of the noise contributions in energy estimates is demonstrated, for any positive integer ordered Sobolev space and the equivalent Stokes space; difficulty arises due to the presence of the Leray projector disrupting cancellation of the top order derivative. This is particularly pertinent in the case of a transport noise without stretching, where the vorticity form cannot be used. As a consequence we obtain, for the first time, the existence of a local strong solution to the corresponding stochastic Euler equation. Furthermore, smooth solutions are shown to exist until blow-up in L^1([0,T]; W^{1,∞}). 1 author · Jul 1, 2025
- Reinforcement Learning in Low-Rank MDPs with Density Features MDPs with low-rank transitions -- that is, where the transition matrix can be factored into the product of two matrices, left and right -- form a highly representative structure that enables tractable learning. The left matrix enables expressive function approximation for value-based learning and has been studied extensively. In this work, we instead investigate sample-efficient learning with density features, i.e., the right matrix, which induce powerful models for state-occupancy distributions. This setting not only sheds light on leveraging unsupervised learning in RL, but also enables plug-in solutions for convex RL. In the offline setting, we propose an algorithm for off-policy estimation of occupancies that can handle non-exploratory data. Using this as a subroutine, we further devise an online algorithm that constructs exploratory data distributions in a level-by-level manner. As a central technical challenge, the additive error of occupancy estimation is incompatible with the multiplicative definition of data coverage. In the absence of strong assumptions like reachability, this incompatibility easily leads to exponential error blow-up, which we overcome via novel technical tools. Our results also readily extend to the representation learning setting, when the density features are unknown and must be learned from an exponentially large candidate set. 3 authors · Feb 4, 2023
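The low-rank structure can be made concrete with a toy sketch (hypothetical sizes and names, not the paper's algorithm): if P(s′|s,a) = Σ_k left[s,a,k]·right[k,s′], then every state-occupancy distribution propagated under any policy stays in the span of the d rows of `right` — the "density features" the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 6, 2, 3  # states, actions, rank (illustrative sizes)

# Low-rank transition: P[s, a, s'] = sum_k left[s, a, k] * right[k, s'].
# Rows of `right` are the density features: d distributions over next states.
left = rng.dirichlet(np.ones(d), size=(S, A))   # (S, A, d), rows sum to 1
right = rng.dirichlet(np.ones(S), size=d)       # (d, S), rows sum to 1
P = np.einsum('sak,kt->sat', left, right)       # transition kernel (S, A, S)

policy = np.full((S, A), 1.0 / A)               # uniform policy
occ = np.full(S, 1.0 / S)                       # initial state distribution

# Occupancy update: d_{t+1}(s') = sum_{s,a} d_t(s) * pi(a|s) * P(s'|s,a).
for _ in range(5):
    occ = np.einsum('s,sa,sat->t', occ, policy, P)
```

After one step, `occ` is a convex-style combination of the rows of `right`, so occupancy estimation only needs to identify d coefficients rather than S probabilities.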
- Existence-Uniqueness Theory and Small-Data Decay for a Reaction-Diffusion Model of Wildfire Spread I examine some analytical properties of a nonlinear reaction-diffusion system that has been used to model the propagation of a wildfire. I establish global-in-time existence and uniqueness of bounded mild solutions to the Cauchy problem for this system given bounded initial data. In particular, this shows that the model does not allow for thermal blow-up. If the initial temperature and fuel density also satisfy certain integrability conditions, the L^2-norms of these global solutions are uniformly bounded in time. Additionally, I use a bootstrap argument to show that small initial temperatures give rise to solutions that decay to zero as time goes to infinity, proving the existence of initial states that do not develop into travelling combustion waves. 1 author · Jun 1, 2024
- Poincaré ResNet This paper introduces an end-to-end residual network that operates entirely on the Poincaré ball model of hyperbolic space. Hyperbolic learning has recently shown great potential for visual understanding, but is currently only performed in the penultimate layer(s) of deep networks. All visual representations are still learned through standard Euclidean networks. In this paper we investigate how to learn hyperbolic representations of visual data directly at the pixel level. We propose Poincaré ResNet, a hyperbolic counterpart of the celebrated residual network, starting from Poincaré 2D convolutions up to Poincaré residual connections. We identify three roadblocks for training convolutional networks entirely in hyperbolic space and propose a solution for each: (i) Current hyperbolic network initializations collapse to the origin, limiting their applicability in deeper networks. We provide an identity-based initialization that preserves norms over many layers. (ii) Residual networks rely heavily on batch normalization, which comes with expensive Fréchet mean calculations in hyperbolic space. We introduce Poincaré midpoint batch normalization as a faster and equally effective alternative. (iii) Due to the many intermediate operations in Poincaré layers, we lastly find that the computation graphs of deep learning libraries blow up, limiting our ability to train deep hyperbolic networks. We provide manual backward derivations of core hyperbolic operations to keep the computation graphs manageable. 3 authors · Mar 24, 2023
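The building blocks of such networks are the standard Poincaré ball maps between Euclidean tangent vectors and ball points. A minimal NumPy sketch of the exponential/logarithmic maps at the origin (textbook formulas, not the paper's implementation):

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-15):
    """Exponential map at the origin of the Poincaré ball with curvature -c:
    exp_0(v) = tanh(sqrt(c)*||v||) * v / (sqrt(c)*||v||).
    Maps a Euclidean tangent vector into the open ball of radius 1/sqrt(c)."""
    sqrt_c = np.sqrt(c)
    norm = max(np.linalg.norm(v), eps)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c=1.0, eps=1e-15):
    """Inverse of expmap0: maps a ball point back to the tangent space at 0."""
    sqrt_c = np.sqrt(c)
    norm = max(np.linalg.norm(x), eps)
    return np.arctanh(min(sqrt_c * norm, 1.0 - 1e-7)) * x / (sqrt_c * norm)
```

The origin-collapse issue the paper mentions is visible here: tanh saturates, so large tangent vectors pile up near the ball boundary while repeated small updates stay near 0, which is what motivates their norm-preserving identity initialization.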
- Determinantal ideals of secant varieties Using Hilbert schemes of points, we establish a number of results for a smooth projective variety X in a sufficiently ample embedding. If X is a curve or a surface, we show that the ideals of higher secant varieties are determinantally presented, and we prove the same for the first secant variety if X has arbitrary dimension. This completely settles a conjecture of Eisenbud-Koh-Stillman for curves and partially resolves a conjecture of Sidman-Smith in higher dimensions. If X is a curve or a surface we also prove that the corresponding embedding of the Hilbert scheme of points X^{[d]} into the Grassmannian is projectively normal. Finally, if X is an arbitrary projective scheme in a sufficiently ample embedding, then we demonstrate that its homogeneous ideal is generated by quadrics of rank three, confirming a conjecture of Han-Lee-Moon-Park. Along the way, we check that the Hilbert scheme of three points on a smooth variety is the blow-up of the symmetric product along the big diagonal. 2 authors · Oct 2, 2025
- Federated Linear Contextual Bandits with User-level Differential Privacy This paper studies federated linear contextual bandits under the notion of user-level differential privacy (DP). We first introduce a unified federated bandits framework that can accommodate various definitions of DP in the sequential decision-making setting. We then formally introduce user-level central DP (CDP) and local DP (LDP) in the federated bandits framework, and investigate the fundamental trade-offs between the learning regrets and the corresponding DP guarantees in a federated linear contextual bandits model. For CDP, we propose a federated algorithm termed ROBIN and show that it is near-optimal in terms of the number of clients M and the privacy budget ε by deriving nearly matching upper and lower regret bounds when user-level DP is satisfied. For LDP, we obtain several lower bounds, indicating that learning under user-level (ε, δ)-LDP must suffer a regret blow-up factor of at least min{1/ε, M} under different conditions. 6 authors · Jun 8, 2023
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning The offline reinforcement learning (RL) paradigm provides a general recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data. While policy constraints, conservatism, and other methods for mitigating distributional shifts have made offline reinforcement learning more effective, the continuous action setting often necessitates various approximations for applying these techniques. Many of these challenges are greatly alleviated in discrete action settings, where offline RL constraints and regularizers can often be computed more precisely or even exactly. In this paper, we propose an adaptive scheme for action quantization. We use a VQ-VAE to learn state-conditioned action quantization, avoiding the exponential blowup that comes with naïve discretization of the action space. We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme. We further validate our approach on a set of challenging long-horizon complex robotic manipulation tasks in the Robomimic environment, where our discretized offline RL algorithms are able to improve upon their continuous counterparts by 2-3x. Our project page is at https://saqrl.github.io/ 6 authors · Oct 18, 2023
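The core VQ step is a nearest-neighbor lookup against a learned codebook. A toy NumPy sketch of just that lookup (hypothetical codebook; the paper learns state-conditioned codes with a VQ-VAE, which is not shown here):

```python
import numpy as np

def quantize_actions(actions, codebook):
    """VQ-style quantization: replace each continuous action vector with
    the index (and value) of its nearest codebook entry in L2 distance."""
    # Pairwise squared distances between actions (N, D) and codes (K, D) -> (N, K)
    d2 = ((actions[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

# Hypothetical 3-entry codebook over a 2-D action space.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
actions = np.array([[0.1, -0.1], [0.9, 0.2]])
idx, quantized = quantize_actions(actions, codebook)
```

The point of learning the codebook (rather than gridding each action dimension) is that K codes cover only the regions of action space the data actually visits, instead of the K^D cells a per-dimension grid would need.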