Minimum-norm interpolators have recently gained attention as an analyzable model that sheds light on the double descent phenomenon observed for neural networks. Most prior work has focused on analyzing interpolators in Hilbert spaces, where, typically, an effectively low-rank structure of the feature covariance prevents the bias from becoming large. This work takes a first step towards establishing a general framework that connects the generalization properties of such interpolators to well-known concepts from high-dimensional geometry, specifically the local theory of Banach spaces.
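For concreteness, the central object can be sketched as follows; the notation (the function class $\mathcal{F}$, the norm $\|\cdot\|$, and the sample $(x_i, y_i)_{i=1}^n$) is introduced here for illustration and is not taken from the abstract. A minimum-norm interpolator is any solution of
\[
\hat{f} \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \; \|f\| \quad \text{subject to} \quad f(x_i) = y_i, \qquad i = 1, \dots, n,
\]
where choosing $\|\cdot\|$ to be a Hilbert-space norm recovers the ridgeless least-squares setting studied in most prior work, while general Banach-space norms are the subject of the framework developed here.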