The lower-triangular all-ones counting matrix, which maps a stream of inputs to its running prefix sums, is a key primitive in differential privacy: its factorization norms determine the utility guarantees achievable when training machine-learning models privately with correlated noise. For more than three decades, the best known upper bound on these norms remained essentially unchanged, and recent work asked whether any explicit factorization could provably improve it. In this talk I present an explicit, efficiently computable construction that improves the longstanding bound, together with significantly stronger lower bounds, narrowing the remaining constant-factor gap to a small margin.
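To make the object concrete, here is a small Python sketch (an illustration, not the construction from the talk). Assuming the longstanding baseline referred to is the classical square-root factorization of the counting matrix, the snippet builds the matrix, verifies that factorization, and compares the factorization-norm objective (max row 2-norm of one factor times max column 2-norm of the other) against the trivial factorization; the helper name `fact_norm` and the dimension `n = 64` are choices made here for illustration.

```python
import numpy as np

n = 64

# Lower-triangular all-ones counting matrix: (A @ x)[i] = x[0] + ... + x[i].
A = np.tril(np.ones((n, n)))

# Classical square-root factorization A = M @ M, where M is lower-triangular
# Toeplitz with f(k) = binom(2k, k) / 4**k on the k-th subdiagonal
# (the Taylor coefficients of 1 / sqrt(1 - x)).
f = np.empty(n)
f[0] = 1.0
for k in range(1, n):
    f[k] = f[k - 1] * (2 * k - 1) / (2 * k)
M = np.array([[f[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])

assert np.allclose(M @ M, A)  # the square-root factorization is exact

def fact_norm(B, C):
    """Factorization-norm objective for A = B @ C:
    (max row 2-norm of B) * (max column 2-norm of C)."""
    return np.linalg.norm(B, axis=1).max() * np.linalg.norm(C, axis=0).max()

trivial = fact_norm(A, np.eye(n))  # baseline B = A, C = I gives sqrt(n)
sqrt_fact = fact_norm(M, M)        # grows only logarithmically in n

print(f"trivial: {trivial:.3f}, square-root factorization: {sqrt_fact:.3f}")
```

The gap between the two printed values is the kind of improvement factorization norms quantify: the trivial factorization scales as the square root of the stream length, while the square-root factorization scales only logarithmically, which translates directly into less noise per query in the private-training setting.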