In this talk we will enumerate the main reasons for a collection of matrices to multiply to zero. Our motivation is Bayesian Learning Theory, where one of the goals is to progressively approximate an unknown distribution using data generated from that distribution. A key component in this framework is a function K (the relative entropy), which is often highly singular. The invariants of the singularities of K (in the style of the log canonical threshold) are related, via Singular Learning Theory, to how well the model "generalizes" (in Machine Learning terms, how efficiently the model can be trained). Computing these singularity invariants in real-life Machine Learning scenarios is notoriously difficult. In this talk, we focus on an elementary example and compute the learning coefficients of linear neural networks. Joint work with S. P. Lehalleur.
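For readers less familiar with Singular Learning Theory, the objects mentioned above can be sketched in Watanabe's standard formulation (this is background, not part of the talk abstract): K is the Kullback-Leibler divergence from the true distribution q to the model p(x | w), and the learning coefficient is the real log canonical threshold of K, which governs the leading term of the Bayes generalization error.

```latex
% Relative entropy between the true distribution q and the model p(x|w)
% at parameter w; it vanishes exactly on the set of true parameters,
% which is typically singular for neural network models:
K(w) = \int q(x)\, \log \frac{q(x)}{p(x \mid w)} \, dx .

% The learning coefficient \lambda (the real log canonical threshold of K)
% controls the asymptotics of the expected Bayes generalization error G_n
% after n samples:
\mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right).
```

For regular statistical models, \(\lambda = d/2\) with d the parameter dimension; for singular models such as linear neural networks, \(\lambda\) is generally smaller, which is one sense in which singular models generalize better.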