Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains perform this processing in deep sensory networks shaped through plasticity. However, our understanding of the underlying plasticity mechanisms remains rudimentary. I will introduce Latent Predictive Learning (LPL), a plasticity model prescribing a local learning rule that combines Hebbian elements with predictive plasticity. I will show that deep neural networks equipped with LPL develop disentangled object representations without supervision. The same rule accurately captures the neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Finally, our model generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity (STDP). LPL thus constitutes a plausible normative theory of representation learning in the brain while making concrete, testable predictions.
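
To make the flavor of such a rule concrete, the sketch below pairs a predictive term, which pulls a neuron's current response toward its response to the preceding stimulus, with a Hebbian variance term that keeps representations from collapsing. This is an illustrative simplification under stated assumptions, not the published LPL formulation: the exact update, the use of linear units, the running-statistics mechanism, and the hyperparameters (`eta`, `lmbda`, `alpha`) are all assumed for illustration.

```python
import numpy as np

# Minimal sketch of an LPL-style local update (illustrative, not the
# published rule; all forms and constants below are assumptions).
rng = np.random.default_rng(0)

n_in, n_out = 64, 16
W = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward weights

eta = 1e-3      # learning rate (assumed)
lmbda = 1.0     # weight of the Hebbian variance term (assumed)
alpha = 0.01    # update rate of the slow running statistics (assumed)
running_mean = np.zeros(n_out)
running_var = np.ones(n_out)

def lpl_step(x_prev, x_curr):
    """One plasticity step on a pair of temporally adjacent inputs."""
    global W, running_mean, running_var
    z_prev = W @ x_prev                  # response to previous stimulus
    z_curr = W @ x_curr                  # response to current stimulus

    # Predictive term: penalize the change in the response across time,
    # so units come to encode slowly varying (latent) stimulus features.
    pred_err = z_curr - z_prev

    # Hebbian term: push each unit's response away from its running mean,
    # scaled by inverse variance, preventing representational collapse.
    hebb = (z_curr - running_mean) / (running_var + 1e-6)

    # Local update: a purely postsynaptic factor times presynaptic activity.
    post = -pred_err + lmbda * hebb
    W += eta * np.outer(post, x_curr)

    # Slow statistics act like metaplasticity variables gating the rule.
    running_mean += alpha * (z_curr - running_mean)
    running_var += alpha * ((z_curr - running_mean) ** 2 - running_var)

# Usage: drive the rule with a temporally correlated input stream, so
# that consecutive inputs share latent structure worth predicting.
x = rng.normal(size=n_in)
for _ in range(1000):
    x_next = x + 0.1 * rng.normal(size=n_in)  # slowly drifting stimulus
    lpl_step(x, x_next)
    x = x_next
```

The inverse-variance scaling of the Hebbian term is one plausible reading of the metaplasticity mentioned above: slowly updated activity statistics modulate how strongly the correlational term drives each synapse's weight change.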