Exponential tilting is a technique commonly used to create parametric distribution shifts. Despite its prevalence in related fields, tilting has not seen widespread use in machine learning. In this talk, I discuss a simple extension of empirical risk minimization (ERM)---tilted empirical risk minimization (TERM)---which uses tilting to flexibly tune the impact of individual losses. I make connections between TERM and related approaches, such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization (DRO), and present batch and stochastic first-order optimization methods for solving TERM at scale. Finally, I show that this baseline can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance---delivering state-of-the-art performance relative to more complex, bespoke solutions for these problems.
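For reference, a minimal sketch of the tilted objective, assuming the standard exponential-tilting form with tilt hyperparameter $t$, per-sample losses $f(x_i;\theta)$, and $N$ training samples (notation introduced here, not taken from the abstract itself):
\[
\widetilde{R}(t;\theta) \,=\, \frac{1}{t}\,\log\!\left(\frac{1}{N}\sum_{i=1}^{N} e^{\,t\, f(x_i;\theta)}\right).
\]
As $t \to 0$ this recovers the standard ERM (average-loss) objective, as $t \to +\infty$ it approaches the max-loss (worst-case) objective, and $t < 0$ suppresses the influence of large losses such as outliers.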