Distribution has been a major trend in computing over the last two decades, enabling a wide range of applications, from the fast training of large-scale machine learning models to cloud services that can process our requests within milliseconds. In this talk, I will describe some of the basic ideas underpinning these applications, in the context of our lab's work. Specifically, I will first describe the role of efficient distributed algorithms in machine learning, and the intriguing trade-offs between their synchronization costs and their convergence properties. Second, I will discuss our work on scalable variants of classic data structures, such as priority queues and search trees, as well as on population protocols, roughly defined as distributed algorithms that can be implemented at molecular scale.
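
To make the first trade-off concrete: a standard way to cut synchronization cost in data-parallel training is to compress the gradients each node exchanges, at the price of extra variance that can slow convergence. The following is a minimal sketch of stochastic gradient quantization in this spirit (e.g., QSGD-style); the function name and parameter choices are illustrative, not a specific implementation from the talk.

    import numpy as np

    def stochastic_quantize(g, levels=4, rng=np.random.default_rng(0)):
        """Quantize gradient g to a few discrete magnitude levels.

        Each coordinate is rounded up or down at random so that the
        result is an unbiased estimate of g. Fewer levels mean fewer
        bits to communicate (lower synchronization cost) but higher
        variance (slower convergence) -- the trade-off in question.
        """
        norm = np.linalg.norm(g)
        if norm == 0.0:
            return g
        scaled = np.abs(g) / norm * levels       # position in [0, levels]
        lower = np.floor(scaled)
        round_up = rng.random(g.shape) < (scaled - lower)
        return np.sign(g) * (lower + round_up) * (norm / levels)

    # Example: a 4-level quantization of a small gradient vector.
    print(stochastic_quantize(np.array([0.5, -1.2, 0.1, 2.0])))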
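
To illustrate what a population protocol looks like: agents with constant-size state interact in random pairs, with no global control, which is what makes the model plausible at molecular scale. Below is a minimal simulation of the classic three-state ("undecided") majority protocol; this is a textbook example under assumed transition rules, not necessarily one of the protocols covered in the talk.

    import random
    from collections import Counter

    def majority_protocol(initial, steps=200_000, seed=0):
        """Simulate the three-state majority population protocol.

        Each agent holds 'A', 'B', or 'U' (undecided). A scheduler
        repeatedly picks two distinct agents uniformly at random:
          - if they hold opposing opinions, one becomes undecided;
          - an undecided agent adopts its partner's opinion.
        With high probability, the population converges to the
        initial majority opinion.
        """
        rng = random.Random(seed)
        agents = list(initial)
        for _ in range(steps):
            i, j = rng.sample(range(len(agents)), 2)
            a, b = agents[i], agents[j]
            if {a, b} == {'A', 'B'}:
                agents[j] = 'U'
            elif a == 'U' and b != 'U':
                agents[i] = b
            elif b == 'U' and a != 'U':
                agents[j] = a
        return Counter(agents)

    # 60 agents start with opinion A, 40 with B: expect all-A.
    print(majority_protocol(['A'] * 60 + ['B'] * 40))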