The strong lottery ticket hypothesis (SLTH) posits that, for any given target network, a sufficiently large randomly initialized neural network contains a subnetwork whose input-output behavior approximates that of the target. This viewpoint suggests an alternative paradigm for model design: rather than adjusting parameters through training, we can search for effective subnetworks by pruning. In this talk, I will introduce the SLTH and discuss several pruning approaches.
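As a toy illustration (not from the talk itself), one common way to prove SLTH-style results reduces them to a subset-sum argument: among many random weights, some subset sums close to any target weight. The sketch below "prunes" a small random weight vector by brute-force search over binary masks to approximate a hypothetical target value; all names and constants are illustrative.

```python
import itertools
import random

random.seed(0)

def best_mask(random_weights, target):
    """Return the binary mask whose kept weights sum closest to target."""
    best, best_err = None, float("inf")
    for mask in itertools.product([0, 1], repeat=len(random_weights)):
        approx = sum(m * w for m, w in zip(mask, random_weights))
        err = abs(approx - target)
        if err < best_err:
            best, best_err = mask, err
    return best, best_err

# A small "overparameterized" random layer and a hypothetical target weight.
weights = [random.uniform(-1, 1) for _ in range(12)]
target = 0.37
mask, err = best_mask(weights, target)
print(err)  # with 2**12 candidate masks, the error is typically very small
```

No weight is ever trained here: the mask alone carries the approximation, which is the core intuition behind pruning-as-search.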