Despite increasing adoption by industry, developing program verification tools remains challenging, and their designers must carefully balance tradeoffs among false alarms, missed bugs, and scalability to large codebases. Furthermore, when tools fail to verify some program property, they provide only coarse estimates of alarm relevance and of the likelihood that a real bug has been found, thereby limiting their usefulness in large software projects. I will present a framework to extend program reasoning systems with rich probabilistic models. These models emerge naturally from the program structure, and probabilistic inference refines the deductive process of the underlying system. By enabling users to focus on the most important alarms and by automatically generalizing from feedback, these techniques enable an order-of-magnitude reduction in false alarm rates and in invocations of expensive reasoning engines such as SMT solvers. I will conclude by describing how these ideas promise to underpin the next generation of intelligent programming systems, with applications in areas such as program synthesis, differentiable programming, and fault localization in complex systems.
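
The core intuition behind generalizing from feedback can be illustrated with a toy sketch: alarms that share derivation steps are probabilistically coupled, so a user marking one alarm false should automatically lower confidence in related alarms while leaving unrelated ones untouched. The rule names, priors, and the simple all-rules-must-hold derivation model below are illustrative assumptions for exposition, not the actual system from the talk:

```python
from itertools import product

def alarm_posteriors(rule_priors, alarms, evidence):
    """Exact inference by enumerating joint outcomes of derivation rules.

    rule_priors: {rule: prior prob. that the rule's conclusion holds}
    alarms:      {alarm: set of rules its derivation depends on}
    evidence:    {alarm: truth value observed during user triage}
    """
    rules = list(rule_priors)
    totals = {a: 0.0 for a in alarms}
    z = 0.0  # normalizing constant over worlds consistent with evidence
    for states in product([False, True], repeat=len(rules)):
        fired = dict(zip(rules, states))
        # Prior weight of this joint assignment of rule outcomes.
        w = 1.0
        for r in rules:
            w *= rule_priors[r] if fired[r] else 1 - rule_priors[r]
        # An alarm is a true positive iff every rule in its derivation holds.
        truth = {a: all(fired[r] for r in deps) for a, deps in alarms.items()}
        # Discard worlds inconsistent with the user's feedback.
        if any(truth[a] != v for a, v in evidence.items()):
            continue
        z += w
        for a in alarms:
            if truth[a]:
                totals[a] += w
    return {a: totals[a] / z for a in alarms}

# Hypothetical setup: A1 and A2 share rule r1; A3 is independent.
priors = {"r1": 0.9, "r2": 0.9, "r3": 0.9}
alarms = {"A1": {"r1"}, "A2": {"r1", "r2"}, "A3": {"r3"}}
post = alarm_posteriors(priors, alarms, {"A2": False})
```

Marking A2 false drops the posterior of the related alarm A1 from 0.9 to roughly 0.47, while the unrelated A3 stays at 0.9, so ranking alarms by posterior now surfaces A3 first. Real systems replace this brute-force enumeration with scalable approximate inference, but the coupling effect is the same.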