Classic algorithms and machine learning systems such as neural networks are both ubiquitous in everyday life. While classic computer
science algorithms are suitable for precise execution of exactly defined
tasks such as finding the shortest path in a large graph, neural
networks can learn from data to predict the most likely answer in
more complex tasks such as image classification, which cannot be
reduced to an exact algorithm. In this talk, we explore combining both concepts, leading to architectures that are more robust, better performing, more interpretable, more computationally efficient, and, most importantly, more data-efficient. Using algorithmic supervision, a neural network can learn
from, or in conjunction with, an algorithm. When integrating an algorithm
into a neural architecture, it is important that the algorithm be differentiable, so that the architecture can be trained end-to-end and
gradients can be propagated back through the algorithm in a meaningful
way. To make algorithms differentiable, I discuss a general method for continuously relaxing algorithms by perturbing variables with logistic distributions.
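As a rough illustration of this principle (a minimal sketch, not the talk's actual implementation), a hard comparison can be relaxed into a smooth one: perturbing the difference of the compared values with logistic noise and taking the expectation turns the step function into a logistic sigmoid. The PyTorch function names and the scale parameter beta below are illustrative assumptions:

```python
import torch

def hard_greater(x, y):
    # Hard comparison: a step function whose gradient is zero almost
    # everywhere, so nothing useful flows back through it.
    return (x > y).float()

def soft_greater(x, y, beta=0.1):
    # Relaxed comparison: perturbing x - y with logistic noise of scale
    # beta and taking the expectation yields the logistic CDF, i.e. a
    # sigmoid, which is smooth and differentiable everywhere.
    return torch.sigmoid((x - y) / beta)

x = torch.tensor(0.2, requires_grad=True)
y = torch.tensor(0.5, requires_grad=True)
p = soft_greater(x, y)  # a smooth "probability" that x > y
p.backward()            # nonzero gradients reach both inputs
print(p.item(), x.grad.item(), y.grad.item())
```

As beta approaches zero, the relaxation recovers the hard comparison; larger beta trades exactness for smoother, more informative gradients.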
In addition, I discuss specialized differentiable algorithms such as differentiable sorting networks, as well as efficient and effective differentiable sorting and ranking operators that enable sorting and ranking supervision.
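To give a flavor of how a differentiable sorting network can operate, the toy sketch below relaxes the conditional swaps of an odd-even transposition network with sigmoid-weighted soft min/max operations. The helper names and the steepness parameter are assumptions made for illustration, not the operators presented in the talk:

```python
import torch

def soft_cswap(a, b, steepness=10.0):
    # Soft conditional swap: alpha approaches 1 when a < b, so the outputs
    # smoothly approach (min(a, b), max(a, b)); the hard comparator of a
    # sorting network is again relaxed with a logistic sigmoid.
    alpha = torch.sigmoid((b - a) * steepness)
    return alpha * a + (1 - alpha) * b, alpha * b + (1 - alpha) * a

def diff_odd_even_sort(x, steepness=10.0):
    # Odd-even transposition sorting network with soft compare-and-swap:
    # n rounds over alternating even/odd neighbor pairs.
    vals = list(x.unbind(-1))
    n = len(vals)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            vals[i], vals[i + 1] = soft_cswap(vals[i], vals[i + 1], steepness)
    return torch.stack(vals, dim=-1)

x = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)
y = diff_odd_even_sort(x)   # approximately [1., 2., 3.]
y.sum().backward()          # gradients flow through every soft swap
```

Because every swap is differentiable, a network producing the inputs can be trained when only the ground-truth ordering is known, which is exactly the sorting and ranking supervision setting.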
Furthermore, I delve into differentiable rendering, specifically the generalized differentiable renderer GenDR.
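In the same spirit, the core idea behind differentiable rendering can be illustrated by relaxing the hard inside/outside test of rasterization into a sigmoid of the signed distance to a shape's silhouette; the choice of smoothing distribution is the kind of degree of freedom that GenDR generalizes. The sketch below, with made-up names and a circle instead of a triangle mesh, is a toy illustration and not GenDR's API:

```python
import torch

def soft_coverage(px, py, cx, cy, radius, tau=0.05):
    # Relaxed rasterization: the hard inside/outside test of a pixel
    # against a circle's silhouette becomes a sigmoid of the signed
    # distance, so the image is differentiable w.r.t. shape parameters.
    dist = torch.sqrt((px - cx) ** 2 + (py - cy) ** 2)
    return torch.sigmoid((radius - dist) / tau)

# Render a tiny 8x8 silhouette image and backpropagate a toy loss.
ys, xs = torch.meshgrid(torch.linspace(0, 1, 8),
                        torch.linspace(0, 1, 8), indexing="ij")
cx = torch.tensor(0.4, requires_grad=True)
cy = torch.tensor(0.6, requires_grad=True)
radius = torch.tensor(0.3, requires_grad=True)
img = soft_coverage(xs, ys, cx, cy, radius)
loss = ((img - 1.0) ** 2).mean()  # toy target: a fully covered image
loss.backward()                   # gradients reach cx, cy, and radius
```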