PageRank (PR) is an algorithm developed by Google to rank websites as part of its search engine. Roughly speaking, PageRank represents a method for aggregating scores of variable-length random walks. The aggregation is performed using weighted combinations of walk parameters, with the weights given by a predetermined function of a single “diffusion parameter”. Since many other learning and optimization methods rely on random walk aggregation, various generalizations of PageRank have been proposed in the literature. These generalizations optimize or adapt the weights used in the aggregation process to the task at hand; examples include Personalized PageRank and Heat-Kernel PageRank.
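As a concrete illustration of this aggregation view, the minimal sketch below (our own example, not code from the talk) computes a generalized PageRank score as a weighted sum of random-walk distributions of increasing length; the function name, the truncation at K steps, and the assumption that the graph has no isolated nodes are illustrative choices. Geometric weights alpha*(1 - alpha)^k recover standard (personalized) PageRank, while Poisson weights e^{-t} t^k / k! correspond to Heat-Kernel PageRank.

```python
import numpy as np

def generalized_pagerank(A, seed, weights):
    """Aggregate random-walk distributions of increasing length with given weights.

    A       : (n, n) adjacency matrix of the graph (no isolated nodes assumed)
    seed    : (n,) seed/teleportation distribution (sums to 1)
    weights : iterable of non-negative weights gamma_k, one per walk length k
    """
    # Row-normalize the adjacency matrix to obtain the random-walk transition matrix.
    P = A / A.sum(axis=1, keepdims=True)
    x = seed.astype(float)
    score = np.zeros_like(x)
    for gamma_k in weights:
        score += gamma_k * x   # add the length-k walk distribution, weighted by gamma_k
        x = x @ P              # advance the walk by one step
    return score

# Example weight choices governed by a single diffusion parameter:
alpha, t, K = 0.15, 5.0, 50
ppr_weights = [alpha * (1 - alpha) ** k for k in range(K)]                   # Personalized PageRank
hkpr_weights = [np.exp(-t) * t ** k / np.math.factorial(k) for k in range(K)]  # Heat-Kernel PageRank
```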
In this talk, we discuss two new generalized PageRank methods, Inverse PageRank (IPR) and Adaptive PageRank (APR). IPR offers provable state-of-the-art performance guarantees for local (seed-set) community detection, while APR can be applied to a variety of graph neural network learning tasks, as it adaptively learns the aggregation weights of the random walks. We describe the underlying mathematical principles supporting the parameter selection process and present numerous experiments on synthetic and real-world datasets that illustrate the performance of the generalized PageRank methods.
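To convey the idea of adaptively learned aggregation weights, the sketch below treats the per-hop weights as trainable parameters of a graph neural network layer; the class name, initialization, and overall design are our own illustrative assumptions and should not be read as the exact APR architecture presented in the talk.

```python
import torch
import torch.nn as nn

class AdaptiveWalkAggregation(nn.Module):
    """Illustrative layer in which the hop-aggregation weights gamma_k are
    learned jointly with the rest of the network (a sketch, not the authors'
    exact APR method)."""
    def __init__(self, num_hops):
        super().__init__()
        # One trainable weight per walk length, initialized uniformly.
        self.gamma = nn.Parameter(torch.full((num_hops,), 1.0 / num_hops))

    def forward(self, P, H):
        # P: (n, n) normalized propagation matrix; H: (n, d) node features.
        out = torch.zeros_like(H)
        X = H
        for k in range(self.gamma.shape[0]):
            out = out + self.gamma[k] * X  # weighted contribution of hop k
            X = P @ X                      # propagate features one more step
        return out
```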
This is joint work with Eli Chien, Pan Li and Jianhao Peng.