JuliaCon 2024

Probabilistic inference using contraction of tensor networks
07-10, 14:00–14:30 (Europe/Amsterdam), Method (1.5)

TensorInference, a package for exact probabilistic inference in discrete graphical models, capitalizes on recent advances in tensor networks. Its tensor-based engine features optimized contraction-ordering methods, which are critical to computational performance, and it leverages optimized BLAS routines and GPU acceleration for further efficiency. In a comparative evaluation against similar libraries, TensorInference demonstrates superior scalability as model complexity increases.


GitHub repo: https://github.com/TensorBFS/TensorInference.jl
JOSS publication: https://joss.theoj.org/papers/10.21105/joss.05700

Reasoning under uncertainty is a key challenge in fields such as AI, medical diagnosis, computer vision, and natural language processing. Probabilistic graphical models (PGMs) address this challenge by compactly representing the joint probability distribution of a complex system.

Despite their utility, probabilistic inference in PGMs remains a hard task, involving high-dimensional sums and combinatorial optimization. To address this, we introduce TensorInference.jl, a Julia package that combines PGMs with the computational power of tensor networks to improve the performance of probabilistic inference in complex models.

Probabilistic inference derives quantities of interest, such as the probability of a hypothesis given observed data, from the axioms of probability theory. Inference methods fall into two categories: exact and approximate. Exact inference is NP-hard in general, with a cost governed by the model's treewidth. Approximate methods such as Markov chain Monte Carlo and variational inference, implemented in packages like Stan and PyMC3, scale better but offer no formal accuracy guarantees. This lack of guarantees has driven a resurgence of interest in exact methods.
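
To make the role of treewidth concrete, recall the standard factorization and cost bound for discrete PGMs (textbook definitions, included here only for context and not specific to TensorInference.jl):

    P(x_1,\dots,x_n) \;=\; \frac{1}{Z} \prod_{c} \phi_c(x_c),
    \qquad
    Z \;=\; \sum_{x_1,\dots,x_n} \prod_{c} \phi_c(x_c),
    \qquad
    \text{cost of variable elimination} \;=\; O\!\bigl(n\, d^{\,w+1}\bigr),

where n is the number of variables, d the largest variable cardinality, and w the treewidth of the model's graph.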

TensorInference.jl leverages recent advances in tensor networks to provide exact, high-performance solutions to inference problems. It supports computing partition functions, marginal probabilities, the most probable variable assignments (MPE/MAP), and posterior samples.
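
The sketch below, based on the examples in the package documentation, shows how these tasks map onto the API; names such as problem_from_artifact, read_model, read_evidence, probability, marginals, most_probable_config, and sample reflect the documented interface at the time of writing and may differ between versions.

    using TensorInference

    # Load a benchmark instance shipped with the package's artifacts
    # (a "Promedus" medical-diagnosis network from the UAI 2014 suite).
    problem = problem_from_artifact("uai2014", "MAR", "Promedus", 14)

    # Build the tensor network, conditioning on the instance's evidence.
    tn = TensorNetworkModel(read_model(problem); evidence = read_evidence(problem))

    probability(tn)            # partition function of the conditioned model
    marginals(tn)              # posterior marginals of the unobserved variables
    most_probable_config(tn)   # most likely joint assignment (MPE/MAP)
    sample(tn, 10)             # draw 10 samples from the posterior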

Like PGMs, tensor networks are adept at representing the states of complex systems. Their computational efficiency hinges on the order in which tensors are contracted, so optimizing this contraction order is vital.
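
To see why the order matters, consider a small sketch using OMEinsum.jl, the einsum backend in the same ecosystem; it assumes a recent OMEinsum version that exports uniformsize, optimize_code, and contraction_complexity and re-exports the TreeSA optimizer (otherwise load OMEinsumContractionOrders as well).

    using OMEinsum

    # A chain of four matrix-like tensors sharing indices i, j, k, l, m.
    code  = ein"ij,jk,kl,lm->im"
    sizes = uniformsize(code, 100)   # give every index dimension 100

    # Complexity of contracting all four tensors in a single step ...
    println(contraction_complexity(code, sizes))

    # ... versus a pairwise order found by the TreeSA local-search optimizer.
    optcode = optimize_code(code, sizes, TreeSA())
    println(contraction_complexity(optcode, sizes))

Contracting the four tensors in one step loops over all five indices at once, while the optimized pairwise order reduces the work to a sequence of matrix products; the printed complexities make the difference explicit.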

TensorInference.jl supports differentiable programming, generic element types, and hyper-optimized contraction orders. It provides several contraction-order optimizers: a local-search method (TreeSA), two bipartitioning methods based on min-cut algorithms (SABipartite and KaHyParBipartite), and a greedy heuristic (GreedyMethod). It also incorporates BLAS routines and GPU technology for improved performance.
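
As a sketch of how one of these optimizers might be selected when building a model (keyword arguments such as ntrials, niters, and sz_max follow the documented options of OMEinsumContractionOrders.jl and should be checked against the current docstrings):

    using TensorInference

    problem = problem_from_artifact("uai2014", "MAR", "Promedus", 14)

    # Pick any of the supported contraction-order optimizers.
    optimizer = TreeSA(ntrials = 1, niters = 5)   # local search over contraction trees
    # optimizer = SABipartite(sz_max = 25)        # min-cut style bipartitioning, pure Julia
    # optimizer = KaHyParBipartite(sz_max = 25)   # min-cut style, uses the KaHyPar partitioner
    # optimizer = GreedyMethod()                  # fast greedy heuristic

    tn = TensorNetworkModel(read_model(problem);
                            evidence  = read_evidence(problem),
                            optimizer = optimizer)

    # Inspect the time/space cost of the contraction order that was found.
    contraction_complexity(tn)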

Martin Roa-Villescas earned his B.Sc. degree in Electronic Engineering from the National University of Colombia (UNAL) in Manizales, Colombia, in 2010. He received his M.Sc. degree in Embedded Systems from Eindhoven University of Technology (TU/e) in Eindhoven, The Netherlands, in 2013. Currently, he is pursuing a Ph.D. degree in Bayesian Machine Learning at TU/e. Between 2013 and 2018, he served as an embedded software designer at Philips Research in Eindhoven, The Netherlands. His research interests encompass probabilistic graphical models, open-source software, and robotics.