JuliaCon 2024

What's new with GraphPPL.jl?
07-12, 11:50–12:00 (Europe/Amsterdam), For Loop (3.2)

Probabilistic Programming Languages (PPLs) aim to shield users from the complex mechanics of Bayesian inference. GraphPPL has previously been introduced as the PPL of RxInfer.jl. GraphPPL uses Julia's metaprogramming functionality to transform high-level model specifications into plain, executable Julia code. In the newest release of GraphPPL, users can use any GraphPPL model as a submodel in larger models, introducing modularity into probabilistic programming.


Introduction

Popular Probabilistic Programming Languages (PPLs) in Julia include Turing.jl and GraphPPL.jl, with both packages exploiting Julia's metaprogramming functionality to design a syntax for specifying probabilistic generative models. This metaprogramming approach differs from approaches like Pyro that define the syntax of the PPL within the host general-purpose language. PPLs based on metaprogramming generally offer a higher-level programming interface and can shield many implementation details from the user by transforming high-level user code into specific instructions.

A shortcoming of existing PPLs is that they offer only a limited notion of modularity, i.e., the reuse of models within larger models. The newest release of GraphPPL aims to fill this gap by allowing users to use any previously specified model as a submodel. By explicitly framing the generative model as a Factor Graph, existing models can always be used as subgraphs in a larger generative model. The newest release of GraphPPL also removes the dependency on a particular inference backend, making it a backend-agnostic PPL that fully specifies a generative model and possible inference constraints.

Nested model specification

For deep learning, PyTorch revolutionized model specification by introducing a modular model specification language: any PyTorch module can be used as a submodule in a larger model. GraphPPL brings this modularity to probabilistic programming by allowing the user to use any GraphPPL model in a larger model specification. By materializing the specified model as a Factor Graph, existing models are represented as concrete graphs and can be used as subgraphs in larger models. By allowing users to build complex models through the composition of nested submodels, GraphPPL relieves researchers of the burden of interpreting an entire model when making subtle adjustments. Instead, the complexity is encapsulated within the nested submodels, presenting users with a concise and focused view of the code relevant to their specific modeling decisions. This modularity is akin to the concept of functions in ordinary programming, where functions greatly improve the readability and maintainability of computer programs. Similarly, GraphPPL submodels enhance the modularity and readability of probabilistic models, allowing for a more intuitive and efficient workflow for researchers and practitioners.
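As a minimal sketch of what nested model specification can look like, the following defines a small model and reuses it inside a larger one. The model names and parameters here are illustrative, and the keyword-argument style for distribution nodes and submodel invocation is assumed from the GraphPPL/RxInfer documentation:

```julia
using GraphPPL

# A small, reusable submodel: a noisy observation of some latent mean.
@model function noisy_observation(y, mean, precision)
    y ~ Normal(mean = mean, precision = precision)
end

# A larger model that reuses `noisy_observation` as a submodel.
# The submodel is invoked with the same `~` syntax as a regular
# factor node, with its interfaces passed as keyword arguments.
@model function hierarchical(y)
    m ~ Normal(mean = 0.0, precision = 1.0)
    t ~ Gamma(shape = 1.0, rate = 1.0)
    y ~ noisy_observation(mean = m, precision = t)
end
```

Because both models materialize as factor graphs, the graph of `noisy_observation` is simply spliced into the graph of `hierarchical` as a subgraph, mirroring how PyTorch submodules compose.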

Backend agnosticism

Previous releases of the GraphPPL language depended on the inference backend provided by ReactiveMP.jl. This dependency has been removed, making GraphPPL a backend-agnostic PPL. The main advantage is that other Bayesian inference packages can use GraphPPL as their user interface. A model created in GraphPPL fully specifies a generative model and the inference constraints necessary for inference backends to perform Bayesian inference.

Conclusions

The most recent update to GraphPPL redesigned the language to support nested model specification through the materialization of a Factor Graph. Along with other significant quality-of-life updates, this marks a significant milestone in the design of the PPL and, by extension, for the RxInfer ecosystem.


PhD student @ Eindhoven University of Technology
