
State of the field is missing #67

Closed
gdalle opened this issue Sep 7, 2023 · 3 comments
gdalle commented Sep 7, 2023

The only explicit mention of alternatives provided in the JOSS paper is the JunctionTrees.jl package, by the very same author. This is not enough to get an accurate picture of the field.

Some questions I have are:

  • What are the leading packages for graphical model inference in Python, R and Julia?
  • Which ones perform exact inference as opposed to variational inference, and why does it matter?
  • How is TensorInference.jl different from the Julia competitors in terms of functionality and performance?
  • Why would one choose the Julia option instead of Python or R?

A quick search yielded the following related Julia packages, but there are probably more:

Plus there are the heavyweight probabilistic programming languages like Turing.jl and Gen.jl.

openjournals/joss-reviews#5700

mroavi (Collaborator) commented Sep 10, 2023

Thanks for the great questions! I agree they're important to address. We're already over the word limit set by JOSS (1363 of the 1000 permitted words), so we had to cut some material. We'd welcome suggestions on what to trim to make room for these points.

About exact vs. variational inference: in short, exact methods are accurate by construction but can be computationally intractable, while approximate methods, like those used in RxInfer.jl, scale better at the cost of precision. It's a trade-off between speed (or scalability) and accuracy. A notable drawback of approximate methods is that they lack formal accuracy guarantees; in fact, bounding their error is itself an NP-hard problem.
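To make the "exact" side of this trade-off concrete, here is a minimal illustrative sketch (plain Python with invented numbers, not code from any of the packages discussed): exact posterior inference on a two-variable Bayes net by enumerating the full joint. This is always correct, but the joint table grows exponentially with the number of variables, which is exactly what makes exact inference intractable at scale.

```python
from itertools import product

# Toy Bayes net over binary variables R (rain) and W (wet grass).
# All probabilities are made up for illustration.
p_r = {0: 0.8, 1: 0.2}                       # P(R)
p_w_given_r = {0: {0: 0.9, 1: 0.1},          # P(W | R=0)
               1: {0: 0.2, 1: 0.8}}          # P(W | R=1)

# Enumerate the full joint P(R, W) -- exponential in the number of variables.
joint = {(r, w): p_r[r] * p_w_given_r[r][w]
         for r, w in product((0, 1), repeat=2)}

# Exact posterior P(R | W=1): condition on the evidence and renormalize.
evidence = sum(p for (r, w), p in joint.items() if w == 1)
posterior = {r: joint[(r, 1)] / evidence for r in (0, 1)}
print(posterior)  # approximately {0: 1/3, 1: 2/3}
```

Approximate methods replace this exhaustive enumeration with an optimization or message-passing scheme that avoids building the joint, which is where the scalability gain (and the loss of guarantees) comes from.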

Regarding Julia packages, I know RxInfer.jl best. It focuses on approximate (mainly variational) methods for continuous variables (it does seem to support categorical distributions, which are discrete, but this is clearly not the package's strength), which is why we didn't pick it as a reference to compare against. The others, like GraphicalModelLearning.jl and BayesNets.jl, are light on documentation, so it would be hard to evaluate them properly with our benchmarks.

Every few years, there's a UAI inference competition. We looked at the contenders there and picked Merlin and libDAI (both implemented in C++) for our benchmarks: they're open source, strong in exact inference, and well documented. Our initial benchmarks are promising and I'm rerunning them now. Happy to share the results once they're done, and we could even add them to the JOSS paper if you think that would be helpful.

Beyond the usual reasons to go for Julia—like its speed, the power of multiple dispatch, and its intuitive mathematical syntax—one major factor stands out: the tensor network field has seen a lot of action recently, and much of that innovation is happening right in the Julia ecosystem.
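The connection between tensor networks and graphical-model inference can be illustrated with a small sketch (illustrative only, with invented factor tables, using NumPy's `einsum` rather than any of the Julia packages discussed): the unnormalized distribution of a discrete model is a product of factors, so the partition function and marginals are tensor-network contractions.

```python
import numpy as np

# Chain-structured model A - B - C over binary variables,
# with made-up (unnormalized) factor tables.
phi_ab = np.array([[1.0, 0.5], [0.5, 2.0]])   # factor over (A, B)
phi_bc = np.array([[2.0, 1.0], [1.0, 3.0]])   # factor over (B, C)

# Partition function Z = sum over a,b,c of phi_ab[a,b] * phi_bc[b,c],
# expressed as a full tensor contraction.
Z = np.einsum("ab,bc->", phi_ab, phi_bc)

# Marginal of B: contract out every index except b, then normalize.
marg_b = np.einsum("ab,bc->b", phi_ab, phi_bc) / Z
```

Finding a good contraction order for such networks is where much of the recent tensor-network research applies, since the order determines the cost of exact inference.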

Hope this sheds some light! We'd love to hear what you think could be trimmed from the paper to make room for some of the insights we've discussed here.

mroavi (Collaborator) commented Sep 16, 2023

We included a discussion of the trade-off between exact and approximate inference in the paper and explained why we opted for an exact approach. The paper now mentions renowned packages for both approximate and exact inference. We also added a reference to the performance evaluation in our package's documentation, which compares TensorInference.jl with its predecessor JunctionTrees.jl, as well as with Merlin and libDAI.

gdalle (Author) commented Sep 22, 2023

much better!

gdalle closed this as completed Sep 22, 2023