pre-commit autoupdate (#37)
* pre-commit autoupdate

updates:
- https://github.com/charliermarsh/ruff-pre-commit → https://github.com/astral-sh/ruff-pre-commit
- [github.com/astral-sh/ruff-pre-commit: v0.0.290 → v0.0.291](astral-sh/ruff-pre-commit@v0.0.290...v0.0.291)

* pre-commit autoupdate

* references.bib: remove fields file, langid, annotation added by Zotero auto-export

---------

Co-authored-by: Janosh Riebesell <[email protected]>
pre-commit-ci[bot] and janosh authored Oct 8, 2023
1 parent c1db361 commit 89a08ea
Showing 3 changed files with 17 additions and 30 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test.yml
@@ -8,7 +8,7 @@ on:
    branches: [main]
    paths: ["**/*.py", ".github/workflows/test.yml"]
  release:
-    types: [published, edited]
+    types: [published]

jobs:
  tests:
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -6,8 +6,8 @@ default_stages: [commit]
default_install_hook_types: [pre-commit, commit-msg]

repos:
-  - repo: https://github.com/charliermarsh/ruff-pre-commit
-    rev: v0.0.290
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: v0.0.292
    hooks:
      - id: ruff
        args: [--fix]
@@ -23,7 +23,7 @@ repos:
      - id: mypy

  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.4.0
+    rev: v4.5.0
    hooks:
      - id: check-case-conflict
      - id: check-symlinks
@@ -35,7 +35,7 @@ repos:
      - id: trailing-whitespace

  - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.5
+    rev: v2.2.6
    hooks:
      - id: codespell
        stages: [commit, commit-msg]
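For context: pre-commit.ci produces commits like this one by running pre-commit's built-in `autoupdate` command against the config above and pushing the result. A minimal local sketch in Python (assuming the `pre-commit` CLI is installed and on PATH; the `autoupdate_and_check` helper is hypothetical):

```python
import subprocess

def autoupdate_and_check() -> None:
    """Hypothetical helper: bump every hook in .pre-commit-config.yaml to its
    latest tagged release, then re-run all hooks to confirm they still pass."""
    subprocess.run(["pre-commit", "autoupdate"], check=True)  # rewrites the rev: fields
    subprocess.run(["pre-commit", "run", "--all-files"], check=True)  # sanity check

if __name__ == "__main__":
    autoupdate_and_check()
```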
37 changes: 12 additions & 25 deletions paper/references.bib
@@ -12,9 +12,7 @@ @online{biewald_weights_2020
  url = {https://docs.wandb.ai/company/academics},
  urldate = {2022-08-21},
  abstract = {Machine Learning Experiment Tracking},
-  organization = {{Use Weights \& Biases for free to track experiments, collaborate, and publish results}},
-  annotation = {Software available from wandb.com},
-  file = {/Users/janosh/Library/Zotero/storage/N2S7PBIM/academics.html}
+  organization = {{Use Weights \& Biases for free to track experiments, collaborate, and publish results}}
}

@software{bradbury_jax_2018,
@@ -41,16 +39,15 @@ @online{fey_fast_2019
  title = {Fast {{Graph Representation Learning}} with {{PyTorch Geometric}}},
  author = {Fey, Matthias and Lenssen, Jan Eric},
  date = {2019-04-25},
-  number = {arXiv:1903.02428},
-  eprint = {arXiv:1903.02428},
+  eprint = {1903.02428},
  eprinttype = {arxiv},
  eprintclass = {cs, stat},
  doi = {10.48550/arXiv.1903.02428},
  url = {http://arxiv.org/abs/1903.02428},
  urldate = {2022-08-27},
  abstract = {We introduce PyTorch Geometric, a library for deep learning on irregularly structured input data such as graphs, point clouds and manifolds, built upon PyTorch. In addition to general graph data structures and processing methods, it contains a variety of recently published methods from the domains of relational learning and 3D data processing. PyTorch Geometric achieves high data throughput by leveraging sparse GPU acceleration, by providing dedicated CUDA kernels and by introducing efficient mini-batch handling for input examples of different size. In this work, we present the library in detail and perform a comprehensive comparative study of the implemented methods in homogeneous evaluation scenarios.},
  pubstate = {preprint},
-  keywords = {Computer Science - Machine Learning,Statistics - Machine Learning},
-  file = {/Users/janosh/Library/Zotero/storage/LATHCC4B/Fey and Lenssen - 2019 - Fast Graph Representation Learning with PyTorch Ge.pdf;/Users/janosh/Library/Zotero/storage/L5LDP64L/1903.html}
+  keywords = {Computer Science - Machine Learning,Statistics - Machine Learning}
}

@article{harris_array_2020,
@@ -69,9 +66,7 @@ @article{harris_array_2020
  urldate = {2022-08-21},
  abstract = {Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis.},
  issue = {7825},
-  langid = {english},
-  keywords = {Computational neuroscience,Computational science,Computer science,Software,Solar physics},
-  file = {/Users/janosh/Library/Zotero/storage/CIAH2SZA/Harris et al. - 2020 - Array programming with NumPy.pdf;/Users/janosh/Library/Zotero/storage/5DU7NHQF/s41586-020-2649-2.html}
+  keywords = {Computational neuroscience,Computational science,Computer science,Software,Solar physics}
}

@article{jumper_highly_2021,
@@ -89,9 +84,7 @@ @article{jumper_highly_2021
  urldate = {2022-08-27},
  abstract = {Proteins are essential to life, and understanding their structure can facilitate a mechanistic understanding of their function. Through an enormous experimental effort1–4, the structures of around 100,000 unique proteins have been determined5, but this represents a small fraction of the billions of known protein sequences6,7. Structural coverage is bottlenecked by the months to years of painstaking effort required to determine a single protein structure. Accurate computational approaches are needed to address this gap and to enable large-scale structural bioinformatics. Predicting the three-dimensional structure that a protein will adopt based solely on its amino acid sequence—the structure prediction component of the ‘protein folding problem’8—has been an important open research problem for more than 50~years9. Despite recent progress10–14, existing methods fall far~short of atomic accuracy, especially when no homologous structure is available. Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14)15, demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods. Underpinning the latest version of AlphaFold is a novel machine learning approach that incorporates physical and biological knowledge about protein structure, leveraging multi-sequence alignments, into the design of the deep learning algorithm.},
  issue = {7873},
-  langid = {english},
-  keywords = {Computational biophysics,Machine learning,Protein structure predictions,Structural biology},
-  file = {/Users/janosh/Library/Zotero/storage/PQWUB2WI/Jumper et al. - 2021 - Highly accurate protein structure prediction with .pdf;/Users/janosh/Library/Zotero/storage/I65KYK5J/s41586-021-03819-2.html}
+  keywords = {Computational biophysics,Machine learning,Protein structure predictions,Structural biology}
}

@unpublished{kendall_what_2017,
@@ -104,9 +97,7 @@ @unpublished{kendall_what_2017
  url = {http://arxiv.org/abs/1703.04977},
  urldate = {2020-08-31},
  abstract = {There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.},
-  keywords = {Computer Science - Computer Vision and Pattern Recognition},
-  annotation = {ZSCC: 0001077},
-  file = {/Users/janosh/Library/Zotero/storage/KQW23PK7/Kendall and Gal - 2017 - What Uncertainties Do We Need in Bayesian Deep Lea.pdf;/Users/janosh/Library/Zotero/storage/8KZHR6TM/1703.html}
+  keywords = {Computer Science - Computer Vision and Pattern Recognition}
}

@article{kirkpatrick_pushing_2021,
@@ -121,9 +112,7 @@ @article{kirkpatrick_pushing_2021
  issn = {0036-8075, 1095-9203},
  doi = {10.1126/science.abj6511},
  url = {https://www.science.org/doi/10.1126/science.abj6511},
-  urldate = {2022-01-04},
-  langid = {english},
-  file = {/Users/janosh/Library/Zotero/storage/8FQHI8V5/Kirkpatrick et al. - 2021 - Pushing the frontiers of density functionals by so.pdf;/Users/janosh/Library/Zotero/storage/9Z7UYPZZ/science.abj6511.pdf}
+  urldate = {2022-01-04}
}

@unpublished{lakshminarayanan_simple_2016,
@@ -136,23 +125,21 @@ @unpublished{lakshminarayanan_simple_2016
  url = {http://arxiv.org/abs/1612.01474},
  urldate = {2019-03-03},
  abstract = {Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.},
-  keywords = {Computer Science - Machine Learning,Statistics - Machine Learning},
-  file = {/Users/janosh/Library/Zotero/storage/HI8NT3P9/Lakshminarayanan et al. - 2016 - Simple and Scalable Predictive Uncertainty Estimat.pdf;/Users/janosh/Library/Zotero/storage/LT283PWQ/1612.html}
+  keywords = {Computer Science - Machine Learning,Statistics - Machine Learning}
}

@online{paszke_pytorch_2019,
  title = {{{PyTorch}}: {{An Imperative Style}}, {{High-Performance Deep Learning Library}}},
  shorttitle = {{{PyTorch}}},
  author = {Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and Köpf, Andreas and Yang, Edward and DeVito, Zach and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
  date = {2019-12-03},
-  number = {arXiv:1912.01703},
-  eprint = {arXiv:1912.01703},
+  eprint = {1912.01703},
  eprinttype = {arxiv},
  eprintclass = {cs, stat},
  doi = {10.48550/arXiv.1912.01703},
  url = {http://arxiv.org/abs/1912.01703},
  urldate = {2022-08-27},
  abstract = {Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.},
  pubstate = {preprint},
-  keywords = {Computer Science - Machine Learning,Computer Science - Mathematical Software,Statistics - Machine Learning},
-  file = {/Users/janosh/Library/Zotero/storage/BQV27DIB/Paszke et al. - 2019 - PyTorch An Imperative Style, High-Performance Dee.pdf;/Users/janosh/Library/Zotero/storage/YYMGM452/1912.html}
+  keywords = {Computer Science - Machine Learning,Computer Science - Mathematical Software,Statistics - Machine Learning}
}
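Every references.bib hunk above follows the same pattern: drop the `file`, `langid`, and `annotation` fields that Zotero's auto-export adds. A rough sketch of automating that cleanup (hypothetical script, not part of this commit; assumes each dropped field fits on one line, as it does in this file):

```python
import re
from pathlib import Path

# Fields that Zotero auto-export adds but the paper's bibliography doesn't need.
DROP_FIELDS = ("file", "langid", "annotation")

def strip_fields(bib_text: str) -> str:
    """Drop unwanted single-line BibTeX fields, then remove any comma
    left dangling before an entry's closing brace."""
    field_pat = re.compile(
        rf"^\s*(?:{'|'.join(DROP_FIELDS)})\s*=\s*\{{.*\}},?\s*\n", re.MULTILINE
    )
    cleaned = field_pat.sub("", bib_text)
    # if the deleted field was the last in an entry, the previous field's
    # trailing comma now sits right before the closing brace; strip it
    return re.sub(r",(\s*\n\})", r"\1", cleaned)

bib_path = Path("paper/references.bib")
bib_path.write_text(strip_fields(bib_path.read_text()))
```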
