Downloads paper PDFs and other information from major AI conferences (when publicly available) and stores them in `./data/`, creating one directory per conference and one per year. More specifically, it creates the following structure:
```
.
└── data
    └── conf
        └── year
            ├── abstracts.csv   # format `title|abstract`
            ├── authors.csv     # format `title;authors`
            ├── paper_info.csv  # format `title;abstract_url;pdf_url`
            └── papers
                ├── paper01.pdf # pdf file from a paper
                ├── paper02.pdf
                ├── ...
                └── paperN.pdf
```
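For illustration, here is a minimal Python sketch for consuming these files, following the delimiters noted above (the conference and year in the path are hypothetical; substitute whatever you downloaded):

```python
import csv

# Hypothetical path; replace with the conference/year directory you scraped.
with open("data/cvpr/2022/abstracts.csv", newline="", encoding="utf-8") as f:
    # abstracts.csv uses `|` as the field separator: title|abstract
    for row in csv.reader(f, delimiter="|"):
        title, abstract = row[0], row[1]
        print(title)
```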
Based on CVPR_paper_search_tool by Jin Yamanaka, I decided to split the code into multiple projects:
- this project - downloads paper PDFs and other information from major AI conferences
- AI Papers Cleaner - extracts text from paper PDFs and abstracts, and removes uninformative words
- AI Papers Search Tool - automatic paper clustering
- AI Papers Searcher - web app to search papers by keywords or similar papers
- AI Conferences Info - contains the titles, abstracts, URLs, and author names extracted from the papers
This project currently supports the following conferences, from 2017 onwards:
Source | Conferences |
---|---|
AAAI Library | AAAI |
ACL Anthology | ACL, COLING (even years), EACL (odd years, except 2019), EMNLP, Findings (2020 and on), IJCNLP (odd years), NAACL (except 2017 and 2020), SIGDIAL, TACL |
European Computer Vision Association | ECCV (even years) |
International Joint Conferences on Artificial Intelligence Organization | IJCAI |
KDD | SIGKDD (abstracts only) |
Proceedings of Machine Learning Research | ICML |
NeurIPS Proceedings | NeurIPS |
NeurIPS Datasets and Benchmarks Proceedings | NeurIPS Track on Datasets and Benchmarks (2021) |
OpenReview | ICLR |
SIGCHI | SIGCHI (2018 and on, abstracts only) |
SIGGRAPH | SIGGRAPH (2017 and on, abstracts only) |
The Computer Vision Foundation open access | CVPR, ICCV (odd years), WACV (2020 and on) |
To run this project, you need Docker or, for local installation:
- Python 3.10+
- Poetry
To make it easier to run the code, with or without Docker, I created a few helpers. Both ways use `start_here.sh` as an entry point. Since there are a few quirks when calling the scrapers, this file contains all the commands needed to run the code. All you need to do is uncomment the relevant lines inside the `conferences` array and run the script.
You first need to install Python Poetry. Then, you can install the dependencies and run the code:

```bash
poetry install
bash start_here.sh
```
To help with the Docker setup, I created a `Dockerfile` and a `Makefile`. The `Dockerfile` contains all the instructions to create the Docker image. The `Makefile` contains the commands to build the image, run the container, and run the code inside the container. To build the image, simply run:

```bash
make
```
To call `start_here.sh` inside the container, run:

```bash
make run
```
To run the interactive scrapy shell inside a Docker container, run:

```bash
make RUN_STRING="scrapy shell 'https://your.site.com'" run
```
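Inside the shell, Scrapy exposes the fetched page as `response`, so you can prototype selectors before adding them to a spider. A small example of what you might type (the selectors are generic, not specific to any conference site):

```python
# `response` holds the page given on the command line above.
response.css("a::attr(href)").getall()    # extract all link URLs
response.xpath("//title/text()").get()    # extract the page title
fetch("https://your.site.com/other-page") # load another page into `response`
```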
Since some conferences have multiple sources, I created a `source_url` field to help when recreating the URLs in AI Papers Searcher. The field holds an integer that identifies the source. The following table shows the available sources:
source_url | Source |
---|---|
-1 | auto detect |
0 | openreview |
1 | aaai |
2 | acl |
3 | eccv |
4 | ijcai |
5 | kdd |
6 | icml |
7 | neurips |
8 | sigchi |
9 | cvf |
10 | arxiv |
11 | siggraph |
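In code, the table above translates directly into a lookup. A minimal sketch (the dictionary mirrors the table; the helper function name is hypothetical, not part of this project):

```python
# Mirrors the source_url table above.
SOURCES = {
    -1: "auto detect",
    0: "openreview",
    1: "aaai",
    2: "acl",
    3: "eccv",
    4: "ijcai",
    5: "kdd",
    6: "icml",
    7: "neurips",
    8: "sigchi",
    9: "cvf",
    10: "arxiv",
    11: "siggraph",
}

def source_name(source_url: int) -> str:
    """Hypothetical helper: map a `source_url` value to its source name."""
    return SOURCES.get(source_url, "unknown")
```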