CLI tool for retrieving JSON from paginated APIs.
This tool works against APIs that use the HTTP Link header for pagination. The GitHub API is one example of this.
Install this tool using pip:
pip install paginate-json
Or use pipx:
pipx install paginate-json
Run this tool against a URL that returns a JSON list of items and uses the Link HTTP header to indicate the URL of the next page of results.
It will output a single JSON list containing all of the records, across multiple pages.
paginate-json \
https://api.github.com/users/simonw/events
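Under the hood, Link-header pagination means reading the rel="next" URL out of each response's Link header and fetching it until no such link remains. Here is a minimal sketch of parsing that header in Python; it is illustrative only (a real implementation should follow RFC 8288, and paginate-json's own code may differ):

```python
def parse_link_header(value):
    """Parse an HTTP Link header value into a {rel: url} dict.

    Simplified sketch: assumes URLs contain no commas and every
    link carries a rel parameter, which holds for the GitHub API.
    """
    links = {}
    for part in value.split(","):
        url_part, _, params = part.partition(";")
        url = url_part.strip().lstrip("<").rstrip(">")
        for param in params.split(";"):
            key, _, val = param.strip().partition("=")
            if key == "rel":
                links[val.strip('"')] = url
    return links


header = (
    '<https://api.github.com/users/simonw/events?page=2>; rel="next", '
    '<https://api.github.com/users/simonw/events?page=5>; rel="last"'
)
# A paginating client keeps requesting links["next"] until the
# server stops including it in the response.
links = parse_link_header(header)
```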
You can use the --header option to send additional request headers. For example, if you have a GitHub OAuth token you can pass it like this:
paginate-json \
https://api.github.com/users/simonw/events \
--header Authorization "bearer e94d9e404d86..."
Some APIs may return a root-level object where the items you wish to gather are stored in a key, like this example from the Datasette JSON API:
{
"ok": true,
"rows": [
{
"id": 1,
"name": "San Francisco"
},
{
"id": 2,
"name": "Los Angeles"
},
{
"id": 3,
"name": "Detroit"
},
{
"id": 4,
"name": "Memnonia"
}
]
}
In this case, use --key rows to specify which key to extract the items from:
paginate-json \
https://latest.datasette.io/fixtures/facet_cities.json \
--key rows
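In code terms, --key boils down to a simple lookup on each fetched page before the items are accumulated. A sketch using the Datasette example above (the function name is illustrative, not taken from paginate-json's source):

```python
def extract_items(page, key=None):
    """Return the list of items from one page of results.

    If a key was given the page is an object and the items live
    under page[key]; otherwise the page itself is the list.
    """
    return page[key] if key is not None else page


page = {"ok": True, "rows": [{"id": 1, "name": "San Francisco"}]}
rows = extract_items(page, key="rows")
```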
The output JSON will be streamed as a pretty-printed JSON array by default.
To switch to newline-delimited JSON, with a separate object on each line, add --nl:
paginate-json \
https://latest.datasette.io/fixtures/facet_cities.json \
--key rows \
--nl
The output from that command looks like this:
{"id": 1, "name": "San Francisco"}
{"id": 2, "name": "Los Angeles"}
{"id": 3, "name": "Detroit"}
{"id": 4, "name": "Memnonia"}
This tool works well in conjunction with sqlite-utils. For example, here's how to load all of the GitHub issues for a project into a local SQLite database.
paginate-json \
"https://api.github.com/repos/simonw/datasette/issues?state=all&filter=all" \
--nl | \
sqlite-utils upsert /tmp/issues.db issues - --nl --pk=id
You can then use other features of sqlite-utils to enhance the resulting database. For example, to enable full-text search on the issue title and body columns:
sqlite-utils enable-fts /tmp/issues.db issues title body
If you install the optional jq or pyjq dependency you can also pass --jq PROGRAM to transform the results of each page using a jq program. The jq program you supply should transform each page of fetched results into an array of objects.
For example, to extract the id and title from each issue:
paginate-json \
"https://api.github.com/repos/simonw/datasette/issues" \
--nl \
--jq 'map({id, title})'
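The jq program map({id, title}) builds a new object containing just those two keys from every item in the page. For comparison, the equivalent transformation in plain Python (the sample issue objects below are made up for illustration):

```python
page = [
    {"id": 1, "title": "First issue", "state": "open", "comments": 3},
    {"id": 2, "title": "Second issue", "state": "closed", "comments": 0},
]

# Same effect as jq's map({id, title}): keep only the named keys
slimmed = [{"id": item["id"], "title": item["title"]} for item in page]
```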
If you installed paginate-json using pipx you can inject the extra dependency into the correct virtual environment like this:
pipx inject paginate-json jq
Usage: paginate-json [OPTIONS] URL
Fetch paginated JSON from a URL
Example usage:
paginate-json https://api.github.com/repos/simonw/datasette/issues
Options:
--version Show the version and exit.
--nl Output newline-delimited JSON
--key TEXT Top-level key to extract from each page
--jq TEXT jq transformation to run on each page
--accept TEXT Accept header to send
--sleep INTEGER Seconds to delay between requests
--silent Don't show progress on stderr (the default)
-v, --verbose Show progress on stderr
--show-headers Dump response headers out to stderr
--ignore-http-errors Keep going on non-200 HTTP status codes
--header <TEXT TEXT>... Send custom request headers
--help Show this message and exit.