Backport fixes from Substrate branch #1169

Merged · 1 commit · Dec 16, 2024
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -10,7 +10,11 @@ Releases prior to 7.0 have been removed from this file to declutter search results

### Fixed

- cli: Don't wrap exceptions with `CallbackError` to avoid shadowing the original exception.
- cli: Fixed `--template` option being ignored when `--quiet` flag is set.
- config: Fixed setting default loglevels when `logging` is a dict.
- config: Fixed parsing config files after updating to pydantic 2.10.3.
- metrics: Fixed indexed objects counter.

## [8.1.2] - 2024-12-10

14 changes: 9 additions & 5 deletions Makefile
@@ -13,9 +13,17 @@ FRONTEND_PATH=../interface
help: ## Show this help (default)
@grep -Fh "##" $(MAKEFILE_LIST) | grep -Fv grep -F | sed -e 's/\\$$//' | sed -e 's/##//'

install:
##
##-- Dependencies
##

install: ## Install dependencies
pdm sync --clean

update: ## Update dependencies and dump requirements.txt
pdm update
pdm export --without-hashes -f requirements --prod -o requirements.txt

##
##-- CI
##
@@ -83,10 +91,6 @@ typeignore: ## Find type:ignore comments
##-- Release
##

update: ## Update dependencies and dump requirements.txt
pdm update
pdm export --without-hashes -f requirements --prod -o requirements.txt

demos: ## Recreate demo projects from templates
python scripts/demos.py render ${DEMO}
python scripts/demos.py init ${DEMO}
16 changes: 10 additions & 6 deletions docs/0.quickstart-evm.md
@@ -9,6 +9,8 @@ network: "ethereum"

This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details.

A selective blockchain indexer is an application that extracts and organizes specific blockchain data from multiple data sources, rather than processing all blockchain data. It allows users to index only relevant entities, reducing storage and computational requirements compared to full node indexing, and query data more efficiently for specific use cases. Think of it as a customizable filter that captures and stores only the blockchain data you need, making data retrieval faster and more resource-efficient. DipDup is a framework that helps you implement such an indexer.
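To make the "customizable filter" idea concrete, here is a toy sketch in plain Python (this is not DipDup code; the event shape and helper name are invented for illustration):

```python
# Toy illustration of selective indexing: keep only Transfer events
# emitted by one contract and ignore everything else in the stream.
USDT = "0xdac17f958d2ee523a2206206994597c13d831ec7"

def select_events(raw_events, contract=USDT, name="Transfer"):
    """Yield only events matching the given contract address and event name."""
    for event in raw_events:
        if event["address"].lower() == contract and event["name"] == name:
            yield event

events = [
    {"address": USDT, "name": "Transfer", "args": {"value": 100}},
    {"address": "0x0000000000000000000000000000000000000000", "name": "Transfer", "args": {"value": 5}},
    {"address": USDT, "name": "Approval", "args": {"value": 7}},
]
selected = list(select_events(events))
print(len(selected))  # 1
```

A real indexer applies the same kind of filter on the data-source side, so irrelevant data is never fetched, stored, or processed.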

Let's create an indexer for the [USDt token contract](https://etherscan.io/address/0xdac17f958d2ee523a2206206994597c13d831ec7). Our goal is to save all token transfers to the database and then calculate some statistics of its holders' activity.

## Install DipDup
@@ -29,10 +31,10 @@ DipDup CLI has a built-in project generator. Run the following command in your terminal:
dipdup new
```

Choose `EVM` network and `demo_evm_events` template.
Choose `From template`, then `EVM` network and `demo_evm_events` template.

::banner{type="note"}
Want to skip a tutorial and start from scratch? Choose `[none]` and `demo_blank` instead and proceed to the [Config](../docs/1.getting-started/3.config.md) section.
Want to skip a tutorial and start from scratch? Choose `Blank` at the first step instead and proceed to the [Config](../docs/1.getting-started/3.config.md) section.
::

Follow the instructions; the project will be created in the new directory.
@@ -96,9 +98,9 @@ That's a lot of files and directories! But don't worry, we will need only `model

## Define data models

DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use custom ORM based on Tortoise ORM as an abstraction layer.
DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use a modified [Tortoise ORM](https://tortoise.github.io/) library as an abstraction layer.

First, you need to define a model class. Our schema will consist of a single model `Holder` with the following fields:
First, you need to define a model class. DipDup uses model definitions both for the database schema and the autogenerated GraphQL API. Our schema will consist of a single model `Holder` with the following fields:

| Field | Description |
| ----------- | ----------------------------------- |
@@ -114,6 +116,8 @@ Here's how to define this model in DipDup:
{{ #include ../src/demo_evm_events/models/__init__.py }}
```

Using an ORM is not a requirement; DipDup provides helpers to run SQL queries and scripts directly. See the [Database](1.getting-started/5.database.md) page.

## Implement handlers

Everything's ready to implement an actual indexer logic.
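The handler body itself is collapsed in this diff, but the bookkeeping it performs can be sketched in plain Python (an illustration of the logic, not the generated DipDup handler; names and field shapes are assumed):

```python
from collections import defaultdict
from decimal import Decimal

# In the real project this state lives in the Holder table;
# a plain dict stands in for it here.
balances: defaultdict[str, Decimal] = defaultdict(Decimal)

def on_transfer(from_: str, to: str, value: Decimal) -> None:
    """Apply one Transfer event: debit the sender, credit the receiver."""
    balances[from_] -= value
    balances[to] += value

on_transfer("0xaaa", "0xbbb", Decimal("10"))
on_transfer("0xbbb", "0xccc", Decimal("4"))
print(balances["0xbbb"])  # 6
```

`Decimal` (rather than `float`) is the usual choice for token amounts, since transfer values must sum exactly.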
@@ -134,7 +138,7 @@ Run the indexer in memory:
dipdup run
```

Store data in SQLite database:
Store data in a SQLite database (the database file defaults to `/tmp`; set the `SQLITE_PATH` environment variable to change it):

```shell
dipdup -c . -c configs/dipdup.sqlite.yaml run
@@ -154,7 +158,7 @@ DipDup will fetch all the historical data and then switch to realtime updates. Y
If you use SQLite, run this query to check the data:

```bash
sqlite3 demo_evm_events.sqlite 'SELECT * FROM holder LIMIT 10'
sqlite3 /tmp/demo_evm_events.sqlite 'SELECT * FROM holder LIMIT 10'
```

If you run a Compose stack, open `http://127.0.0.1:8080` in your browser to see the Hasura console (the exposed port may differ). You can use it to explore the database and build GraphQL queries.
16 changes: 10 additions & 6 deletions docs/0.quickstart-starknet.md
@@ -9,6 +9,8 @@ network: "starknet"

This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details.

A selective blockchain indexer is an application that extracts and organizes specific blockchain data from multiple data sources, rather than processing all blockchain data. It allows users to index only relevant entities, reducing storage and computational requirements compared to full node indexing, and query data more efficiently for specific use cases. Think of it as a customizable filter that captures and stores only the blockchain data you need, making data retrieval faster and more resource-efficient. DipDup is a framework that helps you implement such an indexer.

Let's create an indexer for the [USDt token contract](https://starkscan.co/contract/0x68f5c6a61780768455de69077e07e89787839bf8166decfbf92b645209c0fb8). Our goal is to save all token transfers to the database and then calculate some statistics of its holders' activity.
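The "statistics of its holders' activity" part boils down to simple aggregation over stored transfers; a plain-Python sketch with invented sample data (not the demo project's actual stats code):

```python
from collections import Counter

# Invented sample: (sender, receiver) pairs for a handful of transfers.
transfers = [
    ("0xaaa", "0xbbb"),
    ("0xaaa", "0xccc"),
    ("0xaaa", "0xddd"),
    ("0xbbb", "0xccc"),
]

# Count how many transfers each address took part in.
activity: Counter[str] = Counter()
for sender, receiver in transfers:
    activity[sender] += 1
    activity[receiver] += 1

print(activity.most_common(1))  # [('0xaaa', 3)]
```

In the real project the same aggregation would be a database query over the `Holder` table rather than an in-memory counter.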

## Install DipDup
@@ -29,10 +31,10 @@ DipDup CLI has a built-in project generator. Run the following command in your terminal:
dipdup new
```

Choose `Starknet` network and `demo_starknet_events` template.
Choose `From template`, then `Starknet` network and `demo_starknet_events` template.

::banner{type="note"}
Want to skip a tutorial and start from scratch? Choose `[none]` and `demo_blank` instead and proceed to the [Config](../docs/1.getting-started/3.config.md) section.
Want to skip a tutorial and start from scratch? Choose `Blank` at the first step instead and proceed to the [Config](../docs/1.getting-started/3.config.md) section.
::

Follow the instructions; the project will be created in the new directory.
@@ -96,9 +98,9 @@ That's a lot of files and directories! But don't worry, we will need only `model

## Define data models

DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use custom ORM based on Tortoise ORM as an abstraction layer.
DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use a modified [Tortoise ORM](https://tortoise.github.io/) library as an abstraction layer.

First, you need to define a model class. Our schema will consist of a single model `Holder` with the following fields:
First, you need to define a model class. DipDup uses model definitions both for the database schema and the autogenerated GraphQL API. Our schema will consist of a single model `Holder` with the following fields:

| Field | Description |
| ----------- | ----------------------------------- |
@@ -114,6 +116,8 @@ Here's how to define this model in DipDup:
{{ #include ../src/demo_starknet_events/models/__init__.py }}
```

Using an ORM is not a requirement; DipDup provides helpers to run SQL queries and scripts directly. See the [Database](1.getting-started/5.database.md) page.

## Implement handlers

Everything's ready to implement an actual indexer logic.
@@ -134,7 +138,7 @@ Run the indexer in memory:
dipdup run
```

Store data in SQLite database:
Store data in a SQLite database (the database file defaults to `/tmp`; set the `SQLITE_PATH` environment variable to change it):

```shell
dipdup -c . -c configs/dipdup.sqlite.yaml run
@@ -154,7 +158,7 @@ DipDup will fetch all the historical data and then switch to realtime updates. Y
If you use SQLite, run this query to check the data:

```bash
sqlite3 demo_starknet_events.sqlite 'SELECT * FROM holder LIMIT 10'
sqlite3 /tmp/demo_starknet_events.sqlite 'SELECT * FROM holder LIMIT 10'
```

If you run a Compose stack, open `http://127.0.0.1:8080` in your browser to see the Hasura console (the exposed port may differ). You can use it to explore the database and build GraphQL queries.
16 changes: 10 additions & 6 deletions docs/0.quickstart-tezos.md
@@ -9,6 +9,8 @@ network: "tezos"

This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details.

A selective blockchain indexer is an application that extracts and organizes specific blockchain data from multiple data sources, rather than processing all blockchain data. It allows users to index only relevant entities, reducing storage and computational requirements compared to full node indexing, and query data more efficiently for specific use cases. Think of it as a customizable filter that captures and stores only the blockchain data you need, making data retrieval faster and more resource-efficient. DipDup is a framework that helps you implement such an indexer.

Let's create an indexer for the [tzBTC FA1.2 token contract](https://tzkt.io/KT1PWx2mnDueood7fEmfbBDKx1D9BAnnXitn/operations/). Our goal is to save all token transfers to the database and then calculate some statistics of its holders' activity.

## Install DipDup
@@ -29,10 +31,10 @@ DipDup CLI has a built-in project generator. Run the following command in your terminal:
dipdup new
```

Choose `Tezos` network and `demo_tezos_token` template.
Choose `From template`, then `Tezos` network and `demo_tezos_token` template.

::banner{type="note"}
Want to skip a tutorial and start from scratch? Choose `[none]` and `demo_blank` instead and proceed to the [Config](./1.getting-started/3.config.md) section.
Want to skip a tutorial and start from scratch? Choose `Blank` at the first step instead and proceed to the [Config](./1.getting-started/3.config.md) section.
::

Follow the instructions; the project will be created in the new directory.
@@ -99,9 +101,9 @@ That's a lot of files and directories! But don't worry, we will need only `model

## Define data models

DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use custom ORM based on Tortoise ORM as an abstraction layer.
DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use a modified [Tortoise ORM](https://tortoise.github.io/) library as an abstraction layer.

First, you need to define a model class. Our schema will consist of a single model `Holder` with the following fields:
First, you need to define a model class. DipDup uses model definitions both for the database schema and the autogenerated GraphQL API. Our schema will consist of a single model `Holder` with the following fields:

| Field | Description |
| ----------- | ----------------------------------- |
@@ -117,6 +119,8 @@ Here's how to define this model in DipDup:
{{ #include ../src/demo_tezos_token/models/__init__.py }}
```

Using an ORM is not a requirement; DipDup provides helpers to run SQL queries and scripts directly. See the [Database](1.getting-started/5.database.md) page.

## Implement handlers

Everything's ready to implement an actual indexer logic.
@@ -147,7 +151,7 @@ Run the indexer in memory:
dipdup run
```

Store data in SQLite database:
Store data in a SQLite database (the database file defaults to `/tmp`; set the `SQLITE_PATH` environment variable to change it):

```shell
dipdup -c . -c configs/dipdup.sqlite.yaml run
@@ -167,7 +171,7 @@ DipDup will fetch all the historical data and then switch to realtime updates. Y
If you use SQLite, run this query to check the data:

```bash
sqlite3 demo_tezos_token.sqlite 'SELECT * FROM holder LIMIT 10'
sqlite3 /tmp/demo_tezos_token.sqlite 'SELECT * FROM holder LIMIT 10'
```

If you run a Compose stack, open `http://127.0.0.1:8080` in your browser to see the Hasura console (the exposed port may differ). You can use it to explore the database and build GraphQL queries.
2 changes: 1 addition & 1 deletion docs/1.getting-started/10.hooks.md
@@ -62,7 +62,7 @@ async def calculate_stats(
major: bool,
depth: int,
) -> None:
await ctx.execute_sql('calculate_stats')
await ctx.execute_sql_script('calculate_stats')
```

By default, hooks execute SQL scripts from the corresponding subdirectory of `sql/`. Remove or comment out the `ctx.execute_sql_script` call to prevent it.
6 changes: 3 additions & 3 deletions docs/1.getting-started/5.database.md
@@ -60,18 +60,18 @@ SELECT created_at FROM dipdup_schema WHERE name = 'public';

## SQL scripts

Put your `*.sql` scripts to `{{ project.package }}/sql`. You can run these scripts from any callback with `ctx.execute_sql('name')`. If `name` is a directory, each script it contains will be executed.
Put your `*.sql` scripts in `{{ project.package }}/sql`. You can run these scripts from any callback with `ctx.execute_sql_script('name')`. If `name` is a directory, each script it contains will be executed.

Scripts are executed without being wrapped in SQL transactions. It's generally a good idea to avoid touching table data in scripts.

By default, an empty `sql/<hook_name>` directory is generated for every hook in the config during init. Remove the `ctx.execute_sql_script` call from the hook callback to avoid executing them.

```python
# Execute all scripts in sql/my_hook directory
await ctx.execute_sql('my_hook')
await ctx.execute_sql_script('my_hook')

# Execute a single script
await ctx.execute_sql('my_hook/my_script.sql')
await ctx.execute_sql_script('my_hook/my_script.sql')
```
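DipDup's `execute_sql_script` itself is out of scope here, but the stdlib `sqlite3` module gives a rough feel for what running such a script amounts to (the `holder` table and the script body are invented for this example):

```python
import sqlite3

# Statements in a script run one by one rather than inside a single
# wrapping transaction, so idempotent DDL is the safest content.
script = """
CREATE TABLE IF NOT EXISTS holder (address TEXT PRIMARY KEY, balance INTEGER);
CREATE INDEX IF NOT EXISTS idx_holder_balance ON holder (balance);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(script)
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
)]
print(tables)  # ['holder']
```

Because the statements use `IF NOT EXISTS`, the script stays safe to re-run, which matters when a hook fires more than once.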

## Helper functions
2 changes: 1 addition & 1 deletion docs/5.advanced/5.backups.md
@@ -110,7 +110,7 @@ async def on_index_rollback(
from_level: int,
to_level: int,
) -> None:
await ctx.execute_sql('on_index_rollback')
await ctx.execute_sql_script('on_index_rollback')

database_config: Union[SqliteDatabaseConfig, PostgresDatabaseConfig] = ctx.config.database
