Update readme
Michael Zaikin committed Jan 11, 2024
1 parent 18dec75 commit 74f7608
Showing 10 changed files with 269 additions and 50 deletions.
1 change: 1 addition & 0 deletions Cargo.lock

2 changes: 1 addition & 1 deletion Makefile
@@ -100,4 +100,4 @@ run-listener:

run-spammer:
	cargo build --bin simple-spammer
	RUST_LOG=info ./target/debug/simple-spammer --endpoint $(ENDPOINT) --sleep $(SLEEP)
	RUST_LOG=info ./target/debug/simple-spammer --endpoint $(ENDPOINT) --sleep $(SLEEP)
119 changes: 112 additions & 7 deletions README.md
@@ -37,19 +37,124 @@ sudo apt install protobuf-compiler clang

## How to run

### Local consensus benchmark
### Rollup operator

Check out the [instructions](./benchmark/README.md)
Start with:

### Remote consensus benchmark
```
make run-operator
```

You will end up inside the Docker container shell.
Every time you invoke this target, the kernel and the Docker image are rebuilt, and any existing Docker volume and running container are removed.

#### Generate new keys

For convenience, your local `.tezos-client` folder is mapped into the container so that your keys are preserved. On the first launch you need to create a new keypair; to do that, run the following inside the operator shell:

```
$ operator generate_key
```

#### Check account info

If you already have a key, check its balance: it should be at least 10k tez to operate a rollup; otherwise, top up the balance from the faucet. To get your account address:

### Local DSN setup
```
$ operator account_info
```

#### Originate rollup

```
$ operator deploy_rollup
```

Rollup data is persisted, meaning that you can restart the container without data loss. If you call this command again, it will tell you that there is an existing rollup configuration. Use the `--force` flag to remove all data and originate a new rollup.
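
For example, assuming `--force` is accepted by the same command, re-deploying from scratch would look like:

```
$ operator deploy_rollup --force
```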

#### Run rollup node

```
$ operator run_node
```

Runs the rollup node in synchronous mode, with logs printed to stdout.
The node RPC is also available at `127.0.0.1:8932` on your host machine.

### Local DSN cluster

In order to run 7 consensus nodes on a local machine:
```
make run-dsn
```

Note that the output will be captured. To stop all the nodes, run in another terminal:
```
make kill-dsn
```

The pre-block streaming API will be available at:
- `http://127.0.0.1:64011`
- `http://127.0.0.1:64021`
- `http://127.0.0.1:64031`
- `http://127.0.0.1:64041`
- `http://127.0.0.1:64051`
- `http://127.0.0.1:64061`
- `http://127.0.0.1:64071`

#### Operator
The transaction server API will be available at:
- `http://127.0.0.1:64012`
- `http://127.0.0.1:64022`
- `http://127.0.0.1:64032`
- `http://127.0.0.1:64042`
- `http://127.0.0.1:64052`
- `http://127.0.0.1:64062`
- `http://127.0.0.1:64072`

#### DSN
### Consensus node


### Sequencer

Once you have both the consensus and rollup nodes running, you can launch the sequencer node to test the entire setup:

```
make run-sequencer
```

#### Mocked rollup

It is possible to mock the rollup node and do local pre-block verification instead:

```
make build-sequencer
./target/debug/sequencer --mock-rollup
```

#### Mocked consensus

Similarly, you can make the sequencer generate pre-blocks instead of connecting to a DSN:

```
make build-sequencer
./target/debug/sequencer --mock-consensus
```
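
For orientation, here is a minimal sketch of how a sequencer binary could dispatch on these two flags. It assumes a clap-based CLI and reuses the `run_da_task_with_mocked_consensus` / `run_da_task_with_mocked_rollup` helpers from `sequencer/src/fixture.rs` shown further down in this diff; the flag names come from the README, while the URL and node-id arguments, their defaults, and the module path are illustrative assumptions only:

```rust
use clap::Parser;

// Assumed to come from sequencer/src/fixture.rs (see the diff below);
// the exact module path is a guess.
use crate::fixture::{run_da_task_with_mocked_consensus, run_da_task_with_mocked_rollup};

/// Hypothetical CLI: only `--mock-rollup` and `--mock-consensus` are taken
/// from the README, the rest is illustrative.
#[derive(Parser)]
struct Args {
    /// Verify pre-blocks locally instead of publishing them to a rollup node
    #[arg(long)]
    mock_rollup: bool,
    /// Generate pre-blocks locally instead of subscribing to a DSN
    #[arg(long)]
    mock_consensus: bool,
    #[arg(long, default_value = "http://127.0.0.1:8932")]
    rollup_node_url: String,
    #[arg(long, default_value = "http://127.0.0.1:64011")]
    primary_node_url: String,
    #[arg(long, default_value_t = 1)]
    node_id: u8,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    env_logger::init();
    let args = Args::parse();

    if args.mock_rollup {
        // Real consensus, mocked rollup: fetch pre-blocks and verify them locally.
        run_da_task_with_mocked_rollup(args.primary_node_url).await
    } else if args.mock_consensus {
        // Real rollup, mocked consensus: generate pre-blocks and publish them.
        run_da_task_with_mocked_consensus(args.node_id, args.rollup_node_url).await
    } else {
        anyhow::bail!("the full (non-mocked) pipeline is not shown in this sketch")
    }
}
```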

### DSN listener

You can also subscribe to a remote DSN and listen for incoming pre-blocks:

```
make run-listener ENDPOINT=http://127.0.0.0:64001 FROM_ID=0
```
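
Under the hood this is a pre-block subscription. Below is a rough Rust sketch of the same flow, reusing the `PrimaryClient::subscribe_pre_blocks` call that appears in `sequencer/src/fixture.rs` later in this diff; the `listen` helper, the counting logic, and the exact semantics of `from_id` are illustrative assumptions:

```rust
use std::sync::mpsc;

// Client used by the sequencer (see the diff below); the module path is assumed.
use crate::consensus_client::PrimaryClient;

/// Hypothetical listener: subscribe to a DSN node and count incoming pre-blocks.
/// `endpoint` and `from_id` mirror the `ENDPOINT` / `FROM_ID` Makefile variables.
async fn listen(endpoint: String, from_id: u64) -> anyhow::Result<()> {
    let mut client = PrimaryClient::new(endpoint);
    let (tx, rx) = mpsc::channel();

    // The receiving side runs on a plain thread because `rx` is a std channel.
    std::thread::spawn(move || {
        let mut received = 0u64;
        while rx.recv().is_ok() {
            received += 1;
            println!("pre-blocks received: {}", received);
        }
    });

    // Blocks until the subscription ends or fails.
    client.subscribe_pre_blocks(from_id, tx).await
}
```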

### DSN spammer

To generate a transaction stream for latency testing, run:

```
make run-spammer ENDPOINT=http://127.0.0.0:64003 SLEEP=10
```

#### Sequencer
Every `SLEEP` milliseconds it connects to the remote DSN node and sends a transaction of random size with a timestamp prepended at the beginning. If you also run a listener, you will see stats messages for the incoming pre-blocks.
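
As a rough illustration of that payload layout (a hypothetical `make_payload` helper and encoding inferred from the description above, not the spammer's actual implementation), a transaction could be built as a millisecond timestamp followed by random filler bytes:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

use rand::Rng;

// Build one spammer payload: an 8-byte little-endian timestamp (ms since epoch)
// followed by a random number of random filler bytes.
fn make_payload() -> Vec<u8> {
    let mut rng = rand::thread_rng();

    let timestamp_ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before UNIX epoch")
        .as_millis() as u64;

    let mut payload = timestamp_ms.to_le_bytes().to_vec();

    // Random size so that pre-block latency is measured under varying load.
    let filler_len = rng.gen_range(1..=4096);
    let mut filler = vec![0u8; filler_len];
    rng.fill(&mut filler[..]);
    payload.extend_from_slice(&filler);

    payload
}
```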
5 changes: 4 additions & 1 deletion benchmark/benchmark/config.py
@@ -167,14 +167,17 @@ def __init__(self, addresses, base_port):
            host = hosts.pop(0)
            primary_addr = f'/ip4/{host}/udp/{port}'
            port += 1
            grpc_address = f'/ip4/127.0.0.1/tcp/{port}'
            port += 1

            self.json['authorities'][name] = {
                'stake': 1,
                'protocol_key': name,
                'protocol_key_bytes': name,
                'primary_address': primary_addr,
                'network_key': network_name,
                'hostname': host
                'hostname': host,
                'grpc_address': grpc_address,
            }

    def primary_addresses(self, faults=0):
7 changes: 4 additions & 3 deletions crates/pre-block/src/fixture.rs
@@ -103,15 +103,16 @@ impl NarwhalFixture {
        let committee = self.fixture.committee();
        let mut signatures = Vec::new();

        // 3 Signers satisfies the 2F + 1 signed stake requirement
        for authority in self.fixture.authorities().take(3) {
        let num_signers = 2 * (committee.size() - 1) / 3 + 1;
        for authority in self.fixture.authorities().take(num_signers) {
            let vote = authority.vote(&header);
            signatures.push((vote.author(), vote.signature().clone()));
        }

        match CertificateV2::new_unverified(&committee, header, signatures) {
            Ok(narwhal_types::Certificate::V2(cert)) => cert.into(),
            _ => unreachable!(),
            Ok(_) => unreachable!(),
            Err(err) => panic!("Failed to create cert: {}", err),
        }
    }

68 changes: 66 additions & 2 deletions sequencer/src/fixture.rs
@@ -8,14 +8,18 @@ use pre_block::{PreBlock, PublicKey, DsnConfig};
use std::path::PathBuf;
use std::sync::mpsc;
use std::time::Duration;
use log::info;
use log::{info, error};

use crate::consensus_client::PrimaryClient;
use crate::da_batcher::publish_pre_blocks;
use crate::rollup_client::RollupClient;

pub async fn generate_pre_blocks(
    prev_index: u64,
    pre_blocks_tx: mpsc::Sender<PreBlock>,
) -> anyhow::Result<()> {
    let mut index = prev_index;
    let mut fixture = NarwhalFixture::default();
    let mut fixture = NarwhalFixture::new(7);

    loop {
        let pre_block = fixture.next_pre_block(1);
@@ -68,3 +72,63 @@ pub async fn verify_pre_blocks(
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}

pub async fn run_da_task_with_mocked_consensus(
    node_id: u8,
    rollup_node_url: String,
) -> anyhow::Result<()> {
    info!("[DA task] Starting...");

    let rollup_client = RollupClient::new(rollup_node_url.clone());
    let smart_rollup_address = rollup_client.connect().await?;

    loop {
        let from_id = rollup_client.get_next_index().await?;
        let (tx, rx) = mpsc::channel();
        info!("[DA task] Starting from index #{}", from_id);

        tokio::select! {
            res = generate_pre_blocks(from_id - 1, tx) => {
                if let Err(err) = res {
                    error!("[DA generate] Failed with: {}", err);
                }
            },
            res = publish_pre_blocks(&rollup_client, &smart_rollup_address, node_id, rx) => {
                if let Err(err) = res {
                    error!("[DA publish] Failed with: {}", err);
                }
            },
        };

        tokio::time::sleep(Duration::from_secs(5)).await;
    }
}

pub async fn run_da_task_with_mocked_rollup(
    primary_node_url: String,
) -> anyhow::Result<()> {
    info!("[DA task] Starting...");

    let mut primary_client = PrimaryClient::new(primary_node_url);

    loop {
        let from_id = 1;
        let (tx, rx) = mpsc::channel();
        info!("[DA task] Starting from index #{}", from_id);

        tokio::select! {
            res = primary_client.subscribe_pre_blocks(from_id - 1, tx) => {
                if let Err(err) = res {
                    error!("[DA fetch] Failed with: {}", err);
                }
            },
            res = verify_pre_blocks(rx) => {
                if let Err(err) = res {
                    error!("[DA verify] Failed with: {}", err);
                }
            },
        };

        tokio::time::sleep(Duration::from_secs(5)).await;
    }
}