
merge queue: embarking unstable (a257a12) and [#5109 + #5071 + #5000] together #5161

Closed
wants to merge 27 commits
86e563e
Add jq in api-bn
chong-he Dec 6, 2023
1ca3c54
Update beaconstate size
chong-he Dec 8, 2023
2df46ce
Add fields to web3signer API
chong-he Dec 10, 2023
59f6d07
Link web3signer API
chong-he Dec 10, 2023
e25d712
Update /lighthouse/logs in table
chong-he Dec 11, 2023
439b00e
plural
chong-he Dec 11, 2023
f2bb0bb
update slasher doc
chong-he Dec 11, 2023
8d1ca33
update FAQ
chong-he Dec 11, 2023
08255e2
Add link in validator section
chong-he Dec 11, 2023
ad95220
Add more info on state pruning
chong-he Dec 11, 2023
37c5c3d
Update database size
chong-he Dec 11, 2023
cd8efbb
Merge branch 'unstable' into book-update
chong-he Dec 11, 2023
80cf989
Revise Siren for vc to connect bn
chong-he Dec 14, 2023
b75533e
Merge branch 'book-update' of https://github.com/chong-he/lighthouse …
chong-he Dec 14, 2023
5fc2fee
Corrections to siren faq
chong-he Dec 20, 2023
bdf9256
Fix typos
michaelsproul Dec 20, 2023
840b01d
Update release date for 4.6.0
chong-he Jan 9, 2024
6f7b232
Prevent logs and dialing quic multiaddrs when not supported
AgeManning Jan 16, 2024
76bc560
Test backfill
AgeManning Jan 23, 2024
4a2f020
Revert cargo.toml
AgeManning Jan 23, 2024
88bf01e
Update beacon_node/beacon_chain/src/builder.rs
AgeManning Jan 24, 2024
bc782db
Merge branch 'unstable' into book-update
chong-he Jan 29, 2024
4a2808d
Remove redundant code
AgeManning Jan 31, 2024
88d4864
Merge latest unstable
AgeManning Jan 31, 2024
a14a19b
Merge of #5109
mergify[bot] Jan 31, 2024
1a1db97
Merge of #5071
mergify[bot] Jan 31, 2024
c0b89e4
Merge of #5000
mergify[bot] Jan 31, 2024
1 change: 1 addition & 0 deletions beacon_node/beacon_chain/Cargo.toml
@@ -11,6 +11,7 @@ write_ssz_files = [] # Writes debugging .ssz files to /tmp during block process
participation_metrics = [] # Exposes validator participation metrics to Prometheus.
fork_from_env = [] # Initialise the harness chain spec from the FORK_NAME env variable
portable = ["bls/supranational-portable"]
test_backfill = []

[dev-dependencies]
maplit = { workspace = true }
12 changes: 8 additions & 4 deletions beacon_node/beacon_chain/src/builder.rs
@@ -846,10 +846,14 @@ where
let genesis_backfill_slot = if self.chain_config.genesis_backfill {
Slot::new(0)
} else {
let backfill_epoch_range = (self.spec.min_validator_withdrawability_delay
+ self.spec.churn_limit_quotient)
.as_u64()
/ 2;
let backfill_epoch_range = if cfg!(feature = "test_backfill") {
3
} else {
(self.spec.min_validator_withdrawability_delay + self.spec.churn_limit_quotient)
.as_u64()
/ 2
};

match slot_clock.now() {
Some(current_slot) => {
let genesis_backfill_epoch = current_slot
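For reference, a quick sketch of the non-test branch of the calculation above, assuming the mainnet preset values `MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256` epochs and `CHURN_LIMIT_QUOTIENT = 65536` (assumptions here, not read from the diff):

```bash
# Sketch of the default backfill_epoch_range calculation with assumed mainnet
# preset values; 32 slots per epoch, 12-second slots.
awk 'BEGIN {
  min_validator_withdrawability_delay = 256   # epochs (assumed)
  churn_limit_quotient = 65536                # (assumed)
  range = int((min_validator_withdrawability_delay + churn_limit_quotient) / 2)
  days = range * 32 * 12 / 86400
  printf "%d epochs (~%.0f days)\n", range, days  # → 32896 epochs (~146 days)
}'
```

So with the `test_backfill` feature enabled, backfill stops after only 3 epochs instead of roughly five months of history, presumably to keep tests fast.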
3 changes: 3 additions & 0 deletions beacon_node/lighthouse_network/src/peer_manager/config.rs
@@ -18,6 +18,8 @@ pub struct Config {
pub discovery_enabled: bool,
/// Whether metrics are enabled.
pub metrics_enabled: bool,
/// Whether quic is enabled.
pub quic_enabled: bool,
/// Target number of peers to connect to.
pub target_peer_count: usize,

Expand All @@ -37,6 +39,7 @@ impl Default for Config {
Config {
discovery_enabled: true,
metrics_enabled: false,
quic_enabled: true,
target_peer_count: DEFAULT_TARGET_PEERS,
status_interval: DEFAULT_STATUS_INTERVAL,
ping_interval_inbound: DEFAULT_PING_INTERVAL_INBOUND,
4 changes: 4 additions & 0 deletions beacon_node/lighthouse_network/src/peer_manager/mod.rs
@@ -104,6 +104,8 @@ pub struct PeerManager<TSpec: EthSpec> {
discovery_enabled: bool,
/// Keeps track if the current instance is reporting metrics or not.
metrics_enabled: bool,
/// Keeps track of whether the QUIC protocol is enabled or not.
quic_enabled: bool,
/// The logger associated with the `PeerManager`.
log: slog::Logger,
}
@@ -149,6 +151,7 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
status_interval,
ping_interval_inbound,
ping_interval_outbound,
quic_enabled,
} = cfg;

// Set up the peer manager heartbeat interval
@@ -167,6 +170,7 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
heartbeat,
discovery_enabled,
metrics_enabled,
quic_enabled,
log: log.clone(),
})
}
@@ -96,10 +96,16 @@ impl<TSpec: EthSpec> NetworkBehaviour for PeerManager<TSpec> {
if let Some(enr) = self.peers_to_dial.pop() {
let peer_id = enr.peer_id();
self.inject_peer_connection(&peer_id, ConnectingType::Dialing, Some(enr.clone()));
let quic_multiaddrs = enr.multiaddr_quic();
if !quic_multiaddrs.is_empty() {
debug!(self.log, "Dialing QUIC supported peer"; "peer_id"=> %peer_id, "quic_multiaddrs" => ?quic_multiaddrs);
}

let quic_multiaddrs = if self.quic_enabled {
let quic_multiaddrs = enr.multiaddr_quic();
if !quic_multiaddrs.is_empty() {
debug!(self.log, "Dialing QUIC supported peer"; "peer_id"=> %peer_id, "quic_multiaddrs" => ?quic_multiaddrs);
}
quic_multiaddrs
} else {
Vec::new()
};

// Prioritize Quic connections over Tcp ones.
let multiaddrs = quic_multiaddrs
1 change: 1 addition & 0 deletions beacon_node/lighthouse_network/src/service/mod.rs
@@ -328,6 +328,7 @@ impl<AppReqId: ReqId, TSpec: EthSpec> Network<AppReqId, TSpec> {
let peer_manager = {
let peer_manager_cfg = PeerManagerCfg {
discovery_enabled: !config.disable_discovery,
quic_enabled: !config.disable_quic_support,
metrics_enabled: config.metrics_enabled,
target_peer_count: config.target_peers,
..Default::default()
16 changes: 7 additions & 9 deletions book/src/advanced_database.md
@@ -23,15 +23,13 @@ states to slow down dramatically. A lower _slots per restore point_ value (SPRP)
frequent restore points, while a higher SPRP corresponds to less frequent. The table below shows
some example values.

| Use Case | SPRP | Yearly Disk Usage* | Load Historical State |
| Use Case | SPRP | Yearly Disk Usage*| Load Historical State |
|----------------------------|------|-------------------|-----------------------|
| Research | 32 | 3.4 TB | 155 ms |
| Block explorer/analysis | 128 | 851 GB | 620 ms |
| Enthusiast (prev. default) | 2048 | 53.6 GB | 10.2 s |
| Hobbyist | 4096 | 26.8 GB | 20.5 s |
| Validator only (default) | 8192 | 12.7 GB | 41 s |
| Research | 32 | more than 10 TB | 155 ms |
| Enthusiast (prev. default) | 2048 | hundreds of GB | 10.2 s |
| Validator only (default) | 8192 | tens of GB | 41 s |

*Last update: May 2023.
*Last update: Dec 2023.

As we can see, it's a high-stakes trade-off! The relationships to disk usage and historical state
load time are both linear – doubling SPRP halves disk usage and doubles load time. The minimum SPRP
@@ -41,12 +39,12 @@ The default value is 8192 for databases synced from scratch using Lighthouse v2.
2048 for prior versions. Please see the section on [Defaults](#defaults) below.

The values shown in the table are approximate, calculated using a simple heuristic: each
`BeaconState` consumes around 18MB of disk space, and each block replayed takes around 5ms. The
`BeaconState` consumes around 145MB of disk space, and each block replayed takes around 5ms. The
**Yearly Disk Usage** column shows the approximate size of the freezer DB _alone_ (hot DB not included), calculated proportionally using the total freezer database disk usage.
The **Load Historical State** time is the worst-case load time for a state in the last slot
before a restore point.

As an example, we use an SPRP of 4096 to calculate the total size of the freezer database until May 2023. It has been about 900 days since the genesis, the total disk usage by the freezer database is therefore: 900/365*26.8 GB = 66 GB.
To run a full archival node with fast access to beacon states and an SPRP of 32, the disk usage will be more than 10 TB per year, which is impractical for many users. As such, users may consider running the [tree-states](https://github.com/sigp/lighthouse/releases/tag/v4.5.444-exp) release, which uses less than 150 GB for a full archival node. The caveat is that it is currently an experimental alpha release (as of Dec 2023), and is therefore not recommended for running mainnet validators. Nevertheless, it is suitable for analysis purposes, and if you encounter any issues in tree-states, we appreciate any feedback. We plan to have a stable release of tree-states in 1H 2024.
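As a rough cross-check of the table above (a sketch, assuming ~145 MB per stored `BeaconState` and 12-second slots):

```bash
# Back-of-the-envelope yearly freezer DB size for several SPRP values,
# using the ~145 MB-per-state heuristic from the text.
for sprp in 32 2048 8192; do
  awk -v sprp="$sprp" 'BEGIN {
    slots_per_year = 365 * 24 * 3600 / 12   # 2,628,000 slots
    states = slots_per_year / sprp          # restore points stored per year
    printf "SPRP %5d: ~%.1f GB/year\n", sprp, states * 145 / 1024
  }'
done
```

This yields roughly 11,600 GB, 182 GB and 45 GB per year respectively, consistent with the "more than 10 TB", "hundreds of GB" and "tens of GB" figures in the table.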

### Defaults

2 changes: 1 addition & 1 deletion book/src/api-bn.md
@@ -100,7 +100,7 @@ The `jq` tool is used to format the JSON data properly. If it returns `jq: comma
Shows the status of validator at index `1` at the `head` state.

```bash
curl -X GET "http://localhost:5052/eth/v1/beacon/states/head/validators/1" -H "accept: application/json"
curl -X GET "http://localhost:5052/eth/v1/beacon/states/head/validators/1" -H "accept: application/json" | jq
```

```json
15 changes: 8 additions & 7 deletions book/src/api-vc-endpoints.md
@@ -16,6 +16,7 @@ HTTP Path | Description |
[`POST /lighthouse/validators/keystore`](#post-lighthousevalidatorskeystore) | Import a keystore.
[`POST /lighthouse/validators/mnemonic`](#post-lighthousevalidatorsmnemonic) | Create a new validator from an existing mnemonic.
[`POST /lighthouse/validators/web3signer`](#post-lighthousevalidatorsweb3signer) | Add web3signer validators.
[`GET /lighthouse/logs`](#get-lighthouselogs) | Get logs.

The query to Lighthouse API endpoints requires authorization, see [Authorization Header](./api-vc-auth-header.md).

@@ -745,27 +746,27 @@ Create any number of new validators, all of which will refer to a
"graffiti": "Mr F was here",
"suggested_fee_recipient": "0xa2e334e71511686bcfe38bb3ee1ad8f6babcc03d",
"voting_public_key": "0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380",
"builder_proposals": true,
"url": "http://path-to-web3signer.com",
"root_certificate_path": "/path/on/vc/filesystem/to/certificate.pem",
"root_certificate_path": "/path/to/certificate.pem",
"client_identity_path": "/path/to/identity.p12",
"client_identity_password": "pass",
"request_timeout_ms": 12000
}
]

```

The following fields may be omitted or nullified to obtain default values:
Some of the fields above may be omitted or nullified to obtain default values (e.g., `graffiti`, `request_timeout_ms`).

- `graffiti`
- `suggested_fee_recipient`
- `root_certificate_path`
- `request_timeout_ms`

Command:
```bash
DATADIR=/var/lib/lighthouse
curl -X POST http://localhost:5062/lighthouse/validators/web3signer \
-H "Authorization: Bearer $(cat ${DATADIR}/validators/api-token.txt)" \
-H "Content-Type: application/json" \
-d "[{\"enable\":true,\"description\":\"validator_one\",\"graffiti\":\"Mr F was here\",\"suggested_fee_recipient\":\"0xa2e334e71511686bcfe38bb3ee1ad8f6babcc03d\",\"voting_public_key\":\"0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380\",\"url\":\"http://path-to-web3signer.com\",\"request_timeout_ms\":12000}]"
-d "[{\"enable\":true,\"description\":\"validator_one\",\"graffiti\":\"Mr F was here\",\"suggested_fee_recipient\":\"0xa2e334e71511686bcfe38bb3ee1ad8f6babcc03d\",\"voting_public_key\":\"0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380\",\"builder_proposals\":true,\"url\":\"http://path-to-web3signer.com\",\"root_certificate_path\":\"/path/to/certificate.pem\",\"client_identity_path\":\"/path/to/identity.p12\",\"client_identity_password\":\"pass\",\"request_timeout_ms\":12000}]"
```


19 changes: 16 additions & 3 deletions book/src/database-migrations.md
@@ -16,6 +16,7 @@ validator client or the slasher**.

| Lighthouse version | Release date | Schema version | Downgrade available? |
|--------------------|--------------|----------------|----------------------|
| v4.6.0 | Dec 2023 | v19 | yes before Deneb |
| v4.6.0-rc.0 | Dec 2023 | v18 | yes before Deneb |
| v4.5.0 | Sep 2023 | v17 | yes |
@@ -158,8 +159,7 @@ lighthouse db version --network mainnet

Pruning historic states helps in managing the disk space used by the Lighthouse beacon node by removing old beacon
states from the freezer database. This can be especially useful when the database has accumulated a significant amount
of historic data. This command is intended for nodes synced before 4.4.1, as newly synced node no longer store
historic states by default.
of historic data. This command is intended for nodes synced before 4.4.1, as newly synced nodes no longer store historic states by default.

Here are the steps to prune historic states:

@@ -175,14 +175,27 @@ Here are the steps to prune historic states:
sudo -u "$LH_USER" lighthouse db prune-states --datadir "$LH_DATADIR" --network "$NET"
```

If pruning is available, Lighthouse will log:

```
INFO Ready to prune states
WARN Pruning states is irreversible
WARN Re-run this command with --confirm to commit to state deletion
INFO Nothing has been pruned on this run
```

3. If you are ready to prune the states irreversibly, add the `--confirm` flag to commit the changes:

```bash
sudo -u "$LH_USER" lighthouse db prune-states --confirm --datadir "$LH_DATADIR" --network "$NET"
```

The `--confirm` flag ensures that you are aware the action is irreversible, and historic states will be permanently removed.
The `--confirm` flag ensures that you are aware the action is irreversible, and historic states will be permanently removed. Lighthouse will log:

```
INFO Historic states pruned successfully
```

4. After successfully pruning the historic states, you can restart the Lighthouse beacon node:

32 changes: 30 additions & 2 deletions book/src/faq.md
@@ -22,7 +22,7 @@
- [Does increasing the number of validators increase the CPU and other computer resources used?](#vc-resource)
- [I want to add new validators. Do I have to reimport the existing keys?](#vc-reimport)
- [Do I have to stop `lighthouse vc` when importing new validator keys?](#vc-import)

- [How can I delete my validator once it is imported?](#vc-delete)

## [Network, Monitoring and Maintenance](#network-monitoring-and-maintenance-1)
- [I have a low peer count and it is not increasing](#net-peer)
Expand All @@ -33,6 +33,7 @@
- [Should I do anything to the beacon node or validator client settings if I have a relocation of the node / change of IP address?](#net-ip)
- [How to change the TCP/UDP port 9000 that Lighthouse listens on?](#net-port)
- [Lighthouse `v4.3.0` introduces a change where a node will subscribe to only 2 subnets in total. I am worried that this will impact my validators return.](#net-subnet)
- [How to know how many of my peers are connected through QUIC?](#net-quic)

## [Miscellaneous](#miscellaneous-1)
- [What should I do if I lose my slashing protection database?](#misc-slashing)
Expand All @@ -41,6 +42,7 @@
- [Does Lighthouse have pruning function like the execution client to save disk space?](#misc-prune)
- [Can I use a HDD for the freezer database and only have the hot db on SSD?](#misc-freezer)
- [Can Lighthouse log in local timestamp instead of UTC?](#misc-timestamp)
- [My hard disk is full and my validator is down. What should I do?](#misc-full)

## Beacon Node

@@ -345,6 +347,13 @@ Generally yes.

If you do not want to stop `lighthouse vc`, you can use the [key manager API](./api-vc-endpoints.md) to import keys.


### <a name="vc-delete"></a> How can I delete my validator once it is imported?

Lighthouse supports the [KeyManager API](https://ethereum.github.io/keymanager-APIs/#/Local%20Key%20Manager/deleteKeys) to delete validators and remove them from the `validator_definitions.yml` file. To do so, start the validator client with the flag `--http` and call the API.

If you are looking to delete validators on one node and import them to another, you can use the [validator-manager](./validator-manager-move.md) to move the validators across nodes without the hassle of deleting and importing the keys.
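As an illustrative sketch (not taken from this PR), deleting by public key through the standard key manager API looks roughly like the following; the port, token path and public key are the defaults and examples used elsewhere in this book, so adapt them to your setup:

```bash
# Hedged sketch: delete a validator via the standard key manager API
# (DELETE /eth/v1/keystores). Assumes the VC runs with --http on the
# default port 5062 and the default datadir layout.
DATADIR=/var/lib/lighthouse
curl -X DELETE http://localhost:5062/eth/v1/keystores \
  -H "Authorization: Bearer $(cat ${DATADIR}/validators/api-token.txt)" \
  -H "Content-Type: application/json" \
  -d '{"pubkeys": ["0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380"]}'
```

The response should include the slashing protection data for the deleted keys, which is worth keeping in case you import them elsewhere.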

## Network, Monitoring and Maintenance

### <a name="net-peer"></a> I have a low peer count and it is not increasing
@@ -486,6 +495,23 @@ While subscribing to more subnets can ensure you have peers on a wider range of

If you would still like to subscribe to all subnets, you can use the flag `subscribe-all-subnets`. This may improve the block rewards by 1-5%, though it comes at the cost of a much higher bandwidth requirement.

### <a name="net-quic"></a> How to know how many of my peers are connected via QUIC?

With `--metrics` enabled in the beacon node, you can find the number of peers connected via QUIC using:

```bash
curl -s "http://localhost:5054/metrics" | grep libp2p_quic_peers
```

A response example is:

```
# HELP libp2p_quic_peers Count of libp2p peers currently connected via QUIC
# TYPE libp2p_quic_peers gauge
libp2p_quic_peers 4
```
which shows that there are 4 peers connected via QUIC.

## Miscellaneous

### <a name="misc-slashing"></a> What should I do if I lose my slashing protection database?
@@ -533,9 +559,11 @@ Yes, you can do so by using the flag `--freezer-dir /path/to/freezer_db` in the

The reason why Lighthouse logs in UTC is due to the dependency on an upstream library that is [yet to be resolved](https://github.com/sigp/lighthouse/issues/3130). Alternatively, using the flag `disable-log-timestamp` in combination with systemd will suppress the UTC timestamps and print the logs in local timestamps.

### <a name="misc-full"></a> My hard disk is full and my validator is down. What should I do?

A quick way to get the validator back online is to remove the Lighthouse beacon node database and resync Lighthouse using checkpoint sync. A guide to do this can be found in the [Lighthouse Discord server](https://discord.com/channels/605577013327167508/605577013331361793/1019755522985050142). With some free space left, you will then be able to prune the execution client database to free up more space.
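A hypothetical sequence (service name, datadir and checkpoint URL are all assumptions — substitute your own) might look like:

```bash
# Stop the beacon node, remove only its database, and resync from a checkpoint.
sudo systemctl stop lighthousebeacon
sudo rm -r /var/lib/lighthouse/mainnet/beacon
lighthouse bn --network mainnet --datadir /var/lib/lighthouse/mainnet \
  --checkpoint-sync-url https://mainnet.checkpoint.sigp.io
```

Be careful not to delete the validator keys or the slashing protection database in the process.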


For a longer-term solution, if you are using Geth or Nethermind as the execution client, you can consider setting up the online pruning feature. Refer to [Geth](https://blog.ethereum.org/2023/09/12/geth-v1-13-0) and [Nethermind](https://gist.github.com/yorickdowne/67be09b3ba0a9ff85ed6f83315b5f7e0) for details.



5 changes: 1 addition & 4 deletions book/src/slasher.md
@@ -17,7 +17,7 @@ of the immaturity of the slasher UX and the extra resources required.
The slasher runs inside the same process as the beacon node, when enabled via the `--slasher` flag:

```
lighthouse bn --slasher --debug-level debug
lighthouse bn --slasher
```

The slasher hooks into Lighthouse's block and attestation processing, and pushes messages into an
@@ -26,9 +26,6 @@ verifies the signatures of otherwise invalid messages. When a slasher batch upda
messages are filtered for relevancy, and all relevant messages are checked for slashings and written
to the slasher database.

You **should** run with debug logs, so that you can see the slasher's internal machinations, and
provide logs to the developers should you encounter any bugs.

## Configuration

The slasher has several configuration options that control its functioning.
2 changes: 1 addition & 1 deletion book/src/ui-configuration.md
@@ -18,7 +18,7 @@ To enable the HTTP API for the beacon node, utilize the `--gui` CLI flag. This a

If you require accessibility from another machine within the network, configure the `--http-address` to match the local LAN IP of the system running the Beacon Node and Validator Client.

> To access from another machine on the same network (192.168.0.200) set the Beacon Node and Validator Client `--http-address` as `192.168.0.200`.
> To access from another machine on the same network (192.168.0.200) set the Beacon Node and Validator Client `--http-address` as `192.168.0.200`. When this is set, the validator client requires the flag `--beacon-nodes http://192.168.0.200:5052` to connect to the beacon node.

In a similar manner, the validator client requires activation of the `--http` flag, along with the optional consideration of configuring the `--http-address` flag. If `--http-address` flag is set on the Validator Client, then the `--unencrypted-http-transport` flag is required as well. These settings will ensure compatibility with Siren's connectivity requirements.
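Putting the flags above together for the `192.168.0.200` example, the two commands might look like this (an illustrative sketch; combine with your usual flags):

```bash
# Beacon node: enable the HTTP API for Siren and listen on the LAN address.
lighthouse bn --gui --http-address 192.168.0.200

# Validator client: expose its HTTP API on the LAN and point it at the BN.
lighthouse vc --http --http-address 192.168.0.200 \
  --unencrypted-http-transport \
  --beacon-nodes http://192.168.0.200:5052
```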
