feat: add xtask forester stats
ananas-block committed Oct 2, 2024
1 parent 069953b commit 59819d6
Showing 6 changed files with 228 additions and 13 deletions.
6 changes: 5 additions & 1 deletion Cargo.lock

(Generated file; diff not rendered.)

22 changes: 11 additions & 11 deletions js/stateless.js/README.md
@@ -21,8 +21,8 @@
This package provides server and web applications with clients, utilities, and types to leverage the power of [ZK Compression](https://www.zkcompression.com/) on Solana via the Compression RPC API.

> The core ZK Compression Solana programs and clients are maintained by
> [Light](https://github.com/lightprotocol) as a part of the Light Protocol. The RPC API and indexer are maintained by
> [Helius Labs](https://github.com/helius-labs).

## Usage

@@ -39,18 +39,18 @@ npm install --save \

### Dependencies

- [`@solana/web3.js`](https://www.npmjs.com/package/@solana/web3.js) — provides access to the Solana network via RPC.
- [`@coral-xyz/anchor`](https://www.npmjs.com/package/@coral-xyz/anchor) — a client for [Anchor](https://www.anchor-lang.com/) Solana programs.

## Documentation and Examples

For more detailed documentation on usage, please check [the respective section of the ZK Compression documentation](https://www.zkcompression.com/developers/typescript-client).

For example implementations, including web and server, refer to the respective repositories:

- [Web application example implementation](https://github.com/Lightprotocol/example-web-client)

- [Node server example implementation](https://github.com/Lightprotocol/example-nodejs-client)

## Troubleshooting

@@ -65,14 +65,14 @@ Feel free to ask in the [Light](https://discord.gg/CYvjBgzRFP) and [Helius](http

Light and ZK Compression are open source protocols and very much welcome contributions. If you have a contribution, do not hesitate to send a PR to the respective repository or discuss in the linked developer Discord servers.

- 🐞 For bugs or feature requests, please open an
[issue](https://github.com/lightprotocol/lightprotocol/issues/new).
- 🔒 For security vulnerabilities, please follow the [security policy](https://github.com/Lightprotocol/light-protocol/blob/main/SECURITY.md).

## Additional Resources

- [Light Protocol Repository](https://github.com/Lightprotocol/light-protocol)
- [ZK Compression Official Documentation](https://www.zkcompression.com/)

## Disclaimer

3 changes: 2 additions & 1 deletion js/stateless.js/src/utils/address.ts
@@ -23,7 +23,8 @@ export function deriveAddressSeed(
 */
export function deriveAddress(
    seed: Uint8Array,
    addressMerkleTreePubkey: PublicKey = defaultTestStateTreeAccounts()
        .addressTree,
): PublicKey {
    if (seed.length != 32) {
        throw new Error('Seed length is not 32 bytes.');
5 changes: 5 additions & 0 deletions xtask/Cargo.toml
@@ -5,6 +5,7 @@ edition = "2021"

[dependencies]
account-compression = { workspace = true }
light-registry = { workspace = true }
anyhow = "1.0"
ark-bn254 = "0.4"
ark-ff = "0.4"
@@ -21,3 +22,7 @@ quote = "1.0"
sha2 = "0.10"
solana-program = { workspace = true }
tabled = "0.15"
solana-sdk.workspace = true
solana-client = { workspace = true }
anchor-lang = { workspace = true }

201 changes: 201 additions & 0 deletions xtask/src/forester_stats.rs
@@ -0,0 +1,201 @@
use account_compression::{AddressMerkleTreeAccount, QueueAccount, StateMerkleTreeAccount};
use anchor_lang::{AccountDeserialize, Discriminator};
use clap::Parser;
use light_concurrent_merkle_tree::copy::ConcurrentMerkleTreeCopy;
use light_hash_set::HashSet;
use light_hasher::Poseidon;
use light_registry::{protocol_config::state::ProtocolConfigPda, EpochPda, ForesterEpochPda};
use solana_sdk::{account::ReadableAccount, commitment_config::CommitmentConfig};

#[derive(Debug, Parser)]
pub struct Options {
    /// Print the foresters registered for every epoch.
    #[clap(long)]
    full: bool,
    #[clap(long)]
    protocol_config: bool,
    #[clap(long, default_value_t = true)]
    queue: bool,
}

pub fn fetch_forester_stats(opts: Options) -> anyhow::Result<()> {
    let commitment_config = CommitmentConfig::confirmed();
    let rpc_url = std::env::var("RPC_URL")
        .expect("RPC_URL environment variable not set, export RPC_URL=<url>");

    let client =
        solana_client::rpc_client::RpcClient::new_with_commitment(rpc_url, commitment_config);
    let registry_accounts = client
        .get_program_accounts(&light_registry::ID)
        .expect("Failed to fetch accounts for registry program.");

    let mut forester_epoch_pdas = vec![];
    let mut epoch_pdas = vec![];
    let mut protocol_config_pdas = vec![];
    for (_, account) in registry_accounts {
        match account.data()[0..8].try_into().unwrap() {
            ForesterEpochPda::DISCRIMINATOR => {
                let forester_epoch_pda =
                    ForesterEpochPda::try_deserialize_unchecked(&mut account.data())
                        .expect("Failed to deserialize ForesterEpochPda");
                forester_epoch_pdas.push(forester_epoch_pda);
            }
            EpochPda::DISCRIMINATOR => {
                let epoch_pda = EpochPda::try_deserialize_unchecked(&mut account.data())
                    .expect("Failed to deserialize EpochPda");
                epoch_pdas.push(epoch_pda);
            }
            ProtocolConfigPda::DISCRIMINATOR => {
                let protocol_config_pda =
                    ProtocolConfigPda::try_deserialize_unchecked(&mut account.data())
                        .expect("Failed to deserialize ProtocolConfigPda");
                protocol_config_pdas.push(protocol_config_pda);
            }
            _ => (),
        }
    }
    forester_epoch_pdas.sort_by(|a, b| a.epoch.cmp(&b.epoch));
    epoch_pdas.sort_by(|a, b| a.epoch.cmp(&b.epoch));
    let slot = client.get_slot().expect("Failed to fetch slot.");
    let current_active_epoch = protocol_config_pdas[0]
        .config
        .get_current_active_epoch(slot)
        .unwrap();
    let current_registration_epoch = protocol_config_pdas[0]
        .config
        .get_latest_register_epoch(slot)
        .unwrap();
    println!("Current active epoch: {:?}", current_active_epoch);

    println!(
        "Current registration epoch: {:?}",
        current_registration_epoch
    );

    println!(
        "Forester registered for latest epoch: {:?}",
        forester_epoch_pdas
            .iter()
            .any(|pda| pda.epoch == current_registration_epoch)
    );
    println!(
        "Forester registered for active epoch: {:?}",
        forester_epoch_pdas
            .iter()
            .any(|pda| pda.epoch == current_active_epoch)
    );
    println!(
        "current active epoch progress {:?} / {}",
        protocol_config_pdas[0]
            .config
            .get_current_active_epoch_progress(slot),
        protocol_config_pdas[0].config.active_phase_length
    );
    println!(
        "current active epoch progress {:?}%",
        protocol_config_pdas[0]
            .config
            .get_current_active_epoch_progress(slot) as f64
            / protocol_config_pdas[0].config.active_phase_length as f64
            * 100f64
    );
    println!("Hours until next epoch: {:?} hours", {
        // Slot duration is roughly 460 ms; divide by 1000 for seconds and by 3600 for hours.
        protocol_config_pdas[0]
            .config
            .active_phase_length
            .saturating_sub(
                protocol_config_pdas[0]
                    .config
                    .get_current_active_epoch_progress(slot),
            )
            * 460
            / 1000
            / 3600
    });
    let slots_until_next_registration = protocol_config_pdas[0]
        .config
        .registration_phase_length
        .saturating_sub(
            protocol_config_pdas[0]
                .config
                .get_current_active_epoch_progress(slot),
        );
    println!(
        "Slots until next registration: {:?}",
        slots_until_next_registration
    );
    println!(
        "Hours until next registration: {:?} hours",
        // Slot duration is roughly 460 ms; divide by 1000 for seconds and by 3600 for hours.
        slots_until_next_registration * 460 / 1000 / 3600
    );
    if opts.full {
        for epoch in &epoch_pdas {
            println!("Epoch: {:?}", epoch.epoch);
            let registered_foresters_in_epoch = forester_epoch_pdas
                .iter()
                .filter(|pda| pda.epoch == epoch.epoch);
            for forester in registered_foresters_in_epoch {
                println!("Forester authority: {:?}", forester.authority);
            }
        }
    }
    if opts.protocol_config {
        println!("protocol config: {:?}", protocol_config_pdas[0]);
    }
    if opts.queue {
        let account_compression_accounts = client
            .get_program_accounts(&account_compression::ID)
            .expect("Failed to fetch accounts for account compression program.");
        for (pubkey, mut account) in account_compression_accounts {
            match account.data()[0..8].try_into().unwrap() {
                QueueAccount::DISCRIMINATOR => {
                    unsafe {
                        let queue = HashSet::from_bytes_copy(
                            &mut account.data[8 + std::mem::size_of::<QueueAccount>()..],
                        )
                        .unwrap();

                        println!("Queue account: {:?}", pubkey);
                        let mut num_of_unmarked_items = 0;
                        for i in 0..queue.get_capacity() {
                            if queue.get_unmarked_bucket(i).is_some() {
                                num_of_unmarked_items += 1;
                            }
                        }
                        println!(
                            "queue num of unmarked items: {:?} / {}",
                            num_of_unmarked_items,
                            // Divide by 2 because only half of the hash set can be used before txs start to fail.
                            queue.get_capacity() / 2
                        );
                    }
                }
                StateMerkleTreeAccount::DISCRIMINATOR => {
                    println!("State Merkle tree: {:?}", pubkey);
                    let merkle_tree = ConcurrentMerkleTreeCopy::<Poseidon, 26>::from_bytes_copy(
                        &account.data[8 + std::mem::size_of::<StateMerkleTreeAccount>()..],
                    )
                    .unwrap();
                    println!(
                        "State Merkle tree next index {:?}",
                        merkle_tree.next_index()
                    );
                }
                AddressMerkleTreeAccount::DISCRIMINATOR => {
                    println!("Address Merkle tree: {:?}", pubkey);
                    let merkle_tree = ConcurrentMerkleTreeCopy::<Poseidon, 26>::from_bytes_copy(
                        &account.data[8 + std::mem::size_of::<AddressMerkleTreeAccount>()..],
                    )
                    .unwrap();
                    println!(
                        "Address Merkle tree next index {:?}",
                        merkle_tree.next_index()
                    );
                }
                _ => (),
            }
        }
    }

    Ok(())
}
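The hour figures printed above come from a fixed ~460 ms slot-time assumption combined with integer division, so they truncate toward zero. A minimal sketch of that conversion with an illustrative slot count (the helper function and the numbers below are assumptions added for clarity, not part of the commit):

```rust
// Sketch of the slot-to-hours conversion used by the stats output above.
// Assumes the same ~460 ms average slot time; integer division truncates to whole hours.
fn slots_to_hours(slots: u64) -> u64 {
    slots * 460 / 1000 / 3600
}

fn main() {
    // 100_000 remaining slots -> 46_000_000 ms -> 46_000 s -> ~12.78 h, truncated to 12.
    assert_eq!(slots_to_hours(100_000), 12);
}
```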
4 changes: 4 additions & 0 deletions xtask/src/main.rs
@@ -3,6 +3,7 @@ use clap::{Parser, ValueEnum};
mod bench;
mod create_vkeyrs_from_gnark_key;
mod fee;
mod forester_stats;
mod hash_set;
mod type_sizes;
mod zero_bytes;
@@ -38,6 +39,8 @@
    Fee,
    /// Hash set utilities.
    HashSet(hash_set::HashSetOptions),
    /// Forester registration and queue statistics.
    ForesterStats(forester_stats::Options),
}

fn main() -> Result<(), anyhow::Error> {
@@ -55,5 +58,6 @@
        Command::Bench(opts) => bench::bench(opts),
        Command::Fee => fee::fees(),
        Command::HashSet(opts) => hash_set::hash_set(opts),
        Command::ForesterStats(opts) => forester_stats::fetch_forester_stats(opts),
    }
}
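Assuming the repository's usual `cargo xtask` runner and clap's default kebab-case naming for the `ForesterStats` variant, the new command would be invoked along the lines of `RPC_URL=<url> cargo xtask forester-stats`, with `--full` to list the foresters registered in each epoch, `--protocol-config` to print the protocol config PDA, and `--queue` (enabled by default) for queue and Merkle tree statistics.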
