
Pending Log Subscription emits logs at extremely slow rate (roughly 1 pending transaction per second) #29415

Closed
nakedkoalacapital opened this issue Mar 31, 2024 · 16 comments

@nakedkoalacapital

System information

CL client & version: e.g. [email protected]
OS & Version: LINUX
Commit hash: a382917

Running this minimal snippet to subscribe to logs emitted by pending transactions.


package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

const LOCAL_GETH_WS_CONN = "ws://localhost:8546"

func GetProvider(conn string) *ethclient.Client {
	provider, err := ethclient.Dial(conn)
	if err != nil {
		fmt.Printf("failed to connect to geth node via ws. err=%v conn=%v\n", err, conn)
		return nil
	}
	return provider
}

func MinimumLoop() {
	localCh := make(chan types.Log)

	provider := GetProvider(LOCAL_GETH_WS_CONN)
	if provider == nil {
		return
	}

	// FromBlock/ToBlock of -1 (rpc.PendingBlockNumber) selects the
	// pending-logs subscription rather than a regular log filter.
	localSub, err := provider.SubscribeFilterLogs(
		context.Background(),
		ethereum.FilterQuery{
			FromBlock: big.NewInt(-1),
			ToBlock:   big.NewInt(-1),
		},
		localCh,
	)
	if err != nil {
		log.Fatalln(err)
	}
	for {
		select {
		case logLocal := <-localCh:
			fmt.Println(logLocal.TxHash)
		case err := <-localSub.Err():
			log.Fatalln(err)
		}
	}
}

func main() {
	MinimumLoop()
}

Expected behaviour

The volume of logs printed by the loop should match the number of transactions entering the mempool.

Actual behaviour

Pending logs are printed out very slowly, at a rate of roughly 1 unique transaction per second. Something is definitely wrong.

@s1na
Contributor

s1na commented Apr 1, 2024

Can you check the contents of your txpool and how well your node is peered? I have a hunch your node is not receiving many pending txes.
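(For reference, a minimal sketch of one way to check both — an editor's illustration, not part of the original thread. It assumes the txpool and net namespaces are reachable, e.g. over the IPC endpoint from the config further down; the attached console equivalents are txpool.status and net.peerCount.)

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// The IPC endpoint exposes all namespaces; a ws/http endpoint would need
	// txpool and net listed in WSModules/HTTPModules.
	client, err := rpc.Dial("/root/.ethereum/geth.ipc")
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer client.Close()

	// txpool_status returns hex-encoded pending/queued counts.
	var pool map[string]hexutil.Uint
	if err := client.CallContext(context.Background(), &pool, "txpool_status"); err != nil {
		log.Fatalf("txpool_status failed: %v", err)
	}

	// net_peerCount returns the hex-encoded number of connected peers.
	var peers hexutil.Uint
	if err := client.CallContext(context.Background(), &peers, "net_peerCount"); err != nil {
		log.Fatalf("net_peerCount failed: %v", err)
	}

	fmt.Printf("pending=%d queued=%d peers=%d\n", pool["pending"], pool["queued"], peers)
}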

@nakedkoalacapital
Author

nakedkoalacapital commented Apr 3, 2024

Peer connection doesn't seem to be the issue.

> admin.peers.length
200

I have attempted to debug this by running the same pending log subscription snippet against different websocket connections and eyeballing the printout to get an idea of the throughput:

  1. local geth node
  2. infura hosted geth node
  3. chainstack hosted geth node
  4. quicknode hosted geth node

Observations:

Throughput of 3 and 4 is the fastest and matched my impression of my local throughput prior to the Dencun upgrade.
Throughput of 2 is a lot slower than 3 & 4, but still faster than my local geth node.
Throughput of 1 is the slowest.

Local pendingLog Throughput

local-pendinglogs.mp4

Chainstack pendingLog Throughput

chainstack-pendinglogs.mp4

> check the contents of your txpool

Can you suggest what would be the canonical way to do so?

I tried to approximate the pending txn throughput of the txpool by printing the pending transaction hashes:

func MinimumLoop2() {
	hashCh := make(chan common.Hash)

	// gethProvider: a client connected to the local geth ws endpoint that
	// exposes the newPendingTransactions subscription.
	localSub, err := gethProvider.SubscribePendingTransactions(
		context.Background(),
		hashCh,
	)
	if err != nil {
		log.Fatalf("Failed to call SubscribePendingTransactions. err=%v\n", err)
	}
	for {
		select {
		case hash := <-hashCh:
			fmt.Println(hash)
		case err := <-localSub.Err():
			log.Fatalln(err)
		}
	}
}

Local PendingTx

local-pendingTx.mp4

Chainstack PendingTx

chainstack-pendingTx.mp4

The pending txn throughput through my local tx pool looks to be normal.

Here is the config.toml of my geth node:

# Note: this config doesn't contain the genesis block.

[Eth]
NetworkId = 1
SyncMode = "snap"
EthDiscoveryURLs = ["enrtree://AKA3AM6LPBYEUDMVNU3BSVQJ5AD45Y7YPOHJLEF6W26QOE4VTUDPE@all.mainnet.ethdisco.net"]
SnapDiscoveryURLs = ["enrtree://AKA3AM6LPBYEUDMVNU3BSVQJ5AD45Y7YPOHJLEF6W26QOE4VTUDPE@all.mainnet.ethdisco.net"]
NoPruning = false
NoPrefetch = false
TxLookupLimit = 2350000
TransactionHistory = 2350000
StateHistory = 90000
LightPeers = 100
DatabaseCache = 512
DatabaseFreezer = ""
TrieCleanCache = 154
TrieDirtyCache = 256
TrieTimeout = 3600000000000
SnapshotCache = 102
Preimages = false
FilterLogCacheSize = 32
EnablePreimageRecording = false
RPCGasCap = 1000000000
RPCEVMTimeout = 5000000000
RPCTxFeeCap = 1e+00

[Eth.Miner]
GasCeil = 30000000
GasPrice = 1000000000
Recommit = 2000000000

[Eth.TxPool]
Locals = []
NoLocals = false
Journal = "transactions.rlp"
Rejournal = 3600000000000
PriceLimit = 1
PriceBump = 10
AccountSlots = 16
GlobalSlots = 50000
AccountQueue = 64
GlobalQueue = 10000
Lifetime = 10800000000000

[Eth.BlobPool]
Datadir = "blobpool"
Datacap = 2684354560
PriceBump = 100

[Eth.GPO]
Blocks = 20
Percentile = 60
MaxHeaderHistory = 1024
MaxBlockHistory = 1024
MaxPrice = 500000000000
IgnorePrice = 2

[Node]
DataDir = "/root/.ethereum"
IPCPath = "geth.ipc"
HTTPHost = ""
HTTPPort = 8545
HTTPVirtualHosts = ["localhost"]
HTTPModules = ["net", "web3", "eth"]
AuthAddr = "localhost"
AuthPort = 8551
AuthVirtualHosts = ["localhost"]
WSHost = ""
WSPort = 8546
WSModules = ["net", "web3", "eth"]
GraphQLVirtualHosts = ["localhost"]
BatchRequestLimit = 1000
BatchResponseMaxSize = 25000000

[Node.P2P]
MaxPeers = 200
NoDiscovery = false
DiscoveryV4 = true
BootstrapNodes = ["enode://d860a01f9722d78051619d1e2351aba3f43f943f6f00718d1b9baa4101932a1f5011f16bb2b1bb35db20d6fe28fa0bf09636d26a87d31de9ec6203eeedb1f666@18.138.108.67:30303", "enode://22a8232c3abc76a16ae9d6c3b164f98775fe226f0917b0ca871128a74a8e9630b458460865bab457221f1d448dd9791d24c4e5d88786180ac185df813a68d4de@3.209.45.79:30303", "enode://2b252ab6a1d0f971d9722cb839a42cb81db019ba44c08754628ab4a823487071b5695317c8ccd085219c3a03af063495b2f1da8d18218da2d6a82981b45e6ffc@65.108.70.101:30303", "enode://4aeb4ab6c14b23e2c4cfdce879c04b0748a20d8e9b59e25ded2a08143e265c6c25936e74cbc8e641e3312ca288673d91f2f93f8e277de3cfa444ecdaaf982052@157.90.35.166:30303"]
BootstrapNodesV5 = ["enr:-KG4QMOEswP62yzDjSwWS4YEjtTZ5PO6r65CPqYBkgTTkrpaedQ8uEUo1uMALtJIvb2w_WWEVmg5yt1UAuK1ftxUU7QDhGV0aDKQu6TalgMAAAD__________4JpZIJ2NIJpcIQEnfA2iXNlY3AyNTZrMaEDfol8oLr6XJ7FsdAYE7lpJhKMls4G_v6qQOGKJUWGb_uDdGNwgiMog3VkcIIjKA", "enr:-KG4QF4B5WrlFcRhUU6dZETwY5ZzAXnA0vGC__L1Kdw602nDZwXSTs5RFXFIFUnbQJmhNGVU6OIX7KVrCSTODsz1tK4DhGV0aDKQu6TalgMAAAD__________4JpZIJ2NIJpcIQExNYEiXNlY3AyNTZrMaECQmM9vp7KhaXhI-nqL_R0ovULLCFSFTa9CPPSdb1zPX6DdGNwgiMog3VkcIIjKA", "enr:-Ku4QImhMc1z8yCiNJ1TyUxdcfNucje3BGwEHzodEZUan8PherEo4sF7pPHPSIB1NNuSg5fZy7qFsjmUKs2ea1Whi0EBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpD1pf1CAAAAAP__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQOVphkDqal4QzPMksc5wnpuC3gvSC8AfbFOnZY_On34wIN1ZHCCIyg", "enr:-Ku4QP2xDnEtUXIjzJ_DhlCRN9SN99RYQPJL92TMlSv7U5C1YnYLjwOQHgZIUXw6c-BvRg2Yc2QsZxxoS_pPRVe0yK8Bh2F0dG5ldHOIAAAAAAAAAACEZXRoMpD1pf1CAAAAAP__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQMeFF5GrS7UZpAH2Ly84aLK-TyvH-dRo0JM1i8yygH50YN1ZHCCJxA", "enr:-Ku4QPp9z1W4tAO8Ber_NQierYaOStqhDqQdOPY3bB3jDgkjcbk6YrEnVYIiCBbTxuar3CzS528d2iE7TdJsrL-dEKoBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpD1pf1CAAAAAP__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQMw5fqqkw2hHC4F5HZZDPsNmPdB1Gi8JPQK7pRc9XHh-oN1ZHCCKvg", "enr:-Le4QPUXJS2BTORXxyx2Ia-9ae4YqA_JWX3ssj4E_J-3z1A-HmFGrU8BpvpqhNabayXeOZ2Nq_sbeDgtzMJpLLnXFgAChGV0aDKQtTA_KgEAAAAAIgEAAAAAAIJpZIJ2NIJpcISsaa0Zg2lwNpAkAIkHAAAAAPA8kv_-awoTiXNlY3AyNTZrMaEDHAD2JKYevx89W0CcFJFiskdcEzkH_Wdv9iW42qLK79ODdWRwgiMohHVkcDaCI4I", "enr:-Le4QLHZDSvkLfqgEo8IWGG96h6mxwe_PsggC20CL3neLBjfXLGAQFOPSltZ7oP6ol54OvaNqO02Rnvb8YmDR274uq8ChGV0aDKQtTA_KgEAAAAAIgEAAAAAAIJpZIJ2NIJpcISLosQxg2lwNpAqAX4AAAAAAPA8kv_-ax65iXNlY3AyNTZrMaEDBJj7_dLFACaxBfaI8KZTh_SSJUjhyAyfshimvSqo22WDdWRwgiMohHVkcDaCI4I", "enr:-Le4QH6LQrusDbAHPjU_HcKOuMeXfdEB5NJyXgHWFadfHgiySqeDyusQMvfphdYWOzuSZO9Uq2AMRJR5O4ip7OvVma8BhGV0aDKQtTA_KgEAAAAAIgEAAAAAAIJpZIJ2NIJpcISLY9ncg2lwNpAkAh8AgQIBAAAAAAAAAAmXiXNlY3AyNTZrMaECDYCZTZEksF-kmgPholqgVt8IXr-8L7Nu7YrZ7HUpgxmDdWRwgiMohHVkcDaCI4I", "enr:-Le4QIqLuWybHNONr933Lk0dcMmAB5WgvGKRyDihy1wHDIVlNuuztX62W51voT4I8qD34GcTEOTmag1bcdZ_8aaT4NUBhGV0aDKQtTA_KgEAAAAAIgEAAAAAAIJpZIJ2NIJpcISLY04ng2lwNpAkAh8AgAIBAAAAAAAAAA-fiXNlY3AyNTZrMaEDscnRV6n1m-D9ID5UsURk0jsoKNXt1TIrj8uKOGW6iluDdWRwgiMohHVkcDaCI4I", "enr:-Ku4QHqVeJ8PPICcWk1vSn_XcSkjOkNiTg6Fmii5j6vUQgvzMc9L1goFnLKgXqBJspJjIsB91LTOleFmyWWrFVATGngBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhAMRHkWJc2VjcDI1NmsxoQKLVXFOhp2uX6jeT0DvvDpPcU8FWMjQdR4wMuORMhpX24N1ZHCCIyg", "enr:-Ku4QG-2_Md3sZIAUebGYT6g0SMskIml77l6yR-M_JXc-UdNHCmHQeOiMLbylPejyJsdAPsTHJyjJB2sYGDLe0dn8uYBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhBLY-NyJc2VjcDI1NmsxoQORcM6e19T1T9gi7jxEZjk_sjVLGFscUNqAY9obgZaxbIN1ZHCCIyg", "enr:-Ku4QPn5eVhcoF1opaFEvg1b6JNFD2rqVkHQ8HApOKK61OIcIXD127bKWgAtbwI7pnxx6cDyk_nI88TrZKQaGMZj0q0Bh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhDayLMaJc2VjcDI1NmsxoQK2sBOLGcUb4AwuYzFuAVCaNHA-dy24UuEKkeFNgCVCsIN1ZHCCIyg", "enr:-Ku4QEWzdnVtXc2Q0ZVigfCGggOVB2Vc1ZCPEc6j21NIFLODSJbvNaef1g4PxhPwl_3kax86YPheFUSLXPRs98vvYsoBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhDZBrP2Jc2VjcDI1NmsxoQM6jr8Rb1ktLEsVcKAPa08wCsKUmvoQ8khiOl_SLozf9IN1ZHCCIyg", "enr:-LK4QA8FfhaAjlb_BXsXxSfiysR7R52Nhi9JBt4F8SPssu8hdE1BXQQEtVDC3qStCW60LSO7hEsVHv5zm8_6Vnjhcn0Bh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhAN4aBKJc2VjcDI1NmsxoQJerDhsJ-KxZ8sHySMOCmTO6sHM3iCFQ6VMvLTe948MyYN0Y3CCI4yDdWRwgiOM", 
"enr:-LK4QKWrXTpV9T78hNG6s8AM6IO4XH9kFT91uZtFg1GcsJ6dKovDOr1jtAAFPnS2lvNltkOGA9k29BUN7lFh_sjuc9QBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhANAdd-Jc2VjcDI1NmsxoQLQa6ai7y9PMN5hpLe5HmiJSlYzMuzP7ZhwRiwHvqNXdoN0Y3CCI4yDdWRwgiOM"]
StaticNodes = []
TrustedNodes = []
ListenAddr = ":30303"
DiscAddr = ""
EnableMsgEvents = false

[Node.HTTPTimeouts]
ReadTimeout = 30000000000
ReadHeaderTimeout = 30000000000
WriteTimeout = 30000000000
IdleTimeout = 120000000000

[Metrics]
HTTP = "127.0.0.1"
Port = 6060
InfluxDBEndpoint = "http://localhost:8086"
InfluxDBDatabase = "geth"
InfluxDBUsername = "test"
InfluxDBPassword = "test"
InfluxDBTags = "host=localhost"
InfluxDBToken = "test"
InfluxDBBucket = "geth"
InfluxDBOrganization = "geth"

@MariusVanDerWijden
Member

Are you running the current master? We recently changed how pending subscriptions work, which makes them a lot slower.

@nakedkoalacapital
Author

Yes, I am running a382917.

In which commit was the change made?

Is the slowdown permanent? (i.e. will the next release ship the slow version?)

@MariusVanDerWijden
Member

Yep, it was changed in #28623.
And it will stay that way. You might be able to speed it up a bit by setting --miner.recommit to something small, though that will use more CPU.
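(Illustrative sketch of that tweak, not a recommendation: pass a smaller duration to --miner.recommit on the command line, or lower the value in the config.toml shared above. Recommit is expressed in nanoseconds there, so the 2000000000 above is 2s; the value below is purely an example.)

[Eth.Miner]
Recommit = 500000000   # rebuild the pending block every 0.5s instead of 2s; costs more CPU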

@nakedkoalacapital
Author

I tried playing with the recommit value.

If I reduce it by a factor of 5, the pending logs are still very slow.

If I reduce it by a factor of 10 (from the 2s Recommit in my config down to 200ms), I can't get any pending logs at all, and I see
WARN [04-03|09:59:38.714] Block building is interrupted allowance=200ms in the logs.

Looks like the only way to get the old throughput is to create a fork myself.

Thanks for the info though. Closing this issue.

@rjl493456442
Member

Are you sure your local geth is fully synced?

@nakedkoalacapital
Author

Yes, the block height looks right. The JavaScript console says it is not syncing, and the logs don't indicate anything funny going on.

Is the pending log subscription throughput abnormally slow even considering the changes made in #28623?

@rjl493456442
Member

rjl493456442 commented Apr 3, 2024

I will check it out; the pending throughput is expected to be higher.

@rjl493456442
Member

rjl493456442 commented Apr 8, 2024

Originally, pending logs were streamed out whenever they became available. Specifically, pending logs were emitted as new transactions arrived and were packed into the pending block.

In the recent miner rewrite, pending logs are only emitted once a new chain head is produced, using the transactions available in the pool at that moment. During the next 12-second time window, newly arrived transactions are silently ignored.

Will discuss in the issue triage.

@nakedkoalacapital
Author

Is the current throughput considered unacceptable / buggy? If so, I'd expect a remediation in the future so that I don't have to spend my time forking it.

My use case is that I need to react to live pending logs as soon as they arrive. The current throughput makes geth unusable for that purpose.

@holiman
Contributor

holiman commented Apr 9, 2024

This was never reliable, and often not useful, since it only ever spat out events for the ~200 txs that could be included in the pending block. It would not emit logs for everything going into the pool.

With the rewrite, this subscription is now moot, and we should fully remove support for it.
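(Side note for readers with the same use case after the removal — an editor's sketch, not advice from this thread: pending logs cannot be obtained without executing the transactions, but a rough substitute is to subscribe to full pending transactions and filter them client-side. This assumes the endpoint supports the full-transaction form of newPendingTransactions exposed via gethclient; the target address below is hypothetical.)

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient/gethclient"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	rpcClient, err := rpc.Dial("ws://localhost:8546")
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer rpcClient.Close()

	gc := gethclient.New(rpcClient)
	txCh := make(chan *types.Transaction)
	sub, err := gc.SubscribeFullPendingTransactions(context.Background(), txCh)
	if err != nil {
		log.Fatalf("subscribe failed: %v", err)
	}
	defer sub.Unsubscribe()

	// Hypothetical contract of interest; replace with a real address.
	target := common.HexToAddress("0x0000000000000000000000000000000000000000")

	for {
		select {
		case tx := <-txCh:
			// A pending tx has no logs yet, so filter on the recipient (or calldata) instead.
			if tx.To() != nil && *tx.To() == target {
				fmt.Println("pending tx to target:", tx.Hash().Hex())
			}
		case err := <-sub.Err():
			log.Fatalln(err)
		}
	}
}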

@Turrkey

Turrkey commented Apr 13, 2024

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/ethereum/go-ethereum/rpc"
)

const (
	LOCAL_GETH_WS_CONN = "ws://localhost:8546"
)

func GetProvider(conn string) *ethclient.Client {
	client, err := rpc.Dial(conn)
	if err != nil {
		log.Fatalf("failed to connect to the geth node via ws. err=%v conn=%v\n", err, conn)
	}
	return ethclient.NewClient(client)
}

func MinimumLoop() {
	localCh := make(chan types.Log)
	provider := GetProvider(LOCAL_GETH_WS_CONN)
	defer provider.Close()

	localSub, err := provider.SubscribeFilterLogs(context.Background(), ethereum.FilterQuery{
		FromBlock: nil,
		ToBlock:   nil,
		Addresses: []common.Address{},
		Topics:    [][]common.Hash{},
	}, localCh)
	if err != nil {
		log.Fatalf("subscription failed: %v", err)
	}
	defer localSub.Unsubscribe()

	for {
		select {
		case logLocal := <-localCh:
			fmt.Println(logLocal.TxHash.Hex())
		case err := <-localSub.Err():
			log.Fatalf("subscription error: %v", err)
		}
	}
}

func main() {
	MinimumLoop()
}

The code uses rpc.Dial and wraps the connection with ethclient.NewClient (which is what ethclient.Dial does internally). Additionally, it properly closes the provider and unsubscribes from the subscription to avoid resource leaks.

@holiman
Contributor

holiman commented Apr 18, 2024

Assigned to @fjl to remove it

@fjl
Contributor

fjl commented Apr 18, 2024

Removing pending logs: #29574

@holiman
Contributor

holiman commented Apr 24, 2024

Closing this, we've yeeted pending logs now

@holiman holiman closed this as completed Apr 24, 2024