tip: 343
title: TIP-343 Refine Write Parameters For Leveldb
author: [email protected]
discussions-to: https://github.com/tronprotocol/TIPs/issues/343
status: Final
type: Standards Track
category: Core
created: 2021-11-26

Simple Summary

This TIP optimizes LevelDB read performance by tuning database parameters.

Abstract

Analysis shows that read performance is mainly affected by write_buffer_size, max_open_files, block_cache, and similar parameters. We analyze the existing databases, classify them according to read frequency, and set appropriate parameters for each type.

Specification

write_buffer_size

The amount of data to build up in memory before it is flushed to a sorted on-disk file. Larger values improve both read and write performance.

max_open_files

The number of open files that can be used by the DB. Increasing it improves read performance, because more table files can stay open in LevelDB's table cache.

Attention: before increasing this value, you may need to raise the process's open-file limit (e.g. with `ulimit -n`) first.

block_cache

The cache for uncompressed data blocks. Increasing it improves read performance.
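In java-tron these parameters are exposed per database through the storage configuration; the key names writeBufferSize, cacheSize, and maxOpenFiles from the example template in the Implementation section appear to correspond to write_buffer_size, block_cache, and max_open_files. As a minimal sketch, a single per-database entry could look like:

    {
      name = "account",            // database name
      writeBufferSize = 10485760,  // write_buffer_size: 10 MB memtable
      cacheSize = 10485760,        // block_cache: 10 MB uncompressed block cache
      maxOpenFiles = 100           // max_open_files
    }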

Motivation

Improve LevelDB read performance by tuning the database parameter max_open_files.

Implementation

We have divided the existing databases into three groups.

    1. Group A

      The vast majority of databases, with very little data and low read frequency.

    2. Group B

      The contract and code DBs, with little data and normal read frequency.

    3. Group C

      The account, delegation, and storage-row DBs, with large data and high read frequency.

The following settings are based on a machine with an 8-core CPU, 16 GB RAM, a 1 TB HDD, and ulimit set to 65535.

In production, max_open_files is currently set to 100 for all DBs, and the java-tron process holds around 1000 database SST files.

max_open_files

    1. low read frequency

      max_open_files = 100 for Group A

    2. normal read frequency

      max_open_files = 500 for Group B

    3. high read frequency

      max_open_files = 1000 for Group C

how to set

Add a default block for the base settings (Group A), defaultM for Group B, and defaultL for Group C to the storage section.

Attention: if you set per-database parameters in properties = [], they override the three default settings above (see the sketch after the example config below).

For example:

storage {
    # Directory for storing persistent data
    db.version = 2,
    db.engine = "LEVELDB",
    db.directory = "database",
    index.directory = "index",

    # This configuration item is only for SolidityNode.
    # To turn off the index, set "off"; otherwise "on".
    # Turning off the index will significantly improve the performance of the SolidityNode sync block.
    # You can turn off the index if you don't use the two interfaces getTransactionsToThis and getTransactionsFromThis.
    index.switch = "off"

    # This configuration item is used to set the database write strategy.
    # Synchronous writing is "true"; asynchronous writing is "false".
    # Asynchronous writing significantly improves the performance of the FullNode sync block.
    # 1. If asynchronous, the write will be flushed from the operating system buffer cache. If
    #    the machine crashes, some recent writes may be lost. Note that if it is just the process that
    #    crashes (i.e., the machine does not reboot), no writes will be lost;
    # 2. If synchronous, writes will be flushed into leveldb directly.
    #    No writes will be lost when the machine crashes, but it is slow.
    db.sync = false,

    # This configuration item controls whether the transaction result is put into the transactionHistory database.
    # To turn off the switch, set "off"; otherwise "on".
    # If the switch is turned off, transaction results will not be put into the transactionHistory database;
    # You can turn off the switch if you don't use the interface getTransactionInfoById.
    transHistory.switch = "on",
    ########### add new start ##########
    # custom default configs for Group A (low read frequency)
    default = {
        maxOpenFiles = 100
    }
    # custom defaults for Group B (normal read frequency): "code", "contract"
    defaultM = {
        maxOpenFiles = 500
    }
    # custom defaults for Group C (high read frequency): "account", "delegation", "storage-row"
    defaultL = {
        maxOpenFiles = 1000
    }
    ########### add new end ##########
    # You can customize these 14 databases' configs:
    properties = [
    //    {
    //      name = "account",
    //      path = "storage_directory_test",
    //      createIfMissing = true,
    //      paranoidChecks = true,
    //      verifyChecksums = true,
    //      compressionType = 1,        // compressed with snappy
    //      blockSize = 4096,           // 4  KB =         4 * 1024 B
    //      writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
    //      cacheSize = 10485760,       // 10 MB = 10 * 1024 * 1024 B
    //      maxOpenFiles = 100
    //    },
    //    {
    //      name = "account-index",
    //      path = "storage_directory_test",
    //      createIfMissing = true,
    //      paranoidChecks = true,
    //      verifyChecksums = true,
    //      compressionType = 1,        // compressed with snappy
    //      blockSize = 4096,           // 4  KB =         4 * 1024 B
    //      writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
    //      cacheSize = 10485760,       // 10 MB = 10 * 1024 * 1024 B
    //      maxOpenFiles = 100
    //    },
    ]

}
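As noted above, an explicit entry in properties overrides the group defaults for that database. For illustration only (the value 1500 is a hypothetical example, not a recommendation), raising the limit for the account DB beyond defaultL would look like:

    properties = [
      {
        name = "account",      // must match the database name
        maxOpenFiles = 1500    // overrides defaultL.maxOpenFiles = 1000 for this DB only
      }
    ]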

Performance Testing

block count: 35880.

default.maxOpenFiles:100, defaultM.maxOpenFiles:500, defaultL.maxOpenFiles:1000.

We use arthas to monitor LevelDB read performance.

| Strategy | dbRead (ms) | pushBlockAvg (ms) | SST files | mem (%) |
| --- | --- | --- | --- | --- |
| base | 0.05 | 201.03 | 1000 | 71.3 |
| optimize | 0.12 | 164.47 | 4100 | 72.1 |

Average pushBlock cost before optimization: 201.03 ms; after optimization: 164.47 ms. Performance improvement: (201.03 - 164.47) / 201.03 ≈ 18.18%.

Finally, you can increase or decrease maxOpenFiles depending on your situation.

Rationale

A single max_open_files value is not suitable for every DB; databases with different data volumes and read frequencies benefit from different settings, so the defaults are split into three groups.