From e168bfe2413892f07dc0fd4b693b80e6582fd67d Mon Sep 17 00:00:00 2001
From: Dave Kelsey
Date: Tue, 10 Dec 2024 15:14:17 +0000
Subject: [PATCH] Fixes to 0.6.0 with MkDocs 1.6.1 and mike 2.1.2 (#1672)

Signed-off-by: Dave Kelsey
Co-authored-by: Dave Kelsey
---
 .gitignore                                   |   2 ++
 0.6.0/concepts/caliper-messengers/index.html |   2 +-
 0.6.0/concepts/logging-control/index.html    |   2 +-
 0.6.0/concepts/rate-controllers/index.html   |   2 +-
 0.6.0/connectors/ethereum-config/index.html  |   2 +-
 0.6.0/connectors/fabric-config/index.html    |   4 ++--
 0.6.0/search/search_index.json               |   2 +-
 0.6.0/sitemap.xml.gz                         | Bin 127 -> 127 bytes
 8 files changed, 9 insertions(+), 7 deletions(-)
 create mode 100644 .gitignore

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000000000..83894b617
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+node_modules
+/docs
diff --git a/0.6.0/concepts/caliper-messengers/index.html b/0.6.0/concepts/caliper-messengers/index.html
index 680b9728e..5d768fd62 100644
--- a/0.6.0/concepts/caliper-messengers/index.html
+++ b/0.6.0/concepts/caliper-messengers/index.html
@@ -1,4 +1,4 @@
- Messengers - Hyperledger Caliper

Messengers

Overview

Caliper uses an orchestrator to control workers that interact with the SUT to perform a benchmark. Messages are passed between the orchestrator and all workers to keep the workers synchronized and to progress the specified benchmark tests. A user may specify the messaging protocol used by Caliper to facilitate communication between the orchestrator and workers.

Messengers

The messaging protocol to be used for communications between the orchestrator and worker during a benchmark is declared in the caliper runtime configuration file. Unspecified values will default to those specified in the default configuration file.

Permitted messengers are:

  • Process: The process messenger is the default messenger and is based on native NodeJS process-based communication. This messenger type is only valid when local workers are used to perform a benchmark.
  • MQTT: The mqtt messenger uses MQTT to facilitate communication between the orchestrator and workers. This messenger type is valid for both local and distributed workers, and assumes the existence of an available MQTT broker service, such as Mosquitto.

Note

Mosquitto v2 requires explicit authorization and authentication configurations, which is a breaking change compared to v1. To migrate to v2, follow the official migration guide of Mosquitto.

The following YAML extract specifies the use of an MQTT communication method, using an existing MQTT broker that may be connected to via the specified address:

    worker:
+ Messengers - Hyperledger Caliper      

Messengers

Overview

Caliper uses an orchestrator to control workers that interact with the SUT to perform a benchmark. Messages are passed between the orchestrator and all workers to keep the workers synchronized and to progress the specified benchmark tests. A user may specify the messaging protocol used by Caliper to facilitate communication between the orchestrator and workers.

Messengers

The messaging protocol to be used for communications between the orchestrator and worker during a benchmark is declared in the caliper runtime configuration file. Unspecified values will default to those specified in the default configuration file.

Permitted messengers are:

  • Process: The process messenger is the default messenger and is based on native NodeJS process-based communication. This messenger type is only valid when local workers are used to perform a benchmark.
  • MQTT: The mqtt messenger uses MQTT to facilitate communication between the orchestrator and workers. This messenger type is valid for both local and distributed workers, and assumes the existence of an available MQTT broker service, such as Mosquitto.

Note

Mosquitto v2 requires explicit authorization and authentication configurations, which is a breaking change compared to v1. To migrate to v2, follow the official migration guide of Mosquitto.

The following YAML extract specifies the use of an MQTT communication method, using an existing MQTT broker that may be connected to via the specified address:

    worker:
        communication:
            method: mqtt
            address: mqtt://localhost:1883
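
If not specifying a caliper.yaml configuration file, the above may be specified as command line arguments to the CLI process as:

    --caliper-worker-communication-method mqtt --caliper-worker-communication-address mqtt://localhost:1883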
diff --git a/0.6.0/concepts/logging-control/index.html b/0.6.0/concepts/logging-control/index.html
index 6828f3d1f..9a7d228be 100644
--- a/0.6.0/concepts/logging-control/index.html
+++ b/0.6.0/concepts/logging-control/index.html
@@ -1,4 +1,4 @@
- Logging Control - Hyperledger Caliper      

Logging Control

Overview

Caliper builds on the winston logger module to provide a flexible, multi-target logging mechanism. There are three aspects of interacting with the Caliper logging subsystem:

  1. Customizing the logging style
  2. Configuring logging targets
  3. Creating your own loggers

The first two points can be achieved through the runtime configuration mechanism of Caliper, so make sure that you are familiar with the different ways of overriding runtime settings before reading on. The examples below only set the different options through the command line. Naturally, any other setting source could be used.

The runtime configuration settings corresponding to logging reside under the caliper.logging key hierarchy. See the caliper.logging section of the default configuration file bundled with Caliper for the general structure of the settings.

Customizing the logging style

The two main aspects of the logging style are the message structure and the different formats that modify the message appearance if applied. The corresponding attributes are the caliper.logging.template property and the entire caliper.logging.formats property hierarchy, respectively.

The caliper.logging.formats hierarchy is special in the sense that every leaf property can be overridden one-by-one, even from the command line or from environment variables. As you will see later, this is not the case for the logging target settings.

Note

the following style settings apply to every specified logging target!

Setting the message structure

The message structure can be easily customized through the caliper.logging.template property. It is a simple string that contains predefined placeholders for some special values. Some placeholders are only available when a corresponding format is also applied.

Let’s start with examining the default structure:

caliper:
+ Logging Control - Hyperledger Caliper      

Logging Control

Overview

Caliper builds on the winston logger module to provide a flexible, multi-target logging mechanism. There are three aspects of interacting with the Caliper logging subsystem:

  1. Customizing the logging style
  2. Configuring logging targets
  3. Creating your own loggers

The first two points can be achieved through the runtime configuration mechanism of Caliper, so make sure that you are familiar with the different ways of overriding runtime settings before reading on. The examples below only set the different options through the command line. Naturally, any other setting source could be used.

The runtime configuration settings corresponding to logging reside under the caliper.logging key hierarchy. See the caliper.logging section of the default configuration file bundled with Caliper for the general structure of the settings.

Customizing the logging style

The two main aspects of the logging style are the message structure and the different formats that modify the message appearance if applied. The corresponding attributes are the caliper.logging.template property and the entire caliper.logging.formats property hierarchy, respectively.

The caliper.logging.formats hierarchy is special in the sense that every leaf property can be overridden one-by-one, even from the command line or from environment variables. As you will see later, this is not the case for the logging target settings.

Note

the following style settings apply to every specified logging target!

Setting the message structure

The message structure can be easily customized through the caliper.logging.template property. It is a simple string that contains predefined placeholders for some special values. Some placeholders are only available when a corresponding format is also applied.

Let’s start with examining the default structure:

caliper:
   logging:
     template: '%timestamp% %level% [%label%] [%module%] %message% (%metadata%)'
 

The following placeholders are available at the moment.

Placeholder Required format Description
%timestamp% timestamp Will be replaced with the timestamp of the log message.
%level% - Will be replaced with the severity level (e.g., info, warn, error) of the log message.
%label% label Will be replaced with the configured label of the process.
%module% - Will be replaced with the module name that logged the message.
%message% - Will be replaced with the actual message.
%metadata% - Will be replaced with the string representation of additional logging arguments.

You can override this template by changing the caliper-logging-template setting key, for example, from the command line: --caliper-logging-template="%time%: %message%"

Note:
  1. Do not forget the two enclosing quotes, since the template can contain spaces!
  2. This template is applied after every format has been applied!
  3. Adding spaces and different brackets this way is fine for simple coloring scenarios (or when coloring is disabled). However, when coloring the entire log message (or just parts that should be surrounded with additional characters), the result looks inconsistent when formatted this way. See the Tips & Tricks section for advanced message formatting scenarios.

Applying formats

The logging subsystem relies on winston’s format mechanism to further modify the log messages. The corresponding settings are under the caliper.logging.formats property.

Each of these formats can be easily disabled by setting its property to false. For example, to disable the colorize format, set the corresponding caliper.logging.formats.colorize property to false from the command line: --caliper-logging-formats-colorize=false

Similarly, any sub-property of a format can be easily overridden. For example, changing the caliper.logging.formats.colorize.colors.info property from the command line: --caliper-logging-formats-colorize-colors-info=blue
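
The same overrides can also live in the runtime configuration file instead of the command line; a minimal sketch, assuming the caliper.logging.formats key hierarchy described above:

    caliper:
      logging:
        formats:
          colorize:
            colors:
              info: blue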

The following formats and their options (sub-properties) are supported.

Note

the different formats are applied in the order they are presented, which is important (see the Tips & Tricks section for the reason).

Timestamp

Adds the timestamp to the message in the specified format. The format string must conform to the rules of the fecha package.

For example: --caliper-logging-formats-timestamp="YYYY.MM.DD-HH:mm:ss.SSS"

Note

the format makes the timestamp attribute available in the message, thus it can be referenced in the message template, or in other formats that can access message attributes.

Label

Adds a custom label to the message. This is useful for differentiating multiple Caliper instances (or the distributed client instances) after collecting their logs.

For example: --caliper-logging-formats-label="caliper-test-1"

Note

the format makes the label attribute available in the message, thus it can be referenced in the message template, or in other formats that can access message attributes.

JSON

Outputs the messages as JSON strings. Useful for file-based logs that will be processed automatically by another tool. The format accepts a space sub-property as an option, which corresponds to the space parameter of the JSON.stringify function.

For example: --caliper-logging-formats-json="{space:0}"

Note:
  1. Enabling this format is easier from a configuration file. See the Tips & Tricks section.
  2. Setting space to a non-zero number will effectively format the JSON output with indentations on multiple lines. This could “spam” the console a bit (not a problem for log files, unless you care about the extra newlines).
  3. If this format is enabled, the rest of the formats won’t be applied, since their purpose is mainly to make console logs more readable.
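
As note 1 above suggests, enabling this format is more convenient from a configuration file; a minimal sketch, assuming the property names described above:

    caliper:
      logging:
        formats:
          json:
            space: 0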

Padding

Makes every log level string the same length, i.e., adds an extra space after "info" and "warn" to make them the same length as "error" and "debug".

For example: --caliper-logging-formats-pad=true

Align

Prepends the message part of the log with a tabulator ("\t") to align the messages of different logs in the same place.

For example: --caliper-logging-formats-align=true

Note

if the message format contains other information with variable lengths (e.g., the module name), it can cause misaligned messages. So this is just a “best effort” format to make console messages more readable.

Attribute format

Defines string formatting options for the different attributes of a message. A “format string” can be provided for each message attribute that will “reformat” its value. The format string can use the %attribute% placeholder to reference the original value.

A format string can be specified for the following message attributes:

  • timestamp
  • level
  • label
  • module
  • message
  • metadata

For example, to customize the level information of the log (enclose it in the LEVEL[<level>] string):

--caliper-logging-formats-attributeformat-level="LEVEL[%attribute%]"
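
The same setting could also be expressed in a configuration file; a minimal sketch, assuming the key hierarchy described above:

    caliper:
      logging:
        formats:
          attributeformat:
            level: 'LEVEL[%attribute%]'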

Note

if the attribute is not a string (which can be the case for the “metadata” attribute), then the attribute value is first converted to a string using JSON.stringify, and then inserted into the format string.

Colorize

Applies color coding for the different attributes of a message. Enabling/disabling coloring is specified on an attribute basis. The following sub-properties can be set to true/false to enable/disable coloring for the corresponding attribute:

  • timestamp
  • level
  • label
  • module
  • message
  • metadata
  • all: setting it to true enables coloring for every attribute

For example, to colorize every part of the message: --caliper-logging-formats-colorize-all=true

Additionally, the format exposes a colors attribute, which contains coloring information for the info, error, warn and debug levels. The value of a level can be set to colors and styles provided by the colors package. To apply multiple styles, separate the values with a space.

For example, to really highlight error-level logs: --caliper-logging-formats-colorize-colors-error="white bgRed bold italic"

Note

the colors package offers some exotic styles which seem tempting at first, but don’t overdo it, for the sake of your eyes. Keep it simple.

Configuring logging targets

The source and target(s) of log messages are decoupled, thanks to the transport mechanism of winston. This means that a log message can be easily logged to multiple places, like the console, or different log files. Moreover, this is completely transparent to the module generating the log message!

The different targets are specified under the caliper.logging.targets property. The caliper.logging.targets section takes the following general form:

caliper:
diff --git a/0.6.0/concepts/rate-controllers/index.html b/0.6.0/concepts/rate-controllers/index.html
index 930443096..bcd251216 100644
--- a/0.6.0/concepts/rate-controllers/index.html
+++ b/0.6.0/concepts/rate-controllers/index.html
@@ -110,7 +110,7 @@
     "defaultSleepTime": 50
     }
 }
-

The replay rate controller can be specified by setting the rate controller type to the replay-rate string. The available options (opts property) are the following:

  • pathTemplate: the template for the file path where the transaction timings will be replayed from. The path can be either an absolute path or relative to the root Caliper directory.

The template can (and should) contain special “variables/placeholders” that can refer to special environment properties (see the remarks at the record rate controller). The available placeholders are the following:

    • <C>: placeholder for the 1-based index of the current client that uses this rate controller.
    • <R>: placeholder for the 1-based index of the current round that uses this rate controller.

  • inputFormat: optional. Determines the format in which the transaction timings are stored (see the details at the record rate controller). Defaults to "TEXT". The currently supported formats are the following:
    • "TEXT": each recorded timing is encoded as text on separate lines.
    • "BIN_BE": binary format with Big Endian encoding.
    • "BIN_LE": binary format with Little Endian encoding.
  • logWarnings: optional. Indicates whether to log that there are no more recordings to replay, so the defaultSleepTime is used between consecutive transactions. Defaults to false.
  • defaultSleepTime: optional. Determines the sleep time between transactions for the case when the benchmark execution is longer than the specified recording. Defaults to 20 ms.

About the recordings:

Special care must be taken when using duration-based benchmark execution, as it is possible to issue more transactions than the recording specifies. The defaultSleepTime option is a safety measure for this case. This should only occur in the last few moments of the execution, affecting only a few transactions, which can be discarded before performing additional performance analyses on the results.

The recommended approach is to use transaction number-based round configurations, since the number of transactions to replay is known beforehand. Note that the number of workers affects the actual number of transactions submitted by a worker.

Adding Custom Controllers

It is possible to use rate controllers that are not built-in controllers of Caliper. When you specify the rate controller in the test configuration file (see the architecture documentation), you must set the type and opts attributes.

You can set the type attribute so that it points to your custom JS file that satisfies the following criteria:

  1. The file/module exports a createRateController function that takes the following parameters:

    1. A TestMessage parameter that is the object representation of the opts attribute set in the configuration file, and contains the custom settings of your rate controller.
    2. A TransactionStatisticsCollector object that gives the rate controller access to the current worker transaction statistics.
    3. A workerIndex parameter of type number that is the 0-based index of the worker process using this rate controller. The function must return an object (i.e., your rate controller instance) that satisfies the criteria below.
  2. The object returned by createRateController must implement the /packages/caliper-core/lib/rate-control/rateInterface.js interface, i.e., must provide the following async functions:

    • applyRateControl, for performing the actual rate control by “blocking” the execution (in an async manner) for the desired time.
    • end, for disposing of any acquired resources at the end of a round.

The following example is a complete implementation of a rate controller that doesn’t perform any control, thus allowing transactions to be submitted as fast as the program execution allows (warning: running this implementation with many client processes could easily overload a backend network, so use it with caution).

/*
+

The replay rate controller can be specified by setting the rate controller type to the replay-rate string. The available options (opts property) are the following:

  • pathTemplate: the template for the file path where the transaction timings will be replayed from. The path can be either an absolute path or relative to the root Caliper directory.

The template can (and should) contain special “variables/placeholders” that can refer to special environment properties (see the remarks at the record rate controller). The available placeholders are the following:

    • <C>: placeholder for the 1-based index of the current client that uses this rate controller.
    • <R>: placeholder for the 1-based index of the current round that uses this rate controller.

  • inputFormat: optional. Determines the format in which the transaction timings are stored (see the details at the record rate controller). Defaults to "TEXT". The currently supported formats are the following:
    • "TEXT": each recorded timing is encoded as text on separate lines.
    • "BIN_BE": binary format with Big Endian encoding.
    • "BIN_LE": binary format with Little Endian encoding.
  • logWarnings: optional. Indicates whether to log that there are no more recordings to replay, so the defaultSleepTime is used between consecutive transactions. Defaults to false.
  • defaultSleepTime: optional. Determines the sleep time between transactions for the case when the benchmark execution is longer than the specified recording. Defaults to 20 ms.
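
Putting these options together, a round’s rate controller declaration could look like the following sketch (the pathTemplate value is only illustrative):

    {
      "type": "replay-rate",
      "opts": {
        "pathTemplate": "../tx_records_client<C>_round<R>.txt",
        "inputFormat": "TEXT",
        "logWarnings": true,
        "defaultSleepTime": 50
      }
    }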

About the recordings:

Special care must be taken when using duration-based benchmark execution, as it is possible to issue more transactions than the recording specifies. The defaultSleepTime option is a safety measure for this case. This should only occur in the last few moments of the execution, affecting only a few transactions, which can be discarded before performing additional performance analyses on the results.

The recommended approach is to use transaction number-based round configurations, since the number of transactions to replay is known beforehand. Note that the number of workers affects the actual number of transactions submitted by a worker.

Adding Custom Controllers

It is possible to use rate controllers that are not built-in controllers of Caliper. When you specify the rate controller in the test configuration file (see the architecture documentation), you must set the type and opts attributes.

You can set the type attribute so that it points to your custom JS file that satisfies the following criteria:

  1. The file/module exports a createRateController function that takes the following parameters:

    1. A TestMessage parameter that is the object representation of the opts attribute set in the configuration file, and contains the custom settings of your rate controller.
    2. A TransactionStatisticsCollector object that gives the rate controller access to the current worker transaction statistics.
    3. A workerIndex parameter of type number that is the 0-based index of the worker process using this rate controller. The function must return an object (i.e., your rate controller instance) that satisfies the criteria below.
  2. The object returned by createRateController must implement the /packages/caliper-core/lib/worker/rate-control/rateInterface.js interface, i.e., must provide the following async functions:

    • applyRateControl, for performing the actual rate control by “blocking” the execution (in an async manner) for the desired time.
    • end, for disposing of any acquired resources at the end of a round.

The following example is a complete implementation of a rate controller that doesn’t perform any control, thus allowing transactions to be submitted as fast as the program execution allows (warning: running this implementation with many client processes could easily overload a backend network, so use it with caution).

/*
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
diff --git a/0.6.0/connectors/ethereum-config/index.html b/0.6.0/connectors/ethereum-config/index.html
index 4fd985cc4..023334533 100644
--- a/0.6.0/connectors/ethereum-config/index.html
+++ b/0.6.0/connectors/ethereum-config/index.html
@@ -84,4 +84,4 @@
 }];
 
 await this.sutAdapter.sendRequests(requestsSettings);
-

Currently each method call inside sendRequests is sent separately, that is, the calls are NOT sent as a batch of RPC calls.

To query a contract’s state, set the readOnly attribute to true. The difference is that such a request can’t produce any change on the blockchain, and the node will answer with its local view of the data. As with traditional requests, there is currently no support for batch queries.

Transaction Data Gathered by the Adapter

The previously discussed sendRequests method returns an array whose elements correspond to the result of the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

The standard information provided by the type is the following:

  • GetID():string returns the transaction ID.
  • GetStatus():string returns the final status of the transaction, either success or failed.
  • GetTimeCreate():number returns the epoch when the transaction was submitted.
  • GetTimeFinal():number returns the epoch when the transaction finished.
  • IsCommitted():boolean indicates whether the transaction has been committed successfully.
  • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Always true for successful transactions. False in all other cases.
  • GetResult():any returns the result of the transaction. Only requests with readOnly set to true will return the actual result of the call. Write operations, because they are asynchronous, will return the transaction hash.

License

The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

\ No newline at end of file +

Currently each method call inside sendRequests is sent separately, that is, the calls are NOT sent as a batch of RPC calls.

To query a contract’s state, set the readOnly attribute to true. The difference is that such a request can’t produce any change on the blockchain, and the node will answer with its local view of the data. As with traditional requests, there is currently no support for batch queries.

Transaction Data Gathered by the Adapter

The previously discussed sendRequests method returns an array whose elements correspond to the result of the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

The standard information provided by the type is the following:

  • GetID():string returns the transaction ID.
  • GetStatus():string returns the final status of the transaction, either success or failed.
  • GetTimeCreate():number returns the epoch when the transaction was submitted.
  • GetTimeFinal():number returns the epoch when the transaction finished.
  • IsCommitted():boolean indicates whether the transaction has been committed successfully.
  • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Always true for successful transactions. False in all other cases.
  • GetResult():any returns the result of the transaction. Only requests with readOnly set to true will return the actual result of the call. Write operations, because they are asynchronous, will return the transaction hash.
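
As a brief illustration, a workload module could inspect these values after submission; a minimal sketch, assuming a requestsSettings array as defined earlier:

    const results = await this.sutAdapter.sendRequests(requestsSettings);

    for (const result of results) {
        // Write operations resolve to the transaction hash; only readOnly
        // requests carry the actual result of the call.
        const executionTime = result.GetTimeFinal() - result.GetTimeCreate();
        console.log(`TX [${result.GetID()}] finished as ${result.GetStatus()} in ${executionTime}ms`);
    }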

License

The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

\ No newline at end of file
diff --git a/0.6.0/connectors/fabric-config/index.html b/0.6.0/connectors/fabric-config/index.html
index afaf0dbf6..1d71083a4 100644
--- a/0.6.0/connectors/fabric-config/index.html
+++ b/0.6.0/connectors/fabric-config/index.html
@@ -24,7 +24,7 @@
};

await this.sutAdapter.sendRequests(requestSettings);
-

Note

sendRequests also accepts an array of request settings. However, Fabric does not support submitting an atomic batch of transactions like Sawtooth, so there is no guarantee that the order of these transactions will remain the same, or whether they will reside in the same block.

Gathered TX data

The previously discussed sendRequests function returns the result (or an array of results) for the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

The standard data provided are the following:

  • GetID():string returns the transaction ID.
  • GetStatus():string returns the final status of the transaction, either success or failed.
  • GetTimeCreate():number returns the epoch when the transaction was submitted.
  • GetTimeFinal():number returns the epoch when the transaction finished.
  • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Unverified (considered failed) transactions could occur, for example, if the adapter loses the connection with every Fabric event hub, missing the final status of the transaction.
  • GetResult():Buffer returns one of the endorsement results returned by the chaincode as a Buffer. It is the responsibility of the user callback to decode it according to the chaincode-side encoding.

The adapter also gathers the following platform-specific data (if observed) about each transaction, each exposed through a specific key name. The placeholders <P> and <O> in the key names are node names taking their values from the top-level peers and orderers sections from the network configuration file (e.g., endorsement_result_peer0.org1.example.com). The Get(key:string):any function returns the value of the observation corresponding to the given key. Alternatively, the GetCustomData():Map<string,any> returns the entire collection of gathered data as a Map.

Available data keys for all Fabric SUTs

The adapter-specific data keys that are available when binding to any of the Fabric SUT versions are:

Key name Data type Description
request_type string Either the transaction or query string value for traditional transactions or queries, respectively.

Available data keys for the Fabric 1.4 SUT when gateway is not enabled

The adapter-specific data keys that are only available with the v1.4 SUT when the gateway is not enabled are:

Key name Data type Description
time_endorse number The Unix epoch when the adapter received the proposal responses from the endorsers. Saved even in the case of endorsement errors.
proposal_error string The error message in case an error occurred during sending/waiting for the proposal responses from the endorsers.
proposal_response_error_<P> string The error message in case the endorser peer <P> returned an error as endorsement result.
endorsement_result_<P> Buffer The encoded contract invocation result returned by the endorser peer <P>. It is the user callback’s responsibility to decode the result.
endorsement_verify_error_<P> string Has the value of 'INVALID' if the signature and identity of the endorser peer <P> couldn’t be verified. This verification step can be switched on/off through the runtime configuration options.
endorsement_result_error<P> string If the transaction proposal or query execution at the endorser peer <P> results in an error, this field contains the error message.
read_write_set_error string Has the value of 'MISMATCH' if the sent transaction proposals resulted in different read/write sets.
time_orderer_ack number The Unix epoch when the adapter received the confirmation from the orderer that it successfully received the transaction. Note that this isn’t the actual ordering time of the transaction.
broadcast_error_<O> string The warning message in case the adapter did not receive a successful confirmation from the orderer node <O>.
broadcast_response_error_<O> string The error message in case the adapter received an explicit unsuccessful response from the orderer node <O>.
unexpected_error string The error message in case some unexpected error occurred during the life-cycle of a transaction.
commit_timeout_<P> string Has the value of 'TIMEOUT' in case the event notification about the transaction did not arrive in time from the peer node <P>.
commit_error_<P> string Contains the error code in case the transaction validation fails at the end of its life-cycle on peer node <P>.
commit_success_<P> number The Unix epoch when the adapter received a successful commit event from the peer node <P>. Note that transactions committed in the same block have nearly identical commit times, since the SDK receives them block-wise, i.e., at the same time.
event_hub_error_<P> string The error message in case some event hub connection-related error occurs with peer node <P>.

You can access these data in your workload module after calling sendRequests:

let requestSettings = {
+

Note

sendRequests also accepts an array of request settings. However, Fabric does not support submitting an atomic batch of transactions like Sawtooth, so there is no guarantee that the order of these transactions will remain the same, or whether they will reside in the same block.

Gathered TX data

The previously discussed sendRequests function returns the result (or an array of results) for the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

The standard data provided are the following:

  • GetID():string returns the transaction ID.
  • GetStatus():string returns the final status of the transaction, either success or failed.
  • GetTimeCreate():number returns the epoch when the transaction was submitted.
  • GetTimeFinal():number returns the epoch when the transaction finished.
  • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Unverified (considered failed) transactions could occur, for example, if the adapter loses the connection with every Fabric event hub, missing the final status of the transaction.
  • GetResult():Buffer returns one of the endorsement results returned by the chaincode as a Buffer. It is the responsibility of the user callback to decode it according to the chaincode-side encoding.

The adapter also gathers the following platform-specific data (if observed) about each transaction, each exposed through a specific key name. The placeholders <P> and <O> in the key names are node names taking their values from the top-level peers and orderers sections from the network configuration file (e.g., endorsement_result_peer0.org1.example.com). The Get(key:string):any function returns the value of the observation corresponding to the given key. Alternatively, the GetCustomData():Map<string,any> returns the entire collection of gathered data as a Map.
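
For example, a workload module could read a single observation, or the whole collection, from a returned TxStatus instance (a sketch; the availability of time_endorse depends on the SUT version, as described below):

    // txStatus is a TxStatus instance returned by sendRequests
    const endorseTime = txStatus.Get('time_endorse');    // a single gathered observation
    const observations = txStatus.GetCustomData();       // Map<string, any> of every gathered value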

Available data keys for all Fabric SUTs

The adapter-specific data keys that are available when binding to any of the Fabric SUT versions are:

Key name Data type Description
request_type string Either the transaction or query string value for traditional transactions or queries, respectively.

Available data keys for the Fabric 1.4 SUT when gateway is not enabled

The adapter-specific data keys that are only available with the v1.4 SUT when the gateway is not enabled are:

Key name Data type Description
time_endorse number The Unix epoch when the adapter received the proposal responses from the endorsers. Saved even in the case of endorsement errors.
proposal_error string The error message in case an error occurred during sending/waiting for the proposal responses from the endorsers.
proposal_response_error_<P> string The error message in case the endorser peer <P> returned an error as endorsement result.
endorsement_result_<P> Buffer The encoded contract invocation result returned by the endorser peer <P>. It is the user callback’s responsibility to decode the result.
endorsement_verify_error_<P> string Has the value of 'INVALID' if the signature and identity of the endorser peer <P> couldn’t be verified. This verification step can be switched on/off through the runtime configuration options.
endorsement_result_error<P> string If the transaction proposal or query execution at the endorser peer <P> results in an error, this field contains the error message.
read_write_set_error string Has the value of 'MISMATCH' if the sent transaction proposals resulted in different read/write sets.
time_orderer_ack number The Unix epoch when the adapter received the confirmation from the orderer that it successfully received the transaction. Note that this isn’t the actual ordering time of the transaction.
broadcast_error_<O> string The warning message in case the adapter did not receive a successful confirmation from the orderer node <O>.
broadcast_response_error_<O> string The error message in case the adapter received an explicit unsuccessful response from the orderer node <O>.
unexpected_error string The error message in case some unexpected error occurred during the life-cycle of a transaction.
commit_timeout_<P> string Has the value of 'TIMEOUT' in case the event notification about the transaction did not arrive in time from the peer node <P>.
commit_error_<P> string Contains the error code in case the transaction validation fails at the end of its life-cycle on peer node <P>.
commit_success_<P> number The Unix epoch when the adapter received a successful commit event from the peer node <P>. Note that transactions committed in the same block have nearly identical commit times, since the SDK receives them block-wise, i.e., at the same time.
event_hub_error_<P> string The error message in case some event hub connection-related error occurs with peer node <P>.

You can access these data in your workload module after calling sendRequests:

let requestSettings = {
     contractId: 'marbles',
     contractVersion: '0.1.0',
     contractFunction: 'initMarble',
@@ -39,7 +39,7 @@
 let shortID = result.GetID().substring(8);
 let executionTime = result.GetTimeFinal() - result.GetTimeCreate();
 console.log(`TX [${shortID}] took ${executionTime}ms to execute. Result: ${result.GetStatus()}`);
-

The cleanupWorkloadModule function

The cleanupWorkloadModule function is called at the end of the round, and can be used to perform any resource cleanup required by your workload implementation.

Network configuration file reference

The YAML network configuration file of the adapter mainly describes the organizations and the identities associated with those organizations. It also provides explicit information about the channels in your Fabric network and the chaincode (containing one or more smart contracts) deployed to those channels. It can reference Common Connection Profiles for each organization (as common connection profiles are specific to a single organization). These are the same connection profiles that would be consumed by the node-sdk. Whoever creates the Fabric network and channels would be able to provide appropriate profiles for each organization.

The following sections detail each part separately. For a complete example, please refer to the example section or one of the files in the Caliper repositor, such as the caliper-fabric test folder.

name Required. Non-empty string.
The name of the configuration file.
name: Fabric
+

The cleanupWorkloadModule function

The cleanupWorkloadModule function is called at the end of the round, and can be used to perform any resource cleanup required by your workload implementation.

Network configuration file reference

The YAML network configuration file of the adapter mainly describes the organizations and the identities associated with those organizations. It also provides explicit information about the channels in your Fabric network and the chaincode (containing one or more smart contracts) deployed to those channels. It can reference Common Connection Profiles for each organization (as common connection profiles are specific to a single organization). These are the same connection profiles that would be consumed by the node-sdk. Whoever creates the Fabric network and channels would be able to provide appropriate profiles for each organization.

The following sections detail each part separately. For a complete example, please refer to the example section or one of the files in the Caliper repository, such as the caliper-fabric test folder.

name Required. Non-empty string.
The name of the configuration file.
name: Fabric
 
version Required. Non-empty string.
Specifies the YAML schema version that the Fabric SDK will use. Only the `'2.0.0'` string is allowed.
version: '2.0.0'
 
caliper Required. Non-empty object.
Contains runtime information for Caliper. Can contain the following keys.
  • blockchain Required. Non-empty string.
    Only the "fabric" string is allowed for this adapter.
    caliper:
         blockchain: fabric
    diff --git a/0.6.0/search/search_index.json b/0.6.0/search/search_index.json
    index fb943d5c5..76de10e9f 100644
    --- a/0.6.0/search/search_index.json
    +++ b/0.6.0/search/search_index.json
    @@ -1 +1 @@
    -{"config":{"lang":["en"],"separator":"[\\s\\u200b\\-_,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Caliper","text":"

    Caliper is a blockchain performance benchmark framework, which allows users to test different blockchain solutions with predefined use cases, and get a set of performance test results.

    "},{"location":"#supported-blockchain-solutions","title":"Supported Blockchain Solutions","text":"

    Currently supported blockchain solutions:

    • Hyperledger Besu
    • Ethereum
    • Hyperledger Fabric
    "},{"location":"#supported-performance-metrics","title":"Supported Performance Metrics","text":"
    • Transaction/read throughput
    • Transaction/read latency (minimum, maximum, average, percentile)
    • Resource consumption (CPU, Memory, Network IO, \u2026)

    See PSWG for the exact definitions and corresponding measurement methods.

    "},{"location":"#architecture","title":"Architecture","text":"

    It helps to have a basic understanding of how Caliper works before diving into the examples. Have a look at the Architecture page!

    "},{"location":"#installing-caliper","title":"Installing Caliper","text":"

    Head to the Install & Usage page if you want to try Caliper right now. It\u2019s as simple as downloading an NPM package or starting a Docker container!

    "},{"location":"#sample-networks","title":"Sample Networks","text":"

    Sample benchmarks that may be used by Caliper are hosted on a companion GitHub repository.

    Important

    make sure that the version/tag of the benchmark repository matches the version of Caliper you are using! For example, if you are using Caliper v0.6.0, then checkout the v0.6.0 tag after cloning the benchmark repository. The main branch of the benchmark repository corresponds to the latest unstable Caliper version.

    "},{"location":"#how-to-contribute","title":"How to Contribute","text":"

    Every contribution is welcome! See the Contributing page for details.

    "},{"location":"#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/bench-config/","title":"Benchmark Configuration","text":""},{"location":"concepts/bench-config/#overview","title":"Overview","text":"

    The benchmark configuration file is one of the required configuration files necessary to run a Caliper benchmark. In contrast to the runtime configurations, used for tweaking the internal behavior of Caliper, the benchmark configuration pertains only to the execution of the benchmark workload and collection of the results.

    Note

    In theory, a benchmark configuration is independent of the system under test (SUT) and the internal configuration of Caliper. However, this independence might be limited by the implementation details of the benchmark workload module, which could target only a single SUT type.

    The benchmark configuration consists of the following parts:

    • Overview
    • Benchmark test settings
    • Monitoring settings
    • Example
    • License

    For a complete benchmark configuration example, refer to the last section.

    Note

    The configuration file can be either a YAML or JSON file, conforming to the format described below. The benchmark configuration file path can be specified for the manager and worker processes using the caliper-benchconfig setting key.
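
    For example, from the command line (the file path is only illustrative): --caliper-benchconfig benchmarks/config.yaml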

    "},{"location":"concepts/bench-config/#benchmark-test-settings","title":"Benchmark test settings","text":"

    The settings related to the benchmark workload all reside under the root test attribute, which has some general child attributes, and the important rounds attribute.

    Attribute Description
    test.name Short name of the benchmark to display in the report.
    test.description Detailed description of the benchmark to display in the report.
    test.workers Object of worker-related configurations.
    test.workers.type Currently unused.
    test.workers.number Specifies the number of worker processes to use for executing the workload.
    test.rounds Array of objects, each describing the settings of a round.
    test.rounds[i].label A short name of the round, usually corresponding to the type of submitted TXs.
    test.rounds[i].txNumber The number of TXs Caliper should submit during the round.
    test.rounds[i].txDuration The length of the round in seconds during which Caliper will submit TXs.
    test.rounds[i].rateControl The object describing the rate controller to use for the round.
    test.rounds[i].workload The object describing the workload module used for the round.
    test.rounds[i].workload.module The path to the benchmark workload module implementation that will construct the TXs to submit.
    test.rounds[i].workload.arguments Arbitrary object that will be passed to the workload module as configuration.

    A benchmark configuration with the above structure will define a benchmark run that consists of multiple rounds. Each round is associated with a rate controller that is responsible for the scheduling of TXs, and a workload module that will generate the actual content of the scheduled TXs.

    "},{"location":"concepts/bench-config/#monitoring-settings","title":"Monitoring settings","text":"

    The monitoring configuration determines what kind of metrics the manager process can gather and from where. The configuration resides under the monitors attribute. Refer to the monitors configuration page for the details.

    "},{"location":"concepts/bench-config/#example","title":"Example","text":"

    The example configuration below says the following:

    • Perform the benchmark run using 5 worker processes.
    • There will be two rounds.
    • The first init round will submit 500 TXs at a fixed 25 TPS send rate.
    • The content of the TXs are determined by the init.js workload module.
    • The second query round will submit TXs for 60 seconds at a fixed 5 TPS send rate.
    • The content of the TXs are determined by the query.js workload module.
    • The manager process will allow a Prometheus server to scrape information on port 3000 with a default scrape url of /metrics
    • The manager process should include the predefined metrics of all local Docker containers in the report.
    • The manager process should include the custom metric Endorse Time (s) based on the provided query for every available (peer) instance.
    test:\n  workers:\n    number: 5\n  rounds:\n    - label: init\n      txNumber: 500\n      rateControl:\n        type: fixed-rate\n        opts:\n          tps: 25\n      workload:\n        module: benchmarks/samples/fabric/marbles/init.js\n    - label: query\n      txDuration: 60\n      rateControl:\n        type: fixed-rate\n        opts:\n          tps: 5\n      workload:\n        module: benchmarks/samples/fabric/marbles/query.js\nmonitors:\n  transaction:\n  - module: prometheus\n  resource:\n  - module: docker\n    options:\n      interval: 1\n      containers: ['all']\n  - module: prometheus\n    options:\n      url: \"http://prometheus:9090\"\n      metrics:\n        include: [dev-.*, couch, peer, orderer]\n        queries:\n        - name: Endorse Time (s)\n          query: rate(endorser_propsal_duration_sum{chaincode=\"marbles:v0\"}[5m])/rate(endorser_propsal_duration_count{chaincode=\"marbles:v0\"}[5m])\n          step: 1\n          label: instance\n          statistic: avg\n
    "},{"location":"concepts/bench-config/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/benchmark-generators/","title":"Benchmark Generators","text":""},{"location":"concepts/benchmark-generators/#overview","title":"Overview","text":"

    The Caliper benchmark generator is a Yeoman generator for generating the configuration and callback files used to perform benchmarks on deployed smart contracts. This page will take you through installing and using the generator.

    "},{"location":"concepts/benchmark-generators/#installation","title":"Installation","text":"

    You must first have Yeoman installed to be able to install and use the generator. You can do this using the following command:

    npm install -g yo\n

    Once Yeoman is installed, use the following command to install the generator:

    npm install -g @hyperledger/generator-caliper\n
    "},{"location":"concepts/benchmark-generators/#using-the-generator","title":"Using the Generator","text":"

    To use the generator, run the following command

    yo caliper\n

    If successful, you should get the following output where you will be prompted to choose a generator - choose Benchmark to run the Caliper benchmark generator:

    Welcome to the Hyperledger Caliper generator!\n? Which generator would you like to run? (Use arrow keys)\n\u276f Benchmark\n

    Note

    Alternatively, you can run the benchmark generator using: yo caliper:benchmark.

    You will then get the following output where you will be prompted to name your workspace:

    Welcome to the Hyperledger Caliper benchmark generator!\nLet's start off by creating a workspace folder!\n? What would you like to call your workspace? myWorkspace\n
    "},{"location":"concepts/benchmark-generators/#callback-prompts","title":"Callback Prompts","text":"

    The benchmark generator will initially take you through generating the callback file, and you will be prompted for:

    • the name of your smart contract,
    • the version of your smart contract,
    • a smart contract function
    • the argument variables of your smart contract function, which must be entered in array format

    By the end, you should have something similar to the following:

    Now for the callback file...\n? What is the name of your smart contract? fabcar\n? What is the version of your smart contract? 0.0.1\n? Which smart contract function would you like to perform the benchmark on? changeCarOwner\n? What are the arguments of your smart contract function? (e.g. [\"arg1\", \"arg2\"]) [\"CAR001\", \"Tom\"]\n
    "},{"location":"concepts/benchmark-generators/#configuration-prompts","title":"Configuration Prompts","text":"

    Next, you will be taken through generating the configuration file and you will be prompted for:

    • the name of the benchmark
    • a description of the benchmark
    • the number of workers

    Note

    On an invalid input value for workers, a default value will be used.

    • a label for differentiating between multiple rounds
    • the rate controller you would like to use. The generator currently provides the rate controllers displayed below as options. The generated configuration file will use default opts for whichever rate controller is chosen.
    ? Which rate controller would you like to use? (Use arrow keys)\n\u276f Fixed Rate\n  Fixed Backlog\n  Linear Rate\n  Fixed Feedback Rate\n
    • the method by which you would like to measure the length of the round. The round may be measured using either transaction duration, which defines the length of the round in seconds, or transaction number, which defines the length of the round using the number of transactions to be generated in the round.
    ? How would you like to measure the length of the round? (Use arrow keys)\n\u276f Transaction Duration\n  Transaction Number\n
    • a value for either txNumber or txDuration, depending on the answer to the previous prompt.

    Note

    On an invalid input value for either txDuration or txNumber, a default value will be used.

    By the end, you should have something similar to the following:

    Now for the benchmark configuration file...\n? What would you like to name your benchmark? Fabcar benchmark\n? What description would you like to provide for your benchamrk? Benchmark for performance testing fabcar contract modules\n? How many workers would you like to have? 5\n? What label (hint for test) would you like to provide for your benchmark? Round for changing car owner\n? Which rate controller would you like to use? Fixed Rate\n? How would you like to measure the length of the round? Transaction Number\n? How many transactions would you like to have in this round? 60\n

    On successful generation, you should see the following:

    Generating benchmarking files...\n   create myBenchmark/benchmarks/callbacks/changeCarOwner.js\n   create myBenchmark/benchmarks/config.yaml\nFinished generating benchmarking files\n

    The generator can also be run non-interactively from the command line using the following command line options:

    Options Default Description
    --workspace  A workspace to put all the generated benchmark files.
    --contractId  The name of your smart contract.
    --version  The version of your smart contract.
    --contractFunction  Your smart contract function.
    --contractArguments [] The arguments of your smart contract function. These must be in an array format.
    --benchmarkName  A name for your benchmark.
    --benchmarkDescription  A description for your benchmark.
    --workers 5 A value for the number of workers.
    --label  A label for the round.
    --rateController  The rate controller.
    --txType  The way you would like to measure the length of the round - either \u201ctxDuration\u201d or \u201ctxNumber\u201d.
    --txDuration 50 The value for transaction duration if \u201ctxDuration\u201d was entered for txType.
    --txNumber 50 The value for transaction number if \u201ctxNumber\u201d was entered for txType.

    Below is an example of the generator being run non-interactively from the command line using the options above:

    yo caliper:benchmark -- --workspace 'myWorkspace' --contractId 'fabcar' --version '0.0.1' --contractFunction 'changeCarOwner' --contractArguments '[\"CAR001\", \"Tom\"]' --benchmarkName 'Fabcar benchmark' --benchmarkDescription 'Benchmark for performance testing fabcar contract modules' --workers 5 --label 'Round for changing car owner' --rateController 'fixed-rate' --txType 'txDuration' --txDuration 50\n

    Note

    All the options above are required when using the generator non-interactively.

    "},{"location":"concepts/benchmark-generators/#next-steps","title":"Next Steps","text":"

    The generated files will be placed within the workspace directory you named at the beginning of the generator, and you should have a directory structure similar to the one shown below:

    .myWorkspace\n\u2514\u2500\u2500 benchmarks\n    \u2502  callbacks\n    \u2502  \u2514\u2500\u2500 changeCarOwner.js\n    \u2514\u2500 config.yaml\n

    Currently, the generator does not provide invokerIdentity or contractArguments as inputs to your callback file. Should these be required, you will need to provide them in the run function of your callback file.

    The generator only generates a single callback file for a single smart contract function. If you would like to test other smart contract functions, you may create more callback files under the callbacks directory. You will also need to update your benchmark configuration file to take into account the extra callbacks.

    Note

    The benchmark generator will only create the benchmark configuration file and the callback file. You will still need to provide a network configuration file to be able to perform the benchmark.

    "},{"location":"concepts/benchmark-generators/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/caliper-messengers/","title":"Messengers","text":""},{"location":"concepts/caliper-messengers/#overview","title":"Overview","text":"

    Caliper uses an orchestrator to control workers that interact with the SUT in order to perform a benchmark. Messages are passed between the orchestrator and all workers in order to keep the workers synchronized, and to progress the specified benchmark tests. A user may specify the messaging protocol that is used by Caliper in order to facilitate communications between the orchestrator and worker.

    "},{"location":"concepts/caliper-messengers/#messengers","title":"Messengers","text":"

    The messaging protocol to be used for communications between the orchestrator and worker during a benchmark is declared in the caliper runtime configuration file. Unspecified values will default to those specified in the default configuration file.

    Permitted messengers are:

    • Process: The process messenger is the default messenger and is based on native NodeJS process based communications. This messenger type is only valid for instances when local workers are being used to perform a benchmark.
    • MQTT: The mqtt messenger uses MQTT to facilitate communication between the orchestrator and workers. This messenger type is valid for both local and distributed workers, and assumes the existence of an MQTT broker service that may be used, such as mosquitto.

    Note

    Mosquitto v2 requires explicit authorization and authentication configurations, which is a breaking change compared to v1. To migrate to v2, follow the official migration guide of Mosquitto.

    The following yaml extract specifies the use of an MQTT communication method, using an existing MQTT broker that may be connected to via the specified address:

        worker:\n        communication:\n            method: mqtt\n            address: mqtt://localhost:1883\n

    If not specifying a caliper.yaml configuration file, the above may be specified as command line arguments to the CLI process as:

    --caliper-worker-communication-method mqtt --caliper-worker-communication-address mqtt://localhost:1883\n
    "},{"location":"concepts/caliper-messengers/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/caliper-monitors/","title":"Resource and Transaction Monitors","text":""},{"location":"concepts/caliper-monitors/#overview","title":"Overview","text":"

    Caliper monitoring modules are used to collect resource utilization and transaction statistics during test execution, with the output being collated into the generated reports. Caliper monitors resources and transactions using:

    • Resource monitors. Collect statistics on resource utilization during benchmarking, with monitoring reset between test rounds.
    • Transaction monitors. Collect worker transaction statistics and provide conditional dispatch actions.
    "},{"location":"concepts/caliper-monitors/#resource","title":"Resource","text":"

    The type of resource monitor to be used within a Caliper benchmark is declared in the benchmark configuration file through the specification of one or more monitoring modules in an array under the label monitors.resource.

    Permitted monitors are:

    • Process: The process monitor enables monitoring of a named process on the host machine, and is most typically used to monitor the resources consumed by the running clients. This monitor will retrieve statistics on: [memory(max), memory(avg), CPU(max), CPU(avg), Network I/O, Disc I/O]
    • Docker: The docker monitor enables monitoring of specified Docker containers on the host or a remote machine, through using the Docker Remote API to retrieve container statistics. This monitor will retrieve statistics on: [memory(max), memory(avg), CPU(max), CPU(avg), Network I/O, Disc I/O]
    • Prometheus: The prometheus monitor enables the retrieval of data from Prometheus. This monitor will only report based on explicit user-provided queries that are issued to Prometheus. If a Prometheus server is defined, Caliper will default to using the Prometheus PushGateway.

    Each declared resource monitoring module is accompanied by the options required to configure the named monitor. A common option for some modules is interval, which is used to configure the refresh interval at which resource utilization is measured by the monitor.

    "},{"location":"concepts/caliper-monitors/#process-monitor","title":"Process Monitor","text":"

    The process monitoring module options comprise:

    • interval: monitor update interval in seconds
    • processes: an array of [command, arguments, multiOutput] key:value pairs.
      • command: names the parent process to monitor
      • arguments: filters on the parent process being monitored
      • multiOutput: enables handling of the discovery of multiple processes and may be one of:
      • avg: take the average of process values discovered under command/name
      • sum: sum all process values discovered under command/name

    The following declares the monitoring of all local node processes that match caliper.js, with a 3 second update frequency, and the average of all discovered processes being taken.

    monitors:\n  resource:\n  - module: process\n    options:\n      interval: 3\n      processes: [{ command: 'node', arguments: 'caliper.js', multiOutput: 'avg' }]\n
    "},{"location":"concepts/caliper-monitors/#docker-monitor","title":"Docker Monitor","text":"

    The docker monitoring module options comprise:

    • interval: monitor update interval in seconds
    • cpuUsageNormalization: an optional boolean that may be used to convert the cpu usage into a more convenient value (scaled to 100) by normalising for the number of cores of the host machine; the default is false
    • containers: an array of container names that may relate to local or remote docker containers to be monitored. If all local docker containers are to be monitored, this may be achieved by providing all as a name

    The following declares the monitoring of two named docker containers; one local and the other remote, with a 5 second update frequency:

    monitors:\n  resource:\n  - module: docker\n    options:\n      interval: 5\n      containers:\n      - peer0.org1.example.com\n      - http://192.168.1.100:2375/orderer.example.com\n

    The following declares the monitoring of all local docker containers, with a 5 second update frequency and normalization of the cpuUsage metric set to true.

    monitors:\n  resource:\n  - module: docker\n    options:\n      interval: 5\n      cpuUsageNormalization: true\n      containers:\n      - all\n
    "},{"location":"concepts/caliper-monitors/#prometheus-monitor","title":"Prometheus Monitor","text":"

    Prometheus is an open-source systems monitoring and alerting toolkit that scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. Grafana or other API consumers can be used to visualize the collected data.

    All data stored on Prometheus may be queried by Caliper using the Prometheus query HTTP API. At a minimum this may be used to perform aggregate queries in order to report back the transaction statistics, though it is also possible to perform custom queries in order to report back information that has been scraped from other connected sources. Queries issued are intended to generate reports and so are expected to result in either a single value, or a vector that can be condensed into a single value through the application of a statistical routine. It is advisable to create required queries using Grafana to ensure correct operation before transferring the query into the monitor. Please see Prometheus and Grafana documentation for more information.

    "},{"location":"concepts/caliper-monitors/#configuring-the-prometheus-monitor","title":"Configuring The Prometheus Monitor","text":"

    The prometheus monitoring module options comprise:

    • url: The Prometheus URL, used for direct queries
    • metrics: The queries to be run for inclusion within the Caliper report, comprising two keys: include and queries.
      • include: a string array that is used to determine metric inclusion through JavaScript regex matching. Any query result whose label of interest, as specified in the queries block, matches an item within the include list via regex will be included in a generated report.
      • queries: a series of blocks that describe the queries that are to be run at the end of each Caliper test.

    The queries block is defined by:

    • name: the metric name that the query relates to, used when building the report
    • query: the query to be issued to the Prometheus server at the end of each test. Note that Caliper will add time bounding for the query so that only results pertaining to the test round are included.
    • step: the timing step size to use within the range query
    • label: a string to match on the returned query and used as a component identifier when populating the report
    • statistic: if multiple values are returned, for instance if looking at a specific resource over a time range, the statistic will condense the values to a single result to enable reporting. Permitted options are:
      • avg: return the average from all values
      • max: return the maximum from all values
      • min: return the minimum from all values
      • sum: return the summation of all values
    • multiplier: an optional multiplier that may be used to convert exported metrics into a more convenient value (such as converting bytes to GB)

    The following declares a Prometheus monitor that will run two bespoke queries between each test within the benchmark

    monitors:\n    resource:\n    - module: prometheus\n      options:\n        url: \"http://localhost:9090\"\n        metrics:\n            include: [dev-.*, couch, peer, orderer]\n            queries:\n                - name: Endorse Time (s)\n                  query: rate(endorser_propsal_duration_sum{chaincode=\"marbles:v0\"}[1m])/rate(endorser_propsal_duration_count{chaincode=\"marbles:v0\"}[1m])\n                  step: 1\n                  label: instance\n                  statistic: avg\n                - name: Max Memory (MB)\n                  query: sum(container_memory_rss{name=~\".+\"}) by (name)\n                  step: 10\n                  label: name\n                  statistic: max\n                  multiplier: 0.000001\n
    The two queries above will be listed in the generated report as \u201cEndorse Time (s)\u201d and \u201cMax Memory (MB)\u201d respectively:

    • Endorse Time (s): Runs the listed query with a step size of 1; filters on returned tags within the Prometheus query response using the instance label; excludes the result if the instance value does not regex match any of the string values provided in the include array; if the instance does match an include option, then determines the average of all returned results and returns this value to be reported under \u201cEndorse Time (s)\u201d.
    • Max Memory (MB): Runs the listed query with a step size of 10; filters on returned tags within the Prometheus query response using the name label; excludes the result if the name value does not regex match any of the string values provided in the include array; if the name does match an include option, then determines the maximum of all returned results, multiplies by the provided multiplier, and returns this value to be reported under \u201cMax Memory (MB)\u201d.

    Returned components with labels that pass a regex test against the include array items will be included within the report; all others will be omitted.

    "},{"location":"concepts/caliper-monitors/#basic-auth","title":"Basic Auth","text":"

    It is possible to use a Prometheus Server that is secured via basic authentication through provision of a username and password as runtime parameters, under the flags:

    • caliper-auth-prometheus-username
    • caliper-auth-prometheus-password

    These will be used to augment the configuration file based URL prior to making a connection.
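
    For example (the credential values shown are purely illustrative):

    --caliper-auth-prometheus-username admin --caliper-auth-prometheus-password secret\n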

    "},{"location":"concepts/caliper-monitors/#transaction","title":"Transaction","text":"

    Transaction monitors are used by Caliper workers to act on the completion of transactions. They are used internally to aggregate and dispatch transaction statistics to the manager process, enabling both progress reporting via the default transaction monitor and report generation.

    The default transaction monitor, used for progress reporting by consuming information from the internal transaction monitor, may be updated through configuration file settings:

    • caliper-progress-reporting-enabled: boolean flag to enable progress reporting, default true
    • caliper-progress-reporting-interval: numeric value to set the update frequency, in milliseconds (default 5000)

    Additional transaction monitoring modules include:

    • logging

    • prometheus-push

    One or more transaction modules may be specified by naming them as modules with an accompanying options block in an array format under monitors.transaction.
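
    For example, the following extract (a sketch combining the two modules documented below) declares both a logging and a prometheus-push transaction module in the array:

    monitors:\n    transaction:\n    - module: logging\n    - module: prometheus-push\n      options:\n        pushInterval: 5000\n        pushUrl: \"http://localhost:9091\"\n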

    "},{"location":"concepts/caliper-monitors/#logging","title":"Logging","text":"

    The logging transaction module is used to log aggregated transaction statistics at the completion of a test round, within the worker. The following specifies the use of a logging transaction monitor. No options are required by the module.

    monitors:\n    transaction:\n    - module: logging\n
    "},{"location":"concepts/caliper-monitors/#prometheus","title":"Prometheus","text":"

    The prometheus transaction module is used to expose current transaction statistics of all workers to a Prometheus server, via a scrape mechanism. The module exposes the following metrics:

    • caliper_tx_submitted (counter)
    • caliper_tx_finished (counter)
    • caliper_tx_e2e_latency (histogram)

    The following specifies the use of a prometheus transaction module that exposes metrics for collection on the default port (3000) and the default scrape URL (/metrics).

    monitors:\n    transaction:\n    - module: prometheus\n

    If operating with process-based workers, each worker will increment the default (or overridden) port by its zero-based index, thereby exposing metrics for each worker on a different port (with the default port, worker 0 exposes metrics on 3000, worker 1 on 3001, and so on).

    It is the responsibility of the user to configure a Prometheus server that correctly targets the exposed URLs through an appropriately specified configuration file.
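
    For example, a minimal prometheus.yml scrape configuration, assuming two local process-based workers on the default ports, might contain:

    scrape_configs:\n  - job_name: 'caliper_workers'\n    metrics_path: '/metrics'\n    static_configs:\n    - targets: ['localhost:3000', 'localhost:3001']\n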

    Options comprise the following; an illustrative configuration sketch follows the list:

    • metricPath: override for the metrics path to be scraped (default /metrics).
    • scrapePort: override for the port to be used when configuring the scrape server (default 3000).
    • processMetricCollectInterval: time interval for default metrics collection, enabled when present
    • defaultLabels: object of key:value pairs to augment the default labels applied to the exposed metrics during collection.
    • histogramBuckets: override for the histogram to be used for collection of caliper_tx_e2e_latency
      • explicit: direct pass through of user defined bucket
      • linear: use a linear bucket with user defined start, width and count parameters
        • start: start bucket size
        • width: bucket width
        • count: number of buckets to create
      • exponential: use an exponential bucket with user defined start, factor and count parameters
        • start: start bucket size
        • factor: bucket factor
        • count: number of buckets to create
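
    As an illustrative sketch (all values are assumptions, and the histogramBuckets nesting follows the option list above), a fully specified prometheus transaction module might look like:

    monitors:\n    transaction:\n    - module: prometheus\n      options:\n        metricPath: /metrics\n        scrapePort: 3000\n        processMetricCollectInterval: 100\n        defaultLabels:\n          benchmark: my-benchmark\n        histogramBuckets:\n          linear:\n            start: 0.5\n            width: 0.5\n            count: 10\n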
    "},{"location":"concepts/caliper-monitors/#prometheus-push-gateway","title":"Prometheus Push Gateway","text":"

    The prometheus-push transaction module is used to expose current transaction statistics of all workers to a Prometheus server, via a push gateway. The module exposes the following metrics:

    • caliper_tx_submitted (counter)
    • caliper_tx_finished (counter)
    • caliper_tx_e2e_latency (histogram)

    The following specifies the use of a prometheus-push transaction module that sends current transaction statistics to a push gateway located at http://localhost:9091 at 5 second intervals.

    monitors:\n    transaction:\n    - module: prometheus-push\n      options:\n        pushInterval: 5000\n        pushUrl: \"http://localhost:9091\"\n

    Options comprise:

    • pushInterval: push interval in milliseconds
    • pushUrl: URL for Prometheus Push Gateway
    • processMetricCollectInterval: time interval for default metrics collection, enabled when present
    • defaultLabels: object of key:value pairs to augment the default labels applied to the exposed metrics during collection.
    • histogramBuckets: override for the histogram to be used for collection of caliper_tx_e2e_latency
      • explicit: direct pass through of user defined bucket
      • linear: use a linear bucket with user defined start, width and count parameters
        • start: start bucket size
        • width: bucket width
        • count: number of buckets to create
      • exponential: use an exponential bucket with user defined start, factor and count parameters
        • start: start bucket size
        • factor: bucket factor
        • count: number of buckets to create

    Use of a prometheus-push transaction module is predicated on the availability and use of a Prometheus Push Gateway that is available as a scrape target to Prometheus.

    "},{"location":"concepts/caliper-monitors/#basic-auth_1","title":"Basic Auth","text":"

    It is possible to use a Prometheus Push Gateway that is secured via basic authentication through provision of a username and password as runtime parameters, under the flags:

    • caliper-auth-prometheuspush-username
    • caliper-auth-prometheuspush-password

    These will be used to augment the configuration file based URL prior to making a connection.
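
    For example, with illustrative credentials:

    --caliper-auth-prometheuspush-username admin --caliper-auth-prometheuspush-password secret\n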

    "},{"location":"concepts/caliper-monitors/#grafana-visualization","title":"Grafana Visualization","text":"

    Grafana is an analytics platform that may be used to query and visualize metrics collected by Prometheus. Caliper clients make the following metrics available, either via a direct scrape or indirectly via a Prometheus Push Gateway:

    • caliper_tx_submitted (counter)
    • caliper_tx_finished (counter)
    • caliper_tx_e2e_latency (histogram)

    Each of the above are tagged with the following default labels:

    • roundLabel: the current test round label
    • roundIndex: the current test round index
    • workerIndex: the zero based worker index that is sending the information

    We are currently working on a Grafana dashboard to give you immediate access to the metrics published above, but in the interim please feel free to create custom queries to view the above metrics that are accessible in real time.
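
    For example, an illustrative PromQL query for a throughput panel could rate the finished-transaction counter, filtered on the default labels (the label value shown is an assumption):

    rate(caliper_tx_finished{roundIndex=\"0\"}[30s])\n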

    "},{"location":"concepts/caliper-monitors/#resource-charting","title":"Resource Charting","text":"

    The data from each monitor can be output in chart form within the generated Caliper report, via an option within the benchmark configuration file for each monitor. In addition to tabulated data for resource monitors, Caliper currently supports rendering of the following charts using charting.js:

    • horizontal bar
    • polar area

    Charting is an option that is available for each resource monitor, and the specification of the charting to be produced is specified under each monitor type within the benchmark configuration file, under a charting block. It is possible to specify multiple charting options for a single resource monitor.

    A chart will contain data for all items that are being tracked by the monitor; it is only possible to filter on the metrics that are to be charted. The following declares the charting block that is valid for the listed monitors:

    charting:\n  bar:\n  - metrics: [all | <string list>]\n  polar:\n  - metrics: [all | <string list>]\n

    If the all option is specified, then a chart will be output for each metric and include all monitored items within each chart. It is possible to filter on metrics by providing a comma separated list. The provided list is matched against metrics using a string comparison, and so it is only required to provide the initial part of the required match. The following declares a charting block that specifies a bar chart for all available metrics, and a polar chart for only metric0 and metric1:

    charting:\n  bar:\n  - metrics: [all]\n  polar:\n  - metrics: [metric0, metric1]\n
    "},{"location":"concepts/caliper-monitors/#process-charting","title":"Process Charting","text":"

    The process resource monitor exposes the following metrics: Memory(max), Memory(avg), CPU%(max), CPU%(avg).

    The following declares the monitoring of any running processes named caliper.js, with charting options specified to produce bar charts for all available metrics. Charts will be produced containing data from all monitored processes:

    monitors:\n  resource:\n  - module: process\n    options:\n      interval: 3\n      processes: [{ command: 'node', arguments: 'caliper.js', multiOutput: 'avg' }]\n      charting:\n        bar:\n          metrics: [all]\n
    "},{"location":"concepts/caliper-monitors/#docker-charting","title":"Docker Charting","text":"

    The docker resource monitor exposes the following metrics: Memory(max), Memory(avg), CPU%(max), CPU%(avg), Traffic In, Traffic Out, Disc Read, Disc Write.

    The following declares the monitoring of all local docker containers, with charting options specified to produce bar charts for Memory(avg) and CPU%(avg), and polar charts for all metrics. Charts will be produced containing data from all monitored containers:

    monitors:\n  resource:\n  - module: docker\n    options:\n      interval: 5\n      containers:\n      - all\n      charting:\n        bar:\n          metrics: [Memory(avg), CPU%(avg)]\n        polar:\n          metrics: [all]\n
    "},{"location":"concepts/caliper-monitors/#prometheus-charting","title":"Prometheus Charting","text":"

    The Prometheus monitor enables user definition of all metrics within the configuration file.

    The following declares the monitoring of two user defined metrics Endorse Time(s) and Max Memory(MB). Charting options are specified to produce polar charts filtered on the metric Max Memory (MB), and bar charts of all user defined metrics.

    monitors:\n    resource:\n    - module: prometheus\n      options:\n        url: \"http://localhost:9090\"\n        metrics:\n            include: [dev.*, couch, peer, orderer]\n            queries:\n                - name: Endorse Time (s)\n                  query: rate(endorser_propsal_duration_sum{chaincode=\"marbles:v0\"}[1m])/rate(endorser_propsal_duration_count{chaincode=\"marbles:v0\"}[1m])\n                  step: 1\n                  label: instance\n                  statistic: avg\n                - name: Max Memory (MB)\n                  query: sum(container_memory_rss{name=~\".+\"}) by (name)\n                  step: 10\n                  label: name\n                  statistic: max\n                  multiplier: 0.000001\n        charting:\n          polar:\n            metrics: [Max Memory (MB)]\n          bar:\n            metrics: [all]\n
    "},{"location":"concepts/caliper-monitors/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/declarative-workload-module/","title":"Declarative Workloads","text":""},{"location":"concepts/declarative-workload-module/#overview","title":"Overview","text":"

    DeclarativeWorkloadModuleBase is a base class that can be implemented for declaratively assigning workload parameters. The Contracts, Functions and Parameters for test runs are specified under arguments.behavior.

    "},{"location":"concepts/declarative-workload-module/#example","title":"Example","text":"
    workload:\n  module: declarative\n  arguments:\n    parameterOne: param1\n    parameterTwo: 42\n    behavior:\n      contracts:\n      - name: contract1\n        functions:\n        - name: function1\n          parameters:\n          - name: randomNumber\n            type: uniform_random\n            options:\n              min: 10\n              max: 100\n

    The example above means the following:

    • The WorkloadModule used here is declarative.
    • The roundArguments taken in by Caliper are parameterOne, assigned the value of 'param1' and parameterTwo assigned the value of 42.
    • The arguments.behavior section specifies the declared properties of the workload module.
    • A contracts list is defined.
    • contracts contains a name, followed by functions, which has function1 as its only list item.
    • functions contains name and parameters.
    • parameters contains a value provider with the name randomNumber of type uniform_random. This generates a random number between 10 and 100 for the parameter.
    "},{"location":"concepts/declarative-workload-module/#hierarchy","title":"Hierarchy","text":"

    Under arguments.behavior, contracts is the top layer. It consists of a list of contracts to be tested. Within a single contracts list element, the functions property holds the list of all functions present under that contract. Similarly, under each functions list item, there is a parameters list which holds the different types of user defined parameters for that function.

    "},{"location":"concepts/declarative-workload-module/#contracts","title":"Contracts","text":"

    Used to specify the list of contracts to be tested. Each contracts list element has the following format.

    • name (string): Name of the SUT contract to be tested.
    • functionSelection (string): Type of contract picking logic.
    • functions (list): List of function descriptions.

    "},{"location":"concepts/declarative-workload-module/#functions","title":"Functions","text":"

    Used to specify the list of functions under a contract to be tested. Each functions list element has the following format.

    • name (string): Name of the SUT function to be tested.
    • parameters (list): List of parameter descriptions for the function.

    "},{"location":"concepts/declarative-workload-module/#parameters","title":"Parameters","text":"

    Used to specify different generated parameters for each function.

    • type (string): Assigned a value according to the type of parameter used.
    • name (string): Parameter name.
    • options (string): Additional information about the parameter definition.

    The parameters list can contain one or more of the following items.

    "},{"location":"concepts/declarative-workload-module/#uniform-random","title":"Uniform Random","text":"

    Value provider format for generating a random number within a given range.

    "},{"location":"concepts/declarative-workload-module/#example_1","title":"Example","text":"
    - name: randomNumber\n  type: uniform_random\n  options:\n    min: 0\n    max: 100\n
    "},{"location":"concepts/declarative-workload-module/#attributes","title":"Attributes","text":"Property Type Description options.min number Minimum inclusive range for generated random number. options.max number Maximum inclusive range for generated random number."},{"location":"concepts/declarative-workload-module/#parameters-reference","title":"Parameters Reference","text":"

    Value Provider format for referencing a module.arguments item.

    "},{"location":"concepts/declarative-workload-module/#example_2","title":"Example","text":"
    - name: referenceNumber\n  type: parameter_reference\n  options:\n    name: marbleNumber #assuming this is defined under round.arguments\n
    "},{"location":"concepts/declarative-workload-module/#attributes_1","title":"Attributes","text":"Property Type Description options.name string Should be a variable name specified under module.arguments."},{"location":"concepts/declarative-workload-module/#variable-reference","title":"Variable Reference","text":"

    Value Provider format for referencing a Caliper workload variable.

    "},{"location":"concepts/declarative-workload-module/#example_3","title":"Example","text":"
    - name: referenceNumber\n  type: variable_reference\n  options:\n    name: txIndex #a Caliper workload parameter\n
    "},{"location":"concepts/declarative-workload-module/#attributes_2","title":"Attributes","text":"Property Type Description options.name string Should refer to a base-class provided variable."},{"location":"concepts/declarative-workload-module/#list-element","title":"List Element","text":"

    Value provider format for selecting an item from a given list.

    "},{"location":"concepts/declarative-workload-module/#example_4","title":"Example","text":"
    - name: selectedColor\n  type: list_element\n  options:\n    list: ['red', 'blue', 'green']\n    selector:\n      type: variable_reference\n      options:\n        name: txIndex #assuming this is defined under module.arguments\n
    "},{"location":"concepts/declarative-workload-module/#attributes_3","title":"Attributes","text":"Property Type Description options.list list List from which an element is chosen. options.selector object Contains information about any valid numeric value provider for selecting elements by index."},{"location":"concepts/declarative-workload-module/#formatted-string","title":"Formatted String","text":"

    Value provider format for generating formatted strings.

    "},{"location":"concepts/declarative-workload-module/#example_5","title":"Example","text":"
    - name: generatedString\n  type: formatted_string\n  options:\n    format: 'example_{1}_{2}'\n    parts:\n      - type: parameter_reference\n        options:\n          name: marbleIndex\n      - type: variable_reference\n        options:\n          name: txIndex\n
    "},{"location":"concepts/declarative-workload-module/#attributes_4","title":"Attributes","text":"Property Type Description options.format string Specifies format and placeholders for variables. Placeholders are specified using this syntax: {variable_no}. 1-based indexing of the parts list is used for this purpose. options.parts list Specifies variable and parameter reference value providers for use in the string."},{"location":"concepts/declarative-workload-module/#implementing-the-declarativeworkloadmodulebase-class","title":"Implementing the DeclarativeWorkloadModuleBase class","text":""},{"location":"concepts/declarative-workload-module/#submitwitharguments","title":"submitWithArguments","text":"

    The DeclarativeWorkloadModuleBase is exported as a module. In order to use it, submitWithArguments() needs to be implemented by the user. It accepts generatedArguments during a Caliper run.
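
    A minimal sketch of such an implementation is shown below; the class name and the sendRequests payload are illustrative assumptions, and the exact request fields depend on the connector in use:

    'use strict';\n\nconst { DeclarativeWorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyDeclarativeWorkload extends DeclarativeWorkloadModuleBase {\n    async submitWithArguments(generatedArguments) {\n        // forward the declaratively generated arguments to the SUT\n        await this.sutAdapter.sendRequests({\n            contractId: 'contract1',\n            contractFunction: 'function1',\n            contractArguments: Object.values(generatedArguments),\n            readOnly: false\n        });\n    }\n}\n\nmodule.exports.createWorkloadModule = () => new MyDeclarativeWorkload();\n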

    "},{"location":"concepts/logging-control/","title":"Logging Control","text":""},{"location":"concepts/logging-control/#overview","title":"Overview","text":"

    Caliper builds on the winston logger module to provide a flexible, multi-target logging mechanism. There are three different aspects when it comes to interacting with the Caliper logging subsystem:

    1. Customizing the logging style
    2. Configuring logging targets
    3. Creating your own loggers

    The first two points can be achieved through the runtime configuration mechanism of Caliper. So make sure that you are familiar with the different ways of overriding runtime settings before reading on. The examples below only set the different options through the command line. Naturally, any other setting source could be used.

    The runtime configuration settings corresponding to logging reside under the caliper-logging key hierarchy. See the caliper.logging section of the default configuration file bundled with Caliper for the general structure of the settings.

    "},{"location":"concepts/logging-control/#customizing-the-logging-style","title":"Customizing the logging style","text":"

    The two main aspects of the logging style are the message structure and the different formats that modify the message appearance if applied. The corresponding attributes are the caliper.logging.template property and the entire caliper.logging.formats property hierarchy, respectively.

    The caliper.logging.formats hierarchy is special in a sense that every leaf property can be overridden one-by-one, even from the command line or from environment variables. As you will see later, this is not the case for the logging target settings.

    Note

    the following style settings apply to every specified logging target!

    "},{"location":"concepts/logging-control/#setting-the-message-structure","title":"Setting the message structure","text":"

    The message structure can be easily customized through the caliper.logging.template property. It is a simple string that contains predefined placeholders for some special values. Some placeholders are only available, when a corresponding format is also applied.

    Let\u2019s start with examining the default structure:

    caliper:\n  logging:\n    template: '%timestamp% %level% [%label%] [%module%] %message% (%metadata%)'\n

    The following placeholders are available at the moment.

    • %timestamp% (requires the timestamp format): Will be replaced with the timestamp of the log message.
    • %level%: Will be replaced with the severity level (e.g., info, warn, error) of the log message.
    • %label% (requires the label format): Will be replaced with the configured label of the process.
    • %module%: Will be replaced with the module name that logged the message.
    • %message%: Will be replaced with the actual message.
    • %metadata%: Will be replaced with the string representation of additional logging arguments.

    You can override this template by changing the caliper-logging-template setting key, for example, from the command line: --caliper-logging-template=\"%time%: %message%\"

    Note:
    1. Do not forget the two enclosing quotes, since the template can contain spaces!
    2. This template is applied after every format has been applied!
    3. Adding spaces and different brackets this way is fine for simple coloring scenarios (or when coloring is disabled). However, when coloring the entire log message (or just parts that should be surrounded with additional characters), the result looks inconsistent when formatted this way. See the Tips & Tricks section for advanced message formatting scenarios.
    "},{"location":"concepts/logging-control/#applying-formats","title":"Applying formats","text":"

    The logging subsystem relies on winston\u2019s format mechanism to further modify the log messages. The corresponding settings are under the caliper.logging.formats property.

    Each of these formats can be easily disabled by setting its property to false. For example, to disable the colorize format, set its corresponding caliper.logging.formats.colorize property to false, for example, from the command line: --caliper-logging-formats-colorize=false

    Similarly, any sub-property of a format can be easily overridden. For example, changing the caliper.logging.formats.colorize.colors.info property from the command line: --caliper-logging-formats-colorize-colors-info=blue

    The following formats and their options (sub-properties) are supported.

    Note

    the different formats are applied in the order they are presented, which is important (see the Tips & Tricks section for the reason).

    "},{"location":"concepts/logging-control/#timestamp","title":"Timestamp","text":"

    Adds the timestamp to the message in the specified format. The format string must conform to the rules of the fecha package.

    For example: --caliper-logging-formats-timestamp=\"YYYY.MM.DD-HH:mm:ss.SSS\"

    Note

    the format makes the timestamp attribute available in the message, thus it can be referenced in the message template, or in other formats that can access message attributes.

    "},{"location":"concepts/logging-control/#label","title":"Label","text":"

    Adds a custom label to the message. This is useful for differentiating multiple Caliper instances (or the distributed client instances) after collecting their logs.

    For example: --caliper-logging-formats-label=\"caliper-test-1\"

    Note

    the format makes the label attribute available in the message, thus it can be referenced in the message template, or in other formats that can access message attributes.

    "},{"location":"concepts/logging-control/#json","title":"JSON","text":"

    Outputs the messages as JSON strings. Useful for file-based logs that will be processed automatically by another tool. The format accepts a space sub-property as an option, which corresponds to the space parameter of the JSON.stringify function.

    For example: --caliper-logging-formats-json=\"{space:0}\"

    Note:
    1. Enabling this format is easier from a configuration file (a minimal sketch follows this list). See the Tips & Tricks section.
    2. Setting space to a non-zero number will effectively format the JSON output with indentations on multiple lines. This could \u201cspam\u201d the console a bit (not a problem for log files, unless you care about the extra newlines).
    3. If this format is enabled, the rest of the formats won\u2019t be applied, since their purpose is mainly to make console logs more readable.
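
    For example, the following configuration file fragment (a minimal sketch, following the caliper.logging.formats hierarchy described above) would enable JSON output with an indentation of 2 spaces:

    caliper:\n  logging:\n    formats:\n      json:\n        space: 2\n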
    "},{"location":"concepts/logging-control/#padding","title":"Padding","text":"

    Makes every log level string the same length, i.e., adds an extra space after \"info\" and \"warn\" to make them the same length as \"error\" and \"debug\".

    For example: --caliper-logging-formats-pad=true

    "},{"location":"concepts/logging-control/#align","title":"Align","text":"

    Prepends the message part of the log with a tabulator (\"\\t\") to align the messages of different logs in the same place.

    For example: --caliper-logging-formats-align=true

    Note

    if the message format contains other information with variable lengths (e.g., the module name), it can cause misaligned messages. So this is just a \u201cbest effort\u201d format to make console messages more readable.

    "},{"location":"concepts/logging-control/#attribute-format","title":"Attribute format","text":"

    Defines string formatting options for the different attributes of a message. A \u201cformat string\u201d can be provided for each message attribute that will \u201creformat\u201d its value. The format string can use the %attribute% placeholder to reference the original value.

    A format string can be specified for the following message attributes:

    • timestamp
    • level
    • label
    • module
    • message
    • metadata

    For example, to customize the level information of the log (enclose it in the LEVEL[<level>] string):

    --caliper-logging-formats-attributeformat-level=\"LEVEL[%attribute%]\"

    Note

    if the attribute is not a string (which can be the case for the \u201cmetadata\u201d attribute), then first the attribute value is converted to string, using JSON.stringify, and then it\u2019s inserted into the format string.

    "},{"location":"concepts/logging-control/#colorize","title":"Colorize","text":"

    Applies color coding for the different attributes of a message. Enabling/disabling coloring is specified on an attribute basis. The following sub-properties can be set to true/false to enable/disable coloring for the corresponding attribute:

    • timestamp
    • level
    • label
    • module
    • message
    • metadata
    • all: setting it to true enables coloring for every attribute

    For example, to colorize every part of the message: --caliper-logging-formats-colorize-all=true

    Additionally, the format exposes a colors attribute, which contains coloring information for the info, error, warn and debug levels. The value of a level can be set to colors and styles provided by the colors package. To apply multiple styles, separate the values with a space.

    For example, to really highlight error-level logs: --caliper-logging-formats-colorize-colors-error=\"white bgRed bold italic\"

    Note

    the colors package offers some exotic styles which seem tempting at first, but don\u2019t overdo it, for the sake of your eyes. Keep it simple.

    "},{"location":"concepts/logging-control/#configuring-logging-targets","title":"Configuring logging targets","text":"

    The source and target(s) of log messages are decoupled, thanks to the transport mechanism of winston. This means that a log message can be easily logged to multiple places, like the console, or different log files. Moreover, this is completely transparent to the module generating the log message!

    The different targets are specified under the caliper.logging.targets property. The caliper.logging.targets section takes the following general form:

    caliper:\n  logging:\n    targets:\n      mylogger1:\n        target: console\n        enabled: true\n        options:\n          # console target-specific options\n      mylogger2:\n        target: file\n        enabled: true\n        options:\n          # file target-specific options\n

    Each subproperty of caliper.logging.targets is an arbitrary name for the given logging target (e.g., mylogger1, mylogger2, etc.).

    Each target must specify the following properties:

    • target: the identifier of a supported target. See the table below.
    • enabled: indicates whether the target is enabled. Defaults to true if omitted.
    • options: this object will be given as-is to the specific winston transport as options. See the table below for the supported options of each transport.

    The following target values (i.e., transports) are supported. Click on the links for the official documentation of each transport.

    • console: Console Transport
    • file: File Transport
    • daily-rotate-file: Daily Rotating File Transport

    "},{"location":"concepts/logging-control/#disabling-loggers","title":"Disabling loggers","text":"

    Even though the setting keys/properties of the caliper.logging.targets section cannot be overridden one-by-one (like the properties in the caliper.logging.formats section), the enabled property is an exception. To easily disable a logger, set its enabled property to false (using the target\u2019s name in the property hierarchy).

    For example, to disable the mylogger1 target, the following approaches are available:

    • From the command line: --caliper-logging-targets-mylogger1-enabled=false
    • From an environment variable: export CALIPER_LOGGING_TARGETS_MYLOGGER1_ENABLED=false

    Note

    you must use lower-case letters (and/or digits) in your target name for this to work!

    "},{"location":"concepts/logging-control/#overriding-logger-target-settings","title":"Overriding logger target settings","text":"

    But what if you would like to modify one of the options of a transport? You can use a configuration file for that!

    For the next example, we will disable the default file logger, modify the logging level of the console target, and also add a new daily rotating file logger. We can do all of this with a single configuration file.

    caliper:\n  logging:\n    targets:\n      console:\n        options:\n          # we don't care about info level messages anymore\n          level: warn\n      file:\n        # we disable this\n        enabled: false\n      rotatingfile:\n        target: daily-rotate-file\n        # enabled by default\n        options:\n          # we log every message this way\n          level: debug\n          # start a new log file every hour\n          datePattern: 'YYYY-MM-DD-HH'\n          # compress old log files after rotating\n          zippedArchive: true\n          # include the hour-precision date in the file names\n          filename: 'caliper-%DATE%.log'\n          # options for opening the file for writing\n          options:\n            # append mode\n            flags: a\n            # make the file readable/writable by anyone\n            mode: 0666\n

    If you save the above content as caliper.yaml in your workspace directory, then Caliper will pick it up automatically.

    Note

    Some remarks about the above file content:

    1. We only set the properties we wanted to override. The default configuration file will be merged with the above configuration file, the values in the latter taking precedence.
    2. The provided options for a transport are not verified by Caliper. They are simply passed to the specific transport. It is your responsibility to configure the transport the right way.
    3. We could have disabled the file logger also from the command line, or from an environment variable. The reason we did it from a config file is explained in the Tips & tricks section.
    "},{"location":"concepts/logging-control/#creating-your-own-loggers","title":"Creating your own loggers","text":"

    The different modules of Caliper will automatically use the configured targets for logging. Moreover, your user test modules can also create logger instances to log runtime events related to your business logic.

    To create your own logger instance, use the following API:

    const logger = require('@hyperledger/caliper-core').CaliperUtils.getLogger('my-module');\n\n// ...\n\nlogger.debug('My custom debug message', metadataObject1, metadataObject2);\n

    Once a logger instance is created, it exposes the usual info, warn, debug and error functions that each take as parameter a log message and optional objects, considered as \u201cmetadata\u201d.

    This \u201cmetadata\u201d is especially useful for debug level logs. When you perform an operation based on a complex input parameter/object, you can log the following at the beginning of your function:

    function complexCalculation(complexInput) {\n    logger.debug('Starting complex calculation. Input: ', complexInput);\n    // complex calculation\n}\n

    The \u201cmetadata\u201d will appear at the place of the %metadata% placeholder, as discussed in the message template section.

    Note

    Passing large metadata objects can hurt the performance of logging if done in a loop/hot path. Only use \u201cmetadata\u201d logging for debug messages, since the debug level is usually switched off in production code.

    "},{"location":"concepts/logging-control/#tips-tricks","title":"Tips & tricks","text":""},{"location":"concepts/logging-control/#the-format-pipeline","title":"The format pipeline","text":"

    Winston formats are a powerful feature that allow the arbitrary manipulation of log messages. From the user\u2019s perspective, a log message is a simple string displayed on the console, or saved in a file. However, to fully utilize the logging styles described in this documentation, it might help knowing what really happens under the hood.

    Note

    in the remainder of this section, we\u2019ll refer to log messages as LOG.

    LOG can be considered an item/object that is generated when issuing a call to logger.info(...) or similar functions. A LOG can have several attributes attached to it. Every LOG has the level and message attributes, containing the severity and the \u201cdescription\u201d of LOG. Additionally, Caliper automatically adds the module attribute to LOGs of every logger created through the Caliper API, denoting the name of the module that issued the log.

    Let\u2019s introduce the format pipeline through an example.

    "},{"location":"concepts/logging-control/#assumptions","title":"Assumptions","text":"

    Let\u2019s assume that the following caliper.logging configuration is used:

    template: '%timestamp%%level%%label%%module%%message%%metadata%'\nformats:\n    timestamp: 'YYYY.MM.DD-HH:mm:ss.SSS'\n    label: caliper\n    json: false\n    pad: true\n    align: false\n    attributeformat:\n        level: ' %attribute%'\n        label: ' [%attribute%]'\n        module: ' [%attribute%] '\n        metadata: ' (%attribute%)'\n    colorize:\n        all: true\n        colors:\n            info: green\n            error: red\n            warn: yellow\n            debug: grey\n
    This means that the following formats will be applied to every LOG:

    • module (automatically added by Caliper)
    • timestamp
    • label
    • padding
    • attribute formats
    • colorizing
    • template substitution

    Furthermore, let\u2019s assume that the following code initiates the LOG:

    const logger = require('@hyperledger/caliper-core').CaliperUtils.getLogger('my-module');\n\n// ...\n\nlogger.info('Doing operation X with:', 'someSetting', 'anotherSetting');\n
    "},{"location":"concepts/logging-control/#the-life-of-a-log","title":"The life of a LOG","text":"

    The logger.info call generates the initial LOG with the following attributes:

    level: 'info'\nmessage: 'Doing operation X with:'\n

    Before LOG enters the format pipeline, Caliper also adds the module name, and collects the additional parameters as metadata. Now LOG has the following attributes:

    level: 'info'\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\n

    This is the initial LOG entity that enters the format pipeline. Every enabled format is \u201cjust\u201d a transformation on the attributes of LOG. A format can manipulate the value of an existing attribute or/and add/remove arbitrary attributes.

    The first step of the pipeline is the timestamp format. This adds the timestamp attribute containing the current time, in the specified format. After this step, LOG looks like this:

    level: 'info'\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\ntimestamp: '2019.10.07-12:45:47.962'\n

    The next step is the label format, which adds the label attribute with the specified value (caliper, in this case):

    level: 'info'\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\ntimestamp: '2019.10.07-12:45:47.962'\nlabel: 'caliper'\n

    The next step is the padding format, which ensures that every logging level string has the same length. This means that an extra space is appended at the end of the level attribute:

    level: 'info '\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\ntimestamp: '2019.10.07-12:45:47.962'\nlabel: 'caliper'\n

    The next step is the attribute formatter. This formatter is configured to modify multiple attributes of LOG, based on a string template:

    • level: add a space before it
    • label: enclose in [] and add a space before it
    • module: enclose in [] and add a space before and after it
    • metadata: enclose in () and add a space before it

    After these transformations, LOG looks like the following:

    level: ' info '\nmessage: 'Doing operation X with:'\nmodule: ' [my-module] '\nmetadata: ' ([\"someSetting\", \"anotherSetting\"])'\ntimestamp: '2019.10.07-12:45:47.962'\nlabel: ' [caliper]'\n

    Note

    some remarks: 1. metadata was an Array, not a string, so it was stringified before the formatting was applied. 2. message and timestamp are unchanged.

    The next step is the colorizing format, which adds certain color/style codes to the configured values. Since all is set to true, and the level of LOG is info, every attribute is surrounded with the color code for green (denoted by <green> for the sake of readability):

    level: '<green> info <green>'\nmessage: '<green>Doing operation X with:<green>'\nmodule: '<green> [my-module] <green>'\nmetadata: '<green> ([\"someSetting\", \"anotherSetting\"])<green>'\ntimestamp: '<green>2019.10.07-12:45:47.962<green>'\nlabel: '<green> [caliper]<green>'\n

    The last step in the pipeline (since the JSON format is disabled) is substituting the attributes into the logging template to create the final message that will appear in the console and in the file. The result is the concatenation of LOG\u2019s attributes in the following order:

    1. timestamp
    2. level
    3. label
    4. module
    5. message
    6. metadata

    Omitting the color code for the sake of readability, this results in:

    2019.10.07-12:45:47.962 info  [caliper] [my-module] Doing operation X with: ([\"someSetting\", \"anotherSetting\"])\n

    Note

    try adding other characters to the template string. And then be surprised that they are not colorized with the rest of the line. Actually, this is not surprising at all. The template string is \u201cevaluated\u201d after the colorizing format. Since these extra characters are not part of any attributes of LOG, they won\u2019t be colorized.

    "},{"location":"concepts/logging-control/#use-a-configuration-file","title":"Use a configuration file","text":"

    Logging settings are usually determined by your log analysis requirements. This means that once you settle on some logging style and targets, those settings will rarely change.

    To this end, the ability to override the logging style settings from the command line or from environment variables is really just a convenience feature. Once you have found your ideal settings, it\u2019s worth recording them in a configuration file.

    The easiest way to do that is with a project-level configuration file. If you name the following file caliper.yaml and place it in your workspace root, then Caliper will automatically apply the settings.

    Note

    there are other ways to load a configuration file, as discussed in the runtime configuration page.

    caliper:\n  logging:\n    # no need for timestamp and label\n    template: '%level% [%module%]: %message% %meta%'\n    formats:\n      # color codes look ugly in log files\n      colorize: false\n      # don't need these, since won't appear in the template\n      label: false\n      timestamp: false\n    targets:\n      file:\n        options:\n          # bump the log level from debug to warn, only log the critical stuff in this file\n          level: warn\n          filename: 'critical.log'\n      rotatingfile:\n        target: daily-rotate-file\n        enabled: true\n        options:\n          level: debug\n          datePattern: 'YYYY-MM-DD-HH'\n          zippedArchive: true\n          filename: 'debug-%DATE%.log'\n          options:\n            flags: a\n            mode: 0666\n
    "},{"location":"concepts/logging-control/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/rate-controllers/","title":"Rate Controllers","text":"

    The rate at which transactions are input to the blockchain system is a key factor within performance tests. It may be desired to send transactions at a specified rate or follow a specified profile. Caliper permits the specification of custom rate controllers to enable a user to perform testing under a custom loading mechanism. A user may specify their own rate controller or use one of the default options:

    • Fixed rate
    • Fixed feedback rate
    • Fixed load
    • Maximum rate
    • Linear rate
    • Composite rate
    • Zero rate
    • Record rate
    • Replay rate

    For implementing your own rate controller, refer to the Adding Custom Controllers section.

    "},{"location":"concepts/rate-controllers/#fixed-rate","title":"Fixed rate","text":"

    The fixed rate controller is the most basic controller, and also the default option if no controller is specified. It will send input transactions at a fixed interval that is specified as TPS (transactions per second).

    "},{"location":"concepts/rate-controllers/#options-and-use","title":"Options and use","text":"

    The fixed-rate controller can be specified by setting the rate controller type to the fixed-rate string.

    Controller options include:

    • tps: the rate at which transactions are cumulatively sent to the SUT by all workers

    The fixed rate controller, driving at 10 TPS, is specified through the following controller option:

    {\n  \"type\": \"fixed-rate\",\n  \"opts\": {\n    \"tps\" : 10\n  }\n}\n
    "},{"location":"concepts/rate-controllers/#fixed-feedback-rate","title":"Fixed feedback rate","text":"

The fixed feedback rate controller is an extension of the fixed rate controller that also starts by sending input transactions at a fixed interval. When the number of unfinished transactions exceeds the defined transaction load for each worker, it temporarily stops sending input transactions by sleeping for a long period of time.

    Controller options include:

    • tps: the rate at which transactions are cumulatively sent to the SUT by all workers
    • transactionLoad: the maximum transaction load on the SUT at which workers will pause sending further transactions

The fixed feedback rate controller, driving at 100 TPS with at most 100 unfinished transactions for each worker, is specified through the following controller options:

    {\n  \"type\": \"fixed-feedback-rate\",\n  \"opts\": {\n      \"tps\" : 100,\n      \"transactionLoad\": 100\n  }\n}\n
    "},{"location":"concepts/rate-controllers/#fixed-load","title":"Fixed Load","text":"

    The fixed load rate controller is a controller for driving the tests at a target loading (backlog transactions). This controller will aim to maintain a defined backlog of transactions within the system by modifying the driven TPS. The result is the maximum possible TPS for the system whilst maintaining the pending transaction load.

    "},{"location":"concepts/rate-controllers/#options-and-use_1","title":"Options and use","text":"

    The fixed-load controller can be specified by setting the rate controller type to the fixed-load string.

    Controller options include:

    • startTps: the initial rate at which transactions are cumulatively sent to the SUT by all workers
    • transactionLoad: the number of transactions being processed by the SUT that is to be maintained

    The fixed load controller, aiming to maintain a SUT transaction load of 5, with a starting TPS of 100, is specified through the following controller option:

    {\n  \"type\": \"fixed-load\",\n  \"opts\": {\n    \"transactionLoad\": 5,\n    \"startTps\": 100\n  }\n}\n
    "},{"location":"concepts/rate-controllers/#maximum-rate","title":"Maximum rate","text":"

    The maximum rate controller is a controller for driving the workers to their maximum achievable rate without overloading the SUT. This controller will aim to maximize the driven TPS for the worker by ramping up the driven TPS and backing off again when a drop in TPS is witnessed; such drops are indicative of an overloaded system.

    The achieved TPS is evaluated between txUpdate cycles, since this is the point at which TPS results are made available. A minimum sample interval that ensures settling of TPS rates should be considered for enhanced controller stability.

    Please note that the action of the controller is to slowly ramp to the maximum achievable rate for each worker until a threshold is reached, meaning that there will be a significant warm up phase that may skew averaged results for the round. It is recommended to investigate achievable results using Prometheus queries and/or Grafana visualization.

    "},{"location":"concepts/rate-controllers/#options-and-use_2","title":"Options and use","text":"

    The maximum rate controller can be specified by setting the rate controller type to the maximum-rate string.

    Controller options include:

    • tps: the starting TPS
    • step: the TPS increase for each interval. Note that on \u201cback-off\u201d this step size will automatically be reduced before re-attempting a TPS increase.
    • sampleInterval: the minimum time between steps to ensure settling of achieved TPS rates
    • includeFailed: boolean flag to indicate if the achieved TPS analysis within the controller is to include failed transactions (default true)

The maximum rate controller, with a starting TPS of 100, a TPS step size of 5, and a minimum sample interval of 20 seconds, is specified through the following controller option:

    {\n  \"type\": \"maximum-rate\",\n  \"opts\": {\n    \"tps\": 100,\n    \"step\": 5,\n    \"sampleInterval\": 20,\n    \"includeFailed\": true\n  }\n}\n
    "},{"location":"concepts/rate-controllers/#linear-rate","title":"Linear rate","text":"

Exploring the performance limits of a system usually consists of performing multiple measurements with increasing load intensity. However, finding the tipping point of the system this way is not easy; it amounts to a trial-and-error method.

    The linear rate controller can gradually (linearly) change the TPS rate between a starting and finishing TPS value (both in increasing and decreasing manner). This makes it easier to find the workload rates that affect the system performance in an interesting way.

    The linear rate controller can be used in both duration-based and transaction number-based rounds.

    "},{"location":"concepts/rate-controllers/#options-and-use_3","title":"Options and use","text":"

    The linear rate controller can be specified by setting the rate controller type to the linear-rate string.

    Controller options include:

    • startingTps: the rate at which transactions are cumulatively sent to the SUT by all workers at the start of the round
    • finishingTps: the rate at which transactions are cumulatively sent to the SUT by all workers at the end of the round

    The following example specifies a rate controller that gradually changes the transaction load from 25 TPS to 75 TPS during the benchmark round.

    {\n  \"type\": \"linear-rate\",\n  \"opts\": {\n    \"startingTps\": 25,\n    \"finishingTps\": 75\n    }\n}\n

    Note

similarly to the fixed rate controller, this controller also divides the workload between the available clients, so the specified rates in the configuration are cumulative rates, and not the rates of individual clients. Using the above configuration with 5 clients results in clients that start at 5 TPS and finish at 15 TPS. Together they generate a [25-75] TPS load.

    "},{"location":"concepts/rate-controllers/#composite-rate","title":"Composite rate","text":"

    A benchmark round in Caliper is associated with a single rate controller. However, a single rate controller is rarely sufficient to model advanced client behaviors. Moreover, implementing new rate controllers for such behaviors can be cumbersome and error-prone. Most of the time a complex client behavior can be split into several, simpler phases.

Accordingly, the composite rate controller enables the configuration of multiple \u201csimpler\u201d rate controllers in a single round, promoting the reusability of existing rate controller implementations. The composite rate controller will automatically switch between the given controllers according to the specified weights (see the configuration details and the example below).

    "},{"location":"concepts/rate-controllers/#options-and-use_4","title":"Options and use","text":"

    The composite rate controller can be specified by setting the rate controller type to the composite-rate string.

    Controller options include:

    • weights: an array of \u201cnumber-like\u201d values (explicit numbers or numbers as strings) specifying the weights associated with the rate controllers defined in the rateControllers property.

The weights do not necessarily have to sum to 1, since they will eventually be normalized to a vector of unit length. This means that the weights can be specified in a manner that is the most intuitive for the given configuration. For example, the weights can correspond to durations, numbers of transactions or ratios.

In the example below, the weights correspond to ratios (2:1:2). The exact meaning of the weights is determined by whether the benchmark round is duration-based or transaction number-based. If this controller definition is used in a round with a duration of 5 minutes, then in the first 2 minutes the transactions will be submitted at 100 TPS, then at 300 TPS for the next minute, and at 200 TPS for the last 2 minutes of the round.

Note that 0 weights are also allowed in the array. Setting the weight of one or more controllers to 0 is a convenient way to \u201cremove/disable\u201d those controllers without actually removing them from the configuration file.

• rateControllers: an array of arbitrary rate controller specifications. See the documentation of the individual rate controllers on how to configure them. The number of specified rate controllers must equal the number of specified weights.

Note that, technically, composite rate controllers can be nested to form a hierarchy. However, using a composite rate controller incurs an additional execution overhead in the rate control logic. Keep this in mind before specifying a deep hierarchy of composite rate controllers, or just flatten the hierarchy to a single level.

    • logChange: a boolean value indicating whether the switches between the specified rate controllers should be logged or not.

    For example, the definition of a square wave function (with varying amplitude) as the transaction submission rate is as easy as switching between fixed rate controllers with different TPS settings:

    {\n  \"type\": \"composite-rate\",\n  \"opts\": {\n    \"weights\": [2, 1, 2],\n    \"rateControllers\": [\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 100}\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 300}\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 200}\n      }\n    ],  \n    \"logChange\": true\n  }\n}\n

    Important! The existence of the composite rate controller is almost transparent to the specified \u201csub-controllers.\u201d This is achieved by essentially placing the controllers in a \u201cvirtualized\u201d round, i.e., \u201clying\u201d to them about:

    • the duration of the round (for duration-based rounds),
    • the total number of transactions to submit (for transaction number-based rounds),
    • the starting time of the round, and
    • the index of the next transaction to submit.

The results of recently finished transactions are propagated to the sub-controllers as-is, so for the first few calls, a newly activated sub-controller can receive recent results that don\u2019t belong to its virtualized round.

This virtualization does not affect the memoryless controllers, i.e., the controllers whose control logic does not depend on global round properties or past transaction results. However, other controllers might exhibit some strange (but hopefully transient) behavior due to this \u201cvirtualized\u201d round approach. For example, the logic of the PID controller depends on the transaction backlog.

    "},{"location":"concepts/rate-controllers/#zero-rate","title":"Zero rate","text":"

    This controller stops the workload generation for the duration of the round.

    "},{"location":"concepts/rate-controllers/#options-and-use_5","title":"Options and use","text":"

    Using the controller on its own for a round is meaningless. However, it can be used as a building block inside a composite rate controller. The zero rate controller can be used only in duration-based rounds!

    {\n  \"type\": \"composite-rate\",\n  \"opts\": {\n    \"weights\": [30, 10, 10, 30],\n    \"rateControllers\": [\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 100}\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 500}\n      },\n      {\n        \"type\": \"zero-rate\",\n        \"opts\": { }\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 100}\n      }\n    ],  \n    \"logChange\": true\n  }\n}\n

Let\u2019s assume that the above example is placed in a round definition with an 80-second duration (note the intuitive specification of the weights). In this case, an initial 30-second normal workload is followed by a 10-second intensive workload, which is followed by a 10-second cooldown period, etc.

    The controller is identified by the zero-rate string as the value of the type property and requires no additional configuration.

    "},{"location":"concepts/rate-controllers/#record-rate","title":"Record rate","text":"

This rate controller serves as a decorator around another (arbitrary) controller. Its purpose is to record the times (relative to the start of the round) when each transaction was submitted, i.e., when the transaction was \u201cenabled\u201d by the \u201csub-controller.\u201d

    The following example records the times when the underlying fixed rate controller enabled the transactions (for details, see the available options below the example):

    {\n  \"type\": \"record-rate\",\n  \"opts\": {\n    \"rateController\": {\n      \"type\": \"fixed-rate\",\n      \"opts\": {\"tps\" : 100}\n    },\n    \"pathTemplate\": \"../tx_records_client<C>_round<R>.txt\",\n    \"outputFormat\": \"TEXT\",\n    \"logEnd\": true\n  }\n}\n

    The record rate controller can be specified by setting the rate controller type to the record-rate string. The available options (opts property) are the following:

    • rateController: the specification of an arbitrary rate controller.
    • pathTemplate: the template for the file path where the recorded times will be saved. The path can be either an absolute path or relative to the root Caliper directory.

The template can (and should) contain special \u201cvariables/placeholders\u201d that can refer to special environment properties (see the remarks below). The available placeholders are the following: - <C>: placeholder for the 1-based index of the current client that uses this rate controller. - <R>: placeholder for the 1-based index of the current round that uses this rate controller. For example, the template above resolves to ../tx_records_client2_round3.txt for the second worker in the third round.

    • outputFormat: optional. Determines the format in which the recording will be saved. Defaults to \"TEXT\". The currently supported formats are the following:
      • \"TEXT\": each recorded timing is encoded as text on separate lines.
      • \"BIN_BE\": binary format with Big Endian encoding.
      • \"BIN_LE\": binary format with Little Endian encoding.
    • logEnd: optional. Indicates whether to log that the recordings are written to the file(s). Defaults to false.

    Template placeholders: since Caliper provides a concise way to define multiple rounds and multiple workers with the same behavior, it is essential to differentiate between the recordings of the workers and rounds. Accordingly, the output file paths can contain placeholders for the round and worker indices that will be resolved automatically at each worker in each round. Otherwise, every worker would write the same file, resulting in a serious conflict between timings and transaction IDs.

    Text format: the rate controller saves the recordings in the following format (assuming a constant 10 TPS rate and ignoring the noise in the actual timings), row i corresponding to the ith transaction:

    100\n200\n300\n...\n

Each line contains a single value: the time, in milliseconds measured from the start of the round, when the corresponding transaction was submitted.

    Binary format: Both binary representations encode the X number of recordings as a series of X+1 UInt32 numbers (1 number for the array length, the rest for the array elements), either in Little Endian or Big Endian encoding:

    Offset: |0      |4      |8      |12      |16      |...     \nData:   |length |1st    |2nd    |3rd     |4th     |...      \n
• The first entry (length) indicates the number of recorded transaction timings.
• Each subsequent UInt32 entry (in Big Endian or Little Endian encoding) is the time, in milliseconds from the start of the round, when the corresponding transaction was submitted.

For example, if 3 transactions were enabled (again assuming a constant 10 TPS rate), the binary recording would contain the following UInt32 values:

Offset: |0      |4      |8      |12     \nData:   |length |1st    |2nd    |3rd    \nValue:  |3      |100    |200    |300    \n
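
The recording files can be post-processed with ordinary tooling. As an illustration, the following sketch (not part of Caliper; the file name is hypothetical) decodes a \"BIN_LE\" recording in Node.js, following the length-prefixed UInt32 layout described above:

'use strict';\n\nconst fs = require('fs');\n\n// read the whole recording into a Buffer (hypothetical file name)\nconst buffer = fs.readFileSync('tx_records_client1_round1.bin');\n\n// the first UInt32 is the number of recorded timings;\n// for \"BIN_BE\" recordings, use readUInt32BE instead\nconst length = buffer.readUInt32LE(0);\n\n// the remaining UInt32 values are submit times in milliseconds,\n// relative to the start of the round\nconst timings = [];\nfor (let i = 1; i <= length; i++) {\n    timings.push(buffer.readUInt32LE(i * 4));\n}\n\nconsole.log(timings); // e.g., [ 100, 200, 300 ]\n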
    "},{"location":"concepts/rate-controllers/#replay-rate","title":"Replay rate","text":"

One of the most important aspects of a good benchmark is its repeatability, i.e., it can be re-executed in a deterministic way whenever necessary. However, some benchmarks define the workload (e.g., user behavior) as a function of probabilistic distribution(s). This presents two problems from a practical point of view:

    1. Repeatability: The random sampling of the given probability distribution(s) can differ between benchmark (re-)executions. This makes the comparison of different platforms questionable.
    2. Efficiency: Sampling a complex probability distribution incurs an additional runtime overhead, which can limit the rate of the load, distorting the originally specified workload.

This rate controller aims to mitigate these problems by replaying a fixed transaction load profile that was created \u201coffline.\u201d This way the profile is generated once, outside of the benchmark execution, and can be replayed at any time with the same timing constraints and minimal overhead.

    A trivial use case of this controller is to play back a transaction recording created by the record controller. However, a well-formed trace file is the only requirement for this controller, hence any tool/method can be used to generate the transaction load profile.

    The following example specifies a rate controller that replays some client-dependent workload profiles (for details, see the available options below the example):

    {\n  \"type\": \"replay-rate\",\n  \"opts\": {\n    \"pathTemplate\": \"../tx_records_client<C>.txt\",\n    \"inputFormat\": \"TEXT\",\n    \"logWarnings\": true,\n    \"defaultSleepTime\": 50\n    }\n}\n

    The replay rate controller can be specified by setting the rate controller type to the replay-rate string. The available options (opts property) are the following:

    • pathTemplate: the template for the file path where the transaction timings will be replayed from. The path can be either an absolute path or relative to the root Caliper directory.

    The template can (and should) contain special \u201cvariables/placeholders\u201d that can refer to special environment properties (see the remarks at the record rate controller). The available placeholders are the following: - <C>: placeholder for the 1-based index of the current client that uses this rate controller. - <R>: placeholder for the 1-based index of the current round that uses this rate controller.

    • inputFormat: optional. Determines the format in which the transaction timings are stored (see the details at the record rate controller). Defaults to \"TEXT\". The currently supported formats are the following:
      • \"TEXT\": each recorded timing is encoded as text on separate lines.
      • \"BIN_BE\": binary format with Big Endian encoding.
      • \"BIN_LE\": binary format with Little Endian encoding.
    • logWarnings: optional. Indicates whether to log that there are no more recordings to replay, so the defaultSleepTime is used between consecutive transactions. Defaults to false.
    • defaultSleepTime: optional. Determines the sleep time between transactions for the case when the benchmark execution is longer than the specified recording. Defaults to 20 ms.
    "},{"location":"concepts/rate-controllers/#about-the-recordings","title":"About the recordings:","text":"

Special care must be taken when using duration-based benchmark execution, as it is possible to issue more transactions than specified in the recording. A safety measure for this case is the defaultSleepTime option. This should only occur in the last few moments of the execution, affecting only a few transactions, which can be discarded before performing additional performance analyses on the results.

The recommended approach is to use transaction number-based round configurations, since the number of transactions to replay is known beforehand. Note that the number of workers affects the actual number of transactions submitted by a worker.

    "},{"location":"concepts/rate-controllers/#adding-custom-controllers","title":"Adding Custom Controllers","text":"

    It is possible to use rate controllers that are not built-in controllers of Caliper. When you specify the rate controller in the test configuration file (see the architecture documentation), you must set the type and opts attributes.

    You can set the type attribute so that it points to your custom JS file that satisfies the following criteria:

    1. The file/module exports a createRateController function that takes the following parameters:

  1. A TestMessage parameter that is the object representation of the opts attribute set in the configuration file, and contains the custom settings of your rate controller.
  2. A TransactionStatisticsCollector object that gives the rate controller access to the current worker transaction statistics.
  3. A workerIndex parameter of type number that is the 0-based index of the worker process using this rate controller. The function must return an object (i.e., your rate controller instance) that satisfies the criteria of the next point.
    2. The object returned by createRateController must implement the /packages/caliper-core/lib/rate-control/rateInterface.js interface, i.e., must provide the following async functions:

  • applyRateControl, for performing the actual rate control by \u201cblocking\u201d the execution (in an async manner) for the desired time.
  • end, for disposing any acquired resources at the end of a round.

The following example is a complete implementation of a rate controller that doesn\u2019t perform any control, thus allowing transactions to be submitted as fast as the program execution allows (warning: running this implementation with many worker processes could easily overload the backend network, so use it with caution).

/*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\n'use strict';\n\nconst RateInterface = require('path-to-caliper/caliper-core/lib/rate-control/rateInterface.js');\n\n/**\n * Rate controller for allowing uninterrupted workload generation.\n *\n * @property {object} options The user-supplied options for the controller. Empty.\n */\nclass MyRateController extends RateInterface {\n    /**\n     * Initializes the rate controller instance.\n     * @param {TestMessage} testMessage The testMessage passed for the round execution\n     * @param {TransactionStatisticsCollector} stats The TX stats collector instance.\n     * @param {number} workerIndex The 0-based index of the worker node.\n     */\n    constructor(testMessage, stats, workerIndex) {\n        super(testMessage, stats, workerIndex);\n    }\n\n    /**\n     * Doesn't perform any rate control.\n     * @async\n     */\n    async applyRateControl() {\n        // no sleeping is needed, allow the transaction invocation immediately\n    }\n\n    /**\n     * Notify the rate controller about the end of the round.\n     * @async\n     */\n    async end() {\n        // nothing to dispose of\n    }\n}\n\n/**\n * Factory for creating a new rate controller instance.\n * @param {TestMessage} testMessage The start test message.\n * @param {TransactionStatisticsCollector} stats The TX stats collector instance.\n * @param {number} workerIndex The 0-based index of the worker node.\n *\n * @return {RateInterface} The new rate controller instance.\n */\nfunction createRateController(testMessage, stats, workerIndex) {\n    return new MyRateController(testMessage, stats, workerIndex);\n}\n\nmodule.exports.createRateController = createRateController;\n

    Let\u2019s say you save this implementation into a file called maxRateController.js next to your Caliper directory (so they\u2019re on the same level). In the test configuration file you can set this rate controller (at its required place in the configuration hierarchy) the following way:

rateControl:\n  # relative path from the Caliper directory\n  - type: ../maxRateController.js\n    # empty options\n    opts: \n
    "},{"location":"concepts/rate-controllers/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/runtime-config/","title":"Runtime Configuration","text":""},{"location":"concepts/runtime-config/#overview","title":"Overview","text":"

    Caliper relies on the nconf package to provide a flexible and hierarchical configuration mechanism for runtime-related settings. Hierarchical configuration means that a runtime setting can be set or overridden from multiple sources/locations, and there is a priority order among them.

    In general, a setting is a simple string key associated with some value. However, it is highly recommended to compose the keys in a way that follows the place of the module in the hierarchy that uses the setting. Consider the following key, for example:

    caliper-fabric-timeout-invokeorquery

The key consists of several parts that make it easy to identify the purpose of the setting: it is used in Caliper by the Fabric connector, and it is a timeout-related setting that specifies the timeout to use for transaction invocations or queries. Every setting key in Caliper follows the same convention.

The rule of thumb is to use lowercase letters (and maybe numbers), with the hierarchy denoted by dashes (-) as separators.

Every setting used by Caliper is prefixed with the caliper- string. The prefix serves as a namespace for the internal settings of Caliper modules. It also prevents name collisions, since the configuration mechanism parses every setting available from the various sources, some of which are intended, for example, for the underlying SDK modules or the workload modules.

    Note

    For every available runtime setting, refer to the last section.

    "},{"location":"concepts/runtime-config/#setting-sources","title":"Setting sources","text":"

    Caliper supports the following sources/locations where runtime settings can be set/overridden, in priority order, starting with the highest priority:

    1. Memory
    2. Command line arguments
    3. Environment variables
    4. Project-level configuration file
    5. User-level configuration file
    6. Machine-level configuration file
    7. Fallback/default configuration file

    For simplicity, you can think of the above order as the following: the \u201ccloser\u201d the setting is set to its point of use, the higher the priority of the set value.

    "},{"location":"concepts/runtime-config/#in-memory-settings","title":"In-memory settings","text":"

    If some component (Caliper-related, or user provided) sets a setting during runtime (using the configuration API), then that value will have priority over any other source/location that might have also set the same setting.

The simple configuration API is provided by the ConfigUtil module of the caliper-core package. It exports simple get and set methods:

    • get(key:string, fallbackValue:any) => any

      Returns the value of the setting associated with the given key. If the setting is not set from any sources, then the fallbackValue is returned.

    • set(key:string, value:any)

  Sets the value for the setting associated with the given key. It will overwrite any value set by other sources.

    For example:

const { ConfigUtil } = require('@hyperledger/caliper-core');\n\n// retrieve a setting for your module; if not set, use some default\nconst shouldBeFast = ConfigUtil.get('mymodule-performance-shouldbefast', /*default:*/ true);\n\nif (shouldBeFast) { /* ... */ } else { /* ... */ }\n

The above code also shows how a plugin module can easily leverage Caliper\u2019s configuration mechanism. Since the mymodule-performance-shouldbefast setting is queried through the configuration API, setting it from various sources automatically becomes possible (see the next sections for details).
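
The set method works the same way. The following sketch (reusing the hypothetical setting key from above) shows a module overriding a setting in memory, so that the new value takes priority over every other source:

const { ConfigUtil } = require('@hyperledger/caliper-core');\n\n// override the setting from code; in-memory values have the highest priority\nConfigUtil.set('mymodule-performance-shouldbefast', false);\n\n// subsequent reads observe the in-memory value, regardless of other sources\nconst shouldBeFast = ConfigUtil.get('mymodule-performance-shouldbefast', /*default:*/ true); // false\n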

    Note

Thus, adding a flexible runtime setting to any module only requires querying that setting through the configuration API when you need it (with the desired default/fallback value).

    "},{"location":"concepts/runtime-config/#command-line-arguments","title":"Command line arguments","text":"

If we wish to influence the behavior of third-party code (e.g., Caliper or a user callback module), we usually can\u2019t (or don\u2019t want to) overwrite the setting in the source code. A standard way of modifying the behavior of third-party/pre-packaged applications is to provide the settings as command line arguments.

    Starting Caliper through the CLI, you can override runtime settings the following way:

caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --mymodule-performance-shouldbefast=true\n

The arguments will be converted to lower-case letters and every _ character will be replaced with -. So the above command can be written in a more user-friendly way:

caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --MyModule_Performance_ShouldBeFast=true\n

Both ways will result in the setting key mymodule-performance-shouldbefast associated with the boolean value true.

Note that nconf will automatically parse values of common types, so the true and false values will be parsed (and returned by get) as boolean values. This also holds for (both integer and floating point) numbers.

    Moreover, boolean values can be specified as flags, without explicitly setting the true or false value (note the no- prefix for the second case):

    • Setting a key to true:

    caliper launch manager \\\n      --caliper-workspace yourworkspace/ \\\n      --caliper-benchconfig yourconfig.yaml \\\n      --caliper-networkconfig yournetwork.yaml \\\n      --mymodule-performance-shouldbefast\n

    • Setting a key to false (note the no- prefix):

    caliper launch manager \\\n      --caliper-workspace yourworkspace/ \\\n      --caliper-benchconfig yourconfig.yaml \\\n      --caliper-networkconfig yournetwork.yaml \\\n      --no-mymodule-performance-shouldbefast\n
  Command line arguments overwrite the settings set in the locations described in the next sections.

    "},{"location":"concepts/runtime-config/#environment-variables","title":"Environment variables","text":"

    If Caliper is part of a scripted environment, then it would be cumbersome to modify the script to pass command line arguments to Caliper. The standard approach in these scenarios is to use environment variables.

    The example setting can be set the following way using an environment variable:

    export MYMODULE_PERFORMANCE_SHOULDBEFAST=true\n\n# calling some script containing the following command\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n

Note the standard notation of environment variable setting: upper-case letters separated by _ characters. Caliper performs the same transformation as with command line arguments: the variable names will be converted to lower-case letters and every _ character will be replaced with -. So the above setting will also result in the setting key mymodule-performance-shouldbefast associated with the boolean value true.

    "},{"location":"concepts/runtime-config/#configuration-files","title":"Configuration files","text":"

    Depending on the scenario, users may want to change multiple runtime settings. Using command line arguments and environment variables to change multiple settings can become cumbersome.

    Using configuration files is a standard way of overriding multiple settings in a manageable way. Caliper provides multiple configuration \u201clocations\u201d where you can insert configuration files into the settings hierarchy. These locations also follow the \u201ccloser one wins\u201d semantic of the hierarchical configuration mechanism.

    Moreover, YAML-based configuration files allow comments that make your configuration choices self-documenting and self-contained.

Note that no additional transformation is performed on the key names of a YAML file; they are simply concatenated with - to get a flat string key from the object hierarchy.

    So the hierarchical setting

    mymodule:\n  performance:\n    shouldbefast: true\n

will be parsed as the mymodule-performance-shouldbefast string key associated with the true boolean value.

    "},{"location":"concepts/runtime-config/#project-level","title":"Project-level","text":"

    If you have a group of settings that are always overridden in your Caliper benchmark project, then it is recommended to define them as a project-level configuration file. This file will usually consist of a subset of settings defined in the default configuration file (and probably your custom settings associated with your custom user module).

    The project-level configuration file can be included into the hierarchy in two ways:

    • Define the overridden settings in the caliper.yaml file in the workspace directory
    • Or set the path of the configuration file explicitly through the caliper-projectconfig setting key using one of the higher priority locations above (i.e., in-memory, command line argument or environment variable):
      • The command line approach:
        caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --Caliper-ProjectConfig mypath/project1-config.yaml\n
      • The environment variable approach:
        export CALIPER_PROJECTCONFIG=mypath/project1-config.yaml\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n

    Note that project-level settings will override the settings defined by the locations of the next sections.

    "},{"location":"concepts/runtime-config/#user-level","title":"User-level","text":"

    If you find yourself overriding the same settings for multiple Caliper benchmark projects, then it is recommended to extract the common settings into a user-level configuration file. To include a user-level configuration file into the hierarchy, specify its path through the caliper-userconfig settings key using one of the higher priority locations above (i.e., in-memory, command line argument, environment variable or the project-level configuration file):

• The command line approach:
  caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --Caliper-UserConfig ~/.config/my-caliper-config.yaml\n
• The environment variable approach:
  export CALIPER_USERCONFIG=~/.config/my-caliper-config.yaml\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n
• The configuration file approach (excerpt from the project-level configuration file):
  caliper:\n  userconfig: ~/.config/my-caliper-config.yaml\n  # additional settings\n
    "},{"location":"concepts/runtime-config/#machine-level","title":"Machine-level","text":"

If multiple users use the same workstation and want to share common settings across Caliper projects and users, then a machine-level configuration file can be included into the hierarchy by specifying its path through the caliper-machineconfig settings key using one of the higher priority locations above (i.e., command line argument, environment variable, project- or user-level configuration files):

• The command line approach:
  caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --Caliper-MachineConfig /etc/config/caliper.yaml\n
• The environment variable approach:
  export CALIPER_MACHINECONFIG=/etc/config/caliper.yaml\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n
• The configuration file approach (excerpt from the project- or user-level configuration file):
  caliper:\n  machineconfig: /etc/config/caliper.yaml\n  # additional settings\n

    "},{"location":"concepts/runtime-config/#default-configuration","title":"Default configuration","text":"

    A default/fallback configuration file is shipped with the Caliper-related packages that defines sensible fallback values and documentation for each available setting used by the Caliper modules. This configuration file has the lowest priority among the supported setting locations.

    "},{"location":"concepts/runtime-config/#available-settings","title":"Available settings","text":"

    Note

    Always refer to the self-documenting default configuration file for the currently supported runtime configuration settings.

    "},{"location":"concepts/runtime-config/#basic-settings","title":"Basic settings","text":"Key Description caliper-benchconfig Path to the benchmark configuration file that describes the test worker(s), test rounds and monitors. caliper-networkconfig Path to the network configuration file that contains information required to interact with the SUT. caliper-machineconfig The file path for the machine-level configuration file. Can be relative to the workspace. caliper-projectconfig The file path for the project-level configuration file. Can be relative to the workspace. caliper-userconfig The file path for the user-level configuration file. Can be relative to the workspace. caliper-workspace Workspace directory that contains all configuration information caliper-progress-reporting-enabled Boolean value for enabling transaction completion progress display by the Caliper manager process caliper-progress-reporting-interval Numeric value used to specify the caliper progress update frequency, in milliseconds"},{"location":"concepts/runtime-config/#binding-settings","title":"Binding settings","text":"Key Description caliper-bind-args The additional args to pass to the binding (i.e., npm install) command. caliper-bind-cwd The CWD to use for the binding (i.e., npm install) command. caliper-bind-file The path of a custom binding configuration file that will override the default one. caliper-bind-sut The binding specification of the SUT in the <SUT type>:<SDK version> format."},{"location":"concepts/runtime-config/#reporting-settings","title":"Reporting settings","text":"Key Description caliper-report-charting-hue The HUE value to construct the chart color scheme from. caliper-report-charting-scheme The color scheme method to use for producing chart colors. caliper-report-charting-transparency The transparency value [0..1] to use for the charts. caliper-report-options The options object to pass to fs.writeFile. caliper-report-path The absolute or workspace-relative path of the generated report file. caliper-report-precision Precision (significant digits) for the numbers in the report."},{"location":"concepts/runtime-config/#logging-settings","title":"Logging settings","text":"Key Description caliper-logging-formats-align Adds a tab delimiter before the messages to align them in the same place. caliper-logging-formats-attributeformat-\\ Specifies the formatting string for the log message attribute <attribute>. caliper-logging-formats-json Indicates that the logs should be serialized in JSON format. caliper-logging-formats-label Adds a specified label to every message. Useful for distributed worker scenario. caliper-logging-formats-pad Pads the log level strings to be the same length. caliper-logging-formats-timestamp Adds a timestamp to the messages with the specified format. caliper-logging-formats-colorize-all Indicates that all log message attributes must be colorized. caliper-logging-formats-colorize-\\ Indicates that log message attribute <attribute> must be colorized. caliper-logging-formats-colorize-colors-\\ Sets the color for the log messages with level <level>. caliper-logging-targets-\\-enabled Sets whether the target transport <target> is enabled or disabled. caliper-logging-template Specifies the message structure through placeholders."},{"location":"concepts/runtime-config/#worker-management-settings","title":"Worker management settings","text":"Key Description caliper-worker-communication-method Indicates the type of the communication between the manager and workers. 
caliper-worker-communication-address The address of the MQTT broker used for distributed worker management. caliper-worker-pollinterval The interval for polling for new available workers, in milliseconds. caliper-worker-remote Indicates whether the workers operate in distributed mode."},{"location":"concepts/runtime-config/#benchmark-phase-settings","title":"Benchmark phase settings","text":"Key Description caliper-flow-only-end Indicates whether to only perform the end command script in the network configuration file. caliper-flow-only-init Indicates whether to only perform the init phase of the benchmark. caliper-flow-only-install Indicates whether to only perform the smart contract install phase of the benchmark. caliper-flow-only-start Indicates whether to only perform the start command script in the network configuration file. caliper-flow-only-test Indicates whether to only perform the test phase of the benchmark. caliper-flow-skip-end Indicates whether to skip the end command script in the network configuration file. caliper-flow-skip-init Indicates whether to skip the init phase of the benchmark. caliper-flow-skip-install Indicates whether to skip the smart contract install phase of the benchmark. caliper-flow-skip-start Indicates whether to skip the start command script in the network configuration file. caliper-flow-skip-test Indicates whether to skip the test phase of the benchmark."},{"location":"concepts/runtime-config/#authentication-settings","title":"Authentication settings","text":"Key Description caliper-auth-prometheus-username Basic authentication username used to authenticate with an existing Prometheus server. caliper-auth-prometheus-password Basic authentication password used to authenticate with an existing Prometheus server. caliper-auth-prometheuspush-username Basic authentication username used to authenticate with an existing Prometheus Push Gateway. caliper-auth-prometheuspush-password Basic authentication password used to authenticate with an existing Prometheus Push Gateway."},{"location":"concepts/runtime-config/#fabric-connector-settings","title":"Fabric Connector settings","text":"

In the following table, 1.4 refers to the 1.4 SUT without the caliper-fabric-gateway-enabled option specified, and 1.4Gateway refers to the 1.4 SUT with the caliper-fabric-gateway-enabled option specified. All means that every SUT version supports the option.

    Key SUT Version Description caliper-fabric-timeout-invokeorquery All The default timeout in seconds to use for invoking or querying transactions. Default is 60 seconds. caliper-fabric-gateway-enabled 1.4 Indicates whether to use the Fabric gateway-based SDK API for the 1.4 Fabric SUT. Default is false. caliper-fabric-gateway-localhost 1.4Gateway, 2.2 Indicates whether to convert discovered endpoints to localhost. Does not apply if discover is set to false in network config. Default is true. caliper-fabric-gateway-querystrategy 1.4Gateway, 2.2 Sets the query strategy to use for 2.2 and 1.4 when gateway is enabled. Default is Round Robin. caliper-fabric-gateway-eventstrategy 1.4Gateway, 2.2 Sets the event strategy to use for 2.2 and 1.4 when gateway is enabled. Default is any in Invoker Organisation. caliper-fabric-latencythreshold 1.4 Determines the reported commit time of a transaction based on the given percentage of event sources. caliper-fabric-loadbalancing 1.4 Determines how automatic load balancing is applied. caliper-fabric-verify-proposalresponse 1.4 Indicates whether to verify the received proposal responses. caliper-fabric-verify-readwritesets 1.4 Indicates whether to verify that the read-write sets returned by the endorsers match."},{"location":"concepts/runtime-config/#supported-event-strategies","title":"Supported Event Strategies","text":"

A description of the different types of event strategy for both the 1.4 and 2.2 SUT can be found here.

To select an event strategy, set the property caliper-fabric-gateway-eventstrategy to one of the following:

    Strategy Corresponds to msp_all MSPID_SCOPE_ALLFORTX msp_any MSPID_SCOPE_ANYFORTX network_all NETWORK_SCOPE_ALLFORTX network_any NETWORK_SCOPE_ANYFORTX

For example, to have all peers in the network report that the transaction was committed, you would specify the following flag on the CLI:

    --caliper-fabric-gateway-eventstrategy network_all\n

The default is msp_any.
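
Following the YAML key flattening rules described in the runtime configuration sections above, the same setting could also be recorded in, for example, a project-level caliper.yaml file (a sketch, not a complete configuration file):

caliper:\n  fabric:\n    gateway:\n      # flattens to the caliper-fabric-gateway-eventstrategy key\n      eventstrategy: network_all\n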

    "},{"location":"concepts/runtime-config/#supported-query-strategies","title":"Supported Query Strategies","text":"

A description of the different types of query strategy for both the 1.4 and 2.2 SUT can be found here.

To select a query strategy, set the property caliper-fabric-gateway-querystrategy to one of the following:

    Strategy Corresponds to msp_single MSPID_SCOPE_SINGLE msp_round_robin MSPID_SCOPE_ROUND_ROBIN

For example, to have queries evaluated by a single peer within the invoker\u2019s organisation, you would specify the following flag on the CLI:

    --caliper-fabric-gateway-querystrategy msp_single\n

The default is msp_round_robin.

    "},{"location":"concepts/runtime-config/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"concepts/workload-module/","title":"Workload modules","text":""},{"location":"concepts/workload-module/#overview","title":"Overview","text":"

    Workload modules are the essence of a Caliper benchmark since it is their responsibility to construct and submit TXs. Accordingly, workload modules implement the logic pertaining to your business, benchmark or user behavior. Think of the workload modules as the brain of an emulated SUT client, deciding what kind of TX to submit at the given moment.

    "},{"location":"concepts/workload-module/#implementing-the-workload-module","title":"Implementing the workload module","text":"

    Workload modules are Node.JS modules that expose a certain API. There are no further restrictions on the implementation, thus arbitrary logic (using further arbitrary components) can be implemented.

    "},{"location":"concepts/workload-module/#the-api","title":"The API","text":"

    Workload modules are loaded through factory functions, just like other pluggable modules in Caliper. Accordingly, a workload module implementation must export a single factory function, named createWorkloadModule:

    /**\n * Create a new instance of the workload module.\n * @return {WorkloadModuleInterface}\n */\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n

The factory function must return an instance that implements the WorkloadModuleInterface class (https://github.com/hyperledger-caliper/caliper/blob/main/packages/caliper-core/lib/worker/workload/workloadModuleInterface.js). See the example for a complete implementation.

    The interface contains the following three asynchronous functions:

    "},{"location":"concepts/workload-module/#initializeworkloadmodule","title":"initializeWorkloadModule","text":"

    The initializeWorkloadModule function is called by the worker processes before each round, providing contextual arguments to the module:

    1. workerIndex (Number): The 0-based index of the worker instantiating the workload module.
    2. totalWorkers (Number): The total number of workers participating in the round.
    3. roundIndex (Number): The 0-based index of the currently executing round.
    4. roundArguments (Object): The user-provided arguments for the round from the benchmark configuration file.
    5. sutAdapter (BlockchainConnector): The connector of the underlying SUT (System Under Test).
    6. sutContext (Object): The custom context object provided by the SUT connector.

    This function is a good place to validate your workload module arguments provided by the benchmark configuration file. It\u2019s also a good practice to perform here any preprocessing needed to ensure the fast assembling of TX contents later in the submitTransaction function.
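
For example, a module extending the WorkloadModuleBase utility class (described later on this page) could validate its arguments as follows; the class name and the contractId argument are purely illustrative:

'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass ValidatedWorkload extends WorkloadModuleBase {\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        // the base class saves the arguments into instance variables\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n\n        // fail fast if the benchmark configuration omits a required argument\n        if (typeof this.roundArguments.contractId !== 'string') {\n            throw new Error(`Worker ${workerIndex}: missing \"contractId\" round argument`);\n        }\n    }\n\n    async submitTransaction() {\n        // submit a TX using this.roundArguments.contractId ...\n    }\n}\n\nmodule.exports.createWorkloadModule = () => new ValidatedWorkload();\n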

    "},{"location":"concepts/workload-module/#submittransaction","title":"submitTransaction","text":"

The submitTransaction function is the backbone of the workload generation. The worker process calls this function every time the rate controller enables the next TX. So it is vital to keep this function implementation as efficient as possible in order to keep up with high-frequency scheduling settings.

    The function requires no parameters, but it is its responsibility to submit the TX through the connector API.
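
One way to keep submitTransaction lean is to precompute the TX contents in initializeWorkloadModule and only consume them on the hot path. A minimal sketch, again assuming the WorkloadModuleBase utility class and illustrative contract/argument names:

'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass PreparedWorkload extends WorkloadModuleBase {\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n        // pre-assemble the TX argument objects once, outside the hot path\n        this.preparedArgs = [];\n        for (let i = 0; i < 1000; i++) {\n            this.preparedArgs.push({ account: `acc_${workerIndex}_${i}` });\n        }\n        this.nextIndex = 0;\n    }\n\n    async submitTransaction() {\n        // cheap: pick the next precomputed argument set and submit it\n        const txArgs = this.preparedArgs[this.nextIndex++ % this.preparedArgs.length];\n        return this.sutAdapter.invokeSmartContract('mycontract', 'v1', txArgs, 30);\n    }\n}\n\nmodule.exports.createWorkloadModule = () => new PreparedWorkload();\n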

    "},{"location":"concepts/workload-module/#cleanupworkloadmodule","title":"cleanupWorkloadModule","text":"

    The cleanupWorkloadModule function is called at the end of the round, and can be used to perform any resource cleanup required by your workload implementation.

    "},{"location":"concepts/workload-module/#simple-base-class","title":"Simple base class","text":"

Although directly implementing the interface is possible, Caliper provides a simple utility base class that implements the required interface and also performs some common housekeeping operations. Thus, inheriting from the WorkloadModuleBase class can result in simpler implementations.

    The base class provides the following utilities:

    • Creates instance variables in the constructor that match the parameters of the initializeWorkloadModule function.
    • Provides an implementation for the initializeWorkloadModule function, where it saves the received arguments into the instance variables.
    • Provides a no-op implementation for the cleanupWorkloadModule function.

    Inheriting from this base class only requires the implementation of the submitTransaction function by the user. Additionally, the initialization logic can be extended/overridden if necessary.

    "},{"location":"concepts/workload-module/#example","title":"Example","text":"

A complete (albeit simple) example of a workload module implementation (mostly identical to the WorkloadModuleBase implementation):

    'use strict';\n\nconst { WorkloadModuleInterface } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleInterface {\n    constructor() {\n        super();\n        this.workerIndex = -1;\n        this.totalWorkers = -1;\n        this.roundIndex = -1;\n        this.roundArguments = undefined;\n        this.sutAdapter = undefined;\n        this.sutContext = undefined;\n    }\n\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        this.workerIndex = workerIndex;\n        this.totalWorkers = totalWorkers;\n        this.roundIndex = roundIndex;\n        this.roundArguments = roundArguments;\n        this.sutAdapter = sutAdapter;\n        this.sutContext = sutContext;\n    }\n\n    async submitTransaction() {\n        let txArgs = {\n            // TX arguments for \"mycontract\"\n        };\n\n        return this.sutAdapter.invokeSmartContract('mycontract', 'v1', txArgs, 30);\n    }\n\n    async cleanupWorkloadModule() {\n        // NOOP\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n

    The simpler implementation using the utility base class is the following:

    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleBase {\n    async submitTransaction() {\n        let txArgs = {\n            // TX arguments for \"mycontract\"\n        };\n\n        return this.sutAdapter.invokeSmartContract('mycontract', 'v1', txArgs, 30);\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n
    "},{"location":"concepts/workload-module/#configuring-the-workload-module","title":"Configuring the workload module","text":"

    To use your workload module for a given round, you only need to reference it in the benchmark configuration file:

    1. Set the test.rounds[i].workload.module attribute to the path of your workload module file. The path can be either an absolute path, or a relative path to the configured workspace path. You can also set the attribute to the package name of your published workload module, but in that case you need to install it beforehand.
    2. If your module supports different settings, set the test.rounds[i].workload.arguments attribute object accordingly. It will be passed to your module upon initialization.
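
The following excerpt shows what such a round definition could look like in the benchmark configuration file (the label, file path and argument names are placeholders):

test:\n  rounds:\n    - label: my-round\n      txNumber: 100\n      rateControl:\n        type: fixed-rate\n        opts:\n          tps: 10\n      workload:\n        # relative to the workspace path\n        module: workload/my-workload.js\n        # passed to initializeWorkloadModule as roundArguments\n        arguments:\n          contract: mycontract\n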
    "},{"location":"concepts/workload-module/#tips-tricks","title":"Tips & Tricks","text":"

The following advice might help you improve your workload module implementation.

    1. You can use (require) any Node.JS module in your code (including the core Caliper module). Modularization is important for keeping your implementation clean and manageable.
    2. If you use third-party modules, then it is your responsibility to make them available to your workload module. This usually requires an npm install call in your module directory before you start Caliper.
    3. Caliper provides some core utilities that might make your life easier, such as logging and runtime configuration. Use them, don\u2019t reinvent the wheel!
    4. The submitTransaction function is on the hot path of the worker workload generation loop. Do computation-intensive tasks with care, it might hurt the scheduling precision of TXs! You can perform expensive pre-processing tasks in the initializeWorkloadModule function instead.
    "},{"location":"concepts/workload-module/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"connectors/ethereum-config/","title":"Ethereum","text":"

    This page introduces the Ethereum adapter suitable for all the Ethereum clients that expose the web3 RPC interface over websockets.

    Note

    This adapter relies on web3js 1.2.x, the stable version that evolved from 1.0.0-beta.37.

    Note

    Hyperledger Besu and Geth are the currently tested clients. The tests are driven via standard Ethereum JSON-RPC APIs, so other clients should be compatible once Docker configurations exist.

    Note

    Some highlights of the provided features:

    • configurable confirmation block threshold

    The page covers the following aspects of using the Ethereum adapter:

    • how to assemble a connection profile file, a.k.a., the blockchain network configuration file;
    • how to use the adapter interface from the user callback module;
    • transaction data gathered by the adapter;
    • and a [complete example](../connectors/ethereum-config.md#connection-profile-example) of a connection profile.
    "},{"location":"connectors/ethereum-config/#assembling-the-network-configuration-file","title":"Assembling the Network Configuration File","text":"

    The JSON network configuration file of the adapter essentially defines which contracts are expected to be on the network, which account the adapter should use to deploy those contracts, and which account to use to invoke them.

    "},{"location":"connectors/ethereum-config/#connection-profile-example","title":"Connection profile example","text":"

    We will provide an example of the configuration first, and then explain it in depth, key by key.

    {\n    \"caliper\": {\n        \"blockchain\": \"ethereum\",\n        \"command\" : {\n            \"start\": \"docker-compose -f network/ethereum/1node-clique/docker-compose.yml up -d && sleep 3\",\n            \"end\" : \"docker-compose -f network/ethereum/1node-clique/docker-compose.yml down\"\n          }\n    },\n    \"ethereum\": {\n        \"url\": \"ws://localhost:8545\",\n        \"contractDeployerAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\",\n        \"contractDeployerAddressPassword\": \"password\",\n        \"fromAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\",\n        \"fromAddressPassword\": \"password\",\n        \"transactionConfirmationBlocks\": 12,\n        \"contracts\": {\n            \"simple\": {\n                \"path\": \"src/contract/ethereum/simple/simple.json\",\n                \"gas\": {\n                    \"open\": 45000,\n                    \"query\": 100000,\n                    \"transfer\": 70000\n                }\n            }\n        }\n    }\n}\n

    The top-level caliper attribute specifies the type of the blockchain platform, so Caliper can instantiate the appropriate adapter when it starts. To use this adapter, specify the ethereum value for the blockchain attribute.

    Furthermore, it also contains two optional commands: a start command to execute once before the tests and an end command to execute once after the tests. Using these commands is an easy way, for example, to automatically start and stop a test network. When connecting to an already deployed network, you can omit these commands.

    These are the keys to provide inside the configuration file under the ethereum key:

    • URL of the node to connect to. Only websocket is currently supported.
    • Deployer address with which to deploy required contracts.
    • Deployer address private key: the private key of the deployer address.
    • Deployer address password: to unlock the deployer address.
    • Address from which to invoke methods of the benchmark.
    • Private Key: the private key of the benchmark address.
    • Password: to unlock the benchmark address.
    • Number of confirmation blocks to wait to consider a transaction as successfully accepted in the chain.
    • Contracts configuration.

    The following sections detail each part separately. For a complete example, please refer to the example section or one of the example files in the network/ethereum directories.

    "},{"location":"connectors/ethereum-config/#url","title":"URL","text":"

    The URL of the node to connect to. Any host and port can be used if it is reachable. Currently only websocket is supported.

    \"url\": \"ws://localhost:8545\"\n

    Unfortunately, HTTP connections are explicitly disallowed, as

    1. there is no efficient way to guarantee the order of transactions submitted over http, which leads to nonce errors, and
    2. this adapter relies on web3.js, and this library has deprecated its support for RPC over HTTP.
    "},{"location":"connectors/ethereum-config/#deployer-address","title":"Deployer Address","text":"

    The address used to deploy the contracts of the network. Unless you have specific needs, it can be set equal to the benchmark address. Its private key must be held by the node connected to via URL, and the address must be provided in checksum form (the one with both lowercase and uppercase letters).

    \"contractDeployerAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\"\n
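    If you only have the all-lowercase form of an address, web3.js can derive the checksum form; a small sketch (assuming a web3js 1.2.x installation):

    const Web3 = require('web3');\n\n// Convert a lowercase address into the mixed-case (EIP-55) checksum form\nconst checksummed = Web3.utils.toChecksumAddress('0xc0a8e4d217eb85b812aeb1226fab6f588943c2c2');\nconsole.log(checksummed); // 0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\n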
    "},{"location":"connectors/ethereum-config/#deployer-address-private-key","title":"Deployer Address Private Key","text":"

    The private key for the deployer address. If present, transactions are signed inside Caliper and sent "raw" to the Ethereum node.

    \"contractDeployerAddressPrivateKey\": \"0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8\"\n
    "},{"location":"connectors/ethereum-config/#deployer-address-password","title":"Deployer Address Password","text":"

    The password used to unlock the deployer address. If there isn't an unlock password, this key must be present as an empty string. If the deployer address private key is present, this is not used.

    \"contractDeployerAddressPassword\": \"gottacatchemall\"\n

    Note

    Hyperledger Besu does not provide wallet services, so the contractDeployerAddressPassword option is not supported and the private key variant must be used.

    "},{"location":"connectors/ethereum-config/#benchmark-address","title":"Benchmark Address","text":"

    The address used to invoke all the methods of the benchmark. Its private key must be held by the node connected to via URL, and the address must be provided in checksum form (the one with both lowercase and uppercase letters).

    \"fromAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\"\n
    "},{"location":"connectors/ethereum-config/#benchmark-address-seed","title":"Benchmark Address Seed","text":"

    As an alternative to fromAddress, fromAddressPrivateKey, and fromAddressPassword, the network configuration can use a fixed seed and derive the needed addresses via BIP-44 key derivation. Each Caliper test worker will generate an address for use as fromAddress and fromAddressPrivateKey using the derivation path m/44'/60'/<x>'/0/0, where <x> is the clientIdx passed into getContext.

    This configuration does not override fromAddress, but it takes priority over fromAddressPrivateKey and fromAddressPassword.

    \"fromAddressSeed\": \"0x3f841bf589fdf83a521e55d51afddc34fa65351161eead24f064855fc29c9580\"\n

    Note

    The use of fromAddressSeed is mandatory if you want to use more than 1 worker in your benchmark. This is because each worker independently fetches the current transaction nonce for the address it is using. If they all use the same address, they will all get the same nonces, causing known transaction errors. By using a seed, each worker will generate a unique address and private key.

    "},{"location":"connectors/ethereum-config/#benchmark-address-private-key","title":"Benchmark Address Private Key","text":"

    The private key for the benchmark address. If present, transactions are signed inside Caliper and sent "raw" to the Ethereum node.

    This configuration takes priority over fromAddressPassword.

    \"fromAddressPrivateKey\": \"0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8\"\n
    "},{"location":"connectors/ethereum-config/#benchmark-address-password","title":"Benchmark Address Password","text":"

    The password used to unlock the benchmark address. If there isn't an unlock password, this key must be present as an empty string. If the benchmark address private key is present, this is not used.

    \"fromAddressPassword\": \"gottacatchemall\"\n

    Note

    Hyperledger Besu does not provide wallet services, so the fromAddressPassword option is not supported and the private key variant must be used.

    "},{"location":"connectors/ethereum-config/#confirmation-blocks","title":"Confirmation blocks","text":"

    It is the number of blocks the adapter waits before notifying Caliper that a transaction has been successfully executed on the network. You can freely tune it from 1 upward to the desired number of confirmations. Keep in mind that on the Ethereum main net (PoW), 12 to 20 confirmations can be required to consider a transaction as accepted in the blockchain. If you're using a different consensus algorithm (like clique in the example network provided), it can safely be set to a lower value. In any case, it is up to you.

    \"transactionConfirmationBlocks\": 12\n
    "},{"location":"connectors/ethereum-config/#contract-configuration","title":"Contract configuration","text":"

    It is the list, provided as a JSON object, of pre-deployed contracts or contracts to deploy on the network before running the benchmark. You should provide a JSON entry for each contract; the key is the contract identifier used to invoke methods on that contract.

    Depending on whether you plan to use pre-deployed contracts or deploy them during the benchmark, the configuration will differ slightly.

    Note

    Defining configurations simultaneously for both pre-deployed contracts and contracts to be deployed by Caliper is currently not supported and may result in unexpected errors. This is because opting for pre-deployed contracts means the contract installation phase will be skipped.

    In both cases, it is strongly recommended to specify a gas field, which is an object with one field per contract function that you will call in your test. The value of these fields should be set to the amount of gas that will be required to execute your transaction. This number does not need to be an exact match, as it's used to set the gas limit for the transaction; so if your transaction might have a variable gas cost, just set this value to the highest gas usage that you would expect to see for your transaction.

    Note: If you do not specify the gas for your contract functions, web3 will automatically call out to your node to estimate the gas requirement before submitting the transaction. This causes three problems. First, it means that your transaction will effectively execute twice, doubling the load on the node serving as your RPC endpoint. Second, the extra call will add significant additional latency to every transaction. Third, your transactions may be reordered, causing transaction failures due to out of order nonces.

    "},{"location":"connectors/ethereum-config/#pre-deployed-contracts","title":"Pre-deployed contracts","text":"

    To use pre-deployed contracts, you must launch caliper using the --caliper-flow-skip-install option, which skips the smart contract installation phase. For each key you must provide a JSON object containing the contract address and the contract ABI. The ABI is required to invoke methods on the contract.

    \"contracts\": {\n    \"simple\": {\n        \"address\": \"0xc24f4561B8F1159E8D8661B282A2974cD48058C2\",\n        \"gas\": {\n            \"open\": 45000,\n            \"query\": 100000,\n            \"transfer\": 70000\n        },\n        \"abi\": [...]\n    }\n}\n
    "},{"location":"connectors/ethereum-config/#contracts-to-deploy","title":"Contracts to Deploy","text":"

    Contracts to be deployed by Caliper require the specification of a contract definition file for each. In the contract configuration you must include a path field pointing to each contract definition file. It\u2019s in this new file that you will define the contract\u2019s ABI and bytecode, as well as the gas required to deploy it.

    \"contracts\": {\n    \"simple\": {\n        \"path\": \"src/contract/ethereum/simple/simple.json\",\n        \"gas\": {\n            \"open\": 45000,\n            \"query\": 100000,\n            \"transfer\": 70000\n        }\n    },\n    \"second\": {\n        \"path\": \"src/contract/ethereum/second/second.json\",\n        \"gas\": {\n            \"function\": 12345\n        }\n    }\n}\n
    "},{"location":"connectors/ethereum-config/#contract-definition-file","title":"Contract definition file","text":"

    The contract definition file is a simple JSON file containing the basic information needed to deploy and use an Ethereum contract. Four keys are required:

    • Name
    • ABI
    • Bytecode
    • Gas

    Here is an example:

    {\n    \"name\": \"The simplest workload contract\",\n    \"abi\": [{\"constant\":true,\"inputs\":[{\"nam......ype\":\"function\"}],\n    \"bytecode\": \"0x608060405.........b0029\",\n    \"gas\": 259823\n}\n

    "},{"location":"connectors/ethereum-config/#name","title":"Name","text":"

    It is the name displayed in logs when the contract gets deployed. It is only a descriptive name.

    "},{"location":"connectors/ethereum-config/#abi","title":"ABI","text":"

    It is the ABI generated when compiling the contract. It is required in order to invoke methods on a contract.

    "},{"location":"connectors/ethereum-config/#bytecode","title":"Bytecode","text":"

    It is the bytecode generated when compiling the contract. Note that since it is a hexadecimal string, it must start with 0x.

    "},{"location":"connectors/ethereum-config/#gas","title":"Gas","text":"

    It is the gas required to deploy the contract. It can easily be calculated with widely used Solidity development kits or by querying a running Ethereum node.
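    For example, with a web3js 1.2.x instance connected to a running node, the deployment gas can be estimated from the creation bytecode; a sketch (the bytecode value is a placeholder):

    // Estimate the gas needed to deploy a contract from its creation bytecode\nconst gas = await web3.eth.estimateGas({\n    data: '0x608060405...b0029' // placeholder: the full compiled bytecode\n});\nconsole.log(gas);\n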

    "},{"location":"connectors/ethereum-config/#using-the-adapter-interface","title":"Using the Adapter Interface","text":"

    The workload modules interact with the adapter at two phases of the tests: during the initialization of the workload module (the initializeWorkloadModule function), and when submitting invoke or query transactions (the submitTransaction function).

    "},{"location":"connectors/ethereum-config/#the-initializeworkloadmodule-function","title":"The initializeWorkloadModule function","text":"

    See the corresponding documentation of the function for the description of its parameters.

    The last argument of the function is a sutContext object, which is a platform-specific object provided by the backend blockchain\u2019s connector. The context object provided by this connector is the following:

    {\n  fromAddress: \"0xA89....7G\",\n  web3: Web3\n}\n

    The fromAddress property is the benchmark address while web3 is the configured instance of the Web3js client.
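    For instance, a workload module can keep a handle on these properties during initialization; a minimal sketch:

    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n    await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n\n    // The connector-provided context exposes the benchmark address and the Web3 client\n    this.benchmarkAddress = sutContext.fromAddress;\n    this.web3 = sutContext.web3;\n}\n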

    "},{"location":"connectors/ethereum-config/#the-submittransaction-function","title":"The submitTransaction function","text":"

    The sutAdapter object received (and saved) in the initializeWorkloadModule function is of type [ConnectorInterface](https://github.com/hyperledger-caliper/caliper/blob/v0.6.0/packages/caliper-core/lib/common/core/connector-interface.js). Its getType() function returns the ethereum string value.

    The sendRequests method of the connector API allows the workload module to submit requests to the SUT. It takes a single parameter: an object or array of objects containing the settings of the requests.

    The settings object has the following structure:

    • contract: string. Required. The ID of the contract (that is the key specified here).
    • readOnly: boolean. Optional. Indicates whether the request is a TX or a query. Defaults to false.
    • verb: string. Required. The name of the function to call on the contract.
    • value: number. Optional. The value parameter in Wei to be passed to the payable function of the contract.
    • args: mixed[]. Optional. The list of arguments to pass to the method in the correct order as they appear in method signature. It must be an array.
    let requestsSettings = [{\n    contract: 'simple',\n    verb: 'open',\n    value: 1000000000000000000000,\n    args: ['sfogliatella', 1000]\n},{\n    contract: 'simple',\n    verb: 'open',\n    value: 900000000000000000000,\n    args: ['baba', 900]\n}];\n\nawait this.sutAdapter.sendRequests(requestsSettings);\n

    Currently each method call inside sendRequests is sent separately, that is, they are NOT sent as a batch of calls on RPC.

    To query state on a contract, set the readOnly attribute to true. The difference is that a query can't produce any change on the blockchain, and the node will answer with its local view of the data. As with traditional requests, there is currently no support for batch queries.
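    A minimal query sketch, reusing the simple contract from the example above (the verb and argument are illustrative):

    let querySettings = {\n    contract: 'simple',\n    verb: 'query',\n    args: ['sfogliatella'],\n    readOnly: true\n};\n\nconst result = await this.sutAdapter.sendRequests(querySettings);\n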

    "},{"location":"connectors/ethereum-config/#transaction-data-gathered-by-the-adapter","title":"Transaction Data Gathered by the Adapter","text":"

    The previously discussed sendRequests method returns an array whose elements correspond to the result of the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

    The standard information provided by the type is the following (a usage sketch follows this list):

    • GetID():string returns the transaction ID.
    • GetStatus():string returns the final status of the transaction, either success or failed.
    • GetTimeCreate():number returns the epoch when the transaction was submitted.
    • GetTimeFinal():number returns the epoch when the transaction was finished.
    • IsCommitted():boolean indicates whether the transaction has been committed successfully.
    • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Always true for successful transactions. False in all other cases.
    • GetResult():any returns the result of the transaction. Only requests with readOnly set to true will return the actual result of the call. Write operations, because they are asynchronous, will return the transaction hash.
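    As a usage sketch, these getters can be applied to the results returned by sendRequests (requestsSettings as in the earlier example):

    const results = await this.sutAdapter.sendRequests(requestsSettings);\n\nfor (const txStatus of results) {\n    // Compute the end-to-end latency and report the final status of each TX\n    const elapsed = txStatus.GetTimeFinal() - txStatus.GetTimeCreate();\n    console.log(`TX ${txStatus.GetID()}: ${txStatus.GetStatus()} after ${elapsed}ms`);\n}\n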
    "},{"location":"connectors/ethereum-config/#license","title":"License","text":"

    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

    "},{"location":"connectors/fabric-config/","title":"Fabric","text":""},{"location":"connectors/fabric-config/#overview","title":"Overview","text":"

    This page introduces the Fabric adapter that utilizes the Common Connection Profile (CCP) feature of the Fabric SDK to provide compatibility and a unified programming model across different Fabric versions.

    Note

    The LTS versions of Hyperledger Fabric, as well as the very latest 2.x release of Hyperledger Fabric, are supported; all other versions are unsupported.

    The adapter exposes many SDK features directly to the user callback modules, making it possible to implement complex scenarios.

    Note

    Some highlights of the provided features:

    • supports multiple channels and chaincodes
    • supports multiple organizations
    • supports multiple identities
    • private data collection support
    • support for TLS and limited mutual TLS communication (identity certificates cannot have restrictions on them)
    • option to select the identity for submitting a TX/query
    "},{"location":"connectors/fabric-config/#installing-dependencies","title":"Installing dependencies","text":"

    You must bind Caliper to a specific Fabric SDK to target the corresponding (or compatible) SUT version. Refer to the binding documentation for details. When you bind to an SUT, you are in fact selecting a specific Fabric SDK to use, which may work with several different versions of Fabric SUTs.

    Note

    • None of the Fabric bindings support administration actions. It is not possible to create/join channels or deploy a chaincode. Consequently, Caliper only facilitates test operations, using the --caliper-flow-only-test flag.
    "},{"location":"connectors/fabric-config/#binding-with-fabric-14-client-sdk","title":"Binding with Fabric 1.4 Client SDK","text":"

    To bind with this client SDK, use fabric:1.4. Do not use this binding if using the BFT consensus mechanism introduced in Hyperledger Fabric 3.0.0; use fabric:fabric-gateway instead.

    It is confirmed that the 1.4 Fabric SDK is compatible with Fabric 2.2 and later Fabric 2.x SUTs, therefore this binding can be used with those later Fabric SUTs.

    Note that when using the binding target for the Fabric SDK 1.4 there are capability restrictions:

    Note

    • Currently, setting discover to true in the network configuration file is not supported unless you enable the gateway option (e.g., by specifying --caliper-fabric-gateway-enabled as a command line option)
    • Detailed execution data for every transaction is only available if you don\u2019t enable the gateway option
    "},{"location":"connectors/fabric-config/#binding-with-fabric-22-client-sdk","title":"Binding with Fabric 2.2 Client SDK","text":"

    To bind with this client SDK, use fabric:2.2. Do not use this binding if using the BFT consensus mechanism introduced in Hyperledger Fabric 3.0.0; use fabric:fabric-gateway instead.

    It is confirmed that the 2.2 Fabric SDK is compatible with 2.2 and later Fabric SUTs, therefore this binding can be used with 2.2 and later Fabric SUTs.

    Note

    The following further restrictions exist for this binding:

    • Detailed execution data for every transaction is not available.

    "},{"location":"connectors/fabric-config/#binding-with-fabric-gateway-client-sdk","title":"Binding with Fabric Gateway Client SDK","text":"

    To bind with this client SDK, use fabric:fabric-gateway. This is now the preferred SDK to use, given that Fabric 2.2 and earlier are no longer in LTS and, as such, the Fabric 1.4 and 2.2 SDKs are deprecated.

    Only Fabric 2.4 and later with the Peer Gateway capability enabled (which is the default setting for a Fabric peer) can be used, so you cannot bind with this client SDK for older versions of Hyperledger Fabric.

    Note

    The following further restrictions exist for this binding:

    • Detailed execution data for every transaction is not available.
    • Mutual TLS is not supported.
    • Peer and organization targeting is not supported, so the options targetPeers and targetOrganizations in a request will throw an error.

    "},{"location":"connectors/fabric-config/#connection-profiles","title":"Connection Profiles","text":"

    Connection Profiles are a Fabric standard that provides connectivity information for your Fabric network. In the past (Hyperledger Fabric 1.1) you needed to describe all your endpoints in a connection profile, i.e., all the orderers and all the peers, in order to be able to connect a client application to the network. This is referred to as a static connection profile, and when you use this connection profile with Caliper you should set the discover property to false. The problem with static connection profiles is that if the network topology changes (e.g., adding/removing an orderer, peer, organisation, etc.), then every client needs to have an updated connection profile.

    Hyperledger Fabric 1.2 introduced the concept of discovery. This allowed you to ask a peer for the network topology. Your Fabric network has to be configured correctly for this to work (but all Fabric networks should be configured to allow for discovery now). Connection profiles that use this capability will only have a list of 1 or more peers for the specific organisation that the connection profile applies to, which will be used to discover the network. These connection profiles are referred to as dynamic connection profiles, and when you use this connection profile with Caliper you should set the discover property to true.

    Network builders and providers should generate connection profiles (for example, test-network in fabric-samples does this); however, if you don't have a connection profile you will need to create one. Information about creating connection profiles can be found in the Hyperledger Fabric documentation as well as the node-sdk documentation (the format changed between node SDKs; the 1.4 version should work when binding to either Fabric 1.4 or 2.2, but the version documented by 2.2 may only work when binding to Fabric 2.2).

    • node sdk 2.2 documentation for connection profiles
    • node sdk 1.4 documentation for connection profiles

    Unfortunately the documentation provided by Hyperledger Fabric is more focused on static connection profiles rather than dynamic connection profiles and your aim should be to create the simpler and smaller dynamic connection profile.
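    As an illustration only, a minimal dynamic connection profile for Org1 might look like the following sketch (all names, URLs and certificate paths are placeholders; consult the Fabric and node-sdk documentation above for the authoritative format):

    name: org1-dynamic-profile\nversion: '1.0.0'\nclient:\n  organization: Org1\norganizations:\n  Org1:\n    mspid: Org1MSP\n    peers:\n      - peer0.org1.example.com\npeers:\n  peer0.org1.example.com:\n    url: grpcs://localhost:7051\n    tlsCACerts:\n      path: ./tlsca.org1.example.com-cert.pem\n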

    With the introduction of the Peer Gateway rather than the traditional node SDKs (1.4 and 2.2), Caliper has introduced the concept of declaring peers in an organization within the network configuration file as an alternative to connection profiles. This provides a simple way to describe either the peers to discover from (when binding to Fabric 1.4 or 2.2; for Fabric 1.4 you must enable the gateway option, as discovery is not supported with the Fabric 1.4 binding otherwise) or the peer to be used as a gateway into the Fabric network (when binding to Fabric 2.4/fabric-gateway). An example of a peers section in the network configuration is

    peers:\n      - endpoint: peer0.org3.example.com:7051\n        tlsCACerts:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n        grpcOptions:\n          grpc.keepalive_timeout_ms: 20000\n          grpc.keepalive_time_ms: 120000\n          grpc.http2.min_time_between_pings_ms: 120000\n          grpc.http2.max_pings_without_data: 0\n          grpc.keepalive_permit_without_calls: 1\n
    "},{"location":"connectors/fabric-config/#runtime-settings","title":"Runtime settings","text":""},{"location":"connectors/fabric-config/#common-settings","title":"Common settings","text":"

    Some runtime properties of the adapter can be set through Caliper\u2019s runtime configuration mechanism. For the available settings, see the caliper.fabric section of the default configuration file and its embedded documentation.

    The above settings are processed when starting Caliper. Modifying them during testing will have no effect. However, you can override the default values before Caliper starts from the usual configuration sources. In the following example, the localhost property applies only when binding with Fabric 2.2 or Fabric 1.4 (and only if the gateway option is enabled).

    Note

    An object hierarchy in a configuration file generates a setting entry for every leaf property. Consider the following configuration file:

    caliper:\n    fabric:\n        gateway:\n          localhost: false\n
    After naming the project settings file caliper.yaml and placing it in the root of your workspace directory, it will override the following setting key with the following value:

    • Setting caliper-fabric-gateway-localhost is set to false

    The other settings remain unchanged.

    Alternatively, you can change this setting when you launch Caliper with the following CLI option:

    --caliper-fabric-gateway-localhost false

    "},{"location":"connectors/fabric-config/#the-connector-api","title":"The connector API","text":"

    The workload modules interact with the adapter at three phases of the tests: during the initialization of the user module (in the initializeWorkloadModule callback), when submitting invoke or query transactions (in the submitTransaction callback), and at the optional cleanup of the user module (in the cleanupWorkloadModule callback).

    "},{"location":"connectors/fabric-config/#the-initializeworkloadmodule-function","title":"The initializeWorkloadModule function","text":"

    See the corresponding documentation of the function for the description of its parameters.

    The last argument of the function is a sutContext object, which is a platform-specific object provided by the backend blockchain\u2019s connector. The context object provided by this connector is a FabricConnectorContext instance but this doesn\u2019t provide anything of use at this time.

    For the current details/documentation of the API, refer to the source code.

    "},{"location":"connectors/fabric-config/#the-submittransaction-function","title":"The submitTransaction function","text":"

    The sutAdapter object received (and saved) in the initializeWorkloadModule function is of type [ConnectorInterface](https://github.com/hyperledger-caliper/caliper/blob/v0.6.0/packages/caliper-core/lib/common/core/connector-interface.js). Its getType() function returns the fabric string value.

    The sendRequests method of the connector API allows the workload module to submit requests to the SUT. It takes a single parameter: an object or array of objects containing the settings of the requests.

    The settings object has the following structure:

    • contractId: string. Required. The ID of the contract to call. This is either the unique contractID specified in the network configuration file or the chaincode ID used to deploy the chaincode, and it must match the id field in the contracts section of channels in the network configuration file.
    • contractFunction: string. Required. The name of the function to call in the contract.
    • contractArguments: string[]. Optional. The list of string arguments to pass to the contract.
    • readOnly: boolean. Optional. Indicates whether the request is a TX or a query. Defaults to false.
    • transientMap: Map. Optional. The transient map to pass to the contract.
    • invokerIdentity: string. Optional. The name of the user who should invoke the contract. If not provided, a user will be selected from the organization defined by invokerMspId or the first organization in the network configuration file if that property is not provided.
    • invokerMspId: string. Optional. The mspid of the user organization who should invoke the contract. Defaults to the first organization in the network configuration file.
    • targetPeers: string[]. Optional. An array of endorsing peer names as the targets of the transaction proposal. If omitted, the target list will be chosen for you and if discovery is used then the node SDK uses discovery to determine the correct peers.
    • targetOrganizations: string[]. Optional. An array of endorsing organizations as the targets of the invoke. If both targetPeers and targetOrganizations are specified, then targetPeers will take precedence.
    • channel: string. Optional. The name of the channel on which the contract to call resides.
    • timeout: number. Optional. [Only applies to 1.4 binding when not enabling gateway use] The timeout in seconds to use for this request.
    • orderer: string. Optional. [Only applies to 1.4 binding when not enabling gateway use] The name of the target orderer for the transaction broadcast. If omitted, then an orderer node of the channel will be automatically selected.
    So invoking a contract looks like the following:

      let requestSettings = {\n    contractId: 'marbles',\n    contractFunction: 'initMarble',\n    contractArguments: ['MARBLE#1', 'Red', '100', 'Attila'],\n    invokerIdentity: 'client0.org2.example.com',\n    timeout: 10\n};\n\nawait this.sutAdapter.sendRequests(requestSettings);\n

      Note

      sendRequests also accepts an array of request settings. However, Fabric does not support submitting an atomic batch of transactions like Sawtooth, so there is no guarantee that the order of these transactions will remain the same, or whether they will reside in the same block.
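      As a sketch, submitting several requests in one call looks like the following (argument values are illustrative):

      let requestSettings = [{\n    contractId: 'marbles',\n    contractFunction: 'initMarble',\n    contractArguments: ['MARBLE#2', 'Blue', '50', 'Aniko'],\n    invokerIdentity: 'client0.org2.example.com'\n},{\n    contractId: 'marbles',\n    contractFunction: 'initMarble',\n    contractArguments: ['MARBLE#3', 'Green', '60', 'Aniko'],\n    invokerIdentity: 'client0.org2.example.com'\n}];\n\nawait this.sutAdapter.sendRequests(requestSettings);\n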

      "},{"location":"connectors/fabric-config/#gathered-tx-data","title":"Gathered TX data","text":"

      The previously discussed sendRequests function returns the result (or an array of results) for the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

      The standard data provided are the following:

      • GetID():string returns the transaction ID.
      • GetStatus():string returns the final status of the transaction, either success or failed.
      • GetTimeCreate():number returns the epoch when the transaction was submitted.
      • GetTimeFinal():number returns the epoch when the transaction was finished.
      • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Unverified (considered failed) transactions could occur, for example, if the adapter loses the connection with every Fabric event hub, missing the final status of the transaction.
      • GetResult():Buffer returns one of the endorsement results returned by the chaincode as a Buffer. It is the responsibility of the user callback to decode it according to the chaincode-side encoding.

      The adapter also gathers the following platform-specific data (if observed) about each transaction, each exposed through a specific key name. The placeholders <P> and <O> in the key names are node names taking their values from the top-level peers and orderers sections from the network configuration file (e.g., endorsement_result_peer0.org1.example.com). The Get(key:string):any function returns the value of the observation corresponding to the given key. Alternatively, the GetCustomData():Map<string,any> returns the entire collection of gathered data as a Map.

      "},{"location":"connectors/fabric-config/#available-data-keys-for-all-fabric-suts","title":"Available data keys for all Fabric SUTs","text":"

      The adapter-specific data keys that are available when binding to any of the Fabric SUT versions are:

      | Key name | Data type | Description |
      |---|---|---|
      | request_type | string | Either the transaction or query string value for traditional transactions or queries, respectively. |
      "},{"location":"connectors/fabric-config/#available-data-keys-for-the-fabric-14-sut-when-gateway-is-not-enabled","title":"Available data keys for the Fabric 1.4 SUT when gateway is not enabled","text":"

      The adapter-specific data keys that are only made available by the v1.4 SUT when the gateway is not enabled are:

      | Key name | Data type | Description |
      |---|---|---|
      | time_endorse | number | The Unix epoch when the adapter received the proposal responses from the endorsers. Saved even in the case of endorsement errors. |
      | proposal_error | string | The error message in case an error occurred during sending/waiting for the proposal responses from the endorsers. |
      | proposal_response_error_<P> | string | The error message in case the endorser peer <P> returned an error as endorsement result. |
      | endorsement_result_<P> | Buffer | The encoded contract invocation result returned by the endorser peer <P>. It is the user callback's responsibility to decode the result. |
      | endorsement_verify_error_<P> | string | Has the value of 'INVALID' if the signature and identity of the endorser peer <P> couldn't be verified. This verification step can be switched on/off through the runtime configuration options. |
      | endorsement_result_error_<P> | string | If the transaction proposal or query execution at the endorser peer <P> results in an error, this field contains the error message. |
      | read_write_set_error | string | Has the value of 'MISMATCH' if the sent transaction proposals resulted in different read/write sets. |
      | time_orderer_ack | number | The Unix epoch when the adapter received the confirmation from the orderer that it successfully received the transaction. Note that this isn't the actual ordering time of the transaction. |
      | broadcast_error_<O> | string | The warning message in case the adapter did not receive a successful confirmation from the orderer node <O>. |
      | broadcast_response_error_<O> | string | The error message in case the adapter received an explicit unsuccessful response from the orderer node <O>. |
      | unexpected_error | string | The error message in case some unexpected error occurred during the life-cycle of a transaction. |
      | commit_timeout_<P> | string | Has the value of 'TIMEOUT' in case the event notification about the transaction did not arrive in time from the peer node <P>. |
      | commit_error_<P> | string | Contains the error code in case the transaction validation fails at the end of its life-cycle on peer node <P>. |
      | commit_success_<P> | number | The Unix epoch when the adapter received a successful commit event from the peer node <P>. Note that transactions committed in the same block have nearly identical commit times, since the SDK receives them block-wise, i.e., at the same time. |
      | event_hub_error_<P> | string | The error message in case some event hub connection-related error occurs with peer node <P>. |

      You can access these data in your workload module after calling sendRequests:

      let requestSettings = {\n    contractId: 'marbles',\n    contractVersion: '0.1.0',\n    contractFunction: 'initMarble',\n    contractArguments: ['MARBLE#1', 'Red', '100', 'Attila'],\n    invokerIdentity: 'client0.org2.example.com',\n    timeout: 10\n};\n\n// single argument, single return value\nconst result = await this.sutAdapter.sendRequests(requestSettings);\n\nlet shortID = result.GetID().substring(8);\nlet executionTime = result.GetTimeFinal() - result.GetTimeCreate();\nconsole.log(`TX [${shortID}] took ${executionTime}ms to execute. Result: ${result.GetStatus()}`);\n
      "},{"location":"connectors/fabric-config/#the-cleanupworkloadmodule-function","title":"The cleanupWorkloadModule function","text":"

      The cleanupWorkloadModule function is called at the end of the round, and can be used to perform any resource cleanup required by your workload implementation.
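      As a sketch, a cleanup implementation might delete the assets a worker created during the round (the contract name, function and roundArguments.assets parameter are hypothetical):

      async cleanupWorkloadModule() {\n    // Hypothetical cleanup: delete the assets this worker created during the round\n    for (let i = 0; i < this.roundArguments.assets; i++) {\n        const assetID = `${this.workerIndex}_${i}`;\n        await this.sutAdapter.sendRequests({\n            contractId: 'basic',\n            contractFunction: 'DeleteAsset',\n            contractArguments: [assetID],\n            readOnly: false\n        });\n    }\n}\n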

      "},{"location":"connectors/fabric-config/#network-configuration-file-reference","title":"Network configuration file reference","text":"

      The YAML network configuration file of the adapter mainly describes the organizations and the identities associated with those organizations. It also provides explicit information about the channels in your Fabric network and the chaincode (containing 1 or more smart contracts) deployed to those channels. It can reference Common Connection Profiles for each organization (as Common Connection Profiles are specific to a single organization). These are the same connection profiles that would be consumed by the node-sdk. Whoever creates the Fabric network and channels should be able to provide appropriate profiles for each organization.

      The following sections detail each part separately. For a complete example, please refer to the example section or one of the files in the Caliper repository, such as the caliper-fabric test folder.

      name Required. Non-empty string. The name of the configuration file.
      name: Fabric\n
      version Required. Non-empty string. Specifies the YAML schema version that the Fabric SDK will use. Only the `'2.0.0'` string is allowed.
      version: '2.0.0'\n
      caliper Required. Non-empty object. Contains runtime information for Caliper. Can contain the following keys.
      • blockchain Required. Non-empty string. Only the \"fabric\" string is allowed for this adapter.
        caliper:\n    blockchain: fabric\n
      • sutOptions Required. Non-empty object. This is the SUT-specific options block; the following options are specific to the Fabric implementation.
        • mutualTls Optional. Boolean. Indicates whether to use client-side TLS in addition to server-side TLS. Cannot be set to true without using server-side TLS. Defaults to false.
          caliper:\n    blockchain: fabric\n    sutOptions:\n      mutualTls: true\n
      • command Optional. Non-empty object. Specifies the start and end scripts. Note:

        Must contain at least one of the following keys.

        • start Optional. Non-empty string. Contains the command to execute at startup time. The current working directory for the commands is set to the workspace.
          caliper:\n  command:\n    start: my-startup-script.sh\n
        • end Optional. Non-empty string. Contains the command to execute at exit time. The current working directory for the commands is set to the workspace.
          caliper:\n  command:\n    end: my-cleanup-script.sh\n
      info Optional. Object. Specifies custom key-value pairs that will be included as-is in the generated report. The key-value pairs have no influence on the runtime behavior.
       info:\n   Version: 1.1.0\n   Size: 2 Orgs with 2 Peers\n   Orderer: Solo\n   Distribution: Single Host\n   StateDB: CouchDB\n
      organizations Required. Non-empty object. Contains information about 1 or more organizations that will be used when running a workload. Even in a multi-organization Fabric network, workloads would usually only be run from a single organization so it would be common to only see 1 organization defined. However it does support defining multiple organizations for which a workload can explicitly declare which organization to use. The first Organization in the network configuration will be the default organization if no explicit organization is requested.
       organizations:\n- mspid: Org1MSP\n  identities:\n    wallet:\n      path: './org1wallet'\n      adminNames:\n      - admin\n    certificates:\n    - name: 'User1'\n      clientPrivateKey:\n        pem: |-\n          -----BEGIN PRIVATE KEY-----\n          ...\n          -----END PRIVATE KEY-----\n      clientSignedCert:\n        pem: |-\n          -----BEGIN CERTIFICATE-----\n          ...\n          -----END CERTIFICATE-----\n  connectionProfile:\n    path: './Org1ConnectionProfile.yaml'\n    discover: true\n- mspid: Org2MSP\n  connectionProfile:\n    path: './Org2ConnectionProfile.yaml'\n    discover: false\n  identities:\n    wallet:\n      path: './org2wallet'\n      adminNames:\n      - admin\n- mspid: Org3MSP\n  peers:\n    - endpoint: peer0.org3.example.com:7051\n      tlsCACerts:\n        pem: |-\n          -----BEGIN CERTIFICATE-----\n          ...\n          -----END CERTIFICATE-----\n      grpcOptions:\n        grpc.keepalive_timeout_ms: 20000\n        grpc.keepalive_time_ms: 120000\n        grpc.http2.min_time_between_pings_ms: 120000\n        grpc.http2.max_pings_without_data: 0\n        grpc.keepalive_permit_without_calls: 1\n
      Each organization must have mspid, identities, and either connectionProfile or peers provided, plus at least 1 certificate or wallet definition in the identities section so that at least 1 identity is defined.
      • mspid Required. Non-empty string. The unique MSP ID of the organization.
        organizations:\n  - mspid: Org1MSP\n
      • connectionProfile Required if peers not provided. Non-empty object. Reference to a Fabric network Common Connection Profile. These profiles are the same profiles that the Fabric SDKs would consume in order to interact with a Fabric network. A Common Connection Profile is organization specific, so you need to ensure you point to a Common Connection Profile that is representative of the organization it is being included under. Connection Profiles can also be in 2 forms. A static connection profile will contain a complete description of the Fabric network, i.e., all the peers and orderers as well as all the channels that the organization is part of. A dynamic connection profile will contain a minimal amount of information, usually just a list of 1 or more peers belonging to the organization (or that it is allowed to access), in order to discover the Fabric network nodes and channels.
        organizations:\n  - mspid: Org1MSP\n    connectionProfile:\n      path: './test/sample-configs/Org1ConnectionProfile.yaml'\n      discover: true\n
        • path Required. Non-empty string. The path to the connection profile file.
        organizations:\n  - mspid: Org1MSP\n    connectionProfile:\n      path: './test/sample-configs/Org1ConnectionProfile.yaml'\n
        • discover Optional. Boolean. This does not need to be provided when binding to the fabric-gateway (it will be ignored if provided) and is relevant only for the 1.4 and 2.2 Fabric bindings. A value of true indicates that the connection profile is a dynamic connection profile and discovery should be used. If not specified, it defaults to false. For a Fabric 1.4 binding you can only set this value to true if you plan to use the gateway option.
        organizations:\n  - mspid: Org1MSP\n    connectionProfile:\n      path: './test/sample-configs/Org1ConnectionProfile.yaml'\n      discover: true\n
      • peers Required if connectionProfile not provided. Non-empty object. Reference to one or more peers that are either:

        • a peer to discover the network from, when bound to Fabric 2.2, or to Fabric 1.4 in conjunction with the gateway enabled option
        • a gateway peer, when bound to Fabric gateway

        This option removes the need for connection profiles, but the Fabric network must be set up correctly to allow the network to be discovered. These entries are the equivalent of a dynamic connection profile, but in a more compact and easier form.
        organizations:\n  - mspid: Org3MSP\n    peers:\n      - endpoint: peer0.org3.example.com:7051\n        tlsCACerts:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n        grpcOptions:\n          grpc.keepalive_timeout_ms: 20000\n          grpc.keepalive_time_ms: 120000\n          grpc.http2.min_time_between_pings_ms: 120000\n          grpc.http2.max_pings_without_data: 0\n          grpc.keepalive_permit_without_calls: 1\n
        • endpoint Required. Non-empty string. The end point of the peer, in the form host:port (note that you do not specify a schema such as grpc:// or grpcs://; in fact these schemas are not real and were invented purely for connection profiles). Whether the end point is secured by TLS or not is determined by the presence of the tlsCACerts property.
        peers:\n  - endpoint: peer0.org3.example.com:7051\n
        • tlsCACerts Optional. Non-empty object. Specifies the TLS root certificate chain used by the client to verify a TLS connection with the peer. Note:

        Must contain at most one of the following keys.

          • path Optional. Non-empty string. The path of the file containing the certificate chain.
          tlsCACerts:\n  path: path/to/cert.pem\n
          • pem Optional. Non-empty string. The content of the certificate file in exact PEM format (which must be split into multiple lines for YAML, or include escaped new lines for JSON).
          tlsCACerts:\n   pem: |\n    -----BEGIN CERTIFICATE-----\n    ...\n    -----END CERTIFICATE-----\n
        • grpcOptions Optional. Non-empty object. A set of gRPC specific options used when creating a gRPC connection to a peer.
        peers:\n  - endpoint: peer0.org3.example.com:7051\n    grpcOptions:\n      grpc.keepalive_timeout_ms: 20000\n      grpc.keepalive_time_ms: 120000\n      grpc.http2.min_time_between_pings_ms: 120000\n      grpc.http2.max_pings_without_data: 0\n      grpc.keepalive_permit_without_calls: 1\n
      • identities Required. Non-empty object. Defines the location of 1 or more identities available for use. Currently this only supports explicit identities (by providing a certificate and private key as PEM) or an SDK wallet that contains 1 or more identities on the file system. At least 1 identity must be provided via one of the child properties of identities.
        identities:\n   wallet:\n     path: './wallets/org1wallet'\n     adminNames:\n     - admin\n   certificates:\n   - name: 'User1'\n     clientPrivateKey:\n       pem: |-\n         -----BEGIN PRIVATE KEY-----\n         ...\n         -----END PRIVATE KEY-----\n     clientSignedCert:\n       pem: |-\n         -----BEGIN CERTIFICATE-----\n         ...\n         -----END CERTIFICATE-----\n
        • certificates Optional. A list of non-empty objects. Defines 1 or more identities by providing the PEM information for the client certificate and client private key, as either an embedded PEM, a base64 encoded string of the PEM file contents, or a path to individual PEM files.
          certificates:\n- name: 'User1'\n  clientPrivateKey:\n     path: path/to/privateKey.pem\n  clientSignedCert:\n     path: path/to/cert.pem\n- name: 'Admin'\n  admin: true\n  clientPrivateKey:\n   pem: |-\n    -----BEGIN PRIVATE KEY-----\n    MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgIRZo3SAPXAJnGVOe\n    jRALBJ208m+ojeCYCkmJQV2aBqahRANCAARnoGOEw1k+MtjHH4y2rTxRjtOaKWXn\n    FGpsALLXfBkKZvxIhbr+mPOFZVZ8ztihIsZBaCuCIHjw1Tx65szJADcO\n    -----END PRIVATE KEY-----\n  clientSignedCert:\n   pem: |-\n     -----BEGIN CERTIFICATE-----\n    MIICSDCCAe+gAwIBAgIQfpGy5OOXBYpKZxg89x75hDAKBggqhkjOPQQDAjB2MQsw\n    CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\n    YW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEfMB0GA1UEAxMWdGxz\n    Y2Eub3JnMS5leGFtcGxlLmNvbTAeFw0xODA5MjExNzU3NTVaFw0yODA5MTgxNzU3\n    NTVaMHYxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH\n    Ew1TYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMR8wHQYD\n    VQQDExZ0bHNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D\n    AQcDQgAED4FM1+iq04cjveIDyn4uj90lJlO6rASeOIzm/Oc2KQOjpRRlB3H+mVnp\n    rXN6FacjOp0/6OKeEiW392dcdCMvRqNfMF0wDgYDVR0PAQH/BAQDAgGmMA8GA1Ud\n    JQQIMAYGBFUdJQAwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgPQRWjQR5EUJ7\n    xkV+zbfY618IzOYGIpfLaV8hdlZfWVIwCgYIKoZIzj0EAwIDRwAwRAIgYzk8553v\n    fWAOZLxiDuMN9RiHve1o5aAQad+uD+eLpxMCIBmv8CtXf1C60h/0zyG1D6tTTnrB\n    H8Zua3x+ZQn/kqVv\n    -----END CERTIFICATE-----\n- name: 'User3'\n  clientPrivateKey:\n   pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JR0hBZ0VBTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEJHMHdhd0lCQVFRZ0lSWm8zU0FQWEFKbkdWT2UKalJBTEJKMjA4bStvamVDWUNrbUpRVjJhQnFhaFJBTkNBQVJub0dPRXcxaytNdGpISDR5MnJUeFJqdE9hS1dYbgpGR3BzQUxMWGZCa0tadnhJaGJyK21QT0ZaVlo4enRpaElzWkJhQ3VDSUhqdzFUeDY1c3pKQURjTwotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==\n  clientSignedCert:\n   pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNXRENDQWY2Z0F3SUJBZ0lSQU1wU2dXRmpESE9vaFhhMFI2ZTlUSGd3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpFdVpYaGhiWEJzWlM1amIyMHdIaGNOTWpBd09UQTNNVEUwTWpBd1doY05NekF3T1RBMU1URTAKTWpBd1dqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTVM1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCTWRMdlNVRElqV1l1Qnc0WVZ2SkVXNmlmRkx5bU9BWDdHS1k2YnRWUERsa2RlSjh2WkVyWExNegpKV2ppdnIvTDVWMlluWnF2ME9XUE1NZlB2K3pIK1JHamJUQnJNQTRHQTFVZER3RUIvd1FFQXdJQnBqQWRCZ05WCiBIU1VFRmpBVUJnZ3JCZ0VGQlFjREFnWUlLd1lCQlFVSEF3RXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QXBCZ05WCkhRNEVJZ1FnNWZPaHl6d2FMS20zdDU0L0g0YjBhVGU3L25HUHlKWk5oOUlGUks2ZkRhQXdDZ1lJS29aSXpqMEUKQXdJRFNBQXdSUUloQUtFbnkvL0pZN0dYWi9USHNRSXZVVFltWHNqUC9iTFRJL1Z1TFg3VHpjZWZBaUJZb1N5WQp5OTByZHBySTZNcDZSUGlxalZmMDJQNVpDODZVa1AwVnc0cGZpUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n
          • name Required. Non-empty string. Specifies a name to associate with this identity. This name doesn't have to match anything within the certificate itself, but it must be unique.
            certificates:\n  - name: 'User1'\n
          • admin Optional. Boolean. Indicates if this identity can be considered an admin identity for the organization. Defaults to false if not provided. This only needs to be provided if you plan to create channels and/or install and instantiate contracts (chaincode).
            certificates:\n  - name: 'User2'\n    admin: true\n
          • clientPrivateKey Required. Non-empty object. Specifies the identity's private key for the organization. Note:

            Must contain at most one of the following keys.

            • path Optional. Non-empty string. The path of the file containing the private key.
               clientPrivateKey:\n    path: path/to/cert.pem\n
            • pem Optional. Non-empty string. The content of the private key file, either in exact PEM format (which must be split into multiple lines for YAML, or contain newline characters for JSON), or a base64 encoded version of the PEM (which also encodes the required newlines) as a single string. This single-string format makes it much easier to embed into the network configuration file, especially for a JSON based file.
              clientPrivateKey:\n   pem: |\n     -----BEGIN PRIVATE KEY-----\n      MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgIRZo3SAPXAJnGVOe\n      jRALBJ208m+ojeCYCkmJQV2aBqahRANCAARnoGOEw1k+MtjHH4y2rTxRjtOaKWXn\n      FGpsALLXfBkKZvxIhbr+mPOFZVZ8ztihIsZBaCuCIHjw1Tx65szJADcO\n      -----END PRIVATE KEY-----\n
          • clientSignedCert Required. Non-empty object. Specifies the identity's certificate for the organization. Note:

              Must contain at most one of the following keys.

            • path Optional. Non-empty string. The path of the file containing the certificate.
                 clientSignedCert:\n    path: path/to/cert.pem\n
            • pem Optional. Non-empty string. The content of the certificate file, either in exact PEM format (which must be split into multiple lines for YAML, or contain newline characters for JSON), or a base64 encoded version of the PEM (which also encodes the required newlines) as a single string. This single-string format makes it much easier to embed into the network configuration file, especially for a JSON based file.
                clientSignedCert:\n   pem: |\n     -----BEGIN CERTIFICATE-----\n      MIICSDCCAe+gAwIBAgIQfpGy5OOXBYpKZxg89x75hDAKBggqhkjOPQQDAjB2MQsw\n      CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\n      YW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEfMB0GA1UEAxMWdGxz\n      Y2Eub3JnMS5leGFtcGxlLmNvbTAeFw0xODA5MjExNzU3NTVaFw0yODA5MTgxNzU3\n      NTVaMHYxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH\n      Ew1TYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMR8wHQYD\n      VQQDExZ0bHNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D\n      AQcDQgAED4FM1+iq04cjveIDyn4uj90lJlO6rASeOIzm/Oc2KQOjpRRlB3H+mVnp\n      rXN6FacjOp0/6OKeEiW392dcdCMvRqNfMF0wDgYDVR0PAQH/BAQDAgGmMA8GA1Ud\n      JQQIMAYGBFUdJQAwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgPQRWjQR5EUJ7\n      xkV+zbfY618IzOYGIpfLaV8hdlZfWVIwCgYIKoZIzj0EAwIDRwAwRAIgYzk8553v\n      fWAOZLxiDuMN9RiHve1o5aAQad+uD+eLpxMCIBmv8CtXf1C60h/0zyG1D6tTTnrB\n      H8Zua3x+ZQn/kqVv\n      -----END CERTIFICATE-----\n
        • wallet Optional. Non-empty object. Provides the path to a file system wallet. Be aware that the persistence format used between v1.x and v2.x of the node SDKs changed, so make sure you provide a wallet created in the appropriate format for the version of SUT you bind to.
          • path Required. Non-empty string. The path to the file system wallet.
                  identities:\n  wallet:\n    path: './wallets/org1wallet'\n
          • adminNames Optional. List of strings. 1 or more names in the wallet that are identified as organization administrators. This only needs to be provided if you plan to create channels and/or install and instantiate contracts (chaincode).
                  identities:\n  wallet:\n    path: './wallets/org1wallet'\n    adminNames:\n    - admin\n    - another_admin\n
      channels Required. A list of objects. Contains one or more unique channels, with associated information about the chaincode (contracts section) that will be available on the channel.
                channels:\n- channelName: mychannel\n  contracts:\n  - id: marbles\n    contractID: myMarbles\n\n- channelName: somechannel\n  contracts:\n  - id: basic\n
                \n
                  \n
                • \n \n channelName\n\n Required. Non-empty String.\n \n The name of the channel.\n\n
                  channels:\n  - channelName: mychannel\n
                  \n \n
                • \n
                • \n \n contracts\n Required. Non-sparse array of objects.\n \n Each array element contains information about a chaincode deployed to the channel.\n \n Note:\n

                  the contractID value of every contract in every channel must be unique on the configuration file level! If contractID is not specified for a contract then its default value is the id of the contract.

                  \n \n\n
                  channels:\n  mychannel\n    contracts:\n    - id: simple\n    - id: smallbank\n
                  \n
                • \n \n id\n\n Required. Non-empty string.\n \n The chaincode ID that was specified when the chaincode was deployed to the channel\n\n
                  channels:\n  mychannel\n    contracts:\n    - id: simple\n
                  \n \n
                • \n
                • \n \n contractID\n\n Required. Non-empty string.\n \n The Caliper-level unique ID of the contract. This ID will be referenced from the user callback modules. Can be an arbitrary name, it won\u2019t effect the contract properties on the Fabric side.\n \n If omitted, it defaults to the id property value.\n\n
                  channels:\n  mychannel\n    contracts:\n    - id: simple\n    - contractID: simpleContract\n
                  \n \n
                • \n \n \n
                "},{"location":"connectors/fabric-config/#network-configuration-example","title":"Network Configuration Example","text":"

                The following example is a Fabric network configuration for the following network topology and artifacts:

• two organizations, Org1MSP and Org2MSP (note that having two organizations is not common in a network configuration file);
• one channel named mychannel;
• the asset-transfer-basic chaincode deployed to mychannel with a chaincode id of basic;
• the nodes of the network use TLS communication, but not mutual TLS;
• the Fabric samples test network is started and terminated automatically by Caliper.

name: Fabric
version: "2.0.0"

caliper:
  blockchain: fabric
  sutOptions:
    mutualTls: false
  command:
    start: ../fabric-samples/test-network/network.sh up createChannel && ../fabric-samples/test-network/network.sh deployCC -ccp ../fabric-samples/asset-transfer-basic/chaincode-javascript -ccn basic -ccl javascript
    end: ../fabric-samples/test-network/network.sh down

info:
  Version: 1.1.0
  Size: 2 Orgs
  Orderer: Raft
  Distribution: Single Host
  StateDB: GoLevelDB

channels:
  - channelName: mychannel
    contracts:
    - id: basic
      contractID: BasicOnMyChannel

organizations:
  - mspid: Org1MSP
    identities:
      certificates:
      - name: 'admin.org1.example.com'
        admin: true
        clientPrivateKey:
          pem: |-
            -----BEGIN PRIVATE KEY-----
            ...
            -----END PRIVATE KEY-----
        clientSignedCert:
          pem: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
    connectionProfile:
      path: './Org1ConnectionProfile.yaml'
      discover: true
  - mspid: Org2MSP
    identities:
      certificates:
      - name: 'admin.org2.example.com'
        admin: true
        clientPrivateKey:
          pem: |-
            -----BEGIN PRIVATE KEY-----
            ...
            -----END PRIVATE KEY-----
        clientSignedCert:
          pem: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
    connectionProfile:
      path: './Org2ConnectionProfile.json'
      discover: true

                Another example with only a single organization but using the peers property so everything required is contained in a single network configuration file:

name: Fabric
version: "2.0.0"

caliper:
  blockchain: fabric
  sutOptions:
    mutualTls: false

channels:
  - channelName: mychannel
    contracts:
    - id: basic

organizations:
  - mspid: Org1MSP
    identities:
      certificates:
      - name: 'admin.org1.example.com'
        admin: true
        clientPrivateKey:
          pem: |-
            -----BEGIN PRIVATE KEY-----
            ...
            -----END PRIVATE KEY-----
        clientSignedCert:
          pem: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
    peers:
      - endpoint: peer0.org1.example.com:7051
        grpcOptions:
          ssl-target-name-override: peer0.org1.example.com
          grpc.keepalive_time_ms: 600000
        tlsCACerts:
          pem: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
                "},{"location":"connectors/fabric-config/#license","title":"License","text":"

                The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                "},{"location":"connectors/writing-connectors/","title":"Writing Connectors","text":""},{"location":"connectors/writing-connectors/#overview","title":"Overview","text":"

Connectors are probably the most important modules in Caliper. They provide an abstraction layer between the system under test (SUT) and the different Caliper components (e.g., the manager and worker processes, or the workload modules). A connector's job is to simplify interaction with the SUT as much as possible, hiding any peculiarities behind its API.

                Note

                To get a sense of how a connector fits into the Caliper architecture, see the corresponding architecture documentation sections about multi-platform support, the manager process and the worker processes.

Caliper ships with some predefined/built-in connectors, but in general, connectors are treated as pluggable components (just like resource and TX monitors, workload modules, etc.). So nothing stops you from implementing and using your own 3rd party connector! However, we strongly recommend that you absorb every part of this guide before implementing a new connector.

                "},{"location":"connectors/writing-connectors/#requirements-for-quality-connectors","title":"Requirements for quality connectors","text":"

                A connector\u2019s complexity is usually proportional to the complexity of the SUT (and/or its programming model). Accordingly, connectors are considered heavy-weight components compared to other extension points in Caliper.

                There are a few things to keep in mind when implementing a connector. Some are technical, some impact usability.

                Note

Caliper does not constrain the types of SUT you can test. Accordingly, the following points are general guidelines for connectors targeting complex distributed systems. You are allowed to deviate from this advice, but you should probably document such choices to lower the surprise factor for end users.

1. Keep to the predefined interface.
   • You must implement the given interface so Caliper modules can interact with the connector.
   • If you expose additional capabilities outside of the interface, then you will disrupt the programming model of workload module developers. They will have to handle your extra API as a separate case, complicating the development of cross-platform benchmarks. However, if you are implementing a workload module to performance test a specific SUT, then this is not a concern.
   • If your connector behaves similarly to other connectors following this guide, users will be quick to adapt and experiment with your connector/SUT.

2. Consider the distributed nature of the SUT.
   • Distributed systems consist of multiple nodes, sometimes with different roles.
   • You probably don't want the connector to be just a proxy for a single SUT node. The connector should be aware of as many SUT nodes as it makes sense, to support features like load balancing or SUT-specific request execution policies.
   • Hide the network topology as much as you can from other Caliper modules, especially from workload modules. Most of the time an emulated client does not care about the receiving end of a request as long as it's executed.
   • If you must expose certain nodes to the workload modules, then do that through simple (preferably text-based) handles. But do not expose implementation-specific classes representing the nodes!

3. Consider the actors in the SUT.
   • Authentication and authorization are cornerstones for almost every remote system, thus handling digital identities (which can come in many forms) must be a first-class feature in connectors.
   • Similarly to the nodes in the SUT, there will be many actors/clients with different privileges initiating different requests. A connector should be able to impersonate multiple clients to allow for diverse client behaviors in workload modules.
   • A connector should allow for easy switching between client identities for each request, thus the connector must expose the identities towards the workload modules somehow. The same advice applies as for the nodes: use simple (preferably text-based) handles, but do not expose implementation-specific classes representing the identities!

4. Do not reinvent the wheel.
   • Each system exposes a standard remote API for communicating with clients. These APIs can take many forms (REST, gRPC, etc.). Regardless of the API technology used, there's probably a mature client library for it. Or even better, the target platform has its own SDK!
   • A connector shouldn't bother with network-level communication and similar low-level details. Delegate such tasks to the SDKs or client libraries. This will allow your connector to be more robust and to attract additional contributors familiar with the used libraries.

5. Do not be the bottleneck.
   • The purpose of Caliper is to performance test the SUT from the client's point of view.
   • If assembling and sending a request takes time on the same order of magnitude as executing the request, then the results won't be representative. Sending requests is considered a hot path for connectors, and it should be as efficient as possible.
   • Using SDKs and widely known client libraries is an exception. Real client-side applications will probably do the same, so the library overheads must be incorporated into the request latencies. Do not micro-optimize by writing your own special-purpose SDK just to push down the latency numbers!
   • Connector bottlenecks on the hot path will influence/limit the request output rate of Caliper worker processes. Caliper users won't be happy if they have to launch 10 worker processes just to send 100 requests per second to the SUT.

                Note

                A connector\u2019s job is to bridge the platform-agnostic Caliper-side API with the high-level SUT-specific client library, while adhering to the above points.

                "},{"location":"connectors/writing-connectors/#implementing-the-connector","title":"Implementing the connector","text":"

                You should treat a connector implementation process as a full-fledged Node.js project. Refer to the Caliper integration section for the suggested project structure. Putting the project structure aside, you have four implementation-related tasks:

                1. Implement the connector interface (optionally using the available utility base class).
                2. Implement a factory method for instantiating the connector.
                3. Define the schema of your network configuration file.
                4. Provide binding configurations for your connector.
                "},{"location":"connectors/writing-connectors/#the-connector-interface","title":"The connector interface","text":"

                Once you add the @hyperledger/caliper-core package (or one of its specific versions) as your project dependency, you will gain access to its exported ConnectorInterface class, which declares the following interface:

class ConnectorInterface extends EventEmitter {
    getType() {}
    getWorkerIndex() {}
    async init(workerInit) {}
    async installSmartContract() {}
    async prepareWorkerArguments(number) {}
    async getContext(roundIndex, args) {}
    async releaseContext() {}
    async sendRequests(requests) {}
}

module.exports = ConnectorInterface;

                The interface is detailed in the next subsection, but for now, keep the following things in mind:

                1. The connector is used in two different environments: in the manager and worker processes. The corresponding environment of the methods will be discussed in the interface reference subsection.
                2. The connector must expose certain events about the requests, otherwise it\u2019s not observable by the Caliper workers, which breaks the scheduling mechanism of Caliper.
                3. sendRequests is the hot path for the interface, implement it carefully and efficiently!
4. The behavior of the connector (and the methods you really have to implement) largely depends on the capabilities of the network configuration schema. The more flexibility you allow on the Caliper side of the network, the more features you will have to provide. A flexible connector makes it easier to set up benchmark scenarios, resulting in happy users. A bare-bones skeleton follows this list.
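To make the required shape concrete, here is a bare-bones, purely illustrative skeleton of a hypothetical fast-ledger connector (FastLedgerConnector and the 'fast-ledger' name are invented for this guide; only the ConnectorInterface export of @hyperledger/caliper-core, introduced below, is assumed):

'use strict';

// Illustrative skeleton only; every method body is a stub to be filled in
const { ConnectorInterface } = require('@hyperledger/caliper-core');

class FastLedgerConnector extends ConnectorInterface {
    constructor(workerIndex) {
        super();
        this.workerIndex = workerIndex;
    }

    getType() { return 'fast-ledger'; }
    getWorkerIndex() { return this.workerIndex; }
    async init(workerInit) { /* one-time (manager) or local (worker) setup */ }
    async installSmartContract() { /* deploy contracts, if supported remotely */ }
    async prepareWorkerArguments(number) { return Array.from({ length: number }, () => ({})); }
    async getContext(roundIndex, args) { return {}; }
    async releaseContext() { /* release the resources claimed for the round */ }
    async sendRequests(requests) { /* the hot path; raise events, return TxStatus results */ }
}

module.exports = FastLedgerConnector;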
                "},{"location":"connectors/writing-connectors/#interface-reference","title":"Interface reference","text":"

                getType

                • Description Retrieves a short name for the connector type, usually denoting the SUT, e.g., fast-ledger. The name can be used by workload modules capable of targeting multiple types of SUT.
                • Return type string
                • Returns The name of the connector.

                getWorkerIndex

                • Description Retrieves the zero-based worker process index that instantiated the connector.
                • Return type number
                • Returns The worker process index.

                init

                • Description The method is called by both the manager and (optionally) the worker processes to initialize the connector instance, and potentially certain aspects of the SUT. The initialization tasks are connector-specific, but usually can be divided among the processes:
                  • The manager process instance performs one-time initialization tasks that require interaction with the SUT. Such tasks can include, for example, the creation of digital identities, or other housekeeping actions regarding the SUT.
• The worker process instances usually just perform local housekeeping tasks, like creating the necessary data structures or caches needed later on when sending requests. This step is optional and can be performed by your factory method after creating the connector instance. If the worker processes need to access the SUT in this phase, then they should do so using only idempotent operations (like configuration queries) that guarantee correct execution for an arbitrary number of parallel worker processes. A possible manager/worker split is sketched below.
                • Parameters
                  • workerInit (boolean) Denotes whether the method is called by a worker process, or by the manager process.
                • Return type Promise
                • Returns The promise that will resolve upon method completion.
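Continuing the hypothetical fast-ledger skeleton from earlier, the manager/worker split described above could look like this minimal sketch (the _enrollIdentities helper, requestCache, and createdIdentities fields are invented for illustration):

async init(workerInit) {
    if (workerInit) {
        // Worker-side: local, idempotent housekeeping only
        this.requestCache = new Map(); // hypothetical per-worker cache
    } else {
        // Manager-side: one-time SUT initialization,
        // e.g., creating digital identities (hypothetical helper)
        this.createdIdentities = await this._enrollIdentities();
    }
}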

                installSmartContract

                • Description The method is called by the manager process to perform contract deployment on the SUT, if allowed remotely.
                • Return type Promise
                • Returns The promise that will resolve upon method completion.

                prepareWorkerArguments

• Description This method is called by the manager process, and ensures that the connector instance in the manager process can distribute data to the connector instances in the worker processes. This method is the perfect place to return, for example, newly created digital identities to the manager process, which in turn will distribute them to the worker process instances for further use (as sketched below).
                • Return type Promise<object[]>
                • Returns The promise of connector-specific objects for each worker that will resolve upon method completion.
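For example, still in the hypothetical fast-ledger connector, handing the manager-created identities to every worker could look like this sketch (createdIdentities is the invented field from the init example above):

async prepareWorkerArguments(number) {
    // Return one connector-specific object per worker;
    // here every worker receives the same identity set
    return Array.from({ length: number }, () => ({
        identities: this.createdIdentities
    }));
}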

                getContext

                • Description: The method is called by the worker processes before each round, and can be used to assemble a connector-specific object that will be shared with the workload module of the current round. The method is also the perfect place to claim resources necessary for the next round, like establishing connections to remote nodes.
                • Parameters:
                • roundIndex (number): The zero-based index of the imminent round.
                • args (object): The object assembled for this worker instance in the prepareWorkerArguments method of the manager instance.
                • Return type: Promise<object>
                • Returns: The promise of a connector-specific object that will resolve upon method completion.

                releaseContext

                • Description: The method is called by the worker processes after each round, and can be used to release resources claimed in the getContext method.
                • Return type: Promise
                • Returns: The promise that will resolve upon method completion.

                sendRequests

                • Description: This method is the hot path of the connector, called in the worker processes by the workload modules of the rounds. The method must accept one or multiple settings objects pertaining to the request or requests that must be sent to the SUT. The connector doesn\u2019t have to preserve the order of execution for the requests, unless the target SUT type supports such request batches. The connector must gather at least the start time, finish time, and final status (successful or failed) of every request through TxStatus instances.
                • Return type: Promise
                • Returns: The promise of one or more request execution results that will resolve upon method completion.
                • "},{"location":"connectors/writing-connectors/#exposed-events","title":"Exposed events","text":"

The connector must expose the following events with names matching the defined constants for them. Without these events the Caliper scheduling mechanism won't function correctly, and other components might also rely on them (like TX monitors). A sketch of raising them follows the event reference.

                  txsSubmitted

                  • Description The event must be raised when one or more requests are submitted for execution to the SUT. Typically the event should be raised for every individual request.
                  • Parameters
                    • count (number) The number of requests submitted.

                  txsFinished

                  • Description The event must be raised when one or more requests are fully processed by the SUT (i.e., the connector received the results).
                  • Parameters
• results (TxStatus|TxStatus[]) One or more request execution results gathered by the connector.
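As an illustration, a hand-rolled sendRequests on the skeleton class could raise the events around each request like the following sketch. The event names are the documented strings above (using the constants exported by the core package is preferable to string literals), and _sendSingleRequest is a hypothetical helper:

async sendRequests(requests) {
    // Accept a single settings object or an array of them
    const list = Array.isArray(requests) ? requests : [requests];
    const results = [];

    for (const request of list) {
        this.emit('txsSubmitted', 1); // one request is being submitted
        results.push(await this._sendSingleRequest(request)); // hypothetical helper
    }

    this.emit('txsFinished', results); // hand back the gathered TxStatus results
    return results;
}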
                  "},{"location":"connectors/writing-connectors/#optional-base-class","title":"Optional base class","text":"

                  The @hyperledger/caliper-core package also exports a ConnectorBase class that provides sensible default implementations for the following ConnectorInterface methods:

                  • prepareWorkerArguments: An empty object is returned for each worker by default, i.e., nothing is shared with the worker process instances.
                  • sendRequests: Handles the cases when a single or multiple requests are submitted by the workload modules. Also raises the necessary events before and after the requests. The method delegates the execution of a single request to the _sendSingleRequest method (see below).
                  • constructor: Declares a constructor that requires the worker index and SUT/connector type as parameters.
                  • getType: Provides a simple getter for the corresponding constructor argument.
                  • getWorkerIndex: Provides a simple getter for the corresponding constructor argument.

If you opt to use this base class for your connector, then you must implement the _sendSingleRequest method, as sketched after the reference below.

                  _sendSingleRequest

                  • Description The method only has to handle the sending and processing of a single request.
                  • Parameters
                    • request (object) A connector-specific settings object for the request.
                  • Return type Promise
                  • Returns The promise of a request execution result that will resolve upon method completion.
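Putting the pieces together, a minimal _sendSingleRequest built on the base class might look like the following sketch. ConnectorBase and TxStatus are the exports described in this guide; the fast-ledger SDK, its API, and the request fields (operation, args) are hypothetical:

'use strict';

const { ConnectorBase, TxStatus } = require('@hyperledger/caliper-core');
const FastLedgerClient = require('fast-ledger-sdk'); // hypothetical SDK

class FastLedgerConnector extends ConnectorBase {
    constructor(workerIndex) {
        super(workerIndex, 'fast-ledger'); // worker index and connector type
        this.client = undefined;
    }

    async getContext(roundIndex, args) {
        // Claim round resources, e.g., connect to a (hypothetical) SUT node
        this.client = await FastLedgerClient.connect(args.endpoint);
        return { roundIndex };
    }

    async releaseContext() {
        await this.client.disconnect();
    }

    async _sendSingleRequest(request) {
        const status = new TxStatus(); // records the creation time of the request
        try {
            // "operation" and "args" are assumed request settings for this sketch
            const result = await this.client.submit(request.operation, request.args);
            status.SetResult(result);
            status.SetStatusSuccess();
        } catch (error) {
            status.SetStatusFail();
        }
        return status;
    }
}

module.exports = FastLedgerConnector;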
                  • "},{"location":"connectors/writing-connectors/#the-factory-method","title":"The factory method","text":"

                    The entry point for your connector implementation will be a factory method. The manager and worker processes will call this exported factory method to instantiate your connector (be careful with the casing).

ConnectorFactory

• Description Instantiates a connector and optionally initializes it. When called from the manager process (denoted with a worker index of -1), the manager will handle calling the init and installSmartContract methods. This initialization is optional in the worker processes, so the factory method must handle it if required.
• Parameters
  • workerIndex (number) The zero-based index of the worker process, or -1 for the manager process.
• Return type Promise
• Returns The promise of a ConnectorInterface instance that will resolve upon method completion.

                    The following is a possible implementation of a factory method for our fast-ledger connector:

'use strict';

const FastLedgerConnector = require('./fast-ledger-connector');

async function ConnectorFactory(workerIndex) {
    const connector = new FastLedgerConnector(workerIndex, 'fast-ledger');

    // initialize the connector for the worker processes
    if (workerIndex >= 0) {
        await connector.init(true);
    }

    return connector;
}

module.exports.ConnectorFactory = ConnectorFactory;
                    "},{"location":"connectors/writing-connectors/#the-network-configuration-file","title":"The network configuration file","text":"

                    The network configuration file can contain whatever information your connector requires to communicate with the SUT and fulfill the connector quality requirements. The configuration file can be either a JSON or YAML file. YAML is preferred for its readability and comment support.

                    The network configuration schema must contain a mandatory top-level field with the following structure:

# mandatory
caliper:
  # mandatory
  blockchain: fast-ledger
  # optional
  command:
    start: startLedger.sh
    end: stopLedger.sh

                    The caliper.blockchain attribute tells Caliper which connector to load for the test. The value of the attribute depends on how you want to integrate the connector with Caliper.

                    "},{"location":"connectors/writing-connectors/#binding-configuration","title":"Binding configuration","text":"

                    The binding command of Caliper allows you to specify major connector dependencies to be installed during runtime (instead of packaged with the connector during development time). SUT SDKs and other client libraries usually fall into this category (i.e., libraries that facilitate interactions with the SUT). If the APIs of such libraries are consistent across different versions, then your single connector implementation can possibly target multiple SUT versions.

                    In that case, users should be able to select a specific SDK version that will target the corresponding SUT version. You can achieve this by providing a binding configuration file (JSON or YAML) for your connector.

                    "},{"location":"connectors/writing-connectors/#simple-configuration","title":"Simple configuration","text":"

                    The schema of a general binding configuration is usually simple:

sut:
  fast-ledger:
    1.0:
      packages: ['fast-ledger-sdk@1.0.0']
    1.4:
      packages: ['fast-ledger-sdk@1.4.5']
    2.0: &fast-ledger-latest
      packages: ['fast-ledger-sdk@2.0.0']
    latest: *fast-ledger-latest

                    Several things to note about the above configuration:

                    1. The sut top-level attribute denotes the configuration section that Caliper will process. You can write arbitrary YAML sections outside of this attribute, without any schema constraints. This means you can utilize, for example, YAML anchors and aliases to improve the readability of your complex binding specification. You will see an example soon.
                    2. The sut attribute contains keys that identify the SUT types whose connector supports binding. We defined a single SUT type (fast-ledger) for our example connector.
                    3. Under fast-ledger we can define several SUT versions our connector supports. It\u2019s recommended to use keys corresponding to the semantic version of the SUT. The users will specify the binding using the SUT type and SUT version, for example, by passing the --caliper-bind-sut fast-ledger:1.4 command line argument to Caliper.
4. Every SUT version needs to declare the required packages Caliper should install during runtime. The different SUT versions will usually declare different SDK versions to install.
                    5. Even though we declared 1.4 as SUT version, we asked Caliper to install the 1.4.5 SDK version. It\u2019s good practice to always bind to the latest available patch release, so users can enjoy the latest bug fixes for an SDK version.
                    6. Many library management systems (like NPM and DockerHub) provide latest tags to denote the newest release. If you provide such a binding \u201cversion\u201d for your connector, then users can bind your connector by using the simplified --caliper-bind-sut fast-ledger notation. You can easily refer to the binding version you deem latest using YAML anchors and aliases. Doing so will make your configuration easier to read and maintain.
                    "},{"location":"connectors/writing-connectors/#advanced-configuration","title":"Advanced configuration","text":"

Even though your connector supports multiple SUT versions on the implementation level, that doesn't mean that all versions can be equally supported in the same environment. A typical example would be supporting older SUT versions whose corresponding SDK packages fail to build "automagically" under newer Node.js versions. The binding configuration provides some flexibility to tune the installation of these packages.

Node modules can include native components which may have to be compiled on the system they will run on, and against the specific version of Node.js that is being used (some package owners make precompiled versions for specific platforms and Node.js versions available to download, to avoid having to perform a local compilation). This means you will have to have the appropriate compilers installed. Moreover, the compiler used is strict by default, which can result in multiple compilation errors.

                    To circumvent such hardships, the binding configuration schema allows us to tinker with the install process by specifying command line arguments and environment variables (picked up by npm install). You can put such install logic under the settings attribute.

sut:
  fast-ledger:
    1.0:
      packages: ['fast-ledger-sdk@1.0.0', 'comm-lib@1.0.0']
      settings:
      # compiling older comm-lib on newer Node.js version
      - versionRegexp: '^((?!v8\.).)*$'
        env:
          CXXFLAGS: '-Wno-error=class-memaccess'
          CFLAGS: '-Wno-error=class-memaccess'
        args: '--build-from-source'

                    The settings attribute is actually an array of potentially applicable settings. Caliper will process them in order, and picks the first settings object whose regular expression (versionRegexp) matches the used Node.js version. The example demonstrates a setting that should be applied if a newer Node.js version is used (i.e., the version is not v8.X). If that is the case, then pass the command line arguments (specified by args) to npm install, and also set the environment variables specified under env (which will also be picked up by npm install and any sub-command executed by it).
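To make the selection rule concrete, the following standalone sketch mimics, in simplified form, how such a settings list could be filtered against the running Node.js version. It is illustrative only, not Caliper's actual implementation:

'use strict';

// Hypothetical binding settings, mirroring the YAML example above
const settings = [
    {
        versionRegexp: '^((?!v8\\.).)*$', // matches any version except v8.x
        env: { CXXFLAGS: '-Wno-error=class-memaccess', CFLAGS: '-Wno-error=class-memaccess' },
        args: '--build-from-source'
    }
];

// Pick the first settings object whose regexp matches the Node.js version (e.g., 'v16.20.0')
const applicable = settings.find(s => new RegExp(s.versionRegexp).test(process.version));

if (applicable) {
    // These would be handed to "npm install" as environment variables and CLI arguments
    console.log('Using env:', applicable.env, 'and args:', applicable.args);
}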

                    Your connector can use such advanced specification to provide support for a wide range of SUT/SDK versions in multiple environments.

                    "},{"location":"connectors/writing-connectors/#documenting-the-connector","title":"Documenting the connector","text":"

Providing a proper user manual for your connector is just as important as a quality implementation. Otherwise, users will have a hard time interacting with your connector. We will take the Fabric connector documentation as an example, section by section.

                    "},{"location":"connectors/writing-connectors/#overview_1","title":"Overview","text":"

                    You should provide a short summary of your connector. This should include the following:

                    • The supported SUT type and versions.
                    • The capabilities of your connector (supported SUT features and limitations).

                    The overview will lay down the foundation of what users can expect from your connector.

                    "},{"location":"connectors/writing-connectors/#installing-dependencies","title":"Installing dependencies","text":"

                    If your connector supports multiple SUT versions through the binding process, then document the necessary steps to bind to a specific version. The binding process is universal for every connector, so a short example should suffice.

                    However, it can happen that not every SUT feature is supported by every binding. Carefully document the limitations of the affected bindings, and provide some workaround if possible.

                    "},{"location":"connectors/writing-connectors/#runtime-settings","title":"Runtime settings","text":"

The network configuration file only describes the SUT topology and related artifacts. SUT-agnostic design choices can still arise during the development of a connector. Instead of deciding yourself, you should delegate such choices to the end users, utilizing the runtime configuration mechanism of Caliper where possible/meaningful.

                    Such settings typically affect the operating mode of the connector, but don\u2019t change the overall semantics of the SUT interactions. Be sure to document every available runtime setting for your connector! Also, don\u2019t forget to provide sensible defaults to these settings where possible.

                    "},{"location":"connectors/writing-connectors/#request-api","title":"Request API","text":"

The main users of your connector will be workload module developers. They will interact with your connector mainly through the [sendRequests](#interface-reference) method. The method accepts either a single settings object or multiple settings objects relating to the requests the user wants to send. You have to precisely specify what kinds of settings are available for a request. These will typically include the following (see the sketch after the list):

                    • The operation to execute on the SUT.
                    • The arguments of the operation.
                    • The identity who should submit the request.
                    • The node(s) to send the request to.
                    • Differentiation between read-only/write requests.
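As an illustration, a request settings object for our hypothetical fast-ledger connector could cover those points like this sketch (every field name here is invented; this.sutAdapter is the connector handle available inside Caliper workload modules):

// Inside a workload module's submitTransaction() (sketch only)
await this.sutAdapter.sendRequests({
    operation: 'transfer',          // the operation to execute on the SUT
    args: ['alice', 'bob', '100'],  // the arguments of the operation
    invokerIdentity: 'user1',       // the identity submitting the request
    targetNodes: ['node-1'],        // the node(s) to receive the request
    readOnly: false                 // differentiate read-only from write requests
});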
                    "},{"location":"connectors/writing-connectors/#gathered-request-data","title":"Gathered request data","text":"

                    Your connector must report basic execution data towards Caliper to ensure correct reporting. But you are also free to collect any kind of client-side data you have access to. Who knows what data users will find useful. Make sure to document such collected data (both semantics and data types).

                    "},{"location":"connectors/writing-connectors/#network-configuration-file","title":"Network configuration file","text":"

Probably the most important piece of your documentation is the schema of the network configuration file your connector can process. Try to provide an intuitive structure for defining the network topology, participants and any required artifacts. You should document the semantics and data types of the different settings. Be sure to document any constraints that could arise between multiple attributes (mutual exclusion, valid values, etc.).

                    "},{"location":"connectors/writing-connectors/#example-network-configuration","title":"Example network configuration","text":"

Be sure to provide a fully specified and functioning network configuration example. For some, it is easier to absorb a concrete example than reference-style documentation.

                    "},{"location":"connectors/writing-connectors/#integration-with-caliper","title":"Integration with Caliper","text":"

                    Once you\u2019ve implemented your connector, you have two choices to integrate it with Caliper:

                    1. Use it as a 3rd party, pluggable component, which is part of your benchmark project.
                    2. Contribute your connector to the official Caliper code-base, so it\u2019s always installed together with Caliper.
                    "},{"location":"connectors/writing-connectors/#3rd-party-connector","title":"3rd party connector","text":"

                    You can easily plug in your connector dynamically without it being part of the Caliper code-base. The process is the following:

1. Create an index.js file in your project that will export your connector factory. The file provides a clean entry point for your connector:

'use strict';
module.exports.ConnectorFactory = require('./lib/connectorFactory').ConnectorFactory;

2. Set the ./fast-ledger/index.js path for the caliper.blockchain attribute in your network configuration file. The path should be relative to the Caliper workspace directory, or an absolute path (not recommended for portability reasons). Caliper will load the module and the factory method from this path.
3. If you support different bindings, then prepare a binding configuration file for your connector.
4. When you launch Caliper, your connector implementation will be picked up through your network configuration file.
5. You can specify your custom binding configuration using, for example, the --caliper-bind-file ./fast-ledger-binding.yaml command line argument that points to your custom file. Don't forget to also specify the binding itself with --caliper-bind-sut fast-ledger:1.0.

                    Alternatively, you can set your caliper.blockchain attribute to an NPM package name if you published your connector. In that case, you must ensure that the package is installed in the Caliper workspace directory prior to running the benchmark. The recommended naming convention for packages is caliper-sut. For our example, the caliper.blockchain attribute would be set to caliper-fast-ledger.

                    Note

                    Until Caliper reaches its first major version, it is recommended to version your connector package based on which @hyperledger/caliper-core version you depend on.

                    "},{"location":"connectors/writing-connectors/#built-in","title":"Built-in","text":"

                    Note

                    By contributing a connector to the code-base, you also accept the responsibility of maintaining the connector when needed. Otherwise, it might become stale and deprecated in future releases.

If you would like to expose your connector to a wider user base, then you should contribute your code to the official Caliper repository, so your connector becomes a built-in module, immediately available when someone installs Caliper.

                    Note

                    Don\u2019t hesitate to reach out to the project maintainers on Rocket.Chat (#caliper-contributors channel) who will help you with the integration.

                    The integration consists of the following steps (for an example, see the [caliper-ethereum](https://github.com/hyperledger-caliper/caliper/tree/v0.6.0/packages/caliper-ethereum) connector):

                    1. Create a caliper-fast-ledger directory in the packages directory of the repository. This will contain your connector implementation.
                    2. Update your metadata in your own package.json file accordingly. The package name should be scoped: @hyperledger/caliper-fast-ledger.
                    3. If your connector supports binding, then you should list the dynamic packages in the devDependencies section, so they\u2019re not automatically installed with Caliper (since the users will rebind it anyway). Also, add your connector\u2019s binding specifications to the built-in binding configuration file.
                    4. Add your new directory path to the root lerna.json file, under the packages section. This will ensure that your package is bootstrapped correctly for other developers (and for testing, publishing, etc.).
                    5. Add your new package (by name) to the Caliper CLI dependencies.
                    6. List your connector as a built-in connector in the caliper-utils.js module, under the BuiltinConnectors variable:
const BuiltinConnectors = new Map([
    ['fast-ledger', '@hyperledger/caliper-fast-ledger'],
    // other connectors...
]);
                    7. It is highly recommended to provide integration tests for your connector.
                    8. Make sure that every code-related artifact (mostly .js, .yaml and .md files) contains the appropriate license header!
                    9. And you\u2019re done! Now users can refer to the connector as fast-ledger in their network configuration files. The connector package will be published automatically upon every merged PR.
                    "},{"location":"connectors/writing-connectors/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"getting-started/architecture/","title":"Architecture","text":""},{"location":"getting-started/architecture/#overview","title":"Overview","text":"

                    Caliper is a general framework for executing benchmarks against different blockchain platforms. Caliper was designed with scalability and extensibility in mind to easily integrate with today\u2019s popular monitoring and infrastructure solutions. Accordingly, the architecture of Caliper can seem a bit complicated at first.

                    This page aims to gradually ease you into the intricacies of Caliper\u2019s architecture, taking one step at a time. By the end of this page, you should be familiar with the general concepts and API of Caliper. As you read along, you will find references to other, more technical documentation pages. Feel free to explore them once you are familiar with the basic building blocks of Caliper.

                    "},{"location":"getting-started/architecture/#birds-eye-view","title":"Bird\u2019s eye view","text":"

In its simplest form, Caliper is a service that generates a workload against a specific system under test (SUT) and continuously monitors its responses. Finally, Caliper generates a report based on the observed SUT responses. This simplistic view is depicted in the following figure.

                    Caliper requires several inputs to run a benchmark, independently of the used SUT. The following subsections give a brief overview of these inputs.

                    "},{"location":"getting-started/architecture/#benchmark-configuration-file","title":"Benchmark configuration file","text":"

                    The benchmark configuration file describes how the benchmark should be executed. It tells Caliper how many rounds it should execute, at what rate the TXs should be submitted, and which module will generate the TX content. It also includes settings about monitoring the SUT.

                    You can consider this file as the \u201cflow orchestrator\u201d of the benchmark. For the most part, the settings are independent of the SUT, so you can easily reuse them when performing multiple benchmarks against different SUT types or versions.

                    Note

                    For a more technical introduction to the benchmark configuration file, see the corresponding page.

                    "},{"location":"getting-started/architecture/#network-configuration-file","title":"Network configuration file","text":"

                    The content of the network configuration file is SUT-specific. The file usually describes the topology of the SUT, where its nodes are (their endpoint addresses), what identities/clients are present in the network, and what smart contracts Caliper should deploy or interact with.

                    For the exact structure of the network configuration files, refer to the corresponding SUT connector documentations (we will discuss connectors a bit later on this page):

                    • Hyperledger Besu & Ethereum
                    • Hyperledger Fabric
                    "},{"location":"getting-started/architecture/#workload-modules","title":"Workload modules","text":"

                    Workload modules are the brain of a benchmark. Since Caliper is a general benchmark framework, it does not include any concrete benchmark implementation. When Caliper schedules TXs for a given round, it is the task of the round\u2019s workload module to generate the content of the TXs and submit it. Each round can have a different associated workload module, so separating your workload implementation based on phases/behavior should be easy.

                    Workload modules are simply Node.JS modules that must export a given factory function. Other than that, the workload module logic can be arbitrary. Really, anything you can code in Node.JS.

                    Note

For a more technical introduction to workload modules, see the corresponding page.

                    "},{"location":"getting-started/architecture/#benchmark-artifacts","title":"Benchmark artifacts","text":"

There might be additional artifacts necessary to run a benchmark that can vary between different benchmarks and runs. These usually include the following:

                    • Crypto materials necessary to interact with the SUT.
                    • Smart contract source code for Caliper to deploy (if the SUT connector supports such operation).
                    • Runtime configuration files.
                    • Pre-installed third party packages for your workload modules.

                    Refer to the SUT connector configuration pages for the additional necessary artifacts.

                    Note

                    From here on out, we will refer to the introduced Caliper inputs simply as benchmark artifacts and denote them with the database symbol seen in the first figure.

                    "},{"location":"getting-started/architecture/#multi-platform-support","title":"Multi-platform support","text":"

                    Before we further dive into the architecture of Caliper, let\u2019s see how Caliper can support multiple SUT types. Caliper uses connector modules to hide the peculiarities of different SUT types and provide a unified interface towards the Caliper (and external) modules.

                    A SUT connector provides a simplified interface towards internal Caliper modules, as well as towards the workload modules. Accordingly, Caliper can request the execution of simple things, like \u201cinitialize the connector/SUT\u201d, and the connector implementation will take care of the rest. The exact tasks to perform during the initialization are often determined by the content of the network configuration file (and by the remote administrative actions the SUT supports).

                    Note

                    For the technical details of how to implement a connector, refer to the corresponding page.

                    "},{"location":"getting-started/architecture/#caliper-processes","title":"Caliper processes","text":"

Caliper considers scalability one of its most important goals (besides extensibility/flexibility). Workload generation from a single machine can quickly reach the resource limitations of the machine. If we want the workload rate to match the scalability and performance characteristics of the evaluated SUT, then we need a distributed approach!

Accordingly, Caliper (as a framework) comprises two different services/processes: a manager process and numerous worker processes.

                    • The manager process initializes the SUT (if supported) and coordinates the run of the benchmark (i.e., schedules the configured rounds) and handles the performance report generation based on the observed TX statistics.
                    • The worker processes perform the actual workload generation, independently of each other. Even if a worker process reaches the limits of its host machine, using more worker processes (on multiple machines) can further increase the workload rate of Caliper. Thus worker processes are the backbone of Caliper\u2019s scalability.

                    The described setup is illustrated in the next figure.

                    Note

                    For the time being, we will ignore the technical details of the distributed architecture, like the messaging between the processes. We will come back to it in a later section.

                    "},{"location":"getting-started/architecture/#the-manager-process","title":"The manager process","text":"

                    The Caliper manager process is the orchestrator of the entire benchmark run. It goes through several predefined stages as depicted by the figure below.

                    1. In the first stage, Caliper executes the startup script (if present) from the network configuration file. This step is mainly useful for local Caliper and SUT deployments as it provides a convenient way to start the network and Caliper in one step.

                    Note

                    The deployment of the SUT is not the responsibility of Caliper. Technically, Caliper only connects to an already running SUT, even if it was started through the startup script.

2. In the second stage, Caliper initializes the SUT. The tasks performed here are highly dependent on the capabilities of the SUT and the SUT connector. For example, the Hyperledger Fabric connector uses this stage to create/join channels and register/enroll new users.
3. In the third stage, Caliper deploys the smart contracts to the SUT, if the SUT and the connector support such an operation (like the Hyperledger Fabric connector does).
4. In the fourth stage, Caliper schedules and executes the configured rounds through the worker processes. This is the stage where the workload generation happens (through the workers!).
5. In the last stage, after executing the rounds and generating the report, Caliper executes the cleanup script (if present) from the network configuration file. This step is mainly useful for local Caliper and SUT deployments as it provides a convenient way to tear down the network and any temporary artifacts.

If your SUT is already deployed and initialized, then you only need Caliper to execute the rounds and nothing else. Luckily, you can configure every stage one-by-one as to whether it should be executed or not. See the flow control settings for details.

                    The above figure only shows the high-level steps of executing a benchmark. Some components are omitted for the sake of simplicity, like the resource and transaction monitor components. To learn more about the purpose and configuration of these components, refer to the Resource and Transaction Monitors documentation page.

                    "},{"location":"getting-started/architecture/#the-worker-process","title":"The worker process","text":"

                    The interesting things (from a user perspective) happen inside the worker processes. A worker process starts its noteworthy tasks when the manager process sends a message to it about executing the next round (the 4th step in the previous section). The important components of a worker process are shown in the figure below.

                    The worker process spends most of its time in the workload generation loop. The loop consists of two important steps:

                    1. Waiting for the rate controller to enable the next TX. Think of the rate controller as a delay circuit. Based on what kind of rate controller is used, it delays/halts the execution of the worker (in an asynchronous manner) before enabling the next TX. For example, if a fixed 50 TXs per second (TPS) rate is configured, the rate controller will halt for 20ms between each TX.

                    Note

                    The rate controllers of each round can be configured in the benchmark configuration file. For the available rate controllers, see the Rate Controllers page.

2. Once the rate controller enables the next TX, the worker gives control to the workload module. The workload module assembles the parameters of the TX (specific to the SUT and smart contract API) and calls the simple API of the SUT connector that will, in turn, send the TX request to the SUT (probably using the SDK of the SUT).

                    Note

                    The workload modules of each round can be configured in the benchmark configuration file. For the technical details of workload modules, see the Workload Modules page.

                    During the workload loop, the worker process sends progress updates to the manager process. Progress reporting on the manager side can be enabled and configured with the caliper-progress-reporting-enabled and caliper-progress-reporting-interval setting keys. For details, see the Basic Runtime Settings.

                    "},{"location":"getting-started/architecture/#process-distribution-models","title":"Process distribution models","text":"

                    The last part of the architecture discussion is demystifying the worker process management. Based on how worker processes are started and what messaging method is used between the manager and worker processes, we can distinguish the following distribution/deployment models:

                    1. Automatically spawned worker processes on the same host, using interprocess communication (IPC) with the manager process.
                    2. Automatically spawned worker processes on the same host, using a remote messaging mechanism with the manager process.
                    3. Manually started worker processes on an arbitrary number of hosts, using a remote messaging mechanism with the manager process.

                    Even though the third method is the way to go for more complex scenarios, the first two methods can help you get familiar with Caliper, and gradually aid you with the transition to the third method.

                    "},{"location":"getting-started/architecture/#modular-message-transport","title":"Modular message transport","text":"

                    The different deployment approaches are made possible by how Caliper handles messaging internally, as shown by the following figure.

                    The internal Caliper modules only deal with predefined messages whose content is independent of how the messages are sent. The module that sends the messages between the processes is swappable, thus enabling different communication methods.

                    The deployment model is configurable with the following two setting keys:

                    • caliper-worker-remote: if set to false (the default), then the manager process will spawn the required number of worker processes locally, resulting in the models 1 or 2.
                    • caliper-worker-communication-method: can take the values process (the default) or mqtt and determines the message transport implementation to use. The process communication corresponds to the first model, while mqtt denotes models 2 and 3.

                    The following table summarizes the different models and how to select them:

remote value | method value | Corresponding deployment model
-------------|--------------|------------------------------------------------------------
false        | process      | 1. Interprocess communication with local workers
false        | mqtt         | 2. Remote messaging-based communication with local workers
true         | mqtt         | 3. Remote messaging-based communication with remote workers
true         | process      | Invalid, since IPC does not apply to remote communication

                    Note

For the technical details on configuring the messaging transport, see the Messengers page.

                    "},{"location":"getting-started/architecture/#interprocess-communication","title":"Interprocess communication","text":"

                    The examples on the Install & Usage page all use the IPC approach since it is the default behavior. The setup is illustrated in the figure below.

The caliper launch manager CLI command starts the manager process, which in turn will automatically spawn the configured number of worker processes (using the caliper launch worker CLI command). The communication between the processes is IPC, utilizing the built-in Node.JS method available for parent-child process relationships.

                    This is the simplest deployment model for Caliper, requiring no additional configuration and third party messaging components. Accordingly, it is ideal when you first start using Caliper, or when you are still assembling the benchmark artifacts for your project, and just quickly want to test them.

Unfortunately, this model is constrained to a single host, and thus suffers from scalability issues in the sense that only vertical scaling of the host is possible.

                    "},{"location":"getting-started/architecture/#local-message-broker-communication","title":"Local message broker communication","text":"

                    As a stepping stone towards the fully-distributed setup, the second deployment model replaces IPC with a third party messaging solution, while still hiding the worker process management from the user. The setup is illustrated in the figure below.

                    Like before, the caliper launch manager CLI command starts the manager process, which in turn will automatically spawn the configured number of worker processes (using the caliper launch worker CLI command). However, the messaging happens through a separate component, which could be deployed anywhere as long as its endpoint is reachable by the Caliper processes.

Unfortunately, this model is also constrained to a single host from the perspective of the Caliper processes. However, it is a useful model for taking your deployment to the next level once your benchmark artifacts are in place. Once you have successfully integrated the messaging component, you are ready to move to the fully distributed Caliper setup.

                    "},{"location":"getting-started/architecture/#distributed-message-broker-communication","title":"Distributed message broker communication","text":"

When you take the management of the worker processes into your own hands, that's when the full potential of Caliper is unlocked. At this point, you can start as many workers on as many hosts as you would like, using the caliper launch worker CLI command. The setup is illustrated in the figure below.
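
As a sketch, each worker could be started manually like this (assuming the MQTT transport; run once per worker process on every chosen host, where remote-host and the artifact paths are placeholders):

user@remote-host:~/caliper-benchmarks$ npx caliper launch worker \\\n    --caliper-worker-remote true \\\n    --caliper-worker-communication-method mqtt \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n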

                    The fully distributed deployment enables the horizontal scaling of the worker processes, greatly increasing the achievable workload rate. To ease the management of the many Caliper processes, you will probably utilize some automatic deployment/management solution, like Docker Swarm or Kubernetes. Luckily, the flexibility of the Caliper Docker image makes such integration painless.

                    However, there are some caveats you have to keep in mind:

1. Distributing the necessary benchmark artifacts to the Caliper processes is your responsibility. Different infrastructure solutions provide different means for this, so check your favorite vendor's documentation.
                    2. Setting up proper networking in distributed systems is always a challenge. Make sure that the Caliper processes can access the configured messaging component and the SUT components.
3. A single host may run multiple Caliper worker processes. When planning the worker distribution (or setting resource requirements for container management solutions), make sure that enough resources are allocated for workers to maintain the configured TX scheduling precision.
                    "},{"location":"getting-started/architecture/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"getting-started/contributing/","title":"Contributing to Hyperledger Caliper","text":"

Welcome to the Hyperledger Caliper project; we are excited about the prospect of you contributing.

This guideline aims to make contributions to Caliper easier by:

                    • presenting a simple development workflow for contributors to follow;
                    • and providing a high-level description of the repository components.

                    If you have further suggestions about improving the guideline, then you can follow the presented workflow to make your contribution.

                    "},{"location":"getting-started/contributing/#overview","title":"Overview","text":"

                    The project uses GitHub to manage issues (bug reports and feature requests) and contributions (in the form of pull requests).

                    For general queries and discussion, please use the #caliper channel on the Hyperledger Discord Server (Discord Id required) or the Caliper mailing list (LFID recommended).

                    The contribution process boils down to two major steps: opening an issue and submitting a pull request (PR). Opening issues before PRs serves the following purposes:

                    • Documenting bugs, related error logs, potential fixes, and/or workarounds that users can find using their favorite search engine.
                    • Providing a forum for discussions where both contributors and users can weigh in about new features and their potential design.
• Enabling easy traceability of contributions through the \"Commit → PR → Issue\" reference chain.

                    Opening issues can be omitted only in the case of trivial and small fixes (e.g., typo fixes, simple documentation enhancements, etc.).

                    "},{"location":"getting-started/contributing/#opening-issues","title":"Opening Issues","text":"

                    Before opening an issue, make sure that:

                    1. You read the documentation carefully, so the observed error does not stem from incorrect Caliper configuration or usage.
2. You searched older issues (or other forums) for your question; maybe it has already been answered/fixed.
3. It is worth asking around on Discord; maybe other users have already encountered your issue/task and managed to solve it.

                    When opening an issue about a potential bug or feature request, make sure to properly fill and format the issue contents!

                    Choose the issue template that suits your intent (bug report or feature request), then fill out the form as best as you can.

                    If you find the available issue templates too constraining, then you can still use the \"blank\" issue template for now (it will be deprecated in the future), and also let us know how we can improve the issue templates.

                    The details of the blank template should be filled according to the following guideline:

                    1. Issue title: Should be a concise sentence summarising the details below, including which component or part of the benchmarking process is affected. For example: Fabric contract deployment silently fails in generator CI test
2. Context: A detailed description of the context of the issue. It should include information about, for example, how you encountered the issue, what you were trying to achieve, why you consider this a bug, and how it affected your work with Caliper.
                    3. Expected Behavior: What was your expected outcome/behavior of the Caliper run?
4. Actual Behavior: What was the actual observed outcome/behaviour instead of the expected one?
5. Possible Fix: If you have already identified the source of the issue, you can also propose a possible fix for it. It does not necessarily have to be working code; some general steps/ideas for a fix are also appreciated.
6. Steps to Reproduce: This is perhaps the most important part of an issue. Since Caliper is highly configurable and can interact with multiple SUTs, it is crucial for the maintainers to know the exact steps to reproduce an issue. Always try to provide (or just describe) a minimal working example (MWE) that can reproduce the issue. Also, please attach the following information to the issues whenever possible (preferably using collapsible code blocks or GitHub gists):
                      • The benchmark configuration file content.
                      • The network configuration file content.
                      • The workload module implementation.
                      • The exact error logs (and their surroundings).
• Any other information you deem necessary.
                    7. Existing Issues: You can also provide links to similar (or the same) issues in other forums (GitHub issues, StackOverflow questions or Discord messages). Such cross-linking can help us ensure that we can broadcast the potential fixes to multiple instances of the question.
8. Your Environment: Always include your execution environment information, including the used version/commit of Caliper, the operating system, the Node.js version, or any relevant information about your project that uses Caliper.
                    "},{"location":"getting-started/contributing/#submitting-pull-requests","title":"Submitting Pull Requests","text":""},{"location":"getting-started/contributing/#basic-workflow","title":"Basic workflow","text":"

The following workflow should make your contribution process clean and straightforward (some deviations might be necessary in exceptional cases):

                    The following list assumes that you use the git command line tool. IDEs and graphical git tools should also expose the same commands if you prefer those.

                    1. Fork the Caliper repository. This needs to be done only once.
                    2. Clone the fork repository to your local machine (ideally done once):
                      • Using the HTTPS method: git clone https://github.com/<username>/caliper.git
                      • or using the SSH method: git clone git@github.com:<username>/caliper.git
                    3. Add the upstream/original Caliper repository as a remote, using the name upstream (ideally done once). This will allow you to easily sync your fork with the original repository.
                      • git remote add upstream https://github.com/hyperledger-caliper/caliper.git

The following steps follow the \"feature branch\" development practice, and should be performed for each of your contributions:

                    1. Checkout your main branch that will be the starting point/state of your contribution.
                      • git checkout main
                      • or git checkout -b main if the branch does not exist locally (only during your first checkout)
                    2. Retrieve the new updates from the upstream Caliper repository
                      • git fetch upstream (downloads the updates locally)
• then git rebase upstream/main (\"adds\" the updates to your local main branch)

                        This step is crucial to ensure that you always contribute based on the latest Caliper version, minimizing the chance of version control conflicts!

                    3. (Optional) Push your updated main branch to your remote repository just to keep things in sync
                      • git push
                    4. Create a new feature branch (named my-bug-fix) from the updated main branch:
                      • git checkout -b my-bug-fix
                      • then git push --set-upstream origin my-bug-fix to push the new branch to your remote repository
                    5. Implement and test your contribution
                      • The .build/checks-and-unit-tests.sh script runs the basic tests for the repository
                      • The BENCHMARK=<test_name> .build/benchmark-integration-test-direct.sh script can run one of the CI integration tests, depending on the value of <test_name>:
                        • BESU
                        • ETHEREUM
                        • FABRIC
                        • GENERATOR

                          Make sure that the tests pass locally before pushing your changes and opening a PR!

                    6. Commit your changes to your local feature branch, adding the DCO sign-off:
                      • git commit -s -m \"Commit message\"
                      • or git commit -s (without the -m parameter) to make git open up the configured text editor where you can write a detailed, multi-line commit message:
• The first line is a concise description of the commit (its purpose, what it fixes, etc.).
                        • After a blank line, you can go into details about the changes of the commit.
                    7. Push your changes to your remote branch:
                      • git push
                    8. Open a PR using the GitHub webpage.
                    "},{"location":"getting-started/contributing/#updating-prs","title":"Updating PRs","text":"

                    It is possible that the maintainers/reviewers request some changes before your PR can be merged. In that case, just add your changes to your feature branch using a new signed commit (based on the above workflow), then push it to your remote branch. The PR will automatically pick up the new commit.

                    The maintainers can/will squash commits before merging to keep a simpler commit history for the project.

                    "},{"location":"getting-started/contributing/#how-to-contribute","title":"How to contribute","text":"

                    We are using GitHub issues for bug reports and feature requests.

If you find any bug in the source code or have any trivial changes (such as a typo fix or a minor feature), you can raise an issue or deliver a fix via a pull request directly.

If you have any enhancement suggestions, want to help extend Caliper with more DLTs, or have any other major changes, please start by opening an issue first. That way, relevant parties (e.g., maintainers or main contributors of the relevant subsystem) can have a chance to look at it before you do any work.

All PRs must get at least one review; you can ask hyperledger/caliper-committers for a review. Normally we will review your contribution within one week. If you haven't heard from anyone in one week, feel free to @ or mail a maintainer to review it.

All PRs must be signed off before being merged; be sure to use git commit -s to commit your changes.

If a PR is reviewed and changes are requested, please do not force-push the changes; push them as a new commit instead. This makes it easier to see the difference between the previously reviewed code and the new changes.

We use GitHub Actions to test the build; please test on your local branch before raising a PR.

There is also a Caliper channel on Discord for communication; anybody is welcome to join.

                    "},{"location":"getting-started/contributing/#caliper-components","title":"Caliper Components","text":"

The project is maintained as a Node.js monorepository. As such, it can seem overwhelming at first to navigate its content. The following sections list and introduce the main components of the repository.

                    "},{"location":"getting-started/contributing/#installing-the-caliper-code-base","title":"Installing the Caliper code base","text":"

                    Note

                    this section is intended only for developers who would like to modify the Caliper code-base and experiment with the changes locally before raising pull requests. You should perform the following steps every time you make a modification you want to test, to correctly propagate any changes.

                    The workflow of modifying the Caliper code-base usually consists of the following steps:

                    1. Bootstrapping the repository
                    2. Modifying and testing the code
                    3. Publishing package changes locally
                    4. Building the Docker image
                    "},{"location":"getting-started/contributing/#bootstrapping-the-caliper-repository","title":"Bootstrapping the Caliper repository","text":"

                    To install the basic dependencies of the repository, and to resolve the cross-references between the different packages in the repository, you must execute the following commands from the root of the repository directory:

                    1. ./.build/check-prerequisites.sh: Checks the version of Node and NPM in the system and warns if the versions are not compatible.

                    2. npm ci: Installs the dependencies of the repository from the package-lock.json file.

                    Or as a one-liner:

                    user@ubuntu:~/caliper$ ./.build/check-prerequisites.sh && npm ci\n

                    Note

                    do not run any of the above commands with sudo, as it will cause the bootstrap process to fail.

                    "},{"location":"getting-started/contributing/#publicpublished-packages","title":"Public/published packages","text":"

                    The packages directory contains the following public/published packages:

                    • caliper-cli: The command line interface (CLI) of Caliper.
                    • caliper-core: The core and common codebase of Caliper, used by the other packages.
                    • caliper-ethereum: The Ethereum and Hyperledger Besu connector implementation.
                    • caliper-fabric: The Hyperledger Fabric connector implementation.
• generator-caliper: The Yeoman generator for Caliper configuration files.
                    "},{"location":"getting-started/contributing/#internal-packages","title":"Internal packages","text":"

                    The packages directory contains the following internal packages:

                    • caliper-publish: Utility CLI for publishing Caliper to NPM and DockerHub.
                    • caliper-tests-integration: Collection of CI integration tests.
                    "},{"location":"getting-started/contributing/#testing-methodologies","title":"Testing Methodologies","text":"

                    This section outlines the testing methodologies that this project follows, including both unit-level and integration-level testing.

                    "},{"location":"getting-started/contributing/#unit-level-testing","title":"Unit Level Testing","text":"

                    Unit testing focuses on testing individual packages in isolation. The tests are typically located within a test folder at the same level as the corresponding lib folder, mirroring the structure of the lib folder for easy navigation. This setup ensures that each component of the code is verified independently, confirming that it behaves as expected without interference from other parts of the system.

                    "},{"location":"getting-started/contributing/#unit-testing-dependencies-and-their-use","title":"Unit Testing Dependencies and Their Use","text":"

                    The following are the recommended testing modules for this project. While these tools are commonly used in the current codebase, other appropriate tools may be used as long as they facilitate effective testing.

                    • mockery: Mockery is a simple module for mocking Node.js modules during testing. It allows you to replace real modules with mocks or stubs.

                    • mocha: Mocha is a feature-rich JavaScript test framework that runs on Node.js and in the browser. It facilitates asynchronous testing, making it easy to write simple and flexible tests.

                    • chai: Chai is a BDD/TDD assertion library for Node.js and the browser. It can be paired with any JavaScript testing framework. We use it to create readable and expressive assertions.

• sinon: Sinon provides standalone test spies, stubs, and mocks for JavaScript. It works with any test framework and integrates well with Mocha and Chai. We utilize Sinon for checking how functions are called during testing.

                    • sinon-chai: This library provides a set of custom assertions for using Sinon with Chai. It allows you to write more readable assertions for Sinon spies, stubs, and mocks.

                    • nyc: NYC is a command-line utility for generating code coverage reports. It is often used with Mocha to ensure that tests cover as much code as possible.

                    "},{"location":"getting-started/contributing/#mandatory-tools-in-the-testing-pipeline","title":"Mandatory Tools in the Testing Pipeline","text":"

                    In addition to the testing frameworks, the following tools are mandatory for all testing pipelines:

                    • eslint: ESLint is a static code analysis tool for identifying problematic patterns in JavaScript code. It is essential for maintaining code quality.
                    • license-check-and-add: This tool ensures that all files in the codebase contain the required license headers. It is mandatory for all code submissions.
                    "},{"location":"getting-started/contributing/#points-to-note-for-adding-a-conforming-unit-test","title":"Points to Note for Adding a Conforming Unit Test","text":"

                    When writing unit tests, the following structure and practices are mandatory:

                    1. License Header: All test files must include the project's license header.
                    2. 'use strict' Directive: Ensure strict mode is enabled in all test files.
                    3. Test Organization:
                      • Use describe blocks to group related test cases.
                      • Use it statements for individual test cases.
                      • Nested describe blocks are encouraged for organizing complex test scenarios.
                    4. Consistent Test Naming: Test descriptions should flow naturally, making it clear what behavior is being tested (e.g., 'should return the correct value when input is valid').
                    5. Mocking Guidance: Be cautious with mocks that persist across tests. Always clean up after each test to avoid unexpected behavior.
                    6. Test Patterns: Refer to the Fabric Unit tests for examples of recommended patterns and best practices.
                    7. Final Checks: Always run all unit tests before submitting a PR and ensure no .only is left in the code, which would skip other tests.
                    "},{"location":"getting-started/contributing/#integration-level-testing","title":"Integration Level Testing","text":"

                    Integration testing ensures that Caliper integrates correctly with various packages, effectively testing the functionality of the package itself. These tests are organized within the caliper-tests-integration folder, with each test suite dedicated to a specific package or module.

                    "},{"location":"getting-started/contributing/#testing-the-code","title":"Testing the code","text":"

                    Caliper has both unit tests and integration tests.

Unit tests can be run using npm test either in the root of the caliper source tree (to run them all) or within the specific package (e.g., caliper-fabric) to run just the tests within that package.
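
For example (two illustrative invocations; the second runs only the Fabric connector tests):

user@ubuntu:~/caliper$ npm test\nuser@ubuntu:~/caliper/packages/caliper-fabric$ npm test\n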

                    To run the integration tests for a specific SUT, use the following script from the root directory of the repository, setting the BENCHMARK environment variable to the platform name:

                    user@ubuntu:~/caliper$ BENCHMARK=fabric ./.build/benchmark-integration-test-direct.sh\n

                    The following platform tests (i.e., valid BENCHMARK values) are available: besu, ethereum, fabric.

                    A PR must pass all unit and integration tests.

                    If you would like to run other examples, then you can directly access the CLI in the packages/caliper-cli directory, without publishing anything locally.

                    user@ubuntu:~/caliper$ node ./packages/caliper-cli/caliper.js launch manager \\\n    --caliper-workspace ~/caliper-benchmarks \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n
                    "},{"location":"getting-started/contributing/#creating-new-sut-connectors","title":"Creating New SUT Connectors","text":"

                    Connectors are relatively heavy components in Caliper. Before you attempt to create a new connector for a new SUT type, consult with the Caliper maintainers (on Discord, or in the form of a feature request).

More importantly, make sure that you are thoroughly familiar with the documentation page about implementing new connectors.

                    "},{"location":"getting-started/contributing/#publishing-to-local-npm-repository","title":"Publishing to local NPM repository","text":"

                    The NPM publishing and installing steps for the modified code-base can be tested through a local NPM proxy server, Verdaccio. The steps to perform are the following:

                    1. Start a local Verdaccio server to publish to
                    2. Publish the packages from the Caliper repository to the Verdaccio server
                    3. Install and bind the CLI from the Verdaccio server
                    4. Run the integration tests or any sample benchmark

The packages/caliper-publish directory contains an internal CLI for easily managing the steps above, so the commands in the following sections must be executed from the packages/caliper-publish directory:

                    user@ubuntu:~/caliper$ cd ./packages/caliper-publish\n

                    Note

                    use the --help flag for the following CLI commands and sub-commands to find out more details.

                    "},{"location":"getting-started/contributing/#starting-verdaccio","title":"Starting Verdaccio","text":"

To set up and start a local Verdaccio server, run the following npm command:

user@ubuntu:~/caliper/packages/caliper-publish$ npm run start_verdaccio\n...\n[PM2] Spawning PM2 daemon with pm2_home=.pm2\n[PM2] PM2 Successfully daemonized\n[PM2] Starting /home/user/projects/caliper/packages/caliper-publish/node_modules/.bin/verdaccio in fork_mode (1 instance)\n[PM2] Done.\n| App name  | id | mode | pid    | status | restart | uptime | cpu | mem       | user   | watching |\n|-----------|----|------|--------|--------|---------|--------|-----|-----------|--------|----------|\n| verdaccio | 0  | fork | 115203 | online | 0       | 0s     | 3%  | 25.8 MB   | user   | disabled |\n\nUse `pm2 show <id|name>` to get more details about an app\n

                    The Verdaccio server is now listening on the following address: http://localhost:4873

                    "},{"location":"getting-started/contributing/#publishing-the-packages","title":"Publishing the packages","text":"

                    Once Verdaccio is running, you can run the following command to publish every Caliper package locally:

                    user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js npm --registry \"http://localhost:4873\"\n...\n+ @hyperledger/caliper-core@0.6.1-unstable-20240422122901\n[PUBLISH] Published package @hyperledger/caliper-core@0.6.1-unstable-20240422122901\n...\n+ @hyperledger/caliper-fabric@0.6.1-unstable-20240422122901\n[PUBLISH] Published package @hyperledger/caliper-fabric@0.6.1-unstable-20240422122901\n...\n+ @hyperledger/caliper-cli@0.6.1-unstable-20240422122901\n[PUBLISH] Published package @hyperledger/caliper-cli@0.6.1-unstable-20240422122901\n

Take note of the dynamic version number you see in the logs; you will need it to install your modified Caliper version from Verdaccio (the unstable tag is also present on NPM, so Verdaccio would probably pull that version instead of your local one).

                    Since the published packages include a second-precision timestamp in their versions, you can republish any changes immediately without restarting the Verdaccio server and without worrying about conflicting packages.

                    "},{"location":"getting-started/contributing/#running-package-based-tests","title":"Running package-based tests","text":"

                    Once the packages are published to the local Verdaccio server, we can use the usual NPM install approach. The only difference is that now we specify the local Verdaccio registry as the install source instead of the default, public NPM registry:

                    user@ubuntu:~/caliper-benchmarks$ npm install --registry=http://localhost:4873 --only=prod \\\n    @hyperledger/caliper-cli@0.6.1-unstable-20240422122901\nuser@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:fabric-gateway\nuser@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                    Note

                    we used the local registry only for the Caliper packages. The binding happens through the public NPM registry. Additionally, we performed the commands through npx and the newly installed CLI binary (i.e., not directly calling the CLI code file).

                    "},{"location":"getting-started/contributing/#building-the-docker-image","title":"Building the Docker image","text":"

                    Once the modified packages are published to the local Verdaccio server, you can rebuild the Docker image. The Dockerfile is located in the packages/caliper-publish directory.

                    To rebuild the Docker image, execute the following:

                    user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js docker\n...\nSuccessfully tagged hyperledger/caliper:manager-unstable-20220206065953\n[BUILD] Built Docker image \"hyperledger/caliper:manager-unstable-20220206065953\"\n

                    Now you can proceed with the Docker-based benchmarking as described in the previous sections.

                    Note

                    once you are done with the locally published packages, you can clean them up the following way:

                    user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js verdaccio stop\n
                    "},{"location":"getting-started/contributing/#caliper-structure","title":"Caliper Structure","text":"

                    Caliper is modularised under packages into the following components:

caliper-cli This is the Caliper CLI that enables the running of a benchmark.

                    caliper-core Contains all the Caliper core code.

caliper-<adapter> Each caliper-<adapter> is a separate package that contains a distinct adaptor implementation to interact with different blockchain technologies. Current adaptors include:

                    • caliper-ethereum
                    • caliper-fabric

Each adapter extends the ConnectorBase from the core package and exports a ConnectorFactory function.

caliper-tests-integration This is the integration test suite used for Caliper; it runs in the GitHub Actions build and can (should) be run locally when checking code changes. Please see the readme within the package for more details.

                    "},{"location":"getting-started/contributing/#dependency-management-in-the-monorepo","title":"Dependency management in the monorepo","text":""},{"location":"getting-started/contributing/#npm-version","title":"npm version","text":"

                    For developing Caliper, the minimum required version of npm is 7.24.2. This is because npm 7 introduced the npm workspaces feature, which is used to manage the Caliper monorepo.

                    In case the npm version is not sufficient, the following error may be thrown when trying to install the dependencies:

                    user@ubuntu:~/caliper$ npm install\nnpm ERR! code ENOTSUP\nnpm ERR! notsup Unsupported engine for caliper@0.6.1-unstable: wanted: {\"node\":\">=18.19.0\",\"npm\":\">=7.24.2\"} (current: {\"node\":\"14.21.3\",\"npm\":\"6.14.18\"})\nnpm ERR! notsup Not compatible with your version of node/npm: caliper@0.6.1-unstable\nnpm ERR! notsup Not compatible with your version of node/npm: caliper@0.6.1-unstable\nnpm ERR! notsup Required: {\"node\":\">=18.19.0\",\"npm\":\">=7.24.2\"}\nnpm ERR! notsup Actual:   {\"npm\":\"6.14.18\",\"node\":\"14.21.3\"}\n

If the npm version is lower than 7.24.2, you can update it to the latest version by running the following command:

                    npm install -g npm@latest\n
                    "},{"location":"getting-started/contributing/#adding-new-dependencies","title":"Adding new dependencies","text":"

                    The monorepo is managed using npm workspaces. So to add a dependency to a package, the following command should be executed:

                    user@ubuntu:~/caliper$ npm install <dependency> --workspace=<package>\n
                    "},{"location":"getting-started/contributing/#removing-unused-dependencies","title":"Removing unused dependencies","text":"

                    To remove a dependency from a package, the following command should be executed:

                    user@ubuntu:~/caliper$ npm uninstall <dependency> --workspace=<package>\n
                    "},{"location":"getting-started/contributing/#managing-the-package-lockjson-file","title":"Managing the package-lock.json file","text":"

                    The package-lock.json file is generated by npm and it is used to lock the dependency versions. It is generated automatically when installing dependencies, and it should be committed to the repository. Please make sure that the right version of npm is used when installing dependencies, otherwise the package-lock.json file may be updated unnecessarily.

                    "},{"location":"getting-started/contributing/#how-dependency-conflicts-are-resolved-in-workspaces","title":"How dependency conflicts are resolved in workspaces","text":"

                    When installing dependencies to a workspace, if the dependency is already installed as a direct or indirect dependency of another workspace, npm will try to reuse the existing version. If the version is not compatible with the one specified in the workspace, npm will try to resolve the conflict by installing the requested version of the dependency in the node_modules directory of the workspace.

                    "},{"location":"getting-started/contributing/#add-an-adaptor-for-a-new-dlt","title":"Add an Adaptor for a New DLT","text":"

New adapters must be added within a new package, under packages, with the naming convention caliper-<adapter_name>. Each adapter must implement a new class extended from ConnectorBase as the adapter for the DLT, as well as export a ConnectorFactory function. Please refer to the existing Connectors for examples and requirements for implementation.

                    "},{"location":"getting-started/contributing/#inclusive-language-guidelines","title":"Inclusive language guidelines","text":"

                    Please adhere to the inclusive language guidelines that the project has adopted as you make documentation updates.

                    • Consider that users who will read the docs are from different backgrounds and cultures and that they have different preferences.
• Avoid potentially offensive terms and, for instance, prefer “allow list and deny list” to “white list and black list”.
• We believe that we all have a role to play to improve our world, and even if writing inclusive documentation might not look like a huge improvement, it's a first step in the right direction.
• We suggest referring to the Microsoft bias-free writing guidelines and the Google inclusive doc writing guide as starting points.
                    "},{"location":"getting-started/contributing/#building-the-documentation","title":"Building the documentation","text":""},{"location":"getting-started/contributing/#requirements","title":"Requirements","text":"
                    • Python
                    • Pip (Python package installer)
                    "},{"location":"getting-started/contributing/#installation","title":"Installation","text":"
                    1. Clone the Repository:

                      git clone https://github.com/hyperledger-caliper/caliper.git\ncd caliper\n

                    2. Go to the documentation folder:

                      cd docs\n

                    3. Install Dependencies:

                      pip install -r pip-requirements.txt\n

                    4. Building the Documentation: To build the documentation, run:

                      mkdocs build\n

                      This command generates static files in the site directory.

                    5. Previewing Locally: To preview your documentation locally, use:

                      mkdocs serve\n

                      This starts a local development server at http://127.0.0.1:8000/ where you can view your changes in real-time.

                    "},{"location":"getting-started/contributing/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"getting-started/faq/","title":"Caliper FAQ","text":""},{"location":"getting-started/faq/#i-environment-platform-version","title":"I. Environment, Platform & Version","text":"

Q: How do I run Caliper to test a blockchain system? A: Details for setting up Caliper to run benchmark tests on a blockchain system are provided in the Getting Started page of the site. When you run Caliper to test a blockchain network, you may encounter some errors. If so, first check the versions of the tools, SDKs, and modules to make sure they are correct.

                    "},{"location":"getting-started/faq/#ii-configuration-files-of-caliper","title":"II. Configuration Files of Caliper","text":"

Q: What kind of configuration files are needed to run Caliper? A: There are two kinds of configuration files in Caliper: the benchmark configuration file, which defines the arguments of the benchmark, like workload and monitoring settings; and the blockchain configuration file, which specifies the information needed to interact with the backend blockchain system. For examples of these files, please refer to here.

There is another configuration file, namely ./config/default.yaml, containing runtime settings for Caliper and the blockchain adapters. These settings can also be specified as command line arguments or environment variables.
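
As a sketch, a setting like caliper-worker-communication-method can therefore be supplied either as a command line argument or as an environment variable (assuming the usual mapping of setting keys to CALIPER_-prefixed, upper-cased variable names):

user@ubuntu:~/caliper-benchmarks$ npx caliper launch manager --caliper-worker-communication-method mqtt [other options]\nuser@ubuntu:~/caliper-benchmarks$ export CALIPER_WORKER_COMMUNICATION_METHOD=mqtt\n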

                    "},{"location":"getting-started/faq/#iii-testing-a-blockchain-network","title":"III. Testing a Blockchain Network","text":"

                    Q: What kind of networks does Caliper support currently? A: Now you can use Caliper to test Besu, Ethereum and Fabric.

Q: How can I test a blockchain system that Caliper does not support currently? A: If you want to test a blockchain system that Caliper does not support yet, you must write your own blockchain adapter that Caliper can use to interface with the backend network. For details, you can refer to the Writing Adapters page. The Caliper-specific configurations remain unchanged. Take a look at the provided adapter implementations and example networks to gather some best practices.

                    "},{"location":"getting-started/faq/#iv-other-questions-related-to-caliper","title":"IV. Other Questions Related to Caliper","text":"

Q: How can I calculate the throughput (TPS)? A: Caliper will record the submitting time and committing time (the time when the Tx is committed on the ledger or when the failure occurred) for each Tx. The send rate and the throughput are then calculated as (Succ+Fail) / (last submitting time - first submitting time). For example, if 800 successful and 200 failed transactions were submitted over a 10-second window, this formula yields (800+200)/10 = 100 TPS. This means both successful and failed transactions are included in the throughput and send rate calculations.

                    "},{"location":"getting-started/faq/#v-other-questions-related-to-the-backend-blockchain-system","title":"V. Other Questions Related to the Backend Blockchain System","text":"

Q: How can I test my own Fabric chaincode? A: You first need to deploy that chaincode to your Fabric network, then create your benchmark and workload files; see the Caliper Fabric tutorial for further guidance.

                    Q: How can I use TLS communication? A: Fabric supports secure communication between nodes and clients using TLS. TLS communication can use both one-way (server only) and two-way (server and client) authentication. You can refer to the Fabric TLS configuration page for server side settings. For Caliper-side settings, check the adapter documentation that details how to set the necessary credentials.

                    "},{"location":"getting-started/faq/#q-how-can-i-monitor-remote-docker-containers","title":"Q: How can I monitor remote Docker containers?","text":"

                    A: If you need to access the Docker daemon remotely, you need to explicitly enable remote access. Beware that the default setup provides unencrypted and unauthenticated direct access to the Docker daemon. For details, refer to the official Docker documentation.
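
For instance, once remote access is enabled, you could verify that the remote daemon is reachable before configuring the monitor (a sketch; remote-host is a placeholder, and 2375 is the conventional unencrypted Docker port):

user@ubuntu:~$ docker -H tcp://remote-host:2375 stats --no-stream\n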

                    "},{"location":"getting-started/faq/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"getting-started/installing-caliper/","title":"Installing Caliper","text":""},{"location":"getting-started/installing-caliper/#overview","title":"Overview","text":"

                    Caliper is published as the @hyperledger/caliper-cli NPM package and the hyperledger/caliper Docker image, both containing the CLI binary. Refer to the Installing from NPM and Using the Docker image sections for the available versions and their intricacies.

                    Installing and running Caliper consists of the following steps, thoroughly detailed by the remaining sections:

                    1. Acquire the Caliper CLI either from NPM or from DockerHub.
                    2. Execute a bind command through the CLI. This step pulls the specified version of SDK packages for the selected platform.
                    3. Start the benchmark through the CLI or by starting the Docker container.

                    The examples in the rest of the documentation use the caliper-benchmarks repository as the Caliper workspace since it contains many sample artifacts for benchmarking.

                    Important

                    make sure you check out the appropriate tag/commit of the repository, matching the version of Caliper you use.

                    To clone the caliper-benchmarks repository, run:

                    git clone https://github.com/hyperledger-caliper/caliper-benchmarks.git\ncd caliper-benchmarks\ngit checkout <your Caliper version>\n

                    Note

                    If you are running your custom benchmark, then change this directory path (and other related configurations) accordingly in the examples.

                    "},{"location":"getting-started/installing-caliper/#the-caliper-cli","title":"The Caliper CLI","text":"

                    Unless you are embedding the Caliper packages in your own application, you will probably use Caliper through its command line interface (CLI). The other sections will introduce the different ways of acquiring and calling the Caliper CLI. This section simply focuses on the API it provides.

                    Note

                    The following examples assume a locally installed CLI in the ~/caliper-benchmarks directory, hence the npx call before the caliper binary. Refer to the Local NPM install section for the specifics.

                    The entry point of the CLI is the caliper binary. You can confirm whether the CLI is installed correctly by checking its version:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper --version\nv0.6.0\n

                    The CLI provides multiple commands to perform different tasks. To check the available commands and their descriptions, execute:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper --help\ncaliper <command>\n\nCommands:\n  caliper.js bind [options]       Bind Caliper to a specific SUT and its SDK version\n  caliper.js launch <subcommand>  Launch a Caliper process either in a manager or worker role.\n  caliper.js unbind [options]     Unbind Caliper from a previously bound SUT and its SDK version\n  caliper.js completion           generate completion script\n\nOptions:\n  --help, -h  Show usage information  [boolean]\n  --version   Show version information  [boolean]\n\nExamples:\n  caliper bind\n  caliper unbind\n  caliper launch manager\n  caliper launch worker\n
                    You can also request the help page of a specific command, as demonstrated by the next subsections.

                    Note

                    the command options can be set either through the command line, or from various other sources supported by the configuration mechanism of Caliper. This flexibility makes it easy to embed the CLI in different environments.

                    "},{"location":"getting-started/installing-caliper/#the-bind-command","title":"The bind command","text":"

                    Acquiring Caliper is as easy as installing a single NPM package, or pulling a single Docker image. However, this single point of install necessitates an additional step of telling Caliper which platform to target and which platform SDK version to use. This step is called binding, provided by the bind CLI command.

                    To have a look at the help page of the command, execute:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper bind --help\nUsage:\n  caliper bind --caliper-bind-sut fabric:fabric-gateway --caliper-bind-cwd ./ --caliper-bind-args=\"-g\"\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform and its SDK to bind to  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK install  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm install\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]\n
                    The binding step technically consists of an extra npm install call with the appropriate packages and install settings, fully managed by the CLI. The following parameters can be set for the command:

• SUT/platform name and SDK version: specifies the name of the target platform and the SDK version to install, e.g., fabric:2.2
                    • Working directory: the directory from which the npm install command must be performed. Defaults to the current working directory
                    • User arguments: additional arguments to pass to npm install, e.g., --save

                    The following SUT name and SDK version combinations are supported:

                    • besu: 1.3.2, 1.3, 1.4
                    • ethereum: 1.2.1, 1.3
                    • fabric: 1.4, 2.2, fabric-gateway
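
Putting the above parameters together, a typical binding could look like the following sketch (targeting the Fabric 2.2 SDK and saving the packages as development dependencies):

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:2.2 --caliper-bind-args=\"--save-dev\"\n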

                    Note

Ensure that the SDK you are binding is compatible with the SUT version that you intend to target.

                    The bind command is useful when you plan to run multiple benchmarks against the same SUT version. Bind once, then run different benchmarks without the need to bind again. As you will see in the next sections, the launcher commands for the manager and worker processes can also perform the binding step if the required parameter is present.

                    "},{"location":"getting-started/installing-caliper/#custom-bindings","title":"Custom bindings","text":"

                    The built-in bindings can be overridden by setting the caliper-bind-file parameter to a YAML file path. The file must match the structure of the default binding file, documented here. This way you can use experimental SDK versions that are not (yet) officially supported by Caliper. This also means that we cannot provide help for such SDK versions!
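
As a sketch, assuming a hypothetical ./my-bindings.yaml file that mirrors the structure of the default binding file, the override could be applied like this:

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:2.2 --caliper-bind-file ./my-bindings.yaml\n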

                    "},{"location":"getting-started/installing-caliper/#the-unbind-command","title":"The unbind command","text":"

                    It might happen that you would like to switch between different SUT SDK versions/bindings during your measurements or project development. Depending on the SUT SDK, simply rebinding to a different version might leave behind unwanted packages, resulting in obscure errors.

To avoid this, the CLI provides an unbind command that behaves exactly like the bind command (it even uses the same arguments), but instead of installing the packages present in the binding specification, it removes them, leaving no trace of the previous binding.

                    To have a look at the help page of the command, execute:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper unbind --help\nUsage:\n  caliper unbind --caliper-bind-sut fabric:2.2 --caliper-bind-cwd ./ --caliper-bind-args=\"-g\"\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform and its SDK to unbind  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK removal  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm remove\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when unbinding an SDK  [string]\n

                    Note

                    It is recommended to either bind/unbind globally (as done by the Caliper Docker image), or use the --caliper-bind-args=\"--save-dev\" argument when performing the binding/unbinding. This ensures that npm will correctly remove the packages.
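
Following the note above, a matching bind/unbind pair could look like this (a sketch using the recommended --save-dev argument):

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:2.2 --caliper-bind-args=\"--save-dev\"\nuser@ubuntu:~/caliper-benchmarks$ npx caliper unbind --caliper-bind-sut fabric:2.2 --caliper-bind-args=\"--save-dev\"\n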

                    "},{"location":"getting-started/installing-caliper/#the-launch-command","title":"The launch command","text":"

                    Caliper runs a benchmark by using worker processes to generate the workload, and by using a manager process to coordinate the different benchmark rounds among the worker processes. Accordingly, the CLI provides commands for launching both manager and worker processes.

                    To have a look at the help page of the command, execute:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper launch --help\ncaliper launch <subcommand>\n\nLaunch a Caliper process either in a manager or worker role.\n\nCommands:\n  caliper launch manager [options]  Launch a Caliper manager process to coordinate the benchmark run\n  caliper launch worker [options]  Launch a Caliper worker process to generate the benchmark workload\n\nOptions:\n  --help, -h  Show usage information  [boolean]\n  --version   Show version information  [boolean]\n
                    "},{"location":"getting-started/installing-caliper/#the-launch-manager-command","title":"The launch manager command","text":"

                    The Caliper manager process can be considered as the entry point of a distributed benchmark run. It coordinates (and optionally spawns) the worker processes throughout the benchmark run.

                    To have a look at the help page of the command, execute:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper launch manager --help\nUsage:\n caliper launch manager --caliper-bind-sut fabric:2.2 [other options]\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform to bind to  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK install  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm install\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]\n

                    As you can see, the launch manager command can also process the parameters of the bind command, just in case you would like to perform the binding and the benchmark run in one step.

                    However, the command requires the following parameters to be set:

                    • caliper-workspace: the directory serving as the root of your project. Every relative path in other configuration files or settings will be resolved from this directory. The workspace concept was introduced to make Caliper projects portable across different machines.
                    • caliper-benchconfig: the path of the file containing the configuration of the test rounds, as detailed in the Architecture page. Should be relative to the workspace path.
                    • caliper-networkconfig: the path of the file containing the network configuration/description for the selected SUT, detailed in the configuration pages of the respective adapters. Should be relative to the workspace path.
                    "},{"location":"getting-started/installing-caliper/#the-launch-worker-command","title":"The launch worker command","text":"

                    The Caliper worker processes are responsible for generating the workload during the benchmark run. Usually more than one worker process is running, coordinated by the single manager process.

                    To have a look at the help page of the command, execute:

                    user@ubuntu:~/caliper-benchmarks$ npx caliper launch worker --help\nUsage:\n caliper launch manager --caliper-bind-sut fabric:2.2 [other options]\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform to bind to  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK install  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm install\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]\n
As you can see, you can configure the worker processes the same way as the manager process, including the optional binding step and the three mandatory parameters mentioned in the previous section.

                    "},{"location":"getting-started/installing-caliper/#caliper-test-phase-control","title":"Caliper test phase control","text":"

Caliper commands are capable of accepting all runtime configuration settings. A subset of these settings is dedicated to flow control, providing direct control over the following Caliper phases:

                    • start
                    • init
                    • install
                    • test
                    • end

                    It is possible to skip, or perform only one of the above phases through use of the correct flag. For instance, it is common to have an existing network that may be targeted by Caliper through the provision of a --caliper-flow-only-test flag.
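
For example, skipping every phase except the test rounds against an already running network could look like this (a sketch; the workspace and configuration paths are placeholders):

user@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-flow-only-test \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n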

                    "},{"location":"getting-started/installing-caliper/#installing-from-npm","title":"Installing from NPM","text":"

                    Caliper is published as the @hyperledger/caliper-cli NPM package, providing a single point of install for every supported adapter.

                    "},{"location":"getting-started/installing-caliper/#versioning-semantics","title":"Versioning semantics","text":"

Before explaining the steps for installing Caliper, let's take a look at the Versions page of the CLI package. You will see a list of tags and versions. If you are new to NPM, think of versions as immutable pointers to a specific version (duh) of the source code, while tags are mutable pointers to a specific version. So tags can change where they point to. Easy, right?

                    But why is all this important to you? Because Caliper is still in its pre-release life-cycle (< v1.0.0), meaning that even minor version bumps are allowed to introduce breaking changes. And if you use Caliper in your project, you might run into some surprises depending on how you install Caliper from time to time.

                    Note

Until Caliper reaches v1.0.0, always use explicit version numbers when installing from NPM. So let's forget about the latest tag; as of now it is just a mandatory hindrance of NPM. We deliberately do not provide such tags for the Docker images.

                    Let\u2019s see the three types of version numbers you will encounter:

                    • 0.6.0: Version numbers of this form denote releases deemed stable by the maintainers. Such versions have a corresponding GitHub tag, both in the caliper and caliper-benchmarks repositories. Moreover, the latest stable version is documented by the matching version of the documentation page. So make sure to align the different versions if you run into some issue.
                    • 0.6.1-unstable-20240422122901: Such version \u201cnumbers\u201d denote unstable releases that are published upon every merged pull request (hence the timestamp at the end), and eventually will become a stable version, e.g., 0.6.1. This way you always have access to the NPM (and Docker) artifacts pertaining to the main branch of the repository. Let\u2019s find and fix the bugs of new features before they make it to the stable release!
                    • unstable: This is the very latest unstable release that has been published and would correspond to a version also published as 0.6.1-unstable-<some date>. This lets you quickly work with the very latest code from the main branch.
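To inspect what is currently published before pinning a version, the standard NPM commands can be used (the output will, of course, change over time):

user@ubuntu:~$ npm view @hyperledger/caliper-cli versions
user@ubuntu:~$ npm dist-tag ls @hyperledger/caliper-cli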

                    Note

                    The newest unstable release always corresponds to the up-to-date version of the related repositories, and the vNext version of the documentation page!

                    "},{"location":"getting-started/installing-caliper/#pre-requisites","title":"Pre-requisites","text":"
• Node.js v18 LTS, v20 LTS or v22 LTS is required to install the Caliper CLI from NPM
                    • Docker version 20.10.11 or later is required for use with the Caliper docker image

The following tools may be required, depending on which SUT and version you bind to (a quick verification sketch follows the list):

                    • python3, make, g++ and git (for fetching and compiling some packages during bind)
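A quick way to verify the prerequisites (the version outputs will differ from machine to machine):

user@ubuntu:~$ node --version     # expect a v18, v20 or v22 LTS release
user@ubuntu:~$ docker --version   # expect 20.10.11 or later
user@ubuntu:~$ python3 --version && make --version && g++ --version && git --version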
                    "},{"location":"getting-started/installing-caliper/#local-npm-install","title":"Local NPM install","text":"

                    Note

this is the highly recommended way to install Caliper for your project. Keeping the project dependencies local makes it easier to set up multiple Caliper projects. Global dependencies would require re-binding every time before a new benchmark run (to ensure the correct global dependencies).

                    1. Install the Caliper CLI as you would any other NPM package. It is highly recommended to explicitly specify the version number, e.g., @hyperledger/caliper-cli@0.6.0
                    2. Bind the CLI to the required platform SDK (e.g., fabric with the fabric-gateway SDK).
                    3. Invoke the local CLI binary (using npx) with the appropriate parameters. You can repeat this step for as many benchmarks as you would like.

                    Putting it all together:

                    user@ubuntu:~/caliper-benchmarks$ npm install --only=prod @hyperledger/caliper-cli@0.6.0\nuser@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:fabric-gateway\nuser@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                    We could also perform the binding automatically when launching the manager process (note the extra parameter for caliper launch manager):

                    user@ubuntu:~/caliper-benchmarks$ npm install --only=prod @hyperledger/caliper-cli@0.6.0\nuser@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-bind-sut fabric:fabric-gateway \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                    Note

specifying the --only=prod parameter in step 1 will ensure that the default latest SDK dependencies for every platform are not installed. Since we perform an explicit binding anyway (and only for a single platform), this is the desired approach, which also saves some storage and time.

                    Note

                    always make sure that the versions of the SUT, the bound SDK and the used artifacts match!

                    "},{"location":"getting-started/installing-caliper/#global-npm-install","title":"Global NPM install","text":"

                    Note

make sure that you have a really good reason for installing the Caliper CLI globally. The recommended approach is the local install; that way your project is self-contained, and you can easily set up multiple projects (in multiple directories) that each target a different SUT (or just different SUT versions). Installing or re-binding dependencies globally can get tricky.

                    There are some minor differences compared to the local install:

                    1. You can perform the install, bind and run steps from anywhere (just specify the workspace accordingly).
                    2. You need to install the CLI globally (-g flag).
                    3. You need to tell the binding step to install the packages also globally (--caliper-bind-args parameter).
                    4. You can omit the npx command, since caliper will be in your PATH.
                    user@ubuntu:~$ npm install -g --only=prod @hyperledger/caliper-cli@0.6.0\nuser@ubuntu:~$ caliper bind --caliper-bind-sut fabric:2.2 --caliper-bind-args=-g\nuser@ubuntu:~$ caliper launch manager \\\n    --caliper-workspace ~/caliper-benchmarks \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                    Note

for a global install you don't need to change directory to your workspace; you can simply specify --caliper-workspace ~/caliper-benchmarks. But this way you can't utilize the autocomplete feature of your command line for the relative paths of the artifacts.

Depending on your NPM settings, your user might need write access to directories outside of its home directory. This usually results in "Access denied" errors. The pointers in the official NPM documentation can guide you around the problem; one common workaround is sketched below.
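A common workaround, taken from the standard NPM guidance rather than anything Caliper-specific, is to point the global prefix at a directory your user owns:

user@ubuntu:~$ npm config set prefix ~/.npm-global
user@ubuntu:~$ export PATH=~/.npm-global/bin:$PATH    # add this line to your shell profile to make it permanent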

                    "},{"location":"getting-started/installing-caliper/#using-the-docker-image","title":"Using the Docker image","text":"

                    Caliper is published as the hyperledger/caliper Docker image, providing a single point of usage for every supported adapter.

                    The important properties of the image are the following:

                    • Working directory: /hyperledger/caliper/workspace
                    • The commands are executed by the node user (created in the base image)
                    • The environment variable CALIPER_WORKSPACE is set to the /hyperledger/caliper/workspace directory
                    • The entry point is the globally installed caliper binary
                    • The environment variable CALIPER_BIND_ARGS is set to -g, so the binding step also occurs globally.
                    • The default command is set to --version. This must be overridden when using the image.

                    This has the following implications:

1. It is recommended to mount your local workspace to the /hyperledger/caliper/workspace container directory. The default CALIPER_WORKSPACE environment variable value points to this location, so you don't need to specify it explicitly; that's one less setting to modify.
                    2. You need to choose a command to execute, either launch manager or launch worker. Check the Docker and Docker-Compose examples for the exact syntax.
                    3. The binding step is still necessary, similarly to the NPM install approach. Whether you use the launch manager or launch worker command, you only need to set the required binding parameter. The easiest way to do this is through the CALIPER_BIND_SUT and CALIPER_BIND_SDK environment variables.
                    4. You need to set the required parameters for the launched manager or worker. The easiest way to do this is through the CALIPER_BENCHCONFIG and CALIPER_NETWORKCONFIG environment variables.
                    "},{"location":"getting-started/installing-caliper/#starting-a-container","title":"Starting a container","text":"

Starting a Caliper container involves the following steps (following the recommendations above):

                    1. Pick the required image version
                    2. Mount your local working directory to a container directory
                    3. Set the required binding and run parameters

                    Note

the latest tag is not supported, i.e., you explicitly have to specify the image version you want: hyperledger/caliper:0.6.0, similar to the recommended approach for the NPM packages.

                    Putting it all together, split into multiple lines for clarity, and naming the container caliper:

                    user@ubuntu:~/caliper-benchmarks$ docker run \\\n    -v $PWD:/hyperledger/caliper/workspace \\\n    -e CALIPER_BIND_SUT=fabric:fabric-gateway \\\n    -e CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml \\\n    -e CALIPER_NETWORKCONFIG=networks/fabric/test-network.yaml \\\n    --name caliper hyperledger/caliper:0.6.0 launch manager\n

                    Note

                    The above network configuration file contains a start script to spin up a local Docker-based Fabric network, which will not work in this form. So make sure to remove the start (and end) script, and change the node endpoints to remote addresses.
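For reference, such commands live under the caliper section of a network configuration file; the snippet below is a hypothetical example of the kind of block to remove (the exact commands in your file may differ):

caliper:
  blockchain: fabric
  command:
    start: ./network.sh up createChannel
    end: ./network.sh down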

                    "},{"location":"getting-started/installing-caliper/#using-docker-compose","title":"Using docker-compose","text":"

                    The above command is more readable when converted to a docker-compose.yaml file:

                    version: '2'\n\nservices:\n    caliper:\n        container_name: caliper\n        image: hyperledger/caliper:0.6.0\n        command: launch manager\n        environment:\n        - CALIPER_BIND_SUT=fabric:fabric-gateway\n        - CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml\n        - CALIPER_NETWORKCONFIG=networks/fabric/test-network.yaml\n        volumes:\n        - ~/caliper-benchmarks:/hyperledger/caliper/workspace\n

                    Once you navigate to the directory containing the docker-compose.yaml file, just execute:

                    docker-compose up\n
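A few standard docker-compose commands are useful around the run; for example (using the caliper service name from the file above):

docker-compose up -d               # start in the background instead
docker-compose logs -f caliper     # follow the benchmark output
docker-compose down                # clean up once the run has finished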
                    "},{"location":"getting-started/installing-caliper/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"getting-started/license/","title":"License","text":"
                                                 Apache License\n                       Version 2.0, January 2004\n                    http://www.apache.org/licenses/\n

                    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

                    1. Definitions.

                    \u201cLicense\u201d shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

                    \u201cLicensor\u201d shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

                    \u201cLegal Entity\u201d shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \u201ccontrol\u201d means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

                    \u201cYou\u201d (or \u201cYour\u201d) shall mean an individual or Legal Entity exercising permissions granted by this License.

                    \u201cSource\u201d form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

                    \u201cObject\u201d form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

                    \u201cWork\u201d shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

                    \u201cDerivative Works\u201d shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

                    \u201cContribution\u201d shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \u201csubmitted\u201d means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \u201cNot a Contribution.\u201d

                    \u201cContributor\u201d shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

                    (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

                    (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

                    (d) If the Work includes a \u201cNOTICE\u201d text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

                    You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

                    END OF TERMS AND CONDITIONS

                    APPENDIX: How to apply the Apache License to your work.

                      To apply the Apache License to your work, attach the following\n  boilerplate notice, with the fields enclosed by brackets \"{}\"\n  replaced with your own identifying information. (Don't include\n  the brackets!)  The text should be enclosed in the appropriate\n  comment syntax for the file format. We also recommend that a\n  file or class name and description of purpose be included on the\n  same \"printed page\" as the copyright notice for easier\n  identification within third-party archives.\n

                    Copyright {yyyy} {name of copyright owner}

                    Licensed under the Apache License, Version 2.0 (the \u201cLicense\u201d); you may not use this file except in compliance with the License. You may obtain a copy of the License at

                       http://www.apache.org/licenses/LICENSE-2.0\n

                    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

                    "},{"location":"tutorial/","title":"Tutorials","text":"

                    These tutorials are specifically aimed at using Caliper to performance test different blockchain networks.

                    Please select a tutorial from the navigation section on the left which takes you through performance testing a smart contract using Caliper.

                    "},{"location":"tutorial/fabric/","title":"Basic Fabric Tutorial","text":""},{"location":"tutorial/fabric/#overview","title":"Overview","text":"

                    This tutorial takes you through performance testing a smart contract on a pre-existing Fabric network using Caliper.

                    To complete this tutorial you will need to have installed NodeJS. To do this, we recommend using nvm.

This tutorial is based on resources available from the official Hyperledger Fabric documentation. A network comprising two organizations and a solo orderer, with the javascript asset-transfer-basic smart contract, is assumed to be built and ready to performance test.

                    Note

The following command list is a minimalist quick-step guide to get the required Fabric network up and running. We pin the Hyperledger Fabric resources to explicit versions. To understand and troubleshoot what occurs during the creation of the test network, please refer to the Fabric documentation linked above!

Ensure you have the following prerequisites installed:

                    • docker engine or docker desktop
                    • curl
                    • jq
• a supported node LTS version. A list of currently supported LTS versions can be found on the node.js website (to install node easily you can use the nvm tool found here)
                    # Pull down the 2.5.7 hyperledger fabric and 1.5.10 hyperledger fabric ca images/binaries\ncurl -sSL https://raw.githubusercontent.com/hyperledger/fabric/main/scripts/bootstrap.sh | bash -s -- 2.5.7 1.5.10\ncd fabric-samples\n# As fabric-samples for hyperledger 2.5 is via the main branch, fix to a specific commit which we know works\n# to protect against potential breaking changes in main\ngit checkout c691cf94a99372e0225927d7927b1bc367018029\n# Start up the test-network\ncd test-network/\n./network.sh up createChannel\n./network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-javascript -ccl javascript\n
                    "},{"location":"tutorial/fabric/#step-1-create-a-caliper-workspace","title":"Step 1 - Create a Caliper Workspace","text":"

Create a folder named caliper-workspace at the same level as the fabric-samples directory, and then, within the caliper-workspace folder, create three folders named networks, benchmarks, and workload.

Caliper installation and use will be based on a local NPM installation. Within the caliper-workspace directory, install the Caliper CLI using the following terminal command:

                    npm install --only=prod @hyperledger/caliper-cli@0.6.0\n

                    Bind the SDK using the following terminal command:

                    npx caliper bind --caliper-bind-sut fabric:fabric-gateway\n

                    Further information relating to the installation and binding of Caliper may be found within the relevant documentation pages.

                    Caliper requires two configuration files:

                    • The network configuration file, which describes the network under test and provides the test identities to use.
                    • The benchmark file, which defines the performance test to be completed via a sequenced set of test rounds, each specifying a workload module and a series of options to drive the workload over a time interval.

                    We will now populate these folders with the assets required by Caliper.

                    "},{"location":"tutorial/fabric/#step-2-build-a-network-configuration-file","title":"Step 2 - Build a Network Configuration File","text":"

The network configuration file is the file required by Caliper workers to be able to submit and evaluate transactions on a Hyperledger Fabric network. The file can be in YAML or JSON format; this tutorial uses the YAML format.

                    "},{"location":"tutorial/fabric/#create-a-template-network-configuration-file","title":"Create a Template Network Configuration File","text":"

                    Under the networks folder create a template file called networkConfig.yaml with the following content:

                    name: Caliper test\nversion: \"2.0.0\"\n\ncaliper:\n  blockchain: fabric\n\nchannels:\n\norganizations:\n

                    name: The name for the configuration, in this instance \u201cCaliper test\u201d.

                    version: The version of the configuration file being used. \u201c2.0.0\u201d ensures the new fabric connectors are used

                    caliper: Indicates to Caliper the SUT that is being targeted, and may contain additional start/end commands or sut specific options that are not required within this tutorial. For the purposes of this tutorial, we are targeting a fabric network.

                    channels: Describes the Hyperledger Fabric channels and the smart contracts deployed on these channels to be benchmarked.

                    organizations: A list of the Hyperledger Fabric organizations with identities and connection profiles associated with each organization

                    "},{"location":"tutorial/fabric/#a-brief-introduction-to-common-connection-profiles-ccp","title":"A brief introduction to Common Connection Profiles (CCP)","text":"

Common Connection Profiles are a file format that all the Hyperledger Fabric SDKs can use to connect to a Hyperledger Fabric network. As Caliper utilizes the Fabric node SDK to connect to the network, Caliper makes use of these connection profiles. Whoever is responsible for building a Hyperledger Fabric network should create these files.

A Common Connection Profile is organization specific, so each organization will have its own unique file. Again, the network provider should supply a file for each organization.

These profiles come in two forms, termed static and dynamic in the Hyperledger Fabric documentation. In summary, a static connection profile contains all the information about the fabric network up front: amongst other things, all the peers, orderers and channels that exist. A dynamic connection profile is minimal, usually containing just one or two peers of your organization, with which the SDK will need to use discovery in order to determine all the information required to interact with the fabric network.

                    You will see that the test-network in fabric samples provides common connection profiles for each organization, and that they are dynamic connection profiles.
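To make the distinction concrete, a heavily trimmed, hypothetical dynamic profile might look like the following; the real connection-org1.yaml generated by the test-network contains the actual endpoint addresses and embedded TLS CA certificates:

name: test-network-org1
version: 1.0.0
client:
  organization: Org1
organizations:
  Org1:
    mspid: Org1MSP
    peers:
    - peer0.org1.example.com
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      pem: |
        -----BEGIN CERTIFICATE-----
        (certificate contents omitted)
        -----END CERTIFICATE-----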

                    "},{"location":"tutorial/fabric/#populating-the-template-file","title":"Populating The Template File","text":"

                    Following the test-network tutorial, a Common Connection Profile is generated as well as a set of identities for each organization.

We will be using Org1, whose MSP id is Org1MSP, to connect in this example, so there is no need to provide details about Org2, which is also part of the test-network. Having to provide only a single organization is a very common pattern.

                    "},{"location":"tutorial/fabric/#organizations","title":"Organizations","text":"

Here we need to add information about the organization whose MSP id is Org1MSP. We need to provide a name, its associated connection profile, and at least one identity.

The connection profile can be found in fabric-samples -> test-network -> organizations -> peerOrganizations -> org1.example.com. There are both json and yaml versions of this file; we will make use of connection-org1.yaml. These connection profiles are what Hyperledger Fabric refers to as dynamic, so they are expected to be used in conjunction with discovery; therefore we need to declare that this connection profile requires the use of discovery.

                    The identity we will use will be User1@org1.example.com.

                    The private key can be found in fabric-samples -> test-network -> organizations -> peerOrganizations -> org1.example.com -> users -> User1 -> msp -> keystore -> priv_sk

The public certificate can be found in fabric-samples -> test-network -> organizations -> peerOrganizations -> org1.example.com -> users -> User1 -> msp -> signcerts -> User1@org1.example.com-cert.pem

The identity will need to be given a unique name within the organization. It doesn't have to match the name that the test-network has used, i.e., User1@org1.example.com, so to keep it simple let's just give it the name User1. For the purposes of this tutorial we will just point to the certificate and private key files, but it's also possible to embed the information directly into the network configuration file.
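Before wiring these paths into the configuration, it can save time to confirm that they exist, for example:

user@ubuntu:~/caliper-workspace$ ls ../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/
user@ubuntu:~/caliper-workspace$ ls ../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/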

                    Below is the required organizations section that provides the above detail

                    organizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'User1'\n        clientPrivateKey:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'\n        clientSignedCert:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'\n    connectionProfile:\n      path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.yaml'\n# Uncomment the next line only if you want to try this network configuration file binding to fabric:2.2, it's not required for fabric-gateway binding\n#      discover: true\n

Note the - sign in front of mspid and name in the above example. These are important, as organizations could contain more than one organization, and certificates can likewise contain a list defining more than one identity.

One other important point to note: the first organization defined in the file is known as the default organization. In workload modules, if you don't specify an invoking organization, the default organization is used. As there is only one organization defined anyway, you will not see any reference to the invoking organization in the workload implementation.

                    "},{"location":"tutorial/fabric/#channels","title":"Channels","text":"

The Fabric connector for Caliper requires assistance when creating connections to a Fabric network. A list of channels must be provided, naming the smart contracts that may be interacted with.

As part of the test-network tutorial, a channel named mychannel will have been created, and a contract (chaincode) with the id basic will have been instantiated on that channel. We declare this as follows:

                    channels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n

Note the - sign in front of channelName and id in the above example. This is required because there can be more than one channel, so channels specifies a list of channels, and contracts can likewise list more than one contract (chaincode) id of interest.

                    "},{"location":"tutorial/fabric/#the-complete-network-configuration-file","title":"The Complete Network Configuration File","text":"

                    The Caliper network configuration file should now be fully populated. It can be useful to take time to look over and ensure that the paths to the certificates, private keys and connection profile are correct.

name: Caliper test\nversion: \"2.0.0\"\n\ncaliper:\n  blockchain: fabric\n\nchannels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n\norganizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'User1'\n        clientPrivateKey:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'\n        clientSignedCert:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'\n    connectionProfile:\n      path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.yaml'\n# Uncomment the next line only if you want to try this network configuration file binding to fabric:2.2, it's not required for fabric-gateway binding\n#      discover: true\n
                    "},{"location":"tutorial/fabric/#step-3-build-a-test-workload-module","title":"Step 3 - Build a Test Workload Module","text":"

                    The workload module interacts with the deployed smart contract during the benchmark round. The workload module extends the Caliper class WorkloadModuleBase from caliper-core. The workload module provides three overrides:

                    • initializeWorkloadModule - used to initialize any required items for the benchmark
                    • submitTransaction - used to interact with the smart contract method during the monitored phase of the benchmark
                    • cleanupWorkloadModule - used to clean up after the completion of the benchmark

                    For more information, please see the specific documentation on Workload Configuration accessible on the left hand menu.

                    The workload we will be driving aims to benchmark the querying of existing assets within the world state database. Consequently we will use all three phases available in the workload module:

                    • initializeWorkloadModule - to create assets that may be queried in the submitTransaction phase
                    • submitTransaction - to query assets created in the initializeWorkloadModule phase
                    • cleanupWorkloadModule - used to remove assets created in the initializeWorkloadModule phase so that the benchmark may be repeated
                    "},{"location":"tutorial/fabric/#create-a-template-workload-module","title":"Create A Template Workload Module","text":"

                    Within the workload folder create a file called readAsset.js with the following content:

                    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleBase {\n    constructor() {\n        super();\n    }\n\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n    }\n\n    async submitTransaction() {\n        // NOOP\n    }\n\n    async cleanupWorkloadModule() {\n        // NOOP\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n
                    "},{"location":"tutorial/fabric/#populating-the-template-file_1","title":"Populating the Template File","text":"

                    When we populate this file we make reference to the available methods within the deployed smart contract assetTransfer.js file that can be found in: fabric-samples -> asset-transfer-basic -> chaincode-javascript -> lib -> assetTransfer.js

                    "},{"location":"tutorial/fabric/#populate-initializeworkloadmodule","title":"Populate initializeWorkloadModule","text":"

This method is used to prepare any items required by the primary submitTransaction method before the benchmark round runs.

                    The number of assets to be created will be given as roundArguments.assets. We create assets using the smart contract by populating an arguments object, which defines the transaction body, and using the Caliper API sendRequests, which requires knowledge of:

                    • contractId, the name of smart contract that is to be used and is present within the Caliper network configuration file
                    • contractFunction, the specific function within the smart contract to invoke
                    • contractArguments, the arguments to pass to the smart contract function
• invokerIdentity, the identity to use, which must be present within the Caliper network configuration file. This is optional: Caliper will select an identity for you (from the appropriate invoking organization or the default organization), and in this tutorial there is only ever one identity to pick, but for completeness the examples define the identity explicitly.
                    • readOnly, if performing a query operation or not

                    The method should look like this:

                        async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Creating asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'CreateAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID,'blue','20','penguin','500'],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n

                    In the above example, different assets will be created that have the same parameters (blue, 20, penguin, 500). Comparing the above to the smart contract method itself, it should be evident that there is a 1:1 mapping of contract arguments to the method parameters.
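For reference, the CreateAsset method in the asset-transfer-basic JavaScript chaincode has a signature along the following lines (paraphrased here, so check the checked-out fabric-samples commit for the exact code), which makes the mapping explicit:

// assetTransfer.js (paraphrased): one contract argument per method parameter, i.e.,
// assetID -> id, 'blue' -> color, '20' -> size, 'penguin' -> owner, '500' -> appraisedValue
async CreateAsset(ctx, id, color, size, owner, appraisedValue) {
    const asset = { ID: id, Color: color, Size: size, Owner: owner, AppraisedValue: appraisedValue };
    await ctx.stub.putState(id, Buffer.from(JSON.stringify(asset)));
    return JSON.stringify(asset);
}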

                    "},{"location":"tutorial/fabric/#populate-submittransaction","title":"Populate submitTransaction","text":"

                    This method runs repeatedly in the benchmark test phase. We will be evaluating the ReadAsset smart contract method by querying the assets we created in the initializeWorkloadModule method.

First, create a string identity for the asset to query, formed by concatenating the worker index and a random integer between 0 (inclusive) and the number of created assets (exclusive).

Then await the call on sendRequests, passing an object containing: contractId set to the value passed in from the round arguments; contractFunction set to ReadAsset; invokerIdentity set to User1; and contractArguments set to an array containing the asset to query in this run.

                    The method should look like this:

                        async submitTransaction() {\n        const randomId = Math.floor(Math.random()*this.roundArguments.assets);\n        const myArgs = {\n            contractId: this.roundArguments.contractId,\n            contractFunction: 'ReadAsset',\n            invokerIdentity: 'User1',\n            contractArguments: [`${this.workerIndex}_${randomId}`],\n            readOnly: true\n        };\n\n        await this.sutAdapter.sendRequests(myArgs);\n    }\n
                    "},{"location":"tutorial/fabric/#populate-cleanupworkloadmodule","title":"Populate cleanupWorkloadModule","text":"

This function is used to clean up after a test: it deletes the assets created in the initializeWorkloadModule function through use of the smart contract function DeleteAsset. The implementation is similar to that within initializeWorkloadModule. Note that it is possible to refactor both initializeWorkloadModule and cleanupWorkloadModule to use a common method that performs the create/delete action; a sketch of this refactor follows the code below.

                       async cleanupWorkloadModule() {\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Deleting asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'DeleteAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n
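As a sketch of that refactor, using a hypothetical helper name, both lifecycle methods reduce to a single call:

    // Hypothetical shared helper: loops over the round's assets and submits one
    // request per asset, with buildArgs producing the per-function argument list.
    async _forEachAsset(contractFunction, buildArgs) {
        for (let i = 0; i < this.roundArguments.assets; i++) {
            const assetID = `${this.workerIndex}_${i}`;
            const request = {
                contractId: this.roundArguments.contractId,
                contractFunction,
                invokerIdentity: 'User1',
                contractArguments: buildArgs(assetID),
                readOnly: false
            };
            await this.sutAdapter.sendRequests(request);
        }
    }

    // initializeWorkloadModule body becomes:
    //     await this._forEachAsset('CreateAsset', id => [id, 'blue', '20', 'penguin', '500']);
    // cleanupWorkloadModule body becomes:
    //     await this._forEachAsset('DeleteAsset', id => [id]);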
                    "},{"location":"tutorial/fabric/#the-complete-workload-module","title":"The Complete Workload Module","text":"

The workload module file should now be fully populated:

                    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleBase {\n    constructor() {\n        super();\n    }\n\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Creating asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'CreateAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID,'blue','20','penguin','500'],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n\n    async submitTransaction() {\n        const randomId = Math.floor(Math.random()*this.roundArguments.assets);\n        const myArgs = {\n            contractId: this.roundArguments.contractId,\n            contractFunction: 'ReadAsset',\n            invokerIdentity: 'User1',\n            contractArguments: [`${this.workerIndex}_${randomId}`],\n            readOnly: true\n        };\n\n        await this.sutAdapter.sendRequests(myArgs);\n    }\n\n    async cleanupWorkloadModule() {\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Deleting asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'DeleteAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n
                    "},{"location":"tutorial/fabric/#step-4-build-a-benchmark-configuration-file","title":"Step 4 - Build a Benchmark Configuration File","text":"

                    The benchmark configuration file defines the benchmark rounds and references the defined workload module(s). It will specify the number of test workers to use when generating the load, the number of test rounds, the duration of each round, the rate control applied to the transaction load during each round, and options relating to monitors. This particular tutorial will not make use of any of the available resource and transaction monitors; for these details please refer to the documentation.

The benchmark configuration file may be provided in a YAML or JSON format; here we will use the YAML format. Please note that YAML files are case sensitive and all labels are in lowercase.

                    The benchmark configuration file has a single required stanza:

                    test:\n
                    "},{"location":"tutorial/fabric/#create-a-template-benchmark-configuration-file","title":"Create A Template Benchmark Configuration File","text":"

                    Under the benchmarks folder create a file called myAssetBenchmark.yaml with the following content:

                    test:\n    name: basic-contract-benchmark\n    description: A test benchmark\n    workers:\n    rounds:\n

                    test: The root level block that contains the benchmark test information.

                    name: The name of the test, in this case \u201cbasic-contract-benchmark\u201d.

                    description: A description for the benchmark, in this case \u201cA test benchmark\u201d.

                    workers: A set of keys used to define the number of workers (separate worker client instances) used in the subsequent benchmark.

                    rounds: An array of distinct test rounds that will be progressed sequentially. Rounds may be used to benchmark different smart contract methods, or the same method in a different manner.

                    "},{"location":"tutorial/fabric/#populating-the-template-file_2","title":"Populating the Template File","text":"

                    We will now populate the template file to specify the number of workers and the test round that uses the workload module we have created.

                    "},{"location":"tutorial/fabric/#populate-workers","title":"Populate Workers","text":"

We will be using two separate workers; this is accomplished through the workers specification:

                      number: 2\n
                    "},{"location":"tutorial/fabric/#populate-rounds","title":"Populate Rounds","text":"

                    Each round block contains the following:

                    • label - the unique header label to use for the round.
                    • description - a description of the round being run.
                    • txDuration - the specification of the test duration, in seconds
                    • rateControl - a rate control type, with options.
                    • workloads - the workload module to use, with arguments to pass to the module. All arguments passed are available as roundArguments within the workload module.

We will specify a benchmark round labeled readAsset, with the description Read asset benchmark, to run for a 30 second duration, using a fixed-load rate controller aiming to maintain a constant transaction pressure of 2. Additionally, we will be providing a workload through specification of our readAsset.js workload file, to which we will pass the arguments {assets: 10, contractId: basic}.

                    The above is accomplished through the round specification:

                        - label: readAsset\n      description: Read asset benchmark\n      txDuration: 30\n      rateControl:\n        type: fixed-load\n        opts:\n          transactionLoad: 2\n      workload:\n        module: workload/readAsset.js\n        arguments:\n          assets: 10\n          contractId: basic\n
                    "},{"location":"tutorial/fabric/#the-complete-benchmark-configuration-file","title":"The Complete Benchmark Configuration File","text":"

                    The benchmark configuration file should now be fully populated:

test:\n    name: basic-contract-benchmark\n    description: A test benchmark\n    workers:\n      number: 2\n    rounds:\n      - label: readAsset\n        description: Read asset benchmark\n        txDuration: 30\n        rateControl:\n          type: fixed-load\n          opts:\n            transactionLoad: 2\n        workload:\n          module: workload/readAsset.js\n          arguments:\n            assets: 10\n            contractId: basic\n
                    "},{"location":"tutorial/fabric/#step-5-run-the-caliper-benchmark","title":"Step 5 - Run the Caliper Benchmark","text":"

                    We are now ready to run the performance benchmark using the above configuration files and test module. The performance benchmark will be run using the Caliper CLI, which will need to be supplied a path to the workspace and workspace relative paths to the network configuration file and the benchmark configuration file. This information is provided with the flags --caliper-workspace, --caliper-networkconfig, and --caliper-benchconfig respectively.

                    Since the smart contract has already been installed and instantiated, Caliper only needs to perform the test phase. This is specified by using the flag --caliper-flow-only-test.

                    "},{"location":"tutorial/fabric/#run-the-command","title":"Run the command","text":"

                    Ensure that you are in the caliper-workspace directory.

                    In the terminal run the following Caliper CLI command:

                    npx caliper launch manager --caliper-workspace ./ --caliper-networkconfig networks/networkConfig.yaml --caliper-benchconfig benchmarks/myAssetBenchmark.yaml --caliper-flow-only-test

                    "},{"location":"tutorial/fabric/#benchmark-results","title":"Benchmark Results","text":"

                    The resulting report will detail the following items for each benchmark round:

                    • Name - the round name from the benchmark configuration file
                    • Succ/Fail - the number of successful/failing transactions
• Send Rate - the rate at which Caliper issued the transactions
                    • Latency (max/min/avg) - statistics relating to the time taken in seconds between issuing a transaction and receiving a response
                    • Throughput - the average number of transactions processed per second

You have successfully benchmarked a smart contract. You can repeat the test, varying the benchmark parameters as well as adding resource monitors. For the full set of options, please refer to the Caliper documentation.

                    "},{"location":"tutorial/fabric/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\u200b\\-_,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Caliper","text":"

                    Caliper is a blockchain performance benchmark framework, which allows users to test different blockchain solutions with predefined use cases, and get a set of performance test results.

                    "},{"location":"#supported-blockchain-solutions","title":"Supported Blockchain Solutions","text":"

                    Currently supported blockchain solutions:

                    • Hyperledger Besu
                    • Ethereum
                    • Hyperledger Fabric
                    "},{"location":"#supported-performance-metrics","title":"Supported Performance Metrics","text":"
                    • Transaction/read throughput
                    • Transaction/read latency (minimum, maximum, average, percentile)
                    • Resource consumption (CPU, Memory, Network IO, \u2026)

                    See PSWG for the exact definitions and corresponding measurement methods.

                    "},{"location":"#architecture","title":"Architecture","text":"

                    It helps to have a basic understanding of how Caliper works before diving into the examples. Have a look at the Architecture page!

                    "},{"location":"#installing-caliper","title":"Installing Caliper","text":"

                    Head to the Install & Usage page if you want to try Caliper right now. It\u2019s as simple as downloading an NPM package or starting a Docker container!

                    "},{"location":"#sample-networks","title":"Sample Networks","text":"

                    Sample benchmarks that may be used by Caliper are hosted on a companion GitHub repository.

                    Important

                    make sure that the version/tag of the benchmark repository matches the version of Caliper you are using! For example, if you are using Caliper v0.6.0, then checkout the v0.6.0 tag after cloning the benchmark repository. The main branch of the benchmark repository corresponds to the latest unstable Caliper version.

                    "},{"location":"#how-to-contribute","title":"How to Contribute","text":"

                    Every contribution is welcome! See the Contributing page for details.

                    "},{"location":"#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/bench-config/","title":"Benchmark Configuration","text":""},{"location":"concepts/bench-config/#overview","title":"Overview","text":"

                    The benchmark configuration file is one of the required configuration files necessary to run a Caliper benchmark. In contrast to the runtime configurations, used for tweaking the internal behavior of Caliper, the benchmark configuration pertains only to the execution of the benchmark workload and collection of the results.

                    Note

                    In theory, a benchmark configuration is independent of the system under test (SUT) and the internal configuration of Caliper. However, this independence might be limited by the implementation details of the benchmark workload module, which could target only a single SUT type.

                    The benchmark configuration consists of three main parts:

                    • Overview
                    • Benchmark test settings
                    • Monitoring settings
                    • Example
                    • License

                    For a complete benchmark configuration example, refer to the last section.

                    Note

                    The configuration file can be either a YAML or JSON file, conforming to the format described below. The benchmark configuration file path can be specified for the manager and worker processes using the caliper-benchconfig setting key.
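
                    As an illustration, the setting could be supplied from the command line when launching the manager process; the file paths below are hypothetical:

                    npx caliper launch manager --caliper-workspace . --caliper-benchconfig benchmarks/config.yaml --caliper-networkconfig networks/network-config.yaml\n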

                    "},{"location":"concepts/bench-config/#benchmark-test-settings","title":"Benchmark test settings","text":"

                    The settings related to the benchmark workload all reside under the root test attribute, which has some general child attributes, and the important rounds attribute.

                    • test.name: Short name of the benchmark to display in the report.
                    • test.description: Detailed description of the benchmark to display in the report.
                    • test.workers: Object of worker-related configurations.
                    • test.workers.type: Currently unused.
                    • test.workers.number: Specifies the number of worker processes to use for executing the workload.
                    • test.rounds: Array of objects, each describing the settings of a round.
                    • test.rounds[i].label: A short name of the round, usually corresponding to the types of submitted TXs.
                    • test.rounds[i].txNumber: The number of TXs Caliper should submit during the round.
                    • test.rounds[i].txDuration: The length of the round in seconds during which Caliper will submit TXs.
                    • test.rounds[i].rateControl: The object describing the rate controller to use for the round.
                    • test.rounds[i].workload: The object describing the workload module used for the round.
                    • test.rounds[i].workload.module: The path to the benchmark workload module implementation that will construct the TXs to submit.
                    • test.rounds[i].workload.arguments: Arbitrary object that will be passed to the workload module as configuration.

                    A benchmark configuration with the above structure will define a benchmark run that consists of multiple rounds. Each round is associated with a rate controller that is responsible for the scheduling of TXs, and a workload module that will generate the actual content of the scheduled TXs.

                    "},{"location":"concepts/bench-config/#monitoring-settings","title":"Monitoring settings","text":"

                    The monitoring configuration determines what kind of metrics the manager process can gather and from where. The configuration resides under the monitors attribute. Refer to the monitors configuration page for the details.

                    "},{"location":"concepts/bench-config/#example","title":"Example","text":"

                    The example configuration below says the following:

                    • Perform the benchmark run using 5 worker processes.
                    • There will be two rounds.
                    • The first init round will submit 500 TXs at a fixed 25 TPS send rate.
                    • The content of the TXs is determined by the init.js workload module.
                    • The second query round will submit TXs for 60 seconds at a fixed 5 TPS send rate.
                    • The content of the TXs is determined by the query.js workload module.
                    • The manager process will allow a Prometheus server to scrape information on port 3000 with a default scrape URL of /metrics.
                    • The manager process should include the predefined metrics of all local Docker containers in the report.
                    • The manager process should include the custom metric Endorse Time (s) based on the provided query for every available (peer) instance.
                    test:\n  workers:\n    number: 5\n  rounds:\n    - label: init\n      txNumber: 500\n      rateControl:\n        type: fixed-rate\n        opts:\n          tps: 25\n      workload:\n        module: benchmarks/samples/fabric/marbles/init.js\n    - label: query\n      txDuration: 60\n      rateControl:\n        type: fixed-rate\n        opts:\n          tps: 5\n      workload:\n        module: benchmarks/samples/fabric/marbles/query.js\nmonitors:\n  transaction:\n  - module: prometheus\n  resource:\n  - module: docker\n    options:\n      interval: 1\n      containers: ['all']\n  - module: prometheus\n    options:\n      url: \"http://prometheus:9090\"\n      metrics:\n        include: [dev-.*, couch, peer, orderer]\n        queries:\n        - name: Endorse Time (s)\n          query: rate(endorser_proposal_duration_sum{chaincode=\"marbles:v0\"}[5m])/rate(endorser_proposal_duration_count{chaincode=\"marbles:v0\"}[5m])\n          step: 1\n          label: instance\n          statistic: avg\n
                    "},{"location":"concepts/bench-config/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/benchmark-generators/","title":"Benchmark Generators","text":""},{"location":"concepts/benchmark-generators/#overview","title":"Overview","text":"

                    The Caliper benchmark generator is a Yeoman generator for generating the configuration and callback files used to perform benchmarks on deployed smart contracts. This page will take you through installing and using the generator.

                    "},{"location":"concepts/benchmark-generators/#installation","title":"Installation","text":"

                    You must first have Yeoman installed to be able to install and use the generator. You can do this using the following command:

                    npm install -g yo\n

                    Once Yeoman is installed, use the following command to install the generator:

                    npm install -g @hyperledger/generator-caliper\n
                    "},{"location":"concepts/benchmark-generators/#using-the-generator","title":"Using the Generator","text":"

                    To use the generator, run the following command

                    yo caliper\n

                    If successful, you should get the following output where you will be prompted to choose a generator - choose Benchmark to run the Caliper benchmark generator:

                    Welcome to the Hyperledger Caliper generator!\n? Which generator would you like to run? (Use arrow keys)\n\u276f Benchmark\n

                    Note

                    Alternatively, you can run the benchmark generator using: yo caliper:benchmark.

                    You will then get the following output where you will be prompted to name your workspace:

                    Welcome to the Hyperledger Caliper benchmark generator!\nLet's start off by creating a workspace folder!\n? What would you like to call your workspace? myWorkspace\n
                    "},{"location":"concepts/benchmark-generators/#callback-prompts","title":"Callback Prompts","text":"

                    The benchmark generator will initially take you through generating the callback file, and you will be prompted for:

                    • the name of your smart contract,
                    • the version of your smart contract,
                    • a smart contract function
                    • the argument variables of your smart contract function, which must be entered in array format

                    By the end, you should have something similar to the following:

                    Now for the callback file...\n? What is the name of your smart contract? fabcar\n? What is the version of your smart contract? 0.0.1\n? Which smart contract function would you like to perform the benchmark on? changeCarOwner\n? What are the arguments of your smart contract function? (e.g. [\"arg1\", \"arg2\"]) [\"CAR001\", \"Tom\"]\n
                    "},{"location":"concepts/benchmark-generators/#configuration-prompts","title":"Configuration Prompts","text":"

                    Next, you will be taken through generating the configuration file and you will be prompted for:

                    • the name of the benchmark
                    • a description of the benchmark
                    • the number of workers

                    Note

                    On an invalid input value for workers, a default value will be used.

                    • a label for differentiating between multiple rounds
                    • the rate controller you would like to use. The generator currently provides the rate controllers displayed below as options. The generated configuration file will use default opts for whichever rate controller is chosen.
                    ? Which rate controller would you like to use? (Use arrow keys)\n\u276f Fixed Rate\n  Fixed Backlog\n  Linear Rate\n  Fixed Feedback Rate\n
                    • the method by which you would like to measure the length of the round. The round may be measured using either transaction duration, which defines the length of the round in seconds, or transaction number, which defines the length of the round using the number of transactions to be generated in the round.
                    ? How would you like to measure the length of the round? (Use arrow keys)\n\u276f Transaction Duration\n  Transaction Number\n
                    • a value for either txNumber or txDuration, depending on the answer to the previous prompt.

                    Note

                    On an invalid input value for either txDuration or txNumber, a default value will be used.

                    By the end, you should have something similar to the following:

                    Now for the benchmark configuration file...\n? What would you like to name your benchmark? Fabcar benchmark\n? What description would you like to provide for your benchmark? Benchmark for performance testing fabcar contract modules\n? How many workers would you like to have? 5\n? What label (hint for test) would you like to provide for your benchmark? Round for changing car owner\n? Which rate controller would you like to use? Fixed Rate\n? How would you like to measure the length of the round? Transaction Number\n? How many transactions would you like to have in this round? 60\n

                    On successful generation, you should see the following:

                    Generating benchmarking files...\n   create myBenchmark/benchmarks/callbacks/changeCarOwner.js\n   create myBenchmark/benchmarks/config.yaml\nFinished generating benchmarking files\n

                    The generator can also be run non-interactively from the command line using the following command line options:

                    • --workspace: A workspace to put all the generated benchmark files.
                    • --contractId: The name of your smart contract.
                    • --version: The version of your smart contract.
                    • --contractFunction: Your smart contract function.
                    • --contractArguments (default []): The arguments of your smart contract function. These must be in an array format.
                    • --benchmarkName: A name for your benchmark.
                    • --benchmarkDescription: A description for your benchmark.
                    • --workers (default 5): A value for the number of workers.
                    • --label: A label for the round.
                    • --rateController: The rate controller.
                    • --txType: The way you would like to measure the length of the round - either \u201ctxDuration\u201d or \u201ctxNumber\u201d.
                    • --txDuration (default 50): The value for transaction duration if \u201ctxDuration\u201d was entered for txType.
                    • --txNumber (default 50): The value for transaction number if \u201ctxNumber\u201d was entered for txType.

                    Below is an example of the generator being run non-interactively from the command line using the options above:

                    yo caliper:benchmark -- --workspace 'myWorkspace' --contractId 'fabcar' --version '0.0.1' --contractFunction 'changeCarOwner' --contractArguments '[\"CAR001\", \"Tom\"]' --benchmarkName 'Fabcar benchmark' --benchmarkDescription 'Benchmark for performance testing fabcar contract modules' --workers 5 --label 'Round for changing car owner' --rateController 'fixed-rate' --txType 'txDuration' --txDuration 50\n

                    Note

                    All the options above are required when using the generator non-interactively.

                    "},{"location":"concepts/benchmark-generators/#next-steps","title":"Next Steps","text":"

                    The generated files will be placed within the workspace directory you named at the beginning of the generator, and you should have a directory structure similar to the one shown below:

                    .myWorkspace\n\u2514\u2500\u2500 benchmarks\n    \u251c\u2500\u2500 callbacks\n    \u2502   \u2514\u2500\u2500 changeCarOwner.js\n    \u2514\u2500\u2500 config.yaml\n

                    Currently, the generator does not provide invokerIdentity or contractArguments as inputs to your callback file. Should these be required, you will need to provide them in the run function of your callback file.

                    The generator only generates a single callback file for a single smart contract function. If you would like to test other smart contract functions, you may create more callback files under the callbacks directory. You will also need to update your benchmark configuration file to take into account the extra callbacks.

                    Note

                    The benchmark generator will only create the benchmark configuration file and the callback file. You will still need to provide a network configuration file to be able to perform the benchmark.

                    "},{"location":"concepts/benchmark-generators/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/caliper-messengers/","title":"Messengers","text":""},{"location":"concepts/caliper-messengers/#overview","title":"Overview","text":"

                    Caliper uses an orchestrator to control workers that interact with the SUT in order to perform a benchmark. Messages are passed between the orchestrator and all workers in order to keep the workers synchronized, and to progress the specified benchmark tests. A user may specify the messaging protocol that is used by Caliper in order to facilitate communications between the orchestrator and worker.

                    "},{"location":"concepts/caliper-messengers/#messengers","title":"Messengers","text":"

                    The messaging protocol to be used for communications between the orchestrator and worker during a benchmark is declared in the caliper runtime configuration file. Unspecified values will default to those specified in the default configuration file.

                    Permitted messengers are:

                    • Process: The process messenger is the default messenger and is based on native NodeJS process based communications. This messenger type is only valid for instances when local workers are being used to perform a benchmark.
                    • MQTT: The mqtt messenger uses MQTT to facilitate communication between the orchestrator and workers. This messenger type is valid for both local and distributed workers, and assumes the existence of an MQTT broker service that may be used, such as mosquitto.

                    Note

                    Mosquitto v2 requires explicit authorization and authentication configurations, which is a breaking change compared to v1. To migrate to v2, follow the official migration guide of Mosquitto.

                    The following yaml extract specifies the use of an MQTT communication method, using an existing MQTT broker that may be connected to via the specified address:

                        worker:\n        communication:\n            method: mqtt\n            address: mqtt://localhost:1883\n

                    If not specifying a caliper.yaml configuration file, the above may be specified as command line arguments to the CLI process as:

                    --caliper-worker-communication-method mqtt --caliper-worker-communication-address mqtt://localhost:1883\n
                    "},{"location":"concepts/caliper-messengers/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/caliper-monitors/","title":"Resource and Transaction Monitors","text":""},{"location":"concepts/caliper-monitors/#overview","title":"Overview","text":"

                    Caliper monitoring modules are used to collect resource utilization and transaction statistics during test execution, with the output being collated into the generated reports. Caliper monitors resources and transactions using:

                    • Resource monitors. Collect statistics on resource utilization during benchmarking, with monitoring reset between test rounds.
                    • Transaction monitors. Collect worker transaction statistics and provide conditional dispatch actions.
                    "},{"location":"concepts/caliper-monitors/#resource","title":"Resource","text":"

                    The type of resource monitor to be used within a Caliper benchmark is declared in the benchmark configuration file through the specification of one or more monitoring modules in an array under the label monitors.resource.

                    Permitted monitors are:

                    • Process: The process monitor enables monitoring of a named process on the host machine, and is most typically used to monitor the resources consumed by the running clients. This monitor will retrieve statistics on: [memory(max), memory(avg), CPU(max), CPU(avg), Network I/O, Disc I/O]
                    • Docker: The docker monitor enables monitoring of specified Docker containers on the host or a remote machine, through using the Docker Remote API to retrieve container statistics. This monitor will retrieve statistics on: [memory(max), memory(avg), CPU(max), CPU(avg), Network I/O, Disc I/O]
                    • Prometheus: The prometheus monitor enables the retrieval of data from Prometheus. This monitor will only report based on explicit user-provided queries that are issued to Prometheus. If a Prometheus server is defined, Caliper will default to using the Prometheus PushGateway.

                    Each declared resource monitoring module is accompanied by options required to configure each of the named monitors. A common option for some modules is interval, which is used to configure the refresh interval at which resource utilization is measured by the monitor.

                    "},{"location":"concepts/caliper-monitors/#process-monitor","title":"Process Monitor","text":"

                    The process monitoring module options comprise:

                    • interval: monitor update interval in seconds
                    • processes: an array of [command, arguments, multiOutput] key:value pairs.
                      • command: names the parent process to monitor
                      • arguments: filters on the parent process being monitored
                      • multiOutput: enables handling of the discovery of multiple processes and may be one of:
                      • avg: take the average of process values discovered under command/name
                      • sum: sum all process values discovered under command/name

                    The following declares the monitoring of all local node processes that match caliper.js, with a 3 second update frequency, and the average of all discovered processes being taken.

                    monitors:\n  resource:\n  - module: process\n    options:\n      interval: 3\n      processes: [{ command: 'node', arguments: 'caliper.js', multiOutput: 'avg' }]\n
                    "},{"location":"concepts/caliper-monitors/#docker-monitor","title":"Docker Monitor","text":"

                    The docker monitoring module options comprise:

                    • interval: monitor update interval in seconds
                    • cpuUsageNormalization: an optional boolean that may be used to convert the cpu usage into a more convenient value (scaled to 100) by normalizing for the number of cores of the host machine; the default is false
                    • containers: an array of container names that may relate to local or remote docker containers to be monitored. If all local docker containers are to be monitored, this may be achieved by providing all as a name

                    The following declares the monitoring of two named docker containers; one local and the other remote, with a 5 second update frequency:

                    monitors:\n  resource:\n  - module: docker\n    options:\n      interval: 5\n      containers:\n      - peer0.org1.example.com\n      - http://192.168.1.100:2375/orderer.example.com\n

                    The following declares the monitoring of all local docker containers, with a 5 second update frequency and normalization of the cpuUsage metric set to true.

                    monitors:\n  resource:\n  - module: docker\n    options:\n      interval: 5\n      cpuUsageNormalization: true\n      containers:\n      - all\n
                    "},{"location":"concepts/caliper-monitors/#prometheus-monitor","title":"Prometheus Monitor","text":"

                    Prometheus is an open-source systems monitoring and alerting toolkit that scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. Grafana or other API consumers can be used to visualize the collected data.

                    All data stored on Prometheus may be queried by Caliper using the Prometheus query HTTP API. At a minimum this may be used to perform aggregate queries in order to report back the transaction statistics, though it is also possible to perform custom queries in order to report back information that has been scraped from other connected sources. Queries issued are intended to generate reports and so are expected to result in either a single value, or a vector that can be condensed into a single value through the application of a statistical routine. It is advisable to create required queries using Grafana to ensure correct operation before transferring the query into the monitor. Please see Prometheus and Grafana documentation for more information.

                    "},{"location":"concepts/caliper-monitors/#configuring-the-prometheus-monitor","title":"Configuring The Prometheus Monitor","text":"

                    The prometheus monitoring module options comprise:

                    • url: The Prometheus URL, used for direct queries
                    • metrics: The queries to be run for inclusion within the Caliper report, comprising two keys: include and queries.
                      • include: a string array that is used to determine metric inclusion through javascript regex. Any query results where the label of interest, as specified in the queries block, matches an item within the include list via regex, will be included in a generated report.
                      • queries: a series of blocks that describe the queries that are to be run at the end of each Caliper test.

                    The queries block is defined by:

                    • name: the metric name that the query relates to, used when building the report
                    • query: the query to be issued to the Prometheus server at the end of each test. Note that Caliper will add time bounding for the query so that only results pertaining to the test round are included.
                    • step: the timing step size to use within the range query
                    • label: a string to match on the returned query and used as a component identifier when populating the report
                    • statistic: if multiple values are returned, for instance if looking at a specific resource over a time range, the statistic will condense the values to a single result to enable reporting. Permitted options are:
                      • avg: return the average from all values
                      • max: return the maximum from all values
                      • min: return the minimum from all values
                      • sum: return the summation of all values
                    • multiplier: An optional multiplier that may be used to convert exported metrics into a more convenient value (such as converting bytes to GB)

                    The following declares a Prometheus monitor that will run two bespoke queries between each test within the benchmark

                    monitors:\n    resource:\n    - module: prometheus\n      options:\n        url: \"http://localhost:9090\"\n        metrics:\n            include: [dev-.*, couch, peer, orderer]\n            queries:\n                - name: Endorse Time (s)\n                  query: rate(endorser_proposal_duration_sum{chaincode=\"marbles:v0\"}[1m])/rate(endorser_proposal_duration_count{chaincode=\"marbles:v0\"}[1m])\n                  step: 1\n                  label: instance\n                  statistic: avg\n                - name: Max Memory (MB)\n                  query: sum(container_memory_rss{name=~\".+\"}) by (name)\n                  step: 10\n                  label: name\n                  statistic: max\n                  multiplier: 0.000001\n
                    The two queries above will be listed in the generated report as \u201cEndorse Time (s)\u201d and \u201cMax Memory (MB)\u201d respectively:

                    • Endorse Time (s): Runs the listed query with a step size of 1; filters on returned tags within the Prometheus query response using the instance label; excludes the result if the instance value does not regex match any of the string values provided in the include array; if the instance does match an include option, then determines the average of all returned results and reports this value under \u201cEndorse Time (s)\u201d.
                    • Max Memory (MB): Runs the listed query with a step size of 10; filters on returned tags within the Prometheus query response using the name label; excludes the result if the name value does not regex match any of the string values provided in the include array; if the name does match an include option, then determines the maximum of all returned results, multiplies it by the provided multiplier, and reports this value under \u201cMax Memory (MB)\u201d.

                    Returned components with labels that pass a regex test against the include array items will be included within the report; all others will be omitted.

                    "},{"location":"concepts/caliper-monitors/#basic-auth","title":"Basic Auth","text":"

                    It is possible to use a Prometheus Server that is secured via basic authentication through provision of a username and password as runtime parameters, under the flags:

                    • caliper-auth-prometheus-username
                    • caliper-auth-prometheus-password

                    These will be used to augment the configuration file based URL prior to making a connection.
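
                    For example, the credentials might be supplied to the CLI process alongside the other runtime settings (the values below are placeholders):

                    --caliper-auth-prometheus-username monitor --caliper-auth-prometheus-password s3cret\n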

                    "},{"location":"concepts/caliper-monitors/#transaction","title":"Transaction","text":"

                    Transaction monitors are used by Caliper workers to act on the completion of transactions. They are used internally to aggregate transaction statistics and dispatch them to the manager process, enabling progress reporting via the default transaction monitor as well as report generation.

                    The default transaction monitor, used for progress reporting by consuming information from the internal transaction monitor, may be updated through configuration file settings:

                    • caliper-progress-reporting-enabled: boolean flag to enable progress reporting, default true
                    • caliper-progress-reporting-interval: numeric value to set the update frequency, in milliseconds (default 5000)

                    Additional transaction monitoring modules include:

                    • logging

                    • prometheus-push

                    One or more transaction modules may be specified by naming them as modules with an accompanying options block in an array format under monitors.transaction.

                    "},{"location":"concepts/caliper-monitors/#logging","title":"Logging","text":"

                    The logging transaction module is used to log aggregated transaction statistics at the completion of a test round, within the worker. The following specifies the use of a logging transaction monitor. No options are required by the module.

                    monitors:\n    transaction:\n    - module: logging\n
                    "},{"location":"concepts/caliper-monitors/#prometheus","title":"Prometheus","text":"

                    The prometheus transaction module is used to expose current transaction statistics of all workers to a Prometheus server, via a scrape mechanism. The module exposes the following metrics:

                    • caliper_tx_submitted (counter)
                    • caliper_tx_finished (counter)
                    • caliper_tx_e2e_latency (histogram)

                    The following specifies the use of a prometheus transaction module that exposes metrics for collection on the default port (3000) and the default scrape URL (/metrics).

                    monitors:\n    transaction:\n    - module: prometheus\n

                    If operating with process based workers, each worker will offset the default (or overridden) port by its zero-based index, thereby exposing metrics for each worker on a different port.

                    It is the responsibility of the user to configure a Prometheus server that targets the exposed URLs through a correctly specified configuration file.
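
                    As a sketch, a Prometheus scrape configuration for a benchmark run with three process based workers (ports 3000-3002, assuming the defaults described above) might look like the following:

                    scrape_configs:\n  # One scrape target per Caliper worker; ports are offset by the zero-based worker index\n  - job_name: 'caliper-workers'\n    metrics_path: /metrics\n    static_configs:\n      - targets: ['localhost:3000', 'localhost:3001', 'localhost:3002']\n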

                    Options comprise the following (an illustrative configuration follows the list):

                    • metricPath: override for the metrics path to be scraped (default /metrics).
                    • scrapePort: override for the port to be used when configuring the scrape server (default 3000).
                    • processMetricCollectInterval: time interval for default metrics collection, enabled when present
                    • defaultLabels: object of key:value pairs to augment the default labels applied to the exposed metrics during collection.
                    • histogramBuckets: override for the histogram to be used for collection of caliper_tx_e2e_latency
                      • explicit: direct pass through of user defined bucket
                      • linear: use a linear bucket with user defined start, width and count parameters
                        • start: start bucket size
                        • width: bucket width
                        • count: number of buckets to create
                      • exponential
                        • start: start bucket size
                        • factor: bucket factor
                        • count: number of buckets to create
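
                    A minimal sketch combining these options; the port, label, and bucket values are illustrative only:

                    monitors:\n    transaction:\n    - module: prometheus\n      options:\n        scrapePort: 3000\n        metricPath: /metrics\n        defaultLabels:\n          benchmark: sample-benchmark # hypothetical label\n        histogramBuckets:\n          linear:\n            start: 0.1\n            width: 0.5\n            count: 10\n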
                    "},{"location":"concepts/caliper-monitors/#prometheus-push-gateway","title":"Prometheus Push Gateway","text":"

                    The prometheus-push transaction module is used to expose current transaction statistics of all workers to a Prometheus server, via a push gateway. The module exposes the following metrics:

                    • caliper_tx_submitted (counter)
                    • caliper_tx_finished (counter)
                    • caliper_tx_e2e_latency (histogram)

                    The following specifies the use of a prometheus-push transaction module that sends current transaction statistics to a push gateway located at http://localhost:9091 at 5 second intervals.

                    monitors:\n    transaction:\n    - module: prometheus-push\n      options:\n        pushInterval: 5000\n        pushUrl: \"http://localhost:9091\"\n

                    Options comprise:

                    • pushInterval: push interval in milliseconds
                    • pushUrl: URL for Prometheus Push Gateway
                    • processMetricCollectInterval: time interval for default metrics collection, enabled when present
                    • defaultLabels: object of key:value pairs to augment the default labels applied to the exposed metrics during collection.
                    • histogramBuckets: override for the histogram to be used for collection of caliper_tx_e2e_latency
                      • explicit: direct pass through of user defined bucket
                      • linear: use a linear bucket with user defined start, width and count parameters
                        • start: start bucket size
                        • width: bucket width
                        • count: number of buckets to create
                      • exponential
                        • start: start bucket size
                        • factor: bucket factor
                        • count: number of buckets to create

                    Use of a prometheus-push transaction module is predicated on the availability of a Prometheus Push Gateway that is configured as a scrape target for Prometheus.

                    "},{"location":"concepts/caliper-monitors/#basic-auth_1","title":"Basic Auth","text":"

                    It is possible to use a Prometheus Push Gateway that is secured via basic authentication through provision of a username and password as runtime parameters, under the flags:

                    • caliper-auth-prometheuspush-username
                    • caliper-auth-prometheuspush-password

                    These will be used to augment the configuration file based URL prior to making a connection.

                    "},{"location":"concepts/caliper-monitors/#grafana-visualization","title":"Grafana Visualization","text":"

                    Grafana is an analytics platform that may be used to query and visualize metrics collected by Prometheus. Caliper clients make the following metrics available, either via a direct scrape or indirectly via a Prometheus Push Gateway:

                    • caliper_tx_submitted (counter)
                    • caliper_tx_finished (counter)
                    • caliper_tx_e2e_latency (histogram)

                    Each of the above are tagged with the following default labels:

                    • roundLabel: the current test round label
                    • roundIndex: the current test round index
                    • workerIndex: the zero based worker index that is sending the information

                    We are currently working on a Grafana dashboard to give you immediate access to the metrics published above, but in the interim please feel free to create custom queries to view the above metrics that are accessible in real time.
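
                    For instance, the following PromQL sketches chart the per-worker send rate and an approximate 95th percentile end-to-end latency; the _bucket suffix assumes the standard Prometheus histogram naming convention:

                    # Transactions submitted per second, per worker\nsum by (workerIndex) (rate(caliper_tx_submitted[30s]))\n\n# Approximate 95th percentile end-to-end latency across all workers\nhistogram_quantile(0.95, sum by (le) (rate(caliper_tx_e2e_latency_bucket[1m])))\n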

                    "},{"location":"concepts/caliper-monitors/#resource-charting","title":"Resource Charting","text":"

                    The data from each monitor can be output in chart form within the generated Caliper report, via an option within the benchmark configuration file for each monitor. In addition to tabulated data for resource monitors, Caliper currently supports rendering of the following charts using charting.js:

                    • horizontal bar
                    • polar area

                    Charting is an option that is available for each resource monitor, and the specification of the charting to be produced is specified under each monitor type within the benchmark configuration file, under a charting block. It is possible to specify multiple charting options for a single resource monitor.

                    A chart will contain data for all items that are being tracked by the monitor; it is only possible to filter on the metrics that are to be charted. The following declares the charting block that is valid for the listed monitors:

                    charting:\n  bar:\n  - metrics: [all | <string list>]\n  polar:\n  - metrics: [all | <string list>]\n

                    If the all option is specified, then a chart will be output for each metric and include all monitored items within each chart. It is possible to filter on metrics by providing a comma separated list. The provided list is matched against metrics using a string comparison, so it is only necessary to provide the initial part of the desired match. The following declares a charting block that specifies a bar chart for all available metrics, and a polar chart for only metric0 and metric1:

                    charting:\n  bar:\n  - metrics: [all]\n  polar:\n  - metrics: [metric0, metric1]\n
                    "},{"location":"concepts/caliper-monitors/#process-charting","title":"Process Charting","text":"

                    The process resource monitor exposes the following metrics: Memory(max), Memory(avg), CPU%(max), CPU%(avg).

                    The following declares the monitoring of any running processes named caliper.js, with charting options specified to produce bar charts for all available metrics. Charts will be produced containing data from all monitored processes:

                    monitors:\n  resource:\n  - module: process\n    options:\n      interval: 3\n      processes: [{ command: 'node', arguments: 'caliper.js', multiOutput: 'avg' }]\n      charting:\n        bar:\n          metrics: [all]\n
                    "},{"location":"concepts/caliper-monitors/#docker-charting","title":"Docker Charting","text":"

                    The docker resource monitor exposes the following metrics: Memory(max), Memory(avg), CPU%(max), CPU%(avg), Traffic In, Traffic Out, Disc Read, Disc Write.

                    The following declares the monitoring of all local docker containers, with charting options specified to produce bar charts for Memory(avg) and CPU%(avg), and polar charts for all metrics. Charts will be produced containing data from all monitored containers:

                    monitors:\n  resource:\n  - module: docker\n    options:\n      interval: 5\n      containers:\n      - all\n      charting:\n        bar:\n          metrics: [Memory(avg), CPU%(avg)]\n        polar:\n          metrics: [all]\n
                    "},{"location":"concepts/caliper-monitors/#prometheus-charting","title":"Prometheus Charting","text":"

                    The Prometheus monitor enables user definition of all metrics within the configuration file.

                    The following declares the monitoring of two user defined metrics Endorse Time(s) and Max Memory(MB). Charting options are specified to produce polar charts filtered on the metric Max Memory (MB), and bar charts of all user defined metrics.

                    monitors:\n    resource:\n    - module: prometheus\n      options:\n        url: \"http://localhost:9090\"\n        metrics:\n            include: [dev.*, couch, peer, orderer]\n            queries:\n                - name: Endorse Time (s)\n                  query: rate(endorser_proposal_duration_sum{chaincode=\"marbles:v0\"}[1m])/rate(endorser_proposal_duration_count{chaincode=\"marbles:v0\"}[1m])\n                  step: 1\n                  label: instance\n                  statistic: avg\n                - name: Max Memory (MB)\n                  query: sum(container_memory_rss{name=~\".+\"}) by (name)\n                  step: 10\n                  label: name\n                  statistic: max\n                  multiplier: 0.000001\n        charting:\n          polar:\n            metrics: [Max Memory (MB)]\n          bar:\n            metrics: [all]\n
                    "},{"location":"concepts/caliper-monitors/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/declarative-workload-module/","title":"Declarative Workloads","text":""},{"location":"concepts/declarative-workload-module/#overview","title":"Overview","text":"

                    DeclarativeWorkloadModuleBase is a base class that can be implemented for declaratively assigning workload parameters. The Contracts, Functions and Parameters for test runs are specified under arguments.behavior.

                    "},{"location":"concepts/declarative-workload-module/#example","title":"Example","text":"
                    workload:\n  module: declarative\n  arguments:\n    parameterOne: param1\n    parameterTwo: 42\n    behavior:\n      contracts:\n      - name: contract1\n        functions:\n        - name: function1\n          parameters:\n          - name: randomNumber\n            type: uniform_random\n            options:\n              min: 10\n              max: 100\n

                    The example above means the following:

                    • The WorkloadModule used here is declarative.
                    • The roundArguments taken in by Caliper are parameterOne, assigned the value of 'param1' and parameterTwo assigned the value of 42.
                    • The arguments.behavior section specifies the declared properties of the workload module.
                    • A contracts list is defined.
                    • contracts contains name, followed by functions, which lists function1 as its only item.
                    • functions contains name and parameters.
                    • parameters contains a value provider with the name randomNumber of type uniform_random. This generates a random number between 10 and 100 for the parameter.
                    "},{"location":"concepts/declarative-workload-module/#hierarchy","title":"Hierarchy","text":"

                    Under arguments.behavior, contracts is the top layer. It consists of a list of contracts to be tested. Within a single contracts list element, the functions property holds the list of all functions present under that contract. Similarly, under each functions list item, there is a parameters list which holds different types of user defined parameters.

                    "},{"location":"concepts/declarative-workload-module/#contracts","title":"Contracts","text":"

                    Used to specify the list of contracts to be tested. Each contracts list element has the following format.

                    • name (string): Name of the SUT contract to be tested.
                    • functionSelection (string): Type of contract picking logic.
                    • functions (list): List of function descriptions.

                    "},{"location":"concepts/declarative-workload-module/#functions","title":"Functions","text":"

                    Used to specify the list of functions under a contract to be tested. Each functions list element has the following format.

                    • name (string): Name of the SUT function to be tested.
                    • parameters (list): List of parameter descriptions for the function.

                    "},{"location":"concepts/declarative-workload-module/#parameters","title":"Parameters","text":"

                    Used to specify different generated parameters for each function.

                    • type (string): Assigned a value according to the type of parameter used.
                    • name (string): Parameter Name.
                    • options (string): Additional information about the parameter definition.

                    The parameters list can contain one or more of the following items.

                    "},{"location":"concepts/declarative-workload-module/#uniform-random","title":"Uniform Random","text":"

                    Value provider format for generating a random number within a given range.

                    "},{"location":"concepts/declarative-workload-module/#example_1","title":"Example","text":"
                    - name: randomNumber\n  type: uniform_random\n  options:\n    min: 0\n    max: 100\n
                    "},{"location":"concepts/declarative-workload-module/#attributes","title":"Attributes","text":"Property Type Description options.min number Minimum inclusive range for generated random number. options.max number Maximum inclusive range for generated random number."},{"location":"concepts/declarative-workload-module/#parameters-reference","title":"Parameters Reference","text":"

                    Value provider format for referencing a module.arguments item.

                    "},{"location":"concepts/declarative-workload-module/#example_2","title":"Example","text":"
                    - name: referenceNumber\n  type: parameter_reference\n  options:\n    name: marbleNumber #assuming this is defined under round.arguments\n
                    "},{"location":"concepts/declarative-workload-module/#attributes_1","title":"Attributes","text":"Property Type Description options.name string Should be a variable name specified under module.arguments."},{"location":"concepts/declarative-workload-module/#variable-reference","title":"Variable Reference","text":"

                    Value provider format for referencing a Caliper workload variable.

                    "},{"location":"concepts/declarative-workload-module/#example_3","title":"Example","text":"
                    - name: referenceNumber\n  type: variable_reference\n  options:\n    name: txIndex #a Caliper workload parameter\n
                    "},{"location":"concepts/declarative-workload-module/#attributes_2","title":"Attributes","text":"Property Type Description options.name string Should refer to a base-class provided variable."},{"location":"concepts/declarative-workload-module/#list-element","title":"List Element","text":"

                    Value provider format for selecting an item from a given list.

                    "},{"location":"concepts/declarative-workload-module/#example_4","title":"Example","text":"
                    - name: selectedColor\n  type: list_element\n  options:\n    list: ['red', 'blue', 'green']\n    selector:\n      type: variable_reference\n      options:\n        name: txIndex #assuming this is defined under module.arguments\n
                    "},{"location":"concepts/declarative-workload-module/#attributes_3","title":"Attributes","text":"Property Type Description options.list list List from which an element is chosen. options.selector object Contains information about any valid numeric value provider for selecting elements by index."},{"location":"concepts/declarative-workload-module/#formatted-string","title":"Formatted String","text":"

                    Value provider format for generating formatted strings.

                    "},{"location":"concepts/declarative-workload-module/#example_5","title":"Example","text":"
                    - name: generatedString\n  type: formatted_string\n  options:\n    format: 'example_{1}_{2}'\n    parts:\n      - type: parameter_reference\n        options:\n          name: marbleIndex\n      - type: variable_reference\n        options:\n          name: txIndex\n
                    "},{"location":"concepts/declarative-workload-module/#attributes_4","title":"Attributes","text":"Property Type Description options.format string Specifies format and placeholders for variables. Placeholders are specified using this syntax: {variable_no}. 1-based indexing of the parts list is used for this purpose. options.parts list Specifies variable and parameter reference value providers for use in the string."},{"location":"concepts/declarative-workload-module/#implementing-the-declarativeworkloadmodulebase-class","title":"Implementing the DeclarativeWorkloadModuleBase class","text":""},{"location":"concepts/declarative-workload-module/#submitwitharguments","title":"submitWithArguments","text":"

                    The DeclarativeWorkloadModuleBase is exported as a module. In order to use it, submitWithArguments() needs to be implemented by the user. It accepts generatedArguments during a Caliper run.
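
                    A minimal sketch of such an implementation is shown below; the import location of the base class and the exact request fields passed to the connector are assumptions that will vary with the SUT:

                    'use strict';\n\n// Assumption: the base class is exported by the core package.\nconst { DeclarativeWorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass SimpleDeclarativeWorkload extends DeclarativeWorkloadModuleBase {\n    // Submit one TX built from the declaratively generated arguments.\n    async submitWithArguments(generatedArguments) {\n        // sendRequests() is the generic connector API; the request shape is illustrative.\n        await this.sutAdapter.sendRequests({\n            contractId: 'contract1',\n            contractFunction: 'function1',\n            contractArguments: Object.values(generatedArguments),\n            readOnly: false\n        });\n    }\n}\n\nmodule.exports.createWorkloadModule = () => new SimpleDeclarativeWorkload();\n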

                    "},{"location":"concepts/logging-control/","title":"Logging Control","text":""},{"location":"concepts/logging-control/#overview","title":"Overview","text":"

                    Caliper builds on the winston logger module to provide a flexible, multi-target logging mechanism. There are three different aspects when it comes to interacting with the Caliper logging subsystem:

                    1. Customizing the logging style
                    2. Configuring logging targets
                    3. Creating your own loggers

                    The first two points can be achieved through the runtime configuration mechanism of Caliper. So make sure that you are familiar with the different ways of overriding runtime settings before reading on. The examples below only set the different options through the command line. Naturally, any other setting source could be used.

                    The runtime configuration settings corresponding to logging reside under the caliper-logging key hierarchy. See the caliper.logging section of the default configuration file bundled with Caliper for the general structure of the settings.
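
                    For example, assuming that Caliper maps setting keys to upper-case, underscore-separated environment variables (its usual convention), a style setting could also be exported before launching the processes:

                    export CALIPER_LOGGING_TEMPLATE='%timestamp% %level%: %message%'\n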

                    "},{"location":"concepts/logging-control/#customizing-the-logging-style","title":"Customizing the logging style","text":"

                    The two main aspects of the logging style are the message structure and the different formats that, if applied, modify the message appearance. The corresponding attributes are the caliper.logging.template property and the entire caliper.logging.formats property hierarchy, respectively.

                    The caliper.logging.formats hierarchy is special in a sense that every leaf property can be overridden one-by-one, even from the command line or from environment variables. As you will see later, this is not the case for the logging target settings.

                    Note

                    the following style settings apply to every specified logging target!

                    "},{"location":"concepts/logging-control/#setting-the-message-structure","title":"Setting the message structure","text":"

                    The message structure can be easily customized through the caliper.logging.template property. It is a simple string that contains predefined placeholders for some special values. Some placeholders are only available when a corresponding format is also applied.

                    Let\u2019s start with examining the default structure:

                    caliper:\n  logging:\n    template: '%timestamp% %level% [%label%] [%module%] %message% (%metadata%)'\n

                    The following placeholders are available at the moment.

                    • %timestamp% (requires the timestamp format): Will be replaced with the timestamp of the log message.
                    • %level%: Will be replaced with the severity level (e.g., info, warn, error) of the log message.
                    • %label% (requires the label format): Will be replaced with the configured label of the process.
                    • %module%: Will be replaced with the module name that logged the message.
                    • %message%: Will be replaced with the actual message.
                    • %metadata%: Will be replaced with the string representation of additional logging arguments.

                    You can override this template by changing the caliper-logging-template setting key, for example, from the command line: --caliper-logging-template=\"%time%: %message%\"

                    Note:
                    1. Do not forget the two enclosing quotes, since the template can contain spaces!
                    2. This template is applied after every format has been applied!
                    3. Adding spaces and different brackets this way is fine for simple coloring scenarios (or when coloring is disabled). However, when coloring the entire log message (or just parts that should be surrounded with additional characters), the result looks inconsistent when formatted this way. See the Tips & Tricks section for advanced message formatting scenarios.
                    "},{"location":"concepts/logging-control/#applying-formats","title":"Applying formats","text":"

                    The logging subsystem relies on winston\u2019s format mechanism to further modify the log messages. The corresponding settings are under the caliper.logging.formats property.

                    Each of these formats can be easily disabled by setting its property to false. For example, to disable the colorize format, set its corresponding caliper.logging.formats.colorize property to false, for example, from the command line: --caliper-logging-formats-colorize=false

                    Similarly, any sub-property of a format can be easily overridden. For example, changing the caliper.logging.formats.colorize.colors.info property from the command line: --caliper-logging-formats-colorize-colors-info=blue

                    The following formats and their options (sub-properties) are supported.

                    Note

                    the different formats are applied in the order they are presented, which is important (see the Tips & Tricks section for the reason).

                    "},{"location":"concepts/logging-control/#timestamp","title":"Timestamp","text":"

                    Adds the timestamp to the message in the specified format. The format string must conform to the rules of the fecha package.

                    For example: --caliper-logging-formats-timestamp=\"YYYY.MM.DD-HH:mm:ss.SSS\"

                    Note

                    the format makes the timestamp attribute available in the message, thus it can be referenced in the message template, or in other formats that can access message attributes.

                    "},{"location":"concepts/logging-control/#label","title":"Label","text":"

                    Adds a custom label to the message. This is useful for differentiating multiple Caliper instances (or the distributed client instances) after collecting their logs.

                    For example: --caliper-logging-formats-label=\"caliper-test-1\"

                    Note

                    the format makes the label attribute available in the message, thus it can be referenced in the message template, or in other formats that can access message attributes.

                    "},{"location":"concepts/logging-control/#json","title":"JSON","text":"

                    Outputs the messages as JSON strings. Useful for file-based logs that will be processed automatically by another tool. The format accepts a space sub-property as an option, which corresponds to the space parameter of the JSON.stringify function.

                    For example: --caliper-logging-formats-json=\"{space:0}\"

                    Note:
                    1. Enabling this format is easier from a configuration file (see the sketch below, as well as the Tips & Tricks section).
                    2. Setting space to a non-zero number will effectively format the JSON output with indentations on multiple lines. This could \u201cspam\u201d the console a bit (not a problem for log files, unless you care about the extra newlines).
                    3. If this format is enabled, the rest of the formats won\u2019t be applied, since their purpose is mainly to make console logs more readable.
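
                    A minimal sketch of enabling the JSON format from a configuration file, following the caliper.logging.formats hierarchy described above (the space value is illustrative):

                    caliper:\n  logging:\n    formats:\n      # Replaces the console-oriented formats with JSON output\n      json:\n        space: 2\n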
                    "},{"location":"concepts/logging-control/#padding","title":"Padding","text":"

                    Makes every log level string the same length, i.e., adds an extra space after \"info\" and \"warn\" to make them the same length as \"error\" and \"debug\".

                    For example: --caliper-logging-formats-pad=true

                    "},{"location":"concepts/logging-control/#align","title":"Align","text":"

                    Prepends the message part of the log with a tabulator (\"\\t\") to align the messages of different logs in the same place.

                    For example: --caliper-logging-formats-align=true

                    Note

                    if the message format contains other information with variable lengths (e.g., the module name), it can cause misaligned messages. So this is just a \u201cbest effort\u201d format to make console messages more readable.

                    "},{"location":"concepts/logging-control/#attribute-format","title":"Attribute format","text":"

                    Defines string formatting options for the different attributes of a message. A \u201cformat string\u201d can be provided for each message attribute that will \u201creformat\u201d its value. The format string can use the %attribute% placeholder to reference the original value.

                    A format string can be specified for the following message attributes:

                    • timestamp
                    • level
                    • label
                    • module
                    • message
                    • metadata

                    For example, to customize the level information of the log (enclose it in the LEVEL[<level>] string):

                    --caliper-logging-formats-attributeformat-level=\"LEVEL[%attribute%]\"

                    Note

                    if the attribute is not a string (which can be the case for the \u201cmetadata\u201d attribute), then the attribute value is first converted to a string using JSON.stringify, and only then inserted into the format string.

                    "},{"location":"concepts/logging-control/#colorize","title":"Colorize","text":"

                    Applies color coding for the different attributes of a message. Enabling/disabling coloring is specified on an attribute basis. The following sub-properties can be set to true/false to enable/disable coloring for the corresponding attribute:

                    • timestamp
                    • level
                    • label
                    • module
                    • message
                    • metadata
                    • all: setting it to true enables coloring for every attribute

                    For example, to colorize every part of the message: --caliper-logging-formats-colorize-all=true

                    Additionally, the format exposes a colors attribute, which contains coloring information for the info, error, warn and debug levels. The value of a level can be set to colors and styles provided by the colors package. To apply multiple styles, separate the values with a space.

                    For example, to really highlight error-level logs: --caliper-logging-formats-colorize-colors-error=\"white bgRed bold italic\"

                    Note

                    the colors package offers some exotic styles which seem tempting at first, but don\u2019t overdo it, for the sake of your eyes. Keep it simple.

                    "},{"location":"concepts/logging-control/#configuring-logging-targets","title":"Configuring logging targets","text":"

                    The source and target(s) of log messages are decoupled, thanks to the transport mechanism of winston. This means that a log message can be easily logged to multiple places, like the console, or different log files. Moreover, this is completely transparent to the module generating the log message!

                    The different targets are specified under the caliper.logging.targets property. The caliper.logging.targets section takes the following general form:

                    caliper:\n  logging:\n    targets:\n      mylogger1:\n        target: console\n        enabled: true\n        options:\n          # console target-specific options\n      mylogger2:\n        target: file\n        enabled: true\n        options:\n          # file target-specific options\n

                    Each subproperty of caliper.logging.targets is an arbitrary name for the given logging target (e.g., mylogger1, mylogger2, etc.).

                    Each target must specify the following properties:

                    • target: the identifier of a supported target. See the table below.
                    • enabled: indicates whether the target is enabled. Defaults to true if omitted.
                    • options: this object will be given as-is to the specific winston transport as options. See the table below for the supported options of each transport.

                    The following target values (i.e., transports) are supported. Click on the links for the official documentation of each transport.

                    • console: Console Transport
                    • file: File Transport
                    • daily-rotate-file: Daily Rotating File Transport

                    "},{"location":"concepts/logging-control/#disabling-loggers","title":"Disabling loggers","text":"

                    Even though the setting keys/properties of the caliper.logging.targets section cannot be overridden one-by-one (unlike the properties in the caliper.logging.formats section), the enabled property is an exception. To easily disable a logger, set its enabled property to false (using the target\u2019s name in the property hierarchy).

                    For example, to disable the mylogger1 target, the following approaches are available:

                    • From the command line: --caliper-logging-targets-mylogger1-enabled=false
                    • From an environment variable: export CALIPER_LOGGING_TARGETS_MYLOGGER1_ENABLED=false

                    Note

                    you must use lower-case letters (and/or digits) in your target name for this to work!
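
                    The same override can also live in a configuration file; a minimal sketch (the config-file approach is expanded in the next section):

                    caliper:\n  logging:\n    targets:\n      mylogger1:\n        enabled: false\n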

                    "},{"location":"concepts/logging-control/#overriding-logger-target-settings","title":"Overriding logger target settings","text":"

                    But what if you would like to modify one of the options of a transport? You can use a configuration file for that!

                    For the next example, we will disable the default file logger, modify the logging level of the console target, and also add a new daily rotating file logger. We can do all of this with a single configuration file.

                    caliper:\n  logging:\n    targets:\n      console:\n        options:\n          # we don't care about info level messages anymore\n          level: warn\n      file:\n        # we disable this\n        enabled: false\n      rotatingfile:\n        target: daily-rotate-file\n        # enabled by default\n        options:\n          # we log every message this way\n          level: debug\n          # start a new log file every hour\n          datePattern: 'YYYY-MM-DD-HH'\n          # compress old log files after rotating\n          zippedArchive: true\n          # include the hour-precision date in the file names\n          filename: 'caliper-%DATE%.log'\n          # options for opening the file for writing\n          options:\n            # append mode\n            flags: a\n            # make the file readable/writable by anyone\n            mode: 0666\n

                    If you save the above content as caliper.yaml in your workspace directory, then Caliper will pick it up automatically.

                    Note

                    Some remarks about the above file content:

                    1. We only set the properties we wanted to override. The default configuration file will be merged with the above configuration file, the values in the latter taking precedence.
                    2. The provided options for a transport are not verified by Caliper. They are simply passed to the specific transport. It is your responsibility to configure the transport the right way.
                    3. We could have disabled the file logger also from the command line, or from an environment variable. The reason we did it from a config file is explained in the Tips & tricks section.
                    "},{"location":"concepts/logging-control/#creating-your-own-loggers","title":"Creating your own loggers","text":"

                    The different modules of Caliper will automatically use the configured targets for logging. Moreover, your user test modules can also create logger instances to log runtime events related to your business logic.

                    To create your own logger instance, use the following API:

                    const logger = require('@hyperledger/caliper-core').CaliperUtils.getLogger('my-module');\n\n// ...\n\nlogger.debug('My custom debug message', metadataObject1, metadataObject2);\n

                    Once a logger instance is created, it exposes the usual info, warn, debug and error functions that each take as parameter a log message and optional objects, considered as \u201cmetadata\u201d.

                    This \u201cmetadata\u201d is especially useful for debug level logs. When you perform an operation based on a complex input parameter/object, you can log the following at the beginning of your function:

                    function complexCalculation(complexInput) {\n    logger.debug('Starting complex calculation. Input: ', complexInput);\n    // complex calculation\n}\n

                    The \u201cmetadata\u201d will appear at the place of the %metadata% placeholder, as discussed in the message template section.

                    Note

                    Passing large metadata objects can hurt the performance of logging if done in a loop/hot path. Only use \u201cmetadata\u201d logging for debug messages, since the debug level is usually switched off in production code.

                    "},{"location":"concepts/logging-control/#tips-tricks","title":"Tips & tricks","text":""},{"location":"concepts/logging-control/#the-format-pipeline","title":"The format pipeline","text":"

                    Winston formats are a powerful feature that allow the arbitrary manipulation of log messages. From the user\u2019s perspective, a log message is a simple string displayed on the console, or saved in a file. However, to fully utilize the logging styles described in this documentation, it might help knowing what really happens under the hood.

                    Note

                    in the remainder of this section, we\u2019ll refer to log messages as LOG.

                    LOG can be considered an item/object that is generated when issuing a call to logger.info(...) or similar functions. A LOG can have several attributes attached to it. Every LOG has the level and message attributes, containing the severity and the \u201cdescription\u201d of LOG. Additionally, Caliper automatically adds the module attribute to LOGs of every logger created through the Caliper API, denoting the name of the module that issued the log.

                    Let\u2019s introduce the format pipeline through an example.

                    "},{"location":"concepts/logging-control/#assumptions","title":"Assumptions","text":"

                    Let\u2019s assume that the following caliper.logging configuration is used:

                    template: '%timestamp%%level%%label%%module%%message%%metadata%'\nformats:\n    timestamp: 'YYYY.MM.DD-HH:mm:ss.SSS'\n    label: caliper\n    json: false\n    pad: true\n    align: false\n    attributeformat:\n        level: ' %attribute%'\n        label: ' [%attribute%]'\n        module: ' [%attribute%] '\n        metadata: ' (%attribute%)'\n    colorize:\n        all: true\n        colors:\n            info: green\n            error: red\n            warn: yellow\n            debug: grey\n
                    This means that the following formats will be applied to every LOG:

                    • module (automatically added by Caliper)
                    • timestamp
                    • label
                    • padding
                    • attribute formats
                    • colorizing
                    • template substitution

                    Furthermore, let\u2019s assume that the following code initiates the LOG:

                    const logger = require('@hyperledger/caliper-core').CaliperUtils.getLogger('my-module');\n\n// ...\n\nlogger.info('Doing operation X with:', 'someSetting', 'anotherSetting');\n
                    "},{"location":"concepts/logging-control/#the-life-of-a-log","title":"The life of a LOG","text":"

                    The logger.info call generates the initial LOG with the following attributes:

                    level: 'info'\nmessage: 'Doing operation X with:'\n

                    Before LOG enters the format pipeline, Caliper also adds the module name, and collects the additional parameters as metadata. Now LOG has the following attributes:

                    level: 'info'\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\n

                    This is the initial LOG entity that enters the format pipeline. Every enabled format is \u201cjust\u201d a transformation on the attributes of LOG. A format can manipulate the value of an existing attribute and/or add/remove arbitrary attributes.

                    The first step of the pipeline is the timestamp format. This adds the timestamp attribute containing the current time, in the specified format. After this step, LOG looks like this:

                    level: 'info'\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\ntimestamp: '2019.10.07-12:45:47.962'\n

                    The next step is the label format, which adds the label attribute with the specified value (caliper, in this case):

                    level: 'info'\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\ntimestamp: '2019.10.07-12:45:47.962'\nlabel: 'caliper'\n

                    The next step is the padding format, which ensures that every logging level string has the same length. This means that an extra space is appended at the end of the level attribute:

                    level: 'info '\nmessage: 'Doing operation X with:'\nmodule: 'my-module'\nmetadata: ['someSetting', 'anotherSetting']\ntimestamp: '2019.10.07-12:45:47.962'\nlabel: 'caliper'\n

                    The next step is the attribute formatter. This formatter is configured to modify multiple attributes of LOG, based on a string template:

                    • level: add a space before it
                    • label: enclose in [] and add a space before it
                    • module: enclose in [] and add a space before and after it
                    • metadata: enclose in () and add a space before it

                    After these transformations, LOG looks like the following:

                    level: ' info '\nmessage: 'Doing operation X with:'\nmodule: ' [my-module] '\nmetadata: ' ([\"someSetting\", \"anotherSetting\"])'\ntimestamp: '2019.10.07-12:45:47.962'\nlabel: ' [caliper]'\n

                    Note

                    Some remarks:

                    1. metadata was an Array, not a string, so it was stringified before the formatting was applied.
                    2. message and timestamp are unchanged.

                    The next step is the colorize format, which adds certain color/style codes to the configured attribute values. Since all is set to true, and the level of LOG is info, every attribute is surrounded with the color code for green (denoted by <green> for the sake of readability):

                    level: '<green> info <green>'\nmessage: '<green>Doing operation X with:<green>'\nmodule: '<green> [my-module] <green>'\nmetadata: '<green> ([\"someSetting\", \"anotherSetting\"])<green>'\ntimestamp: '<green>2019.10.07-12:45:47.962<green>'\nlabel: '<green> [caliper]<green>'\n

                    The last step in the pipeline (since the JSON format is disabled) is substituting the attributes into the logging template to create the final message that will appear in the console and in the file. The result is the concatenation of LOG\u2019s attributes in the following order:

                    1. timestamp
                    2. level
                    3. label
                    4. module
                    5. message
                    6. metadata

                    Omitting the color code for the sake of readability, this results in:

                    2019.10.07-12:45:47.962 info  [caliper] [my-module] Doing operation X with: ([\"someSetting\", \"anotherSetting\"])\n

                    Note

                    try adding other characters to the template string, and then be surprised that they are not colorized with the rest of the line. Actually, this is not surprising at all: the template string is \u201cevaluated\u201d after the colorize format, and since these extra characters are not part of any attribute of LOG, they won\u2019t be colorized.

                    "},{"location":"concepts/logging-control/#use-a-configuration-file","title":"Use a configuration file","text":"

                    Logging settings are usually determined by your log analysis requirements. This means that once you settle on some logging style and targets, those settings will rarely change.

                    To this end, the ability to override the logging style settings from the command line or from environment variables is really just a convenience feature. Once you have found your ideal settings, it is worth recording them in a configuration file.

                    The easiest way to do that is with a project-level configuration file. If you name the following file caliper.yaml and place it in your workspace root, then Caliper will automatically apply the settings.

                    Note

                    there are other ways to load a configuration file, as discussed in the runtime configuration page.

                    caliper:\n  logging:\n    # no need for timestamp and label\n    template: '%level% [%module%]: %message% %meta%'\n    formats:\n      # color codes look ugly in log files\n      colorize: false\n      # don't need these, since won't appear in the template\n      label: false\n      timestamp: false\n    targets:\n      file:\n        options:\n          # bump the log level from debug to warn, only log the critical stuff in this file\n          level: warn\n          filename: 'critical.log'\n      rotatingfile:\n        target: daily-rotate-file\n        enabled: true\n        options:\n          level: debug\n          datePattern: 'YYYY-MM-DD-HH'\n          zippedArchive: true\n          filename: 'debug-%DATE%.log'\n          options:\n            flags: a\n            mode: 0666\n
                    "},{"location":"concepts/logging-control/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/rate-controllers/","title":"Rate Controllers","text":"

                    The rate at which transactions are input to the blockchain system is a key factor within performance tests. It may be desired to send transactions at a specified rate or follow a specified profile. Caliper permits the specification of custom rate controllers to enable a user to perform testing under a custom loading mechanism. A user may specify their own rate controller or use one of the default options:

                    • Fixed rate
                    • Fixed feedback rate
                    • Fixed load
                    • Maximum rate
                    • Linear rate
                    • Composite rate
                    • Zero rate
                    • Record rate
                    • Replay rate

                    For implementing your own rate controller, refer to the Adding Custom Controllers section.

                    "},{"location":"concepts/rate-controllers/#fixed-rate","title":"Fixed rate","text":"

                    The fixed rate controller is the most basic controller, and also the default option if no controller is specified. It will send input transactions at a fixed interval that is specified as TPS (transactions per second).

                    "},{"location":"concepts/rate-controllers/#options-and-use","title":"Options and use","text":"

                    The fixed-rate controller can be specified by setting the rate controller type to the fixed-rate string.

                    Controller options include:

                    • tps: the rate at which transactions are cumulatively sent to the SUT by all workers

                    The fixed rate controller, driving at 10 TPS, is specified through the following controller option:

                    {\n  \"type\": \"fixed-rate\",\n  \"opts\": {\n    \"tps\" : 10\n  }\n}\n
                    "},{"location":"concepts/rate-controllers/#fixed-feedback-rate","title":"Fixed feedback rate","text":"

                    The fixed feedback rate controller is an extension of the fixed rate controller: it also initially sends input transactions at a fixed interval. However, when the number of unfinished transactions exceeds a multiple of the defined per-worker transaction load, it temporarily stops sending input transactions by sleeping for a prolonged period of time.

                    Controller options include:

                    • tps: the rate at which transactions are cumulatively sent to the SUT by all workers
                    • transactionLoad: the maximum transaction load on the SUT at which workers will pause sending further transactions

                    The fixed feedback rate controller, driving at 100 TPS with a backlog limit of 100 unfinished transactions per worker, is specified through the following controller option:

                    {\n  \"type\": \"fixed-feedback-rate\",\n  \"opts\": {\n      \"tps\" : 100,\n      \"transactionLoad\": 100\n  }\n}\n
                    "},{"location":"concepts/rate-controllers/#fixed-load","title":"Fixed Load","text":"

                    The fixed load rate controller drives the tests at a target load (a backlog of transactions). This controller aims to maintain a defined backlog of transactions within the system by modifying the driven TPS. The result is the maximum possible TPS for the system whilst maintaining the pending transaction load.

                    "},{"location":"concepts/rate-controllers/#options-and-use_1","title":"Options and use","text":"

                    The fixed-load controller can be specified by setting the rate controller type to the fixed-load string.

                    Controller options include:

                    • startTps: the initial rate at which transactions are cumulatively sent to the SUT by all workers
                    • transactionLoad: the number of transactions being processed by the SUT that is to be maintained

                    The fixed load controller, aiming to maintain a SUT transaction load of 5, with a starting TPS of 100, is specified through the following controller option:

                    {\n  \"type\": \"fixed-load\",\n  \"opts\": {\n    \"transactionLoad\": 5,\n    \"startTps\": 100\n  }\n}\n
                    "},{"location":"concepts/rate-controllers/#maximum-rate","title":"Maximum rate","text":"

                    The maximum rate controller is a controller for driving the workers to their maximum achievable rate without overloading the SUT. This controller will aim to maximize the driven TPS for the worker by ramping up the driven TPS and backing off again when a drop in TPS is witnessed; such drops are indicative of an overloaded system.

                    The achieved TPS is evaluated between txUpdate cycles, since this is the point at which TPS results are made available. A minimum sample interval that ensures settling of TPS rates should be considered for enhanced controller stability.

                    Please note that the action of the controller is to slowly ramp to the maximum achievable rate for each worker until a threshold is reached, meaning that there will be a significant warm up phase that may skew averaged results for the round. It is recommended to investigate achievable results using Prometheus queries and/or Grafana visualization.

                    "},{"location":"concepts/rate-controllers/#options-and-use_2","title":"Options and use","text":"

                    The maximum rate controller can be specified by setting the rate controller type to the maximum-rate string.

                    Controller options include:

                    • tps: the starting TPS
                    • step: the TPS increase for each interval. Note that on \u201cback-off\u201d this step size will automatically be reduced before re-attempting a TPS increase.
                    • sampleInterval: the minimum time between steps to ensure settling of achieved TPS rates
                    • includeFailed: boolean flag to indicate if the achieved TPS analysis within the controller is to include failed transactions (default true)

                    The maximum rate controller, with a starting TPS of 100, a TPS step size of 5, and a minimum sample interval of 20 seconds, is specified through the following controller option:

                    {\n  \"type\": \"maximum-rate\",\n  \"opts\": {\n    \"tps\": 100,\n    \"step\": 5,\n    \"sampleInterval\": 20,\n    \"includeFailed\": true\n  }\n}\n
                    "},{"location":"concepts/rate-controllers/#linear-rate","title":"Linear rate","text":"

                    Exploring the performance limits of a system usually consists of performing multiple measurements with increasing load intensity. However, finding the tipping point of the system this way is not easy; it is more like a trial-and-error method.

                    The linear rate controller can gradually (linearly) change the TPS rate between a starting and finishing TPS value (both in increasing and decreasing manner). This makes it easier to find the workload rates that affect the system performance in an interesting way.

                    The linear rate controller can be used in both duration-based and transaction number-based rounds.

                    "},{"location":"concepts/rate-controllers/#options-and-use_3","title":"Options and use","text":"

                    The linear rate controller can be specified by setting the rate controller type to the linear-rate string.

                    Controller options include:

                    • startingTps: the rate at which transactions are cumulatively sent to the SUT by all workers at the start of the round
                    • finishingTps: the rate at which transactions are cumulatively sent to the SUT by all workers at the end of the round

                    The following example specifies a rate controller that gradually changes the transaction load from 25 TPS to 75 TPS during the benchmark round.

                    {\n  \"type\": \"linear-rate\",\n  \"opts\": {\n    \"startingTps\": 25,\n    \"finishingTps\": 75\n    }\n}\n

                    Note

                    similarly to the fixed rate controller, this controller also divides the workload between the available clients, so the specified rates in the configuration are cumulative rates, and not the rates of individual clients. Using the above configuration with 5 clients results in clients that start at 5 TPS and finish at 15 TPS. Together they generate a [25-75] TPS load.

                    "},{"location":"concepts/rate-controllers/#composite-rate","title":"Composite rate","text":"

                    A benchmark round in Caliper is associated with a single rate controller. However, a single rate controller is rarely sufficient to model advanced client behaviors. Moreover, implementing new rate controllers for such behaviors can be cumbersome and error-prone. Most of the time a complex client behavior can be split into several, simpler phases.

                    Accordingly, the composite rate controller enables the configuration of multiple \u201csimpler\u201d rate controllers in a single round, promoting the reusability of existing rate controller implementations. The composite rate controller will automatically switch between the given controllers according to the specified weights (see the configuration details below).

                    "},{"location":"concepts/rate-controllers/#options-and-use_4","title":"Options and use","text":"

                    The composite rate controller can be specified by setting the rate controller type to the composite-rate string.

                    Controller options include:

                    • weights: an array of \u201cnumber-like\u201d values (explicit numbers or numbers as strings) specifying the weights associated with the rate controllers defined in the rateControllers property.

                    The weights do not necessarily have to sum to 1, since they will eventually be normalized (only their relative proportions matter). This means that the weights can be specified in the manner that is most intuitive for the given configuration. For example, the weights can correspond to durations, numbers of transactions or ratios.

                    In the example below, the weights correspond to ratios (2:1:2). The exact meaning of the weights is determined by whether the benchmark round is duration-based or transaction number-based. If the controller definition below is used in a round with a duration of 5 minutes, then in the first 2 minutes the transactions will be submitted at 100 TPS, then at 300 TPS for the next minute, and at 200 TPS for the last 2 minutes of the round.

                    Note that 0 weights are also allowed in the array. Setting the weight of one or more controllers to 0 is a convenient way to \u201cremove/disable\u201d those controllers without actually removing them from the configuration file.

                    • rateControllers: an array of arbitrary rate controller specifications. See the documentation of the individual rate controllers on how to configure them. The number of specified rate controllers must equal the number of specified weights.

                    Note that, technically, composite rate controllers can be nested to form a hierarchy. However, using a composite rate controller incurs an additional execution overhead in the rate control logic. Keep this in mind before specifying a deep hierarchy of composite rate controllers, or just flatten the hierarchy to a single level.

                    • logChange: a boolean value indicating whether the switches between the specified rate controllers should be logged or not.

                    For example, the definition of a square wave function (with varying amplitude) as the transaction submission rate is as easy as switching between fixed rate controllers with different TPS settings:

                    {\n  \"type\": \"composite-rate\",\n  \"opts\": {\n    \"weights\": [2, 1, 2],\n    \"rateControllers\": [\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 100}\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 300}\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 200}\n      }\n    ],  \n    \"logChange\": true\n  }\n}\n

                    Important! The existence of the composite rate controller is almost transparent to the specified \u201csub-controllers.\u201d This is achieved by essentially placing the controllers in a \u201cvirtualized\u201d round, i.e., \u201clying\u201d to them about:

                    • the duration of the round (for duration-based rounds),
                    • the total number of transactions to submit (for transaction number-based rounds),
                    • the starting time of the round, and
                    • the index of the next transaction to submit.

                    The results of recently finished transactions are propagated to the sub-controllers as-is, so during its first few calls a newly activated sub-controller can receive recent results that don\u2019t belong to its virtualized round.

                    This virtualization does not affect the memoryless controllers, i.e., the controllers whose control logic does not depend on global round properties or past transaction results. However, other controllers might exhibit some strange (but hopefully transient) behavior due to this \u201cvirtualized\u201d round approach. The logic of the PID controller, for example, depends on the transaction backlog.

                    "},{"location":"concepts/rate-controllers/#zero-rate","title":"Zero rate","text":"

                    This controller stops the workload generation for the duration of the round.

                    "},{"location":"concepts/rate-controllers/#options-and-use_5","title":"Options and use","text":"

                    Using the controller on its own for a round is meaningless. However, it can be used as a building block inside a composite rate controller. The zero rate controller can be used only in duration-based rounds!

                    {\n  \"type\": \"composite-rate\",\n  \"opts\": {\n    \"weights\": [30, 10, 10, 30],\n    \"rateControllers\": [\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 100}\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 500}\n      },\n      {\n        \"type\": \"zero-rate\",\n        \"opts\": { }\n      },\n      {\n        \"type\": \"fixed-rate\",\n        \"opts\": {\"tps\" : 100}\n      }\n    ],  \n    \"logChange\": true\n  }\n}\n

                    Let\u2019s assume that the above example is placed in a round definition with an 80-second duration (note the intuitive specification of the weights). In this case, an initial 30-second normal workload is followed by a 10-second intensive workload, which is followed by a 10-second cooldown period, etc.

                    The controller is identified by the zero-rate string as the value of the type property and requires no additional configuration.

                    "},{"location":"concepts/rate-controllers/#record-rate","title":"Record rate","text":"

                    This rate controller serves as a decorator around another (arbitrary) controller. Its purpose is to record the times (relative to the start of the round) when each transaction was submitted, i.e., when the transaction was \u201cenabled\u201d by the \u201csub-controller.\u201d

                    The following example records the times when the underlying fixed rate controller enabled the transactions (for details, see the available options below the example):

                    {\n  \"type\": \"record-rate\",\n  \"opts\": {\n    \"rateController\": {\n      \"type\": \"fixed-rate\",\n      \"opts\": {\"tps\" : 100}\n    },\n    \"pathTemplate\": \"../tx_records_client<C>_round<R>.txt\",\n    \"outputFormat\": \"TEXT\",\n    \"logEnd\": true\n  }\n}\n

                    The record rate controller can be specified by setting the rate controller type to the record-rate string. The available options (opts property) are the following:

                    • rateController: the specification of an arbitrary rate controller.
                    • pathTemplate: the template for the file path where the recorded times will be saved. The path can be either an absolute path or relative to the root Caliper directory.

                    The template can (and should) contain special \u201cvariables/placeholders\u201d that can refer to special environment properties (see the remarks below). The available placeholders are the following:

                    • <C>: placeholder for the 1-based index of the current client that uses this rate controller.
                    • <R>: placeholder for the 1-based index of the current round that uses this rate controller.

                    • outputFormat: optional. Determines the format in which the recording will be saved. Defaults to \"TEXT\". The currently supported formats are the following:
                      • \"TEXT\": each recorded timing is encoded as text on separate lines.
                      • \"BIN_BE\": binary format with Big Endian encoding.
                      • \"BIN_LE\": binary format with Little Endian encoding.
                    • logEnd: optional. Indicates whether to log that the recordings are written to the file(s). Defaults to false.

                    Template placeholders: since Caliper provides a concise way to define multiple rounds and multiple workers with the same behavior, it is essential to differentiate between the recordings of the workers and rounds. Accordingly, the output file paths can contain placeholders for the round and worker indices that will be resolved automatically at each worker in each round. Otherwise, every worker would write the same file, resulting in a serious conflict between timings and transaction IDs.

                    Text format: the rate controller saves the recordings in the following format (assuming a constant 10 TPS rate and ignoring the noise in the actual timings), row i corresponding to the ith transaction:

                    100\n200\n300\n...\n

                    Each line contains a single value: the time, in milliseconds from the start of the round, at which the corresponding transaction was submitted.

                    Binary format: Both binary representations encode X recordings as a series of X+1 UInt32 numbers (1 number for the array length, the rest for the array elements), either in Little Endian or Big Endian encoding:

                    Offset: |0      |4      |8      |12      |16      |...     \nData:   |length |1st    |2nd    |3rd     |4th     |...      \n
                    • The first entry (length) indicates the number of recorded transactions.
                    • Each subsequent UInt32 entry (in Big Endian or Little Endian encoding) is the time, in milliseconds from the start of the round, at which the corresponding transaction was submitted.

                    For example, a recording of 3 transactions submitted at 100 ms, 200 ms, and 300 ms from the start of the round would look like:

                    Offset: |0      |4      |8      |12     \nData:   |3      |100    |200    |300    \n
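
                    To illustrate the layout, here is a minimal NodeJS sketch for decoding a \"BIN_BE\" recording (the file name is illustrative):

                    const fs = require('fs');\n\n// read the whole recording into a Buffer\nconst buffer = fs.readFileSync('tx_records_client1_round1.bin');\n\n// the first UInt32 holds the number of recorded timings\nconst length = buffer.readUInt32BE(0);\n\n// the remaining UInt32 values are submission times in milliseconds,\n// measured from the start of the round\nconst timings = [];\nfor (let i = 0; i < length; i++) {\n    timings.push(buffer.readUInt32BE(4 + i * 4));\n}\n\nconsole.log(timings); // e.g., [ 100, 200, 300 ]\n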
                    "},{"location":"concepts/rate-controllers/#replay-rate","title":"Replay rate","text":"

                    One of the most important aspects of a good benchmark is its repeatability, i.e., that it can be re-executed in a deterministic way whenever necessary. However, some benchmarks define the workload (e.g., user behavior) as a function of probabilistic distribution(s). This presents two problems from a practical point of view:

                    1. Repeatability: The random sampling of the given probability distribution(s) can differ between benchmark (re-)executions. This makes the comparison of different platforms questionable.
                    2. Efficiency: Sampling a complex probability distribution incurs an additional runtime overhead, which can limit the rate of the load, distorting the originally specified workload.

                    This rate controller aims to mitigate these problems by replaying a fixed transaction load profile that was created \u201coffline.\u201d This way the profile is generated once, outside of the benchmark execution, and can be replayed any time with the same timing constraints and minimal overhead.

                    A trivial use case of this controller is to play back a transaction recording created by the record controller. However, a well-formed trace file is the only requirement for this controller, hence any tool/method can be used to generate the transaction load profile.
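
                    For instance, a minimal sketch that generates a synthetic \"TEXT\" profile of roughly 100 TPS with some jitter (the file name and parameters are illustrative):

                    const fs = require('fs');\n\nconst timings = [];\nlet time = 0;\nfor (let i = 0; i < 1000; i++) {\n    // ~10 ms between transactions, with +/-1 ms of random jitter\n    time += 10 + (Math.random() * 2 - 1);\n    timings.push(Math.round(time));\n}\n\n// one submission time per line, in milliseconds from the start of the round\nfs.writeFileSync('tx_records_client1.txt', timings.join('\\n'));\n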

                    The following example specifies a rate controller that replays some client-dependent workload profiles (for details, see the available options below the example):

                    {\n  \"type\": \"replay-rate\",\n  \"opts\": {\n    \"pathTemplate\": \"../tx_records_client<C>.txt\",\n    \"inputFormat\": \"TEXT\",\n    \"logWarnings\": true,\n    \"defaultSleepTime\": 50\n    }\n}\n

                    The replay rate controller can be specified by setting the rate controller type to the replay-rate string. The available options (opts property) are the following:

                    • pathTemplate: the template for the file path where the transaction timings will be replayed from. The path can be either an absolute path or relative to the root Caliper directory.

                    The template can (and should) contain special \u201cvariables/placeholders\u201d that can refer to special environment properties (see the remarks at the record rate controller). The available placeholders are the following:

                    • <C>: placeholder for the 1-based index of the current client that uses this rate controller.
                    • <R>: placeholder for the 1-based index of the current round that uses this rate controller.

                    • inputFormat: optional. Determines the format in which the transaction timings are stored (see the details at the record rate controller). Defaults to \"TEXT\". The currently supported formats are the following:
                      • \"TEXT\": each recorded timing is encoded as text on separate lines.
                      • \"BIN_BE\": binary format with Big Endian encoding.
                      • \"BIN_LE\": binary format with Little Endian encoding.
                    • logWarnings: optional. Indicates whether to log that there are no more recordings to replay, so the defaultSleepTime is used between consecutive transactions. Defaults to false.
                    • defaultSleepTime: optional. Determines the sleep time between transactions for the case when the benchmark execution is longer than the specified recording. Defaults to 20 ms.
                    "},{"location":"concepts/rate-controllers/#about-the-recordings","title":"About the recordings:","text":"

                    Special care must be taken when using duration-based benchmark execution, as it is possible to issue more transactions than specified in the recording. A safety measure for this case is the defaultSleepTime option. This should only occur in the last few moments of the execution, affecting only a few transactions, which can be discarded before performing additional performance analyses on the results.

                    The recommended approach is to use transaction number-based round configurations, since the number of transactions to replay is known beforehand. Note that the number of workers affects the actual number of transactions submitted by a worker.

                    "},{"location":"concepts/rate-controllers/#adding-custom-controllers","title":"Adding Custom Controllers","text":"

                    It is possible to use rate controllers that are not built-in controllers of Caliper. When you specify the rate controller in the test configuration file (see the architecture documentation), you must set the type and opts attributes.

                    You can set the type attribute so that it points to your custom JS file that satisfies the following criteria:

                    1. The file/module exports a createRateController function that takes the following parameters:

                      1. A TestMessage parameter that is the object representation of the opts attribute set in the configuration file, and contains the custom settings of your rate controller.
                      2. A TransactionStatisticsCollector object that gives the rate controller access to the current worker transaction statistics.
                      3. A workerIndex parameter of type number that is the 0-based index of the worker process using this rate controller. The function must return an object (i.e., your rate controller instance) that satisfies the next criteria.
                    2. The object returned by createRateController must implement the /packages/caliper-core/lib/worker/rate-control/rateInterface.js interface, i.e., must provide the following async functions:

                      1. applyRateControl, for performing the actual rate control by \u201cblocking\u201d the execution (in an async manner) for the desired time.
                      2. end, for disposing of any acquired resources at the end of a round.

                    The following example is a complete implementation of a rate controller that doesn\u2019t perform any control, thus allowing transactions to be submitted as fast as the program execution allows. (Warning: run with many client processes, this implementation could easily overload a backend network, so use it with caution.)

                    /*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\n'use strict';\n\nconst RateInterface = require('path-to-caliper/caliper-core/lib/worker/rate-control/rateInterface.js');\n\n/**\n * Rate controller for allowing uninterrupted workload generation.\n *\n * @property {object} options The user-supplied options for the controller. Empty.\n */\nclass MyRateController extends RateInterface {\n    /**\n     * Initializes the rate controller instance.\n     * @param {TestMessage} testMessage The testMessage passed for the round execution\n     * @param {TransactionStatisticsCollector} stats The TX stats collector instance.\n     * @param {number} workerIndex The 0-based index of the worker node.\n     */\n    constructor(testMessage, stats, workerIndex) {\n        super(testMessage, stats, workerIndex);\n    }\n\n    /**\n     * Doesn't perform any rate control.\n     * @async\n     */\n    async applyRateControl() {\n        // no sleeping is needed, allow the transaction invocation immediately\n    }\n\n    /**\n     * Notify the rate controller about the end of the round.\n     * @async\n     */\n    async end() {\n        // nothing to dispose of\n    }\n}\n\n/**\n * Factory for creating a new rate controller instance.\n * @param {TestMessage} testMessage The start test message.\n * @param {TransactionStatisticsCollector} stats The TX stats collector instance.\n * @param {number} workerIndex The 0-based index of the worker node.\n *\n * @return {RateInterface} The new rate controller instance.\n */\nfunction createRateController(testMessage, stats, workerIndex) {\n    return new MyRateController(testMessage, stats, workerIndex);\n}\n\nmodule.exports.createRateController = createRateController;\n

                    Let\u2019s say you save this implementation into a file called maxRateController.js next to your Caliper directory (so they\u2019re on the same level). In the test configuration file you can set this rate controller (at its required place in the configuration hierarchy) the following way:

                    rateControl:\n  # relative path from the Caliper directory\n  - type: ../maxRateController.js\n    # empty options\n    opts:\n
                    "},{"location":"concepts/rate-controllers/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/runtime-config/","title":"Runtime Configuration","text":""},{"location":"concepts/runtime-config/#overview","title":"Overview","text":"

                    Caliper relies on the nconf package to provide a flexible and hierarchical configuration mechanism for runtime-related settings. Hierarchical configuration means that a runtime setting can be set or overridden from multiple sources/locations, and there is a priority order among them.

                    In general, a setting is a simple string key associated with some value. However, it is highly recommended to compose the keys in a way that follows the place of the module in the hierarchy that uses the setting. Consider the following key, for example:

                    caliper-fabric-timeout-invokeorquery

                    The key consists of several parts that make it easy to identify the purpose of the setting: it is used in Caliper, by the Fabric connector, and it is a timeout-related setting that specifies the timeout to use for transaction invocations or queries. Every setting key in Caliper follows the same convention.

                    The rule of thumb is to use lowercase letters (and possibly numbers), with the hierarchy denoted by dashes (-) as separators.

                    Every setting used by Caliper is prefixed with the caliper- string. The prefix serves as a namespace for the internal settings of Caliper modules. It also prevents name collisions, since the configuration mechanism parses every setting available from the various sources, some of which are intended for the underlying SDK modules or the workload modules, for example.

                    Note

                    For every available runtime setting, refer to the last section.

                    "},{"location":"concepts/runtime-config/#setting-sources","title":"Setting sources","text":"

                    Caliper supports the following sources/locations where runtime settings can be set/overridden, in priority order, starting with the highest priority:

                    1. Memory
                    2. Command line arguments
                    3. Environment variables
                    4. Project-level configuration file
                    5. User-level configuration file
                    6. Machine-level configuration file
                    7. Fallback/default configuration file

                    For simplicity, you can think of the above order as the following: the \u201ccloser\u201d the setting is set to its point of use, the higher the priority of the set value.

                    "},{"location":"concepts/runtime-config/#in-memory-settings","title":"In-memory settings","text":"

                    If some component (Caliper-related, or user provided) sets a setting during runtime (using the configuration API), then that value will have priority over any other source/location that might have also set the same setting.

                    The simple configuration API is provided by the ConfigUtil module of the caliper-core package. It exports a simple get and set method:

                    • get(key:string, fallbackValue:any) => any

                      Returns the value of the setting associated with the given key. If the setting is not set from any sources, then the fallbackValue is returned.

                    • set(key:string, value:any)

                      Sets the value for the settings associated with the given key. It will overwrite any other value set by other sources.

                    For example:

                    const { ConfigUtil } = require('@hyperledger/caliper-core');\n\n// retrieves a setting for your module, if not set, use some default\nconst shouldBeFast = ConfigUtil.get('mymodule-performance-shoudbefast', /*default:*/ true);\n\nif (shouldBeFast) { /* ... */ } else { /* ... */ }\n

                    The above code also shows how a plugin module can easily leverage Caliper\u2019s configuration mechanism. Since the mymodule-performance-shoudbefast setting is queried through the configuration API, setting it from various sources automatically became possible (see the next sections for details).
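
                    As a minimal sketch of the set counterpart, using the same illustrative setting key:

                    const { ConfigUtil } = require('@hyperledger/caliper-core');\n\n// programmatically override the setting; in-memory values take precedence\n// over every other configuration source\nConfigUtil.set('mymodule-performance-shoudbefast', false);\n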

                    Note

                    Thus, adding a flexible runtime setting to any module only requires querying that setting through the configuration API when you need it (with the desired default/fallback value).

                    "},{"location":"concepts/runtime-config/#command-line-arguments","title":"Command line arguments","text":"

                    If we wish to influence the behavior of third-party code (e.g., Caliper or a user callback module), we usually can\u2019t (or don\u2019t want to) overwrite the setting in the source code. A standard way of modifying the behavior of third-party/pre-packaged applications is to provide the settings as command line arguments.

                    Starting Caliper through the CLI, you can override runtime settings the following way:

                    caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --mymodule-performance-shoudbefast=true\n

                    The arguments will be converted to lower-case letters and every _ character will be replaced with -. So the above command can be written in a more user-friendly way:

                    caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --MyModule_Performance_ShoudBeFast=true\n

                    Both ways will result in the setting key mymodule-performance-shoudbefast associated with the boolean value true.

                    Note that nconf will automatically parse values of common types, so the true and false values will be parsed (and returned by get) as boolean values. This also holds for numbers (both integer and floating point).

                    Moreover, boolean values can be specified as flags, without explicitly setting the true or false value (note the no- prefix for the second case):

                    • Setting a key to true:

                        caliper launch manager \\\n      --caliper-workspace yourworkspace/ \\\n      --caliper-benchconfig yourconfig.yaml \\\n      --caliper-networkconfig yournetwork.yaml \\\n      --mymodule-performance-shoudbefast\n

                    • Setting a key to false (note the no- prefix):

                        caliper launch manager \\\n      --caliper-workspace yourworkspace/ \\\n      --caliper-benchconfig yourconfig.yaml \\\n      --caliper-networkconfig yournetwork.yaml \\\n      --no-mymodule-performance-shoudbefast\n
                      Command line arguments override the settings set in the locations discussed in the next sections.

                    "},{"location":"concepts/runtime-config/#environment-variables","title":"Environment variables","text":"

                    If Caliper is part of a scripted environment, then it would be cumbersome to modify the script to pass command line arguments to Caliper. The standard approach in these scenarios is to use environment variables.

                    The example setting can be set the following way using an environment variable:

                    export MYMODULE_PERFORMANCE_SHOUDBEFAST=true\n\n# calling some script containing the following command\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n

                    Note the standard notation of environment variable setting: upper-case letters separated by _ characters. Caliper performs the same transformation as with command line arguments: the variable names will be converted to lower-case letters and every _ character will be replaced with -. So the above setting will also result in the setting key mymodule-performance-shoudbefast associated with the boolean value true.

                    "},{"location":"concepts/runtime-config/#configuration-files","title":"Configuration files","text":"

                    Depending on the scenario, users may want to change multiple runtime settings. Using command line arguments and environment variables to change multiple settings can become cumbersome.

                    Using configuration files is a standard way of overriding multiple settings in a manageable way. Caliper provides multiple configuration \u201clocations\u201d where you can insert configuration files into the settings hierarchy. These locations also follow the \u201ccloser one wins\u201d semantic of the hierarchical configuration mechanism.

                    Moreover, YAML-based configuration files allow comments that make your configuration choices self-documenting and self-contained.

                    Note that no additional transformation is performed on the key names of a YAML file; they are simply concatenated with - to get a flat string key from the object hierarchy.

                    So the hierarchical setting

                    mymodule:\n  performance:\n    shouldbefast: true\n

                    will be parsed as the mymodule-performance-shouldbefast string key associated with the Boolean value true.

                    "},{"location":"concepts/runtime-config/#project-level","title":"Project-level","text":"

                    If you have a group of settings that are always overridden in your Caliper benchmark project, then it is recommended to define them as a project-level configuration file. This file will usually consist of a subset of settings defined in the default configuration file (and probably your custom settings associated with your custom user module).

                    The project-level configuration file can be included into the hierarchy in two ways:

                    • Define the overridden settings in the caliper.yaml file in the workspace directory
                    • Or set the path of the configuration file explicitly through the caliper-projectconfig setting key using one of the higher priority locations above (i.e., in-memory, command line argument or environment variable):
                      • The command line approach:
                        caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --Caliper-ProjectConfig mypath/project1-config.yaml\n
                      • The environment variable approach:
                        export CALIPER_PROJECTCONFIG=mypath/project1-config.yaml\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n

                    Note that project-level settings will override the settings defined by the locations of the next sections.
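                    As a sketch of the first approach, a workspace-level caliper.yaml could look like the following (the mymodule keys are the illustrative example setting used throughout this page):

                    caliper:\n  worker:\n    communication:\n      method: process\nmymodule:\n  performance:\n    shouldbefast: true\n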

                    "},{"location":"concepts/runtime-config/#user-level","title":"User-level","text":"

                    If you find yourself overriding the same settings for multiple Caliper benchmark projects, then it is recommended to extract the common settings into a user-level configuration file. To include a user-level configuration file into the hierarchy, specify its path through the caliper-userconfig settings key using one of the higher priority locations above (i.e., in-memory, command line argument, environment variable or the project-level configuration file):

                      • The command line approach:
                        caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --Caliper-UserConfig ~/.config/my-caliper-config.yaml\n
                      • The environment variable approach:
                        export CALIPER_USERCONFIG=~/.config/my-caliper-config.yaml\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n
                      • The configuration file approach (excerpt from the project-level configuration file):
                        caliper:\n  userconfig: ~/.config/my-caliper-config.yaml\n  # additional settings\n
                    "},{"location":"concepts/runtime-config/#machine-level","title":"Machine-level","text":"

                    If multiple users use the same workstation and want to share common settings across Caliper projects and users, then a machine-level configuration file can be included into the hierarchy by specifying its path through the caliper-machineconfig settings key using one of the higher priority locations above (i.e., command line argument, environment variable, project- or user-level configuration files):

                      • The command line approach:
                        caliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml \\\n    --Caliper-MachineConfig /etc/config/caliper.yaml\n
                      • The environment variable approach:
                        export CALIPER_MACHINECONFIG=/etc/config/caliper.yaml\ncaliper launch manager \\\n    --caliper-workspace yourworkspace/ \\\n    --caliper-benchconfig yourconfig.yaml \\\n    --caliper-networkconfig yournetwork.yaml\n
                      • The configuration file approach (excerpt from the project- or user-level configuration file):
                        caliper:\n  machineconfig: /etc/config/caliper.yaml\n  # additional settings\n

                    "},{"location":"concepts/runtime-config/#default-configuration","title":"Default configuration","text":"

                    A default/fallback configuration file is shipped with the Caliper-related packages that defines sensible fallback values and documentation for each available setting used by the Caliper modules. This configuration file has the lowest priority among the supported setting locations.

                    "},{"location":"concepts/runtime-config/#available-settings","title":"Available settings","text":"

                    Note

                    Always refer to the self-documenting default configuration file for the currently supported runtime configuration settings.

                    "},{"location":"concepts/runtime-config/#basic-settings","title":"Basic settings","text":"Key Description caliper-benchconfig Path to the benchmark configuration file that describes the test worker(s), test rounds and monitors. caliper-networkconfig Path to the network configuration file that contains information required to interact with the SUT. caliper-machineconfig The file path for the machine-level configuration file. Can be relative to the workspace. caliper-projectconfig The file path for the project-level configuration file. Can be relative to the workspace. caliper-userconfig The file path for the user-level configuration file. Can be relative to the workspace. caliper-workspace Workspace directory that contains all configuration information caliper-progress-reporting-enabled Boolean value for enabling transaction completion progress display by the Caliper manager process caliper-progress-reporting-interval Numeric value used to specify the caliper progress update frequency, in milliseconds"},{"location":"concepts/runtime-config/#binding-settings","title":"Binding settings","text":"Key Description caliper-bind-args The additional args to pass to the binding (i.e., npm install) command. caliper-bind-cwd The CWD to use for the binding (i.e., npm install) command. caliper-bind-file The path of a custom binding configuration file that will override the default one. caliper-bind-sut The binding specification of the SUT in the <SUT type>:<SDK version> format."},{"location":"concepts/runtime-config/#reporting-settings","title":"Reporting settings","text":"Key Description caliper-report-charting-hue The HUE value to construct the chart color scheme from. caliper-report-charting-scheme The color scheme method to use for producing chart colors. caliper-report-charting-transparency The transparency value [0..1] to use for the charts. caliper-report-options The options object to pass to fs.writeFile. caliper-report-path The absolute or workspace-relative path of the generated report file. caliper-report-precision Precision (significant digits) for the numbers in the report."},{"location":"concepts/runtime-config/#logging-settings","title":"Logging settings","text":"Key Description caliper-logging-formats-align Adds a tab delimiter before the messages to align them in the same place. caliper-logging-formats-attributeformat-\\ Specifies the formatting string for the log message attribute <attribute>. caliper-logging-formats-json Indicates that the logs should be serialized in JSON format. caliper-logging-formats-label Adds a specified label to every message. Useful for distributed worker scenario. caliper-logging-formats-pad Pads the log level strings to be the same length. caliper-logging-formats-timestamp Adds a timestamp to the messages with the specified format. caliper-logging-formats-colorize-all Indicates that all log message attributes must be colorized. caliper-logging-formats-colorize-\\ Indicates that log message attribute <attribute> must be colorized. caliper-logging-formats-colorize-colors-\\ Sets the color for the log messages with level <level>. caliper-logging-targets-\\-enabled Sets whether the target transport <target> is enabled or disabled. caliper-logging-template Specifies the message structure through placeholders."},{"location":"concepts/runtime-config/#worker-management-settings","title":"Worker management settings","text":"Key Description caliper-worker-communication-method Indicates the type of the communication between the manager and workers. 
caliper-worker-communication-address The address of the MQTT broker used for distributed worker management. caliper-worker-pollinterval The interval for polling for new available workers, in milliseconds. caliper-worker-remote Indicates whether the workers operate in distributed mode."},{"location":"concepts/runtime-config/#benchmark-phase-settings","title":"Benchmark phase settings","text":"Key Description caliper-flow-only-end Indicates whether to only perform the end command script in the network configuration file. caliper-flow-only-init Indicates whether to only perform the init phase of the benchmark. caliper-flow-only-install Indicates whether to only perform the smart contract install phase of the benchmark. caliper-flow-only-start Indicates whether to only perform the start command script in the network configuration file. caliper-flow-only-test Indicates whether to only perform the test phase of the benchmark. caliper-flow-skip-end Indicates whether to skip the end command script in the network configuration file. caliper-flow-skip-init Indicates whether to skip the init phase of the benchmark. caliper-flow-skip-install Indicates whether to skip the smart contract install phase of the benchmark. caliper-flow-skip-start Indicates whether to skip the start command script in the network configuration file. caliper-flow-skip-test Indicates whether to skip the test phase of the benchmark."},{"location":"concepts/runtime-config/#authentication-settings","title":"Authentication settings","text":"Key Description caliper-auth-prometheus-username Basic authentication username to use authenticate with an existing Prometheus server. caliper-auth-prometheus-password Basic authentication password to use authenticate with an existing Prometheus server. caliper-auth-prometheuspush-username Basic authentication username to use authenticate with an existing Prometheus Push Gateway. caliper-auth-prometheuspush-password Basic authentication password to use authenticate with an existing Prometheus Push Gateway."},{"location":"concepts/runtime-config/#fabric-connector-settings","title":"Fabric Connector settings","text":"

                    In the following list, 1.4 refers to a 1.4 SUT without caliper-fabric-gateway-enabled specified, 1.4Gateway refers to a 1.4 SUT with caliper-fabric-gateway-enabled specified, and All means that every SUT version supports the option.

                    • caliper-fabric-timeout-invokeorquery (All): The default timeout in seconds to use for invoking or querying transactions. Default is 60 seconds.
                    • caliper-fabric-gateway-enabled (1.4): Indicates whether to use the Fabric gateway-based SDK API for the 1.4 Fabric SUT. Default is false.
                    • caliper-fabric-gateway-localhost (1.4Gateway, 2.2): Indicates whether to convert discovered endpoints to localhost. Does not apply if discover is set to false in the network config. Default is true.
                    • caliper-fabric-gateway-querystrategy (1.4Gateway, 2.2): Sets the query strategy to use for 2.2 and 1.4 when the gateway is enabled. Default is Round Robin.
                    • caliper-fabric-gateway-eventstrategy (1.4Gateway, 2.2): Sets the event strategy to use for 2.2 and 1.4 when the gateway is enabled. Default is any in Invoker Organisation.
                    • caliper-fabric-latencythreshold (1.4): Determines the reported commit time of a transaction based on the given percentage of event sources.
                    • caliper-fabric-loadbalancing (1.4): Determines how automatic load balancing is applied.
                    • caliper-fabric-verify-proposalresponse (1.4): Indicates whether to verify the received proposal responses.
                    • caliper-fabric-verify-readwritesets (1.4): Indicates whether to verify that the read-write sets returned by the endorsers match.

                    "},{"location":"concepts/runtime-config/#supported-event-strategies","title":"Supported Event Strategies","text":"

                    A description of the different types of event strategy for both the 1.4 and 2.2 SUT can be found here.

                    To select an event strategy, set the caliper-fabric-gateway-eventstrategy property to one of the following:

                    • msp_all: MSPID_SCOPE_ALLFORTX
                    • msp_any: MSPID_SCOPE_ANYFORTX
                    • network_all: NETWORK_SCOPE_ALLFORTX
                    • network_any: NETWORK_SCOPE_ANYFORTX

                    For example, to have all peers in the network report that the transaction was committed, you would specify the following flag on the CLI:

                    --caliper-fabric-gateway-eventstrategy network_all\n

                    The default is msp_any.

                    "},{"location":"concepts/runtime-config/#supported-query-strategies","title":"Supported Query Strategies","text":"

                    A description of the different types of query strategy for both the 1.4 and 2.2 SUT can be found here.

                    To select a query strategy, set the caliper-fabric-gateway-querystrategy property to one of the following:

                    • msp_single: MSPID_SCOPE_SINGLE
                    • msp_round_robin: MSPID_SCOPE_ROUND_ROBIN

                    For example, to have queries evaluated by a single peer from the invoking organisation, you would specify the following flag on the CLI:

                    --caliper-fabric-gateway-querystrategy msp_single\n

                    The default is msp_round_robin.

                    "},{"location":"concepts/runtime-config/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"concepts/workload-module/","title":"Workload modules","text":""},{"location":"concepts/workload-module/#overview","title":"Overview","text":"

                    Workload modules are the essence of a Caliper benchmark since it is their responsibility to construct and submit TXs. Accordingly, workload modules implement the logic pertaining to your business, benchmark or user behavior. Think of the workload modules as the brain of an emulated SUT client, deciding what kind of TX to submit at the given moment.

                    "},{"location":"concepts/workload-module/#implementing-the-workload-module","title":"Implementing the workload module","text":"

                    Workload modules are Node.JS modules that expose a certain API. There are no further restrictions on the implementation, thus arbitrary logic (using further arbitrary components) can be implemented.

                    "},{"location":"concepts/workload-module/#the-api","title":"The API","text":"

                    Workload modules are loaded through factory functions, just like other pluggable modules in Caliper. Accordingly, a workload module implementation must export a single factory function, named createWorkloadModule:

                    /**\n * Create a new instance of the workload module.\n * @return {WorkloadModuleInterface}\n */\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n

                    The factory function must return an instance that implements the [WorkloadModuleInterface](https://github.com/hyperledger-caliper/caliper/blob/main/packages/caliper-core/lib/worker/workload/workloadModuleInterface.js) class. See the example for a complete implementation.

                    The interface contains the following three asynchronous functions:

                    "},{"location":"concepts/workload-module/#initializeworkloadmodule","title":"initializeWorkloadModule","text":"

                    The initializeWorkloadModule function is called by the worker processes before each round, providing contextual arguments to the module:

                    1. workerIndex (Number): The 0-based index of the worker instantiating the workload module.
                    2. totalWorkers (Number): The total number of workers participating in the round.
                    3. roundIndex (Number): The 0-based index of the currently executing round.
                    4. roundArguments (Object): The user-provided arguments for the round from the benchmark configuration file.
                    5. sutAdapter (BlockchainConnector): The connector of the underlying SUT (System Under Test).
                    6. sutContext (Object): The custom context object provided by the SUT connector.

                    This function is a good place to validate the workload module arguments provided by the benchmark configuration file. It's also good practice to perform here any preprocessing needed to ensure the fast assembly of TX contents later in the submitTransaction function.
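                    A minimal sketch of such validation (the accounts round argument and the pre-computed keys are hypothetical):

                    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n    // Fail fast on misconfiguration, before the round starts\n    if (typeof roundArguments.accounts !== 'number' || roundArguments.accounts <= 0) {\n        throw new Error(`MyWorkload requires a positive \"accounts\" argument, got: ${roundArguments.accounts}`);\n    }\n\n    // Pre-compute anything expensive here so that submitTransaction stays fast\n    this.accountKeys = Array.from({ length: roundArguments.accounts }, (_, i) => `account_${i}`);\n}\n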

                    "},{"location":"concepts/workload-module/#submittransaction","title":"submitTransaction","text":"

                    The submitTransaction function is the backbone of the workload generation. The worker process calls this function every time the rate controller enables the next TX. So it is vital to keep this function implementation as efficient as possible in order to keep up with high-frequency scheduling settings.

                    The function requires no parameters, but it is its responsibility to submit the TX through the connector API.

                    "},{"location":"concepts/workload-module/#cleanupworkloadmodule","title":"cleanupWorkloadModule","text":"

                    The cleanupWorkloadModule function is called at the end of the round, and can be used to perform any resource cleanup required by your workload implementation.

                    "},{"location":"concepts/workload-module/#simple-base-class","title":"Simple base class","text":"

                    Although directly implementing the interface is possible, Caliper provides a simple utility base class that implements the required interface and also performs some common housekeeping operations. Thus inheriting from the WorkloadModuleBase class can result in simpler implementations.

                    The base class provides the following utilities:

                    • Creates instance variables in the constructor that match the parameters of the initializeWorkloadModule function.
                    • Provides an implementation for the initializeWorkloadModule function, where it saves the received arguments into the instance variables.
                    • Provides a no-op implementation for the cleanupWorkloadModule function.

                    Inheriting from this base class only requires the implementation of the submitTransaction function by the user. Additionally, the initialization logic can be extended/overridden if necessary.

                    "},{"location":"concepts/workload-module/#example","title":"Example","text":"

                    A complete (albeit simple) example of a workload module implementation (mostly identical with the WorkloadModuleBase implementation):

                    'use strict';\n\nconst { WorkloadModuleInterface } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleInterface {\n    constructor() {\n        super();\n        this.workerIndex = -1;\n        this.totalWorkers = -1;\n        this.roundIndex = -1;\n        this.roundArguments = undefined;\n        this.sutAdapter = undefined;\n        this.sutContext = undefined;\n    }\n\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        this.workerIndex = workerIndex;\n        this.totalWorkers = totalWorkers;\n        this.roundIndex = roundIndex;\n        this.roundArguments = roundArguments;\n        this.sutAdapter = sutAdapter;\n        this.sutContext = sutContext;\n    }\n\n    async submitTransaction() {\n        let txArgs = {\n            // TX arguments for \"mycontract\"\n        };\n\n        return this.sutAdapter.invokeSmartContract('mycontract', 'v1', txArgs, 30);\n    }\n\n    async cleanupWorkloadModule() {\n        // NOOP\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n

                    The simpler implementation using the utility base class is the following:

                    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleBase {\n    async submitTransaction() {\n        let txArgs = {\n            // TX arguments for \"mycontract\"\n        };\n\n        return this.sutAdapter.invokeSmartContract('mycontract', 'v1', txArgs, 30);\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n
                    "},{"location":"concepts/workload-module/#configuring-the-workload-module","title":"Configuring the workload module","text":"

                    To use your workload module for a given round, you only need to reference it in the benchmark configuration file:

                    1. Set the test.rounds[i].workload.module attribute to the path of your workload module file. The path can be either an absolute path, or a relative path to the configured workspace path. You can also set the attribute to the package name of your published workload module, but in that case you need to install it beforehand.
                    2. If your module supports different settings, set the test.rounds[i].workload.arguments attribute object accordingly. It will be passed to your module upon initialization.
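                    A sketch of the relevant part of a benchmark configuration file (the file name, rate controller and arguments are illustrative):

                    test:\n  rounds:\n    - label: open\n      txNumber: 100\n      rateControl:\n        type: fixed-rate\n        opts:\n          tps: 50\n      workload:\n        module: workload/my-workload.js\n        arguments:\n          accounts: 10\n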
                    "},{"location":"concepts/workload-module/#tips-tricks","title":"Tips & Tricks","text":"

                    The following advice might help you improve your workload module implementation.

                    1. You can use (require) any Node.JS module in your code (including the core Caliper module). Modularization is important for keeping your implementation clean and manageable.
                    2. If you use third-party modules, then it is your responsibility to make them available to your workload module. This usually requires an npm install call in your module directory before you start Caliper.
                    3. Caliper provides some core utilities that might make your life easier, such as logging and runtime configuration. Use them, don\u2019t reinvent the wheel!
                    4. The submitTransaction function is on the hot path of the worker workload generation loop. Perform computation-intensive tasks with care; they might hurt the scheduling precision of TXs! You can perform expensive pre-processing tasks in the initializeWorkloadModule function instead.
                    "},{"location":"concepts/workload-module/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"connectors/ethereum-config/","title":"Ethereum","text":"

                    This page introduces the Ethereum adapter suitable for all the Ethereum clients that expose the web3 RPC interface over websockets.

                    Note

                    This adapter relies on web3js 1.2.x, the stable release line that evolved from 1.0.0-beta.37.

                    Note

                    Hyperledger Besu and Geth are the currently tested clients. The tests are driven via standard Ethereum JSON-RPC APIs, so other clients should be compatible once docker configurations exist.

                    Note

                    Some highlights of the provided features:

                    • configurable confirmation blocks threshold

                    The page covers the following aspects of using the Ethereum adapter:

                    • how to assemble a connection profile file, a.k.a., the blockchain network configuration file;
                    • how to use the adapter interface from the user callback module;
                    • transaction data gathered by the adapter;
                    • and a complete example of a connection profile.
                    "},{"location":"connectors/ethereum-config/#assembling-the-network-configuration-file","title":"Assembling the Network Configuration File","text":"

                    The JSON network configuration file of the adapter essentially defines which contracts are expected to be on the network, which account the adapter should use to deploy those contracts, and which account to use to invoke them.

                    "},{"location":"connectors/ethereum-config/#connection-profile-example","title":"Connection profile example","text":"

                    We will provide an example of the configuration and then go through it in depth, key by key.

                    {\n    \"caliper\": {\n        \"blockchain\": \"ethereum\",\n        \"command\" : {\n            \"start\": \"docker-compose -f network/ethereum/1node-clique/docker-compose.yml up -d && sleep 3\",\n            \"end\" : \"docker-compose -f network/ethereum/1node-clique/docker-compose.yml down\"\n          }\n    },\n    \"ethereum\": {\n        \"url\": \"ws://localhost:8545\",\n        \"contractDeployerAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\",\n        \"contractDeployerAddressPassword\": \"password\",\n        \"fromAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\",\n        \"fromAddressPassword\": \"password\",\n        \"transactionConfirmationBlocks\": 12,\n        \"contracts\": {\n            \"simple\": {\n                \"path\": \"src/contract/ethereum/simple/simple.json\",\n                \"gas\": {\n                    \"open\": 45000,\n                    \"query\": 100000,\n                    \"transfer\": 70000\n                }\n            }\n        }\n    }\n}\n

                    The top-level caliper attribute specifies the type of the blockchain platform, so Caliper can instantiate the appropriate adapter when it starts. To use this adapter, specify the ethereum value for the blockchain attribute.

                    Furthermore, it also contains two optional commands: a start command to execute once before the tests and an end command to execute once after the tests. Using these commands is an easy way, for example, to automatically start and stop a test network. When connecting to an already deployed network, you can omit these commands.

                    These are the keys to provide inside the configuration file under the ethereum attribute:

                    • URL of the node to connect to. Only websocket is currently supported.
                    • Deployer address with which to deploy required contracts.
                    • Deployer address private key: the private key of the deployer address.
                    • Deployer address password: to unlock the deployer address.
                    • Address from which to invoke methods of the benchmark.
                    • Private Key: the private key of the benchmark address.
                    • Password: to unlock the benchmark address.
                    • Number of confirmation blocks to wait to consider a transaction as successfully accepted in the chain.
                    • Contracts configuration.

                    The following sections detail each part separately. For a complete example, please refer to the example section or one of the example files in the network/ethereum directories

                    "},{"location":"connectors/ethereum-config/#url","title":"URL","text":"

                    The URL of the node to connect to. Any host and port can be used if it is reachable. Currently only websocket is supported.

                    \"url\": \"ws://localhost:8545\"\n

                    Unfortunately, HTTP connections are explicitly disallowed, as

                    1. there is no efficient way to guarantee the order of transactions submitted over http, which leads to nonce errors, and
                    2. this adapter relies on web3.js, and this library has deprecated its support for RPC over HTTP.
                    "},{"location":"connectors/ethereum-config/#deployer-address","title":"Deployer Address","text":"

                    The address to use to deploy contracts of the network. Unless you have particular or specific needs, it can be set equal to the benchmark address. Its private key must be held by the node connected to via URL, and the address must be provided in checksum form (the one with both lowercase and uppercase letters).

                    \"contractDeployerAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\"\n
                    "},{"location":"connectors/ethereum-config/#deployer-address-private-key","title":"Deployer Address Private Key","text":"

                    The private key for the deployer address. If present, transactions are signed inside Caliper and sent \u201craw\u201d to the Ethereum node.

                    \"contractDeployerAddressPrivateKey\": \"0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8\"\n
                    "},{"location":"connectors/ethereum-config/#deployer-address-password","title":"Deployer Address Password","text":"

                    The password to use to unlock the deployer address. If there isn't an unlock password, this key must be present as an empty string. If the deployer address private key is present, this is not used.

                    \"contractDeployerAddressPassword\": \"gottacatchemall\"\n

                    Note

                    Hyperledger Besu does not provide wallet services, so the contractDeployerAddressPassword option is not supported and the private key variant must be used.

                    "},{"location":"connectors/ethereum-config/#benchmark-address","title":"Benchmark Address","text":"

                    The address to use while invoking all the methods of the benchmark. Its private key must be held by the node connected to via URL, and the address must be provided in checksum form (the one with both lowercase and uppercase letters).

                    \"fromAddress\": \"0xc0A8e4D217eB85b812aeb1226fAb6F588943C2C2\"\n
                    "},{"location":"connectors/ethereum-config/#benchmark-address-seed","title":"Benchmark Address Seed","text":"

                    As an alternative to fromAddress, fromAddressPrivateKey, and fromAddressPassword, the network configuration can use a fixed seed and derive the needed addresses via BIP-44 key derivation. Each Caliper test worker will generate an address for use as fromAddress and fromAddressPrivateKey using the derivation path m/44'/60'/<x>'/0/0, where <x> is the clientIdx passed into getContext.

                    This configuration does not override fromAddress, but it takes priority over fromAddressPrivateKey and fromAddressPassword.

                    \"fromAddressSeed\": \"0x3f841bf589fdf83a521e55d51afddc34fa65351161eead24f064855fc29c9580\"\n

                    Note

                    The use of fromAddressSeed is mandatory if you want to use more than 1 worker in your benchmark. This is because each worker independently fetches the current transaction nonce for the address it is using. If they all use the same address, they will all get the same nonces, causing known transaction errors. By using a seed, each worker will generate a unique address and private key.

                    "},{"location":"connectors/ethereum-config/#benchmark-address-private-key","title":"Benchmark Address Private Key","text":"

                    The private key for the benchmark address. If present, transactions are signed inside Caliper and sent \u201craw\u201d to the Ethereum node.

                    This configuration takes priority over fromAddressPassword.

                    \"fromAddressPassword\": \"0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8\"\n
                    "},{"location":"connectors/ethereum-config/#benchmark-address-password","title":"Benchmark Address Password","text":"

                    The password to use to unlock the benchmark address. If there isn't an unlock password, this key must be present as an empty string. If the benchmark address private key is present, this is not used.

                    \"fromAddressPassword\": \"gottacatchemall\"\n

                    Note

                    Hyperledger Besu does not provide wallet services, so the fromAddressPassword option is not supported and the private key variant must be used.

                    "},{"location":"connectors/ethereum-config/#confirmation-blocks","title":"Confirmation blocks","text":"

                    It is the number of blocks the adapter waits before notifying Caliper that a transaction has been successfully executed on the network. You can freely tune it from 1 to the desired number of confirmations. Keep in mind that on the Ethereum main net (PoW), 12 to 20 confirmations can be required to consider a transaction as accepted in the blockchain. If you're using a different consensus algorithm (like clique in the example network provided), it can safely be brought to a lower value. In any case, it is up to you.

                    \"transactionConfirmationBlocks\": 12\n
                    "},{"location":"connectors/ethereum-config/#contract-configuration","title":"Contract configuration","text":"

                    It is the list, provided as a JSON object, of pre-deployed contracts or contracts to deploy on the network before running the benchmark. You should provide a JSON entry for each contract; the key is the contract identifier used to invoke methods on that contract.

                    Depending on whether you plan to use pre-deployed contracts or deploy them during the benchmark, the configuration will differ slightly.

                    Note

                    Defining configurations simultaneously for both pre-deployed contracts and contracts to be deployed by Caliper is currently not supported and may result in unexpected errors. This is because opting for pre-deployed contracts means the contract installation phase will be skipped.

                    In both cases, it is strongly recommended to specify a gas field, which is an object with one field per contract function that you will call in your test. The value of these fields should be set to the amount of gas that will be required to execute your transaction. There is no need for this number to be an exact match, as it's used to set the gas limit for the transaction, so if your transaction might have a variable gas cost, just set this value to the highest gas usage that you would expect to see for your transaction.

                    Note: If you do not specify the gas for your contract functions, web3 will automatically call out to your node to estimate the gas requirement before submitting the transaction. This causes three problems. First, it means that your transaction will effectively execute twice, doubling the load on the node serving as your RPC endpoint. Second, the extra call will add significant additional latency to every transaction. Third, your transactions may be reordered, causing transaction failures due to out-of-order nonces.

                    "},{"location":"connectors/ethereum-config/#pre-deployed-contracts","title":"Pre-deployed contracts","text":"

                    To use pre-deployed contracts, you must launch caliper using the --caliper-flow-skip-install option, which skips the smart contract installation phase. For each key you must provide a JSON object containing the contract address and the contract ABI. The ABI is required to invoke methods on the contract.

                    \"contracts\": {\n    \"simple\": {\n        \"address\": \"0xc24f4561B8F1159E8D8661B282A2974cD48058C2\",\n        \"gas\": {\n            \"open\": 45000,\n            \"query\": 100000,\n            \"transfer\": 70000\n        },\n        \"abi\": [...]\n    }\n}\n
                    "},{"location":"connectors/ethereum-config/#contracts-to-deploy","title":"Contracts to Deploy","text":"

                    Contracts to be deployed by Caliper require the specification of a contract definition file for each. In the contract configuration you must include a path field pointing to each contract definition file. It\u2019s in this new file that you will define the contract\u2019s ABI and bytecode, as well as the gas required to deploy it.

                    \"contracts\": {\n    \"simple\": {\n        \"path\": \"src/contract/ethereum/simple/simple.json\",\n        \"gas\": {\n            \"open\": 45000,\n            \"query\": 100000,\n            \"transfer\": 70000\n        }\n    },\n    \"second\": {\n        \"path\": \"src/contract/ethereum/second/second.json\",\n        \"gas\": {\n            \"function\": 12345\n        }\n    }\n}\n
                    "},{"location":"connectors/ethereum-config/#contract-definition-file","title":"Contract definition file","text":"

                    The contract definition file is a simple JSON file containing the basic information needed to deploy and use an Ethereum contract. Four keys are required:

                    • Name
                    • ABI
                    • Bytecode
                    • Gas

                    Here is an example:

                    {\n    \"name\": \"The simplest workload contract\",\n    \"abi\": [{\"constant\":true,\"inputs\":[{\"nam......ype\":\"function\"}],\n    \"bytecode\": \"0x608060405.........b0029\",\n    \"gas\": 259823\n}\n

                    "},{"location":"connectors/ethereum-config/#name","title":"Name","text":"

                    It is a name to display in logs when the contract gets deployed. It is purely descriptive.

                    "},{"location":"connectors/ethereum-config/#abi","title":"ABI","text":"

                    It is the ABI generated when compiling the contract. It is required in order to invoke methods on a contract.

                    "},{"location":"connectors/ethereum-config/#bytecode","title":"Bytecode","text":"

                    It is the bytecode generated when compiling the contract. Note that since it is hexadecimal, it must start with 0x.

                    "},{"location":"connectors/ethereum-config/#gas","title":"Gas","text":"

                    It is the gas required to deploy the contract. It can easily be calculated with widely used Solidity development kits or by querying a running Ethereum node.
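                    For instance, one rough way to obtain the value with web3js against a running node (a sketch; the bytecode string is elided):

                    const Web3 = require('web3');\nconst web3 = new Web3('ws://localhost:8545');\n\n// Estimate the gas needed to deploy the contract from its creation bytecode\nweb3.eth.estimateGas({ data: '0x608060405...' })\n    .then(gas => console.log(`deployment gas: ${gas}`));\n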

                    "},{"location":"connectors/ethereum-config/#using-the-adapter-interface","title":"Using the Adapter Interface","text":"

                    The workload modules interact with the adapter at two phases of the tests: during the initialization of the workload module (the initializeWorkloadModule function), and when submitting invoke or query transactions (the submitTransaction function).

                    "},{"location":"connectors/ethereum-config/#the-initializeworkloadmodule-function","title":"The initializeWorkloadModule function","text":"

                    See the corresponding documentation of the function for the description of its parameters.

                    The last argument of the function is a sutContext object, which is a platform-specific object provided by the backend blockchain\u2019s connector. The context object provided by this connector is the following:

                    {\n  fromAddress: \"0xA89....7G\",\n  web3: Web3\n}\n

                    The fromAddress property is the benchmark address while web3 is the configured instance of the Web3js client.
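                    A minimal sketch of consuming this context in initializeWorkloadModule (the balance check is only illustrative):

                    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n    this.web3 = sutContext.web3;\n    this.fromAddress = sutContext.fromAddress;\n\n    // e.g., verify that the benchmark account is funded before the round starts\n    const balance = await this.web3.eth.getBalance(this.fromAddress);\n    console.log(`benchmark address balance: ${balance} wei`);\n}\n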

                    "},{"location":"connectors/ethereum-config/#the-submittransaction-function","title":"The submitTransaction function","text":"

                    The sutAdapter object received (and saved) in the initializeWorkloadModule function is of type [ConnectorInterface](https://github.com/hyperledger-caliper/caliper/blob/v0.6.0/packages/caliper-core/lib/common/core/connector-interface.js). Its getType() function returns the ethereum string value.

                    The sendRequests method of the connector API allows the workload module to submit requests to the SUT. It takes a single parameter: an object or array of objects containing the settings of the requests.

                    The settings object has the following structure:

                    • contract: string. Required. The ID of the contract (that is the key specified here).
                    • readOnly: boolean. Optional. Indicates whether the request is a TX or a query. Defaults to false.
                    • verb: string. Required. The name of the function to call on the contract.
                    • value: number. Optional. The value parameter in Wei to be passed to the payable function of the contract.
                    • args: mixed[]. Optional. The list of arguments to pass to the method, in the order they appear in the method signature. It must be an array.

                    So assembling and submitting requests looks like the following:

                    let requestsSettings = [{\n    contract: 'simple',\n    verb: 'open',\n    value: 1000000000000000000000,\n    args: ['sfogliatella', 1000]\n},{\n    contract: 'simple',\n    verb: 'open',\n    value: 900000000000000000000,\n    args: ['baba', 900]\n}];\n\nawait this.sutAdapter.sendRequests(requestsSettings);\n

                    Currently each method call inside sendRequests is sent separately; that is, they are NOT sent as a batch of RPC calls.

                    To query state on a contract, set the readOnly attribute to true. The difference is that a query cannot produce any change on the blockchain, and the node will answer with its local view of the data. As with regular requests, there is currently no support for batch queries.
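                    For example, a query counterpart of the earlier requests (a sketch against the same simple contract):

                    let querySettings = {\n    contract: 'simple',\n    verb: 'query',\n    readOnly: true,\n    args: ['sfogliatella']\n};\n\nlet results = await this.sutAdapter.sendRequests(querySettings);\n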

                    "},{"location":"connectors/ethereum-config/#transaction-data-gathered-by-the-adapter","title":"Transaction Data Gathered by the Adapter","text":"

                    The previously discussed sendRequests method returns an array whose elements correspond to the results of the submitted request(s), each of type TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

                    The standard information provided by the type is the following (a usage sketch follows the list):

                    • GetID():string returns the transaction ID.
                    • GetStatus():string returns the final status of the transaction, either success or failed.
                    • GetTimeCreate():number returns the epoch when the transaction was submitted.
                    • GetTimeFinal():number returns the epoch when the transaction was finished.
                    • IsCommitted():boolean indicates whether the transaction has been committed successfully.
                    • IsVerified():boolean indicates whether we are sure about the final status of the transaction. Always true for successful transactions. False in all other cases.
                    • GetResult():any returns the result of the transaction. Only requests with readOnly set to true will return the actual result of the call. Write operations, because they are asynchronous, will return the transaction hash.
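                    Putting it together, a sketch of inspecting the returned statuses:

                    const statuses = await this.sutAdapter.sendRequests(requestsSettings);\n\nfor (const status of statuses) {\n    // GetStatus() is either 'success' or 'failed';\n    // the times are epochs, so their difference approximates the latency\n    const latency = status.GetTimeFinal() - status.GetTimeCreate();\n    console.log(`${status.GetID()}: ${status.GetStatus()} (${latency} ms)`);\n}\n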
                    "},{"location":"connectors/ethereum-config/#license","title":"License","text":"

                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                    "},{"location":"connectors/fabric-config/","title":"Fabric","text":""},{"location":"connectors/fabric-config/#overview","title":"Overview","text":"

                    This page introduces the Fabric adapter that utilizes the Common Connection Profile (CCP) feature of the Fabric SDK to provide compatibility and a unified programming model across different Fabric versions.

                    Note

                    The LTS versions of Hyperledger Fabric, as well as the very latest 2.x release of Hyperledger Fabric, are supported; all other versions are unsupported.

                    The adapter exposes many SDK features directly to the user callback modules, making it possible to implement complex scenarios.

                    Note

                    Some highlights of the provided features:

                    • supports multiple channels and chaincodes
                    • supports multiple organizations
                    • supports multiple identities
                    • private data collection support
                    • support for TLS and limited mutual TLS communication (identity certificates cannot have restrictions on them)
                    • option to select the identity for submitting a TX/query
                    "},{"location":"connectors/fabric-config/#installing-dependencies","title":"Installing dependencies","text":"

                    You must bind Caliper to a specific Fabric SDK to target the corresponding (or compatible) SUT version. Refer to the binding documentation for details. When you bind to an SUT, you are in fact selecting a specific Fabric SDK to use, which may be usable with several different versions of Fabric SUTs.

                    Note

                    • None of the Fabric bindings support administration actions. It is not possible to create/join channels nor deploy a chaincode. Consequently, running Caliper only facilitates test operations, using the --caliper-flow-only-test flag.
                    "},{"location":"connectors/fabric-config/#binding-with-fabric-14-client-sdk","title":"Binding with Fabric 1.4 Client SDK","text":"

                    To bind with this client SDK, use fabric:1.4. Do not use this binding if using the BFT consensus mechanism in Hyperledger Fabric that was introduced in version 3.0.0; use fabric:fabric-gateway instead.

                    It is confirmed that a 1.4 Fabric SDK is compatible with Fabric 2.2 and later Fabric 2.x SUTs; therefore this binding can also be used with those later Fabric SUTs.

                    Note that when using the binding target for the Fabric SDK 1.4 there are capability restrictions:

                    Note

                    • Currently, setting discover to true in the network configuration file is not supported if you don't enable the gateway option (e.g., specifying --caliper-fabric-gateway-enabled as a command line option)
                    • Detailed execution data for every transaction is only available if you don\u2019t enable the gateway option
                    "},{"location":"connectors/fabric-config/#binding-with-fabric-22-client-sdk","title":"Binding with Fabric 2.2 Client SDK","text":"

                    To bind with this client SDK, use fabric:2.2. Do not use this binding if using the BFT consensus mechanism in Hyperledger Fabric that was introduced in version 3.0.0; use fabric:fabric-gateway instead.

                    It is confirmed that a 2.2 Fabric SDK is compatible with 2.2 and later Fabric SUTs; therefore this binding can be used with 2.2 and later Fabric SUTs.

                    Note

                    The following further restrictions exist for this binding:

                    • Detailed execution data for every transaction is not available.

                    "},{"location":"connectors/fabric-config/#binding-with-fabric-gateway-client-sdk","title":"Binding with Fabric Gateway Client SDK","text":"

                    To bind with this client sdk, use fabric:fabric-gateway. This is now the preferred SDK to use given that Fabric 2.2 and earlier are now not in LTS and as such the Fabric 1.4 and 2.2 SDKs are deprecated.

                    Only Fabric 2.4 and later with the Peer Gateway capability enabled (which is the default setting for a Fabric peer) can be used, so you cannot bind with this client SDK for older versions of Hyperledger Fabric.

                    Note

                    The following further restrictions exist for this binding:

                    • Detailed execution data for every transaction is not available.
                    • Mutual TLS is not supported.
                    • Peer and organization targeting is not supported, so the options targetPeers and targetOrganizations in a request will throw an error.

                    "},{"location":"connectors/fabric-config/#connection-profiles","title":"Connection Profiles","text":"

                    Connection Profiles are a Fabric standard that provides connectivity information for your Fabric network. In the past (Hyperledger Fabric 1.1) you needed to describe all your endpoints in a connection profile, i.e., all the orderers and all the peers, in order to be able to connect a client application to the network. This is referred to as a static connection profile, and when you use this connection profile with Caliper you should set the discover property to false. The problem with static connection profiles is that if the network topology changes (e.g., an orderer, peer or organisation is added or removed), then every client needs to have an updated connection profile.

                    Hyperledger Fabric 1.2 introduced the concept of discovery. This allows you to ask a peer for the network topology. Your Fabric network has to be configured correctly for this to work (but all Fabric networks should be configured to allow for discovery now). Connection profiles that use this capability will only have a list of one or more peers for the specific organisation the connection profile applies to, which will be used to discover the network. These connection profiles are referred to as dynamic connection profiles, and when you use this connection profile with Caliper you should set the discover property to true.

                    Network builders and providers should generate connection profiles (for example, test-network in fabric-samples does this); however, if you don't have a connection profile you will need to create one. Information about creating connection profiles can be found in the Hyperledger Fabric documentation as well as the node-sdk documentation (the format changed between the node SDKs; the 1.4 version should work when binding to either Fabric 1.4 or 2.2, but the version documented by 2.2 may only work when binding to Fabric 2.2).

                    • node sdk 2.2 documentation for connection profiles
                    • node sdk 1.4 documentation for connection profiles

                    Unfortunately the documentation provided by Hyperledger Fabric is more focused on static connection profiles rather than dynamic connection profiles and your aim should be to create the simpler and smaller dynamic connection profile.

                    With the introduction of the Peer Gateway as an alternative to the traditional node SDKs (1.4 and 2.2), Caliper has introduced the concept of declaring peers in an organization within the network configuration file as an alternative to connection profiles. This provides a simple way to describe either the peers to discover from (when binding to Fabric 1.4 or 2.2; for Fabric 1.4 you must enable the gateway option, as discovery is not supported with the Fabric 1.4 binding when the gateway option is not enabled) or the peer to be used as a gateway into the Fabric network (when binding to Fabric 2.4/fabric-gateway). An example of a peers section in the network configuration is:

                    peers:\n      - endpoint: peer0.org3.example.com:7051\n        tlsCACerts:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n        grpcOptions:\n          grpc.keepalive_timeout_ms: 20000\n          grpc.keepalive_time_ms: 120000\n          grpc.http2.min_time_between_pings_ms: 120000\n          grpc.http2.max_pings_without_data: 0\n          grpc.keepalive_permit_without_calls: 1\n
                    "},{"location":"connectors/fabric-config/#runtime-settings","title":"Runtime settings","text":""},{"location":"connectors/fabric-config/#common-settings","title":"Common settings","text":"

                    Some runtime properties of the adapter can be set through Caliper\u2019s runtime configuration mechanism. For the available settings, see the caliper.fabric section of the default configuration file and its embedded documentation.

                    The above settings are processed when starting Caliper. Modifying them during testing will have no effect. However, you can override the default values before Caliper starts from the usual configuration sources. In the following example the localhost property applies only when binding with Fabric 2.2 or Fabric 1.4 (and only if the gateway option is enabled)

                    Note

                    An object hierarchy in a configuration file generates a setting entry for every leaf property. Consider the following configuration file:

                    caliper:\n    fabric:\n        gateway:\n          localhost: false\n
                    After naming the project settings file caliper.yaml and placing it in the root of your workspace directory, it will override the following setting key with the following value:

                    • Setting caliper-fabric-gateway-localhost is set to false

                    The other settings remain unchanged.

                    Alternatively, you can change this setting when you launch Caliper with the following CLI option:

                    --caliper-fabric-gateway-localhost false
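                    Or, following the environment variable transformation rule described earlier:

                    export CALIPER_FABRIC_GATEWAY_LOCALHOST=false\n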

                    "},{"location":"connectors/fabric-config/#the-connector-api","title":"The connector API","text":"

                    The workload modules interact with the adapter at three phases of the tests: during the initialization of the user module (in the initializeWorkloadModule callback), when submitting invoke or query transactions (in the submitTransaction callback), and at the optional cleanup of the user module (in the cleanupWorkloadModule callback).

                    "},{"location":"connectors/fabric-config/#the-initializeworkloadmodule-function","title":"The initializeWorkloadModule function","text":"

                    See the corresponding documentation of the function for the description of its parameters.

                    The last argument of the function is a sutContext object, which is a platform-specific object provided by the backend blockchain\u2019s connector. The context object provided by this connector is a FabricConnectorContext instance but this doesn\u2019t provide anything of use at this time.

                    For the current details/documentation of the API, refer to the source code.

                    "},{"location":"connectors/fabric-config/#the-submittransaction-function","title":"The submitTransaction function","text":"

                    The sutAdapter object received (and saved) in the initializeWorkloadModule function is of type [ConnectorInterface](https://github.com/hyperledger-caliper/caliper/blob/v0.6.0/packages/caliper-core/lib/common/core/connector-interface.js). Its getType() function returns the fabric string value.

                    The sendRequests method of the connector API allows the workload module to submit requests to the SUT. It takes a single parameter: an object or array of objects containing the settings of the requests.

                    The settings object has the following structure:

• contractId: string. Required. The ID of the contract to call. This is either the unique contractID specified in the network configuration file or the chaincode ID used to deploy the chaincode, and must match the id field in the contracts section of channels in the network configuration file.
                    • contractFunction: string. Required. The name of the function to call in the contract.
                    • contractArguments: string[]. Optional. The list of string arguments to pass to the contract.
                    • readOnly: boolean. Optional. Indicates whether the request is a TX or a query. Defaults to false.
                    • transientMap: Map. Optional. The transient map to pass to the contract.
                    • invokerIdentity: string. Optional. The name of the user who should invoke the contract. If not provided, a user will be selected from the organization defined by invokerMspId or the first organization in the network configuration file if that property is not provided.
                    • invokerMspId: string. Optional. The mspid of the user organization who should invoke the contract. Defaults to the first organization in the network configuration file.
                    • targetPeers: string[]. Optional. An array of endorsing peer names as the targets of the transaction proposal. If omitted, the target list will be chosen for you and if discovery is used then the node SDK uses discovery to determine the correct peers.
                    • targetOrganizations: string[]. Optional. An array of endorsing organizations as the targets of the invoke. If both targetPeers and targetOrganizations are specified, then targetPeers will take precedence.
                    • channel: string. Optional. The name of the channel on which the contract to call resides.
                    • timeout: number. Optional. [Only applies to 1.4 binding when not enabling gateway use] The timeout in seconds to use for this request.
                    • orderer: string. Optional. [Only applies to 1.4 binding when not enabling gateway use] The name of the target orderer for the transaction broadcast. If omitted, then an orderer node of the channel will be automatically selected.
Invoking a contract thus looks like the following:

                      let requestSettings = {\n    contractId: 'marbles',\n    contractFunction: 'initMarble',\n    contractArguments: ['MARBLE#1', 'Red', '100', 'Attila'],\n    invokerIdentity: 'client0.org2.example.com',\n    timeout: 10\n};\n\nawait this.sutAdapter.sendRequests(requestSettings);\n

                      Note

                      sendRequests also accepts an array of request settings. However, Fabric does not support submitting an atomic batch of transactions like Sawtooth, so there is no guarantee that the order of these transactions will remain the same, or whether they will reside in the same block.
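For example, here is a sketch submitting two independent requests in a single call, reusing the marbles contract from the example above:

let requestSettings = [\n    {\n        contractId: 'marbles',\n        contractFunction: 'initMarble',\n        contractArguments: ['MARBLE#2', 'Blue', '50', 'Eva']\n    },\n    {\n        contractId: 'marbles',\n        contractFunction: 'initMarble',\n        contractArguments: ['MARBLE#3', 'Green', '70', 'Bob']\n    }\n];\n\n// resolves to an array of results, one per request\nconst results = await this.sutAdapter.sendRequests(requestSettings);\n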

                      "},{"location":"connectors/fabric-config/#gathered-tx-data","title":"Gathered TX data","text":"

                      The previously discussed sendRequests function returns the result (or an array of results) for the submitted request(s) with the type of TxStatus. The class provides some standard and platform-specific information about its corresponding transaction.

The standard data provided are the following:
• GetID():string returns the transaction ID.
• GetStatus():string returns the final status of the transaction, either success or failed.
• GetTimeCreate():number returns the epoch when the transaction was submitted.
• GetTimeFinal():number returns the epoch when the transaction was finished.
• IsVerified():boolean indicates whether we are sure about the final status of the transaction. Unverified (considered failed) transactions could occur, for example, if the adapter loses the connection with every Fabric event hub, missing the final status of the transaction.
• GetResult():Buffer returns one of the endorsement results returned by the chaincode as a Buffer. It is the responsibility of the user callback to decode it according to the chaincode-side encoding.

                      The adapter also gathers the following platform-specific data (if observed) about each transaction, each exposed through a specific key name. The placeholders <P> and <O> in the key names are node names taking their values from the top-level peers and orderers sections from the network configuration file (e.g., endorsement_result_peer0.org1.example.com). The Get(key:string):any function returns the value of the observation corresponding to the given key. Alternatively, the GetCustomData():Map<string,any> returns the entire collection of gathered data as a Map.
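For illustration, a sketch of reading one such observation inside a workload module; the key name assumes a peer called peer0.org1.example.com in the network configuration, and the observation is only present for SUT versions that gather it:

const result = await this.sutAdapter.sendRequests(requestSettings);\n\n// a single platform-specific observation (undefined if it was not gathered)\nconst commitEpoch = result.Get('commit_success_peer0.org1.example.com');\n\n// or iterate over the entire collection of gathered data\nfor (const [key, value] of result.GetCustomData()) {\n    console.log(`${key}: ${value}`);\n}\n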

                      "},{"location":"connectors/fabric-config/#available-data-keys-for-all-fabric-suts","title":"Available data keys for all Fabric SUTs","text":"

The adapter-specific data keys that are available when binding to any of the Fabric SUT versions are:

| Key name | Data type | Description |
|---|---|---|
| request_type | string | Either the transaction or query string value for traditional transactions or queries, respectively. |
"},{"location":"connectors/fabric-config/#available-data-keys-for-the-fabric-14-sut-when-gateway-is-not-enabled","title":"Available data keys for the Fabric 1.4 SUT when gateway is not enabled","text":"

The adapter-specific data keys that are only available for the v1.4 SUT when the gateway option is not enabled are:

| Key name | Data type | Description |
|---|---|---|
| time_endorse | number | The Unix epoch when the adapter received the proposal responses from the endorsers. Saved even in the case of endorsement errors. |
| proposal_error | string | The error message in case an error occurred during sending/waiting for the proposal responses from the endorsers. |
| proposal_response_error_<P> | string | The error message in case the endorser peer <P> returned an error as endorsement result. |
| endorsement_result_<P> | Buffer | The encoded contract invocation result returned by the endorser peer <P>. It is the user callback\u2019s responsibility to decode the result. |
| endorsement_verify_error_<P> | string | Has the value of 'INVALID' if the signature and identity of the endorser peer <P> couldn\u2019t be verified. This verification step can be switched on/off through the runtime configuration options. |
| endorsement_result_error_<P> | string | If the transaction proposal or query execution at the endorser peer <P> results in an error, this field contains the error message. |
| read_write_set_error | string | Has the value of 'MISMATCH' if the sent transaction proposals resulted in different read/write sets. |
| time_orderer_ack | number | The Unix epoch when the adapter received the confirmation from the orderer that it successfully received the transaction. Note that this isn\u2019t the actual ordering time of the transaction. |
| broadcast_error_<O> | string | The warning message in case the adapter did not receive a successful confirmation from the orderer node <O>. |
| broadcast_response_error_<O> | string | The error message in case the adapter received an explicit unsuccessful response from the orderer node <O>. |
| unexpected_error | string | The error message in case some unexpected error occurred during the life-cycle of a transaction. |
| commit_timeout_<P> | string | Has the value of 'TIMEOUT' in case the event notification about the transaction did not arrive in time from the peer node <P>. |
| commit_error_<P> | string | Contains the error code in case the transaction validation fails at the end of its life-cycle on peer node <P>. |
| commit_success_<P> | number | The Unix epoch when the adapter received a successful commit event from the peer node <P>. Note that transactions committed in the same block have nearly identical commit times, since the SDK receives them block-wise, i.e., at the same time. |
| event_hub_error_<P> | string | The error message in case some event hub connection-related error occurs with peer node <P>. |

                      You can access these data in your workload module after calling sendRequests:

                      let requestSettings = {\n    contractId: 'marbles',\n    contractVersion: '0.1.0',\n    contractFunction: 'initMarble',\n    contractArguments: ['MARBLE#1', 'Red', '100', 'Attila'],\n    invokerIdentity: 'client0.org2.example.com',\n    timeout: 10\n};\n\n// single argument, single return value\nconst result = await this.sutAdapter.sendRequests(requestSettings);\n\nlet shortID = result.GetID().substring(8);\nlet executionTime = result.GetTimeFinal() - result.GetTimeCreate();\nconsole.log(`TX [${shortID}] took ${executionTime}ms to execute. Result: ${result.GetStatus()}`);\n
                      "},{"location":"connectors/fabric-config/#the-cleanupworkloadmodule-function","title":"The cleanupWorkloadModule function","text":"

                      The cleanupWorkloadModule function is called at the end of the round, and can be used to perform any resource cleanup required by your workload implementation.

                      "},{"location":"connectors/fabric-config/#network-configuration-file-reference","title":"Network configuration file reference","text":"

The YAML network configuration file of the adapter mainly describes the organizations and the identities associated with those organizations. It also provides explicit information about the channels in your Fabric network and the chaincode (containing 1 or more smart contracts) deployed to those channels. It can reference Common Connection Profiles for each organization (as common connection profiles are specific to a single organization). These are the same connection profiles that would be consumed by the node-sdk. Whoever creates the Fabric network and channels would be able to provide appropriate profiles for each organization.

                      The following sections detail each part separately. For a complete example, please refer to the example section or one of the files in the Caliper repository, such as the caliper-fabric test folder.

                      name Required. Non-empty string. The name of the configuration file.
                      name: Fabric\n
                      version Required. Non-empty string. Specifies the YAML schema version that the Fabric SDK will use. Only the `'2.0.0'` string is allowed.
                      version: '2.0.0'\n
                      caliper Required. Non-empty object. Contains runtime information for Caliper. Can contain the following keys.
                      • blockchain Required. Non-empty string. Only the \"fabric\" string is allowed for this adapter.
                        caliper:\n    blockchain: fabric\n
• sutOptions Required. Non-empty object. A block of SUT-specific options; the following are specific to the Fabric implementation.
• mutualTls Optional. Boolean. Indicates whether to use client-side TLS in addition to server-side TLS. Cannot be set to true without using server-side TLS. Defaults to false.
                          caliper:\n    blockchain: fabric\n    sutOptions:\n      mutualTls: true\n
                      • command Optional. Non-empty object. Specifies the start and end scripts. Note:

                        Must contain at least one of the following keys.

                        • start Optional. Non-empty string. Contains the command to execute at startup time. The current working directory for the commands is set to the workspace.
                          caliper:\n  command:\n    start: my-startup-script.sh\n
                        • end Optional. Non-empty string. Contains the command to execute at exit time. The current working directory for the commands is set to the workspace.
                          caliper:\n  command:\n    end: my-cleanup-script.sh\n
                      info Optional. Object. Specifies custom key-value pairs that will be included as-is in the generated report. The key-value pairs have no influence on the runtime behavior.
                       info:\n   Version: 1.1.0\n   Size: 2 Orgs with 2 Peers\n   Orderer: Solo\n   Distribution: Single Host\n   StateDB: CouchDB\n
organizations Required. Non-empty object. Contains information about 1 or more organizations that will be used when running a workload. Even in a multi-organization Fabric network, workloads would usually only be run from a single organization, so it is common to see only 1 organization defined. However, multiple organizations may be defined, and a workload can explicitly declare which organization to use. The first organization in the network configuration is the default organization if no explicit organization is requested.
                       organizations:\n- mspid: Org1MSP\n  identities:\n    wallet:\n      path: './org1wallet'\n      adminNames:\n      - admin\n    certificates:\n    - name: 'User1'\n      clientPrivateKey:\n        pem: |-\n          -----BEGIN PRIVATE KEY-----\n          ...\n          -----END PRIVATE KEY-----\n      clientSignedCert:\n        pem: |-\n          -----BEGIN CERTIFICATE-----\n          ...\n          -----END CERTIFICATE-----\n  connectionProfile:\n    path: './Org1ConnectionProfile.yaml'\n    discover: true\n- mspid: Org2MSP\n  connectionProfile:\n    path: './Org2ConnectionProfile.yaml'\n    discover: false\n  identities:\n    wallet:\n      path: './org2wallet'\n      adminNames:\n      - admin\n- mspid: Org3MSP\n  peers:\n    - endpoint: peer0.org3.example.com:7051\n      tlsCACerts:\n        pem: |-\n          -----BEGIN CERTIFICATE-----\n          ...\n          -----END CERTIFICATE-----\n      grpcOptions:\n        grpc.keepalive_timeout_ms: 20000\n        grpc.keepalive_time_ms: 120000\n        grpc.http2.min_time_between_pings_ms: 120000\n        grpc.http2.max_pings_without_data: 0\n        grpc.keepalive_permit_without_calls: 1\n
Each organization must have mspid, identities and either connectionProfile or peers provided, plus at least 1 certificate or wallet definition in the identities section, so that at least 1 identity is defined.
                      • mspid Required. Non-empty string. The unique MSP ID of the organization.
                        organizations:\n  - mspid: Org1MSP\n
• connectionProfile Required if peers not provided. Non-empty object. Reference to a Fabric network Common Connection Profile. These profiles are the same profiles that the Fabric SDKs would consume in order to interact with a Fabric network. A Common Connection Profile is organization specific, so you need to ensure you point to a Common Connection Profile that is representative of the organization it is being included under. Connection Profiles can also come in 2 forms. A static connection profile contains a complete description of the Fabric network, i.e., all the peers and orderers as well as all the channels that the organization is part of. A dynamic connection profile contains a minimal amount of information, usually just a list of 1 or more peers belonging to the organization (or that it is allowed to access), which is used to discover the Fabric network nodes and channels.
                        organizations:\n  - mspid: Org1MSP\n    connectionProfile:\n      path: './test/sample-configs/Org1ConnectionProfile.yaml'\n      discover: true\n
                      • path Required. Non-empty string. The path to the connection profile file
                        organizations:\n  - mspid: Org1MSP\n    connectionProfile:\n      path: './test/sample-configs/Org1ConnectionProfile.yaml'\n
• discover Optional. Boolean. This does not need to be provided when binding to the fabric-gateway and will be ignored if provided; it is relevant only for the 1.4 and 2.2 Fabric bindings. A value of true indicates that the connection profile is a dynamic connection profile and discovery should be used. If not specified, it defaults to false. For a Fabric 1.4 binding you can only set this value to true if you plan to use the gateway option.
                        organizations:\n  - mspid: Org1MSP\n    connectionProfile:\n      path: './test/sample-configs/Org1ConnectionProfile.yaml'\n      discover: true\n
• peers Required if connectionProfile not provided. Non-empty object. Reference to one or more peers that are either:
  • a peer to discover the network from, when bound to Fabric 2.2 or to Fabric 1.4 in conjunction with the gateway option
  • a gateway peer, when bound to the Fabric gateway

  This option removes the need for connection profiles, but the Fabric network must be set up correctly to allow the network to be discovered. These entries are the equivalent of a dynamic connection profile in a more compact and easier form.
                        organizations:\n  - mspid: Org3MSP\n    peers:\n      - endpoint: peer0.org3.example.com:7051\n        tlsCACerts:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n        grpcOptions:\n          grpc.keepalive_timeout_ms: 20000\n          grpc.keepalive_time_ms: 120000\n          grpc.http2.min_time_between_pings_ms: 120000\n          grpc.http2.max_pings_without_data: 0\n          grpc.keepalive_permit_without_calls: 1\n
• endpoint Required. Non-empty string. The endpoint of the peer in the form of host:port (note that you do not specify a scheme such as grpc:// or grpcs://; in fact these schemes are not real and were invented purely for connection profiles). Whether the endpoint is secured by TLS or not is determined by the presence of the tlsCACerts property.
                        peers:\n  - endpoint: peer0.org3.example.com:7051\n
• tlsCACerts Optional. Non-empty object. Specifies the TLS root certificate chain used by the client to verify a TLS connection with the peer. Note:

                        Must contain at most one of the following keys.

• path Optional. Non-empty string. The path of the file containing the certificate chain.
                          tlsCACerts:\n  path: path/to/cert.pem\n
• pem Optional. Non-empty string. The content of the certificate file in exact PEM format (which must be split into multiple lines for YAML, or include escaped newlines for JSON).
                          tlsCACerts:\n   pem: |\n    -----BEGIN CERTIFICATE-----\n    ...\n    -----END CERTIFICATE-----\n
• grpcOptions Optional. Non-empty object. A set of gRPC-specific options to use when creating a gRPC connection to a peer.
                        peers:\n  - endpoint: peer0.org3.example.com:7051\n    grpcOptions:\n      grpc.keepalive_timeout_ms: 20000\n      grpc.keepalive_time_ms: 120000\n      grpc.http2.min_time_between_pings_ms: 120000\n      grpc.http2.max_pings_without_data: 0\n      grpc.keepalive_permit_without_calls: 1\n
• identities Required. Non-empty object. Defines the location of 1 or more identities available for use. Currently only explicit identities (a certificate and private key provided as PEM) or an SDK wallet containing 1 or more identities on the file system are supported. At least 1 identity must be provided via one of the child properties of identities.
                        identities:\n   wallet:\n     path: './wallets/org1wallet'\n     adminNames:\n     - admin\n   certificates:\n   - name: 'User1'\n     clientPrivateKey:\n       pem: |-\n         -----BEGIN PRIVATE KEY-----\n         ...\n         -----END PRIVATE KEY-----\n     clientSignedCert:\n       pem: |-\n         -----BEGIN CERTIFICATE-----\n         ...\n         -----END CERTIFICATE-----\n
• certificates Optional. A list of non-empty objects. Defines 1 or more identities by providing the PEM information for the client certificate and client private key as either an embedded PEM, a base64-encoded string of the PEM file contents, or a path to individual PEM files.
                          certificates:\n- name: 'User1'\n  clientPrivateKey:\n     path: path/to/privateKey.pem\n  clientSignedCert:\n     path: path/to/cert.pem\n- name: 'Admin'\n  admin: true\n  clientPrivateKey:\n   pem: |-\n    -----BEGIN PRIVATE KEY-----\n    MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgIRZo3SAPXAJnGVOe\n    jRALBJ208m+ojeCYCkmJQV2aBqahRANCAARnoGOEw1k+MtjHH4y2rTxRjtOaKWXn\n    FGpsALLXfBkKZvxIhbr+mPOFZVZ8ztihIsZBaCuCIHjw1Tx65szJADcO\n    -----END PRIVATE KEY-----\n  clientSignedCert:\n   pem: |-\n     -----BEGIN CERTIFICATE-----\n    MIICSDCCAe+gAwIBAgIQfpGy5OOXBYpKZxg89x75hDAKBggqhkjOPQQDAjB2MQsw\n    CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\n    YW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEfMB0GA1UEAxMWdGxz\n    Y2Eub3JnMS5leGFtcGxlLmNvbTAeFw0xODA5MjExNzU3NTVaFw0yODA5MTgxNzU3\n    NTVaMHYxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH\n    Ew1TYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMR8wHQYD\n    VQQDExZ0bHNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D\n    AQcDQgAED4FM1+iq04cjveIDyn4uj90lJlO6rASeOIzm/Oc2KQOjpRRlB3H+mVnp\n    rXN6FacjOp0/6OKeEiW392dcdCMvRqNfMF0wDgYDVR0PAQH/BAQDAgGmMA8GA1Ud\n    JQQIMAYGBFUdJQAwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgPQRWjQR5EUJ7\n    xkV+zbfY618IzOYGIpfLaV8hdlZfWVIwCgYIKoZIzj0EAwIDRwAwRAIgYzk8553v\n    fWAOZLxiDuMN9RiHve1o5aAQad+uD+eLpxMCIBmv8CtXf1C60h/0zyG1D6tTTnrB\n    H8Zua3x+ZQn/kqVv\n    -----END CERTIFICATE-----\n- name: 'User3'\n  clientPrivateKey:\n   pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JR0hBZ0VBTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEJHMHdhd0lCQVFRZ0lSWm8zU0FQWEFKbkdWT2UKalJBTEJKMjA4bStvamVDWUNrbUpRVjJhQnFhaFJBTkNBQVJub0dPRXcxaytNdGpISDR5MnJUeFJqdE9hS1dYbgpGR3BzQUxMWGZCa0tadnhJaGJyK21QT0ZaVlo4enRpaElzWkJhQ3VDSUhqdzFUeDY1c3pKQURjTwotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==\n  clientSignedCert:\n   pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNXRENDQWY2Z0F3SUJBZ0lSQU1wU2dXRmpESE9vaFhhMFI2ZTlUSGd3Q2dZSUtvWkl6ajBFQXdJd2RqRUwKTUFrR0ExVUVCaE1DVlZNeEV6QVJCZ05WQkFnVENrTmhiR2xtYjNKdWFXRXhGakFVQmdOVkJBY1REVk5oYmlCRwpjbUZ1WTJselkyOHhHVEFYQmdOVkJBb1RFRzl5WnpFdVpYaGhiWEJzWlM1amIyMHhIekFkQmdOVkJBTVRGblJzCmMyTmhMbTl5WnpFdVpYaGhiWEJzWlM1amIyMHdIaGNOTWpBd09UQTNNVEUwTWpBd1doY05NekF3T1RBMU1URTAKTWpBd1dqQjJNUXN3Q1FZRFZRUUdFd0pWVXpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2Y201cFlURVdNQlFHQTFVRQpCeE1OVTJGdUlFWnlZVzVqYVhOamJ6RVpNQmNHQTFVRUNoTVFiM0puTVM1bGVHRnRjR3hsTG1OdmJURWZNQjBHCkExVUVBeE1XZEd4elkyRXViM0puTVM1bGVHRnRjR3hsTG1OdmJUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCTWRMdlNVRElqV1l1Qnc0WVZ2SkVXNmlmRkx5bU9BWDdHS1k2YnRWUERsa2RlSjh2WkVyWExNegpKV2ppdnIvTDVWMlluWnF2ME9XUE1NZlB2K3pIK1JHamJUQnJNQTRHQTFVZER3RUIvd1FFQXdJQnBqQWRCZ05WCiBIU1VFRmpBVUJnZ3JCZ0VGQlFjREFnWUlLd1lCQlFVSEF3RXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QXBCZ05WCkhRNEVJZ1FnNWZPaHl6d2FMS20zdDU0L0g0YjBhVGU3L25HUHlKWk5oOUlGUks2ZkRhQXdDZ1lJS29aSXpqMEUKQXdJRFNBQXdSUUloQUtFbnkvL0pZN0dYWi9USHNRSXZVVFltWHNqUC9iTFRJL1Z1TFg3VHpjZWZBaUJZb1N5WQp5OTByZHBySTZNcDZSUGlxalZmMDJQNVpDODZVa1AwVnc0cGZpUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n
• name Required. Non-empty string. Specifies a name to associate with this identity. This name does not have to match anything within the certificate itself, but it must be unique.
                            certificates:\n  - name: 'User1'\n
• admin Optional. Boolean. Indicates whether this identity can be considered an admin identity for the organization. Defaults to false if not provided. This only needs to be provided if you plan to create channels and/or install and instantiate contracts (chaincode).
                            certificates:\n  - name: 'User2'\n    admin: true\n
• clientPrivateKey Required. Non-empty object. Specifies the identity\u2019s private key for the organization. Note:

                            Must contain at most one of the following keys.

• path Optional. Non-empty string. The path of the file containing the private key.
                               clientPrivateKey:\n    path: path/to/cert.pem\n
• pem Optional. Non-empty string. The content of the private key file either in exact PEM format (which must be split into multiple lines for YAML, or contain newline characters for JSON), or a base64-encoded version of the PEM (which also encodes the required newlines) as a single string. The single-string format makes it much easier to embed into the network configuration file, especially for a JSON-based file.
                              clientPrivateKey:\n   pem: |\n     -----BEGIN PRIVATE KEY-----\n      MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgIRZo3SAPXAJnGVOe\n      jRALBJ208m+ojeCYCkmJQV2aBqahRANCAARnoGOEw1k+MtjHH4y2rTxRjtOaKWXn\n      FGpsALLXfBkKZvxIhbr+mPOFZVZ8ztihIsZBaCuCIHjw1Tx65szJADcO\n      -----END PRIVATE KEY-----\n
• clientSignedCert Required. Non-empty object. Specifies the identity\u2019s certificate for the organization. Note:

                              Must contain at most one of the following keys.

• path Optional. Non-empty string. The path of the file containing the certificate.
                                 clientSignedCert:\n    path: path/to/cert.pem\n
• pem Optional. Non-empty string. The content of the certificate file either in exact PEM format (which must be split into multiple lines for YAML, or contain newline characters for JSON), or a base64-encoded version of the PEM (which also encodes the required newlines) as a single string. The single-string format makes it much easier to embed into the network configuration file, especially for a JSON-based file.
                                clientSignedCert:\n   pem: |\n     -----BEGIN CERTIFICATE-----\n      MIICSDCCAe+gAwIBAgIQfpGy5OOXBYpKZxg89x75hDAKBggqhkjOPQQDAjB2MQsw\n      CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\n      YW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEfMB0GA1UEAxMWdGxz\n      Y2Eub3JnMS5leGFtcGxlLmNvbTAeFw0xODA5MjExNzU3NTVaFw0yODA5MTgxNzU3\n      NTVaMHYxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH\n      Ew1TYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMR8wHQYD\n      VQQDExZ0bHNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D\n      AQcDQgAED4FM1+iq04cjveIDyn4uj90lJlO6rASeOIzm/Oc2KQOjpRRlB3H+mVnp\n      rXN6FacjOp0/6OKeEiW392dcdCMvRqNfMF0wDgYDVR0PAQH/BAQDAgGmMA8GA1Ud\n      JQQIMAYGBFUdJQAwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgPQRWjQR5EUJ7\n      xkV+zbfY618IzOYGIpfLaV8hdlZfWVIwCgYIKoZIzj0EAwIDRwAwRAIgYzk8553v\n      fWAOZLxiDuMN9RiHve1o5aAQad+uD+eLpxMCIBmv8CtXf1C60h/0zyG1D6tTTnrB\n      H8Zua3x+ZQn/kqVv\n      -----END CERTIFICATE-----\n
• wallet Optional. Non-empty object. Provides the path to a file system wallet. Be aware that the persistence format changed between v1.x and v2.x of the node SDKs, so make sure you provide a wallet created in the appropriate format for the version of SUT you bind to.
• path Required. Non-empty string. The path to the file system wallet.
                                  identities:\n  wallet:\n    path: './wallets/org1wallet'\n
• adminNames Optional. List of strings. 1 or more names in the wallet that are identified as organization administrators. This only needs to be provided if you plan to create channels and/or install and instantiate contracts (chaincode).
                                  identities:\n  wallet:\n    path: './wallets/org1wallet'\n    adminNames:\n    - admin\n    - another_admin\n
channels Required. A list of objects. Contains one or more unique channels with associated information about the chaincode (contracts section) that will be available on the channel.
                                channels:\n- channelName: mychannel\n  contracts:\n  - id: marbles\n    contractID: myMarbles\n\n- channelName: somechannel\n  contracts:\n  - id: basic\n
• channelName Required. Non-empty string. The name of the channel.
                                  channels:\n  - channelName: mychannel\n
• contracts Required. Non-sparse array of objects. Each array element contains information about a chaincode deployed to the channel. Note:

The contractID value of every contract in every channel must be unique on the configuration file level! If contractID is not specified for a contract then its default value is the id of the contract.

channels:\n  - channelName: mychannel\n    contracts:\n    - id: simple\n    - id: smallbank\n
• id Required. Non-empty string. The chaincode ID that was specified when the chaincode was deployed to the channel.
channels:\n  - channelName: mychannel\n    contracts:\n    - id: simple\n
• contractID Optional. Non-empty string. The Caliper-level unique ID of the contract. This ID will be referenced from the user callback modules. It can be an arbitrary name; it won\u2019t affect the contract properties on the Fabric side. If omitted, it defaults to the id property value.
channels:\n  - channelName: mychannel\n    contracts:\n    - id: simple\n      contractID: simpleContract\n
                                "},{"location":"connectors/fabric-config/#network-configuration-example","title":"Network Configuration Example","text":"

                                The following example is a Fabric network configuration for the following network topology and artifacts:

• two organizations, Org1MSP and Org2MSP (note that having 2 organizations is not common in a network configuration file);
• one channel named mychannel;
• asset-transfer-basic chaincode deployed to mychannel with a chaincode id of basic;
• the nodes of the network use TLS communication, but not mutual TLS;
• the Fabric samples test network is started and terminated automatically by Caliper.
name: Fabric\nversion: \"2.0.0\"\n\ncaliper:\n  blockchain: fabric\n  sutOptions:\n    mutualTls: false\n  command:\n    start: ../fabric-samples/test-network/network.sh up createChannel && ../fabric-samples/test-network/network.sh deployCC -ccp ../fabric-samples/asset-transfer-basic/chaincode-javascript -ccn basic -ccl javascript\n    end: ../fabric-samples/test-network/network.sh down\n\ninfo:\n  Version: 1.1.0\n  Size: 2 Orgs\n  Orderer: Raft\n  Distribution: Single Host\n  StateDB: GoLevelDB\n\nchannels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n      contractID: BasicOnMyChannel\n\norganizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'admin.org1.example.com'\n        admin: true\n        clientPrivateKey:\n          pem: |-\n            -----BEGIN PRIVATE KEY-----\n            ...\n            -----END PRIVATE KEY-----\n        clientSignedCert:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n    connectionProfile:\n      path: './Org1ConnectionProfile.yaml'\n      discover: true\n  - mspid: Org2MSP\n    connectionProfile:\n      path: './Org2ConnectionProfile.json'\n      discover: true\n    identities:\n      certificates:\n      - name: 'admin.org2.example.com'\n        admin: true\n        clientPrivateKey:\n          pem: |-\n            -----BEGIN PRIVATE KEY-----\n            ...\n            -----END PRIVATE KEY-----\n        clientSignedCert:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n

                                Another example with only a single organization but using the peers property so everything required is contained in a single network configuration file:

                                name: Fabric\nversion: \"2.0.0\"\n\ncaliper:\n  blockchain: fabric\n  sutOptions:\n    mutualTls: false\n\nchannels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n\norganizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'admin.org1.example.com'\n        admin: true\n        clientPrivateKey:\n          pem: |-\n            -----BEGIN PRIVATE KEY-----\n            ...\n            -----END PRIVATE KEY-----\n        clientSignedCert:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n    peers:\n      - endpoint: peer0.org1.example.com:7051\n        grpcOptions:\n          ssl-target-name-override: peer0.org1.example.com\n          grpc.keepalive_time_ms: 600000\n        tlsCACerts:\n          pem: |-\n            -----BEGIN CERTIFICATE-----\n            ...\n            -----END CERTIFICATE-----\n
                                "},{"location":"connectors/fabric-config/#license","title":"License","text":"

                                The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                "},{"location":"connectors/writing-connectors/","title":"Writing Connectors","text":""},{"location":"connectors/writing-connectors/#overview","title":"Overview","text":"

Connectors are probably the most important modules in Caliper. They provide an abstraction layer between the system under test (SUT) and the different Caliper components (e.g., the manager and worker processes, or the workload modules). A connector\u2019s job is to simplify interaction with the SUT as much as possible, hiding any peculiarities behind its API.

                                Note

                                To get a sense of how a connector fits into the Caliper architecture, see the corresponding architecture documentation sections about multi-platform support, the manager process and the worker processes.

Caliper ships with some predefined/built-in connectors, but in general, connectors are treated as pluggable components (just like resource and TX monitors, workload modules, etc.). So nothing stops you from implementing and using your own 3rd party connector! However, we strongly recommend that you absorb every part of this guide before implementing a new connector.

                                "},{"location":"connectors/writing-connectors/#requirements-for-quality-connectors","title":"Requirements for quality connectors","text":"

                                A connector\u2019s complexity is usually proportional to the complexity of the SUT (and/or its programming model). Accordingly, connectors are considered heavy-weight components compared to other extension points in Caliper.

                                There are a few things to keep in mind when implementing a connector. Some are technical, some impact usability.

                                Note

Caliper does not constrain the types of SUT you can test. Accordingly, the following points are general guidelines for connectors targeting complex distributed systems. You are allowed to deviate from this advice, but you should probably document such choices to lower the surprise factor for end users.

1. Keep to the predefined interface.
   • You must implement the given interface so Caliper modules can interact with the connector.
   • If you expose additional capabilities outside of the interface, then you will disrupt the programming model of workload module developers. They will have to handle your extra API as a separate case, complicating the development of cross-platform benchmarks. However, if you are implementing a workload module to performance test a specific SUT, then this is not a concern.
   • If your connector behaves similarly to other connectors following this guide, users will be quick to adapt and experiment with your connector/SUT.

2. Consider the distributed nature of the SUT.
   • Distributed systems consist of multiple nodes, sometimes with different roles.
   • You probably don\u2019t want the connector to be just a proxy for a single SUT node. The connector should be aware of as many SUT nodes as it makes sense, to support features like load balancing or SUT-specific request execution policies.
   • Hide the network topology as much as you can from other Caliper modules, especially from workload modules. Most of the time an emulated client does not care about the receiving end of a request as long as it\u2019s executed.
   • If you must expose certain nodes to the workload modules, then do that through simple (preferably text-based) handles. But do not expose implementation-specific classes representing the nodes!

3. Consider the actors in the SUT.
   • Authentication and authorization are cornerstones for almost every remote system, thus handling digital identities (which can come in many forms) must be a first-class feature in connectors.
   • Similarly to the nodes in the SUT, there will be many actors/clients with different privileges initiating different requests. A connector should be able to impersonate multiple clients to allow for diverse client behaviors in workload modules.
   • A connector should allow for easy switching between client identities for each request, thus the connector must expose the identities towards the workload modules somehow. The same advice applies as for the nodes: use simple (preferably text-based) handles, but do not expose implementation-specific classes representing the identities!

4. Do not reinvent the wheel.
   • Each system exposes a standard remote API for communicating with clients. These APIs can take many forms (REST, gRPC, etc.). Regardless of the API technology used, there\u2019s probably a mature client library for it. Or even better, the target platform has its own SDK!
   • A connector shouldn\u2019t bother with network-level communication and other such low-level details. Delegate such tasks to the SDKs or client libraries. This will allow your connector to be more robust and to attract additional contributors familiar with the used libraries.

5. Do not be the bottleneck.
   • The purpose of Caliper is to performance test the SUT from the client\u2019s point of view.
   • If assembling and sending a request takes time in the same order of magnitude as executing the request, then the results won\u2019t be representative. Sending requests is considered a hot path for connectors, and it should be as efficient as possible.
   • Using SDKs and widely known client libraries is an exception. Real client-side applications will probably do the same, so the library overheads must be incorporated into the request latencies. Do not micro-optimize by writing your own special-purpose SDK just to push down the latency numbers!
   • Connector bottlenecks on the hot path will influence/limit the request output rate of Caliper worker processes. Caliper users won\u2019t be happy if they have to launch 10 worker processes just to send 100 requests per second to the SUT.

                                Note

                                A connector\u2019s job is to bridge the platform-agnostic Caliper-side API with the high-level SUT-specific client library, while adhering to the above points.

                                "},{"location":"connectors/writing-connectors/#implementing-the-connector","title":"Implementing the connector","text":"

                                You should treat a connector implementation process as a full-fledged Node.js project. Refer to the Caliper integration section for the suggested project structure. Putting the project structure aside, you have four implementation-related tasks:

                                1. Implement the connector interface (optionally using the available utility base class).
                                2. Implement a factory method for instantiating the connector.
                                3. Define the schema of your network configuration file.
                                4. Provide binding configurations for your connector.
                                "},{"location":"connectors/writing-connectors/#the-connector-interface","title":"The connector interface","text":"

                                Once you add the @hyperledger/caliper-core package (or one of its specific versions) as your project dependency, you will gain access to its exported ConnectorInterface class, which declares the following interface:

                                class ConnectorInterface extends EventEmitter {\n    getType() {}\n    getWorkerIndex() {}\n    async init(workerInit) {}\n    async installSmartContract() {}\n    async prepareWorkerArguments(number) {}\n    async getContext(roundIndex, args) {}\n    async releaseContext() {}\n    async sendRequests(requests) {}\n}\n\nmodule.exports = ConnectorInterface;\n

                                The interface is detailed in the next subsection, but for now, keep the following things in mind:

                                1. The connector is used in two different environments: in the manager and worker processes. The corresponding environment of the methods will be discussed in the interface reference subsection.
                                2. The connector must expose certain events about the requests, otherwise it\u2019s not observable by the Caliper workers, which breaks the scheduling mechanism of Caliper.
                                3. sendRequests is the hot path for the interface, implement it carefully and efficiently!
4. The behavior of the connector (and the methods you really need to implement) largely depends on the capabilities of the network configuration schema. The more flexibility you allow on the Caliper side of the network, the more features you will have to provide. A flexible connector makes it easier to set up benchmark scenarios, resulting in happy users.
                                "},{"location":"connectors/writing-connectors/#interface-reference","title":"Interface reference","text":"

                                getType

                                • Description Retrieves a short name for the connector type, usually denoting the SUT, e.g., fast-ledger. The name can be used by workload modules capable of targeting multiple types of SUT.
                                • Return type string
                                • Returns The name of the connector.

                                getWorkerIndex

                                • Description Retrieves the zero-based worker process index that instantiated the connector.
                                • Return type number
                                • Returns The worker process index.

                                init

                                • Description The method is called by both the manager and (optionally) the worker processes to initialize the connector instance, and potentially certain aspects of the SUT. The initialization tasks are connector-specific, but usually can be divided among the processes:
                                  • The manager process instance performs one-time initialization tasks that require interaction with the SUT. Such tasks can include, for example, the creation of digital identities, or other housekeeping actions regarding the SUT.
• The worker process instances usually just perform local housekeeping tasks, like creating the necessary data structures or caches needed later on when sending requests. This step is optional and can be performed by your factory method after creating the connector instance. If the worker processes need to access the SUT in this phase, then they should do so using only idempotent operations (like configuration queries) that guarantee correct execution for an arbitrary number of parallel worker processes.
                                • Parameters
                                  • workerInit (boolean) Denotes whether the method is called by a worker process, or by the manager process.
                                • Return type Promise
                                • Returns The promise that will resolve upon method completion.

                                installSmartContract

                                • Description The method is called by the manager process to perform contract deployment on the SUT, if allowed remotely.
                                • Return type Promise
                                • Returns The promise that will resolve upon method completion.

                                prepareWorkerArguments

                                • Description This method is called by the manager process, and ensures that the connector instance in the manager process can distribute data to the connector instances in the worker processes. This method is the perfect place to return, for example, newly created digital identities to the manager process, which in turn will distribute them to the worker process instances for further use.
                                • Return type Promise<object[]>
                                • Returns The promise of connector-specific objects for each worker that will resolve upon method completion.
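For example, a sketch inside a connector class that hands a distinct (hypothetical) identity name to each worker:

async prepareWorkerArguments(number) {\n    const workerArgs = [];\n    for (let i = 0; i < number; i++) {\n        // one connector-specific object per worker process\n        workerArgs.push({ identityName: `user${i}` });\n    }\n    return workerArgs;\n}\n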

                                getContext

                                • Description: The method is called by the worker processes before each round, and can be used to assemble a connector-specific object that will be shared with the workload module of the current round. The method is also the perfect place to claim resources necessary for the next round, like establishing connections to remote nodes.
                                • Parameters:
                                • roundIndex (number): The zero-based index of the imminent round.
                                • args (object): The object assembled for this worker instance in the prepareWorkerArguments method of the manager instance.
                                • Return type: Promise<object>
                                • Returns: The promise of a connector-specific object that will resolve upon method completion.

                                releaseContext

                                • Description: The method is called by the worker processes after each round, and can be used to release resources claimed in the getContext method.
                                • Return type: Promise
                                • Returns: The promise that will resolve upon method completion.
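A sketch of how the two methods pair up inside a connector class, assuming a hypothetical connectToNode helper for claiming a remote connection:

async getContext(roundIndex, args) {\n    // claim the resources needed for the round\n    this.connection = await this.connectToNode(); // hypothetical helper\n    return { roundIndex, identityName: args.identityName };\n}\n\nasync releaseContext() {\n    // release what getContext claimed\n    await this.connection.close();\n    this.connection = undefined;\n}\n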

                                sendRequests

                                • Description: This method is the hot path of the connector, called in the worker processes by the workload modules of the rounds. The method must accept one or multiple settings objects pertaining to the request or requests that must be sent to the SUT. The connector doesn\u2019t have to preserve the order of execution for the requests, unless the target SUT type supports such request batches. The connector must gather at least the start time, finish time, and final status (successful or failed) of every request through TxStatus instances.
                                • Return type: Promise
                                • Returns: The promise of one or more request execution results that will resolve upon method completion.
                                • "},{"location":"connectors/writing-connectors/#exposed-events","title":"Exposed events","text":"

                                  The connector must expose the following events with names matching the defined constants for them. Without these events the Caliper scheduling mechanism won\u2019t function correctly, and other components might also rely on them (like TX monitors).

                                  txsSubmitted

                                  • Description The event must be raised when one or more requests are submitted for execution to the SUT. Typically the event should be raised for every individual request.
                                  • Parameters
                                    • count (number) The number of requests submitted.

                                  txsFinished

                                  • Description The event must be raised when one or more requests are fully processed by the SUT (i.e., the connector received the results).
                                  • Parameters
• results (TxStatus|TxStatus[]) One or more request execution results gathered by the connector.
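A sketch of raising the two events around a single request inside a connector; the string literals match the documented event names (prefer the constants exported by caliper-core where available), and performRequest is a hypothetical connector-specific method. Note that the ConnectorBase class described below already raises these events for you:

// before handing the request over to the SUT\nthis.emit('txsSubmitted', 1);\n\nconst status = await this.performRequest(request); // hypothetical request execution\n\n// after the SUT fully processed the request (status is a TxStatus)\nthis.emit('txsFinished', status);\n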
                                  "},{"location":"connectors/writing-connectors/#optional-base-class","title":"Optional base class","text":"

                                  The @hyperledger/caliper-core package also exports a ConnectorBase class that provides sensible default implementations for the following ConnectorInterface methods:

                                  • prepareWorkerArguments: An empty object is returned for each worker by default, i.e., nothing is shared with the worker process instances.
                                  • sendRequests: Handles the cases when a single or multiple requests are submitted by the workload modules. Also raises the necessary events before and after the requests. The method delegates the execution of a single request to the _sendSingleRequest method (see below).
                                  • constructor: Declares a constructor that requires the worker index and SUT/connector type as parameters.
                                  • getType: Provides a simple getter for the corresponding constructor argument.
                                  • getWorkerIndex: Provides a simple getter for the corresponding constructor argument.

                                  If you opt in to use this base class for your connector then you must implement the _sendSingleRequest method.

                                  _sendSingleRequest

                                  • Description The method only has to handle the sending and processing of a single request.
                                  • Parameters
                                    • request (object) A connector-specific settings object for the request.
                                  • Return type Promise
                                  • Returns The promise of a request execution result that will resolve upon method completion.
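Putting it together, a minimal sketch of a connector built on ConnectorBase for the fictional fast-ledger SUT; the this.client calls stand in for a hypothetical SDK:

'use strict';\n\nconst { ConnectorBase, TxStatus } = require('@hyperledger/caliper-core');\n\nclass FastLedgerConnector extends ConnectorBase {\n    constructor(workerIndex) {\n        super(workerIndex, 'fast-ledger');\n    }\n\n    async init(workerInit) { /* create clients/identities as needed */ }\n    async installSmartContract() { /* no-op if contracts are deployed out-of-band */ }\n    async getContext(roundIndex, args) { return {}; }\n    async releaseContext() { /* close anything claimed in getContext */ }\n\n    async _sendSingleRequest(request) {\n        const status = new TxStatus();\n        try {\n            // hypothetical SDK call\n            const result = await this.client.invoke(request.operation, request.args);\n            status.SetResult(result);\n            status.SetStatusSuccess();\n        } catch (error) {\n            status.SetStatusFail();\n        }\n        return status;\n    }\n}\n\nmodule.exports = FastLedgerConnector;\n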
                                  • "},{"location":"connectors/writing-connectors/#the-factory-method","title":"The factory method","text":"

                                    The entry point for your connector implementation will be a factory method. The manager and worker processes will call this exported factory method to instantiate your connector (be careful with the casing).

ConnectorFactory

• Description Instantiates a connector and optionally initializes it. When called from the manager process (denoted with a worker index of -1), the manager will handle calling the init and installSmartContract methods. This initialization is optional in the worker processes, so the factory method must handle it if required.
• Parameters
  • workerIndex (number) The zero-based index of the worker process, or -1 for the manager process.
• Return type Promise
• Returns The promise of a ConnectorInterface instance that will resolve upon method completion.

                                    The following is a possible implementation of a factory method for our fast-ledger connector:

                                        'use strict';\n\n    const FastLedgerConnector = require('./fast-ledger-connector');\n\n    async function ConnectorFactory(workerIndex) {\n        const connector = new FastLedgerConnector(workerIndex, 'fast-ledger');\n\n        // initialize the connector for the worker processes\n        if (workerIndex >= 0) {\n            await connector.init(true);\n        }\n\n        return connector;\n    }\n\n    module.exports.ConnectorFactory = ConnectorFactory;\n
                                    "},{"location":"connectors/writing-connectors/#the-network-configuration-file","title":"The network configuration file","text":"

                                    The network configuration file can contain whatever information your connector requires to communicate with the SUT and fulfill the connector quality requirements. The configuration file can be either a JSON or YAML file. YAML is preferred for its readability and comment support.

                                    The network configuration schema must contain a mandatory top-level field with the following structure:

                                    # mandatory\ncaliper:\n  # mandatory\n  blockchain: fast-ledger\n  # optional\n  commands:\n    start: startLedger.sh\n    end: stopLedger.sh\n

                                    The caliper.blockchain attribute tells Caliper which connector to load for the test. The value of the attribute depends on how you want to integrate the connector with Caliper.

                                    "},{"location":"connectors/writing-connectors/#binding-configuration","title":"Binding configuration","text":"

                                    The binding command of Caliper allows you to specify major connector dependencies to be installed during runtime (instead of packaged with the connector during development time). SUT SDKs and other client libraries usually fall into this category (i.e., libraries that facilitate interactions with the SUT). If the APIs of such libraries are consistent across different versions, then your single connector implementation can possibly target multiple SUT versions.

                                    In that case, users should be able to select a specific SDK version that will target the corresponding SUT version. You can achieve this by providing a binding configuration file (JSON or YAML) for your connector.

                                    "},{"location":"connectors/writing-connectors/#simple-configuration","title":"Simple configuration","text":"

                                    The schema of a general binding configuration is usually simple:

                                    sut:\n  fast-ledger:\n    1.0:\n      packages: ['fast-ledger-sdk@1.0.0']\n    1.4:\n      packages: ['fast-ledger-sdk@1.4.5']\n    2.0: &fast-ledger-latest\n      packages: ['fast-ledger-sdk@2.0.0']\n    latest: *fast-ledger-latest\n

                                    Several things to note about the above configuration:

                                    1. The sut top-level attribute denotes the configuration section that Caliper will process. You can write arbitrary YAML sections outside of this attribute, without any schema constraints. This means you can utilize, for example, YAML anchors and aliases to improve the readability of your complex binding specification. You will see an example soon.
                                    2. The sut attribute contains keys that identify the SUT types whose connector supports binding. We defined a single SUT type (fast-ledger) for our example connector.
                                    3. Under fast-ledger we can define several SUT versions our connector supports. It\u2019s recommended to use keys corresponding to the semantic version of the SUT. The users will specify the binding using the SUT type and SUT version, for example, by passing the --caliper-bind-sut fast-ledger:1.4 command line argument to Caliper.
                                    4. Every SUT version needs to declare the required packages Caliper should install during runtime. The different SUT versions will usually declare different SDK versions to install.
                                    5. Even though we declared 1.4 as SUT version, we asked Caliper to install the 1.4.5 SDK version. It\u2019s good practice to always bind to the latest available patch release, so users can enjoy the latest bug fixes for an SDK version.
                                    6. Many library management systems (like NPM and DockerHub) provide latest tags to denote the newest release. If you provide such a binding \u201cversion\u201d for your connector, then users can bind your connector by using the simplified --caliper-bind-sut fast-ledger notation. You can easily refer to the binding version you deem latest using YAML anchors and aliases. Doing so will make your configuration easier to read and maintain.
                                    "},{"location":"connectors/writing-connectors/#advanced-configuration","title":"Advanced configuration","text":"

                                    Even though your connector supports multiple SUT versions on the implementation level, that doesn\u2019t mean that all versions can be equally supported in the same environment. A typical example would be supporting older SUT versions whose corresponding SDK packages fail to build \u201cautomagically\u201d under newer Node.js versions. The binding configuration provides some flexibility to tune the installation of these packages.

                                    Node modules can include native components that may have to be compiled on the host system, against the specific Node.js version in use (some package owners publish precompiled binaries for specific platforms and Node.js versions to avoid the need for local compilation). This means you must have the appropriate compilers installed. Moreover, newer compilers are strict by default, which can result in multiple compilation errors with older packages.

                                    To circumvent such hardships, the binding configuration schema allows us to tinker with the install process by specifying command line arguments and environment variables (picked up by npm install). You can put such install logic under the settings attribute.

                                    sut:\n  fast-ledger:\n    1.0:\n      packages: ['fast-ledger-sdk@1.0.0', 'comm-lib@1.0.0']\n      settings:\n      # compiling older comm-lib on newer Node.js version\n      - versionRegexp: '^((?!v8\\.).)*$'\n        env:\n          CXXFLAGS: '-Wno-error=class-memaccess'\n          CFLAGS: '-Wno-error=class-memaccess'\n        args: '--build-from-source'\n

                                    The settings attribute is actually an array of potentially applicable settings. Caliper will process them in order and pick the first settings object whose regular expression (versionRegexp) matches the used Node.js version. The example demonstrates a setting that should be applied if a newer Node.js version is used (i.e., the version is not v8.X). If that is the case, then Caliper passes the command line arguments (specified by args) to npm install, and also sets the environment variables specified under env (which will be picked up by npm install and any sub-command executed by it).
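
                                    Conceptually, the selection boils down to a version match. The following is a simplified sketch of the idea, not the actual Caliper implementation; binding stands in for a hypothetical object holding the parsed configuration:

                                    // pick the first settings entry whose regexp matches the running Node.js version\nconst applicable = (binding.settings || []).find(\n    s => new RegExp(s.versionRegexp).test(process.version));\n// applicable.args and applicable.env are then applied to the npm install call\n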

                                    Your connector can use such advanced specification to provide support for a wide range of SUT/SDK versions in multiple environments.

                                    "},{"location":"connectors/writing-connectors/#documenting-the-connector","title":"Documenting the connector","text":"

                                    Providing a proper user manual for your connector is just as important as a quality implementation. Otherwise, users will have a hard time interacting with your connector. We will take the Fabric connector documentation as an example, section by section.

                                    "},{"location":"connectors/writing-connectors/#overview_1","title":"Overview","text":"

                                    You should provide a short summary of your connector. This should include the following:

                                    • The supported SUT type and versions.
                                    • The capabilities of your connector (supported SUT features and limitations).

                                    The overview will lay down the foundation of what users can expect from your connector.

                                    "},{"location":"connectors/writing-connectors/#installing-dependencies","title":"Installing dependencies","text":"

                                    If your connector supports multiple SUT versions through the binding process, then document the necessary steps to bind to a specific version. The binding process is universal for every connector, so a short example should suffice.

                                    However, it can happen that not every SUT feature is supported by every binding. Carefully document the limitations of the affected bindings, and provide some workaround if possible.

                                    "},{"location":"connectors/writing-connectors/#runtime-settings","title":"Runtime settings","text":"

                                    The network configuration file only describes the SUT topology and related artifacts. SUT-agnostic design choices can still arise during the development of a connector. Instead of deciding yourself, you should delegate such choices to the end users utilizing the runtime configuration mechanism of Caliper where possible/meaningful.

                                    Such settings typically affect the operating mode of the connector, but don\u2019t change the overall semantics of the SUT interactions. Be sure to document every available runtime setting for your connector! Also, don\u2019t forget to provide sensible defaults to these settings where possible.

                                    "},{"location":"connectors/writing-connectors/#request-api","title":"Request API","text":"

                                    The main users of your connector will be workload module developers. They will interact with your connector mainly through the [sendRequests](#interface-reference) method. The method accepts either a single settings object or an array of settings objects describing the requests the user wants to send. You have to precisely specify what kind of settings are available for a request (see the sketch after this list). These will typically include:

                                    • The operation to execute on the SUT.
                                    • The arguments of the operation.
                                    • The identity who should submit the request.
                                    • The node(s) to send the request to.
                                    • Differentiation between read-only/write requests.
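
                                    For illustration, a workload module driving our hypothetical fast-ledger connector might assemble a settings object like this (the attribute names are connector-specific assumptions, not a fixed schema):

                                    // inside a workload module method:\nconst request = {\n    operation: 'transfer',       // the operation to execute on the SUT\n    args: ['alice', 'bob', 100], // the arguments of the operation\n    invoker: 'user1',            // the identity submitting the request\n    targets: ['node-1'],         // the node(s) to send the request to\n    readOnly: false              // differentiates read-only and write requests\n};\nawait this.sutAdapter.sendRequests(request);\n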
                                    "},{"location":"connectors/writing-connectors/#gathered-request-data","title":"Gathered request data","text":"

                                    Your connector must report basic execution data towards Caliper to ensure correct reporting. But you are also free to collect any kind of client-side data you have access to. Who knows what data users will find useful. Make sure to document such collected data (both semantics and data types).

                                    "},{"location":"connectors/writing-connectors/#network-configuration-file","title":"Network configuration file","text":"

                                    Probably the most important piece of your documentation is the schema of the network configuration file your connector can process. Try to provide an intuitive structure for defining the network topology, participants, and any required artifacts. You should document the semantics and data types of the different settings. Be sure to document any constraints that could arise between multiple attributes (mutual exclusion, valid values, etc.).

                                    "},{"location":"connectors/writing-connectors/#example-network-configuration","title":"Example network configuration","text":"

                                    Be sure to provide a fully specified and functioning network configuration example. For some, it is easier to absorb a concrete example than reference-style documentation.

                                    "},{"location":"connectors/writing-connectors/#integration-with-caliper","title":"Integration with Caliper","text":"

                                    Once you\u2019ve implemented your connector, you have two choices to integrate it with Caliper:

                                    1. Use it as a 3rd party, pluggable component, which is part of your benchmark project.
                                    2. Contribute your connector to the official Caliper code-base, so it\u2019s always installed together with Caliper.
                                    "},{"location":"connectors/writing-connectors/#3rd-party-connector","title":"3rd party connector","text":"

                                    You can easily plug in your connector dynamically without it being part of the Caliper code-base. The process is the following:

                                    1. Create an index.js file in your project that will export your connector factory. The file provides a clean entry point for your connector:
                                    'use strict';\nmodule.exports.ConnectorFactory = require('./lib/connectorFactory').ConnectorFactory;\n
                                    2. Set the ./fast-ledger/index.js path for the caliper.blockchain attribute in your network configuration file. The path should be relative to the Caliper workspace directory, or an absolute path (not recommended for portability reasons). Caliper will load the module and the factory method from this path.
                                    3. If you support different bindings, then prepare a binding configuration file for your connector.
                                    4. When you launch Caliper, your connector implementation will be picked up through your network configuration file.
                                    5. You can specify your custom binding configuration using, for example, the --caliper-bind-file ./fast-ledger-binding.yaml command line argument that points to your custom file. Don\u2019t forget to also specify the binding itself with --caliper-bind-sut fast-ledger:1.0.

                                    Alternatively, you can set your caliper.blockchain attribute to an NPM package name if you published your connector. In that case, you must ensure that the package is installed in the Caliper workspace directory prior to running the benchmark. The recommended naming convention for packages is caliper-<sut>. For our example, the caliper.blockchain attribute would be set to caliper-fast-ledger.
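
                                    For example, with the package name above, the network configuration file would reference the connector like this:

                                    caliper:\n  blockchain: caliper-fast-ledger\n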

                                    Note

                                    Until Caliper reaches its first major version, it is recommended to version your connector package based on which @hyperledger/caliper-core version you depend on.

                                    "},{"location":"connectors/writing-connectors/#built-in","title":"Built-in","text":"

                                    Note

                                    By contributing a connector to the code-base, you also accept the responsibility of maintaining the connector when needed. Otherwise, it might become stale and deprecated in future releases.

                                    If you would like to expose your connector to a wider user-base, then you should contribute your code to the official Caliper repository, so your connector becomes a built-in module, immediately available when someone installs Caliper.

                                    Note

                                    Don\u2019t hesitate to reach out to the project maintainers on Rocket.Chat (#caliper-contributors channel) who will help you with the integration.

                                    The integration consists of the following steps (for an example, see the [caliper-ethereum](https://github.com/hyperledger-caliper/caliper/tree/v0.6.0/packages/caliper-ethereum) connector):

                                    1. Create a caliper-fast-ledger directory in the packages directory of the repository. This will contain your connector implementation.
                                    2. Update your metadata in your own package.json file accordingly. The package name should be scoped: @hyperledger/caliper-fast-ledger.
                                    3. If your connector supports binding, then you should list the dynamic packages in the devDependencies section, so they\u2019re not automatically installed with Caliper (since the users will rebind it anyway). Also, add your connector\u2019s binding specifications to the built-in binding configuration file.
                                    4. Add your new directory path to the root lerna.json file, under the packages section. This will ensure that your package is bootstrapped correctly for other developers (and for testing, publishing, etc.).
                                    5. Add your new package (by name) to the Caliper CLI dependencies.
                                    6. List your connector as a built-in connector in the caliper-utils.js module, under the BuiltinConnectors variable:
                                      const BuiltinConnectors = new Map([\n ['fast-ledger', '@hyperledger/caliper-fast-ledger'],\n // other connectors...\n]);\n
                                    7. It is highly recommended to provide integration tests for your connector.
                                    8. Make sure that every code-related artifact (mostly .js, .yaml and .md files) contains the appropriate license header!
                                    9. And you\u2019re done! Now users can refer to the connector as fast-ledger in their network configuration files. The connector package will be published automatically upon every merged PR.
                                    "},{"location":"connectors/writing-connectors/#license","title":"License","text":"

                                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                    "},{"location":"getting-started/architecture/","title":"Architecture","text":""},{"location":"getting-started/architecture/#overview","title":"Overview","text":"

                                    Caliper is a general framework for executing benchmarks against different blockchain platforms. Caliper was designed with scalability and extensibility in mind to easily integrate with today\u2019s popular monitoring and infrastructure solutions. Accordingly, the architecture of Caliper can seem a bit complicated at first.

                                    This page aims to gradually ease you into the intricacies of Caliper\u2019s architecture, taking one step at a time. By the end of this page, you should be familiar with the general concepts and API of Caliper. As you read along, you will find references to other, more technical documentation pages. Feel free to explore them once you are familiar with the basic building blocks of Caliper.

                                    "},{"location":"getting-started/architecture/#birds-eye-view","title":"Bird\u2019s eye view","text":"

                                    In its simplest form, Caliper is a service that generates a workload against a specific system under test (SUT) and continuously monitors its responses. Finally, Caliper generates a report based on the observed SUT responses. This simplistic view is depicted in the following figure.

                                    Caliper requires several inputs to run a benchmark, independently of the used SUT. The following subsections give a brief overview of these inputs.

                                    "},{"location":"getting-started/architecture/#benchmark-configuration-file","title":"Benchmark configuration file","text":"

                                    The benchmark configuration file describes how the benchmark should be executed. It tells Caliper how many rounds it should execute, at what rate the TXs should be submitted, and which module will generate the TX content. It also includes settings about monitoring the SUT.

                                    You can consider this file as the \u201cflow orchestrator\u201d of the benchmark. For the most part, the settings are independent of the SUT, so you can easily reuse them when performing multiple benchmarks against different SUT types or versions.
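
                                    As a rough sketch, such a file could look like the following (the round content is illustrative, not a complete reference):

                                    test:\n  name: fast-ledger-benchmark\n  workers:\n    number: 2\n  rounds:\n  - label: transfer-round\n    txNumber: 1000\n    rateControl:\n      type: fixed-rate\n      opts:\n        tps: 50\n    workload:\n      module: workloads/transfer.js\n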

                                    Note

                                    For a more technical introduction to the benchmark configuration file, see the corresponding page.

                                    "},{"location":"getting-started/architecture/#network-configuration-file","title":"Network configuration file","text":"

                                    The content of the network configuration file is SUT-specific. The file usually describes the topology of the SUT, where its nodes are (their endpoint addresses), what identities/clients are present in the network, and what smart contracts Caliper should deploy or interact with.

                                    For the exact structure of the network configuration files, refer to the corresponding SUT connector documentations (we will discuss connectors a bit later on this page):

                                    • Hyperledger Besu & Ethereum
                                    • Hyperledger Fabric
                                    "},{"location":"getting-started/architecture/#workload-modules","title":"Workload modules","text":"

                                    Workload modules are the brain of a benchmark. Since Caliper is a general benchmark framework, it does not include any concrete benchmark implementation. When Caliper schedules TXs for a given round, it is the task of the round\u2019s workload module to generate the content of the TXs and submit it. Each round can have a different associated workload module, so separating your workload implementation based on phases/behavior should be easy.

                                    Workload modules are simply Node.JS modules that must export a given factory function. Other than that, the workload module logic can be arbitrary. Really, anything you can code in Node.JS.
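
                                    A minimal sketch of such a module, using the WorkloadModuleBase utility class exported by @hyperledger/caliper-core (the TX content is illustrative):

                                    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass TransferWorkload extends WorkloadModuleBase {\n    async submitTransaction() {\n        // assemble the TX content and submit it through the connector\n        await this.sutAdapter.sendRequests({\n            operation: 'transfer',\n            args: ['alice', 'bob', 10]\n        });\n    }\n}\n\n// the factory function that every workload module must export\nmodule.exports.createWorkloadModule = () => new TransferWorkload();\n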

                                    Note

                                    For a more technical introduction to workload modules, see the corresponding page.

                                    "},{"location":"getting-started/architecture/#benchmark-artifacts","title":"Benchmark artifacts","text":"

                                    There might be additional artifacts necessary to run a benchmark that can vary between different benchmarks and runs. These usually include the following:

                                    • Crypto materials necessary to interact with the SUT.
                                    • Smart contract source code for Caliper to deploy (if the SUT connector supports such operation).
                                    • Runtime configuration files.
                                    • Pre-installed third party packages for your workload modules.

                                    Refer to the SUT connector configuration pages for the additional necessary artifacts.

                                    Note

                                    From here on out, we will refer to the introduced Caliper inputs simply as benchmark artifacts and denote them with the database symbol seen in the first figure.

                                    "},{"location":"getting-started/architecture/#multi-platform-support","title":"Multi-platform support","text":"

                                    Before we further dive into the architecture of Caliper, let\u2019s see how Caliper can support multiple SUT types. Caliper uses connector modules to hide the peculiarities of different SUT types and provide a unified interface towards the Caliper (and external) modules.

                                    A SUT connector provides a simplified interface towards internal Caliper modules, as well as towards the workload modules. Accordingly, Caliper can request the execution of simple things, like \u201cinitialize the connector/SUT\u201d, and the connector implementation will take care of the rest. The exact tasks to perform during the initialization are often determined by the content of the network configuration file (and by the remote administrative actions the SUT supports).

                                    Note

                                    For the technical details of how to implement a connector, refer to the corresponding page.

                                    "},{"location":"getting-started/architecture/#caliper-processes","title":"Caliper processes","text":"

                                    Caliper considers scalability one of its most important goals (besides extensibility/flexibility). Workload generation from a single machine can quickly reach the resource limitations of the machine. If we want the workload rate to match the scalability and performance characteristics of the evaluated SUT then we need a distributed approach!

                                    Accordingly, Caliper (as a framework) consists of two different services/processes: a manager process and numerous worker processes.

                                    • The manager process initializes the SUT (if supported), coordinates the run of the benchmark (i.e., schedules the configured rounds), and handles the performance report generation based on the observed TX statistics.
                                    • The worker processes perform the actual workload generation, independently of each other. Even if a worker process reaches the limits of its host machine, using more worker processes (on multiple machines) can further increase the workload rate of Caliper. Thus worker processes are the backbone of Caliper\u2019s scalability.

                                    The described setup is illustrated in the next figure.

                                    Note

                                    For the time being, we will ignore the technical details of the distributed architecture, like the messaging between the processes. We will come back to it in a later section.

                                    "},{"location":"getting-started/architecture/#the-manager-process","title":"The manager process","text":"

                                    The Caliper manager process is the orchestrator of the entire benchmark run. It goes through several predefined stages as depicted by the figure below.

                                    1. In the first stage, Caliper executes the startup script (if present) from the network configuration file. This step is mainly useful for local Caliper and SUT deployments as it provides a convenient way to start the network and Caliper in one step.

                                    Note

                                    The deployment of the SUT is not the responsibility of Caliper. Technically, Caliper only connects to an already running SUT, even if it was started through the startup script.

                                    2. In the second stage, Caliper initializes the SUT. The tasks performed here are highly dependent on the capabilities of the SUT and the SUT connector. For example, the Hyperledger Fabric connector uses this stage to create/join channels and register/enroll new users.
                                    3. In the third stage, Caliper deploys the smart contracts to the SUT, if the SUT and the connector support such operation (like with the Hyperledger Fabric connector).
                                    4. In the fourth stage, Caliper schedules and executes the configured rounds through the worker processes. This is the stage where the workload generation happens (through the workers!).
                                    5. In the last stage, after executing the rounds and generating the report, Caliper executes the cleanup script (if present) from the network configuration file. This step is mainly useful for local Caliper and SUT deployments as it provides a convenient way to tear down the network and any temporary artifacts.

                                    If your SUT is already deployed and initialized, then you only need Caliper to execute the rounds and nothing else. Luckily, you can configure, stage by stage, whether it should be executed or not. See the flow control settings for details.

                                    The above figure only shows the high-level steps of executing a benchmark. Some components are omitted for the sake of simplicity, like the resource and transaction monitor components. To learn more about the purpose and configuration of these components, refer to the Resource and Transaction Monitors documentation page.

                                    "},{"location":"getting-started/architecture/#the-worker-process","title":"The worker process","text":"

                                    The interesting things (from a user perspective) happen inside the worker processes. A worker process starts its noteworthy tasks when the manager process sends a message to it about executing the next round (the 4th step in the previous section). The important components of a worker process are shown in the figure below.

                                    The worker process spends most of its time in the workload generation loop. The loop consists of two important steps:

                                    1. Waiting for the rate controller to enable the next TX. Think of the rate controller as a delay circuit. Based on what kind of rate controller is used, it delays/halts the execution of the worker (in an asynchronous manner) before enabling the next TX. For example, if a fixed 50 TXs per second (TPS) rate is configured, the rate controller will halt for 20ms between each TX.

                                    Note

                                    The rate controllers of each round can be configured in the benchmark configuration file. For the available rate controllers, see the Rate Controllers page.

                                    2. Once the rate controller enables the next TX, the worker gives control to the workload module. The workload module assembles the parameters of the TX (specific to the SUT and smart contract API) and calls the simple API of the SUT connector that will, in turn, send the TX request to the SUT (probably using the SDK of the SUT).

                                    Note

                                    The workload modules of each round can be configured in the benchmark configuration file. For the technical details of workload modules, see the Workload Modules page.
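
                                    The two steps can be pictured as follows (a conceptual sketch, not the actual Caliper worker implementation; hasMoreTxToSchedule stands in for the scheduling condition of the round):

                                    // conceptual sketch of the workload generation loop\nwhile (hasMoreTxToSchedule()) {\n    await rateController.applyRateControl(); // step 1: e.g., ~20 ms delay for a fixed 50 TPS rate\n    workloadModule.submitTransaction();      // step 2: assemble the TX and call the connector\n}\n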

                                    During the workload loop, the worker process sends progress updates to the manager process. Progress reporting on the manager side can be enabled and configured with the caliper-progress-reporting-enabled and caliper-progress-reporting-interval setting keys. For details, see the Basic Runtime Settings.

                                    "},{"location":"getting-started/architecture/#process-distribution-models","title":"Process distribution models","text":"

                                    The last part of the architecture discussion is demystifying the worker process management. Based on how worker processes are started and what messaging method is used between the manager and worker processes, we can distinguish the following distribution/deployment models:

                                    1. Automatically spawned worker processes on the same host, using interprocess communication (IPC) with the manager process.
                                    2. Automatically spawned worker processes on the same host, using a remote messaging mechanism with the manager process.
                                    3. Manually started worker processes on an arbitrary number of hosts, using a remote messaging mechanism with the manager process.

                                    Even though the third method is the way to go for more complex scenarios, the first two methods can help you get familiar with Caliper, and gradually aid you with the transition to the third method.

                                    "},{"location":"getting-started/architecture/#modular-message-transport","title":"Modular message transport","text":"

                                    The different deployment approaches are made possible by how Caliper handles messaging internally, as shown by the following figure.

                                    The internal Caliper modules only deal with predefined messages whose content is independent of how the messages are sent. The module that sends the messages between the processes is swappable, thus enabling different communication methods.

                                    The deployment model is configurable with the following two setting keys:

                                    • caliper-worker-remote: if set to false (the default), then the manager process will spawn the required number of worker processes locally, resulting in model 1 or 2.
                                    • caliper-worker-communication-method: can take the values process (the default) or mqtt and determines the message transport implementation to use. The process communication corresponds to the first model, while mqtt denotes models 2 and 3.

                                    The following list summarizes the different models and how to select them:

                                    • remote = false, method = process: Model 1, interprocess communication with local workers
                                    • remote = false, method = mqtt: Model 2, remote messaging-based communication with local workers
                                    • remote = true, method = mqtt: Model 3, remote messaging-based communication with remote workers
                                    • remote = true, method = process: invalid, since IPC does not apply to remote communication
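
                                    For example, remote workers for model 3 could be launched like this (assuming the setting keys above are passed as command line arguments):

                                    caliper launch worker \\\n    --caliper-worker-remote true \\\n    --caliper-worker-communication-method mqtt\n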

                                    Note

                                    For the technical details on configuring the messaging transport, see the Messengers page.

                                    "},{"location":"getting-started/architecture/#interprocess-communication","title":"Interprocess communication","text":"

                                    The examples on the Install & Usage page all use the IPC approach since it is the default behavior. The setup is illustrated in the figure below.

                                    The caliper launch manager CLI command starts the manager process, which in turn will automatically spawn the configured number of worker processes (using the caliper launch worker CLI command). The communication between the processes is IPC, utilizing the built-in Node.JS method available for the parent-children process relationships.
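
                                    A typical invocation looks like this (the workspace and configuration paths are illustrative):

                                    caliper launch manager \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/config.yaml \\\n    --caliper-networkconfig networks/network-config.yaml\n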

                                    This is the simplest deployment model for Caliper, requiring no additional configuration or third-party messaging components. Accordingly, it is ideal when you first start using Caliper, or when you are still assembling the benchmark artifacts for your project and just quickly want to test them.

                                    Unfortunately, this model is constrained to a single host, and thus suffers from scalability limitations: only vertical scaling of the host is possible.

                                    "},{"location":"getting-started/architecture/#local-message-broker-communication","title":"Local message broker communication","text":"

                                    As a stepping stone towards the fully-distributed setup, the second deployment model replaces IPC with a third party messaging solution, while still hiding the worker process management from the user. The setup is illustrated in the figure below.

                                    Like before, the caliper launch manager CLI command starts the manager process, which in turn will automatically spawn the configured number of worker processes (using the caliper launch worker CLI command). However, the messaging happens through a separate component, which could be deployed anywhere as long as its endpoint is reachable by the Caliper processes.

                                    Unfortunately, this model is also constrained to a single host from the perspective of the Caliper processes. However, it is a useful model for taking your deployment to the next level once your benchmark artifacts are in place. Once you have successfully integrated the messaging component, you are ready to move to the fully distributed Caliper setup.

                                    "},{"location":"getting-started/architecture/#distributed-message-broker-communication","title":"Distributed message broker communication","text":"

                                    When you take the management of the worker processes into your own hands, that\u2019s when the full potential of Caliper is unlocked. At this point, you can start as many workers on as many hosts as you would like, using the caliper launch worker CLI command. The setup is illustrated in the figure below.

                                    The fully distributed deployment enables the horizontal scaling of the worker processes, greatly increasing the achievable workload rate. To ease the management of the many Caliper processes, you will probably utilize some automatic deployment/management solution, like Docker Swarm or Kubernetes. Luckily, the flexibility of the Caliper Docker image makes such integration painless.

                                    However, there are some caveats you have to keep in mind:

                                    1. Distributing the necessary benchmark artifacts to the Caliper processes is your responsibility. Different infrastructure solutions provide different means for this, so check your favorite vendor\u2019s documentation.
                                    2. Setting up proper networking in distributed systems is always a challenge. Make sure that the Caliper processes can access the configured messaging component and the SUT components.
                                    3. A single host may run multiple Caliper worker processes. When planning the worker distribution (or setting resource requirements for container management solutions) make sure that enough resources are allocated for workers to keep the configured TX scheduling precision.
                                    "},{"location":"getting-started/architecture/#license","title":"License","text":"

                                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                    "},{"location":"getting-started/contributing/","title":"Contributing to Hyperledger Caliper","text":"

                                    Welcome to the Hyperledger Caliper project; we are excited about the prospect of you contributing.

                                    This guideline intends to make contributions to Caliper easier by:

                                    • presenting a simple development workflow for contributors to follow;
                                    • and providing a high-level description of the repository components.

                                    If you have further suggestions about improving the guideline, then you can follow the presented workflow to make your contribution.

                                    "},{"location":"getting-started/contributing/#overview","title":"Overview","text":"

                                    The project uses GitHub to manage issues (bug reports and feature requests) and contributions (in the form of pull requests).

                                    For general queries and discussion, please use the #caliper channel on the Hyperledger Discord Server (Discord Id required) or the Caliper mailing list (LFID recommended).

                                    The contribution process boils down to two major steps: opening an issue and submitting a pull request (PR). Opening issues before PRs serves the following purposes:

                                    • Documenting bugs, related error logs, potential fixes, and/or workarounds that users can find using their favorite search engine.
                                    • Providing a forum for discussions where both contributors and users can weigh in about new features and their potential design.
                                    • Enabling easy traceability of contributions through the \"Commit \u2192 PR \u2192 Issue\" reference chain.

                                    Opening issues can be omitted only in the case of trivial and small fixes (e.g., typo fixes, simple documentation enhancements, etc.).

                                    "},{"location":"getting-started/contributing/#opening-issues","title":"Opening Issues","text":"

                                    Before opening an issue, make sure that:

                                    1. You read the documentation carefully, so the observed error does not stem from incorrect Caliper configuration or usage.
                                    2. You searched older issues (or other forums) for your question, maybe it is already answered/fixed.
                                    3. It is worth asking around on Discord; maybe other users have already encountered your issue/task and managed to solve it.

                                    When opening an issue about a potential bug or feature request, make sure to properly fill and format the issue contents!

                                    Choose the issue template that suits your intent (bug report or feature request), then fill out the form as best as you can.

                                    If you find the available issue templates too constraining, then you can still use the \"blank\" issue template for now (it will be deprecated in the future), and also let us know how we can improve the issue templates.

                                    The details of the blank template should be filled according to the following guideline:

                                    1. Issue title: Should be a concise sentence summarising the details below, including which component or part of the benchmarking process is affected. For example: Fabric contract deployment silently fails in generator CI test
                                    2. Context: A detailed description of the context of the issue. Should include information about, for example, how you encountered the issue, what you were trying to achieve, why you consider this a bug, and how it affected your work with Caliper.
                                    3. Expected Behavior: What was your expected outcome/behavior of the Caliper run?
                                    4. Actual Behavior: What was your actually observed outcome/behavior instead of the expected one?
                                    5. Possible Fix: If you have already identified the source of the issue, you can also propose a possible fix for it. It does not necessarily have to be working code; some general steps/ideas for a fix are also appreciated.
                                    6. Steps to Reproduce: This is perhaps the most important part of an issue. Since Caliper is highly configurable and can interact with multiple SUTs, it is crucial for the maintainers to know the exact steps to reproduce an issue. Always try to provide (or just describe) a minimal working example (MWE) that can reproduce the issue. Also, please attach the following information to the issues whenever possible (preferably using collapsible code blocks or GitHub gists):
                                      • The benchmark configuration file content.
                                      • The network configuration file content.
                                      • The workload module implementation.
                                      • The exact error logs (and their surroundings).
                                      • Any other information you deem necessary.
                                    7. Existing Issues: You can also provide links to similar (or the same) issues in other forums (GitHub issues, StackOverflow questions or Discord messages). Such cross-linking can help us ensure that we can broadcast the potential fixes to multiple instances of the question.
                                    8. Your Environment: Always include your execution environment information, including the used version/commit of Caliper, the operating system, Node.JS version, or any relevant information about your project that uses Caliper.
                                    "},{"location":"getting-started/contributing/#submitting-pull-requests","title":"Submitting Pull Requests","text":""},{"location":"getting-started/contributing/#basic-workflow","title":"Basic workflow","text":"

                                    The following workflow should make your contribution process clean and straightforward (some deviations might be necessary in exceptional cases):

                                    The following list assumes that you use the git command line tool. IDEs and graphical git tools should also expose the same commands if you prefer those.

                                    1. Fork the Caliper repository. This needs to be done only once.
                                    2. Clone the fork repository to your local machine (ideally done once):
                                      • Using the HTTPS method: git clone https://github.com/<username>/caliper.git
                                      • or using the SSH method: git clone git@github.com:<username>/caliper.git
                                    3. Add the upstream/original Caliper repository as a remote, using the name upstream (ideally done once). This will allow you to easily sync your fork with the original repository.
                                      • git remote add upstream https://github.com/hyperledger-caliper/caliper.git

                                    The following steps follow the \"feature branch\" development practice, and should be performed for each of your contributions:

                                    1. Checkout your main branch that will be the starting point/state of your contribution.
                                      • git checkout main
                                      • or git checkout -b main if the branch does not exist locally (only during your first checkout)
                                    2. Retrieve the new updates from the upstream Caliper repository
                                      • git fetch upstream (downloads the updates locally)
                                      • then git rebase upstream/main (\"adds\" the updates to your local main branch)

                                        This step is crucial to ensure that you always contribute based on the latest Caliper version, minimizing the chance of version control conflicts!

                                    3. (Optional) Push your updated main branch to your remote repository just to keep things in sync
                                      • git push
                                    4. Create a new feature branch (named my-bug-fix) from the updated main branch:
                                      • git checkout -b my-bug-fix
                                      • then git push --set-upstream origin my-bug-fix to push the new branch to your remote repository
                                    5. Implement and test your contribution
                                      • The .build/checks-and-unit-tests.sh script runs the basic tests for the repository
                                      • The BENCHMARK=<test_name> .build/benchmark-integration-test-direct.sh script can run one of the CI integration tests, depending on the value of <test_name>:
                                        • BESU
                                        • ETHEREUM
                                        • FABRIC
                                        • GENERATOR

                                          Make sure that the tests pass locally before pushing your changes and opening a PR!

                                    6. Commit your changes to your local feature branch, adding the DCO sign-off:
                                      • git commit -s -m \"Commit message\"
                                      • or git commit -s (without the -m parameter) to make git open up the configured text editor where you can write a detailed, multi-line commit message:
                                        • The first line is a concise description of the commit (its purpose, what it fixes, etc.).
                                        • After a blank line, you can go into details about the changes of the commit.
                                    7. Push your changes to your remote branch:
                                      • git push
                                    8. Open a PR using the GitHub webpage.
                                    "},{"location":"getting-started/contributing/#updating-prs","title":"Updating PRs","text":"

                                    It is possible that the maintainers/reviewers request some changes before your PR can be merged. In that case, just add your changes to your feature branch using a new signed commit (based on the above workflow), then push it to your remote branch. The PR will automatically pick up the new commit.

                                    The maintainers can/will squash commits before merging to keep a simpler commit history for the project.

                                    "},{"location":"getting-started/contributing/#how-to-contribute","title":"How to contribute","text":"

                                    We are using GitHub issues for bug reports and feature requests.

                                    If you find a bug in the source code or have a trivial change (such as a typo fix or a minor feature), you can raise an issue or deliver a fix via a pull request directly.

                                    If you have any enhancement suggestions, want to help extend Caliper with more DLTs, or have any other major changes in mind, please start by opening an issue first. That way, relevant parties (e.g., maintainers or main contributors of the relevant subsystem) can have a chance to look at it before you do any work.

                                    All PRs must get at least one review; you can ask hyperledger/caliper-committers for a review. Normally we will review your contribution within one week. If you haven\u2019t heard from anyone within a week, feel free to @ or email a maintainer to review it.

                                    All PRs must be signed off before being merged; be sure to use git commit -s to commit your changes.

                                    If a PR is reviewed and changes are requested, please do not force push the changes; push the changes as a new commit instead. This makes it easier to see the difference between the previously reviewed code and the new changes.

                                    We use GitHub Actions to test the build; please test on your local branch before raising a PR.

                                    There is also a Caliper channel on Discord for communication; anybody is welcome to join.

                                    "},{"location":"getting-started/contributing/#caliper-components","title":"Caliper Components","text":"

                                    The project is maintained as a Node.js monorepository. Accordingly, it can seem overwhelming at first to navigate its content. The following sections list and introduce the main components of the repository.

                                    "},{"location":"getting-started/contributing/#installing-the-caliper-code-base","title":"Installing the Caliper code base","text":"

                                    Note

                                    this section is intended only for developers who would like to modify the Caliper code-base and experiment with the changes locally before raising pull requests. You should perform the following steps every time you make a modification you want to test, to correctly propagate any changes.

                                    The workflow of modifying the Caliper code-base usually consists of the following steps:

                                    1. Bootstrapping the repository
                                    2. Modifying and testing the code
                                    3. Publishing package changes locally
                                    4. Building the Docker image
                                    "},{"location":"getting-started/contributing/#bootstrapping-the-caliper-repository","title":"Bootstrapping the Caliper repository","text":"

                                    To install the basic dependencies of the repository, and to resolve the cross-references between the different packages in the repository, you must execute the following commands from the root of the repository directory:

                                    1. ./.build/check-prerequisites.sh: Checks the version of Node and NPM in the system and warns if the versions are not compatible.

                                    2. npm ci: Installs the dependencies of the repository from the package-lock.json file.

                                    Or as a one-liner:

                                    user@ubuntu:~/caliper$ ./.build/check-prerequisites.sh && npm ci\n

                                    Note

                                    do not run any of the above commands with sudo, as it will cause the bootstrap process to fail.

                                    "},{"location":"getting-started/contributing/#publicpublished-packages","title":"Public/published packages","text":"

                                    The packages directory contains the following public/published packages:

                                    • caliper-cli: The command line interface (CLI) of Caliper.
                                    • caliper-core: The core and common codebase of Caliper, used by the other packages.
                                    • caliper-ethereum: The Ethereum and Hyperledger Besu connector implementation.
                                    • caliper-fabric: The Hyperledger Fabric connector implementation.
                                    • generator-caliper: The Yeoman generator for Caliper configuration files.
                                    "},{"location":"getting-started/contributing/#internal-packages","title":"Internal packages","text":"

                                    The packages directory contains the following internal packages:

                                    • caliper-publish: Utility CLI for publishing Caliper to NPM and DockerHub.
                                    • caliper-tests-integration: Collection of CI integration tests.
                                    "},{"location":"getting-started/contributing/#testing-methodologies","title":"Testing Methodologies","text":"

                                    This section outlines the testing methodologies that this project follows, including both unit-level and integration-level testing.

                                    "},{"location":"getting-started/contributing/#unit-level-testing","title":"Unit Level Testing","text":"

                                    Unit testing focuses on testing individual packages in isolation. The tests are typically located within a test folder at the same level as the corresponding lib folder, mirroring the structure of the lib folder for easy navigation. This setup ensures that each component of the code is verified independently, confirming that it behaves as expected without interference from other parts of the system.

                                    "},{"location":"getting-started/contributing/#unit-testing-dependencies-and-their-use","title":"Unit Testing Dependencies and Their Use","text":"

                                    The following are the recommended testing modules for this project. While these tools are commonly used in the current codebase, other appropriate tools may be used as long as they facilitate effective testing.

                                    • mockery: Mockery is a simple module for mocking Node.js modules during testing. It allows you to replace real modules with mocks or stubs.

                                    • mocha: Mocha is a feature-rich JavaScript test framework that runs on Node.js and in the browser. It facilitates asynchronous testing, making it easy to write simple and flexible tests.

                                    • chai: Chai is a BDD/TDD assertion library for Node.js and the browser. It can be paired with any JavaScript testing framework. We use it to create readable and expressive assertions.

• sinon: Sinon provides standalone test spies, stubs, and mocks for JavaScript. It works with any test framework and integrates well with Mocha and Chai. We utilize Sinon for checking how functions are called during testing.

                                    • sinon-chai: This library provides a set of custom assertions for using Sinon with Chai. It allows you to write more readable assertions for Sinon spies, stubs, and mocks.

                                    • nyc: NYC is a command-line utility for generating code coverage reports. It is often used with Mocha to ensure that tests cover as much code as possible.

                                    "},{"location":"getting-started/contributing/#mandatory-tools-in-the-testing-pipeline","title":"Mandatory Tools in the Testing Pipeline","text":"

                                    In addition to the testing frameworks, the following tools are mandatory for all testing pipelines:

                                    • eslint: ESLint is a static code analysis tool for identifying problematic patterns in JavaScript code. It is essential for maintaining code quality.
                                    • license-check-and-add: This tool ensures that all files in the codebase contain the required license headers. It is mandatory for all code submissions.
                                    "},{"location":"getting-started/contributing/#points-to-note-for-adding-a-conforming-unit-test","title":"Points to Note for Adding a Conforming Unit Test","text":"

When writing unit tests, the following structure and practices are mandatory (a minimal example follows the list):

                                    1. License Header: All test files must include the project's license header.
                                    2. 'use strict' Directive: Ensure strict mode is enabled in all test files.
                                    3. Test Organization:
                                      • Use describe blocks to group related test cases.
                                      • Use it statements for individual test cases.
                                      • Nested describe blocks are encouraged for organizing complex test scenarios.
                                    4. Consistent Test Naming: Test descriptions should flow naturally, making it clear what behavior is being tested (e.g., 'should return the correct value when input is valid').
                                    5. Mocking Guidance: Be cautious with mocks that persist across tests. Always clean up after each test to avoid unexpected behavior.
                                    6. Test Patterns: Refer to the Fabric Unit tests for examples of recommended patterns and best practices.
                                    7. Final Checks: Always run all unit tests before submitting a PR and ensure no .only is left in the code, which would skip other tests.
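Putting these points together, the following minimal sketch (the component and behavior under test are hypothetical) shows the expected shape of a conforming test file:

/*\n * Licensed under the Apache License, Version 2.0.\n * (Insert the full project license header here.)\n */\n\n'use strict';\n\nconst chai = require('chai');\nconst sinon = require('sinon');\nconst sinonChai = require('sinon-chai');\nchai.use(sinonChai);\nchai.should();\n\ndescribe('MyComponent', () => {\n    afterEach(() => {\n        // Restore stubs and spies so no mock state leaks across tests\n        sinon.restore();\n    });\n\n    describe('#doWork', () => {\n        it('should return the correct value when input is valid', () => {\n            const doWork = sinon.stub().returns(42);\n            doWork('valid input').should.equal(42);\n            doWork.should.have.been.calledOnce;\n        });\n    });\n});\n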
                                    "},{"location":"getting-started/contributing/#integration-level-testing","title":"Integration Level Testing","text":"

                                    Integration testing ensures that Caliper integrates correctly with various packages, effectively testing the functionality of the package itself. These tests are organized within the caliper-tests-integration folder, with each test suite dedicated to a specific package or module.

                                    "},{"location":"getting-started/contributing/#testing-the-code","title":"Testing the code","text":"

                                    Caliper has both unit tests and integration tests.

Unit tests can be run using npm test either in the root of the Caliper source tree (to run them all) or within a specific package (e.g., caliper-fabric) to run just the tests within that package.
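For example, to run only the Fabric connector's unit tests:

user@ubuntu:~/caliper$ cd packages/caliper-fabric\nuser@ubuntu:~/caliper/packages/caliper-fabric$ npm test\n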

                                    To run the integration tests for a specific SUT, use the following script from the root directory of the repository, setting the BENCHMARK environment variable to the platform name:

                                    user@ubuntu:~/caliper$ BENCHMARK=fabric ./.build/benchmark-integration-test-direct.sh\n

                                    The following platform tests (i.e., valid BENCHMARK values) are available: besu, ethereum, fabric.

                                    A PR must pass all unit and integration tests.

                                    If you would like to run other examples, then you can directly access the CLI in the packages/caliper-cli directory, without publishing anything locally.

                                    user@ubuntu:~/caliper$ node ./packages/caliper-cli/caliper.js launch manager \\\n    --caliper-workspace ~/caliper-benchmarks \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n
                                    "},{"location":"getting-started/contributing/#creating-new-sut-connectors","title":"Creating New SUT Connectors","text":"

                                    Connectors are relatively heavy components in Caliper. Before you attempt to create a new connector for a new SUT type, consult with the Caliper maintainers (on Discord, or in the form of a feature request).

More importantly, make sure that you are thoroughly familiar with the documentation page about implementing new connectors.

                                    "},{"location":"getting-started/contributing/#publishing-to-local-npm-repository","title":"Publishing to local NPM repository","text":"

                                    The NPM publishing and installing steps for the modified code-base can be tested through a local NPM proxy server, Verdaccio. The steps to perform are the following:

                                    1. Start a local Verdaccio server to publish to
                                    2. Publish the packages from the Caliper repository to the Verdaccio server
                                    3. Install and bind the CLI from the Verdaccio server
                                    4. Run the integration tests or any sample benchmark

The packages/caliper-publish directory contains an internal CLI that makes it easy to manage these steps, so the commands in the following sections must be executed from the packages/caliper-publish directory:

                                    user@ubuntu:~/caliper$ cd ./packages/caliper-publish\n

                                    Note

                                    use the --help flag for the following CLI commands and sub-commands to find out more details.

                                    "},{"location":"getting-started/contributing/#starting-verdaccio","title":"Starting Verdaccio","text":"

To set up and start a local Verdaccio server, run the following command:

user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js verdaccio start\n...\n[PM2] Spawning PM2 daemon with pm2_home=.pm2\n[PM2] PM2 Successfully daemonized\n[PM2] Starting /home/user/projects/caliper/packages/caliper-publish/node_modules/.bin/verdaccio in fork_mode (1 instance)\n[PM2] Done.\n| App name  | id | mode | pid    | status | restart | uptime | cpu | mem       | user   | watching |\n|-----------|----|------|--------|--------|---------|--------|-----|-----------|--------|----------|\n| verdaccio | 0  | fork | 115203 | online | 0       | 0s     | 3%  | 25.8 MB   | user   | disabled |\n\nUse `pm2 show <id|name>` to get more details about an app\n

                                    The Verdaccio server is now listening on the following address: http://localhost:4873

                                    "},{"location":"getting-started/contributing/#publishing-the-packages","title":"Publishing the packages","text":"

                                    Once Verdaccio is running, you can run the following command to publish every Caliper package locally:

                                    user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js npm --registry \"http://localhost:4873\"\n...\n+ @hyperledger/caliper-core@0.6.1-unstable-20240422122901\n[PUBLISH] Published package @hyperledger/caliper-core@0.6.1-unstable-20240422122901\n...\n+ @hyperledger/caliper-fabric@0.6.1-unstable-20240422122901\n[PUBLISH] Published package @hyperledger/caliper-fabric@0.6.1-unstable-20240422122901\n...\n+ @hyperledger/caliper-cli@0.6.1-unstable-20240422122901\n[PUBLISH] Published package @hyperledger/caliper-cli@0.6.1-unstable-20240422122901\n

Take note of the dynamic version number in the logs; you will need it to install your modified Caliper version from Verdaccio (the unstable tag is also present on NPM, so Verdaccio would probably pull that version instead of your local one).

                                    Since the published packages include a second-precision timestamp in their versions, you can republish any changes immediately without restarting the Verdaccio server and without worrying about conflicting packages.

                                    "},{"location":"getting-started/contributing/#running-package-based-tests","title":"Running package-based tests","text":"

                                    Once the packages are published to the local Verdaccio server, we can use the usual NPM install approach. The only difference is that now we specify the local Verdaccio registry as the install source instead of the default, public NPM registry:

                                    user@ubuntu:~/caliper-benchmarks$ npm install --registry=http://localhost:4873 --only=prod \\\n    @hyperledger/caliper-cli@0.6.1-unstable-20240422122901\nuser@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:fabric-gateway\nuser@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                                    Note

                                    we used the local registry only for the Caliper packages. The binding happens through the public NPM registry. Additionally, we performed the commands through npx and the newly installed CLI binary (i.e., not directly calling the CLI code file).

                                    "},{"location":"getting-started/contributing/#building-the-docker-image","title":"Building the Docker image","text":"

                                    Once the modified packages are published to the local Verdaccio server, you can rebuild the Docker image. The Dockerfile is located in the packages/caliper-publish directory.

                                    To rebuild the Docker image, execute the following:

                                    user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js docker\n...\nSuccessfully tagged hyperledger/caliper:manager-unstable-20220206065953\n[BUILD] Built Docker image \"hyperledger/caliper:manager-unstable-20220206065953\"\n

                                    Now you can proceed with the Docker-based benchmarking as described in the previous sections.

                                    Note

                                    once you are done with the locally published packages, you can clean them up the following way:

                                    user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js verdaccio stop\n
                                    "},{"location":"getting-started/contributing/#caliper-structure","title":"Caliper Structure","text":"

                                    Caliper is modularised under packages into the following components:

caliper-cli This is the Caliper CLI that enables the running of a benchmark.

                                    caliper-core Contains all the Caliper core code.

caliper-<adapter> Each caliper-<adapter> is a separate package that contains a distinct adapter implementation used to interact with different blockchain technologies. Current adapters include:

                                    • caliper-ethereum
                                    • caliper-fabric

Each adapter extends ConnectorBase from the core package and exports a ConnectorFactory function.

caliper-tests-integration This is the integration test suite used for Caliper; it runs in the Azure pipelines build and can (and should) be run locally when checking code changes. Please see the readme within the package for more details.

                                    "},{"location":"getting-started/contributing/#dependency-management-in-the-monorepo","title":"Dependency management in the monorepo","text":""},{"location":"getting-started/contributing/#npm-version","title":"npm version","text":"

                                    For developing Caliper, the minimum required version of npm is 7.24.2. This is because npm 7 introduced the npm workspaces feature, which is used to manage the Caliper monorepo.

                                    In case the npm version is not sufficient, the following error may be thrown when trying to install the dependencies:

                                    user@ubuntu:~/caliper$ npm install\nnpm ERR! code ENOTSUP\nnpm ERR! notsup Unsupported engine for caliper@0.6.1-unstable: wanted: {\"node\":\">=18.19.0\",\"npm\":\">=7.24.2\"} (current: {\"node\":\"14.21.3\",\"npm\":\"6.14.18\"})\nnpm ERR! notsup Not compatible with your version of node/npm: caliper@0.6.1-unstable\nnpm ERR! notsup Not compatible with your version of node/npm: caliper@0.6.1-unstable\nnpm ERR! notsup Required: {\"node\":\">=18.19.0\",\"npm\":\">=7.24.2\"}\nnpm ERR! notsup Actual:   {\"npm\":\"6.14.18\",\"node\":\"14.21.3\"}\n

If the npm version is lower than 7.24.2, you can update it to the latest version by running the following command:

                                    npm install -g npm@latest\n
                                    "},{"location":"getting-started/contributing/#adding-new-dependencies","title":"Adding new dependencies","text":"

The monorepo is managed using npm workspaces, so to add a dependency to a package, the following command should be executed:

                                    user@ubuntu:~/caliper$ npm install <dependency> --workspace=<package>\n
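For example, adding a hypothetical dependency named some-lib to the caliper-core package would look like this:

user@ubuntu:~/caliper$ npm install some-lib --workspace=@hyperledger/caliper-core\n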
                                    "},{"location":"getting-started/contributing/#removing-unused-dependencies","title":"Removing unused dependencies","text":"

                                    To remove a dependency from a package, the following command should be executed:

                                    user@ubuntu:~/caliper$ npm uninstall <dependency> --workspace=<package>\n
                                    "},{"location":"getting-started/contributing/#managing-the-package-lockjson-file","title":"Managing the package-lock.json file","text":"

                                    The package-lock.json file is generated by npm and it is used to lock the dependency versions. It is generated automatically when installing dependencies, and it should be committed to the repository. Please make sure that the right version of npm is used when installing dependencies, otherwise the package-lock.json file may be updated unnecessarily.

                                    "},{"location":"getting-started/contributing/#how-dependency-conflicts-are-resolved-in-workspaces","title":"How dependency conflicts are resolved in workspaces","text":"

                                    When installing dependencies to a workspace, if the dependency is already installed as a direct or indirect dependency of another workspace, npm will try to reuse the existing version. If the version is not compatible with the one specified in the workspace, npm will try to resolve the conflict by installing the requested version of the dependency in the node_modules directory of the workspace.
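Illustratively (the dependency name is hypothetical), the resulting layout might look like this:

node_modules/some-lib                         # v1.x, hoisted and shared by the other workspaces\npackages/caliper-core/node_modules/some-lib   # v2.x, installed locally to satisfy a conflicting range\n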

                                    "},{"location":"getting-started/contributing/#add-an-adaptor-for-a-new-dlt","title":"Add an Adaptor for a New DLT","text":"

New adapters must be added within a new package, under packages, with the naming convention caliper-<adapter_name>. Each adapter must implement a new class extending ConnectorBase as the adapter for the DLT, as well as export a ConnectorFactory function. Please refer to the existing connectors for examples and implementation requirements.
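As a minimal sketch of such a package's entry point (the class and SUT names are hypothetical; only extending ConnectorBase and exporting a ConnectorFactory function are prescribed):

'use strict';\n\nconst { ConnectorBase } = require('@hyperledger/caliper-core');\n\n// Hypothetical connector for a new SUT type; the lifecycle and\n// transaction methods required by ConnectorBase must be implemented here.\nclass MySutConnector extends ConnectorBase {\n    constructor(workerIndex) {\n        super(workerIndex, 'my-sut');\n    }\n}\n\n// Every connector package must export a ConnectorFactory function\nfunction connectorFactory(workerIndex) {\n    return new MySutConnector(workerIndex);\n}\n\nmodule.exports.ConnectorFactory = connectorFactory;\n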

                                    "},{"location":"getting-started/contributing/#inclusive-language-guidelines","title":"Inclusive language guidelines","text":"

                                    Please adhere to the inclusive language guidelines that the project has adopted as you make documentation updates.

                                    • Consider that users who will read the docs are from different backgrounds and cultures and that they have different preferences.
                                    • Avoid potential offensive terms and, for instance, prefer \u201callow list and deny list\u201d to \u201cwhite list and black list\u201d.
                                    • We believe that we all have a role to play to improve our world, and even if writing inclusive documentation might not look like a huge improvement, it\u2019s a first step in the right direction.
• We suggest referring to the Microsoft bias-free writing guidelines and the Google inclusive doc writing guide as starting points.
                                    "},{"location":"getting-started/contributing/#building-the-documentation","title":"Building the documentation","text":""},{"location":"getting-started/contributing/#requirements","title":"Requirements","text":"
                                    • Python
                                    • Pip (Python package installer)
                                    "},{"location":"getting-started/contributing/#installation","title":"Installation","text":"
                                    1. Clone the Repository:

                                      git clone https://github.com/hyperledger-caliper/caliper.git\ncd caliper\n

                                    2. Go to the documentation folder:

                                      cd docs\n

                                    3. Install Dependencies:

                                      pip install -r pip-requirements.txt\n

                                    4. Building the Documentation: To build the documentation, run:

                                      mkdocs build\n

                                      This command generates static files in the site directory.

                                    5. Previewing Locally: To preview your documentation locally, use:

                                      mkdocs serve\n

                                      This starts a local development server at http://127.0.0.1:8000/ where you can view your changes in real-time.

                                    "},{"location":"getting-started/contributing/#license","title":"License","text":"

                                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                    "},{"location":"getting-started/faq/","title":"Caliper FAQ","text":""},{"location":"getting-started/faq/#i-environment-platform-version","title":"I. Environment, Platform & Version","text":"

Q: How do I run Caliper to test a blockchain system? A: Details for setting up Caliper to run benchmark tests on a blockchain system are provided in the Getting Started page of the site. When you run Caliper to test a blockchain network, you may encounter some errors. If so, first check the versions of your tools, SDKs, and modules to make sure they are correct.

                                    "},{"location":"getting-started/faq/#ii-configuration-files-of-caliper","title":"II. Configuration Files of Caliper","text":"

Q: What kind of configuration files are needed to run Caliper? A: There are two kinds of configuration files in Caliper: the benchmark configuration file, which defines the arguments of the benchmark, such as workload and monitoring settings; and the blockchain configuration file, which specifies the information needed to interact with the backend blockchain system. For examples of these files, please see here

There is another configuration file, namely ./config/default.yaml, containing runtime settings for Caliper and the blockchain adapters. These settings can also be specified as command line arguments or environment variables.

                                    "},{"location":"getting-started/faq/#iii-testing-a-blockchain-network","title":"III. Testing a Blockchain Network","text":"

                                    Q: What kind of networks does Caliper support currently? A: Now you can use Caliper to test Besu, Ethereum and Fabric.

Q: How can I test a blockchain system that Caliper does not currently support? A: If you want to test a blockchain system that Caliper does not support yet, you must write your own blockchain adapter that Caliper can use to interface with the backend network. For details, you can refer to the Writing Adapters page. The Caliper-specific configurations remain unchanged. Take a look at the provided adapter implementations and example networks to gather some best practices.

                                    "},{"location":"getting-started/faq/#iv-other-questions-related-to-caliper","title":"IV. Other Questions Related to Caliper","text":"

                                    Q: How can I calculate the throughput (TPS)? A: Caliper will record the submitting time and committing time (the time when the Tx is committed on the ledger or when the failure occurred) for each Tx. So the send rate and the throughput are calculated by (Succ+Fail) / (last submitting time - first submitting time). This means both successful and failed transactions are included in the throughput and send rate calculations.
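As a worked example of this formula: if a benchmark round submits 1000 transactions (950 successful and 50 failed), with the first submission at t = 0 s and the last at t = 50 s, then the calculated rate is (950 + 50) / 50 = 20 TPS.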

                                    "},{"location":"getting-started/faq/#v-other-questions-related-to-the-backend-blockchain-system","title":"V. Other Questions Related to the Backend Blockchain System","text":"

Q: How can I test my own Fabric chaincode? A: You first need to deploy that chaincode to your Fabric network, then create your benchmark and workload files; see the Caliper Fabric tutorial for further guidance.

                                    Q: How can I use TLS communication? A: Fabric supports secure communication between nodes and clients using TLS. TLS communication can use both one-way (server only) and two-way (server and client) authentication. You can refer to the Fabric TLS configuration page for server side settings. For Caliper-side settings, check the adapter documentation that details how to set the necessary credentials.

                                    "},{"location":"getting-started/faq/#q-how-can-i-monitor-remote-docker-containers","title":"Q: How can I monitor remote Docker containers?","text":"

                                    A: If you need to access the Docker daemon remotely, you need to explicitly enable remote access. Beware that the default setup provides unencrypted and unauthenticated direct access to the Docker daemon. For details, refer to the official Docker documentation.

                                    "},{"location":"getting-started/faq/#license","title":"License","text":"

                                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                    "},{"location":"getting-started/installing-caliper/","title":"Installing Caliper","text":""},{"location":"getting-started/installing-caliper/#overview","title":"Overview","text":"

                                    Caliper is published as the @hyperledger/caliper-cli NPM package and the hyperledger/caliper Docker image, both containing the CLI binary. Refer to the Installing from NPM and Using the Docker image sections for the available versions and their intricacies.

                                    Installing and running Caliper consists of the following steps, thoroughly detailed by the remaining sections:

                                    1. Acquire the Caliper CLI either from NPM or from DockerHub.
                                    2. Execute a bind command through the CLI. This step pulls the specified version of SDK packages for the selected platform.
                                    3. Start the benchmark through the CLI or by starting the Docker container.

                                    The examples in the rest of the documentation use the caliper-benchmarks repository as the Caliper workspace since it contains many sample artifacts for benchmarking.

                                    Important

                                    make sure you check out the appropriate tag/commit of the repository, matching the version of Caliper you use.

                                    To clone the caliper-benchmarks repository, run:

                                    git clone https://github.com/hyperledger-caliper/caliper-benchmarks.git\ncd caliper-benchmarks\ngit checkout <your Caliper version>\n

                                    Note

                                    If you are running your custom benchmark, then change this directory path (and other related configurations) accordingly in the examples.

                                    "},{"location":"getting-started/installing-caliper/#the-caliper-cli","title":"The Caliper CLI","text":"

                                    Unless you are embedding the Caliper packages in your own application, you will probably use Caliper through its command line interface (CLI). The other sections will introduce the different ways of acquiring and calling the Caliper CLI. This section simply focuses on the API it provides.

                                    Note

                                    The following examples assume a locally installed CLI in the ~/caliper-benchmarks directory, hence the npx call before the caliper binary. Refer to the Local NPM install section for the specifics.

                                    The entry point of the CLI is the caliper binary. You can confirm whether the CLI is installed correctly by checking its version:

                                    user@ubuntu:~/caliper-benchmarks$ npx caliper --version\nv0.6.0\n

                                    The CLI provides multiple commands to perform different tasks. To check the available commands and their descriptions, execute:

                                    user@ubuntu:~/caliper-benchmarks$ npx caliper --help\ncaliper <command>\n\nCommands:\n  caliper.js bind [options]       Bind Caliper to a specific SUT and its SDK version\n  caliper.js launch <subcommand>  Launch a Caliper process either in a manager or worker role.\n  caliper.js unbind [options]     Unbind Caliper from a previously bound SUT and its SDK version\n  caliper.js completion           generate completion script\n\nOptions:\n  --help, -h  Show usage information  [boolean]\n  --version   Show version information  [boolean]\n\nExamples:\n  caliper bind\n  caliper unbind\n  caliper launch manager\n  caliper launch worker\n
                                    You can also request the help page of a specific command, as demonstrated by the next subsections.

                                    Note

                                    the command options can be set either through the command line, or from various other sources supported by the configuration mechanism of Caliper. This flexibility makes it easy to embed the CLI in different environments.
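As a sketch of this flexibility (assuming the flag-to-environment-variable mapping used later on this page), the SUT binding for the bind command could be supplied either way:

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:fabric-gateway\nuser@ubuntu:~/caliper-benchmarks$ CALIPER_BIND_SUT=fabric:fabric-gateway npx caliper bind\n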

                                    "},{"location":"getting-started/installing-caliper/#the-bind-command","title":"The bind command","text":"

                                    Acquiring Caliper is as easy as installing a single NPM package, or pulling a single Docker image. However, this single point of install necessitates an additional step of telling Caliper which platform to target and which platform SDK version to use. This step is called binding, provided by the bind CLI command.

                                    To have a look at the help page of the command, execute:

                                    user@ubuntu:~/caliper-benchmarks$ npx caliper bind --help\nUsage:\n  caliper bind --caliper-bind-sut fabric:fabric-gateway --caliper-bind-cwd ./ --caliper-bind-args=\"-g\"\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform and its SDK to bind to  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK install  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm install\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]\n
                                    The binding step technically consists of an extra npm install call with the appropriate packages and install settings, fully managed by the CLI. The following parameters can be set for the command:

• SUT/platform name and SDK version: specifies the name of the target platform and the SDK version to install, e.g., fabric:2.2
                                    • Working directory: the directory from which the npm install command must be performed. Defaults to the current working directory
                                    • User arguments: additional arguments to pass to npm install, e.g., --save

                                    The following SUT name and SDK version combinations are supported:

                                    • besu: 1.3.2, 1.3, 1.4
                                    • ethereum: 1.2.1, 1.3
                                    • fabric: 1.4, 2.2, fabric-gateway

                                    Note

Ensure that the SDK you are binding is compatible with the SUT version that you intend to target.

                                    The bind command is useful when you plan to run multiple benchmarks against the same SUT version. Bind once, then run different benchmarks without the need to bind again. As you will see in the next sections, the launcher commands for the manager and worker processes can also perform the binding step if the required parameter is present.

                                    "},{"location":"getting-started/installing-caliper/#custom-bindings","title":"Custom bindings","text":"

                                    The built-in bindings can be overridden by setting the caliper-bind-file parameter to a YAML file path. The file must match the structure of the default binding file, documented here. This way you can use experimental SDK versions that are not (yet) officially supported by Caliper. This also means that we cannot provide help for such SDK versions!

                                    "},{"location":"getting-started/installing-caliper/#the-unbind-command","title":"The unbind command","text":"

                                    It might happen that you would like to switch between different SUT SDK versions/bindings during your measurements or project development. Depending on the SUT SDK, simply rebinding to a different version might leave behind unwanted packages, resulting in obscure errors.

To avoid this, the CLI provides an unbind command that behaves exactly like the bind command (it even uses the same arguments), but instead of installing the packages present in the binding specification, it removes them, leaving no trace of the previous binding.

                                    To have a look at the help page of the command, execute:

                                    user@ubuntu:~/caliper-benchmarks$ npx caliper unbind --help\nUsage:\n  caliper unbind --caliper-bind-sut fabric:2.2 --caliper-bind-cwd ./ --caliper-bind-args=\"-g\"\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform and its SDK to unbind  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK removal  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm remove\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when unbinding an SDK  [string]\n

                                    Note

                                    It is recommended to either bind/unbind globally (as done by the Caliper Docker image), or use the --caliper-bind-args=\"--save-dev\" argument when performing the binding/unbinding. This ensures that npm will correctly remove the packages.
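For instance, a matching bind/unbind pair using this argument might look like the following:

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:2.2 --caliper-bind-args=\"--save-dev\"\nuser@ubuntu:~/caliper-benchmarks$ npx caliper unbind --caliper-bind-sut fabric:2.2 --caliper-bind-args=\"--save-dev\"\n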

                                    "},{"location":"getting-started/installing-caliper/#the-launch-command","title":"The launch command","text":"

                                    Caliper runs a benchmark by using worker processes to generate the workload, and by using a manager process to coordinate the different benchmark rounds among the worker processes. Accordingly, the CLI provides commands for launching both manager and worker processes.

                                    To have a look at the help page of the command, execute:

                                    user@ubuntu:~/caliper-benchmarks$ npx caliper launch --help\ncaliper launch <subcommand>\n\nLaunch a Caliper process either in a manager or worker role.\n\nCommands:\n  caliper launch manager [options]  Launch a Caliper manager process to coordinate the benchmark run\n  caliper launch worker [options]  Launch a Caliper worker process to generate the benchmark workload\n\nOptions:\n  --help, -h  Show usage information  [boolean]\n  --version   Show version information  [boolean]\n
                                    "},{"location":"getting-started/installing-caliper/#the-launch-manager-command","title":"The launch manager command","text":"

                                    The Caliper manager process can be considered as the entry point of a distributed benchmark run. It coordinates (and optionally spawns) the worker processes throughout the benchmark run.

                                    To have a look at the help page of the command, execute:

                                    user@ubuntu:~/caliper-benchmarks$ npx caliper launch manager --help\nUsage:\n caliper launch manager --caliper-bind-sut fabric:2.2 [other options]\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform to bind to  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK install  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm install\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]\n

                                    As you can see, the launch manager command can also process the parameters of the bind command, just in case you would like to perform the binding and the benchmark run in one step.

                                    However, the command requires the following parameters to be set:

                                    • caliper-workspace: the directory serving as the root of your project. Every relative path in other configuration files or settings will be resolved from this directory. The workspace concept was introduced to make Caliper projects portable across different machines.
                                    • caliper-benchconfig: the path of the file containing the configuration of the test rounds, as detailed in the Architecture page. Should be relative to the workspace path.
                                    • caliper-networkconfig: the path of the file containing the network configuration/description for the selected SUT, detailed in the configuration pages of the respective adapters. Should be relative to the workspace path.
                                    "},{"location":"getting-started/installing-caliper/#the-launch-worker-command","title":"The launch worker command","text":"

                                    The Caliper worker processes are responsible for generating the workload during the benchmark run. Usually more than one worker process is running, coordinated by the single manager process.

                                    To have a look at the help page of the command, execute:

user@ubuntu:~/caliper-benchmarks$ npx caliper launch worker --help\nUsage:\n caliper launch worker --caliper-bind-sut fabric:2.2 [other options]\n\nOptions:\n  --help, -h           Show usage information  [boolean]\n  --version            Show version information  [boolean]\n  --caliper-bind-sut   The name and version of the platform to bind to  [string]\n  --caliper-bind-cwd   The working directory for performing the SDK install  [string]\n  --caliper-bind-args  Additional arguments to pass to \"npm install\". Use the \"=\" notation when setting this parameter  [string]\n  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]\n
As you can see, you can configure the worker processes the same way as the manager process, including the optional binding step and the three mandatory parameters mentioned in the previous section.

                                    "},{"location":"getting-started/installing-caliper/#caliper-test-phase-control","title":"Caliper test phase control","text":"

All runtime configuration settings can be passed to Caliper commands. A subset of these settings provides flow control, giving direct control over the following Caliper phases:

                                    • start
                                    • init
                                    • install
                                    • test
                                    • end

It is possible to skip, or perform only, one of the above phases by using the corresponding flag. For instance, it is common to have an existing network that can be targeted by Caliper by providing the --caliper-flow-only-test flag, as shown below.
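A sketch of such an invocation, reusing the workspace and artifact paths from the earlier examples:

user@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml \\\n    --caliper-flow-only-test\n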

                                    "},{"location":"getting-started/installing-caliper/#installing-from-npm","title":"Installing from NPM","text":"

                                    Caliper is published as the @hyperledger/caliper-cli NPM package, providing a single point of install for every supported adapter.

                                    "},{"location":"getting-started/installing-caliper/#versioning-semantics","title":"Versioning semantics","text":"

                                    Before explaining the steps for installing Caliper, let\u2019s take a look at the Versions page of the CLI package. You will see a list of tags and versions. If you are new to NPM, think of versions as immutable pointers to a specific version (duh) of the source code, while tags are mutable pointers to a specific version. So tags can change where they point to. Easy, right?

                                    But why is all this important to you? Because Caliper is still in its pre-release life-cycle (< v1.0.0), meaning that even minor version bumps are allowed to introduce breaking changes. And if you use Caliper in your project, you might run into some surprises depending on how you install Caliper from time to time.

                                    Note

                                    Until Caliper reaches v1.0.0, always use the explicit version numbers when installing from NPM. So let\u2019s forget about the latest tag, as of now they are just a mandatory hindrance of NPM. We deliberately do not provide such tags for the Docker images.

                                    Let\u2019s see the three types of version numbers you will encounter:

                                    • 0.6.0: Version numbers of this form denote releases deemed stable by the maintainers. Such versions have a corresponding GitHub tag, both in the caliper and caliper-benchmarks repositories. Moreover, the latest stable version is documented by the matching version of the documentation page. So make sure to align the different versions if you run into some issue.
                                    • 0.6.1-unstable-20240422122901: Such version \u201cnumbers\u201d denote unstable releases that are published upon every merged pull request (hence the timestamp at the end), and eventually will become a stable version, e.g., 0.6.1. This way you always have access to the NPM (and Docker) artifacts pertaining to the main branch of the repository. Let\u2019s find and fix the bugs of new features before they make it to the stable release!
                                    • unstable: This is the very latest unstable release that has been published and would correspond to a version also published as 0.6.1-unstable-<some date>. This lets you quickly work with the very latest code from the main branch.

                                    Note

                                    The newest unstable release always corresponds to the up-to-date version of the related repositories, and the vNext version of the documentation page!

                                    "},{"location":"getting-started/installing-caliper/#pre-requisites","title":"Pre-requisites","text":"
                                    • Node.js v18 LTS, v20 LTS or v22 LTS version is required to install the Caliper CLI from NPM:
                                    • Docker version 20.10.11 or later is required for use with the Caliper docker image

                                    The following tools may be required depending on which SUT and version you bind to

                                    • python3, make, g++ and git (for fetching and compiling some packages during bind)
                                    "},{"location":"getting-started/installing-caliper/#local-npm-install","title":"Local NPM install","text":"

                                    Note

this is the highly recommended way to install Caliper for your project. Keeping the project dependencies local makes it easier to set up multiple Caliper projects. Global dependencies would require re-binding every time before a new benchmark run (to ensure the correct global dependencies).

                                    1. Install the Caliper CLI as you would any other NPM package. It is highly recommended to explicitly specify the version number, e.g., @hyperledger/caliper-cli@0.6.0
                                    2. Bind the CLI to the required platform SDK (e.g., fabric with the fabric-gateway SDK).
                                    3. Invoke the local CLI binary (using npx) with the appropriate parameters. You can repeat this step for as many benchmarks as you would like.

                                    Putting it all together:

                                    user@ubuntu:~/caliper-benchmarks$ npm install --only=prod @hyperledger/caliper-cli@0.6.0\nuser@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:fabric-gateway\nuser@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                                    We could also perform the binding automatically when launching the manager process (note the extra parameter for caliper launch manager):

                                    user@ubuntu:~/caliper-benchmarks$ npm install --only=prod @hyperledger/caliper-cli@0.6.0\nuser@ubuntu:~/caliper-benchmarks$ npx caliper launch manager \\\n    --caliper-bind-sut fabric:fabric-gateway \\\n    --caliper-workspace . \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                                    Note

specifying the --only=prod parameter in step 1 will ensure that the default latest SDK dependencies for every platform will not be installed. Since we perform an explicit binding anyway (and only for a single platform), this is the desired approach, while also saving some storage and time.

                                    Note

                                    always make sure that the versions of the SUT, the bound SDK and the used artifacts match!

                                    "},{"location":"getting-started/installing-caliper/#global-npm-install","title":"Global NPM install","text":"

                                    Note

                                    make sure that you have a really good reason for installing the Caliper CLI globally. The recommended approach is the local install. That way your project is self-contained and you can easily setup multiple projects (in multiple directories) that each target a different SUT (or just different SUT versions). Installing or re-binding dependencies globally can get tricky.

                                    There are some minor differences compared to the local install:

                                    1. You can perform the install, bind and run steps from anywhere (just specify the workspace accordingly).
                                    2. You need to install the CLI globally (-g flag).
                                    3. You need to tell the binding step to install the packages also globally (--caliper-bind-args parameter).
                                    4. You can omit the npx command, since caliper will be in your PATH.
                                    user@ubuntu:~$ npm install -g --only=prod @hyperledger/caliper-cli@0.6.0\nuser@ubuntu:~$ caliper bind --caliper-bind-sut fabric:2.2 --caliper-bind-args=-g\nuser@ubuntu:~$ caliper launch manager \\\n    --caliper-workspace ~/caliper-benchmarks \\\n    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \\\n    --caliper-networkconfig networks/fabric/test-network.yaml\n

                                    Note

for global install you don\u2019t need to change the directory to your workspace; you can simply specify --caliper-workspace ~/caliper-benchmarks. But this way you can\u2019t utilize the auto-complete feature of your command line for the relative paths of the artifacts.

Depending on your NPM settings, your user might need write access to directories outside of its home directory. This usually results in \u201cAccess denied\u201d errors. The pointers here can guide you to circumvent the problem.

                                    "},{"location":"getting-started/installing-caliper/#using-the-docker-image","title":"Using the Docker image","text":"

                                    Caliper is published as the hyperledger/caliper Docker image, providing a single point of usage for every supported adapter.

                                    The important properties of the image are the following:

                                    • Working directory: /hyperledger/caliper/workspace
                                    • The commands are executed by the node user (created in the base image)
                                    • The environment variable CALIPER_WORKSPACE is set to the /hyperledger/caliper/workspace directory
                                    • The entry point is the globally installed caliper binary
                                    • The environment variable CALIPER_BIND_ARGS is set to -g, so the binding step also occurs globally.
                                    • The default command is set to --version. This must be overridden when using the image.

                                    This has the following implications:

                                    1. It is recommended to mount your local workspace to the /hyperledger/caliper/workspace container directory. The default CALIPER_WORKSPACE environment variable value points to this location, so you don\u2019t need to specify it explicitly, one less setting to modify.
                                    2. You need to choose a command to execute, either launch manager or launch worker. Check the Docker and Docker-Compose examples for the exact syntax.
                                    3. The binding step is still necessary, similarly to the NPM install approach. Whether you use the launch manager or launch worker command, you only need to set the required binding parameter. The easiest way to do this is through the CALIPER_BIND_SUT and CALIPER_BIND_SDK environment variables.
                                    4. You need to set the required parameters for the launched manager or worker. The easiest way to do this is through the CALIPER_BENCHCONFIG and CALIPER_NETWORKCONFIG environment variables.
                                    "},{"location":"getting-started/installing-caliper/#starting-a-container","title":"Starting a container","text":"

Starting a Caliper container (following the recommendations above) consists of the following steps:

                                    1. Pick the required image version
                                    2. Mount your local working directory to a container directory
                                    3. Set the required binding and run parameters

                                    Note

the latest tag is not supported, i.e., you explicitly have to specify the image version you want: hyperledger/caliper:0.6.0, similar to the recommended approach for the NPM packages.

                                    Putting it all together, split into multiple lines for clarity, and naming the container caliper:

                                    user@ubuntu:~/caliper-benchmarks$ docker run \\\n    -v $PWD:/hyperledger/caliper/workspace \\\n    -e CALIPER_BIND_SUT=fabric:fabric-gateway \\\n    -e CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml \\\n    -e CALIPER_NETWORKCONFIG=networks/fabric/test-network.yaml \\\n    --name caliper hyperledger/caliper:0.6.0 launch manager\n

                                    Note

                                    The above network configuration file contains a start script to spin up a local Docker-based Fabric network, which will not work in this form. So make sure to remove the start (and end) script, and change the node endpoints to remote addresses.

                                    "},{"location":"getting-started/installing-caliper/#using-docker-compose","title":"Using docker-compose","text":"

                                    The above command is more readable when converted to a docker-compose.yaml file:

                                    version: '2'\n\nservices:\n    caliper:\n        container_name: caliper\n        image: hyperledger/caliper:0.6.0\n        command: launch manager\n        environment:\n        - CALIPER_BIND_SUT=fabric:fabric-gateway\n        - CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml\n        - CALIPER_NETWORKCONFIG=networks/fabric/test-network.yaml\n        volumes:\n        - ~/caliper-benchmarks:/hyperledger/caliper/workspace\n

                                    Once you navigate to the directory containing the docker-compose.yaml file, just execute:

                                    docker-compose up\n
                                    "},{"location":"getting-started/installing-caliper/#license","title":"License","text":"

                                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                    "},{"location":"getting-started/license/","title":"License","text":"
                                                                 Apache License\n                       Version 2.0, January 2004\n                    http://www.apache.org/licenses/\n

                                    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

                                    1. Definitions.

                                    \u201cLicense\u201d shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

                                    \u201cLicensor\u201d shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

                                    \u201cLegal Entity\u201d shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \u201ccontrol\u201d means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

                                    \u201cYou\u201d (or \u201cYour\u201d) shall mean an individual or Legal Entity exercising permissions granted by this License.

                                    \u201cSource\u201d form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

                                    \u201cObject\u201d form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

                                    \u201cWork\u201d shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

                                    \u201cDerivative Works\u201d shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

                                    \u201cContribution\u201d shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \u201csubmitted\u201d means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \u201cNot a Contribution.\u201d

                                    \u201cContributor\u201d shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

                                    2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

                                    3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

                                    4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

                                    (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

                                    (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

                                    (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

                                    (d) If the Work includes a \u201cNOTICE\u201d text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

                                    You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

                                    5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

                                    6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

                                    7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

                                    8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

                                    9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

                                    END OF TERMS AND CONDITIONS

                                    APPENDIX: How to apply the Apache License to your work.

                                      To apply the Apache License to your work, attach the following\n  boilerplate notice, with the fields enclosed by brackets \"{}\"\n  replaced with your own identifying information. (Don't include\n  the brackets!)  The text should be enclosed in the appropriate\n  comment syntax for the file format. We also recommend that a\n  file or class name and description of purpose be included on the\n  same \"printed page\" as the copyright notice for easier\n  identification within third-party archives.\n

                                    Copyright {yyyy} {name of copyright owner}

                                    Licensed under the Apache License, Version 2.0 (the \u201cLicense\u201d); you may not use this file except in compliance with the License. You may obtain a copy of the License at

                                       http://www.apache.org/licenses/LICENSE-2.0\n

                                    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

                                    "},{"location":"tutorial/","title":"Tutorials","text":"

                                    These tutorials are specifically aimed at using Caliper to performance test different blockchain networks.

                                    Please select a tutorial from the navigation section on the left; each takes you through performance testing a smart contract using Caliper.

                                    "},{"location":"tutorial/fabric/","title":"Basic Fabric Tutorial","text":""},{"location":"tutorial/fabric/#overview","title":"Overview","text":"

                                    This tutorial takes you through performance testing a smart contract on a pre-existing Fabric network using Caliper.

                                    To complete this tutorial you will need to have NodeJS installed. To do this, we recommend using nvm, as shown below.
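
                                    For example, with nvm already installed, the following commands are one way to install and select a current node LTS release (a minimal sketch; the exact invocation may vary with your nvm version):

                                    nvm install --lts\nnvm use --lts\n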

                                    This tutorial is based on resources available from the official Hyperledger Fabric documentation. A network comprising two organizations and a solo orderer, with the javascript asset-transfer-basic smart contract, is assumed to be built and ready to performance test.

                                    Note

                                    The following command list is a minimal quick-start guide to get the required Fabric network up and running. We pin the Hyperledger Fabric resources to explicit versions. To understand and troubleshoot what occurs during the creation of the test network, please refer to the Fabric documentation linked above!

                                    Ensure you have the following prerequisites installed:

                                    • docker engine or docker desktop
                                    • curl
                                    • jq
                                    • a supported node LTS version. A list of currently supported LTS versions can be found on the node.js website (to install node easily you can use the nvm tool found here)
                                    # Pull down the 2.5.7 hyperledger fabric and 1.5.10 hyperledger fabric ca images/binaries\ncurl -sSL https://raw.githubusercontent.com/hyperledger/fabric/main/scripts/bootstrap.sh | bash -s -- 2.5.7 1.5.10\ncd fabric-samples\n# As fabric-samples for hyperledger 2.5 is via the main branch, fix to a specific commit which we know works\n# to protect against potential breaking changes in main\ngit checkout c691cf94a99372e0225927d7927b1bc367018029\n# Start up the test-network\ncd test-network/\n./network.sh up createChannel\n./network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-javascript -ccl javascript\n
                                    "},{"location":"tutorial/fabric/#step-1-create-a-caliper-workspace","title":"Step 1 - Create a Caliper Workspace","text":"

                                    Create a folder named caliper-workspace at the same level as the fabric-samples directory, and then within the caliper-workspace folder, create three folders named networks, benchmarks, and workload respectively, as shown below.
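
                                    One way to create this layout, starting from the parent directory of fabric-samples (a sketch; any equivalent method works):

                                    mkdir caliper-workspace\ncd caliper-workspace\nmkdir networks benchmarks workload\n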

                                    Caliper installation and use will be based on a local npm installation. Within the caliper-workspace directory, install the Caliper CLI using the following terminal command:

                                    npm install --only=prod @hyperledger/caliper-cli@0.6.0\n

                                    Bind the SDK using the following terminal command:

                                    npx caliper bind --caliper-bind-sut fabric:fabric-gateway\n

                                    Further information relating to the installation and binding of Caliper may be found within the relevant documentation pages.

                                    Caliper requires two configuration files:

                                    • The network configuration file, which describes the network under test and provides the test identities to use.
                                    • The benchmark file, which defines the performance test to be completed via a sequenced set of test rounds, each specifying a workload module and a series of options to drive the workload over a time interval.

                                    We will now populate these folders with the assets required by Caliper.
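
                                    By the end of the tutorial the workspace should look something like the sketch below (the file names are the ones this tutorial creates):

                                    caliper-workspace/\n├── benchmarks/\n│   └── myAssetBenchmark.yaml\n├── networks/\n│   └── networkConfig.yaml\n└── workload/\n    └── readAsset.js\n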

                                    "},{"location":"tutorial/fabric/#step-2-build-a-network-configuration-file","title":"Step 2 - Build a Network Configuration File","text":"

                                    The network configuration file is required by Caliper workers to submit and evaluate transactions on a Hyperledger Fabric network. The file can be in YAML or JSON format; this tutorial shows the YAML format.

                                    "},{"location":"tutorial/fabric/#create-a-template-network-configuration-file","title":"Create a Template Network Configuration File","text":"

                                    Under the networks folder create a template file called networkConfig.yaml with the following content:

                                    name: Caliper test\nversion: \"2.0.0\"\n\ncaliper:\n  blockchain: fabric\n\nchannels:\n\norganizations:\n

                                    name: The name for the configuration, in this instance \u201cCaliper test\u201d.

                                    version: The version of the configuration file being used. \u201c2.0.0\u201d ensures the new fabric connectors are used.

                                    caliper: Indicates to Caliper the SUT that is being targeted, and may contain additional start/end commands or SUT-specific options that are not required within this tutorial. For the purposes of this tutorial, we are targeting a fabric network.

                                    channels: Describes the Hyperledger Fabric channels and the smart contracts deployed on these channels to be benchmarked.

                                    organizations: A list of the Hyperledger Fabric organizations with identities and connection profiles associated with each organization

                                    "},{"location":"tutorial/fabric/#a-brief-introduction-to-common-connection-profiles-ccp","title":"A brief introduction to Common Connection Profiles (CCP)","text":"

                                    Common Connection Profiles are a file format that all the Hyperledger Fabric SDKs can use to connect to a Hyperledger Fabric network. As Caliper utilizes the Fabric Node SDK to connect to the network, Caliper makes use of these connection profiles. Whoever is responsible for building a Hyperledger Fabric network should create these files.

                                    A Common Connection Profile is organization specific, so each organization will have its own unique file; again, the network provider should supply a file for each organization.

                                    These profiles come in 2 forms, termed static and dynamic in the Hyperledger Fabric documentation. In summary, a static connection profile contains all the information about the fabric network up front: amongst other things, all the peers, orderers and channels that exist. A dynamic connection profile is minimal, usually containing just 1 or 2 peers of your organization, and the SDK uses discovery through those peers to determine all the information required to interact with the fabric network; a trimmed sketch of a dynamic profile is shown below.
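
                                    For orientation only, a heavily trimmed sketch of what a dynamic connection profile may look like (the names and URL are illustrative; use the generated connection-org1.yaml rather than this):

                                    name: test-network-org1\nversion: 1.0.0\nclient:\n  organization: Org1\norganizations:\n  Org1:\n    mspid: Org1MSP\n    peers:\n    - peer0.org1.example.com\npeers:\n  peer0.org1.example.com:\n    url: grpcs://localhost:7051\n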

                                    You will see that the test-network in fabric samples provides common connection profiles for each organization, and that they are dynamic connection profiles.

                                    "},{"location":"tutorial/fabric/#populating-the-template-file","title":"Populating The Template File","text":"

                                    Following the test-network tutorial, a Common Connection Profile is generated as well as a set of identities for each organization.

                                    We will be using Org1, whose MSP id is Org1MSP, to connect in this example, so there is no need to provide details about Org2 which is part of the test-network. Only having to provide a single organization is a very common pattern.

                                    "},{"location":"tutorial/fabric/#organizations","title":"Organizations","text":"

                                    Here we need to add information about the organization whose MSP id is Org1MSP. We need to provide a name, its associated connection profile, and at least 1 identity.

                                    The connection profile can be found in fabric-samples -> test-network -> organizations -> peerOrganizations -> org1.example.com. There are both json and yaml versions of this file; we will make use of connection-org1.yaml. These connection profiles are what Hyperledger Fabric refers to as dynamic, so they are expected to be used in conjunction with discovery; therefore we need to declare that this connection profile requires the use of discovery.

                                    The identity we will use will be User1@org1.example.com.

                                    The private key can be found in fabric-samples -> test-network -> organizations -> peerOrganizations -> org1.example.com -> users -> User1@org1.example.com -> msp -> keystore -> priv_sk

                                    The public certificate can be found in fabric-samples -> test-network -> organizations -> peerOrganizations -> org1.example.com -> users -> User1@org1.example.com -> msp -> signcerts -> User1@org1.example.com-cert.pem

                                    The identity will need to be given a unique name within the organization. It doesn\u2019t have to match the name that the test-network has used, i.e. User1@org1.example.com, so to keep it simple let\u2019s just give it a name of User1. For the purposes of this tutorial we will just point to the certificate and private key files, but it\u2019s also possible to embed the information directly into the network configuration file, as sketched below.
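
                                    To our understanding, the clientPrivateKey and clientSignedCert entries accept a pem property as an alternative to path, allowing the material to be embedded directly; a minimal sketch (the PEM content is elided here):

                                    identities:\n  certificates:\n  - name: 'User1'\n    clientPrivateKey:\n      pem: |-\n        -----BEGIN PRIVATE KEY-----\n        ...\n        -----END PRIVATE KEY-----\n    clientSignedCert:\n      pem: |-\n        -----BEGIN CERTIFICATE-----\n        ...\n        -----END CERTIFICATE-----\n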

                                    Below is the required organizations section that provides the above detail:

                                    organizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'User1'\n        clientPrivateKey:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'\n        clientSignedCert:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'\n    connectionProfile:\n      path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.yaml'\n# Uncomment the next line only if you want to try this network configuration file binding to fabric:2.2, it's not required for fabric-gateway binding\n#      discover: true\n

                                    Note the - sign in front of mspid and name in the above example. These are important, as organizations could contain more than 1 organization, and certificates can also contain a list defining more than 1 identity; see the sketch below.
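
                                    To illustrate the list structure, here is a sketch of an organizations section with two organizations, the first defining two identities (the Org2 entries are hypothetical, and the ... placeholders stand in for the path details shown earlier):

                                    organizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'User1'\n        ...\n      - name: 'User2'\n        ...\n    connectionProfile:\n      path: '...'\n  - mspid: Org2MSP\n    identities:\n      certificates:\n      - name: 'User1'\n        ...\n    connectionProfile:\n      path: '...'\n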

                                    One other important point to note: the first organization defined in the file is known as the default organization. In workload modules, if you don\u2019t specify an invoking organization, the default organization is used. As there is only 1 organization defined here, you will not see any reference to the invoking organization in the workload implementation.
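
                                    For completeness, the Fabric connector also accepts, to the best of our knowledge, an optional invokerMspId request property for selecting the invoking organization when several are defined; a hypothetical sketch, assuming a second organization Org2MSP were present in the configuration:

                                    const request = {\n    contractId: 'basic',\n    contractFunction: 'ReadAsset',\n    invokerMspId: 'Org2MSP',   // assumed: selects the invoking organization\n    invokerIdentity: 'User1',  // an identity defined under that organization\n    contractArguments: ['0_1'],\n    readOnly: true\n};\nawait this.sutAdapter.sendRequests(request);\n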

                                    "},{"location":"tutorial/fabric/#channels","title":"Channels","text":"

                                    The Fabric connector for Caliper requires assistance when creating connections to a Fabric network. A list of channels must be provided, listing the smart contracts that may be interacted with.

                                    As part of the test-network tutorial, a channel of mychannel will have been created and a contract (chaincode) with the id of basic will have been instantiated on that channel. We declare this as follows:

                                    channels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n

                                    Note the - sign in front of channelName and id in the above example. This is required because channels specifies a list, as there can be more than 1 channel, and contracts can likewise list more than 1 contract (chaincode) id of interest; see the sketch below.
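
                                    For example, a configuration naming two channels, the second carrying two contracts, might look like this sketch (the extra channel and contract ids are hypothetical):

                                    channels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n  - channelName: otherchannel\n    contracts:\n    - id: basic\n    - id: secured\n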

                                    "},{"location":"tutorial/fabric/#the-complete-network-configuration-file","title":"The Complete Network Configuration File","text":"

                                    The Caliper network configuration file should now be fully populated. It can be useful to take time to look over and ensure that the paths to the certificates, private keys and connection profile are correct.

                                    name: Caliper test\nversion: \"2.0.0\"\n\ncaliper:\n  blockchain: fabric\n\nchannels:\n  - channelName: mychannel\n    contracts:\n    - id: basic\n\norganizations:\n  - mspid: Org1MSP\n    identities:\n      certificates:\n      - name: 'User1'\n        clientPrivateKey:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'\n        clientSignedCert:\n          path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'\n    connectionProfile:\n      path: '../fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.yaml'\n# Uncomment the next line only if you want to try this network configuration file binding to fabric:2.2, it's not required for fabric-gateway binding\n#      discover: true\n
                                    "},{"location":"tutorial/fabric/#step-3-build-a-test-workload-module","title":"Step 3 - Build a Test Workload Module","text":"

                                    The workload module interacts with the deployed smart contract during the benchmark round. The workload module extends the Caliper class WorkloadModuleBase from caliper-core. The workload module provides three overrides:

                                    • initializeWorkloadModule - used to initialize any required items for the benchmark
                                    • submitTransaction - used to interact with the smart contract method during the monitored phase of the benchmark
                                    • cleanupWorkloadModule - used to clean up after the completion of the benchmark

                                    For more information, please see the specific documentation on Workload Configuration accessible on the left hand menu.

                                    The workload we will be driving aims to benchmark the querying of existing assets within the world state database. Consequently we will use all three phases available in the workload module:

                                    • initializeWorkloadModule - to create assets that may be queried in the submitTransaction phase
                                    • submitTransaction - to query assets created in the initializeWorkloadModule phase
                                    • cleanupWorkloadModule - used to remove assets created in the initializeWorkloadModule phase so that the benchmark may be repeated
                                    "},{"location":"tutorial/fabric/#create-a-template-workload-module","title":"Create A Template Workload Module","text":"

                                    Within the workload folder create a file called readAsset.js with the following content:

                                    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleBase {\n    constructor() {\n        super();\n    }\n\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n    }\n\n    async submitTransaction() {\n        // NOOP\n    }\n\n    async cleanupWorkloadModule() {\n        // NOOP\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n
                                    "},{"location":"tutorial/fabric/#populating-the-template-file_1","title":"Populating the Template File","text":"

                                    When populating this file, we reference the methods available within the deployed smart contract; its source file assetTransfer.js can be found in: fabric-samples -> asset-transfer-basic -> chaincode-javascript -> lib -> assetTransfer.js
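
                                    The three methods this tutorial drives have, at the time of writing, the signatures sketched below (paraphrased from assetTransfer.js; verify against your copy of fabric-samples, as the sample may have changed):

                                    'use strict';\n\nconst { Contract } = require('fabric-contract-api');\n\nclass AssetTransfer extends Contract {\n    async CreateAsset(ctx, id, color, size, owner, appraisedValue) { /* stores a new asset in the world state */ }\n    async ReadAsset(ctx, id) { /* returns the asset stored with the given id */ }\n    async DeleteAsset(ctx, id) { /* removes the asset with the given id */ }\n}\n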

                                    "},{"location":"tutorial/fabric/#populate-initializeworkloadmodule","title":"Populate initializeWorkloadModule","text":"

                                    This method is used to prepare any items required by the primary submitTransaction method when the benchmark runs.

                                    The number of assets to be created will be given as roundArguments.assets. We create assets using the smart contract by populating an arguments object, which defines the transaction body, and using the Caliper API sendRequests, which requires knowledge of:

                                    • contractId, the name of the smart contract that is to be used and is present within the Caliper network configuration file
                                    • contractFunction, the specific function within the smart contract to invoke
                                    • contractArguments, the arguments to pass to the smart contract function
                                    • invokerIdentity, the identity to use that is present within the Caliper network configuration file. This is optional; Caliper will select an identity for you (from the appropriate invoking organization or the default organization), and in this tutorial there would only ever be 1 identity to pick, but for completeness the examples explicitly define the identity.
                                    • readOnly, whether or not the request is a query operation

                                    The method should look like this:

                                        async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Creating asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'CreateAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID,'blue','20','penguin','500'],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n

                                    In the above example, different assets will be created that have the same parameters (blue, 20, penguin, 500). Comparing the above to the smart contract method itself, it should be evident that there is a 1:1 mapping of contract arguments to the method parameters.

                                    "},{"location":"tutorial/fabric/#populate-submittransaction","title":"Populate submitTransaction","text":"

                                    This method runs repeatedly in the benchmark test phase. We will be evaluating the ReadAsset smart contract method by querying the assets we created in the initializeWorkloadModule method.

                                    First, create a string identity for the asset to query, formed by the concatenation of the worker index and a random integer between 0 (inclusive) and the number of created assets (exclusive).

                                    Then await the call on sendRequests, passing an object containing: contractId set as that passed in from the round arguments; contractFunction set as ReadAsset; invokerIdentity set as User1; and contractArguments set as an array that contains the asset to query in this run.

                                    The method should look like this:

                                        async submitTransaction() {\n        const randomId = Math.floor(Math.random()*this.roundArguments.assets);\n        const myArgs = {\n            contractId: this.roundArguments.contractId,\n            contractFunction: 'ReadAsset',\n            invokerIdentity: 'User1',\n            contractArguments: [`${this.workerIndex}_${randomId}`],\n            readOnly: true\n        };\n\n        await this.sutAdapter.sendRequests(myArgs);\n    }\n
                                    "},{"location":"tutorial/fabric/#populate-cleanupworkloadmodule","title":"Populate cleanupWorkloadModule","text":"

                                    This function is used to clean up after a test: it deletes the assets created in the initializeWorkloadModule function through use of the smart contract function DeleteAsset. The implementation is similar to that within initializeWorkloadModule. Note that it is possible to refactor both initializeWorkloadModule and cleanupWorkloadModule to utilize a common method that performs the create/delete action; this is left to the interested reader.

                                       async cleanupWorkloadModule() {\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Deleting asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'DeleteAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n
                                    "},{"location":"tutorial/fabric/#the-complete-workload-module","title":"The Complete Workload Module","text":"

                                    The workload module file should now be fully populated:

                                    'use strict';\n\nconst { WorkloadModuleBase } = require('@hyperledger/caliper-core');\n\nclass MyWorkload extends WorkloadModuleBase {\n    constructor() {\n        super();\n    }\n\n    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {\n        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);\n\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Creating asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'CreateAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID,'blue','20','penguin','500'],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n\n    async submitTransaction() {\n        const randomId = Math.floor(Math.random()*this.roundArguments.assets);\n        const myArgs = {\n            contractId: this.roundArguments.contractId,\n            contractFunction: 'ReadAsset',\n            invokerIdentity: 'User1',\n            contractArguments: [`${this.workerIndex}_${randomId}`],\n            readOnly: true\n        };\n\n        await this.sutAdapter.sendRequests(myArgs);\n    }\n\n    async cleanupWorkloadModule() {\n        for (let i=0; i<this.roundArguments.assets; i++) {\n            const assetID = `${this.workerIndex}_${i}`;\n            console.log(`Worker ${this.workerIndex}: Deleting asset ${assetID}`);\n            const request = {\n                contractId: this.roundArguments.contractId,\n                contractFunction: 'DeleteAsset',\n                invokerIdentity: 'User1',\n                contractArguments: [assetID],\n                readOnly: false\n            };\n\n            await this.sutAdapter.sendRequests(request);\n        }\n    }\n}\n\nfunction createWorkloadModule() {\n    return new MyWorkload();\n}\n\nmodule.exports.createWorkloadModule = createWorkloadModule;\n
                                    "},{"location":"tutorial/fabric/#step-4-build-a-benchmark-configuration-file","title":"Step 4 - Build a Benchmark Configuration File","text":"

                                    The benchmark configuration file defines the benchmark rounds and references the defined workload module(s). It will specify the number of test workers to use when generating the load, the number of test rounds, the duration of each round, the rate control applied to the transaction load during each round, and options relating to monitors. This particular tutorial will not make use of any of the available resource and transaction monitors; for these details please refer to the documentation.

                                    The benchmark configuration file may be provided in a YAML or JSON format; here we will use the YAML format. Please note that YAML files are case sensitive and all labels are in lowercase.

                                    The benchmark configuration file has a single required stanza:

                                    test:\n
                                    "},{"location":"tutorial/fabric/#create-a-template-benchmark-configuration-file","title":"Create A Template Benchmark Configuration File","text":"

                                    Under the benchmarks folder create a file called myAssetBenchmark.yaml with the following content:

                                    test:\n    name: basic-contract-benchmark\n    description: A test benchmark\n    workers:\n    rounds:\n

                                    test: The root level block that contains the benchmark test information.

                                    name: The name of the test, in this case \u201cbasic-contract-benchmark\u201d.

                                    description: A description for the benchmark, in this case \u201cA test benchmark\u201d.

                                    workers: A set of keys used to define the number of workers (separate worker client instances) used in the subsequent benchmark.

                                    rounds: An array of distinct test rounds that will be progressed sequentially. Rounds may be used to benchmark different smart contract methods, or the same method in a different manner.

                                    "},{"location":"tutorial/fabric/#populating-the-template-file_2","title":"Populating the Template File","text":"

                                    We will now populate the template file to specify the number of workers and the test round that uses the workload module we have created.

                                    "},{"location":"tutorial/fabric/#populate-workers","title":"Populate Workers","text":"

                                    We will be using two separate workers; this is accomplished through the workers specification:

                                      number: 2\n
                                    "},{"location":"tutorial/fabric/#populate-rounds","title":"Populate Rounds","text":"

                                    Each round block contains the following:

                                    • label - the unique header label to use for the round.
                                    • description - a description of the round being run.
                                    • txDuration - the specification of the test duration, in seconds
                                    • rateControl - a rate control type, with options.
                                    • workload - the workload module to use, with arguments to pass to the module. All arguments passed are available as roundArguments within the workload module.

                                    We will specify a benchmark round labeled readAsset, with the description Read asset benchmark, to run for a 30s duration, using a fixed-load rate controller aiming to maintain a constant transaction pressure of 2. Additionally we will be providing a workload through specification of our readAsset.js workload file, to which we will pass the arguments {assets: 10, contractId: basic}.

                                    The above is accomplished through the round specification:

                                        - label: readAsset\n      description: Read asset benchmark\n      txDuration: 30\n      rateControl:\n        type: fixed-load\n        opts:\n          transactionLoad: 2\n      workload:\n        module: workload/readAsset.js\n        arguments:\n          assets: 10\n          contractId: basic\n
                                    "},{"location":"tutorial/fabric/#the-complete-benchmark-configuration-file","title":"The Complete Benchmark Configuration File","text":"

                                    The benchmark configuration file should now be fully populated:

                                    test:\n    name: basic-contract-benchmark\n    description: A test benchmark\n    workers:\n      number: 2\n    rounds:\n      - label: readAsset\n        description: Read asset benchmark\n        txDuration: 30\n        rateControl:\n          type: fixed-load\n          opts:\n            transactionLoad: 2\n        workload:\n          module: workload/readAsset.js\n          arguments:\n            assets: 10\n            contractId: basic\n
                                    "},{"location":"tutorial/fabric/#step-5-run-the-caliper-benchmark","title":"Step 5 - Run the Caliper Benchmark","text":"

                                    We are now ready to run the performance benchmark using the above configuration files and test module. The performance benchmark will be run using the Caliper CLI, which will need to be supplied a path to the workspace and workspace relative paths to the network configuration file and the benchmark configuration file. This information is provided with the flags --caliper-workspace, --caliper-networkconfig, and --caliper-benchconfig respectively.

                                    Since the smart contract has already been installed and instantiated, Caliper only needs to perform the test phase. This is specified by using the flag --caliper-flow-only-test.

                                    "},{"location":"tutorial/fabric/#run-the-command","title":"Run the command","text":"

                                    Ensure that you are in the caliper-workspace directory.

                                    In the terminal run the following Caliper CLI command:

                                    npx caliper launch manager --caliper-workspace ./ --caliper-networkconfig networks/networkConfig.yaml --caliper-benchconfig benchmarks/myAssetBenchmark.yaml --caliper-flow-only-test

                                    "},{"location":"tutorial/fabric/#benchmark-results","title":"Benchmark Results","text":"

                                    The resulting report will detail the following items for each benchmark round:

                                    • Name - the round name from the benchmark configuration file
                                    • Succ/Fail - the number of successful/failing transactions
                                    • Send Rate - the rate at which Caliper issued the transactions
                                    • Latency (max/min/avg) - statistics relating to the time taken in seconds between issuing a transaction and receiving a response
                                    • Throughput - the average number of transactions processed per second

                                    You have successfully benchmarked a smart contract. You can repeat the test varying the benchmark parameters, as well as adding resource monitors. For the full set of options, please refer to the Caliper documentation.

                                    "},{"location":"tutorial/fabric/#license","title":"License","text":"

                                    The Caliper codebase is released under the Apache 2.0 license. Any documentation developed by the Caliper Project is licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

                                    "}]} \ No newline at end of file diff --git a/0.6.0/sitemap.xml.gz b/0.6.0/sitemap.xml.gz index 1a0f1f5c0f6c509a617672dc9fe2f9d88fd58e55..b42227d6130227675e66d5447b6df9ce48884fdc 100644 GIT binary patch delta 13 Ucmb=gXP58h;Am(GpU7ST033S+D*ylh delta 13 Ucmb=gXP58h;AjZ-n#f)O02^Eb?*IS*