title | platform | product | category | subcategory | date |
---|---|---|---|---|---|
Data Center App Performance Toolkit User Guide For Bitbucket | platform | marketplace | devguide | build | 2022-12-21 |
This document walks you through the process of testing your app on Bitbucket using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.
In this document, we cover the use of the Data Center App Performance Toolkit on two types of environments:
Development environment: Bitbucket Data Center environment for a test run of Data Center App Performance Toolkit and development of app-specific actions. We recommend you use the AWS Quick Start for Bitbucket Data Center with the parameters prescribed here.
- Set up a development environment Bitbucket Data Center on AWS.
- Create a dataset for the development environment.
- Run toolkit on the development environment locally.
- Develop and test app-specific actions locally.
Enterprise-scale environment: Bitbucket Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the AWS Quick Start for Bitbucket Data Center with the parameters prescribed below. These parameters provision larger, more powerful infrastructure for your Bitbucket Data Center.
- Set up an enterprise-scale environment Bitbucket Data Center on AWS.
- Load an enterprise-scale dataset on your Bitbucket Data Center deployment.
- Set up an execution environment for the toolkit.
- Run all the testing scenarios in the toolkit.
{{% note %}} For simple spikes or tests, you can skip steps 1-2 and target any Bitbucket test instance. When you set up your execution environment, you may need to edit the scripts according to your test instance's data set. {{% /note %}}
Running the tests in a development environment helps familiarize you with the toolkit. It'll also provide you with a lightweight and less expensive environment for developing. Once you're ready to generate test results for the Marketplace Data Center Apps Approval process, run the toolkit in an enterprise-scale environment.
We recommend that you set up the development environment using the AWS Quick Start for Bitbucket Data Center (How to deploy tab). All the instructions on this page are optimized for AWS. If you already have an existing Bitbucket Data Center environment, you can use that instead (if so, skip to Create a dataset for the development environment).
If you are a new user, perform an end-to-end deployment. This involves deploying Bitbucket into a new ASI:
Navigate to AWS Quick Start for Bitbucket Data Center > How to deploy tab > Deploy into a new ASI link.
If you have already deployed the ASI separately by using the ASI Quick Start or by deploying another Atlassian product (Jira, Bitbucket, or Confluence Data Center development environment) with ASI, deploy Bitbucket into your existing ASI:
Navigate to AWS Quick Start for Bitbucket Data Center > How to deploy tab > Deploy into your existing ASI link.
{{% note %}} You are responsible for the cost of AWS services used while running this Quick Start reference deployment. There is no additional charge for using the Quick Start itself. See Amazon EC2 pricing for more detail. {{% /note %}}
To reduce costs, we recommend keeping your deployment up and running only during performance runs.
An AWS Bitbucket Data Center development environment costs about $25-40 per working week, depending on factors such as region, instance type, and database deployment type.
All important parameters are listed and described in this section. For all other remaining parameters, we recommend using the Quick Start defaults.
Bitbucket setup
Parameter | Recommended value |
---|---|
Bitbucket Product | Software |
Version | The Data Center App Performance Toolkit officially supports 7.17.11 and 7.21.5 (Long Term Support releases) and the 8.0.4 platform release. |
Cluster nodes
Parameter | Recommended value |
---|---|
Cluster node instance type | t3.medium (we recommend this instance type for its good balance between price and performance in testing environments) |
Maximum number of cluster nodes | 1 |
Minimum number of cluster nodes | 1 |
Cluster node instance volume size | 50 |
File server
Parameter | Recommended Value |
---|---|
File server instance type | m4.xlarge |
Home directory size | 100 |
Database
Parameter | Recommended Value |
---|---|
The database engine to deploy with | PostgreSQL |
The database engine version to use | 11 |
Database instance class | db.t3.medium |
RDS Provisioned IOPS | 1000 |
Master password | Password1! |
Enable RDS Multi-AZ deployment | false |
Bitbucket database password | Password1! |
Database storage | 100 |
Elasticsearch
Parameter | Recommended Value |
---|---|
Elasticsearch master user password | (leave blank) |
Elasticsearch instance type | m4.large.elasticsearch |
Elasticsearch disk-space per node (GB) | 100 |
Networking (for new ASI)
Parameter | Recommended Value |
---|---|
Trusted IP range | 0.0.0.0/0 (for public access) or your own trusted IP range |
Availability Zones | Select two availability zones in your region |
Permitted IP range | 0.0.0.0/0 (for public access) or your own trusted IP range |
Make instance internet facing | true |
Key Name | The EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info. |
Networking (for existing ASI)
Parameter | Recommended Value |
---|---|
Make instance internet facing | true |
Permitted IP range | 0.0.0.0/0 (for public access) or your own trusted IP range |
Key Name | The EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info. |
After successfully deploying Bitbucket Data Center in AWS, you'll need to configure it:
- In the AWS console, go to Services > CloudFormation > Stack > Stack details > Select your stack.
- On the Outputs tab, copy the value of the LoadBalancerURL key.
- Open LoadBalancerURL in your browser. This will take you to the Bitbucket setup wizard.
- On the Bitbucket setup page, populate the following fields:
- Application title: any name for your Bitbucket Data Center deployment
- Base URL: your stack's Elastic LoadBalancer URL
- License key: select the new evaluation license or existing license checkbox, then click Next.
- On the Administrator account setup page, populate the following fields:
- Username: admin (recommended)
- Full name: any full name of the admin user
- Email address: email address of the admin user
- Password: admin (recommended)
- Confirm Password: admin (recommended), then click Go to Bitbucket.
After creating the development environment Bitbucket Data Center, generate a test dataset to run the Data Center App Performance Toolkit:
- Create at least one project.
- Create a repository with some files in the project.
- Create a couple of new branches from the repo, make and push changes to the branches, and create a pull request (a scripted sketch of these steps follows the warnings below).
{{% warning %}}
To avoid merge conflicts with the base performance scripts, do not create pull requests with the master branch as target or source.
{{% /warning %}}
{{% warning %}} Make sure English (United States) is selected as the default language on the Server settings > Language page. Other languages are not supported by the toolkit. {{% /warning %}}
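As referenced above, here is a minimal sketch of seeding the dataset from the command line. The base URL, project key `TEST`, repo slug `demo`, and `admin`/`admin` credentials are placeholders; the pull request call uses the Bitbucket Server REST API, and neither PR branch is master, per the warning above:

```
BB=http://your-bitbucket:7990   # placeholder base URL

# Clone the repository created in the TEST project.
git clone "${BB}/scm/test/demo.git" && cd demo

# Create two branches, each with a pushed change.
git checkout -b branch-one && echo "one" > one.txt && git add one.txt && git commit -m "add one.txt" && git push -u origin branch-one
git checkout -b branch-two && echo "two" > two.txt && git add two.txt && git commit -m "add two.txt" && git push -u origin branch-two

# Open a pull request from branch-two into branch-one via the REST API.
curl -u admin:admin -H "Content-Type: application/json" -X POST \
  "${BB}/rest/api/1.0/projects/TEST/repos/demo/pull-requests" \
  -d '{"title":"test pr","fromRef":{"id":"refs/heads/branch-two","repository":{"slug":"demo","project":{"key":"TEST"}}},"toRef":{"id":"refs/heads/branch-one","repository":{"slug":"demo","project":{"key":"TEST"}}}}'
```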
- Clone Data Center App Performance Toolkit locally.
- Follow the README.md instructions to set up the toolkit locally.
- Navigate to the `dc-app-performance-toolkit/app` folder.
- Open the `bitbucket.yml` file and fill in the following variables (a quick connectivity check is sketched after this list):
  - `application_hostname`: your_dc_bitbucket_instance_hostname without protocol.
  - `application_protocol`: http or https.
  - `application_port`: for HTTP - 80, for HTTPS - 443, 8080, 7990, or your instance-specific port.
  - `secure`: True or False. Default value is True. Set False to allow insecure connections, e.g. when using a self-signed SSL certificate.
  - `application_postfix`: empty by default; e.g. /bitbucket for a URL like http://localhost:7990/bitbucket.
  - `admin_login`: admin user username.
  - `admin_password`: admin user password.
  - `load_executor`: executor for load tests - jmeter.
  - `concurrency`: `1` - number of concurrent JMeter users.
  - `test_duration`: `5m` - duration of the performance run.
  - `ramp-up`: `1s` - amount of time it will take JMeter to add all test users to test execution.
  - `total_actions_per_hour`: `3270` - number of total JMeter actions per hour.
  - `WEBDRIVER_VISIBLE`: visibility of the Chrome browser during Selenium execution (False by default).
- Run bzt:

  ```
  bzt bitbucket.yml
  ```
- Review the resulting table in the console log. All JMeter and Selenium actions should have a 95+% success rate.
  If some actions do not have a 95+% success rate, refer to the following logs in the `dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss` folder:
  - `results_summary.log`: detailed run summary
  - `results.csv`: aggregated .csv file with all actions and timings
  - `bzt.log`: logs of the Taurus tool execution
  - `jmeter.*`: logs of the JMeter tool execution
  - `pytest.*`: logs of Pytest-Selenium execution
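As referenced in the configuration step above, before the first run you can sanity-check the values set in `bitbucket.yml` with a quick request to the instance (an illustrative example; substitute your protocol, hostname, port, and postfix). A healthy Bitbucket node answers `{"state":"RUNNING"}` on its `/status` endpoint:

```
# Mirrors application_protocol, application_hostname, application_port, application_postfix.
curl -s "http://your_dc_bitbucket_instance_hostname:7990/status"
```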
{{% warning %}} Do not proceed with the next step until all actions have a 95+% success rate. If analyzing the logs above did not help, ask for support. {{% /warning %}}
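If some actions fall below the bar, here is a quick way to triage from the run's results folder (a minimal sketch; the folder name is the timestamped directory above, and the `column` utility may need to be installed):

```
cd dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss
grep -iE "error|fail" bzt.log | head -n 20   # scan the Taurus log for failures
column -s, -t results.csv | less -S          # inspect per-action timings and success rates
```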
Data Center App Performance Toolkit has its own set of default test actions for Bitbucket Data Center: JMeter and Selenium for load and UI tests respectively.
An app-specific action is a performance test you have to develop to cover the main use cases of your application. The performance test should focus on common usage of your application, not on covering all possible functionality of your app. For example, an application setup screen or other one-time use cases are out of scope for performance testing.
- Define the main use case of your app. Usually there are one or two main use cases.
- If your app adds new UI elements to Bitbucket Data Center, a Selenium app-specific action has to be developed.
- If your app introduces a new endpoint or extensively calls the existing Bitbucket Data Center API, JMeter app-specific actions have to be developed.
{{% note %}} We strongly recommend developing your app-specific actions on the development environment to reduce AWS infrastructure costs. {{% /note %}}
Suppose you develop an app that adds some additional fields to specific types of Bitbucket issues. In this case, you should develop a Selenium app-specific action:
- Extend the example app-specific action in `dc-app-performance-toolkit/app/extension/bitbucket/extension_ui.py` (see the code example). The test has to open the app-specific issues and measure the time it takes to load them.
- If you need to run `app_specific_action` as a specific user, uncomment the `app_specific_user_login` function in the code example. Note that in this case `test_1_selenium_custom_action` should follow just before the `test_2_selenium_z_log_out` action.
- In `dc-app-performance-toolkit/app/selenium_ui/bitbucket_ui.py`, review and uncomment the following block of code so that the newly created app-specific action is executed:

  ```
  # def test_1_selenium_custom_action(webdriver, datasets, screen_shots):
  #     app_specific_action(webdriver, datasets)
  ```

- Run the toolkit with the `bzt bitbucket.yml` command to ensure that all Selenium actions, including `app_specific_action`, are successful.
After adding your custom app-specific actions, you should now be ready to run the required tests for the Marketplace Data Center Apps Approval process. To do this, you'll need an enterprise-scale environment.
We recommend that you use the AWS Quick Start for Bitbucket Data Center (How to deploy tab) to deploy a Bitbucket Data Center testing environment. This Quick Start will allow you to deploy Bitbucket Data Center with a new Atlassian Standard Infrastructure (ASI) or into an existing one.
The ASI is a Virtual Private Cloud (VPC) that consists of subnets, NAT gateways, security groups, bastion hosts, and other infrastructure components required by all Atlassian applications; the Quick Start then deploys Bitbucket into this VPC. Deploying Bitbucket with a new ASI takes around 50 minutes. With an existing one, it'll take around 30 minutes.
If you are a new user, perform an end-to-end deployment. This involves deploying Bitbucket into a new ASI:
Navigate to AWS Quick Start for Bitbucket Data Center > How to deploy tab > Deploy into a new ASI link.
If you have already deployed the ASI separately by using the ASI Quick Start or by deploying another Atlassian product (Jira, Bitbucket, or Confluence Data Center development environment) with ASI, deploy Bitbucket into your existing ASI:
Navigate to AWS Quick Start for Bitbucket Data Center > How to deploy tab > Deploy into your existing ASI link.
{{% note %}} You are responsible for the cost of the AWS services used while running this Quick Start reference deployment. There is no additional price for using this Quick Start. For more information, go to aws.amazon.com/pricing. {{% /note %}}
To reduce costs, we recommend keeping your deployment up and running only during performance runs.
AWS Pricing Calculator provides an estimate of usage charges for AWS services based on certain information you provide. Monthly charges will be based on your actual usage of AWS services, and may vary from the estimates the Calculator has provided.
*The prices below are approximate and may vary depending on factors such as region, instance type, and database deployment type.
Stack | Estimated hourly cost ($) |
---|---|
One Node Bitbucket DC | 1.4 - 2.0 |
Two Nodes Bitbucket DC | 1.7 - 2.5 |
Four Nodes Bitbucket DC | 2.4 - 3.6 |
To reduce AWS infrastructure costs, you can stop cluster nodes while the cluster is idle.
A cluster node can be stopped by using Suspending and Resuming Scaling Processes.
To stop one node within the cluster, follow the instructions below:

- In the AWS console, go to Services > EC2 > Auto Scaling Groups and open the group that the node you want to stop belongs to.
- Click Edit (if you have the New EC2 experience UI mode enabled, click Edit on Advanced configuration) and add `HealthCheck` to the Suspended Processes. Amazon EC2 Auto Scaling then stops marking instances unhealthy as a result of EC2 and Elastic Load Balancing health checks.
- Go to EC2 Instances, select the instance, and click Instance state > Stop instance.
To return the node to a working state, follow these instructions:

- Go to EC2 Instances, select the instance, click Instance state > Start instance, and wait a few minutes for the node to become available.
- Go to EC2 Auto Scaling Groups and open the group that the node belongs to.
- Click Edit (if you have the New EC2 experience UI mode enabled, click Edit on Advanced configuration) and remove `HealthCheck` from the Suspended Processes of the Auto Scaling Group.
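If you prefer the command line, the same stop/start cycle can be scripted with the AWS CLI (a hedged sketch; `ASG_NAME` and `INSTANCE_ID` are placeholders for your stack's values):

```
# Stop a node: suspend health checks first, then stop the instance.
aws autoscaling suspend-processes --auto-scaling-group-name "$ASG_NAME" --scaling-processes HealthCheck
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"

# Return the node to service: start the instance, then resume health checks.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
aws autoscaling resume-processes --auto-scaling-group-name "$ASG_NAME" --scaling-processes HealthCheck
```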
To reduce AWS infrastructure costs, the database can be stopped while the cluster is idle. Keep in mind that a stopped database is automatically restarted after 7 days.
To stop database:
- In the AWS console, go to Services > RDS > Databases.
- Select cluster database.
- Click on Actions > Stop.
To start database:
- In the AWS console, go to Services > RDS > Databases.
- Select cluster database.
- Click on Actions > Start.
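The same can be done from the AWS CLI (a hedged sketch; `DB_ID` is a placeholder for your RDS database identifier, and Aurora clusters use the `stop-db-cluster`/`start-db-cluster` variants instead):

```
aws rds stop-db-instance --db-instance-identifier "$DB_ID"    # stop while idle
aws rds start-db-instance --db-instance-identifier "$DB_ID"   # start before the next run
```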
All important parameters are listed and described in this section. For all other remaining parameters, we recommend using the Quick Start defaults.
Bitbucket setup
Parameter | Recommended Value |
---|---|
Version | The Data Center App Performance Toolkit officially supports 7.17.11 and 7.21.5 (Long Term Support releases) and the 8.0.4 platform release. |
Cluster nodes
Parameter | Recommended Value |
---|---|
Bitbucket cluster node instance type | c5.2xlarge |
Maximum number of cluster nodes | 1 |
Minimum number of cluster nodes | 1 |
Cluster node instance volume size | 50 |
We recommend c5.2xlarge to strike a balance between cost and the hardware we see in the field for our enterprise customers. More information can be found in the public recommendations.
The Data Center App Performance Toolkit framework is also set up for concurrency we expect on this instance size. As such, underprovisioning will likely show a larger performance impact than expected.
File server
Parameter | Recommended Value |
---|---|
File server instance type | m4.xlarge |
Home directory size | 1000 |
Database
Parameter | Recommended Value |
---|---|
The database engine to deploy with | PostgreSQL |
The database engine version to use | 11 |
Database instance class | db.m4.large |
RDS Provisioned IOPS | 1000 |
Master password | Password1! |
Enable RDS Multi-AZ deployment | false |
Bitbucket database password | Password1! |
Database storage | 100 |
{{% note %}}
The Master (admin) password will be used later when restoring the SQL database dataset. If the password value is not set to the default, you'll need to change the `DB_PASS` value manually in the restore database dump script (later in Preloading your Bitbucket deployment with an enterprise-scale dataset).
{{% /note %}}
Elasticsearch
Parameter | Recommended Value |
---|---|
Elasticsearch master user password | (leave blank) |
Elasticsearch instance type | m4.xlarge.elasticsearch |
Elasticsearch disk-space per node (GB) | 1000 |
Networking (for new ASI)
Parameter | Recommended Value |
---|---|
Trusted IP range | 0.0.0.0/0 (for public access) or your own trusted IP range |
Availability Zones | Select two availability zones in your region |
Permitted IP range | 0.0.0.0/0 (for public access) or your own trusted IP range |
Make instance internet facing | true |
Key Name | The EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info. |
Networking (for existing ASI)
Parameter | Recommended Value |
---|---|
Make instance internet facing | true |
Permitted IP range | 0.0.0.0/0 (for public access) or your own trusted IP range |
Key Name | The EC2 Key Pair to allow SSH access. See Amazon EC2 Key Pairs for more info. |
After successfully deploying Bitbucket Data Center in AWS, you'll need to configure it:
- In the AWS console, go to Services > CloudFormation > Stack > Stack details > Select your stack.
- On the Outputs tab, copy the value of the LoadBalancerURL key.
- Open LoadBalancerURL in your browser. This will take you to the Bitbucket setup wizard.
- On the Bitbucket setup page, populate the following fields:
- Application title: any name for your Bitbucket Data Center deployment
- Base URL: your stack's Elastic LoadBalancer URL
- License key: select the new evaluation license or existing license checkbox, then click Next.
- On the Administrator account setup page, populate the following fields:
- Username: admin (recommended)
- Full name: any full name of the admin user
- Email address: email address of the admin user
- Password: admin (recommended)
- Confirm Password: admin (recommended), then click Go to Bitbucket.
Data dimensions and values for an enterprise-scale dataset are listed and described in the following table.
Data dimensions | Value for an enterprise-scale dataset |
---|---|
Projects | ~25 000 |
Repositories | ~52 000 |
Users | ~25 000 |
Pull Requests | ~ 1 000 000 |
Total files number | ~750 000 |
{{% note %}}
All the datasets use the standard `admin`/`admin` credentials.
{{% /note %}}
Pre-loading the dataset is a two-step process:
- Importing the main dataset. To help you out, we provide an enterprise-scale dataset you can import via the populate_db.sh script.
- Restoring attachments. We also provide attachments, which you can pre-load via an upload_attachments.sh script.
The following subsections explain each step in greater detail.
You can load this dataset directly into the database (via a populate_db.sh script).
{{% note %}} We recommend doing this via the CLI. {{% /note %}}
To populate the database with SQL:
- In the AWS console, go to Services > EC2 > Instances.
- On the Description tab, do the following:
  - Copy the Public IP of the Bastion instance.
  - Copy the Private IP of the Bitbucket node instance.
  - Copy the Private IP of the Bitbucket NFS Server instance.
- Using SSH, connect to the Bitbucket node via the Bastion instance.

  For Linux or MacOS, run the following commands in a terminal (for Windows, use a Git Bash terminal):

  ```
  ssh-add path_to_your_private_key_pem
  export BASTION_IP=bastion_instance_public_ip
  export NODE_IP=node_private_ip
  export SSH_OPTS1='-o ServerAliveInterval=60'
  export SSH_OPTS2='-o ServerAliveCountMax=30'
  ssh ${SSH_OPTS1} ${SSH_OPTS2} -o "proxycommand ssh -W %h:%p ${SSH_OPTS1} ${SSH_OPTS2} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP}
  ```

  For more information, go to Connecting your nodes over SSH.

- Stop Bitbucket Server:

  ```
  sudo systemctl stop bitbucket
  ```

- In a new terminal session, connect to the Bitbucket NFS Server over SSH.

  For Linux or MacOS, run the following commands in a terminal (for Windows, use a Git Bash terminal):

  ```
  ssh-add path_to_your_private_key_pem
  export BASTION_IP=bastion_instance_public_ip
  export NFS_SERVER_IP=nfs_server_private_ip
  export SSH_OPTS1='-o ServerAliveInterval=60'
  export SSH_OPTS2='-o ServerAliveCountMax=30'
  ssh ${SSH_OPTS1} ${SSH_OPTS2} -o "proxycommand ssh -W %h:%p ${SSH_OPTS1} ${SSH_OPTS2} ec2-user@${BASTION_IP}" ec2-user@${NFS_SERVER_IP}
  ```

  For more information, go to Connecting your nodes over SSH.

- Download the populate_db.sh script and make it executable:

  ```
  wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/bitbucket/populate_db.sh && chmod +x populate_db.sh
  ```

- Review the Variables section of the script:

  ```
  DB_CONFIG="/media/atl/bitbucket/shared/bitbucket.properties"

  # Depending on BITBUCKET installation directory
  BITBUCKET_CURRENT_DIR="/opt/atlassian/bitbucket/current/"
  BITBUCKET_VERSION_FILE="/media/atl/bitbucket/shared/bitbucket.version"

  # DB admin user name, password and DB name
  BITBUCKET_DB_NAME="bitbucket"
  BITBUCKET_DB_USER="postgres"
  BITBUCKET_DB_PASS="Password1!"

  # Datasets AWS bucket and db dump name
  DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/bitbucket"
  DATASETS_SIZE="large"
  ```

- Run the script:

  ```
  ./populate_db.sh 2>&1 | tee -a populate_db.log
  ```
{{% note %}}
Do not close or interrupt the session. It will take about an hour to restore the SQL database. When the restore is finished, the admin user will have `admin`/`admin` credentials.
In case of a failure, check the Variables section and run the script one more time.
{{% /note %}}
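Because the restore takes about an hour and must not be interrupted, you may prefer to run it detached from the SSH session so a dropped connection cannot kill it (a minimal sketch; the same approach works for the upload_attachments.sh script in the next section):

```
# Start the restore detached from the terminal session.
nohup ./populate_db.sh > populate_db.log 2>&1 &

# Follow progress; Ctrl+C stops following the log, not the restore itself.
tail -f populate_db.log
```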
After Importing the main dataset, you'll now have to pre-load an enterprise-scale set of attachments.
{{% note %}} The populate DB and restore attachments scripts can be run in parallel in separate terminal sessions to save time. {{% /note %}}
- Using SSH, connect to the Bitbucket NFS Server via the Bastion instance.

  For Linux or MacOS, run the following commands in a terminal (for Windows, use a Git Bash terminal):

  ```
  ssh-add path_to_your_private_key_pem
  export BASTION_IP=bastion_instance_public_ip
  export NFS_SERVER_IP=nfs_server_private_ip
  export SSH_OPTS1='-o ServerAliveInterval=60'
  export SSH_OPTS2='-o ServerAliveCountMax=30'
  ssh ${SSH_OPTS1} ${SSH_OPTS2} -o "proxycommand ssh -W %h:%p ${SSH_OPTS1} ${SSH_OPTS2} ec2-user@$BASTION_IP" ec2-user@${NFS_SERVER_IP}
  ```

  For more information, go to Connecting your nodes over SSH.

- Download the upload_attachments.sh script and make it executable:

  ```
  wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/bitbucket/upload_attachments.sh && chmod +x upload_attachments.sh
  ```

- Review the Variables section of the script:

  ```
  DATASETS_AWS_BUCKET="https://centaurus-datasets.s3.amazonaws.com/bitbucket"
  ATTACHMENTS_TAR="attachments.tar.gz"
  DATASETS_SIZE="large"
  ATTACHMENTS_TAR_URL="${DATASETS_AWS_BUCKET}/${BITBUCKET_VERSION}/${DATASETS_SIZE}/${ATTACHMENTS_TAR}"
  NFS_DIR="/media/atl/bitbucket/shared"
  ATTACHMENT_DIR_DATA="data"
  ```

- Run the script:

  ```
  ./upload_attachments.sh 2>&1 | tee -a upload_attachments.log
  ```
{{% note %}} Do not close or interrupt the session. It will take about two hours to upload attachments. {{% /note %}}
- Using SSH, connect to the Bitbucket node via the Bastion instance.

  For Linux or MacOS, run the following commands in a terminal (for Windows, use a Git Bash terminal):

  ```
  ssh-add path_to_your_private_key_pem
  export BASTION_IP=bastion_instance_public_ip
  export NODE_IP=node_private_ip
  export SSH_OPTS1='-o ServerAliveInterval=60'
  export SSH_OPTS2='-o ServerAliveCountMax=30'
  ssh ${SSH_OPTS1} ${SSH_OPTS2} -o "proxycommand ssh -W %h:%p ${SSH_OPTS1} ${SSH_OPTS2} ec2-user@${BASTION_IP}" ec2-user@${NODE_IP}
  ```

  For more information, go to Connecting your nodes over SSH.

- Start Bitbucket DC:

  ```
  sudo systemctl start bitbucket
  ```

- Wait 10-15 minutes until Bitbucket DC has started (a status-polling sketch follows this list).
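As referenced above, instead of waiting a fixed time you can poll the node's status endpoint until it reports a running state (a hedged sketch, assuming the node listens on port 7990; a healthy Bitbucket node answers `{"state":"RUNNING"}` on `/status`):

```
# Run on the Bitbucket node: poll until Bitbucket reports it is running.
until curl -s http://localhost:7990/status | grep -q RUNNING; do
    echo "waiting for Bitbucket to start..."
    sleep 30
done
echo "Bitbucket is up"
```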
If your app does not use Bitbucket search functionality, skip this section.
If your app depends on Bitbucket search functionality, you need to wait until the Elasticsearch index is finished. The bitbucket-project and bitbucket-repository indexes usually take about 10 hours on the configuration recommended in this guide; the bitbucket-search index (search by repository content) can take up to a couple of days.
To check the status of indexing:

- Open LoadBalancerURL in your browser.
- Log in with the admin user.
- Navigate to the `LoadBalancerURL/rest/indexing/latest/status` page:
  - `"status":"INDEXING"` - indexing is in progress
  - `"status":"IDLE"` - indexing is finished
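The same endpoint can be polled from a terminal instead of refreshing the browser (substitute your LoadBalancerURL and admin credentials):

```
export LB_URL=your_load_balancer_url
# "status":"IDLE" in the response means indexing is finished.
curl -s -u admin:admin "${LB_URL}/rest/indexing/latest/status"
```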
{{% note %}} In case of any difficulties with Index generation, contact us for support in the community Slack #data-center-app-performance-toolkit channel. {{% /note %}}
{{% note %}}
After Preloading your Bitbucket deployment with an enterprise-scale dataset, the admin user will have `admin`/`admin` credentials.
It's recommended to change the default password from the UI account page for security reasons.
{{% /note %}}
To generate performance results suitable for the Marketplace approval process, use a dedicated execution environment: a separate AWS EC2 instance to run the toolkit from. Running the toolkit from a dedicated instance rather than a local machine eliminates network fluctuations and guarantees stable CPU and memory performance.
- Go to GitHub and create a fork of dc-app-performance-toolkit.
- Clone the fork locally, then edit the `bitbucket.yml` configuration file. Set the enterprise-scale Bitbucket Data Center parameters:
{{% warning %}}
For security reasons, do not push real `application_hostname`, `admin_login`, and `admin_password` values to the fork.
Instead, set those values directly in the `.yml` file on the execution environment instance.
{{% /warning %}}
```
application_hostname: test_bitbucket_instance.atlassian.com # Bitbucket DC hostname without protocol and port e.g. test-bitbucket.atlassian.com or localhost
application_protocol: http # http or https
application_port: 80 # 80, 443, 8080, 7990 etc
secure: True # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
application_postfix: # e.g. /bitbucket in case of url like http://localhost:7990/bitbucket
admin_login: admin
admin_password: admin
load_executor: jmeter # only jmeter executor is supported
concurrency: 20 # number of concurrent virtual users for jmeter scenario
test_duration: 50m
ramp-up: 10m # time to spin all concurrent users
total_actions_per_hour: 32700 # number of total JMeter actions per hour
```
- Push your changes to the forked repository.
- Launch an AWS EC2 instance for the execution environment with the following parameters:
  - OS: select `Ubuntu Server 20.04 LTS` from Quick Start.
  - Instance type: `c5.2xlarge`
  - Storage size: `30` GiB
- Connect to the instance using SSH or the AWS Systems Manager Sessions Manager:

  ```
  ssh -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP
  ```
- Install Docker and set up Docker so it can be managed as a non-root user (a minimal installation sketch follows the note below).
- Connect to the AWS EC2 instance and clone the forked repository.
{{% note %}}
At this stage, app-specific actions are not needed yet. Use code from the `master` branch with your `bitbucket.yml` changes.
{{% /note %}}
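As referenced in the list above, here is a minimal Docker installation sketch for the Ubuntu execution environment (assuming the default `ubuntu` user; Docker's convenience script is one of several supported install methods):

```
# Install Docker via the convenience script (Ubuntu).
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Manage Docker as a non-root user: add ubuntu to the docker group,
# then log out and back in for the group change to take effect.
sudo usermod -aG docker ubuntu
```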
You'll need to run the toolkit for each test scenario in the next section.
Using the Data Center App Performance Toolkit for performance and scale testing of your Data Center app involves two test scenarios: performance regression and scalability testing.
Each scenario involves multiple test runs. The following subsections explain both in greater detail.
{{% warning %}} Make sure English is selected as the default language on the Server settings > Language page. Other languages are not supported by the toolkit. {{% /warning %}}
This scenario helps to identify basic performance issues without a need to spin up a multi-node Bitbucket DC. Make sure the app does not have any performance impact when it is not exercised.
To receive performance baseline results without an app installed:
- Use SSH to connect to the execution environment.
- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
  ```
- View the following main results of the run in the `dc-app-performance-toolkit/app/results/bitbucket/YY-MM-DD-hh-mm-ss` folder:
  - `results_summary.log`: detailed run summary
  - `results.csv`: aggregated .csv file with all actions and timings
  - `bzt.log`: logs of the Taurus tool execution
  - `jmeter.*`: logs of the JMeter tool execution
  - `pytest.*`: logs of Pytest-Selenium execution
{{% note %}}
Review the `results_summary.log` file under the artifacts directory location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To receive performance results with an app installed:
- Install the app you want to test.
- Set up the app license.
- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts directory location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To generate a performance regression report:

- Use SSH to connect to the execution environment.
- Install and activate the `virtualenv` as described in `dc-app-performance-toolkit/README.md`.
- Allow the current user (for the execution environment, the default user is `ubuntu`) to access the Docker-generated reports:

  ```
  sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
  ```

- Navigate to the `dc-app-performance-toolkit/app/reports_generation` folder.
- Edit the `performance_profile.yml` file: set the paths to the result folders of your runs without and with the app installed.
- Run the following command:

  ```
  python csv_chart_generator.py performance_profile.yml
  ```

- In the `dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss` folder, view the `.csv` file (with consolidated scenario results), the `.png` chart file, and the performance scenario summary report.
Use the scp command to copy report artifacts from the execution environment to a local drive:

- From a local machine terminal (Git Bash terminal for Windows), run:

  ```
  export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
  scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
  ```

- Once completed, review the action timings with and without your app in the `./reports` folder to see its impact on the performance of the instance. If you see an impact (>20%) on any action timing, we recommend looking into the app implementation to understand the root cause of this delta.
The purpose of scalability testing is to reflect the impact on the customer experience when operating across multiple nodes. For this, you have to run scale testing on your app.
For many apps and extensions to Atlassian products, there should not be a significant performance difference between operating on a single node or across many nodes in a Bitbucket DC deployment. To demonstrate the performance impact of operating your app at scale, we recommend testing your Bitbucket DC app in a cluster.
To receive scalability benchmark results for a one-node Bitbucket DC with app-specific actions:

- Apply app-specific code changes to a new branch of the forked repo.
- Use SSH to connect to the execution environment.
- Pull the cloned fork repo branch with app-specific actions.
- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts directory location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
{{% note %}} Before scaling your DC, make sure that your AWS vCPU limit is not lower than the number of vCPUs you need. Use the vCPU limits calculator to see your current limit. The same article has instructions on how to increase the limit if needed. {{% /note %}}
To receive scalability benchmark results for a two-node Bitbucket DC with app-specific actions:

- In the AWS console, go to CloudFormation > Stack details > Select your stack.
- On the Update tab, select Use current template, and then click Next.
- Enter `2` in the Maximum number of cluster nodes and the Minimum number of cluster nodes fields.
- Click Next > Next > Update stack and wait until the stack is updated.
{{% warning %}}
If you get a BastionPrivIp cannot be updated error during the update, use the following steps as a workaround:

- In the AWS console, go to EC2 > Auto Scaling > Auto Scaling Groups.
- On the Auto Scaling Groups page, select your stack's ASG and click Edit.
- Enter `2` in the Desired capacity, Minimum capacity, and Maximum capacity fields.
- Scroll down, click Update, and wait until the stack is updated.
{{% /warning %}}
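The same workaround can be applied from the AWS CLI (a hedged sketch; `ASG_NAME` is a placeholder for your stack's Auto Scaling Group name):

```
# Scale the Auto Scaling Group to two nodes.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name "$ASG_NAME" \
    --min-size 2 --max-size 2 --desired-capacity 2
```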
- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts directory location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
{{% note %}} Before scaling your DC, make sure that your AWS vCPU limit is not lower than the number of vCPUs you need. Use the vCPU limits calculator to see your current limit. The same article has instructions on how to increase the limit if needed. {{% /note %}}
To receive scalability benchmark results for a four-node Bitbucket DC with app-specific actions:

- Scale your Bitbucket Data Center deployment to 4 nodes in the same way as in Run 4.
- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bitbucket.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts directory location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To generate a scalability report:

- Use SSH to connect to the execution environment.
- Allow the current user (for the execution environment, the default user is `ubuntu`) to access the Docker-generated reports:

  ```
  sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
  ```

- Navigate to the `dc-app-performance-toolkit/app/reports_generation` folder.
- Edit the `scale_profile.yml` file: set the paths to the result folders of your one-, two-, and four-node runs.
- Run the following command from the `virtualenv` (as described in `dc-app-performance-toolkit/README.md`):

  ```
  python csv_chart_generator.py scale_profile.yml
  ```

- In the `dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss` folder, view the `.csv` file (with consolidated scenario results), the `.png` chart file, and the summary report.
Use the scp command to copy report artifacts from the execution environment to a local drive:

- From a local terminal (Git Bash terminal for Windows), run:

  ```
  export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
  scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
  ```

- Once completed, review the action timings on Bitbucket Data Center with different numbers of nodes in the `./reports` folder. If you see significant variation in any action timings between configurations, we recommend looking into the app implementation to understand the root cause of this delta.
{{% warning %}} After completing all your tests, delete your Bitbucket Data Center stacks. {{% /warning %}}
{{% warning %}} Do not forget to attach performance testing results to your ECOHELP ticket. {{% /warning %}}
- Make sure you have two report folders: one with performance profile results and one with scale profile results. Each folder should contain `profile.csv`, `profile.png`, `profile_summary.log`, and the profile run result archives. Archives should contain all raw data created during the run: `bzt.log`, Selenium/JMeter/Locust logs, .csv and .yml files, etc.
- Attach the two report folders to your ECOHELP ticket.
In case of technical questions, issues or problems with DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.