fix spelling errors #495

Merged · 2 commits · Oct 11, 2022

Changes from 1 commit
2 changes: 1 addition & 1 deletion charts/snapshotEngine/README.md

@@ -320,7 +320,7 @@ Overview of functionality of containers in Kubernetes Job Pods.

##### init-tezos-filesystem Container

-In order for the storage to be imported sucessfully to a new node, the storage needs to be initialized by the `tezos-node` application.
+In order for the storage to be imported successfully to a new node, the storage needs to be initialized by the `tezos-node` application.

This container performs the following steps -
2 changes: 1 addition & 1 deletion charts/snapshotEngine/scripts/snapshot-warmer.sh

@@ -57,7 +57,7 @@ delete_stuck_volumesnapshots() {
sleep 10
exit 1
else
-printf "%s Sucessfully deleted stuck snapshot %s! \n" "$(timestamp)" "$snapshot_name"
+printf "%s Successfully deleted stuck snapshot %s! \n" "$(timestamp)" "$snapshot_name"
fi
fi
done
2 changes: 1 addition & 1 deletion charts/tezos/templates/_containers.tpl

@@ -38,7 +38,7 @@
* scripts/wait-for-dns.sh and pass it as a single
* argument to /bin/sh -c. For image == octez, this
* is the default.
-* script_command overide the name of the script. We still look
+* script_command override the name of the script. We still look
* in the scripts directory and postpend ".sh"
* with_config bring in the configMap defaults true only on utils.
* with_secret bring in the secrets map including the identities.
2 changes: 1 addition & 1 deletion charts/tezos/values.yaml

@@ -26,7 +26,7 @@ chain_initiator_job:
name: chain-initiator
pod_type: activating

-# For non-public chains the defualt mutez given to an account if the
+# For non-public chains the default mutez given to an account if the
# account is not explicitly set below.
bootstrap_mutez: "4000000000000"
2 changes: 1 addition & 1 deletion docs/Prerequisites.md

@@ -12,7 +12,7 @@

### For deployment on a cloud platform (AWS)

-- we recommmend [pulumi](https://www.pulumi.com/docs/get-started/install/), an infrastructure-as-code platform, for cloud deployments
+- we recommend [pulumi](https://www.pulumi.com/docs/get-started/install/), an infrastructure-as-code platform, for cloud deployments

## Installing prerequisites
2 changes: 1 addition & 1 deletion mkchain/tqchain/_version.py

@@ -294,7 +294,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
# TAG-NUM-gHEX
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if not mo:
-# unparseable. Maybe git-describe is misbehaving?
+# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
return pieces
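For context, the regex in this hunk splits `git describe` output of the form TAG-NUM-gHEX into the closest tag, the number of commits since that tag, and the abbreviated commit hash. A minimal standalone sketch of what it matches (the sample string below is illustrative, not taken from this repo):

```python
import re

git_describe = "v5.0.0-12-g3f9d2ab"  # illustrative git-describe output

mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if mo:
    tag, commits_since_tag, short_hash = mo.groups()
    print(tag, commits_since_tag, short_hash)  # -> v5.0.0 12 3f9d2ab
else:
    # unparsable: e.g. a bare tag (zero commits since) or an error message
    print("unable to parse git-describe output")
```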
4 changes: 2 additions & 2 deletions mkchain/versioneer.py

@@ -691,7 +691,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
-# unparseable. Maybe git-describe is misbehaving?
+# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%%s'"
%% describe_out)
return pieces

@@ -1105,7 +1105,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
# TAG-NUM-gHEX
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if not mo:
-# unparseable. Maybe git-describe is misbehaving?
+# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
return pieces
2 changes: 1 addition & 1 deletion snapshotEngine/snapshot-maker.sh

@@ -177,7 +177,7 @@ SERVICE_ACCOUNT="${SERVICE_ACCOUNT}" yq e -i '.spec.template.spec.serviceAccount

sleep 10

-# Trigger subsequent filesytem inits, snapshots, tarballs, and uploads.
+# Trigger subsequent filesystem inits, snapshots, tarballs, and uploads.
if ! kubectl apply -f mainJob.yaml
then
printf "%s Error creating Zip-and-upload job.\n" "$(date "+%Y-%m-%d %H:%M:%S" "$@")"
6 changes: 3 additions & 3 deletions snapshotEngine/zip-and-upload.sh

@@ -26,7 +26,7 @@ if [ "${HISTORY_MODE}" = archive ]; then
ARCHIVE_TARBALL_FILENAME=tezos-"${NETWORK}"-archive-tarball-"${BLOCK_HEIGHT}".lz4
printf "%s Archive tarball filename is ${ARCHIVE_TARBALL_FILENAME}\n" "$(date "+%Y-%m-%d %H:%M:%S" "$@")"

-# If you upload a file bigger than 50GB, you have to do a mulitpart upload with a part size between 1 and 10000.
+# If you upload a file bigger than 50GB, you have to do a multipart upload with a part size between 1 and 10000.
# Instead of guessing size, you can use expected-size which tells S3 how big the file is and it calculates the size for you.
# However if the file gets bigger than your expected size, the multipart upload fails because it uses a part size outside of the bounds (1-10000)
# This gets the old archive tarball size and then adds 10%. Archive tarballs dont seem to grow more than that.

@@ -192,7 +192,7 @@ if [ "${HISTORY_MODE}" = rolling ]; then
# LZ4 /"${HISTORY_MODE}"-snapshot-cache-volume/var/tezos/node selectively and upload to S3
printf "%s ********************* Rolling Tarball *********************\\n" "$(date "+%Y-%m-%d %H:%M:%S" "$@")"

-# If you upload a file bigger than 50GB, you have to do a mulitpart upload with a part size between 1 and 10000.
+# If you upload a file bigger than 50GB, you have to do a multipart upload with a part size between 1 and 10000.
# Instead of guessing size, you can use expected-size which tells S3 how big the file is and it calculates the size for you.
# However if the file gets bigger than your expected size, the multipart upload fails because it uses a part size outside of the bounds (1-10000)
# This gets the old rolling tarball size and then adds 10%. rolling tarballs dont seem to grow more than that.

@@ -427,7 +427,7 @@ else
fi

# Create snapshot.json
-# List of all snapshot metadata accross all subdomains
+# List of all snapshot metadata across all subdomains
# build site pages
python /getAllSnapshotMetadata.py
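The multipart comments in this file encode a sizing rule worth spelling out: S3 allows at most 10,000 parts per multipart upload, so when an object is streamed and its final size is unknown up front, the uploader needs an expected size to derive a valid part size. A back-of-envelope sketch in Python, with illustrative numbers only (the actual script streams through the AWS CLI):

```python
GiB = 1024**3

old_tarball = 60 * GiB                      # previous tarball size (illustrative)
expected = old_tarball + old_tarball // 10  # add 10% headroom, per the comment above

# With at most 10,000 parts, the derived part size must satisfy
# expected / part_size <= 10000; if the real stream outgrows `expected`,
# the part size chosen up front becomes too small and the upload fails.
min_part_size = -(-expected // 10_000)      # ceiling division
print(f"expected: {expected / GiB:.1f} GiB, "
      f"minimum part size: {min_part_size / 1024**2:.1f} MiB")
```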
12 changes: 6 additions & 6 deletions utils/config-generator.py

@@ -190,7 +190,7 @@ def get_baking_accounts(baker_values):
# Secret and public keys are matches and need be processed together. Neither key
# must be specified, as later code will fill in the details if they are not.
#
-# We create any missing accounts that are refered to by a node at
+# We create any missing accounts that are referred to by a node at
# BAKING_NODES to ensure that all named accounts exist.
def fill_in_missing_accounts():
print("\nFilling in any missing accounts...")

@@ -248,7 +248,7 @@ def verify_this_bakers_account(accounts):
#
# import_keys() creates three files in /var/tezos/client which specify
# the keys for each of the accounts: secret_keys, public_keys, and
-# public_key_hashs.
+# public_key_hashes.
#
# We iterate over fill_in_missing_baker_accounts() which ensures that we
# have a full set of accounts for which to write keys.

@@ -345,7 +345,7 @@ def import_keys(all_accounts):
tezdir = "/var/tezos/client"
secret_keys = []
public_keys = []
-public_key_hashs = []
+public_key_hashes = []

for account_name, account_values in all_accounts.items():
print("\n Importing keys for account: " + account_name)

@@ -391,7 +391,7 @@ def import_keys(all_accounts):

pkh_b58 = key.public_key_hash()
print(f" Appending public key hash: {pkh_b58}")
-public_key_hashs.append({"name": account_name, "value": pkh_b58})
+public_key_hashes.append({"name": account_name, "value": pkh_b58})
account_values["pkh"] = pkh_b58

# XXXrcd: fix this print!

@@ -410,8 +410,8 @@ def import_keys(all_accounts):
json.dump(secret_keys, open(tezdir + "/secret_keys", "w"), indent=4)
print(" Writing " + tezdir + "/public_keys")
json.dump(public_keys, open(tezdir + "/public_keys", "w"), indent=4)
-print(" Writing " + tezdir + "/public_key_hashs")
-json.dump(public_key_hashs, open(tezdir + "/public_key_hashs", "w"), indent=4)
+print(" Writing " + tezdir + "/public_key_hashes")
+json.dump(public_key_hashes, open(tezdir + "/public_key_hashes", "w"), indent=4)


def create_node_identity_json():
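For reference, the three files named in these hunks are plain JSON lists of name/value records, one entry per account, written with `json.dump(..., indent=4)` as the last hunk shows. A minimal sketch of their shape (the account name and key values below are made up):

```python
import json

# Made-up entries; real values come from the configured baking accounts.
secret_keys = [{"name": "baker0", "value": "unencrypted:edsk..."}]
public_keys = [{"name": "baker0", "value": "edpk..."}]
public_key_hashes = [{"name": "baker0", "value": "tz1..."}]

tezdir = "/var/tezos/client"
for filename, records in [
    ("secret_keys", secret_keys),
    ("public_keys", public_keys),
    ("public_key_hashes", public_key_hashes),
]:
    with open(f"{tezdir}/{filename}", "w") as f:
        json.dump(records, f, indent=4)
```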
2 changes: 1 addition & 1 deletion utils/sidecar.py

@@ -30,7 +30,7 @@ def sync_checker():
header = r.json()
if header["level"] == 0:
# when chain has not been activated, bypass age check
-# and return succesfully to mark as ready
+# and return successfully to mark as ready
# otherwise it will never activate (activation uses rpc service)
return "Chain has not been activated yet"
timestamp = r.json()["timestamp"]
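The bypass in this hunk guards against a readiness deadlock: before activation the head block sits at level 0, so any freshness check on its timestamp would fail forever, while activation itself needs the RPC service that this probe gates. A hedged sketch of the overall shape of such a check (the function name and threshold are illustrative, not the repo's actual code):

```python
from datetime import datetime, timezone

def is_ready(header: dict, max_age_seconds: int = 300) -> bool:
    # Pre-activation the chain sits at level 0; report ready so the RPC
    # service comes up, since activation itself goes through that service.
    if header["level"] == 0:
        return True
    # Otherwise require a reasonably fresh head block.
    ts = datetime.fromisoformat(header["timestamp"].replace("Z", "+00:00"))
    age = (datetime.now(timezone.utc) - ts).total_seconds()
    return age < max_age_seconds
```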