Add an RFD spell check step to the lint workflow
In order for the action to pass, various RFDs were updated to fix
misspellings, and the dictionary was expanded.
rosstimothy committed Dec 13, 2024
1 parent 941a993 commit c7a3a80
Showing 16 changed files with 1,109 additions and 217 deletions.
1 change: 1 addition & 0 deletions .github/workflows/dependency-review.yaml
@@ -52,4 +52,5 @@ jobs:
pkg:cargo/zerofrom-derive,
pkg:cargo/zerovec,
pkg:cargo/zerovec-derive,
pkg:npm/cspell/dict-en-common-misspellings,
pkg:npm/prettier
29 changes: 29 additions & 0 deletions .github/workflows/lint.yaml
@@ -16,6 +16,7 @@ jobs:
has_go: ${{ steps.changes.outputs.has_go }}
has_rust: ${{ steps.changes.outputs.has_rust }}
has_proto: ${{ steps.changes.outputs.has_proto }}
has_rfd: ${{ steps.changes.outputs.has_rfd }}
steps:
- name: Checkout
if: ${{ github.event_name == 'merge_group' }}
@@ -26,6 +27,10 @@
base: ${{ github.event.pull_request.base.ref || github.event.merge_group.base_ref }}
ref: ${{ github.event.pull_request.head.ref || github.event.merge_group.head_ref }}
filters: |
has_rfd:
- '.github/workflows/lint.yaml'
- 'rfd/**.md'
- 'rfd/cspell.json'
has_go:
- '.github/workflows/lint.yaml'
- '**.go'
@@ -231,3 +236,27 @@ jobs:
# We have to add the current directory as a safe directory or else git commands will not work as expected.
# The protoc-gen-terraform version must match the version in integrations/terraform/Makefile
run: git config --global --add safe.directory $(realpath .) && go install github.com/gravitational/protoc-gen-terraform@c91cc3ef4d7d0046c36cb96b1cd337e466c61225 && make terraform-resources-up-to-date

lint-rfd:
name: Lint (RFD)
needs: changes
if: ${{ !startsWith(github.head_ref, 'dependabot/') && needs.changes.outputs.has_rfd == 'true' }}
runs-on: ubuntu-22.04

permissions:
contents: read

container:
image: ghcr.io/gravitational/teleport-buildbox:teleport17

steps:
- name: Checkout
uses: actions/checkout@v4

- name: Install JS dependencies
run: pnpm install --frozen-lockfile

- name: Check spelling
run: |
shopt -s nullglob
pnpm cspell -c ./rfd/cspell.json rfd/**.md
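The `rfd/cspell.json` file this step points at is not rendered in this commit view. For orientation only, a minimal config of the shape cspell expects might look like the following; the word list is invented here, not the repository's actual dictionary:

```json
{
  "version": "0.2",
  "language": "en",
  "words": ["Teleport", "tbot", "protobuf"],
  "import": ["@cspell/dict-en-common-misspellings/cspell-ext.json"]
}
```

The `import` entry is how cspell configs typically pull in the common-misspellings dictionary added to the dependency-review allowlist above.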
1 change: 1 addition & 0 deletions package.json
@@ -56,6 +56,7 @@
"@types/react-router-dom": "^5.1.1",
"@types/react-transition-group": "^4.4.11",
"@types/wicg-file-system-access": "^2023.10.5",
"cspell": "^8.16.1",
"jest": "^29.7.0",
"jsdom-testing-mocks": "^1.13.1",
"msw": "^2.6.6",
715 changes: 715 additions & 0 deletions pnpm-lock.yaml

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion rfd/0151-database-permission-management.md
@@ -247,7 +247,7 @@ permission calculation; the desired per-object permissions are subsequently
written back to the database.

Imported objects are also stored in the backend, where TAG can access them. This
enables the TAG to visualise the permissions.
enables the TAG to visualize the permissions.

Additionally, the imports will be done on a predetermined schedule (e.g. every
10 minutes), and stored in the backend. If the database engine supports it, the
2 changes: 1 addition & 1 deletion rfd/0152-automatic-database-users-mongodb.md
@@ -150,7 +150,7 @@ For example, in the above sample role, even though the user is assigned role
`readAnyDatabase@admin`, Teleport will block access to databases not in the
`db_names`.

Of course, the user has the option to use `*` for `db_names` to soly rely on
Of course, the user has the option to use `*` for `db_names` to solely rely on
MongoDB's role management to restrict access.

### User accounts
2 changes: 1 addition & 1 deletion rfd/0153-resource-guidelines.md
@@ -569,7 +569,7 @@ func (p *fooParser) parse(event backend.Event) (types.Resource, error) {
services.WithExpires(event.Item.Expires),
services.WithRevision(event.Item.Revision))
if err != nil {
return nil, trace.Wrap(err, "unmarshalling resource from event")
return nil, trace.Wrap(err, "unmarshaling resource from event")
}
return types.Resource153ToLegacy(foo), nil
default:
6 changes: 3 additions & 3 deletions rfd/0162-machine-id-token-join-method-bot-instance.md
@@ -42,7 +42,7 @@ hosts on-prem is more challenging.
The following burdens currently exist:

- When using the `token` join method, a Bot must be created for each Bot instance. This
means that the privileges of many distinct Bots need to be synchronised where
means that the privileges of many distinct Bots need to be synchronized where
those hosts are performing the same function.
- When using the `token` join method, a token can only be used once. This means
creating hundreds of join tokens and managing securely distributing these to
@@ -63,7 +63,7 @@ scale.
Currently, the `token` join method introduces a generation counter as a label
on the Bot user. This counter is contained within the Bot certificate and on
each renewal, this counter is incremented. When the counter within the certificate
de-synchronises with the counter on the user, the Bot is locked out as a security
de-synchronizes with the counter on the user, the Bot is locked out as a security
measure.

The fact that this counter is stored within a label on the Bot user creates a
@@ -403,7 +403,7 @@ Pros:
- Avoids introducing a new RPC and ensures that all data within the Heartbeat
comes from the same instance in time.
- Allows self-reported information to be used as part of renewal decision.
This is not a strong defence as it is self-reported and cannot be trusted.
This is not a strong defense as it is self-reported and cannot be trusted.
- Avoids a state where the BotInstance is incomplete immediately after joining
and before it has called SubmitHeartbeat.

2 changes: 1 addition & 1 deletion rfd/0165-ssh-connection-handover.md
@@ -32,7 +32,7 @@ Each new resumption-enabled connection (with a given _resumption token_) will be

The `handover` directory will be created with permissions such that only the owner of the directory (and the superuser) will be allowed to connect to sockets in it. This is important, since allowing arbitrary connections to hand-over sockets will result in potentially untrusted client IP propagation and/or unwanted disruption to existing connections.

The name of the socket should be a hash of the resumption token, to harden this mechanism against potential local information leaks. It's sufficient to use a regular hash function (rather than something keyed by a secret like a HMAC) because the resumption token is a 128-bit value picked uniformly, and thus unfeasible to bruteforce directly - we'll use SHA-256 truncated to 128 bits, because UNIX domain sockets have a path length limitation of 108 bytes on Linux and 104 on macOS and other BSDs, and dedicating well over a third of that just for the socket name runs the risk of exceeding the path length limit if the path to the Teleport data dir is long enough. 128 bits still provide more than enough collision resistance, and result in an unpadded base64 encoding length of 22 bytes, so 32 bytes (including `/handover/`) over the length of the path to the Teleport data directory.
The name of the socket should be a hash of the resumption token, to harden this mechanism against potential local information leaks. It's sufficient to use a regular hash function (rather than something keyed by a secret like a HMAC) because the resumption token is a 128-bit value picked uniformly, and thus unfeasible to brute force directly - we'll use SHA-256 truncated to 128 bits, because UNIX domain sockets have a path length limitation of 108 bytes on Linux and 104 on macOS and other BSDs, and dedicating well over a third of that just for the socket name runs the risk of exceeding the path length limit if the path to the Teleport data dir is long enough. 128 bits still provide more than enough collision resistance, and result in an unpadded base64 encoding length of 22 bytes, so 32 bytes (including `/handover/`) over the length of the path to the Teleport data directory.
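As a concrete illustration of the derivation described above — a sketch, not Teleport's actual code; the function name and the choice of the URL-safe base64 alphabet are assumptions (standard base64 contains `/`, which cannot appear in a filename):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"path/filepath"
)

// handoverSocketPath derives the hand-over socket path from a 128-bit
// resumption token: SHA-256, truncated to 128 bits, base64-encoded
// without padding, which yields a 22-character name.
func handoverSocketPath(dataDir string, resumptionToken [16]byte) string {
	sum := sha256.Sum256(resumptionToken[:])
	// Truncate the digest to 128 bits before encoding.
	name := base64.RawURLEncoding.EncodeToString(sum[:16])
	return filepath.Join(dataDir, "handover", name)
}

func main() {
	var token [16]byte // in practice, 128 uniformly random bits
	fmt.Println(handoverSocketPath("/var/lib/teleport", token))
}
```

With a data directory of `/var/lib/teleport`, the result stays 32 bytes longer than the data directory path, matching the budget computed in the paragraph above.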

### Hand-over protocol

2 changes: 1 addition & 1 deletion rfd/0168-database-ca-split.md
@@ -212,7 +212,7 @@ A rotation of either CA requires customers reconfigure their self-hosted
databases to maintain access, so after rotating either DatabaseCA or
DBUserCA the security vulnerability will be resolved.

If we did not rotate both CAs when they are equal to eachother, then
If we did not rotate both CAs when they are equal to each other, then
rotating just the DatabaseCA would be pointless and pose a security risk:

Imagine you are a customer and have determined that a cert/key signed
2 changes: 1 addition & 1 deletion rfd/0171-database-session-playback.md
@@ -100,7 +100,7 @@ message Status {
// Error includes system error message for the failed attempt
string Error = 2 [(gogoproto.jsontag) = "error,omitempty"];
// UserMessage is a user-friendly message for successfull or unsuccessfull auth attempt
// UserMessage is a user-friendly message for successful or unsuccessful auth attempt
string UserMessage = 3 [(gogoproto.jsontag) = "message,omitempty"];
}
6 changes: 3 additions & 3 deletions rfd/0172-tbot-ssh-muxer.md
@@ -31,7 +31,7 @@ This has a few drawbacks:
individually, contributing to latency in establishing the SSH tunnel. Some of
these tasks could be shared, or their results cached.
- Monitoring a large number of short-lived `tbot` processes is difficult.
- There is no easy way to implement global ratelimits and backoffs across all
- There is no easy way to implement global rate limits and backoffs across all
the short-lived `tbot` processes. This makes gracefully handling upstream
outages problematic.

@@ -153,7 +153,7 @@ once all downstream connections using it have been closed.
This could be further improved by balancing incoming downstream connections
against a pool of upstream connections, because the upstream connections would
live longer and we'd avoid a scenario where an upstream connection remains open
but is only being utilised by one or two remaining downstream connections.
but is only being utilized by one or two remaining downstream connections.

### UX

@@ -180,4 +180,4 @@ Fortunately, existing Linux permissions mechanisms can be used to restrict
access to the UDS and this is similar to how we handle the credentials
output by `tbot` today.

We should ensure we document the best practices to secure the UDS.
We should ensure we document the best practices to secure the UDS.
4 changes: 2 additions & 2 deletions rfd/0175-spiffe-federation.md
@@ -16,7 +16,7 @@ Implement support for SPIFFE Federation within Teleport Workload Identity,
including support for a SPIFFE based join method based on the federation
mechanism.

SPIFFE Federation is a standardised mechanism for exchanging trust bundles
SPIFFE Federation is a standardized mechanism for exchanging trust bundles
between trust domains. This enables workloads in one trust domain to validate
the identity of workloads in another trust domain. See
https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE_Federation.md
@@ -223,7 +223,7 @@ has the following problems:
- Building the ability to validate X.509 certificates issued by arbitrary
CAs into our TLS servers is complex and error-prone. We risk a compromised
CA being able to issue user certificates that would be trusted by Teleport.
- Traversing TLS-terminating loadbalancers is challenging.
- Traversing TLS-terminating load balancers is challenging.
- Incompatibility with renewal mechanisms that rely on an X.509 client
certificate being presented (e.g BotInstance renewals) as only one client
certificate can be presented.
2 changes: 1 addition & 1 deletion rfd/0184-agent-auto-updates.md
@@ -1300,7 +1300,7 @@ An alert will eventually fire, warning the user about the stuck update.
> [!NOTE]
> In the first version, canary selection will happen randomly. As most instances are running the ssh_service and not
> the other ones, we are less likely to catch an issue in a less common service.
> An optimisation would be to try to pick canaries maximizing the service coverage.
> An optimization would be to try to pick canaries maximizing the service coverage.
> This would make the test more robust and provide better availability guarantees.
#### Updating a group
4 changes: 2 additions & 2 deletions rfd/0191-workload-id-config-ux.md
@@ -909,7 +909,7 @@ This persistence must remain over:

To achieve this persistence, the JoinAttributes protobuf message will be
encoded using `protojson.Marshal` and stored within the X509 certificate using
a new extension - `1.3.9999.2.21`. When unmarshalling, unknown fields should be
a new extension - `1.3.9999.2.21`. When unmarshaling, unknown fields should be
ignored to ensure forwards compatibility.
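A sketch of what storing such a blob in an extension could look like (assumed names, not Teleport's implementation): a real implementation would marshal the JoinAttributes protobuf with `protojson.Marshal`, and on the read side `protojson.UnmarshalOptions{DiscardUnknown: true}` provides the "ignore unknown fields" behavior called for above; plain `encoding/json` stands in for protojson here.

```go
package main

import (
	"crypto/x509/pkix"
	"encoding/asn1"
	"encoding/json"
	"fmt"
)

// oidJoinAttributes is the extension OID named by the RFD.
var oidJoinAttributes = asn1.ObjectIdentifier{1, 3, 9999, 2, 21}

// joinAttributesExtension packs a JSON-encoded attribute blob into an
// X.509 extension under that OID.
func joinAttributesExtension(attrs map[string]any) (pkix.Extension, error) {
	raw, err := json.Marshal(attrs)
	if err != nil {
		return pkix.Extension{}, err
	}
	return pkix.Extension{Id: oidJoinAttributes, Value: raw}, nil
}

func main() {
	ext, err := joinAttributesExtension(map[string]any{"join_method": "github"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s\n", ext.Id, ext.Value)
}
```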

The GenerateUserCert RPC will be modified to propagate the JoinAttributes,
@@ -1302,7 +1302,7 @@ perform benchmarks and performance improvements with growing magnitudes of
scale in mind.

The flexible templating mechanism is intended to limit the number of Roles,
Bots and WorkloadIdentitys used for large-scale deployment. For example,
Bots and WorkloadIdentities used for large-scale deployment. For example,
thousands of CI workflows can share the same Bot and WorkloadIdentity.
Where possible, we can mitigate the need for growing numbers of WorkloadIdentity
resources by adding more advanced templating and authorization rule