Feat: mikroorm conversion (#287)
* feat: mikroorm ip
* chore: fixup aws env vars and docs
* chore: ensure parity between front-end and back-end dto
* docs: clarify dto and entity usage
mbystedt authored Dec 19, 2024
1 parent 7b7f9ba commit a01e310
Showing 354 changed files with 11,038 additions and 8,151 deletions.
1 change: 1 addition & 0 deletions .npmrc
@@ -0,0 +1 @@
public-hoist-pattern[]=@mikro-orm/*
1 change: 1 addition & 0 deletions docs/_sidebar.md
@@ -19,6 +19,7 @@

* Development
** [Development](/development.md)
** [Data Transfer Objects](/dev_dto_entities.md)
** [Document Site](/dev_docsite.md)
** [MongoDB](/dev_mongodb.md)
** [Vault](/dev_vault.md)
46 changes: 46 additions & 0 deletions docs/dev_dto_entities.md
@@ -0,0 +1,46 @@
# Data Transfer Objects

Data Transfer Objects (DTOs) encapsulate data exchanged with the back-end APIs. Files ending with `dto.ts` must remain identical between the back-end and front-end.

## Location

The path `./ui/service` contains a copy of the back-end project, stripped of all files except the DTOs.

## References

DTOs may only import from the following:

- Other DTOs, using relative paths.
- `'class-transformer'`.
- `'class-validator'`.

This restriction ensures the DTO code can be easily shared with the front-end and other TypeScript code utilizing the REST APIs.

## Relation to Back-end Entities

The database layer uses **entities**, which are similar in purpose to DTOs but handle data transfer to and from the database.

- **Back-end `service` code** is allowed to manipulate entities as they serve as abstractions.
- **Back-end `controller` code** must not manipulate entities. Instead, the REST API should only send or receive DTOs.

Some DTOs closely mirror the structure of database entities. However, the back-end rarely (if ever) transforms entities into DTOs when returning data via controllers and the REST API. Instead, entities are configured to use MikroORM's built-in serialization, which transforms them into plain objects (POJOs) that match the DTO structure.

Sharing entities with the front-end would introduce several issues:
- **Incorrect typing:** Fields stripped in the front-end would need to be optional in the back-end.
- **Unnecessary exposure:** Fields like `_id`, a binary `ObjectId`, have no utility in the front-end and should not appear in the API.

Additionally, sensitive fields are excluded. Since there is a significant difference between the data schema used for storage and the schema consumed by the front-end, DTOs are created to:
1. Validate incoming data.
2. Transform JSON fields (e.g., `boolean`, `string`, `number`) into higher-order objects like `Date`, in a consistent manner.

The primary drawback is the need to copy DTO data to entities when creating or updating entities.
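The copy-and-transform step described above can be sketched as follows. The class and field names here are illustrative assumptions, not the project's actual DTOs or entities:

```typescript
// Hypothetical sketch: copying validated DTO data onto a database entity.
// JSON carries dates as ISO strings; the entity holds a higher-order Date.

class ServiceDto {
  name!: string;
  createdAt!: string; // plain JSON type on the wire
}

class ServiceEntity {
  name!: string;
  createdAt!: Date; // transformed on copy
}

// Copy DTO fields onto a new entity, transforming plain JSON values
// into higher-order objects in a consistent manner.
function toEntity(dto: ServiceDto): ServiceEntity {
  const entity = new ServiceEntity();
  entity.name = dto.name;
  entity.createdAt = new Date(dto.createdAt);
  return entity;
}
```

In a real back-end this copy typically lives in the `service` layer, since controllers must not manipulate entities directly.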

## Embedded versus Entities

The database layer uses **Embeddable** objects to represent reusable components within database entities. These objects:
- Can be shared among multiple entities.
- Are stored as objects in the database.

## Embedded DTOs?

DTOs are often composed for specific use cases, such as pagination responses. Since all DTOs are embeddable, class names do not explicitly distinguish between "root" DTOs and embedded ones.
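Composition for a use case like pagination can be sketched as below. The names (`ServiceItemDto`, `PaginatedDto`) are hypothetical, chosen only to illustrate that any DTO can serve either as a root response or embedded inside another DTO:

```typescript
// Illustrative sketch: an item DTO embedded inside a pagination DTO.

class ServiceItemDto {
  name!: string;
}

class PaginatedDto<T> {
  data!: T[];               // embedded DTOs
  meta!: { total: number }; // pagination metadata
}

// Compose a paginated response from a page of item DTOs.
function paginate<T>(items: T[], total: number): PaginatedDto<T> {
  const page = new PaginatedDto<T>();
  page.data = items;
  page.meta = { total };
  return page;
}
```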
4 changes: 2 additions & 2 deletions docs/dev_env_vars.md
@@ -8,6 +8,7 @@ A suggested deployment strategy is to use [envconsul](https://github.com/hashico

| Env Var | Default | Secret | Description |
| --- | --- | --- | --- |
| APP_ENVIRONMENT | | | The name of the environment this instance is running in. A local environment should be blank. Required to push audit to AWS. |
| BROKER_URL | | | The external URL that this instance is running on. Used to create redirect urls. |
| HOSTNAME | | | The hostname of the server this instance is running on. Used in logs. The instance with a hostname ending in `-0` is the primary node. Issues will occur if there is no primary node or if there are multiple primary nodes. |

@@ -75,8 +76,7 @@ AWS configuration used to push the audit log to a Kinesis endpoint. Consuming t
| --- | --- | --- | --- |
| AWS_ACCESS_KEY_ID | | Yes | |
| AWS_SECRET_ACCESS_KEY | | Yes | |
| AWS_SESSION_TOKEN | | Yes | |
| AWS_KINESIS_ROLE_ARN | | Yes | |
| AWS_ROLE_ARN | | Yes | |
| AWS_DEFAULT_REGION | ca-central-1 | | |

## Log redirection
6 changes: 6 additions & 0 deletions docs/development.md
@@ -132,6 +132,8 @@ If Kinesis and AWS access is not set up, then some APIs will return a 503 (service

### Local MongoDB Disconnects

Note: Recent versions of Podman seem to have resolved this.

The connection to MongoDB may time out if your machine goes to sleep. The easiest way to recover is to stop the back-end, restart the containers and rerun the Vault setup. The provided restart script will do the container and setup steps for you.

```bash
@@ -204,6 +206,10 @@ To set up GitHub App syncing locally, set the values GITHUB_SYNC_CLIENT_ID and

Broker can be setup to allow users to alias their identity in other identity providers to their account.

## Setup Collection Sync from OpenSearch

Broker can synchronize collections with unique names from an OpenSearch index. See: [OpenSearch Integration](./operations_opensearch.md)

### GitHub Alias

GitHub user alias requires a GitHub OAuth app. It is recommended that the GitHub OAuth app be registered under a GitHub organization in production. A GitHub OAuth app registered under a personal account can be used for testing.
30 changes: 29 additions & 1 deletion docs/operations_opensearch.md
@@ -10,10 +10,38 @@ In order to integrate with OpenSearch, Broker requires the environment variable

## Collection Sync

The broker can synchronize collections with unique names from an index. It searches for unique names from the past 12 hours and iterates over them. For each value, a document is sampled, and an entry for the collection is upserted into the broker.
Broker can synchronize collections with unique names from an OpenSearch index. It searches for unique names from the past 12 hours and iterates over them. For each value, a document is sampled, and an entry for the collection is upserted into the broker.

This is configured by adding a `sync` field to the `collectionConfig` document for the collection you want to synchronize. You must specify the index to query, the unique name value in the source index, and a mapping for the fields from the document to the collection.

A partial excerpt of the server collection sync configuration:
```json
{
  "sync": {
    "index": "example-d",
    "unique": "host.hostname",
    "map": {
      "host.hostname": {
        "type": "first",
        "dest": "name"
      },
      "host.architecture": {
        "type": "first",
        "dest": "architecture"
      },
      "host.name": {
        "type": "pick",
        "endsWith": [
          "bcgov",
          "dmz"
        ],
        "dest": "hostName"
      }
    }
  }
}
```
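The `map` rules above could be applied to a sampled document roughly as sketched below. This helper is an assumption for illustration, not Broker's actual implementation; the rule shapes (`first`, `pick` with `endsWith`) follow the example configuration:

```typescript
// Sketch of applying one sync map rule to a sampled field value.
type MapRule =
  | { type: 'first'; dest: string }
  | { type: 'pick'; endsWith: string[]; dest: string };

// OpenSearch fields may hold a single value or an array of values.
function applyRule(
  rule: MapRule,
  value: string | string[],
): [string, string | undefined] {
  const values = Array.isArray(value) ? value : [value];
  if (rule.type === 'first') {
    // 'first': take the first value as-is
    return [rule.dest, values[0]];
  }
  // 'pick': keep the first value ending with one of the suffixes
  const picked = values.find((v) => rule.endsWith.some((s) => v.endsWith(s)));
  return [rule.dest, picked];
}

applyRule({ type: 'first', dest: 'name' }, 'my-host');
// → ['name', 'my-host']
applyRule(
  { type: 'pick', endsWith: ['bcgov', 'dmz'], dest: 'hostName' },
  ['my-host.dmz', 'my-host.local'],
);
// → ['hostName', 'my-host.dmz']
```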

## Index Patterns

Environment variables and other configurations that store index names can accept a comma-separated string listing the indices to query. If an index ends with `-d`, appropriate indices for the time period (ending in `-yyyy-mm-dd`) will be generated.
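The expansion described above can be sketched as follows. The function name and the way the set of days is supplied are assumptions for illustration:

```typescript
// Hypothetical sketch: expand a comma-separated index list, generating
// dated indices (-yyyy-mm-dd) for any index ending in -d.
function expandIndices(pattern: string, days: Date[]): string[] {
  return pattern.split(',').flatMap((index) => {
    if (!index.endsWith('-d')) return [index]; // static index: pass through
    const base = index.slice(0, -2); // strip the trailing '-d'
    return days.map((d) => `${base}-${d.toISOString().slice(0, 10)}`);
  });
}

expandIndices('audit-d,static-index', [new Date('2024-12-19')]);
// → ['audit-2024-12-19', 'static-index']
```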
