From a2dd71226c96c07071728491fcb5dc7233fbb125 Mon Sep 17 00:00:00 2001 From: Yurii Shynbuiev Date: Mon, 20 May 2024 16:16:09 +0800 Subject: [PATCH 1/3] docs: rename ADRs in the scope of rebranding to the Identus ATL-7959 [skip ci] Signed-off-by: Yurii Shynbuiev --- ...rary-as-a-dsl-for-openapi-specification.md | 4 +- ...vate-keys-of-issuers-inside-prism-agent.md | 10 +- ...ll-library-for-sql-statement-generation.md | 8 +- .../20230405-did-linked-resources.md | 56 ++++---- ...230509-message-routing-for-multi-tenant.md | 8 +- .../20230515-mediator-message-storage.md | 2 +- ...-deterministic-key-generation-algorithm.md | 20 +-- ...0230518-data-isolation-for-multitenancy.md | 14 +- ...-facilitate-multitenancy-in-cloud-agent.md | 4 +- ...d-secure-cryptography-management-module.md | 8 +- ...4-performance-framework-for-atala-prism.md | 4 +- ...vocation-status-list-expansion-strategy.md | 2 +- ...103-use-jwt-claims-for-agent-admin-auth.md | 4 +- ...se-zio-failures-and-defects-effectively.md | 6 +- docs/decisions/index.md | 2 +- docs/general/interdependencies.md | 82 ------------ docs/general/key-derivation-benchmark.md | 4 +- docs/general/repositories_relationships.md | 125 ------------------ 18 files changed, 78 insertions(+), 285 deletions(-) delete mode 100644 docs/general/interdependencies.md delete mode 100644 docs/general/repositories_relationships.md diff --git a/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md b/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md index e617475982..074c4efa27 100644 --- a/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md +++ b/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md @@ -8,14 +8,14 @@ Related ADR/AIP: [Introduce REST HTTP for existing Node services](https://input-output.atlassian.net/wiki/spaces/AV2/pages/3454500948/AIP+-+001) ## Context and Problem Statement -PRISM Platform v2 will contain the REST 
API. The decision was made by team consensus during the first AOH meeting to follow "OpenAPI specification first" approach and generate stubs, server side and client side code based on OAS. +The Identus Platform will contain the REST API. The decision was made by team consensus during the first AOH meeting to follow the "OpenAPI specification first" approach and generate stubs and server-side and client-side code based on OAS. Following this strategy we currently have 4-5 OAS files (Castor, Pollux, Mercury, Configuration). The following tool was selected for code generation: [OpenAPI Tools](https://github.com/OpenAPITools/openapi-generator) Instead of using the yaml file as OpenAPI specification and openapi-generator for server and client stub generation - this ADR proposes to use [Tapir](https://tapir.softwaremill.com/en/latest/index.html) Scala library as DSL for OpenAPI specification, `interpret` the endpoint defitions as Scala server and client stub, generate the yaml file, and use openapi-generator for client stubs. -Technology stack that is going to be used in PRISM v2 backend: Scala 3 + ZIO ecosystem +The technology stack that is going to be used in the Identus platform backend: Scala 3 + the ZIO ecosystem Akka framework after version 2.6.x cannot be used because [Lightbend changed the license type to BSL 1.1](https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka).
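The "code as the single source of truth" idea behind this ADR (typed endpoint definitions from which the OpenAPI document is derived, as Tapir does in Scala) can be illustrated with a library-free Python toy. This is an editor's sketch, not Tapir's API; the endpoint path and titles are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    """A minimal endpoint description playing the role of a Tapir endpoint definition."""
    method: str
    path: str
    summary: str

def to_openapi(title: str, endpoints: list[Endpoint]) -> dict:
    """Derive a minimal OpenAPI 3 document from the endpoint definitions."""
    paths: dict = {}
    for e in endpoints:
        paths.setdefault(e.path, {})[e.method.lower()] = {
            "summary": e.summary,
            "responses": {"200": {"description": "OK"}},
        }
    return {
        "openapi": "3.0.3",
        "info": {"title": title, "version": "1.0"},
        "paths": paths,
    }

# Hypothetical endpoint; in the real stack the definitions would also be
# "interpreted" into server routes and client stubs from the same values.
schema_get = Endpoint("GET", "/schema-registry/schemas/{guid}", "Fetch a credential schema")
spec = to_openapi("Cloud Agent API", [schema_get])
```

The point of the sketch: the YAML/JSON specification becomes an output of the typed definitions rather than a hand-maintained input, which is what the ADR argues for.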
diff --git a/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md b/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md index 9ef15b8b15..cd2aeecd06 100644 --- a/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md +++ b/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md @@ -1,4 +1,4 @@ -# Store private keys of Issuers inside prism-agent +# Store private keys of Issuers inside the cloud-agent - Status: accepted - Deciders: Benjamin Voiturier, Pat Losoponkul, Miloš Džepina, Shailesh Patil, Shota Jolbordi, Bart Suichies, Ezequiel Postan, Yurii Shynbuiev, David Poltorak @@ -6,14 +6,14 @@ ## Context and Problem Statement -While each holder has a wallet application on the phone (edge agent) to store private keys, contacts, and credentials, PRISM 2.0 will provide a custodial solution to Issuers and Verifiers. Thus they won't have their wallets or store/manage keys. There needs to be storage for the private keys of issuers and Verifiers on the PRISM side. +While each holder has a wallet application on the phone (edge agent) to store private keys, contacts, and credentials, Identus Cloud Agent will provide a custodial solution to Issuers and Verifiers. Thus they won't have their wallets or store/manage keys. There needs to be storage for the private keys of Issuers and Verifiers on the Cloud Agent side. 
## Considered Options -- Having issuers store and manage their own keys on the edge wallet (prism 1.4 approach) -- Storing keys in a dedicated wallet application that is connected to prism-agent (cloud agent) -- Having prism-agent store and manage keys directly +- Having issuers store and manage their own keys on the edge wallet (Prism 1.4 approach) +- Storing keys in a dedicated wallet application that is connected to the cloud agent +- Having the cloud agent store and manage keys directly ## Decision Outcome diff --git a/docs/decisions/20230118-quill-library-for-sql-statement-generation.md b/docs/decisions/20230118-quill-library-for-sql-statement-generation.md index 7f0dd12cbf..c05dec8157 100644 --- a/docs/decisions/20230118-quill-library-for-sql-statement-generation.md +++ b/docs/decisions/20230118-quill-library-for-sql-statement-generation.md @@ -7,9 +7,9 @@ ## Context and Problem Statement -PostgreSQL is essential to the Atala PRISM technology stack, where most entities are stored. +PostgreSQL is essential to the Identus platform technology stack, where most entities are stored. -Backend services: PRISM Agent, PRISM Mediator, and PRISM Node use PostgreSQL. +Backend services: Identus Cloud Agent, Identus Mediator, and PRISM Node use PostgreSQL. [Doobie](https://tpolecat.github.io/doobie/index.html) library is currently used in Scala code to communicate with Postgresql.
Quotes from the website @@ -175,5 +175,5 @@ val result: Future[Seq[Person]] = db.run(q.result) - [Database access libraries in Scala](https://medium.com/@takezoe/database-access-libraries-in-scala-7aa7590aa3db) - [Typechecking SQL queries with doobie](https://godatadriven.com/blog/typechecking-sql-queries-with-doobie/) - [Typechecking SQL in Slick and doobie](https://underscore.io/blog/posts/2015/05/28/typechecking-sql.html) -- [Doobie example in the Pollux library](https://github.com/input-output-hk/atala-prism-building-blocks/blob/pollux-v0.17.0/pollux/lib/sql-doobie/src/main/scala/io/iohk/atala/pollux/sql/repository/JdbcCredentialRepository.scala) -- [Quill example in the Pollux library](https://github.com/input-output-hk/atala-prism-building-blocks/blob/pollux-v0.17.0/pollux/lib/sql-doobie/src/main/scala/io/iohk/atala/pollux/sql/model/VerifiableCredentialSchema.scala) +- [Doobie example in the Pollux library](https://github.com/hyperledger/identus-cloud-agent/blob/pollux-v0.17.0/pollux/lib/sql-doobie/src/main/scala/io/iohk/atala/pollux/sql/repository/JdbcCredentialRepository.scala) +- [Quill example in the Pollux library](https://github.com/hyperledger/identus-cloud-agent/blob/pollux-v0.17.0/pollux/lib/sql-doobie/src/main/scala/io/iohk/atala/pollux/sql/model/VerifiableCredentialSchema.scala) diff --git a/docs/decisions/20230405-did-linked-resources.md b/docs/decisions/20230405-did-linked-resources.md index 0aabba77b5..10a8175080 100644 --- a/docs/decisions/20230405-did-linked-resources.md +++ b/docs/decisions/20230405-did-linked-resources.md @@ -10,12 +10,12 @@ [Research Spike - Schema and Verifiable Presentation Registry](https://input-output.atlassian.net/browse/ATL-3186) - Provide a clear and concise analysis of the various schema registry implementation and the associated benefits and downfalls of each approach. -- Provide a concrete proposal for what we would like to implement for PRISM. 
-- Provide a generic way of storing and linking the resources for the DID in the PRISM platform. +- Provide a concrete proposal for what we would like to implement for the Identus platform. +- Provide a generic way of storing and linking the resources for the DID in the Identus platform. ## Context and Problem Statement -Atala Prism platform must be able to store and distribute the various resources such as credential schemas, logos, revocation status lists, and documents (aka any text, JSON, images, etc). But in the scope of the current ADR the following resource types are discussed: +The Identus platform must be able to store and distribute various resources such as credential schemas, logos, revocation status lists, and documents (i.e. any text, JSON, images, etc). But in the scope of the current ADR, the following resource types are discussed: - credential schema (JSON and AnonCreds) - credential definition (AnonCreds) @@ -395,7 +395,7 @@ As the solution is based on the latest ToIP specification, it derives all positi - the convention for references and the logic must be carefully reviewed: - `schemaId` in this solution is `{didRef}/resources/{cardano_transaction_id}`, so it doesn't refer to the `id` but to the Tx where everything else is stored (it's an interesting idea for a stateless design) - resource metadata is built according to the ToIP specification but for AnonCreds entities only: credential schema and credential definition. -- technology stack: it doesn't fit to current Atala PRISM platform, but can be used for inspiration. +- technology stack: it doesn't fit the current platform, but can be used for inspiration. ### Hyperledger AnonCreds @@ -483,24 +483,24 @@ Are similar to the Hyperledger AnonCreds solution The main benefit of the Trinsic approach to storing resources is a good abstraction layer, documentation, REST API and a variety of supported programming languages in SDKs for dealing with underlying resources.
-### Atala PRISM solution #1 (W3C with dynamic resource resolution) +### Solution #1 (W3C with dynamic resource resolution) -Atala PRISM solution for storing the resources linked to the DID depends on two decisions that are described in the Context and Problem Statement: +The solution for storing the resources linked to the DID depends on two decisions that are described in the Context and Problem Statement: - where the resource is stored - how the resource is discovered and fetched -Taking into account the advantages and disadvantages of the existing solutions the decision about the solution for the Atala PRISM platform might be the following: +Taking into account the advantages and disadvantages of the existing solutions the decision about the solution for the Identus platform might be the following: -the resource is linked to the DID by convention specified in the W3C specification, so specifying the resource in the DID URL and defining the service endpoint that exposes the resource allows to discover and fetch the resource using the Universal Resolver -- as an option, the same resource can be discovered and fetched by the PRISM platform backend and SDK without loading the Universal resolver +- as an option, the same resource can be discovered and fetched by the Identus platform backend and SDK without loading the Universal resolver - the resource integrity must be guaranteed by one of the following options: - by signing the payload with one of the DID's keys or - by publishing the resource metadata that contains the information about the resource (id, type, name, media type, hash) on-chain or - for the resource that is less than the blockchain limitation (up to 64KB) by publishing the resource together with the hash, and/or signature - the resource can be stored in the cloud storage - PostgreSQL database - for indexing and lookup API -As the Atala PRISM platform can leverage the Cardano blockchain and there is a strong requirement for longevity and security - 
the resource together with the signature and/or hash must be stored in the Cardano blockchain. +As the Identus platform can leverage the Cardano blockchain and there is a strong requirement for longevity and security - the resource together with the signature and/or hash must be stored in the Cardano blockchain. An example of this solution will be the following (concerning the current infrastructure and services): @@ -582,9 +582,9 @@ For example, it might look like the following JSON object: ... and published on the Cardano blockchain as a payload of the AtalaOperation object, so can be retrieved from the blockchain and added to the indexed database for resolution by the REST API -Given there is an Agent or CredentialSchema service that exposes the REST API for fetching the credential schema by ID (in the current implementation it corresponds to the PrismAgent `/schema-registry/credential-schema/{uuid}`, but later might be changed to `/credential-schema/{didRef}/{id}?version={version}` ) +Given there is an Agent or CredentialSchema service that exposes the REST API for fetching the credential schema by ID (in the current implementation it corresponds to the Cloud Agent `/schema-registry/credential-schema/{uuid}`, but later might be changed to `/credential-schema/{didRef}/{id}?version={version}` ) -So, the services of the PRISM platform and SDK can resolve the given schema by URL and use the convenient lookup API with filtering and pagination to manage the credential schema in the Web application. +So, the services of the Identus platform and SDKs can resolve the given schema by URL and use the convenient lookup API with filtering and pagination to manage the credential schema in the Web application. To define the `schemaId` in the message of Issue Credential and Present Proof protocols the following DID URL can be used: @@ -638,7 +638,7 @@ Storing resources larger than 64KB is out of the scope of this ADR. 
These must b --- -### Atala PRISM solution #2 (ToIP specification implementation) +### Solution #2 (ToIP specification implementation) ToIP specification can be used to implement the resource resolution. To implement it the following things are required: @@ -648,9 +648,9 @@ To implement it the following things are required: - add `didDocumentMetadata.linkedResourceMetadata` field to the DID method and implement the logic in the VDR layer - implement the service layer according to the ToIP specification -ToIP solution specifies the requirements to the VDR (blockchain) that is not easy to achieve with the current implementation of the Atala PRISM platform. According to this specification, the Universal Resolver must have the direct access to the blockchain or use a centralized layer for fetching the resources over REST API. -Before implementing this specification is the Atala PRISM platform we need to answer the following questions: +The ToIP solution specifies requirements for the VDR (blockchain) that are not easy to achieve with the current implementation of the Identus platform. According to this specification, the Universal Resolver must have direct access to the blockchain or use a centralized layer for fetching the resources over REST API. +Before implementing this specification in the Identus platform we need to answer the following questions: - who is hosting the `prism-node` infrastructure for the Universal Resolver and how it's managed? - should we make the PRISM DID Method responsible for resource resolution logic?
@@ -670,7 +670,7 @@ Before implementing this specification is the Atala PRISM platform we need to an Each option has technical challenges and limitations, but it's possible to define the following decisions as an outcome: -- the resource MUST be stored on-chain to guarantee trust and longevity aspects, for the Atala PRISM platform it is the Cardano blockchain +- the resource MUST be stored on-chain to guarantee trust and longevity aspects; for the Identus platform it is the Cardano blockchain - the resource SHOULD be indexed for quick lookup over the API - the resource CAN be referenced in the DID Document for additional discoverability - the resource MUST be dereferenced from the DID URL according to W3C or ToIP specification and implementation @@ -679,20 +679,20 @@ Each option has technical challenges and limitations, but it's possible to defin - the solution SHOULD be scalable - the solution MUST be interoperable and easily adopted by the SSI ecosystem -Atala PRISM solution option #1 is considered a good option as it satisfies the requirements and the majority of the negative consequences are mitigated. +Solution option #1 is considered a good option as it satisfies the requirements, and the majority of the negative consequences are mitigated. The following comparison table is a summary of the available options.
-| Option | Simplicity | Trust | Scalability | Interop | Discoverability | Decentalisation | -| ------ | ---------- | ----- | ----------- | ------- | --------------- | --------------- | -| linkedResources field | :heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_plus_sign: | N/A | -| linkedResourceMetadata (Cheqd)| :heavy_minus_sign:/:heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign:| :heavy_plus_sign:|:heavy_plus_sign: | :heavy_check_mark: | -| DID URL Dereferencing (W3C specification)| :heavy_plus_sign: | N/A | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Option | Simplicity | Trust | Scalability | Interop | Discoverability | Decentralisation | +|--------------------------------------------| ---------- | ----- | ----------- | ------- | --------------- | --------------- | +| linkedResources field | :heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_plus_sign: | N/A | +| linkedResourceMetadata (Cheqd) | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign:| :heavy_plus_sign:|:heavy_plus_sign: | :heavy_check_mark: | +| DID URL Dereferencing (W3C specification) | :heavy_plus_sign: | N/A | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | | DID URL Dereferencing (ToIP specification) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | -| RootsID - Cardano AnonCreds | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_check_mark: | -| Hyperledger AnonCreds | :heavy_plus_sign: | :heavy_check_mark:| :heavy_plus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| Trinsic | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | 
:heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| Atala PRISM #1 W3C | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| Atala PRISM #2 ToIP | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | +| RootsID - Cardano AnonCreds | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_check_mark: | +| Hyperledger AnonCreds | :heavy_plus_sign: | :heavy_check_mark:| :heavy_plus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Trinsic | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Solution #1 W3C | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Solution #2 ToIP | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | --- @@ -707,14 +707,14 @@ Each option reviewed in this ADR is a composition of the following architectural - decentralized or SaaS solution - SDK, Universal Resolver or REST API for fetching the resource -The main benefits of option #1 for the Atala PRISM platform are the following: +The main benefits of option #1 for the Identus platform are the following: - the resource is stored on-chain - the resource is published and indexed by the managed VDR layer (prism-node) - the resource is available via REST API & SDK for the product-level applications - the resource is dereferenced via the DID URL in the DID resolver - the resource is linked to the DID dynamically (using DID URL + dereferencing algorithm) -- this solution is scalable and decentralized (anyone can deploy the PRISM stack) 
+- this solution is scalable and decentralized (anyone can deploy the Identus stack) - level of trust can be guaranteed by the underlying VDR and enforced by hashes or signatures of the resource diff --git a/docs/decisions/20230509-message-routing-for-multi-tenant.md b/docs/decisions/20230509-message-routing-for-multi-tenant.md index 0e966499df..5d51c1155d 100644 --- a/docs/decisions/20230509-message-routing-for-multi-tenant.md +++ b/docs/decisions/20230509-message-routing-for-multi-tenant.md @@ -1,24 +1,24 @@ # Routing Requests to the Correct Tenant -- Status: draft [ accepted | deprecated | superseded by [xxx](yyyymmdd-xxx.md)] +- Status: accepted - Deciders: Yurii Shynbuiev,David Poltorak, Benjamin Voiturier, Shailesh Patil - Date: [2023-05-09] - Tags: multi-tenant, routing, message ## Context and Problem Statement -Prism agent in multi-tenancy is still a single agent running, however, some of the resources are now shared between the tenants of the agent. +The Cloud Agent in multi-tenancy is still a single agent running; however, some of the resources are now shared between the tenants of the agent. Each tenant has their own keys, with their own DIDs, connections. Transports and most of the settings are still shared between agents. All the API endpoints are same from outside Multi-tenancy, message routing can be used to ensure that messages are delivered only to the intended recipient or tenants, and not to unauthorized tenants. -Backend services: Prism Agent use PostgreSQL. Authentication and authorization +Backend services: the Cloud Agent uses PostgreSQL. Authentication and authorization
     sequenceDiagram
     autonumber
     actor H as Holder(DidComm)
     actor T as Tenant(Issuer)
-    participant A as PrismAgent
+    participant A as CloudAgent
     participant W as Wallet
     participant DB as Database[did <- tenantId]
     T->>A: Register Tenant
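The lookup at the heart of the sequence above (the `Database[did <- tenantId]` participant) is a mapping from recipient DID to tenant. A minimal Python sketch of that routing step; the DIDs and tenant identifiers are invented for illustration.

```python
# Minimal sketch of multi-tenant message routing: the agent keeps a
# did -> tenantId mapping and uses the recipient DID of an incoming
# DIDComm message to select the tenant's wallet.

did_to_tenant = {
    "did:peer:issuer-connection-1": "tenant-issuer",
    "did:peer:verifier-connection-7": "tenant-verifier",
}

def route_message(recipient_did: str) -> str:
    """Return the tenant that owns the recipient DID; unknown DIDs are rejected."""
    tenant = did_to_tenant.get(recipient_did)
    if tenant is None:
        # Unroutable messages must not leak to another tenant's wallet.
        raise KeyError(f"no tenant registered for {recipient_did}")
    return tenant
```

Rejecting unknown DIDs (rather than falling back to a default wallet) is what prevents delivery to unauthorized tenants.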
diff --git a/docs/decisions/20230515-mediator-message-storage.md b/docs/decisions/20230515-mediator-message-storage.md
index 8fc4c76a55..3057b16474 100644
--- a/docs/decisions/20230515-mediator-message-storage.md
+++ b/docs/decisions/20230515-mediator-message-storage.md
@@ -1,6 +1,6 @@
 # Mediator message storage 
 
-- Status: accepted [ accepted | deprecated | superseded by [xxx](yyyymmdd-xxx.md)]
+- Status: accepted
 - Deciders: Yurii Shynbuiev, Benjamin Voiturier, Shailesh Patil, Fabio Pinheiro , David Poltorak
 - Date: [2023-05-09] 
 - Tags: storage, db, message, mongo, postgres, sql
diff --git a/docs/decisions/20230516-hierarchical-deterministic-key-generation-algorithm.md b/docs/decisions/20230516-hierarchical-deterministic-key-generation-algorithm.md
index 57da4470eb..e94aefaab4 100644
--- a/docs/decisions/20230516-hierarchical-deterministic-key-generation-algorithm.md
+++ b/docs/decisions/20230516-hierarchical-deterministic-key-generation-algorithm.md
@@ -18,13 +18,13 @@ The document covers motivation, the overview of BIP32-based HD wallets, and the
 
 ## Context and Problem Statement
 
-The PRISM platform v2.x must provide the hierarchical deterministic (HD) key derivation algorithm for the identity wallets managed by the platform (Prism Agent and SDKs)
+The Identus Platform must provide the hierarchical deterministic (HD) key derivation algorithm for the identity wallets managed by the platform (Cloud Agent and SDKs).
 
 HD key derivation can be used for both `managed` and `unmanaged` solutions. In both cases, the key material is created from the `seed`.
 
-In the case of a `managed` solution, the keys are created by the `Agent` or `SDK` and stored in the `secured storage` that is managed by the PRISM platform.
+In the case of a `managed` solution, the keys are created by the `Agent` or `SDK` and stored in the `secured storage` that is managed by the Identus platform.
 
-In the case of an `unmanaged` solution, the key material is created by the tools (for instance, `prism-cli`) following similar rules, and is stored on the client side in the `secret storage` managed by the client.
+In the case of an `unmanaged` solution, the key material is created by the tools (for instance, `identus-cli`) following similar rules, and is stored on the client side in the `secret storage` managed by the client.
 
 ## Out of the Scope
 
@@ -42,16 +42,16 @@ The HD key derivation algorithm is a part of the Apollo building block, the choi
 
 ### Secure Storage
 
-Secure store implementation is a matter of another ADR. By now, the Hashicorp Vault is going to be used by the PRISM platform by default.
+Secure store implementation is a matter of another ADR. By now, the Hashicorp Vault is going to be used by the Identus platform by default.
 
-### Backward Compatibility with the PRISM v1.x
+### Backward Compatibility with PRISM v1.4
 
-The current decision doesn't have backward compatibility with the PRISM v1.x, but it can be mitigated by switching to the `unmanaged` way of key management for the DIDs created in v1.4 or by implementing the backward compatibility module in the PRISM v2.x
+The current decision doesn't have backward compatibility with PRISM v1.4, but this can be mitigated by switching to the `unmanaged` way of key management for the DIDs created in v1.4 or by implementing a backward-compatibility module in the Identus Platform.
 
 
 ## Decision Drivers
 
-- Deterministic key derivation for the PRISM platform and in all components: Prism Agent (JVM), Identity Wallets (Android, iOS, Web)
+- Deterministic key derivation across all components of the Identus Platform: Cloud Agent (JVM), Identity Wallets (Android, iOS, Web)
 - Possibility to use the same `seed` value for `crypto` and `identity` wallets.
 - Compliance with BIP32 specification
 - Key material migration between the wallets
@@ -59,14 +59,14 @@ The current decision doesn't have backward compatibility with the PRISM v1.x, bu
 
 ## Considered Option
 
-Implement the HD key derivation algorithm according to the research spike for all the components of the PRISM platform.
+Implement the HD key derivation algorithm according to the research spike for all the components of the Identus Platform.
 The derivation path contains the following segments/layers:
 
 ```
 m/wallet-purpose`/did-method`/did-index`/key-purpose`/key-index`
 ```
 
-`wallet purpose` is used to distinguish the wallet purpose for the identity wallet and is a constant for the PRISM platform `0x1D`, which looks like ID
+`wallet purpose` is used to distinguish the identity wallet from other wallet types and is a constant for the Identus platform, `0x1D`, which resembles "ID"
 
 `did-method` - the DID method that the key will be generated for. The value of `did-method` should be registered. The following are available values for the `did-method`:
 
@@ -96,7 +96,7 @@ The PRISM platform uses HD key derivation algorithm for `managed` and `unmanaged
 
 ### Positive Consequences
 
-- deterministic key material derivation among all components of the PRISM platform
+- deterministic key material derivation among all components of the Identus platform
 - BIP32 compliance
 - key material migration capability
 - key material recovery capability
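The derivation path layout described in this ADR (`wallet-purpose`, `did-method`, `did-index`, `key-purpose`, `key-index`, with the `0x1D` wallet-purpose constant) can be sketched as a small path builder. This is an illustration only: the assumption that every segment is hardened (the `'` suffix), and the numeric values passed in the example, are the editor's, not the ADR's registry.

```python
WALLET_PURPOSE = 0x1D  # identity-wallet constant from this ADR (0x1D "looks like ID")

def derivation_path(did_method: int, did_index: int,
                    key_purpose: int, key_index: int) -> str:
    """Build a BIP32-style path m/wallet-purpose'/did-method'/did-index'/key-purpose'/key-index'.

    All segments are rendered as hardened indices (trailing '), which is an
    assumption of this sketch rather than a statement of the final registry.
    """
    segments = [WALLET_PURPOSE, did_method, did_index, key_purpose, key_index]
    return "m/" + "/".join(f"{s}'" for s in segments)
```

For example, `derivation_path(1, 0, 2, 0)` yields `m/29'/1'/0'/2'/0'`, since `0x1D` is 29 in decimal; the same path string computed on the Cloud Agent (JVM) and in the mobile/web wallets is what makes derivation deterministic across components.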
diff --git a/docs/decisions/20230518-data-isolation-for-multitenancy.md b/docs/decisions/20230518-data-isolation-for-multitenancy.md
index c7cc33b670..ef2be24cba 100644
--- a/docs/decisions/20230518-data-isolation-for-multitenancy.md
+++ b/docs/decisions/20230518-data-isolation-for-multitenancy.md
@@ -1,15 +1,15 @@
 # Data isolation for multi-tenancy
 
-- Status: draft
+- Status: accepted
 - Deciders: Benjamin Voiturier, Yurii Shynbuiev, Shailesh Patil 
 - Date: 2023-05-10
 - Tags: multi-tenancy, data-isolation, PostgreSQL
 
 Technical Story:
 
-The PRISM platform must support the multi-tenancy, so the data of the tenants must be isolated from each other and the access control policies must be applied to the data of each tenant.
+The Identus platform must support multi-tenancy, so the data of the tenants must be isolated from each other and access control policies must be applied to the data of each tenant.
 
-This ADR is about the data isolation for multi-tenancy that must be implemented in the PRISM Agent.
+This ADR is about the data isolation for multi-tenancy that must be implemented in the Cloud Agent.
 
 ## Context and Problem Statement
 
@@ -51,7 +51,7 @@ Data isolation mechanisms should be designed to minimize performance impacts and
 
 ### Technology Stack
 
-The PRISM platform heavily uses the relational database PostgreSQL. Even having the abstraction as a Data Access Layer (DAL), introducing the alternative solution implies a lot of engineering efforts for refactoring and is not recommended at the current phase of the platform development.
+The Identus platform heavily uses the relational database PostgreSQL. Even with an abstraction such as a Data Access Layer (DAL), introducing an alternative solution would imply significant engineering effort for refactoring and is not recommended at the current phase of the platform development.
 
 ## Decision Drivers
 
@@ -73,7 +73,7 @@ Logical and Physical Separations define the level of `isolation` for storage, co
 
 - Performance and Scalability: Data isolation mechanisms should be designed to minimize performance impacts and provide scalability. The architecture should be able to handle increasing numbers of tenants and their data without sacrificing performance or compromising isolation.
 
-- The Complexity of the Implementation: It's essential to build the multi-tenancy capability for the PRISM platform without the introduction of unnecessary complexity at the application layer, operation layer, and maintenance, in a way that allows evolving the platform naturally along with the growth of the users, scalability requirements, and real business needs.
+- The Complexity of the Implementation: It's essential to build the multi-tenancy capability for the Identus platform without the introduction of unnecessary complexity at the application layer, operation layer, and maintenance, in a way that allows evolving the platform naturally along with the growth of the users, scalability requirements, and real business needs.
 
 ## Considered Options
 
@@ -99,10 +99,10 @@ Sharding options (Citus extension and AWS sharding) must be used with the combin
 
 ## Decision Outcome
 
-The `Row Security Policies` option is the easiest for implementation at the current phase of the Atala PRISM development.
+The `Row Security Policies` option is the easiest to implement at the current phase of the Identus platform development.
 A single instance of the PostgreSQL database can keep the data and handle requests of hundreds of thousands of tenants leveraging the Row Security Policies without additional operation and infrastructure costs.
 
-At the same time, the PRISM Agent architecture can support `Instance per Tenant` or `Database per Tenant` configuration by isolating the DAL under the repository interface. So, these options also can be considered for organizations with a lot of tenants to provide better isolation and data protection guarantees.
+At the same time, the Cloud Agent architecture can support an `Instance per Tenant` or `Database per Tenant` configuration by isolating the DAL under the repository interface, so these options can also be considered for organizations with many tenants to provide better isolation and data protection guarantees.
 These two options are excellent for a group of tenants under a single organisation or can be considered for tenants that require geographical separation of data, but should not be used for a single tenant.
 
 Moreover, for the SaaS application to manage thousands of organizations and millions of tenants, the `Row Security Policies` option will not be enough because of the resource limitations and amount of requests to the database. In this case, one of the PostgreSQL sharding options is required together with `Row Security Policies`. So, either `Citus extension` or Amazon RDS sharding should be used. `Citus extension` is a preferred way for an on-premise environment, but, it probably, can be used in AWS as well.
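As an illustration of what the chosen option enforces: PostgreSQL Row Security Policies attach a tenant predicate to every statement on a protected table, scoped by a value bound to the session. The sketch below is illustrative only — the table, policy, and session-setting names are assumptions, not the agent's actual schema — and the Python function merely simulates the filtering the database itself would apply:

```python
# Illustrative DDL for PostgreSQL Row Level Security; the table name,
# policy name, and session setting ('app.current_tenant') are hypothetical.
RLS_DDL = """
ALTER TABLE connection_records ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON connection_records
    USING (tenant_id = current_setting('app.current_tenant')::uuid);
"""

def query_with_rls(rows, current_tenant):
    """Simulate what the policy enforces: every query issued in a session
    is implicitly filtered by the tenant bound to that session."""
    return [row for row in rows if row["tenant_id"] == current_tenant]

rows = [
    {"tenant_id": "t1", "record": "connection-1"},
    {"tenant_id": "t2", "record": "connection-2"},
]
print(query_with_rls(rows, "t1"))  # only tenant t1's rows are visible
```

The point of the simulation is that no query in the application code mentions the tenant at all — the scoping lives in one place, which is what keeps the `Row Security Policies` option cheap to operate.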
diff --git a/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md b/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md
index f857287657..527dc41fb6 100644
--- a/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md
+++ b/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md
@@ -5,7 +5,7 @@
 - Date: 2023-05-27
 - Tags: multitenancy, authorisation, authentication
 
-Technical Story: [Research Spike - 1d: find a way to authenticate and authorise the PRISM agent instance administrator | https://input-output.atlassian.net/browse/ATL-4362]
+Technical Story: [Research Spike - 1d: find a way to authenticate and authorise the Cloud Agent instance administrator | https://input-output.atlassian.net/browse/ATL-4362]
 
 ## Context and Problem Statement
 
@@ -22,7 +22,7 @@ In our Multi-tenant Value Brief, we propose modifications to the agent, enabling
 As we transition to multi-tenancy, several critical questions emerge:
 
 1. How should the cloud agent authenticate, or verify the identities of, its API consumers?
-2. How should the cloud agent authorise a particular identity to use a specific instance of PRISM?
+2. How should the cloud agent authorise a particular identity to use a specific instance of the agent?
 3. As the cloud agent becomes capable of hosting multiple tenants whose workloads must remain isolated, how should it become tenant-aware? That is, how should it determine which tenant an API consumer belongs to, and authorise them to manage and operate within that tenant?
 4. How can we mitigate the security risk associated with a leaked pre-shared key/token?"
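One way to answer questions 2 and 3 is to carry the authorisation data in the token itself, as Keycloak does with `resource_access` claims. A minimal, hypothetical sketch of reading such roles from an already-decoded payload (signature verification and real JWT parsing are deliberately omitted):

```python
# Hypothetical helper: extract the roles a Keycloak-issued token grants
# for one client. Assumes the payload was already signature-verified.
def client_roles(payload: dict, client_id: str) -> list:
    return payload.get("resource_access", {}).get(client_id, {}).get("roles", [])

payload = {
    "exp": 1704267723,
    "aud": ["cloud-agent", "account"],
    "resource_access": {"cloud-agent": {"roles": ["admin"]}},
}
print(client_roles(payload, "cloud-agent"))  # ['admin']
```

Because the roles are client-scoped, an identity authorised for one agent instance carries no implicit rights on another, which also narrows the blast radius of a leaked token compared to a shared static key.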
 
diff --git a/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md b/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md
index e7ba84dc5a..8575c25d5d 100644
--- a/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md
+++ b/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md
@@ -12,7 +12,7 @@ Technical Story: [Apollo Cryptographic Module KMM | https://input-output.atlassi
 

 ### 1. Summary
-This proposal sets out to crystallize a long-term plan for PRISM's cryptographic functionality. Rather than constructing an entirely new cryptographic functionality, our focus is on integrating robust, secure and tested libraries, meeting several key requirements in the process.
+This proposal sets out to crystallize a long-term plan for Identus's cryptographic functionality. Rather than constructing an entirely new cryptographic functionality, our focus is on integrating robust, secure and tested libraries, meeting several key requirements in the process.
 
 By leveraging the flexibility of Kotlin Multiplatform, this library will ensure strong, provable security, centralized management of all cryptography, easy upgrades, and efficient code reuse across multiple platforms.
 
@@ -27,10 +27,10 @@ This proposal outlines a comprehensive plan to develop a cryptographic library u
 Our cryptographic library will provide engineers with high assurances of security. This will be accomplished by using cryptographic primitives that are secure, with this security being provable through rigorous mathematical proofs. Documentation will accompany these proofs to offer transparency and enable a deeper understanding of the underlying logic and assurances.
 
 #### 2.2 Centralized Cryptography Management
-We propose the creation of a cryptographic library that serves as the central management hub for all cryptographic operations within PRISM. By preventing "DIY" implementations, we decrease potential vulnerabilities and establish a standard, thus enhancing overall security across our organization.
+We propose the creation of a cryptographic library that serves as the central management hub for all cryptographic operations within the Identus platform. By preventing "DIY" implementations, we decrease potential vulnerabilities and establish a standard, thus enhancing overall security across our organization.
 
 #### 2.3 Easy Upgrade Path
-In light of emerging cryptographic needs such as the introduction of quantum-resistant cryptography, our library will be designed with easy upgrades in mind. Its modular design will allow for the seamless introduction of new cryptographic primitives as they become necessary or advisable. This adaptability will ensure that cryptographic upgrades across all of PRISM are consistent and efficient.
+In light of emerging cryptographic needs such as the introduction of quantum-resistant cryptography, our library will be designed with easy upgrades in mind. Its modular design will allow for the seamless introduction of new cryptographic primitives as they become necessary or advisable. This adaptability will ensure that cryptographic upgrades across all of Identus's components are consistent and efficient.
 
 #### 2.4 Code Reusability
 Our library will make the most of Kotlin Multiplatform's capabilities for code reuse across different platforms. We aim to design cryptographic functions that promote this potential, thus minimizing the development effort required for adding new functionality or adapting to different platforms.
@@ -132,4 +132,4 @@ Owners of this triage process are the engineering + roadmap leads. Those will ta
 
 ### 9. Conclusion
 
-This proposed Kotlin Multiplatform cryptographic library will ensure that PRISM remains at the forefront of secure digital operations by providing strong, provable security, centralized cryptographic management, easy upgradeability, and efficient code reuse. By addressing these critical constraints and harnessing the benefits of Kotlin Multiplatform, we are set to create a library that will set a new standard for cryptographic operations within PRISM.
+This proposed Kotlin Multiplatform cryptographic library will ensure that Identus remains at the forefront of secure digital operations by providing strong, provable security, centralized cryptographic management, easy upgradeability, and efficient code reuse. By addressing these critical constraints and harnessing the benefits of Kotlin Multiplatform, we are set to create a library that will set a new standard for cryptographic operations within the Identus platform.
diff --git a/docs/decisions/20230714-performance-framework-for-atala-prism.md b/docs/decisions/20230714-performance-framework-for-atala-prism.md
index b6e6d1e76d..383f73a30d 100644
--- a/docs/decisions/20230714-performance-framework-for-atala-prism.md
+++ b/docs/decisions/20230714-performance-framework-for-atala-prism.md
@@ -1,11 +1,11 @@
-# Performance framework for Atala PRISM
+# Performance framework for the Identus platform
 
 - Status: accepted
 - Deciders: Anton Baliasnikov, Shota Jolbordi, David Poltorak
 - Date: 2023-07-14
 - Tags: benchmarks, performance, k6, load testing
 
-Technical Story: [PRISM Performance Management](https://input-output.atlassian.net/browse/ATL-4119)
+Technical Story: [Performance Management](https://input-output.atlassian.net/browse/ATL-4119)
 
 ## Context and Problem Statement
 
diff --git a/docs/decisions/20230928-revocation-status-list-expansion-strategy.md b/docs/decisions/20230928-revocation-status-list-expansion-strategy.md
index 98417adf61..76616156ca 100644
--- a/docs/decisions/20230928-revocation-status-list-expansion-strategy.md
+++ b/docs/decisions/20230928-revocation-status-list-expansion-strategy.md
@@ -24,7 +24,7 @@ It's crucial to keep in mind that this status list will be part of a "status lis
 
 This credential will be frequently requested through the REST API during verification by verifiers and will be downloaded over the network. Therefore, we need to ensure that the status list remains reasonably small in size to prevent any slowdowns in the verification process.
 
-In the future, there might be a need to reorganize the state and possibly move status lists to another public registry for verifiers to depend on. This is not the current scenario, as each Prism agent currently maintains status lists specific to their respective tenants.
+In the future, there might be a need to reorganize the state and possibly move status lists to another public registry for verifiers to depend on. This is not the current scenario, as each cloud agent currently maintains status lists specific to their respective tenants.
 
 Absolutely, it's crucial to avoid overengineering the solution. This ensures that the code remains manageable and easy to maintain in the long run.
 
diff --git a/docs/decisions/20240103-use-jwt-claims-for-agent-admin-auth.md b/docs/decisions/20240103-use-jwt-claims-for-agent-admin-auth.md
index bcca30b666..e1d012d9c7 100644
--- a/docs/decisions/20240103-use-jwt-claims-for-agent-admin-auth.md
+++ b/docs/decisions/20240103-use-jwt-claims-for-agent-admin-auth.md
@@ -62,7 +62,7 @@ Example JWT payload containing `ClientRole`. (Some claims are omitted for readab
 {
   "exp": 1704267723,
   "aud": [
-    "prism-agent",
+    "cloud-agent",
     "account"
   ],
   "realm_access": {
@@ -73,7 +73,7 @@ Example JWT payload containing `ClientRole`. (Some claims are omitted for readab
     ]
   },
   "resource_access": {
-    "prism-agent": {
+    "cloud-agent": {
       "roles": [
         "admin"
       ]
diff --git a/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md b/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md
index 20ef42cccf..39665ce769 100644
--- a/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md
+++ b/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md
@@ -490,9 +490,9 @@ The upper layer will automatically do so appropriately and consistently using Ta
 #### Reporting RFC-9457 Error Response
 
 All declared Tapir endpoints must
-use [`org.hyperledger.identus.api.http.ErrorResponse`](https://github.com/hyperledger/identus-cloud-agent/blob/main/prism-agent/service/server/src/main/scala/io/iohk/atala/api/http/ErrorResponse.scala)
+use [`org.hyperledger.identus.api.http.ErrorResponse`](https://github.com/hyperledger/identus-cloud-agent/blob/main/cloud-agent/service/server/src/main/scala/io/iohk/atala/api/http/ErrorResponse.scala)
 as their output error
-type ([example](https://github.com/hyperledger/identus-cloud-agent/blob/eb898e068f768507d6979a5d9bab35ef7ad4a045/prism-agent/service/server/src/main/scala/io/iohk/atala/connect/controller/ConnectionEndpoints.scala#L45))
+type ([example](https://github.com/hyperledger/identus-cloud-agent/blob/eb898e068f768507d6979a5d9bab35ef7ad4a045/cloud-agent/service/server/src/main/scala/io/iohk/atala/connect/controller/ConnectionEndpoints.scala#L45))
 
 This type ensures that the response returned to the user complies with
 the [RFC-9457 Problem Details for HTTP APIs](https://www.rfc-editor.org/rfc/rfc9457.html).
@@ -517,7 +517,7 @@ propagate through the effect’s error channel is the parent
 [`org.hyperledger.identus.shared.models.Failure`](https://github.com/hyperledger/identus-cloud-agent/blob/main/shared/src/main/scala/io/iohk/atala/shared/models/Failure.scala)
 type and its conversion to the ErrorResponse type is done automatically
-via [Scala implicit conversion](https://github.com/hyperledger/identus-cloud-agent/blob/eb898e068f768507d6979a5d9bab35ef7ad4a045/prism-agent/service/server/src/main/scala/io/iohk/atala/api/http/ErrorResponse.scala#L44).
+via [Scala implicit conversion](https://github.com/hyperledger/identus-cloud-agent/blob/eb898e068f768507d6979a5d9bab35ef7ad4a045/cloud-agent/service/server/src/main/scala/io/iohk/atala/api/http/ErrorResponse.scala#L44).
 
 #### Do not reflexively log errors
 
diff --git a/docs/decisions/index.md b/docs/decisions/index.md
index fed190ac85..208bc55d80 100644
--- a/docs/decisions/index.md
+++ b/docs/decisions/index.md
@@ -2,7 +2,7 @@
 
 # Architecture knowledge base
 
-Welcome 👋 to the architecture knowledge base of atala-prism-building-blocks.
+Welcome 👋 to the architecture knowledge base of the Identus platform.
 
 You will find here all the Architecture Decision Records (ADR) of the project.
 
diff --git a/docs/general/interdependencies.md b/docs/general/interdependencies.md
deleted file mode 100644
index 45c5b25b53..0000000000
--- a/docs/general/interdependencies.md
+++ /dev/null
@@ -1,82 +0,0 @@
-# Interdependencies
-
-This document describes the interdependencies between the different Building Blocks.
-
-
-```mermaid
-flowchart
-    PLUTO-didcomm -.WIP.-> OR1{OR}
-    didcommx --external dependency--> OR1{OR}
-
-    subgraph PRISM
-        prism-cryto --> prism-node-1.4
-        prism-cryto --> OR2{OR}
-        prism-node-1.4 -.copy proto files.-> prism-node
-
-        subgraph BB[Building Blocks Interdependencies]
-
-
-            pluto -.WIP.-> OR2{OR}
-            pluto --> PLUTO-didcomm
-
-            OR1 --> mercury
-
-            OR2 ---> iris
-            OR2 --> castor
-
-            subgraph REPO [BB Repository]
-                prism-node -.???\nMAYBE in the future.-> pollux
-                prism-node --> castor
-
-                iris -.IRIS client\nMAYBE in the future.-> castor
-                iris -.IRIS client\nMAYBE in the future.-> prism-agent
-                iris --> iris-server
-
-                shared --> prism-agent
-                shared --> pollux
-                shared --> castor
-                shared --> connect
-                shared --> iris
-
-                mercury --> pollux
-                mercury --> connect
-                mercury --> prism-agent
-
-                connect --> prism-agent
-
-                castor --> pollux
-                castor --> prism-agent
-                %%castor --> iris
-
-                pollux --> prism-agent
-
-                infrastructure
-                test
-            end
-        end
-    end
-
-
-
-%% All
-castor[Castor]
-connect
-infrastructure
-iris[IRIS]
-iris-server((IRIS-server))
-mercury[MERCURY]
-mediator((Mercury\nMediator))
-pluto[PLUTO extrenal repo]
-pollux[POLLUX]
-prism-agent((Prism Agent))
-prism-node[prism-node-client]
-shared
-test
-```
-
-**Note:**
-- The **Mercury Libraries** collect all dependencies with the single cross version.
-We will represent it as a single box here. But the actual dependencies can be of a subset of the Mercury Libraries.
-For more information about the Mercury Modules Interdependencies, see [HERE](./mercury/mercury-library/README.md).
-- The **Pollux Libraries** collect all dependencies with the single cross version.
\ No newline at end of file
diff --git a/docs/general/key-derivation-benchmark.md b/docs/general/key-derivation-benchmark.md
index fa2d96e618..f1897771f3 100644
--- a/docs/general/key-derivation-benchmark.md
+++ b/docs/general/key-derivation-benchmark.md
@@ -1,8 +1,8 @@
 # Key derivation benchmark
 
-This document provides a performance benchmark of a key derivation used inside the PRISM agent
+This document provides a performance benchmark of a key derivation used inside the Cloud Agent
 in comparison with a key retrieval from HashiCorp Vault. It should provide a baseline for
-future decisions in managing the key material on PRISM agent.
+future decisions in managing the key material on the agent.
 
 ## Test setup
 
diff --git a/docs/general/repositories_relationships.md b/docs/general/repositories_relationships.md
deleted file mode 100644
index 2b351981ae..0000000000
--- a/docs/general/repositories_relationships.md
+++ /dev/null
@@ -1,125 +0,0 @@
-# Repositories & Relationships
-
-The building blocks separate the modules by capabilities that work together that make up the Atala PRISM framework. There are dependencies with Atala PRISM v1.
-
-This repo is the main repository for the Atala PRISM V2.
-
-So here we have a simplified view of all Atala repositories and how they are related:
-
-```mermaid
-flowchart
-    subgraph doc
-        atala-prism-docs
-        atala-prism-interactive-demo-web
-    end
-
-    subgraph PRISM_1_4
-        atala-prism-spec -->|spec for| atala-prism
-        atala-mirror
-        atala-prism-credentials-verification-portal -->|uses node| atala-prism
-        atala-qa-automation
-        atala-prism
-
-        subgraph SDK_PRISM_1_4
-            direction TB
-            atala-prism-sdk -->
-            atala-prism-sdk-scalajs
-        end
-
-        subgraph archive
-            atala-legacy
-            subgraph to_be_archived
-                direction RL
-                atala-prism-android-app
-                atala-prism-sdk-ios
-                atala-prism-sdk-ios-sp
-                atala-prism-ios-app
-                atala-prism-management-console-web %% -->|backend|
-                atala-prism-browser-extension-wallet
-            end
-        end
-
-        subgraph empty
-            atala-style_-guide[atala-style-guide]
-            atala-prism-connect
-            atala-prism-vault-sdk
-            atala-cocoapods-specs
-        end
-        atala-prism-connect --->|wannabe| atala-prism
-        atala-prism-vault-sdk --->|wannabe| atala-prism
-    end
-
-    subgraph coding-challanges-for-candidates
-        atala-swetest
-    end
-
-    subgraph PRISM_2
-        atala-prism-building-blocks ----> |currently uses node| atala-prism
-        atala-prism-building-blocks --> |currently uses crypto| SDK_PRISM_1_4
-        atala-prism-apollo
-        atala-prism-didcomm-kmm
-        atala-prism-manage
-        atala-prism-products
-        atala-web-app-template
-        atala-prism-esdk -->|SDK for| atala-prism-building-blocks
-        atala-prism-crypto-sdk-sp -->|1.4 Crypto SDK to be replaced by apollo| atala-prism-wallet-sdk-swift
-        atala-prism-didcomm-swift -->|Didcomm rust to be replaced by our didcomm-kmm| atala-prism-wallet-sdk-swift
-        atala-prism-didcomm-kmm --> atala-prism-wallet-sdk-kmm
-        atala-prism-apollo -..->|will replace| atala-prism-crypto-sdk-sp
-        atala-prism-didcomm-kmm -..->|will replace| atala-prism-didcomm-swift
-        atala-prism-apollo --> atala-prism-wallet-sdk-kmm
-        atala-prism-wallet-sdk-kmm -..->|will replace| atala-prism-wallet-sdk-swift
-    end
-
-    %% infra
-    atala-infra --wannabe--> atala-prism-infra
-    atala-prism-infra -->|infra for| PRISM_1_4
-    atala-prism-dev-deployments -->|helm charts and environments managed by| atala-prism-v2-infra
-    atala-prism-terraform-modules -->|modules used in| atala-prism-v2-infra
-    atala-prism-v2-infra ---->|infrastructure as code definition for aws| PRISM_2
-    atala-prism-v2-infra --> doc
-
-    PRISM_2 ------->|still have some\n dependencies| PRISM_1_4
-
-
-
-
-    click atala-prism-wallet-sdk-kmm href "https://github.com/input-output-hk/atala-prism-wallet-sdk-kmm"
-    click atala-prism href "https://github.com/input-output-hk/atala-prism"
-    click atala-prism-docs href "https://github.com/input-output-hk/atala-prism-docs"
-    click atala-prism-v2-infra href "https://github.com/input-output-hk/atala-prism-v2-infra"
-    click atala-prism-building-blocks href "https://github.com/input-output-hk/atala-prism-building-blocks"
-    click atala-prism-products href "https://github.com/input-output-hk/atala-prism-products"
-    click atala-prism-credentials-verification-portal href "https://github.com/input-output-hk/atala-prism-credentials-verification-portal"
-    click atala-prism-terraform-modules href "https://github.com/input-output-hk/atala-prism-terraform-modules"
-    click atala-prism-interactive-demo-web href "https://github.com/input-output-hk/atala-prism-interactive-demo-web"
-    click atala-prism-dev-deployments href "https://github.com/input-output-hk/atala-prism-dev-deployments"
-    click atala-prism-esdk href "https://github.com/input-output-hk/atala-prism-esdk"
-    click atala-prism-sdk href "https://github.com/input-output-hk/atala-prism-sdk"
-    click atala-qa-automation href "https://github.com/input-output-hk/atala-qa-automation"
-    click atala-prism-apollo href "https://github.com/input-output-hk/atala-prism-apollo"
-    click atala-prism-didcomm-kmm href "https://github.com/input-output-hk/atala-prism-didcomm-kmm"
-    click atala-prism-crypto-sdk-sp href "https://github.com/input-output-hk/atala-prism-crypto-sdk-sp"
-    click atala-prism-didcomm-swift href "https://github.com/input-output-hk/atala-prism-didcomm-swift"
-    click atala-prism-infra href "https://github.com/input-output-hk/atala-prism-infra"
-    click atala-prism-android-app href "https://github.com/input-output-hk/atala-prism-android-app"
-    click atala-prism-manage href "https://github.com/input-output-hk/atala-prism-manage"
-    click atala-web-app-template href "https://github.com/input-output-hk/atala-web-app-template"
-    click atala-prism-connect href "https://github.com/input-output-hk/atala-prism-connect"
-    click atala-prism-sdk-ios href "https://github.com/input-output-hk/atala-prism-sdk-ios"
-    click atala-prism-management-console-web href "https://github.com/input-output-hk/atala-prism-management-console-web"
-    click atala-prism-sdk-ios-sp href "https://github.com/input-output-hk/atala-prism-sdk-ios-sp"
-    click atala-mirror href "https://github.com/input-output-hk/atala-mirror"
-    click atala-prism-ios-app href "https://github.com/input-output-hk/atala-prism-ios-app"
-    click atala-prism-browser-extension-wallet href "https://github.com/input-output-hk/atala-prism-browser-extension-wallet"
-    click atala-prism-sdk-scalajs href "https://github.com/input-output-hk/atala-prism-sdk-scalajs"
-    click atala-prism-spec href "https://github.com/input-output-hk/atala-prism-spec"
-    click atala-prism-vault-sdk href "https://github.com/input-output-hk/atala-prism-vault-sdk"
-    click atala-infra href "https://github.com/input-output-hk/atala-infra"
-    click atala-legacy href "https://github.com/input-output-hk/atala-legacy"
-    click atala-swetest href "https://github.com/input-output-hk/atala-swetest"
-    click atala-cocoapods-specs href "https://github.com/input-output-hk/atala-cocoapods-specs"
-    click atala-style_-guide href "https://github.com/input-output-hk/atala-style-guide"
-
-    style PRISM_2 fill:#f969
-```

From ca0537d6b87f622a4369df341ef1c68b93811341 Mon Sep 17 00:00:00 2001
From: Yurii Shynbuiev
Date: Mon, 20 May 2024 16:25:10 +0800
Subject: [PATCH 2/3] docs: rename ADRs in the scope of rebranding to the Identus ATL-7050

Signed-off-by: Yurii Shynbuiev
---
 ...rary-as-a-dsl-for-openapi-specification.md |  14 +--
 ...vate-keys-of-issuers-inside-prism-agent.md |   6 +-
 ...ll-library-for-sql-statement-generation.md |   8 +-
 .../20230405-did-linked-resources.md          | 112 +++++++++---------
 ...230509-message-routing-for-multi-tenant.md |   2 +-
 .../20230515-mediator-message-storage.md      |  36 +++---
 ...0230518-data-isolation-for-multitenancy.md |  10 +-
 ...-facilitate-multitenancy-in-cloud-agent.md |   2 +-
 ...d-secure-cryptography-management-module.md |  22 ++--
 ...vocation-status-list-expansion-strategy.md |  12 +-
 ...se-zio-failures-and-defects-effectively.md |  14 +--
 11 files changed, 119 insertions(+), 119 deletions(-)

diff --git a/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md b/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md
index 074c4efa27..4a4c05c9c9 100644
--- a/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md
+++ b/docs/decisions/20221005-using-tapir-library-as-a-dsl-for-openapi-specification.md
@@ -17,16 +17,16 @@ Instead of using the yaml file as OpenAPI specification and openapi-generator fo
 
 Technology stack that is going to be used in the Identus platform backend: Scala 3 + ZIO ecosystem
 
-Akka framework after version 2.6.x cannot be used because [Lightbend changed the license type to BSL 1.1](https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka).
+Akka framework after version 2.6.x cannot be used because [Lightbend changed the license type to BSL 1.1](https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka). Looks like Akka 2.6.x still can be used according to [License FQA](https://www.lightbend.com/akka/license-faq) -Currently, we have a code generation for Akka that is wrapped up into ZIO. Code generation mustache templates for ZIO-http are not available in OpenAPI tools. +Currently, we have a code generation for Akka that is wrapped up into ZIO. Code generation mustache templates for ZIO-http are not available in OpenAPI tools. -Mustache templates and code generation doesn't work out of the box, so the original templates where copied to the project and fixed by @Shota and @Pat. +Mustache templates and code generation doesn't work out of the box, so the original templates where copied to the project and fixed by @Shota and @Pat. Current templates and generator contains constraints that were reported by [@Pat](https://docs.google.com/document/d/1WhUtflM_o-5uSx9LW76lycz2kbk071cVZiv6EtVwhAQ/edit#heading=h.ywcvgffenpz) and [@Shota](https://input-output-rnd.slack.com/archives/G018JE9NHAM/p1664563129397819), this requires engineering time to adopt the OAS for a code generation. @Ben says that we can live with these constraints -Generally, OAS files are written by the engineers with different experience and different view on formatting, schemas, normalization, datatype. For instance, in current templates don't have +Generally, OAS files are written by the engineers with different experience and different view on formatting, schemas, normalization, datatype. 
For instance, in current templates don't have - a consistent way for paginating the entities - standard Responses for 4xx and 5xx errors - normalized data types (we use ```anyOf```, ```allOf```) @@ -57,13 +57,13 @@ Quality and formatting of autogenerated code depend on the template (not all tem - reuse endpoint definitions for creating server and client stubs in Scala - align the server side of REST API with the current technology stack (ZIO + ecosystem) - have a control over the codebase and data types -- reduce time-of-maintenance of the code (either OAS should be adapted for generator or mustache templates should be fixed) +- reduce time-of-maintenance of the code (either OAS should be adapted for generator or mustache templates should be fixed) - functional way of implementation of non-functional requirement (metrics, tracing, logging) - straight forward generation of Swagger UI, Redoc documentation and Async API documentation based on endpoint definitions ## Considered Options -- use OpenAPI tools (edit OAS manually, generate server stub for Akka and client stubs for any other languages) +- use OpenAPI tools (edit OAS manually, generate server stub for Akka and client stubs for any other languages) - use OpenAPI tools, but generate code for other server-side library (Play, Finch, Lagom) - use Tapir library (edit endpoint definitions as Scala code, reuse endpoint definitions for server stubs, generate OAS based on endpoint definitions, generate client stubs for any other language) @@ -71,7 +71,7 @@ Quality and formatting of autogenerated code depend on the template (not all tem Chosen option:"use Tapir library" till the end of the year, evaluate this solution in 2023 -All endpoint definition are written in Tapir DSL. +All endpoint definition are written in Tapir DSL. OpenAPI specification generated based on endpoint definition and is published as an artefact. 
(must be a part of CI) diff --git a/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md b/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md index cd2aeecd06..b739c2a2c8 100644 --- a/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md +++ b/docs/decisions/20221006-store-private-keys-of-issuers-inside-prism-agent.md @@ -18,13 +18,13 @@ While each holder has a wallet application on the phone (edge agent) to store pr ## Decision Outcome -Chosen option: Option 3, because it is the simplest approach that satisfies the needs of providing the Issuer and Verifier with key storage while also not requiring them to manage their own keys. Option 3 was chosen instead of Option 2 because it achieves the same goal but does not require work on integrating another wallet application, so in short, it is simpler and faster to implement. +Chosen option: Option 3, because it is the simplest approach that satisfies the needs of providing the Issuer and Verifier with key storage while also not requiring them to manage their own keys. Option 3 was chosen instead of Option 2 because it achieves the same goal but does not require work on integrating another wallet application, so in short, it is simpler and faster to implement. ### Negative Consequences -While Option 3 is simpler to implement then Option 2 and provides basic functionality required to solve the problem emphasized in [ Context and Problem Statement](#context-and-problem-statement), it does not provide full functionality and security of widely used and well tested wallet application. Therefore this decision is considered to be temporary and made only in the interest of solving the problem as fast as possible. 
+While Option 3 is simpler to implement then Option 2 and provides basic functionality required to solve the problem emphasized in [Context and Problem Statement](#context-and-problem-statement), it does not provide full functionality and security of widely used and well tested wallet application. Therefore this decision is considered to be temporary and made only in the interest of solving the problem as fast as possible. ## Links -- [Recording of the meeting where decision was made](https://drive.google.com/file/d/120YyW2IEpl-F-6kF0V0Fau4bM7BbQ6mT/view?usp=sharing) +- [Recording of the meeting where decision was made](https://drive.google.com/file/d/120YyW2IEpl-F-6kF0V0Fau4bM7BbQ6mT/view?usp=sharing) diff --git a/docs/decisions/20230118-quill-library-for-sql-statement-generation.md b/docs/decisions/20230118-quill-library-for-sql-statement-generation.md index c05dec8157..301e5dfda1 100644 --- a/docs/decisions/20230118-quill-library-for-sql-statement-generation.md +++ b/docs/decisions/20230118-quill-library-for-sql-statement-generation.md @@ -1,8 +1,8 @@ # Quill library for SQL statement generation and validation - Status: accepted -- Deciders: Yurii Shynbuiev, Fabio Pinheiro, Benjamin Voiturier -- Date: [2023-01-17] +- Deciders: Yurii Shynbuiev, Fabio Pinheiro, Benjamin Voiturier +- Date: [2023-01-17] - Tags: DAL, SQL, Postrgresql, Typesafe ## Context and Problem Statement @@ -37,7 +37,7 @@ Quill provides a Quoted Domain Specific Language (QDSL) to express queries in Sc 4. Compile-time query validation: If configured, the query is verified against the database at compile time, and the compilation fails if it is not valid. The query validation does not alter the database state. ``` -There are [Slick](https://scala-slick.org/) and [ScalikeJDBC](http://scalikejdbc.org/) libraries as well. +There are [Slick](https://scala-slick.org/) and [ScalikeJDBC](http://scalikejdbc.org/) libraries as well. 
Comparison of these libraries is not a goal of this ADR, but it's essential to know the differences. @@ -164,7 +164,7 @@ val q = TableQuery[Person].filter(_.id === 1) val result: Future[Seq[Person]] = db.run(q.result) ``` -#### Two more real example of Doobie and Quill usage are in the [Links](#links) section. +#### Two more real example of Doobie and Quill usage are in the [Links](#links) section ## Links diff --git a/docs/decisions/20230405-did-linked-resources.md b/docs/decisions/20230405-did-linked-resources.md index 10a8175080..89836c3b1b 100644 --- a/docs/decisions/20230405-did-linked-resources.md +++ b/docs/decisions/20230405-did-linked-resources.md @@ -9,7 +9,7 @@ [Research Spike - Schema and Verifiable Presentation Registry](https://input-output.atlassian.net/browse/ATL-3186) -- Provide a clear and concise analysis of the various schema registry implementation and the associated benefits and downfalls of each approach. +- Provide a clear and concise analysis of the various schema registry implementation and the associated benefits and downfalls of each approach. - Provide a concrete proposal for what we would like to implement for the Identus platform. - Provide a generic way of storing and linking the resources for the DID in the Identus platform. @@ -46,11 +46,11 @@ The technical solution contains a lot of variations and particular small decisio ### Storage limitations -All decentralized storage (DLT or IPFS) has storage limitations, and the amount of data that can be stored is limited by the available storage capacity and the way how the resources are stored. +All decentralized storage (DLT or IPFS) has storage limitations, and the amount of data that can be stored is limited by the available storage capacity and the way how the resources are stored. 
The following aspect must be taken into account for storing the resources in DLT: -- transaction size limit (can be mitigated by data fragmentation, so the single resource is stored in multiple transactions) - 16KB, 32KB, 64KB, up to 1MB - depending on the type of the blockchain +- transaction size limit (can be mitigated by data fragmentation, so the single resource is stored in multiple transactions) - 16KB, 32KB, 64KB, up to 1MB - depending on the type of the blockchain - throughput - bytes we can insert to storage per unit of time - latency - multi-second time per insert - cost - each insertion costs fees @@ -71,7 +71,7 @@ For larger resource types IPFS or another option should be considered. Large res IPFS doesn't have a size limitation (it's limited by the underlying storage or the particular SDK) and requires additional infrastructure and `incentives` (the way to pay for the storage) from the community. -IPFS can be used for storing the resources, but it should be covered in the scope of separated ADR. +IPFS can be used for storing the resources, but it should be covered in the scope of a separate ADR. ### Scalability @@ -85,7 +85,7 @@ Scalability issues also must be considered in the decision for linking the resou ### Access control -SSI resources stored in DLT and IPFS can be accessed by anyone who has access to the network. +SSI resources stored in DLT and IPFS can be accessed by anyone who has access to the network. This can be a security concern for organizations that need to control access to their SSI resources. @@ -95,7 +95,7 @@ The types of resources such as credential schemas, logos, and revocation lists s ### Data privacy -While DLT and IPFS are designed to be secure, there is still a risk that SSI resources stored on these platforms could be accessed or stolen by unauthorized parties. 
+While DLT and IPFS are designed to be secure, there is still a risk that SSI resources stored on these platforms could be accessed or stolen by unauthorized parties. This is especially concerning when it comes to sensitive personal information. @@ -181,34 +181,34 @@ The drawbacks of the solution: Each resource entry is a part of the collection and is described in the `linkedResourceMetadata` field. -The solution is described in the Cheqd ARD in the [Links](#Links) section of the current ADR +The solution is described in the Cheqd ARD in the [Links](#links) section of the current ADR Example: ``` { - "didDocumentMetadata": { - "linkedResourceMetadata": [ - { - "resourceURI": "did:cheqd:mainnet:1f8e08a2-eeb6-40c3-9e01-33e4a0d1479d/resources/f3d39687-69f5-4046-a960-3aae86a0d3ca", - "resourceCollectionId": "1f8e08a2-eeb6-40c3-9e01-33e4a0d1479d", - "resourceId": "f3d39687-69f5-4046-a960-3aae86a0d3ca", - "resourceName": "PassportSchema", // First version of a Resource called PassportSchema - "resourceType": "CL-Schema", - "mediaType": "application/json", - "created": "2022-07-19T08:40:00Z", - "checksum": "7b2022636f6e74656e74223a202274657374206461746122207d0ae3b0c44298", - "previousVersionId": null, // null if no previous version, otherwise, resourceId of previous version - "nextVersionId": null, // null if no new version, otherwise, resourceId of new version - } - ] - } + "didDocumentMetadata": { + "linkedResourceMetadata": [ + { + "resourceURI": "did:cheqd:mainnet:1f8e08a2-eeb6-40c3-9e01-33e4a0d1479d/resources/f3d39687-69f5-4046-a960-3aae86a0d3ca", + "resourceCollectionId": "1f8e08a2-eeb6-40c3-9e01-33e4a0d1479d", + "resourceId": "f3d39687-69f5-4046-a960-3aae86a0d3ca", + "resourceName": "PassportSchema", // First version of a Resource called PassportSchema + "resourceType": "CL-Schema", + "mediaType": "application/json", + "created": "2022-07-19T08:40:00Z", + "checksum": "7b2022636f6e74656e74223a202274657374206461746122207d0ae3b0c44298", + "previousVersionId": null, // 
null if no previous version, otherwise, resourceId of previous version + "nextVersionId": null, // null if no new version, otherwise, resourceId of new version + } + ] + } } ``` -The solution is not fully interoperable with the SSI ecosystem, but it's probably the first successful specification that formalizes the DID-linked resources and the DID URL. +The solution is not fully interoperable with the SSI ecosystem, but it's probably the first successful specification that formalizes the DID-linked resources and the DID URL. -Cheqd's approach for linking the resources to the DID is not a part of the current version of DID specification. Even if it's possible to find some information about `linkedResources` and `linkedResourceMetadata` optional field of the DIDDoc in the cache of the search system or ChatGPT. +Cheqd's approach for linking the resources to the DID is not a part of the current version of DID specification. Even if it's possible to find some information about `linkedResources` and `linkedResourceMetadata` optional field of the DIDDoc in the cache of the search system or ChatGPT. Looks like the ToIP specification is inspired by Cheqd's ADR. @@ -236,10 +236,10 @@ The main idea is an algorithm that allows using the DID URL and the information Dereference is performed by defining the service `id` and `relativeRef` params or `path` in the DID URL **NOTE:** -The `service.type` property is not taken into account in this flow. +The `service.type` property is not taken into account in this flow. According to the did-core specification, the service type and its associated properties SHOULD be registered in the [DID Specification Registries]( https://www.w3.org/TR/did-spec-registries/#service-types). -So, defining and registering the `schemaService` or `resourceService` should be the next step to facilitate the interoperability of SSI systems. 
+So, defining and registering the `schemaService` or `resourceService` should be the next step to facilitate the interoperability of SSI systems. Example 1: using `service` and `relativeRef` @@ -321,7 +321,7 @@ ToIP specification doesn't describe the details about the storage of the underly The new specification for DID URL dereferencing is an improved specification with recommended Cheqd idea to publish the resource metadata in the DID Document. -The main difference with the previous specification is an introduction of parameters that can discover the resource (instead of using `resource` field only) and simplification of the Cheqd's approach by skipping the `collection` abstraction. +The main difference with the previous specification is the introduction of parameters that can discover the resource (instead of using the `resource` field only) and the simplification of Cheqd's approach by skipping the `collection` abstraction. The DID Document refers to the associated resource via linked resource metadata. @@ -331,7 +331,7 @@ The current status of the document is a draft, but it's going to be published in The list of resource parameters with descriptions is the following: -- `resourceUri` (required): A string or a map that conforms to the rules of [RFC3986] for URIs which SHOULD directly lead to a location where the resource can be accessed from. +- `resourceUri` (required): A string or a map that conforms to the rules of [RFC3986] for URIs which SHOULD directly lead to a location where the resource can be accessed from. - `resourceCollectionId` (optional): A string that conforms to a method-specific unique identifier format. - `resourceId` (optional): A string that conforms to a method-specific unique identifier format. - `resourceName` (required): A string that uniquely names and identifies a resource. This property, along with the resourceType below, can be used to track version changes within a resource. 
@@ -340,13 +340,13 @@ The list of resource parameters with descriptions is the following: - `mediaType` (required): A string that identifies the IANA-registered Media Type for a resource. - `created` (required): A JSON String serialized as an XML DateTime normalized to UTC 00:00:00 and without sub-second decimal precision. - `checksum` (optional): A string that provides a checksum (e.g. SHA256, MD5) for the resource to facilitate data integrity. -- `previousVersionId` (optional): The value of the property MUST be a string. This is the previous version of a resource with the same resourceName and resourceType. The value must be 'null' if there is no previous version. -- `nextVersionId` (optional): The value of the property MUST be a string. This is the previous version of a resource with the same resourceName and resourceType. The value must be 'null' if there is no previous version. +- `previousVersionId` (optional): The value of the property MUST be a string. This is the previous version of a resource with the same resourceName and resourceType. The value must be 'null' if there is no previous version. +- `nextVersionId` (optional): The value of the property MUST be a string. This is the next version of a resource with the same resourceName and resourceType. The value must be 'null' if there is no next version.
This specification describes many important aspects: -- the list of the query parameters in the DID URL for dereferencing the resource and error messages, -- DID Method and VDR requirements, and +- the list of the query parameters in the DID URL for dereferencing the resource and error messages, +- DID Method and VDR requirements, and - DID Resolver requirements #### Positive Consequences @@ -354,7 +354,7 @@ This specification describes many important aspects: - interoperability: the resource is resolved in a standard way according to the ToIP specification following W3C specification for DID URL dereferencing - discoverability: the resource defined in DID URL is resolved and fetched dynamically - scalability: compared to W3C specification, the DID Document is not required to fetch the resource, so instead of 2-3 steps (calls), the resource resolution should be completed in a single step. The behavior must be described in the DID Method and implemented by the DID resolver. -- trust: publishing the `checksum` of the resource inside of the DID Document allows other SSI system to check the resource validity. +- trust: publishing the `checksum` of the resource inside of the DID Document allows other SSI system to check the resource validity. #### Negative Consequences @@ -393,8 +393,8 @@ As the solution is based on the latest ToIP specification, it derives all positi #### Negative Consequences - scalability: the specification is inspired by the Cheqd approach to store the linkedResourceMetadata inside of the DID Document - the convention for references and the logic must be carefully reviewed: - - `schemaId` in this solution is `{didRef}/resources/{cardano_transaction_id}`, so it doesn't refer to the `id` but to the Tx where everything else is stored (it's an interesting idea for a stateless design) - - resource metadata is built according to the ToIP specification but for AnonCreds entities only: credential schema and credential definition. 
+ - `schemaId` in this solution is `{didRef}/resources/{cardano_transaction_id}`, so it doesn't refer to the `id` but to the Tx where everything else is stored (it's an interesting idea for a stateless design) + - resource metadata is built according to the ToIP specification but for AnonCreds entities only: credential schema and credential definition. - technology stack: it doesn't fit to current platform, but can be used for inspiration. @@ -402,7 +402,7 @@ As the solution is based on the latest ToIP specification, it derives all positi According to the AnonCreds specification, such kinds of resources as credential schema and credential definition are stored on-chain. Indy blockchain is used by the Hyperledger technology stack. -The credential schema and definition are not signed by the issuer, but the transaction with the underlying resource is published by the issuer. So, the integrity of the resource is guaranteed by the fact that it's published inside of the transaction signed by the issuer. +The credential schema and definition are not signed by the issuer, but the transaction with the underlying resource is published by the issuer. So, the integrity of the resource is guaranteed by the fact that it's published inside of the transaction signed by the issuer. 
Example of the credential schema transaction: @@ -495,9 +495,9 @@ Taking into account the advantages and disadvantages of the existing solutions t -the resource is linked to the DID by convention specified in the W3C specification, so specifying the resource in the DID URL and defining the service endpoint that exposes the resource allows to discover and fetch the resource using the Universal Resolver - as an option, the same resource can be discovered and fetched by the Identus platform backend and SDK without loading the Universal resolver - the resource integrity must be guaranteed by one of the following options: - - by signing the payload with one of the DID's keys or - - by publishing the resource metadata that contains the information about the resource (id, type, name, media type, hash) on-chain or - - for the resource that is less than the blockchain limitation (up to 64KB) by publishing the resource together with the hash, and/or signature + - by signing the payload with one of the DID's keys or + - by publishing the resource metadata that contains the information about the resource (id, type, name, media type, hash) on-chain or + - for the resource that is less than the blockchain limitation (up to 64KB) by publishing the resource together with the hash, and/or signature - the resource can be stored in the cloud storage - PostgreSQL database - for indexing and lookup API As the Identus platform can leverage the Cardano blockchain and there is a strong requirement for longevity and security - the resource together with the signature and/or hash must be stored in the Cardano blockchain. 
@@ -506,9 +506,9 @@ An example of this solution will be the following (concerning the current infras - prism-node must be able to store the generic resource payload, signature and/or hash on-chain and restore the given resource in the underlying database (PostgreSQL) for indexing and lookup API - credential schema (or any other resource module) must be a part of the Atala SSI infrastructure and allow - - publishing the concrete resource as a generic resource using the prism-node API - - expose the API for discovery and fetching the resource by URL - - expose the API for managing the resources (create, publish, lookup with pagination) + - publishing the concrete resource as a generic resource using the prism-node API + - expose the API for discovery and fetching the resource by URL + - expose the API for managing the resources (create, publish, lookup with pagination) - the Universal Resolver for the DID Method must be able to discover and fetch the resource by DID URL - is needed, SDK and backend services can fetch the resources directly (not via the Universal Resolver) @@ -616,7 +616,7 @@ Storing resources larger than 64KB is out of the scope of this ADR. These must b - the image is stored in the cloud database in a binary format - the metadata and the hash of the image are stored on-chain -- optionally, the signature of the owner DID can be generated for the payload and stored together with the hash +- optionally, the signature of the owner DID can be generated for the payload and stored together with the hash - to prove the integrity of the image file, the hash of the binary representation must be the same and/or the signature must be verified - the resource can be fetched in the same way and the credential schema from the previous example @@ -632,7 +632,7 @@ Storing resources larger than 64KB is out of the scope of this ADR. 
These must b #### Negative Consequences - longevity: for the resource that can not be stored on-chain because of the large size longevity is guaranteed by the cloud recovery procedures and data backup. As an option for mitigating this problem, the resource can be stored in IPFS (additional ADR is required for this) -- vendor lock: the solution is coupled to the Cardano blockchain +- vendor lock: the solution is coupled to the Cardano blockchain **NOTE:** one of the main concerns of this ADR is storing the resources on-chain because of size limitation, throughput, latency and cost. This option allows to postpone this decision and implement the DID-linked resources without the need of storing resources on-chain. @@ -682,17 +682,17 @@ Each option has technical challenges and limitations, but it's possible to defin The solution option #1 is considered a good option as it satisfies the requirements and the majority of the negative consequences are mitigated. The following comparison table is a summary of the available options. 
-| Option | Simplicity | Trust | Scalability | Interop | Discoverability | Decentalisation | -|--------------------------------------------| ---------- | ----- | ----------- | ------- | --------------- | --------------- | -| linkedResources field | :heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_plus_sign: | N/A | -| linkedResourceMetadata (Cheqd) | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign:| :heavy_plus_sign:|:heavy_plus_sign: | :heavy_check_mark: | -| DID URL Dereferencing (W3C specification) | :heavy_plus_sign: | N/A | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| DID URL Dereferencing (ToIP specification) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | -| RootsID - Cardano AnonCreds | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_check_mark: | -| Hyperledger AnonCreds | :heavy_plus_sign: | :heavy_check_mark:| :heavy_plus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| Trinsic | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| Solution #1 W3C | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | -| Solution #2 ToIP | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | +| Option | Simplicity | Trust | Scalability | Interop | Discoverability | Decentalisation | 
+|--------------------------------------------|--------------------------------------|--------------------|--------------------------------------|--------------------------------------|--------------------|--------------------| +| linkedResources field | :heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_plus_sign: | N/A | +| linkedResourceMetadata (Cheqd) | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_check_mark: | +| DID URL Dereferencing (W3C specification) | :heavy_plus_sign: | N/A | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| DID URL Dereferencing (ToIP specification) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | +| RootsID - Cardano AnonCreds | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_check_mark: | +| Hyperledger AnonCreds | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Trinsic | :heavy_minus_sign: | :heavy_check_mark: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Solution #1 W3C | :heavy_plus_sign: | :heavy_check_mark: | :heavy_plus_sign: | :heavy_plus_sign: | :heavy_minus_sign: | :heavy_check_mark: | +| Solution #2 ToIP | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign:/:heavy_plus_sign: | :heavy_plus_sign:/:heavy_minus_sign: | :heavy_plus_sign: | :heavy_check_mark: | --- diff --git a/docs/decisions/20230509-message-routing-for-multi-tenant.md b/docs/decisions/20230509-message-routing-for-multi-tenant.md index 5d51c1155d..7153614aa0 100644 --- a/docs/decisions/20230509-message-routing-for-multi-tenant.md +++ 
b/docs/decisions/20230509-message-routing-for-multi-tenant.md @@ -2,7 +2,7 @@ - Status: accepted - Deciders: Yurii Shynbuiev,David Poltorak, Benjamin Voiturier, Shailesh Patil -- Date: [2023-05-09] +- Date: [2023-05-09] - Tags: multi-tenant, routing, message ## Context and Problem Statement diff --git a/docs/decisions/20230515-mediator-message-storage.md b/docs/decisions/20230515-mediator-message-storage.md index 3057b16474..4f231df501 100644 --- a/docs/decisions/20230515-mediator-message-storage.md +++ b/docs/decisions/20230515-mediator-message-storage.md @@ -1,8 +1,8 @@ -# Mediator message storage +# Mediator message storage - Status: accepted - Deciders: Yurii Shynbuiev, Benjamin Voiturier, Shailesh Patil, Fabio Pinheiro , David Poltorak -- Date: [2023-05-09] +- Date: [2023-05-09] - Tags: storage, db, message, mongo, postgres, sql ## Context and Problem Statement @@ -11,11 +11,11 @@ Relational databases like PostgreSQL store data in structured tables, with rows SQL is used in PostgreSQL to save, retrieve, access, and manipulate the database data. While PostgreSQL may be ideal for managing structured data streams, it tends to struggle when dealing with big unstructured data, as maintaining such relations can increase time complexity significantly. Postgres SQL relies on relational data models that need to defined in advance -Change in any field in table requires lengthy process of migration scripts run and to maintain but it works there are tools available and we also use it PrismAgent, +Changing any field in a table requires a lengthy process of writing, running, and maintaining migration scripts; this works, tools are available, and we already use them in PrismAgent, But maintenance cost of software is higher. -Contrastingly, non-relational databases like MongoDB excel in these scenarios because data isn't constrained to a single table. 
-This approach permits massive data streams to be imported directly into the system without the burden of setting up increasingly complex relationships and keys. +Contrastingly, non-relational databases like MongoDB excel in these scenarios because data isn't constrained to a single table. +This approach permits massive data streams to be imported directly into the system without the burden of setting up increasingly complex relationships and keys. MongoDB typically stores data within documents using JSON (JavaScript Object Notation) or BSON (Binary JavaScript Object Notation), which simplifies the handling of big data and complex data streams. document database supports a rapid, iterative cycle of development the way that a document database turns data into code. MongoDB is faster at inserting and for queries that use nested references instead of joins @@ -23,9 +23,9 @@ In Mediator the data which we send or receive is json message and to process jso Postgres SQL by default in vertically scalable where as Mongo can scale horizontally, This can help in scalability problem Mediator messages store in simple and straight forward write there is no transactional workflow involved so we don't gain much by using relational db like postgres. -Below are the 2 options which we can use to reduce infrastructure management -MongoDB Atlas. Fully managed MongoDB in the cloud which can reduce the infrastructure management https://www.mongodb.com/atlas/database -Amazon DocumentDB (with MongoDB compatibility) https://aws.amazon.com/documentdb/ +Below are the 2 options which we can use to reduce infrastructure management +MongoDB Atlas. 
Fully managed MongoDB in the cloud which can reduce the infrastructure management +Amazon DocumentDB (with MongoDB compatibility) ## Decision Drivers - DIDCOMM messages are json based @@ -44,7 +44,7 @@ Amazon DocumentDB (with MongoDB compatibility) https://aws.amazon.com/documentd Chosen option: MongoDB because of storing unstructured json data and json queries that requires minimal changes to the existing code and provides the benefits for the current use cases. Is a NoSQL database that uses a document-oriented data model. Data is stored in a semi-structured format (BSON, similar to JSON), which can easily accommodate changes in data structure. This makes MongoDB particularly suitable for large volumes of data that may not be easily modeled in a relational schema. -Kafka Stream was also considered but current usecases are more towards storage of the messages, streaming is not the usecase for all the scenarios and the mediator pickup protocol (https://didcomm.org/pickup/3.0/) requirements need the flexibily to delete the messages read. +Kafka Stream was also considered but current usecases are more towards storage of the messages, streaming is not the usecase for all the scenarios and the mediator pickup protocol () requirements need the flexibily to delete the messages read. ### Positive Consequences @@ -62,21 +62,21 @@ Kafka Stream was also considered but current usecases are more towards storage o ### MongoDB -MongoDB provides flexibility with json storage and queries +MongoDB provides flexibility with json storage and queries -- Good, because it is horizontally scalabale +- Good, because it is horizontally scalabale - Good, because typically performs better with large, unstructured datasets and write-heavy applications - Good, because have strong communities and extensive support materials, so you can expect to find help when you encounter issues. 
- Bad, Is not full ACID compliance - Bad, Doesn't natively support complex joins like a relational database -## Refrences used -https://www.plesk.com/blog/various/mongodb-vs-postgresql/ -https://www.dbvis.com/thetable/json-vs-jsonb-in-postgresql-a-complete-comparison/ -https://severalnines.com/blog/overview-json-capabilities-within-postgresql/ -https://www.mongodb.com/docs/manual/core/schema-validation/ -https://www.mongodb.com/compare/mongodb-dynamodb -https://www.projectpro.io/article/dynamodb-vs-mongodb/826 +## Refrences used + + + + + + diff --git a/docs/decisions/20230518-data-isolation-for-multitenancy.md b/docs/decisions/20230518-data-isolation-for-multitenancy.md index ef2be24cba..923f432771 100644 --- a/docs/decisions/20230518-data-isolation-for-multitenancy.md +++ b/docs/decisions/20230518-data-isolation-for-multitenancy.md @@ -1,7 +1,7 @@ # Data isolation for multi-tenancy - Status: accepted -- Deciders: Benjamin Voiturier, Yurii Shynbuiev, Shailesh Patil +- Deciders: Benjamin Voiturier, Yurii Shynbuiev, Shailesh Patil - Date: 2023-05-10 - Tags: multi-tenancy, data-isolation, PostgreSQL @@ -13,7 +13,7 @@ This ADR is about the data isolation for multi-tenancy that must be implemented ## Context and Problem Statement -In a multi-tenant architecture, where multiple clients or tenants share the same infrastructure or application, data isolation is crucial to ensure the privacy and security of each tenant's data. +In a multi-tenant architecture, where multiple clients or tenants share the same infrastructure or application, data isolation is crucial to ensure the privacy and security of each tenant's data. The specific requirements for data isolation may vary depending on the system and its specific needs. 
However, here are some common requirements for achieving data isolation in a multi-tenant architecture: @@ -21,7 +21,7 @@ The specific requirements for data isolation may vary depending on the system an Tenants' data should be logically separated from each other, meaning that each tenant should have their own isolated environment within the system. This can be achieved through logical partitions, such as separate databases, schemas, or tables. -### Physical Separation +### Physical Separation In addition to logical separation, physical separation can provide an extra layer of isolation. This involves segregating tenants' data onto separate physical resources, such as servers, storage devices, or networks. Physical separation helps prevent data leakage or unauthorized access between tenants. @@ -119,7 +119,7 @@ Moreover, for the SaaS application to manage thousands of organizations and mill - The Complexity of the Implementation - this option can be implemented on top of the current codebase without significant refactoring of the codebase and additional work for infrastructure engineers. -### Negative Consequences +### Negative Consequences - Physical Separation - is not covered by this option @@ -176,7 +176,7 @@ The main advantages of Citus: - provides additional monitoring and statistics to manage the tenants - routing to the shard is managed by Citus using the `hash` of the table index (compared to AWS sharding option, the routing is done at the application layer and the system table contains the information about the mapping of the tenant to the instance of the database) -One of the previously described options `should` be implemented behind these options. +One of the previously described options `should` be implemented behind these options. 
- Good, because can manage millions of tenants - Good, because can manage the isolation (logical or physical bases on the configured mapping of the tenant to the database) diff --git a/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md b/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md index 527dc41fb6..04a97d19ef 100644 --- a/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md +++ b/docs/decisions/20230527-use-keycloak-and-jwt-tokens-for-authentication-and-authorisation-to-facilitate-multitenancy-in-cloud-agent.md @@ -37,7 +37,7 @@ As we transition to multi-tenancy, several critical questions emerge: ## Considered Options All options use OIDC and the Client Credentials Grant flow which is suitable for machine-to-machine use. - + We have not included an option where we write our own AuthN/AuthZ implementation. All options require an additional component to be added to the stack to store identity related data [Users, roles etc] and to potentially act as a Policy Decision Point (PDP), Policy Administration Point (PAP) and a Policyf Information Point (PIP). ### Keycloak as AuthN/AuthZ diff --git a/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md b/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md index 8575c25d5d..9316b7ed6c 100644 --- a/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md +++ b/docs/decisions/20230628-apollo-as-centralised-and-secure-cryptography-management-module.md @@ -14,7 +14,7 @@ Technical Story: [Apollo Cryptographic Module KMM | https://input-output.atlassi ### 1. Summary This proposal sets out to crystallize a long-term plan for Identus's cryptographic functionality. 
Rather than constructing an entirely new cryptographic functionality, our focus is on integrating robust, secure and tested libraries, meeting several key requirements in the process. -By leveraging the flexibility of Kotlin Multiplatform, this library will ensure strong, provable security, centralized management of all cryptography, easy upgrades, and efficient code reuse across multiple platforms. +By leveraging the flexibility of Kotlin Multiplatform, this library will ensure strong, provable security, centralized management of all cryptography, easy upgrades, and efficient code reuse across multiple platforms. A significant additional advantage of our chosen framework, particularly for the JavaScript version of this library, is the future potential to export to WebAssembly (WASM). @@ -99,15 +99,15 @@ In order to consider this completed or done the existing SDK's must have integra


#### Implementation resources -| Engineer | Role | Availability | -| ----------- | ----------- | ----------- | -| Francisco Javier Ribó | Engineering Lead + Developer | Part time | -| Yurii | Engineering + CoreDID Integration Lead | Part time | -| Gonçalo Frade | SDK Project Lead + Roadmap Lead + Developer | Part time | -| Alexandros Zacharakis Jesus Diaz Vico | Cryptography Engineers + Roadmap Lead | Part time | -| Ahmed Moussa | KMM Lead / architect + Developer | Fulltime | -| Cristian Gonzalez | Developer | Fulltime | -| Curtis HArding | Developer | Part time | +| Engineer                               | Role                                        | Availability | +|----------------------------------------|---------------------------------------------|--------------| +| Francisco Javier Ribó                  | Engineering Lead + Developer                | Part time    | +| Yurii                                  | Engineering + CoreDID Integration Lead      | Part time    | +| Gonçalo Frade                          | SDK Project Lead + Roadmap Lead + Developer | Part time    | +| Alexandros Zacharakis, Jesus Diaz Vico | Cryptography Engineers + Roadmap Lead       | Part time    | +| Ahmed Moussa                           | KMM Lead / Architect + Developer            | Full time    | +| Cristian Gonzalez                      | Developer                                   | Full time    | +| Curtis Harding                         | Developer                                   | Part time    |


@@ -117,7 +117,7 @@ The main goal of this section is to describe the process where we choose what co **Comments** -1. There is a risk of starting to add to Apollo "anything that looks like cryptography". For instance, the Anoncreds part that takes care of formatting the credentials (which is what anoncreds-rs does) should not go into Apollo. +1. There is a risk of starting to add to Apollo "anything that looks like cryptography". For instance, the Anoncreds part that takes care of formatting the credentials (which is what anoncreds-rs does) should not go into Apollo. 2. But the underlying cryptographic functionality (for which anoncreds-rs calls libursa) should go into Apollo. 3. Maybe something similar applies to HD wallets. diff --git a/docs/decisions/20230928-revocation-status-list-expansion-strategy.md b/docs/decisions/20230928-revocation-status-list-expansion-strategy.md index 76616156ca..b4ec063357 100644 --- a/docs/decisions/20230928-revocation-status-list-expansion-strategy.md +++ b/docs/decisions/20230928-revocation-status-list-expansion-strategy.md @@ -1,4 +1,4 @@ -# JWT credential revocation status list expansion strategy +# JWT credential revocation status list expansion strategy - Status: accepted - Decider: Benjamin Voiturier, Yurii Shynbuiev, Ezequiel Postan, Shota Jolbordi @@ -9,7 +9,7 @@ Technical Story: [Revocation status list expansion strategy decision | https://i ## Context and Problem Statement -In the process of devising a mechanism for revoking JWT credentials, we've opted to implement the [statusList2021](https://www.w3.org/TR/vc-status-list/) method. +In the process of devising a mechanism for revoking JWT credentials, we've opted to implement the [statusList2021](https://www.w3.org/TR/vc-status-list/) method. This approach entails the creation of a "status list credential" that incorporates a gZip compressed status list in the form of a bitString. 
The specification recommends a minimum size of 16 KB for the status list included in a credential. However, it does not delineate a maximum size, nor does it provide guidance on how to proceed if the selected status list surpasses its capacity to store information about revoked credentials. @@ -49,7 +49,7 @@ If we stick with the smallest recommended status list size, one revocation statu ## Pros and Cons of the Options -#### Option 1: +#### Option 1 Option 1 offers the primary advantage of being straightforward to implement. It is also important to note that Option 2 isn't significantly more challenging to implement, so we shouldn't overly prioritize this consideration. @@ -60,16 +60,16 @@ For example, a sequence of 5 zeros (00000) will be stored as 5(0), indicating fi Assuming that most credentials won't be revoked and will have an index of 0 in the status list, the gzipped status list in the status list credential should be very compact. This is the most crucial factor to consider in the end. -#### Option 2: +#### Option 2 -Option 2 slightly reduces privacy compared to Option 1 in certain scenarios. +Option 2 slightly reduces privacy compared to Option 1 in certain scenarios. For example, in cases where the number of Verifiable Credentials (VCs) starts small but grows over time. Initially, both options face the same issue with a small anonymity set due to the limited number of VCs issued. As the number of VCs increases, Option 1 maintains a continuously growing anonymity set. However, in Option 2, when the issuer reaches the 16KB limit and creates a new list, there will be a period where the new list has only a few VCs, resulting in a smaller anonymity set for VCs in the second list. -Option 2 however, has a big advantage considering upcoming need for AnonCreds revocation. +Option 2 however, has a big advantage considering upcoming need for AnonCreds revocation. 
AnonCreds doesn't allow for expanding the status list size once defined during revocation registry creation. Pushing back Option 2 for AnonCreds and starting with an initial capacity of 1 million credentials may not be efficient. The size of the attached TAILS FILE grows rapidly with capacity (e.g., 8.4MB for 32,768 VCs!). diff --git a/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md b/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md index 39665ce769..dd265cecfe 100644 --- a/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md +++ b/docs/decisions/20240116-use-zio-failures-and-defects-effectively.md @@ -123,15 +123,15 @@ the [ZIO Types of Errors](https://zio.dev/reference/error-management/types) docu That is, carefully distinguishing between: - **ZIO Failures** - - The expected/recoverable errors (i.e. domain-specific errors). - - Declared in the Error channel of the effect => ZIO[R, E, A]. - - Supposed to be handled by the caller to prevent call stack propagation. + - The expected/recoverable errors (i.e. domain-specific errors). + - Declared in the Error channel of the effect => ZIO[R, E, A]. + - Supposed to be handled by the caller to prevent call stack propagation. - **ZIO Defects** - - The unexpected/unrecoverable errors. - - Not represented in the ZIO effect. - - We do NOT expect the caller to handle them. - - Propagated throughout the call stack until converted to a Failure or logged for traceability and debugging + - The unexpected/unrecoverable errors. + - Not represented in the ZIO effect. + - We do NOT expect the caller to handle them. + - Propagated throughout the call stack until converted to a Failure or logged for traceability and debugging purposes by the uppermost layer. 
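
As an editorial aside between patches: the revocation-status-list hunks above rest on the claim that a gzip-compressed, mostly-zero bitstring stays very compact because DEFLATE run-length-encodes long runs of identical bytes (the "5(0)" example). A minimal sketch that sanity-checks this with the Python standard library only; the 16 KB size and the revoked indices are illustrative assumptions, not values from the ADR:

```python
import gzip

# The spec's recommended minimum status list size: 16 KB = 131,072 one-bit entries.
SIZE_BYTES = 16 * 1024

# All-zero list: no credential revoked yet.
empty_list = bytes(SIZE_BYTES)

# The same list with a handful of hypothetical revoked indices set to 1.
revoked = bytearray(SIZE_BYTES)
for index in (17, 42, 99_000):  # illustrative credential indices only
    revoked[index // 8] |= 1 << (7 - index % 8)

compressed_empty = gzip.compress(empty_list)
compressed_some = gzip.compress(bytes(revoked))

# Long runs of zero bytes collapse to a tiny fraction of the 16,384
# uncompressed bytes, which is the "very compact" property the ADR relies on.
print(len(compressed_empty), len(compressed_some))
```

Running this shows both compressed lists are orders of magnitude smaller than the raw 16 KB bitstring while few credentials are revoked, which supports keeping Option 2's per-list capacity at the recommended minimum.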

From f1b2928909d17e7b169206cf73c46904b4adfb52 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Mon, 20 May 2024 15:33:57 +0700 Subject: [PATCH 3/3] style: apply linters automatic fixes (#1054) Signed-off-by: Hyperledger Bot Co-authored-by: Hyperledger Bot --- docs/decisions/20230405-did-linked-resources.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/decisions/20230405-did-linked-resources.md b/docs/decisions/20230405-did-linked-resources.md index 89836c3b1b..b6f0256ccb 100644 --- a/docs/decisions/20230405-did-linked-resources.md +++ b/docs/decisions/20230405-did-linked-resources.md @@ -393,8 +393,8 @@ As the solution is based on the latest ToIP specification, it derives all positi #### Negative Consequences - scalability: the specification is inspired by the Cheqd approach to store the linkedResourceMetadata inside of the DID Document - the convention for references and the logic must be carefully reviewed: - - `schemaId` in this solution is `{didRef}/resources/{cardano_transaction_id}`, so it doesn't refer to the `id` but to the Tx where everything else is stored (it's an interesting idea for a stateless design) - - resource metadata is built according to the ToIP specification but for AnonCreds entities only: credential schema and credential definition. + - `schemaId` in this solution is `{didRef}/resources/{cardano_transaction_id}`, so it doesn't refer to the `id` but to the Tx where everything else is stored (it's an interesting idea for a stateless design) + - resource metadata is built according to the ToIP specification but for AnonCreds entities only: credential schema and credential definition. - technology stack: it doesn't fit the current platform, but can be used for inspiration. 
@@ -495,9 +495,9 @@ Taking into account the advantages and disadvantages of the existing solutions t -the resource is linked to the DID by convention specified in the W3C specification, so specifying the resource in the DID URL and defining the service endpoint that exposes the resource allows to discover and fetch the resource using the Universal Resolver - as an option, the same resource can be discovered and fetched by the Identus platform backend and SDK without loading the Universal resolver - the resource integrity must be guaranteed by one of the following options: - - by signing the payload with one of the DID's keys or - - by publishing the resource metadata that contains the information about the resource (id, type, name, media type, hash) on-chain or - - for the resource that is less than the blockchain limitation (up to 64KB) by publishing the resource together with the hash, and/or signature + - by signing the payload with one of the DID's keys or + - by publishing the resource metadata that contains the information about the resource (id, type, name, media type, hash) on-chain or + - for the resource that is less than the blockchain limitation (up to 64KB) by publishing the resource together with the hash, and/or signature - the resource can be stored in the cloud storage - PostgreSQL database - for indexing and lookup API As the Identus platform can leverage the Cardano blockchain and there is a strong requirement for longevity and security - the resource together with the signature and/or hash must be stored in the Cardano blockchain. 

@@ -506,9 +506,9 @@ An example of this solution will be the following (concerning the current infras - prism-node must be able to store the generic resource payload, signature and/or hash on-chain and restore the given resource in the underlying database (PostgreSQL) for indexing and lookup API - credential schema (or any other resource module) must be a part of the Atala SSI infrastructure and allow - - publishing the concrete resource as a generic resource using the prism-node API - - expose the API for discovery and fetching the resource by URL - - expose the API for managing the resources (create, publish, lookup with pagination) + - publishing the concrete resource as a generic resource using the prism-node API + - exposing the API for discovering and fetching the resource by URL + - exposing the API for managing the resources (create, publish, lookup with pagination) - the Universal Resolver for the DID Method must be able to discover and fetch the resource by DID URL - if needed, SDK and backend services can fetch the resources directly (not via the Universal Resolver)
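
As a closing editorial aside: among the integrity options listed in the DID-linked-resources hunks above is "publishing the resource together with the hash, and/or signature". A minimal sketch of that hash-based option, assuming the `{didRef}/resources/{id}` URL convention from the ADR; the DID, resource identifier, and schema payload below are hypothetical, and the helper name is ours:

```python
import hashlib
import json

def resource_reference(did: str, resource_id: str, payload: bytes) -> dict:
    """Build an illustrative DID-URL reference for a published resource,
    carrying the sha256 digest a consumer recomputes to check integrity."""
    return {
        "url": f"{did}/resources/{resource_id}",      # convention from the ADR
        "mediaType": "application/json",
        "hash": hashlib.sha256(payload).hexdigest(),  # integrity anchor
    }

# A hypothetical credential-schema payload published alongside its hash.
schema = json.dumps({"name": "DrivingLicense", "version": "1.0"}).encode()
ref = resource_reference("did:prism:123abc", "schema-1", schema)

# A consumer that fetches the payload re-hashes it and compares digests.
assert ref["hash"] == hashlib.sha256(schema).hexdigest()
print(ref["url"])  # did:prism:123abc/resources/schema-1
```

The same shape extends to the on-chain metadata option: the dict is what would be anchored, while the payload itself can live in cloud storage or PostgreSQL for indexing and lookup.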