Merge remote-tracking branch 'origin/main' into feat/crypto-agility-tests
yshyn-iohk committed May 23, 2024
2 parents b0351cc + 21f5f4f commit 5775f7a
Showing 30 changed files with 582 additions and 431 deletions.
1 change: 1 addition & 0 deletions .github/workflows/integration-tests.yml
@@ -26,6 +26,7 @@ jobs:
run-integration-tests:
name: "Run integration tests"
runs-on: ubuntu-latest
+if: ${{ !contains(github.event.pull_request.title, '[skip ci]') }}
env:
REPORTS_DIR: "tests/integration-tests/target/site/serenity"
steps:
1 change: 1 addition & 0 deletions .github/workflows/performance-tests.yml
@@ -21,6 +21,7 @@ jobs:
run-e2e-tests:
name: "Run performance tests"
runs-on: ubuntu-latest
+if: ${{ !contains(github.event.pull_request.title, '[skip ci]') }}
steps:
- name: Checkout
uses: actions/checkout@v4
1 change: 1 addition & 0 deletions .github/workflows/unit-tests.yml
@@ -16,6 +16,7 @@ jobs:
build-and-unit-tests:
name: "Build and unit tests"
runs-on: self-hosted
+if: ${{ !contains(github.event.pull_request.title, '[skip ci]') }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
container:
6 changes: 3 additions & 3 deletions build.sbt
@@ -72,10 +72,10 @@ lazy val V = new {
val testContainersScala = "0.41.3"
val testContainersJavaKeycloak = "3.2.0" // scala-steward:off

-val doobie = "1.0.0-RC2"
-val quill = "4.7.3"
+val doobie = "1.0.0-RC5"
+val quill = "4.8.4"
val flyway = "9.22.3"
-val postgresDriver = "42.2.29"
+val postgresDriver = "42.7.3"
val logback = "1.4.14"
val slf4j = "2.0.13"

62 changes: 62 additions & 0 deletions docs/decisions/20240520-use-did-urls-to-reference-resources.md
@@ -0,0 +1,62 @@
# Storage for SSI-related resources

- Status: accepted
- Deciders: Javi, Ben, Yurii
- Date: 2024-05-20
- Tags: Verifiable Data Registry (VDR), decentralized storage


## Context and Problem Statement

The main question to answer is: What is the most practical way to store resources related to VC verification and revocation?

In the context of SSI, there are resources such as credential definitions, schemas, revocation lists, etc., that are referenced inside VCs. These resources need to be accessible to different parties in order to verify the credentials. In this ADR, we discuss the trade-offs of different storage alternatives.

## Decision Drivers

A desired solution should balance:

- Methods for data integrity and authenticity validation: For instance, when referring to a credential definition, the user retrieving the resource should be able to validate that it hasn't been altered since its creation. For more dynamic resources, such as revocation lists, which are updated over time, the recipient needs to validate that the resource was created by the issuer.
- Data availability: It is important for resources to be highly available. A missing resource can make it impossible to validate a VC.
- Decentralization: The design should avoid unnecessary central points of failure.
- Historical data requests: Some use cases may require querying historical data, for example, retrieving a revocation list as it was at a certain point in the past.
- Write access control: Issuers, who create most of the resources, need control over the data they store so they can update it when needed and prevent third parties from making unauthorized changes.
- Latency, throughput, deployment costs: Any solution should strike a reasonable balance of non-functional requirements, such as achieving sufficient throughput and low enough latency for the system to be practical.

## Considered Options

We considered the following alternatives, which cover the approaches discussed across the broader SSI ecosystem at the time of writing.

- URLs and traditional HTTP servers: in this approach, each resource is identified with a URL and stored on traditional servers. The URLs encode hashes as query parameters to enforce integrity for static resources. Dynamic resources are signed by the resource creator's key.
- DID URLs and traditional HTTP servers: in this variation, resources are still stored on servers. Resources are identified by DID URLs that dereference to services in the associated DID document. The services contain the final URL to retrieve the corresponding resources. Once again, hashes are associated with static resources as DID URL query parameters, while dynamic resources are signed adequately.
- IPFS: an IPFS approach would be useful for storing static resources using IPFS identifiers for them. Dynamic resources, however, become more challenging. Even though we recognize the existence of constructions like IPNS and other layers to manage dynamic resources, we find them less secure in terms of availability and consistency guarantees.
- Ledger-based storage (Cardano in particular): in this approach, resources would be stored in transaction metadata posted on-chain. Data availability and integrity can be inherited from the underlying ledger.
- A combination of the previous methods and a ledger: as above, data references are posted on-chain, but the actual resources are stored on servers. The servers could be traditional HTTP servers or IPFS nodes.
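Both URL-based options above rely on a hash embedded in the identifier to protect static resources. A minimal integrity check can be sketched as follows (stdlib only; the `resourceHash` parameter name and the URL shapes are illustrative assumptions, not part of any specification):

```kotlin
import java.security.MessageDigest

// Extract the expected hash from the query string of a URL or DID URL,
// e.g. "did:example:123?resourceHash=ab12..." (parameter name is hypothetical).
fun expectedHash(url: String): String? =
    url.substringAfter('?', "")
        .split('&')
        .map { it.split('=', limit = 2) }
        .firstOrNull { it[0] == "resourceHash" }
        ?.getOrNull(1)

// Hex-encoded SHA-256 digest of the retrieved resource bytes.
fun sha256Hex(bytes: ByteArray): String =
    MessageDigest.getInstance("SHA-256").digest(bytes)
        .joinToString("") { "%02x".format(it) }

// A static resource is accepted only if its digest matches the hash in the URL.
fun verifyStaticResource(url: String, resource: ByteArray): Boolean {
    val expected = expectedHash(url) ?: return false
    return expected == sha256Hex(resource)
}
```

A dynamic resource, such as a revocation list, would instead carry a signature from the issuer's key, verified against the issuer's DID document.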

## Decision Outcome

After careful analysis, we concluded the following:
- There is an architectural need to develop a "proxy" component, a.k.a. the VDR proxy, that acts as a first phase of resource resolution. Behind the VDR proxy, different storage implementations can be added as extensions.
- With respect to specific implementations:
+ Ledger-based storage at this stage introduces latency and throughput bottlenecks, costs, and other issues unsuitable for most use cases.
+ Hybrid solutions that make use of a ledger share similar drawbacks.
+ Distributed hash tables (such as IPFS) do not handle dynamic resources (such as revocation lists) efficiently.
+ We concluded that a reasonable first iteration could be delivered using DID URLs to identify resources while, a priori, storing them on traditional HTTP servers.
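The proxy-plus-extensions architecture described above can be sketched roughly like this (all names are hypothetical; a real implementation would dereference the DID document's service endpoint and fetch the resource over HTTP):

```kotlin
// Hypothetical storage backend plugged in behind the VDR proxy.
interface VdrStorage {
    fun fetch(location: String): ByteArray?
}

// First-phase resolver: maps a DID URL to a storage location, then delegates.
class VdrProxy(
    private val services: Map<String, String>, // DID -> service base URL (stand-in for a DID document lookup)
    private val storage: VdrStorage,
) {
    // Dereference "did?resourceId=x" to "<service base>/x" and fetch it.
    fun resolve(didUrl: String): ByteArray? {
        val did = didUrl.substringBefore('?')
        val resourceId = didUrl.substringAfter("resourceId=", "")
        if (resourceId.isEmpty()) return null
        val base = services[did] ?: return null
        return storage.fetch("$base/$resourceId")
    }
}
```

Swapping `VdrStorage` implementations (HTTP server, IPFS node, ledger reader) is what keeps the proxy extensible.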

### Positive Consequences

- The implementation of a VDR proxy provides a transparent abstraction that makes it possible to extend the set of methods used to retrieve resources.
- DID URLs give issuers a fair degree of decentralization and control over where resources are located.

### Negative Consequences

- The W3C specifications leave DID URL dereferencing under-specified. This forces us to define the under-specified behaviour ourselves or create our own solution.

## Links

We leave a list of useful links for context:

- [AnonCreds Methods Registry](https://hyperledger.github.io/anoncreds-methods-registry/)
- [AnonCreds Specification](https://hyperledger.github.io/anoncreds-spec/)
- [W3C DID resolution algorithm](https://w3c-ccg.github.io/did-resolution/)

8 changes: 5 additions & 3 deletions docs/docusaurus/dids/create.md
@@ -54,8 +54,9 @@ The result should show an empty list, as no DIDs exist on this Cloud Agent instance.
### 2. Create the Cloud Agent managed DID using DID registrar endpoint

The DID controller can create a new DID by sending a [DID document](/docs/concepts/glossary#did-document) template to the Agent.
-Since key pairs are generated and managed by the Cloud Agent, DID controller only has to specify the key `id` and its purpose (e.g., `authentication`, `assertionMethod`, etc.).
-The current PRISM DID method supports a key with a single purpose, but it is extendible to support a key with multiple purposes in the future.
+Since key pairs are generated and managed by the Cloud Agent, the DID controller only has to specify the key `id`,
+`purpose` (`authentication`, `assertionMethod`, etc.), and optional `curve` (`secp256k1`, `Ed25519`, `X25519`).
+If the `curve` is omitted, the agent uses the `secp256k1` curve by default.

```bash
curl --location --request POST 'http://localhost:8080/cloud-agent/did-registrar/dids' \
@@ -67,7 +68,8 @@ curl --location --request POST 'http://localhost:8080/cloud-agent/did-registrar/
"publicKeys": [
{
"id": "auth-1",
-"purpose": "authentication"
+"purpose": "authentication",
+"curve": "secp256k1"
}
],
"services": []
3 changes: 2 additions & 1 deletion docs/docusaurus/dids/update.md
@@ -120,7 +120,8 @@ curl --location --request POST 'http://localhost:8080/cloud-agent/did-registrar/
"actionType": "ADD_KEY",
"addKey": {
"id": "key-2",
-"purpose": "authentication"
+"purpose": "authentication",
+"curve": "secp256k1"
}
}
]
@@ -3,13 +3,6 @@ package org.hyperledger.identus.pollux.anoncreds
import org.scalatest.flatspec.AnyFlatSpec

import scala.jdk.CollectionConverters.*
-import org.hyperledger.identus.pollux.anoncreds.{
-AnoncredLinkSecretWithId,
-AnoncredLinkSecret,
-AnoncredPresentationRequest,
-AnoncredLib,
-AnoncredCredentialRequests
-}

/** polluxAnoncredsTest/Test/testOnly org.hyperledger.identus.pollux.anoncreds.PoCNewLib
*/
@@ -5,7 +5,6 @@ import cats.effect.kernel.Resource
import cats.effect.std.Dispatcher
import com.zaxxer.hikari.HikariConfig
import doobie.hikari.HikariTransactor
-import doobie.util.ExecutionContexts
import doobie.util.transactor.Transactor
import zio.*
import zio.interop.catz.*
@@ -18,7 +17,7 @@ object TransactorLayer {
// Here we use `Dispatcher.apply`
// but at the agent level it is `Dispatcher.parallel` due to evicted version
// Dispatcher.parallel[Task].allocated.map { case (dispatcher, _) =>
-Dispatcher[Task].allocated.map { case (dispatcher, _) =>
+Dispatcher.parallel[Task].allocated.map { case (dispatcher, _) =>
given Dispatcher[Task] = dispatcher
TransactorLayer.hikari[Task](config)
}
@@ -34,7 +33,7 @@ object TransactorLayer {
// Here we use `Dispatcher.apply`
// but at the agent level it is `Dispatcher.parallel` due to evicted version
// Dispatcher.parallel[ContextAwareTask].allocated.map { case (dispatcher, _) =>
-Dispatcher[ContextAwareTask].allocated.map { case (dispatcher, _) =>
+Dispatcher.parallel[ContextAwareTask].allocated.map { case (dispatcher, _) =>
given Dispatcher[ContextAwareTask] = dispatcher
TransactorLayer.hikari[ContextAwareTask](config)
}
@@ -56,14 +55,7 @@
hikariConfig
}
.map { hikariConfig =>
-val pool: Resource[A, Transactor[A]] = for {
-// Resource yielding a transactor configured with a bounded connect EC and an unbounded
-// transaction EC. Everything will be closed and shut down cleanly after use.
-ec <- ExecutionContexts.fixedThreadPool[A](config.awaitConnectionThreads) // our connect EC
-xa <- HikariTransactor.fromHikariConfig[A](hikariConfig, ec)
-// xa <- HikariTransactor.fromHikariConfig[A](hikariConfig, Some(LogHandler.jdkLogHandler))
-} yield xa
-
+val pool: Resource[A, Transactor[A]] = HikariTransactor.fromHikariConfig[A](hikariConfig)
pool.toManaged.toLayer[Transactor[A]]
}

4 changes: 2 additions & 2 deletions tests/integration-tests/README.md
@@ -46,9 +46,9 @@ The project structure is represented below:
│ ├── abilities -> contains the abilities of the actors
│ ├── common -> contains the common classes (test constants and helper functions)
│ ├── config -> contains the configuration classes (Hoplite)
-│ ├── features -> contains the features implementation steps
│ ├── interactions -> contains the interactions of the actors
-│ └── runners -> contains the test runners to execute the tests
+│ ├── models -> contains the models
+│ └── steps -> contains the features implementation steps
└── resources -> contains the test resources
├── configs -> contains the test configuration files
├── containers -> contains the Docker Compose files to start the test environment
@@ -39,9 +39,7 @@ open class ListenToEvents(
TestConstants.EVENT_TYPE_CONNECTION_UPDATED -> connectionEvents.add(gson.fromJson(eventString, ConnectionEvent::class.java))
TestConstants.EVENT_TYPE_ISSUE_CREDENTIAL_RECORD_UPDATED -> credentialEvents.add(gson.fromJson(eventString, CredentialEvent::class.java))
TestConstants.EVENT_TYPE_PRESENTATION_UPDATED -> presentationEvents.add(gson.fromJson(eventString, PresentationEvent::class.java))
-TestConstants.EVENT_TYPE_DID_STATUS_UPDATED -> {
-didEvents.add(gson.fromJson(eventString, DidEvent::class.java))
-}
+TestConstants.EVENT_TYPE_DID_STATUS_UPDATED -> didEvents.add(gson.fromJson(eventString, DidEvent::class.java))
else -> {
throw IllegalArgumentException("ERROR: unknown event type ${event.type}")
}
@@ -56,7 +54,7 @@ open class ListenToEvents(
return ListenToEvents(url, webhookPort)
}

-fun `as`(actor: Actor): ListenToEvents {
+fun with(actor: Actor): ListenToEvents {
return actor.abilityTo(ListenToEvents::class.java)
}
}
28 changes: 0 additions & 28 deletions tests/integration-tests/src/test/kotlin/common/Utils.kt

This file was deleted.

65 changes: 60 additions & 5 deletions tests/integration-tests/src/test/kotlin/models/Events.kt
@@ -2,10 +2,7 @@ package models

import com.google.gson.JsonElement
import com.google.gson.annotations.SerializedName
-import org.hyperledger.identus.client.models.Connection
-import org.hyperledger.identus.client.models.IssueCredentialRecord
-import org.hyperledger.identus.client.models.ManagedDID
-import org.hyperledger.identus.client.models.PresentationStatus
+import org.hyperledger.identus.client.models.*

data class Event(
@SerializedName("type") var type: String,
@@ -35,10 +32,68 @@ data class PresentationEvent(
@SerializedName("type") var type: String,
@SerializedName("id") var id: String,
@SerializedName("ts") var ts: String,
-@SerializedName("data") var data: PresentationStatus,
+@SerializedName("data") var data: PresentationStatusAdapter, // FIXME: rollback to PresentationStatus when Status is fixed
@SerializedName("walletId") var walletId: String,
)

data class PresentationStatusAdapter( // FIXME: delete this class when PresentationStatus.Status is fixed
@SerializedName("presentationId") val presentationId: String,
@SerializedName("thid") val thid: String,
@SerializedName("role") val role: PresentationStatus.Role,
@SerializedName("status") val status: Status,
@SerializedName("metaRetries") val metaRetries: Int,
@SerializedName("proofs") val proofs: List<ProofRequestAux>? = null,
@SerializedName("data") val `data`: List<String>? = null,
@SerializedName("connectionId") val connectionId: String? = null,
) {
enum class Status(val value: String) {
@SerializedName(value = "RequestPending")
REQUEST_PENDING("RequestPending"),

@SerializedName(value = "RequestSent")
REQUEST_SENT("RequestSent"),

@SerializedName(value = "RequestReceived")
REQUEST_RECEIVED("RequestReceived"),

@SerializedName(value = "RequestRejected")
REQUEST_REJECTED("RequestRejected"),

@SerializedName(value = "PresentationPending")
PRESENTATION_PENDING("PresentationPending"),

@SerializedName(value = "PresentationGenerated")
PRESENTATION_GENERATED("PresentationGenerated"),

@SerializedName(value = "PresentationSent")
PRESENTATION_SENT("PresentationSent"),

@SerializedName(value = "PresentationReceived")
PRESENTATION_RECEIVED("PresentationReceived"),

@SerializedName(value = "PresentationVerified")
PRESENTATION_VERIFIED("PresentationVerified"),

@SerializedName(value = "PresentationAccepted")
PRESENTATION_ACCEPTED("PresentationAccepted"),

@SerializedName(value = "PresentationRejected")
PRESENTATION_REJECTED("PresentationRejected"),

@SerializedName(value = "ProblemReportPending")
PROBLEM_REPORT_PENDING("ProblemReportPending"),

@SerializedName(value = "ProblemReportSent")
PROBLEM_REPORT_SENT("ProblemReportSent"),

@SerializedName(value = "ProblemReportReceived")
PROBLEM_REPORT_RECEIVED("ProblemReportReceived"),

@SerializedName(value = "PresentationVerificationFailed")
PRESENTATION_VERIFICATION_FAILED("PresentationVerificationFailed"),
}
}

data class DidEvent(
@SerializedName("type") var type: String,
@SerializedName("id") var id: String,
20 changes: 20 additions & 0 deletions tests/integration-tests/src/test/kotlin/models/JwtCredential.kt
@@ -0,0 +1,20 @@
package models

import com.jayway.jsonpath.DocumentContext
import com.jayway.jsonpath.JsonPath
import java.util.Base64

class JwtCredential(base64: String) {
private val payload: DocumentContext

init {
val jwt = String(Base64.getDecoder().decode(base64))
val parts = jwt.split(".")
payload = JsonPath.parse(String(Base64.getUrlDecoder().decode(parts[1])))
}

fun statusListId(): String {
val listUrl = payload.read<String>("$.vc.credentialStatus.statusListCredential")
return listUrl.split("/credential-status/")[1]
}
}
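The new `JwtCredential` helper above decodes twice: the credential arrives base64-encoded as a whole, and the JWT's payload segment is base64url-encoded again. That round trip can be exercised with a stdlib-only sketch (without the JsonPath dependency; the sample JSON is illustrative):

```kotlin
import java.util.Base64

// Encode a credential the way the class expects to receive it: a base64-wrapped
// "header.payload.signature" JWT whose segments are base64url-encoded.
fun encodeSample(payloadJson: String): String {
    val urlEnc = Base64.getUrlEncoder().withoutPadding()
    val jwt = urlEnc.encodeToString("""{"alg":"ES256K"}""".toByteArray()) +
        "." + urlEnc.encodeToString(payloadJson.toByteArray()) +
        ".sig"
    return Base64.getEncoder().encodeToString(jwt.toByteArray())
}

// Mirror of JwtCredential's init block: unwrap the outer base64, split the JWT,
// then base64url-decode the payload segment.
fun decodePayload(base64: String): String {
    val jwt = String(Base64.getDecoder().decode(base64))
    val parts = jwt.split(".")
    return String(Base64.getUrlDecoder().decode(parts[1]))
}
```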
35 changes: 11 additions & 24 deletions tests/integration-tests/src/test/kotlin/steps/common/CommonSteps.kt
@@ -33,31 +33,18 @@ class CommonSteps {

@Given("{actor} has an issued credential from {actor}")
fun holderHasIssuedCredentialFromIssuer(holder: Actor, issuer: Actor) {
-holder.attemptsTo(
-Get.resource("/issue-credentials/records"),
-)
-holder.attemptsTo(
-Ensure.thatTheLastResponse().statusCode().isEqualTo(HttpStatus.SC_OK),
-)
-val receivedCredential = SerenityRest.lastResponse().get<IssueCredentialRecordPage>().contents!!.findLast { credential ->
-credential.protocolState == IssueCredentialRecord.ProtocolState.CREDENTIAL_RECEIVED &&
-credential.credentialFormat == IssueCredentialRecord.CredentialFormat.JWT
-}
+actorsHaveExistingConnection(issuer, holder)

-if (receivedCredential != null) {
-holder.remember("issuedCredential", receivedCredential)
-} else {
-val publishDidSteps = PublishDidSteps()
-val issueSteps = IssueCredentialsSteps()
-actorsHaveExistingConnection(issuer, holder)
-publishDidSteps.agentHasAnUnpublishedDID(holder)
-publishDidSteps.agentHasAPublishedDID(issuer)
-issueSteps.issuerOffersACredential(issuer, holder, "short")
-issueSteps.holderReceivesCredentialOffer(holder)
-issueSteps.holderAcceptsCredentialOfferForJwt(holder)
-issueSteps.acmeIssuesTheCredential(issuer)
-issueSteps.bobHasTheCredentialIssued(holder)
-}
+val publishDidSteps = PublishDidSteps()
+publishDidSteps.createsUnpublishedDid(holder)
+publishDidSteps.agentHasAPublishedDID(issuer)
+
+val issueSteps = IssueCredentialsSteps()
+issueSteps.issuerOffersACredential(issuer, holder, "short")
+issueSteps.holderReceivesCredentialOffer(holder)
+issueSteps.holderAcceptsCredentialOfferForJwt(holder)
+issueSteps.acmeIssuesTheCredential(issuer)
+issueSteps.bobHasTheCredentialIssued(holder)
}

@Given("{actor} and {actor} have an existing connection")