tenant-management-service

This is the primary service of the control plane responsible for onboarding a tenant and triggering its provisioning.

Overview

A microservice for handling tenant management operations. It provides -

  • Lead creation and verification
  • Tenant onboarding of both pooled and siloed tenants
  • Billing and invoicing
  • Provisioning of resources for siloed and pooled tenants
  • IDP - configure an identity provider

Workflow

(workflow diagram)

Installation

Install the Tenant Management Service using npm or yarn:

$ [npm install | yarn add] @sourceloop/ctrl-plane-tenant-management-service

Usage

  • Create a new LoopBack 4 application (if you don't have one already): lb4 testapp
  • Install the tenant management service: npm i @sourceloop/ctrl-plane-tenant-management-service
  • Set the environment variables.
  • Run the migrations.
  • Add the TenantManagementServiceComponent to your LoopBack 4 application (in application.ts).
    // import the TenantManagementServiceComponent
    import {TenantManagementServiceComponent} from '@sourceloop/ctrl-plane-tenant-management-service';
    // add Component for TenantManagementService
    this.component(TenantManagementServiceComponent);
  • Set up a LoopBack 4 datasource with the dataSourceName property set to TenantManagementDB. An example datasource is shown in the Setting up a DataSource section below.
  • Bind any of the custom providers you need. A minimal application.ts sketch combining these steps follows this list.
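
For reference, here is a minimal application.ts sketch combining the steps above. It assumes an application scaffolded with lb4 testapp; the class name (TestApp) is illustrative and not part of the service's API.

import {BootMixin} from '@loopback/boot';
import {ApplicationConfig} from '@loopback/core';
import {RepositoryMixin} from '@loopback/repository';
import {RestApplication} from '@loopback/rest';
import {TenantManagementServiceComponent} from '@sourceloop/ctrl-plane-tenant-management-service';

export class TestApp extends BootMixin(RepositoryMixin(RestApplication)) {
  constructor(options: ApplicationConfig = {}) {
    super(options);
    // register the tenant management component
    this.component(TenantManagementServiceComponent);
    // bind any custom providers you need here (e.g. your event connector, see below)
    this.projectRoot = __dirname;
  }
}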

Onboarding a tenant

  • The onboarding process starts through the concept of a Lead. A Lead is a prospective client who may or may not end up becoming a tenant in our system.
  • The overall flow is summarized in the workflow diagram above.
  • The Lead is created through the POST /leads endpoint, which creates a Lead and sends an email to verify the lead's email address.
  • The mail contains a link that should direct to a front-end application, which in turn calls the following APIs using a temporary authorization code included in the mail.
  • The front-end application first calls /leads/{id}/verify, which updates the validated status of the lead in the DB and returns a new JWT token that can be used for subsequent calls.
  • If the token is validated in the previous step, the UI should call the /leads/{id}/tenants endpoint with the necessary payload (as per the Swagger documentation).
  • This endpoint onboards the tenant in the DB, and the facade is then supposed to trigger the relevant events using the /tenants/{id}/provision endpoint.
  • The provisioning endpoint invokes the publish method on the EventConnector. This connector's purpose is to give consumers a place to write their event-publishing logic, and your custom service can be bound to the EventConnectorBinding key exported by the service. Refer to the Amazon EventBridge implementation example in the sandbox; a sketch of such a connector follows this list.
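
A minimal sketch of such a connector is shown below. The interface name hinted in the comment, the publish signature, and the binding call are assumptions for illustration; verify them against the exports of @sourceloop/ctrl-plane-tenant-management-service and the sandbox example.

import {BindingScope, injectable} from '@loopback/core';

// Illustrative connector that receives the provisioning event from the
// /tenants/{id}/provision endpoint and forwards it to your broker of choice
// (e.g. Amazon EventBridge, SQS, or an internal queue).
@injectable({scope: BindingScope.SINGLETON})
export class MyEventConnector /* implements the service's event connector interface */ {
  async publish(event: unknown): Promise<void> {
    // consumer-specific publishing logic goes here
    console.log('publishing provisioning event', event);
  }
}

// in application.ts - bind the connector to the key exported by the service
// this.bind(EventConnectorBinding.key).toClass(MyEventConnector);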

IDP - Identity Provider

The IDP (Identity Provider) Controller provides an endpoint to manage identity provider configurations for tenants. It supports multiple identity providers, such as Keycloak and Auth0, and ensures secure handling of identity provider setup requests through rate-limiting, authorization, and input validation.

Features

Multi-IDP Support:
  • Supports Keycloak and Auth0 as identity providers.
  • Extensible for additional providers like Cognito.
Bindings:

  • TenantManagementServiceBindings.IDP_KEYCLOAK - Provides the Keycloak configuration handler.
  • TenantManagementServiceBindings.IDP_AUTH0 - Provides the Auth0 configuration handler.

A switch statement in the controller selects the appropriate identity provider (IDP) configuration based on the identityProvider value in the request payload.

  • AUTH0: Calls idpAuth0Provider to configure Auth0.
  • KEYCLOAK: Calls idpKeycloakProvider to configure Keycloak.

Finally, it returns the response (res) from the selected provider.

export interface IdpResp {
  authId: string;
}

authId is the id of the user created on the identity provider.
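
For reference, here is a minimal sketch of the selection logic described above. The function name, parameter shapes, and the string values of identityProvider are illustrative assumptions (the service may expose an enum for them), so check the actual controller code and package exports.

import {HttpErrors} from '@loopback/rest';
// IdpResp is assumed to be exported by the package, as shown above
import {IdpResp} from '@sourceloop/ctrl-plane-tenant-management-service';

// Illustrative stand-in for the controller's handler: picks the bound IDP
// provider based on the identityProvider field of the request payload.
async function configureIdp(
  payload: {identityProvider: string},
  idpAuth0Provider: (dto: object) => Promise<IdpResp>,
  idpKeycloakProvider: (dto: object) => Promise<IdpResp>,
): Promise<IdpResp> {
  switch (payload.identityProvider) {
    case 'auth0':
      // configure Auth0 and return the created user's authId
      return idpAuth0Provider(payload);
    case 'keycloak':
      // configure Keycloak and return the created user's authId
      return idpKeycloakProvider(payload);
    default:
      throw new HttpErrors.BadRequest('Unsupported identity provider');
  }
}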

Webhook Integration

  • A webhook endpoint is available in the service that is supposed to update the status of a tenant based on updates from the third party responsible for the actual provisioning of resources.
  • To add Webhook configuration in your application, add the WebhookTenantManagementServiceComponent to your Loopback4 Application (in application.ts).
    // import the WebhookTenantManagementServiceComponent
    import {WebhookTenantManagementServiceComponent} from '@sourceloop/ctrl-plane-tenant-management-service';
    // add the component here
    this.component(WebhookTenantManagementServiceComponent);
  • To test this locally, ensure that your local service is exposed through a tool like ngrok or localtunnel.
  • Your third-party tool is responsible for hitting this endpoint with the expected payload and a signature and timestamp in the x-signature and x-timestamp headers respectively.
  • The signature is derived using the following logic (written in Node.js but could be implemented in any other language) -
import crypto from 'crypto';

const timestamp = Date.now();
const secret = process.env.SECRET; // webhook secret shared with the service
const context = process.env.CONTEXT_ID;
const payload = `{"status":"success", "initiatorId":${process.env.TENANT_ID},"type":0}`;
// HMAC-SHA256 over payload + context + timestamp, hex encoded
const signature = crypto
  .createHmac('sha256', secret)
  .update(`${payload}${context}${timestamp}`)
  .digest('hex');
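
For illustration, a caller could then post the payload with those headers as shown below; the base URL and /webhook path are placeholders rather than the service's documented route, so substitute the actual webhook endpoint of your deployment.

// hypothetical caller-side request (Node.js 18+ global fetch);
// TENANT_MGMT_URL and the /webhook path are illustrative placeholders
await fetch(`${process.env.TENANT_MGMT_URL}/webhook`, {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-signature': signature,
    'x-timestamp': `${timestamp}`,
  },
  body: payload,
});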

The identity provider and its related providers are also part of the WebhookTenantManagementServiceComponent, since we expect it to be invoked separately once tenant provisioning is completed via the orchestrator or any other preferred medium.

Environment Variables

| Name | Required | Description | Default Value |
| ---- | -------- | ----------- | ------------- |
| NODE_ENV | Y | Node environment value, i.e. `dev`, `test`, `prod` | |
| LOG_LEVEL | Y | Log level value, i.e. `error`, `warn`, `info`, `verbose`, `debug` | |
| DB_HOST | Y | Hostname for the database server. | |
| DB_PORT | Y | Port for the database server. | |
| DB_USER | Y | User for the database. | |
| DB_PASSWORD | Y | Password for the database user. | |
| DB_DATABASE | Y | Database to connect to on the database server. | |
| DB_SCHEMA | Y | Database schema used for the data source. In PostgreSQL, this will be `public` unless a schema is made explicitly for the service. | |
| REDIS_HOST | Y | Hostname of the Redis server. | |
| REDIS_PORT | Y | Port to connect to the Redis server over. | |
| REDIS_URL | Y | Fully composed URL for the Redis connection. Used instead of the other settings if set. | |
| REDIS_PASSWORD | Y | Password for Redis if authentication is enabled. | |
| REDIS_DATABASE | Y | Database within Redis to connect to. | |
| JWT_SECRET | Y | Symmetric signing key of the JWT token. | |
| JWT_ISSUER | Y | Issuer of the JWT token. | |
| SYSTEM_USER_ID | Y | System user id. | |
| FROM_EMAIL | Y | Email address used to send notifications. | |
| APP_NAME | Y | Application name. | |
| APP_VALIDATE_URL | Y | Front-end URL used for validation. | |
| APP_LOGIN_URL | Y | Control plane login URL. | |
| VALIDATION_TOKEN_EXPIRY | Y | Expiry time for the validation token. | |
| AWS_REGION | Y | AWS region. | |
| PUBLIC_API_MAX_ATTEMPTS | Y | Number of attempts allowed for the public API. | |
| WEBHOOK_API_MAX_ATTEMPTS | Y | Number of attempts allowed for the webhook API. | |
| WEBHOOK_SECRET_EXPIRY | Y | Expiry time for the webhook secret. | |
| LEAD_TOKEN_EXPIRY | Y | Expiry time for the lead token. | |
| SILOED_PIPELINE | Y | Pipeline key for siloed tenants. | |
| POOLED_PIPELINE | Y | Pipeline key for pooled tenants. | |
| LEAD_KEY_LENGTH | Y | Length of the random key generated for a lead. | |

Setting up a DataSource

Here is a sample DataSource implementation using environment variables and PostgreSQL as the data source.

import {inject, lifeCycleObserver, LifeCycleObserver} from '@loopback/core';
import {juggler} from '@loopback/repository';
import {TenantManagementDbSourceName} from '@sourceloop/ctrl-plane-tenant-management-service';

const config = {
  name: TenantManagementDbSourceName,
  connector: 'postgresql',
  url: '',
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE,
  schema: process.env.DB_SCHEMA,
};

@lifeCycleObserver('datasource')
export class TenantManagementDbDataSource
  extends juggler.DataSource
  implements LifeCycleObserver
{
  static dataSourceName = TenantManagementDbSourceName;
  static readonly defaultConfig = config;

  constructor(
    // The datasource configuration must be bound as `datasources.config.${TenantManagementDbSourceName}`, otherwise you might get errors
    @inject(`datasources.config.${TenantManagementDbSourceName}`, {
      optional: true,
    })
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}

Create one more datasource with Redis as the connector and 'TenantManagementCacheDB' as the datasource name; it is used for caching.

import {inject, lifeCycleObserver, LifeCycleObserver} from '@loopback/core';
import {AnyObject, juggler} from '@loopback/repository';
import {readFileSync} from 'fs';

const config = {
  name: 'TenantManagementCacheDB',
  connector: 'kv-redis',
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASSWORD,
  db: process.env.REDIS_DATABASE,
  url: process.env.REDIS_URL,
  tls:
    +process.env.REDIS_TLS_ENABLED! ||
    (process.env.REDIS_TLS_CERT
      ? {
          ca: readFileSync(process.env.REDIS_TLS_CERT),
        }
      : undefined),
  sentinels:
    +process.env.REDIS_HAS_SENTINELS! && process.env.REDIS_SENTINELS
      ? JSON.parse(process.env.REDIS_SENTINELS)
      : undefined,
  sentinelPassword:
    +process.env.REDIS_HAS_SENTINELS! && process.env.REDIS_SENTINEL_PASSWORD
      ? process.env.REDIS_SENTINEL_PASSWORD
      : undefined,
  role:
    +process.env.REDIS_HAS_SENTINELS! && process.env.REDIS_SENTINEL_ROLE
      ? process.env.REDIS_SENTINEL_ROLE
      : undefined,
};

// Observe application's life cycle to disconnect the datasource when
// application is stopped. This allows the application to be shut down
// gracefully. The `stop()` method is inherited from `juggler.DataSource`.
// Learn more at https://loopback.io/doc/en/lb4/Life-cycle.html
@lifeCycleObserver('datasource')
export class RedisDataSource
  extends juggler.DataSource
  implements LifeCycleObserver
{
  static readonly dataSourceName = 'TenantManagementCacheDB';
  static readonly defaultConfig = config;

  constructor(
    @inject(`datasources.config.TenantManagementCacheDB`, {optional: true})
    dsConfig: AnyObject = config,
  ) {
    if (
      +process.env.REDIS_HAS_SENTINELS! &&
      !!process.env.REDIS_SENTINEL_HOST &&
      !!process.env.REDIS_SENTINEL_PORT
    ) {
      dsConfig.sentinels = [
        {
          host: process.env.REDIS_SENTINEL_HOST,
          port: +process.env.REDIS_SENTINEL_PORT,
        },
      ];
    }
    super(dsConfig);
  }
}

Migrations

The migrations required for this service can be copied from the service. You can customize or cherry-pick the migrations in the copied files according to your specific requirements and then apply them to the DB.

Database Schema

(database schema diagram)

The major tables in the schema are briefly described below -

Address - this model represents the address of a company or lead

Contact - this model represents contacts belonging to a tenant

Invoice - this model represents an invoice with the amount and period generated for a tenant in the system

Leads - this model represents a lead that could eventually be a tenant in the system

Tenants - main model of the service that represents a tenant in the system, either pooled or siloed