
Integrating Turbo with Envizi via webMethods for Green IT data

This blog provides step-by-step instructions to pull green IT data from Turbonomic into Envizi via webMethods Integration.

Authors

Jeya Gandhi Rajan M
Madhukrishna Parike
JYOTI RANI
INDIRA KALAGARA

Contents

1. Prerequisite

1.1 Environment

  • Turbonomic v8.14.3 or higher
  • Turbonomic user with the Observer role (refer here to create the user)
  • Envizi's S3 bucket (refer to Steps 1 and 2 here to create the bucket). Make a note of the Bucket, Folder, UserName, Access Key and Secret Access Key values for later reference.
  • webMethods SaaS (click here to sign up for a trial) or on-prem

1.2 Envizi Organization hierarchy details

  • Following the steps in this article, we will retrieve the energy consumption (electricity) data of the data centers using the Turbonomic APIs.

  • To proceed with the actual integration, the data center locations and the relevant accounts (to hold the electricity data) need to be pre-configured in Envizi. Hence, this article uses the organization hierarchy below in Envizi, with the corresponding data center locations (e.g. IN Bank - IBMC-WDC07-Ashburn VA) and the specific data center accounts (e.g. IN Bank - IBMC-WDC07-Electricity).

  • Feel free to use your own hierarchy, location and account names; just make sure to update them in the configurations below wherever required.

2. Architecture

Here is the architecture of this Turbonomic and Envizi integration via webMethods.

The webMethods Integration flow pulls the list of cloud regions and on-prem data centers from Turbonomic and sends it to Envizi's S3 bucket as a CSV file. This CSV file is then processed internally by Envizi.

3. webMethods Locations Workflow Configuration

In this workflow, we will invoke Turbonomic APIs to fetch the DataCenter locations and transform the JSON API response into the XLSX template expected by Envizi.

CLICK me for detailed instructions

3.1. Login to webMethods Integration

  • Login to your instance of webMethods Integration with your credentials.

3.2. Create a new Project

  • Name the project Turbo_wM_Envizi and leave Source Control - Git server/account as Default. Note: you may choose a project name as desired.

3.3. Import the Workflows

  • Download the Workflow archive file here: Locations.
  • Click on Import and select the Workflow archive downloaded in the above step.

3.4. Provide Workflow name, Workflow description, AWS service

  • Provide the Workflow name as Sustainability Solution - Locations and a Workflow description. You may set the Workflow name and description as per your needs.
  • For the Connect to Amazon Web Services configuration details, click on the + symbol
  • Configure the Add Account AWS page with Account Name, Access Key ID, Secret Access Key and Default Region.
  • Click on the Import button

3.5. Configure the Workflow nodes

  • In this step, the configuration of each Workflow node needs to be updated.

3.5.1. Configure the node Turbonomic API Login

  • Mouse over the Turbonomic API Login node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Select HTTP Method: POST
  • URL: https://[Turbonomic-URL]/api/v3/login?hateoas=true

URL Params

  • Under URL Param 1, update the Key and Value
  • Key: hateoas
  • Value: true

Set Body Type

  • Set Body Type: x-www-form-urlencoded

Body

  • Note: the username and password to access the Turbonomic APIs were created as a prerequisite.

  • Under Body 1, update the Name and Value

  • Name: username

  • Value: value of the username

  • Under Body 2, update the Name and Value

  • Name: password

  • Value: value of the password

  • Leave the rest of the values as is.

  • Click on Next

Test this action

  • Click on the Test button to verify that the login is successful, and click on the Done button once it succeeds.
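
For a quick sanity check outside webMethods, the same login call can be sketched in Python. The host and credentials below are placeholders to substitute with your own values:

```python
# Minimal sketch of the Turbonomic login call; assumes the requests
# library and a reachable Turbonomic instance. Host and credentials
# are placeholders, not values from this repository.
import requests

TURBO_URL = "https://<Turbonomic-URL>"

session = requests.Session()
resp = session.post(
    f"{TURBO_URL}/api/v3/login",
    params={"hateoas": "true"},
    data={"username": "demo_observer", "password": "<password>"},  # x-www-form-urlencoded body
)
resp.raise_for_status()
# The session now carries the set-cookie value that later nodes reuse.
print(resp.status_code, session.cookies.get_dict())
```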

3.5.2. Configure the node DataCentre Retrieve

  • Mouse over the DataCentre Retrieve node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Select HTTP Method: GET
  • URL: https://[Turbonomic-URL]/api/v3/search?types=DataCenter

Headers

  • Under Headers > Headers 1, provide the Key and Value
  • Key: Cookie
  • Value: drag and drop the set-cookie from the Turbonomic API Login node as shown in the screen
  • Click on Next

Test this action

  • Click on the Test button to verify that the DataCentre retrieval is successful, and click on the Done button once it succeeds.
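
The equivalent raw call can be sketched by reusing the authenticated session from the login sketch above, assuming (as the workflow's responseObject suggests) that the search endpoint returns a JSON array of entities:

```python
# List the on-prem DataCenter entities, reusing the authenticated
# session (and its cookie) from the login sketch above.
resp = session.get(f"{TURBO_URL}/api/v3/search", params={"types": "DataCenter"})
resp.raise_for_status()
datacenters = resp.json()  # assumed: a JSON array of entity objects
for dc in datacenters:
    print(dc.get("uuid"), dc.get("displayName"))
```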

3.5.3. Configure the node Query JSON

  • Mouse over the Query JSON node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Under Query JSON, provide the JSON Data and JSON Path Expression
  • JSON Data: drag and drop the DataCentre Retrieve object from the previous node
  • JSON Path Expression: responseObject
  • Click on the Next button

Test this action

  • Click on the Test button to verify that the Query JSON is successful, and click on the Done button once it succeeds.
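
Conceptually, this node just applies a JSONPath expression to the previous node's output. A rough Python equivalent, using the jsonpath-ng package (an assumption for illustration, not what webMethods uses internally), would be:

```python
# Mimic the Query JSON node: extract responseObject from the
# DataCentre Retrieve output with a JSONPath expression.
from jsonpath_ng import parse

node_output = {"responseObject": [{"uuid": "uuid-1", "displayName": "IBMC-WDC07"}]}
matches = parse("responseObject").find(node_output)
datacenters = matches[0].value if matches else []
print(datacenters)
```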

3.5.4. Configure the node mapRequest

  • In this step, data transformation is performed on the responseObject of the DataCenter search API https://[Turbo-URL]/api/v3/search?types=DataCenter to produce the format/columns expected by Envizi.
  • This is a flow service that customizes and maps the request to the custom output. For example, displayName is mapped to LOCATION, Country[] is mapped to COUNTRY, and ORGANIZATION, GROUP TYPE, etc. are hard-coded as per the Envizi template.
  • As shown in the above screenshot, the below columns have been mapped (a sketch of this mapping follows this list):

  • displayName: LOCATION

  • Country[] under tags: COUNTRY

  • Latitude[] under tags: LATITUDE Y

  • Longitude[] under tags: LONGITUDE X

  • The rest of the columns have been populated with Envizi-provided inputs. These columns will have to be overwritten as suggested by Envizi:

  • ORGANIZATION, GROUP TYPE, GROUP HIERARCHY NAME, GROUP NAME 1, GROUP NAME 2, GROUP NAME 3, LOCATION TYPE, LOCATION REFERENCE, LOCATION REF NO, LOCATION ID, STREET ADDRESS, CITY, STATE PROVINCE, POSTAL CODE, LOCATION CLOSE DATE

  • As highlighted in the screenshot, the columns marked in orange need to be filled in with Envizi-provided inputs.

  • The columns marked in green are mapped from the Turbonomic API.
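
For illustration, the mapping above could be sketched in Python roughly as follows; the hard-coded values are placeholders for the Envizi-provided inputs, not the flow service's actual code:

```python
# Illustrative sketch of the mapRequest transformation: green columns
# come from the Turbonomic entity, orange columns are hard-coded
# Envizi inputs (placeholder values shown here).
def map_location(dc: dict) -> dict:
    tags = dc.get("tags", {})

    def first(key: str) -> str:
        # Tag values arrive as arrays, e.g. Country[]; take the first entry.
        return (tags.get(key) or [""])[0]

    return {
        "ORGANIZATION": "IN Bank",          # hard-coded per the Envizi template
        "GROUP TYPE": "<Envizi input>",     # likewise for the other orange columns
        "LOCATION": dc.get("displayName"),  # displayName -> LOCATION
        "COUNTRY": first("Country"),        # Country[] under tags -> COUNTRY
        "LATITUDE Y": first("Latitude"),    # Latitude[] under tags -> LATITUDE Y
        "LONGITUDE X": first("Longitude"),  # Longitude[] under tags -> LONGITUDE X
        # ... remaining Envizi columns (GROUP HIERARCHY NAME, LOCATION TYPE, etc.)
    }

rows = [map_location(dc) for dc in datacenters]
```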

  • Mouse over the mapRequest node and click on Settings
  • Click on Next
  • In the Action configure page, drag and drop the Query JSON object into the request of mapRequest.
  • Click on the Next button

Test this action

  • Click on the Test button to verify that the mapRequest is successful, and click on the Done button once it succeeds.

3.5.5. Configure the node JSON to CSV

  • Mouse over the JSON to CSV node and click on Settings
  • Click on Next
  • In the Action configure page, drag and drop mapRequest onto the value of the Input JSON.
  • Header Type: select key from the drop-down list
  • Click on Next

Test this action

  • Click on the Test button to verify that the JSON to CSV is successful, and click on the Done button once it succeeds.
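
In plain Python, this step amounts to writing the mapped records out with their dictionary keys as the header row (Header Type: key); a minimal sketch:

```python
# Sketch of the JSON-to-CSV step: dictionary keys become the CSV
# header, one row per mapped record from the sketch above.
import csv
import io

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```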

3.5.6. Configure the node suscsvtoxl

  • This is a custom connector that transforms CSV into XLSX format.
  • Mouse over the suscsvtoxl node and click on Settings
  • Click on Next
  • In the Action configure page, drag and drop the value from Transform onto CSV File(Base64) in the Convert CSV to XLSX input as shown below
  • Click on Next

Test this action

  • Click on the Test button to verify that suscsvtoxl is successful, and click on the Done button once it succeeds.
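
Since suscsvtoxl is custom to this workflow, its internals are not shown here; an equivalent conversion can be sketched with the openpyxl package (an assumption, not the connector's actual implementation):

```python
# Convert the CSV text from the previous sketch into an XLSX workbook.
import csv
import io

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for row in csv.reader(io.StringIO(csv_text)):
    ws.append(row)
wb.save("Envizi_SetupConfig_G5_20241210.xlsx")  # placeholder date in YYYYMMDD format
```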

3.5.7. Configure the node S3 Upload File

  • Mouse over the S3 Upload File node and click on Settings
  • Click on Next
  • Fill in the details as below
  • Select action: S3 Upload File
  • Name: S3 Upload File (the name can be updated as needed)
  • Connect to Amazon Web Services: AWS_1 (the AWS service created in step 3.4)
  • Click Next

Bucket Name and other configuration

The AWS S3 bucket details noted as part of the prerequisites are used here.

  • Upload File: Content
  • Raw Data: drag and drop the XLSX Data(Binary) under suscsvtoxl
  • Bucket Name: the S3 bucket name (noted as part of the prerequisites)
  • File Name: the Folder/filename provided in step 4, in the format Envizi_SetupConfig_G5_YYYYMMDD.xlsx
  • ACL: bucket-owner-full-control
  • Region: the region provided in step 4
  • Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
  • Click Next

Test this action

  • Click on the Test button to verify that the S3 Upload File is successful, and click on the Done button once it succeeds.
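
Outside of webMethods, the same upload can be sketched with boto3; the bucket, folder, region, and credentials below stand in for the values you noted in the prerequisites:

```python
# Sketch of the S3 upload performed by this node, using boto3.
from datetime import datetime, timezone

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<Access Key>",
    aws_secret_access_key="<Secret Access Key>",
    region_name="<region>",
)
key = f"<Folder>/Envizi_SetupConfig_G5_{datetime.now(timezone.utc):%Y%m%d}.xlsx"
with open("Envizi_SetupConfig_G5_20241210.xlsx", "rb") as f:
    s3.put_object(
        Bucket="<Bucket>",
        Key=key,
        Body=f,
        ACL="bucket-owner-full-control",
        ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    )
```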

3.6. Activate the Workflow

  • Toggle the workflow ON to activate it

3.7. Run the Workflow

  • Click on Run to execute the workflow, which generates the location feed and pushes it to the AWS S3 bucket.

4. webMethods Accounts Workflow Configuration

In this workflow, we will invoke Turbonomic APIs to fetch the energy consumption of each DataCenter location and transform the JSON API response into the CSV template expected by Envizi.

CLICK me for detailed instructions

4.1. Login to webMethods Integration

  • Login to your instance of webMethods Integration with your credentials.

4.2. Create a new Project

  • Name the project Turbo_wM_Envizi and leave Source Control - Git server/account as Default. Note: you may choose a project name as desired.

4.3. Import the Workflows

  • Download the Workflow archive file here: Accounts.
  • Click on Import and select the Workflow archive downloaded in the above step.

4.4. Provide Workflow name, Workflow description, AWS service

  • Provide the Workflow name as Sustainability Solution - Accounts and a Workflow description. You may set the Workflow name and description as per your needs.
  • For the Connect to Amazon Web Services configuration details, click on the + symbol
  • Configure the Add Account AWS page with Account Name, Access Key ID, Secret Access Key and Default Region.
  • Click on the Import button
  • Click on Edit by hovering over the Workflow imported above.

4.5. Configure the Workflow nodes

  • In this step, the configuration of each Workflow node needs to be updated.

4.5.1. Configure the node Turbonomic API Login

  • Mouse over the Turbonomic API Login node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Select HTTP Method: POST
  • URL: https://[Turbonomic-URL]/api/v3/login?hateoas=true

URL Params

  • Under URL Param 1, update the Key and Value
  • Key: hateoas
  • Value: true

Set Body Type

  • Set Body Type: multipart-form-data

Body

  • Note: the username and password to access the Turbonomic APIs were created as a prerequisite.

  • Under Body 1, update the Name and Value

  • Name: username

  • Value: value of the username

  • Under Body 2, update the Name and Value

  • Name: password

  • Value: value of the password

  • Leave the rest of the values as is.

  • Click on Next

Test this action

  • Click on the Test button to verify that the login is successful, and click on the Done button once it succeeds.

4.5.2. Configure the node DataCentre Retrieve

  • Mouse over the DataCentre Retrieve node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Select HTTP Method: GET
  • URL: https://[Turbonomic-URL]/api/v3/search?types=DataCenter

URL Params

  • Provide the Key and Value under URL Param 1
  • Key: types
  • Value: DataCenter

Headers

  • Under Headers > Headers 1, provide the Key and Value
  • Key: Cookie
  • Value: drag and drop the set-cookie from the Turbonomic API Login node as shown in the screen
  • Click on Next

Test this action

  • Click on the Test button to verify that the DataCentre retrieval is successful, and click on the Done button once it succeeds.

4.5.3. Configure the node DC Accounts Stats

  • Mouse over the DC Accounts Stats node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Select HTTP Method: POST
  • URL: https://[Turbonomic-URL]/api/v3/entities/{{$a3.responseObject.0.uuid}}/stats. Note: {{$a3.responseObject.0.uuid}} is the uuid from the previous API call, which can be dragged and dropped from the responseObject under DataCentre Retrieve as shown below

Headers

  • Under Headers > Headers 1, provide the Key and Value
  • Key: Cookie
  • Value: drag and drop the set-cookie from the Turbonomic API Login node as shown in the screen

Set Body Type and Body

  • Set Body Type: JSON
  • Body: {"data":{ "startDate":"2024-12-06 00:00:05", "endDate": "2024-12-10 23:59:59","statistics": [ { "name": "Energy", "filters": [ { "type": "relation", "value": "sold" }]}]}}
  • Please note: the startDate and endDate have to be edited to retrieve the stats for the desired period.
  • Click on Next
  • Leave the rest of the values as is.

Test this action

  • Click on the Test button to verify that the DC Accounts Stats is successful, and click on the Done button once it succeeds.
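
Reusing the session and datacenter list from the earlier sketches, the per-datacenter stats call could be sketched as below. Note that the node's Body field wraps the payload in a top-level data key; this sketch posts the inner payload directly, on the assumption that webMethods unwraps it before sending:

```python
# Sketch of the DC Accounts Stats call: fetch the Energy statistic
# sold by each datacenter over a fixed window (dates are the sample
# values from the body above; adjust them to your reporting period).
body = {
    "startDate": "2024-12-06 00:00:05",
    "endDate": "2024-12-10 23:59:59",
    "statistics": [
        {"name": "Energy", "filters": [{"type": "relation", "value": "sold"}]}
    ],
}
for dc in datacenters:
    stats = session.post(f"{TURBO_URL}/api/v3/entities/{dc['uuid']}/stats", json=body)
    stats.raise_for_status()
    print(dc["displayName"], stats.json())
```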

4.5.4. Configure the node Query JSON

  • Mouse over the Query JSON node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • JSON Data: drag and drop the DC Accounts Stats onto JSON Data
  • JSON Path Expression: responseObject
  • Click on Next

Test this action

  • Click on the Test button to verify that the Query JSON is successful, and click on the Done button once it succeeds.

4.5.5. Configure the node AccountsMap

  • Mouse over the AccountsMap node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Under AccountsMap
  • request: drag and drop the Query JSON
  • Click on Next

Test this action

  • Click on the Test button to verify that the AccountsMap is successful, and click on the Done button once it succeeds.

4.5.6. Configure the node JSON to CSV

  • Mouse over the JSON to CSV node and click on Settings
  • Click on Next
  • In the Action configure page, configure as below
  • Under JSON to CSV
  • Input JSON: drag and drop the response from AccountsMap
  • Header Type: Key
  • Leave the rest of the values as is.
  • Click on Next

Test this action

  • Click on the Test button to verify that the JSON to CSV is successful, and click on the Done button once it succeeds.

4.5.7. Configure the node S3 Upload File

  • Mouse over the S3 Upload File node and click on Settings
  • Click on Next
  • Fill in the details as below
  • Select action: S3 Upload File
  • Name: S3 Upload File (the name can be updated as needed)
  • Connect to Amazon Web Services: AWS_1 (the AWS service created in step 4.4)
  • Click Next

Bucket Name and other configuration

The AWS S3 bucket details noted as part of the prerequisites are used here.

  • Upload File: Content
  • Raw Data: drag and drop the csv under JSON to CSV
  • Bucket Name: the S3 bucket name (noted as part of the prerequisites)
  • File Name: the Folder/filename provided in step 4, in the format Account_Setup_and_Data_Load_IBMCloud_electricity.csv
  • ACL: bucket-owner-full-control
  • Region: the region provided in step 4
  • Content-Type: text/csv
  • Click Next

Test this action

  • Click on the Test button to verify that the S3 Upload File is successful, and click on the Done button once it succeeds.

4.6. Activate the Workflow

  • Toggle ON to activate the Workflow

4.7. Run the Workflow

  • Run the Workflow to push the DataCentre electricity consumption stats to Envizi

5. Validate Workflow Execution

CLICK me for detailed instructions

5.1. Data in S3

  • The flows pull the data from Turbonomic and push it to S3. You can see the uploaded files in S3 like this.

5.2. Sample Data from S3

5.3. Processing S3 files in Envizi

  • Envizi automatically pulls the data from S3 and processes it. The accounts and account summary pages look like this now.

6. Schedule Workflow Execution

The Locations and Accounts workflows can be scheduled for execution. Follow the steps below to define the execution schedule.

CLICK me for detailed instructions
  • Mouse over the Trigger node in the workflow and click on Settings
  • From the Trigger window, search for and select Clock, then click Next
  • Change the settings to define the schedule for flow execution and click Done
  • Save the workflow and it will execute automatically as per the defined schedule.

Appendix

1. Create User in Turbonomic

Let us create a local user in Turbonomic with the Observer role.

CLICK me for detailed instructions
  1. Create a new local user in Turbonomic by choosing the menu option below.

Home > SETTINGS > Local User > New Local User

  2. The user name could be demo_observer; give it a password and choose the role Observer

  3. Click the Save button

  4. The user gets created.

Tags

#envizi #Sustainability #turbonomic

#ESG Data and Environmental Intelligence #sustainability-highlights-home
