This blog provides step-by-step instructions to pull green IT data from Turbonomic into Envizi via webMethods Integration.
Jeya Gandhi Rajan M
Madhukrishna Parike
JYOTI RANI
INDIRA KALAGARA
- 1. Prerequisite
- 2. Architecture
- 3. webMethods Locations Workflow Configuration
- 4. webMethods Accounts Workflow Configuration
- 5. Validate Workflow Execution
- 6. Schedule Workflow Execution
- Turbonomic v8.14.3 or higher
- Turbonomic user with the `Observer` role (refer here to create the user)
- Envizi's S3 bucket (refer Steps 1 and 2 here to create the bucket). Make a note of the values of `Bucket`, `Folder`, `UserName`, `Access Key` and `Secret Access Key` for further reference.
- webMethods SaaS (click here to sign up for a trial) or on-prem
- Following the steps in this article, we will retrieve the energy consumption (electricity) data of the data centers using the Turbonomic APIs.
- To proceed with the integration, the data center locations and the relevant accounts (to hold the electricity data) need to be pre-configured in Envizi. Hence, in this article, we use the organization hierarchy below in Envizi with the corresponding data center locations (e.g. IN Bank - IBMC-WDC07-Ashburn VA) and the specific accounts of the data center (e.g. IN Bank - IBMC-WDC07-Electricity).
- Feel free to use your own hierarchy, location and account names, and make sure to update them in the configurations below wherever required.
Here is the architecture that describes this Turbonomic and Envizi integration via webMethods.
The webMethods Integration flow pulls the list of cloud regions and on-prem data centers from Turbonomic and sends it to Envizi's S3 bucket as a CSV file. This CSV file is then processed by Envizi internally.
In this workflow, we will invoke Turbonomic APIs to fetch the DataCenter locations and transform the JSON API response into the XLSX template expected by Envizi.
- Log in to your instance of webMethods Integration with your credentials.
- Name the project `Turbo_wM_Envizi` and leave `Source Control - Git server/account` as the default. Note: you can choose the project name as desired.
- Download the Workflow archive file here: Locations.
- Click on `Import` and select the Workflow archive downloaded in the above step.
- Provide the `Workflow name` as `Sustainability Solution - Locations` and a `Workflow description`. You can set the `Workflow name` and `Workflow description` as per your need.
- For the `Connect to Amazon Web Services` configuration details, click on the `+` symbol.
- Configure the `Add Account` AWS page with `Account Name`, `Access Key ID`, `Secret Access Key` and `Default Region`.
- Click on the `Import` button.
- In this step, the Workflow node configurations need to be updated.
- Mouse over the `Turbonomic API Login` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Select HTTP Method: `POST`
- URL: `https://[Turbonomic-URL]/api/v3/login?hateoas=true`
- Under `URL Param 1`, update the Key and Value:
- Key: `hateoas`
- Value: `true`
- Set Body Type: `x-www-form-urlencoded`
- Note: the `username` and `password` used to access the Turbonomic APIs are created as a prerequisite.
- Under `Body 1`, update the Name and Value:
- Name: `username`
- Value: value of the username
- Under `Body 2`, update the Name and Value:
- Name: `password`
- Value: value of the password
- Leave the rest of the values as is.
- Click on `Next`.
- Click on the `Test` button to check whether the login is successful, and click on the `Done` button once it succeeds. (An equivalent API call is sketched below.)
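For reference, here is a minimal Python sketch of the same login call that this node makes. The hostname and credentials are placeholders, and the `requests` library is assumed.

```python
import requests

# Placeholders -- use your Turbonomic host and the Observer user
# created as a prerequisite.
TURBO_HOST = "https://turbonomic.example.com"
USERNAME = "demo_observer"
PASSWORD = "changeit"

session = requests.Session()
session.verify = False  # only if your Turbonomic instance uses a self-signed certificate

# POST /api/v3/login?hateoas=true with an x-www-form-urlencoded body,
# mirroring the Turbonomic API Login node configuration.
resp = session.post(
    f"{TURBO_HOST}/api/v3/login",
    params={"hateoas": "true"},
    data={"username": USERNAME, "password": PASSWORD},
)
resp.raise_for_status()

# The returned set-cookie value is what the downstream nodes pass
# in their Cookie header.
print(resp.headers.get("set-cookie"))
```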
- Mouse over the `DataCentre Retrieve` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Select HTTP Method: `GET`
- URL: `https://[Turbonomic-URL]/api/v3/search?types=DataCenter`
- Under `Headers`, provide the `Headers 1` Key and Value:
- Key: `Cookie`
- Value: drag and drop the `set-cookie` from the `Turbonomic API Login` node as shown in the screen
- Click on `Next`.
- Click on the `Test` button to check whether the DataCentre retrieval is successful, and click on the `Done` button once it succeeds. (An equivalent API call is sketched below.)
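The DataCentre Retrieve call is equivalent to the following sketch; it reuses a `requests` session so the login cookie is carried automatically (host and credentials are placeholders):

```python
import requests

TURBO_HOST = "https://turbonomic.example.com"  # placeholder

session = requests.Session()
session.verify = False  # only for self-signed certificates
session.post(f"{TURBO_HOST}/api/v3/login",
             params={"hateoas": "true"},
             data={"username": "demo_observer", "password": "changeit"})

# GET /api/v3/search?types=DataCenter -- the session cookie plays the
# role of the Cookie header configured in the node.
resp = session.get(f"{TURBO_HOST}/api/v3/search", params={"types": "DataCenter"})
resp.raise_for_status()

for dc in resp.json():
    print(dc.get("uuid"), dc.get("displayName"))
```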
- Mouse over the `Query JSON` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Under `Query JSON`, provide the `JSON Data` and `JSON Path Expression`:
- JSON Data: drag and drop the previous node's object, `DataCentre Retrieve`
- JSON Path Expression: `responseObject`
- Click on the `Next` button.
- Click on the `Test` button to check whether the Query JSON is successful, and click on the `Done` button once it succeeds. (A small JSONPath illustration follows this step.)
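Functionally, the Query JSON node simply applies a JSONPath expression to the previous node's output. A tiny illustration with the `jsonpath-ng` package; the sample data shape and values are illustrative only:

```python
from jsonpath_ng import parse

# Illustrative shape of the DataCentre Retrieve output: the HTTP
# response body sits under "responseObject".
node_output = {
    "responseObject": [
        {"uuid": "75001234567890", "displayName": "IBMC-WDC07-Ashburn VA"}
    ]
}

# Extract the array of data center entities, as the node does.
matches = parse("responseObject").find(node_output)
datacenters = matches[0].value if matches else []
print(datacenters)
```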
- In this step, the data transformation is performed on the `responseObject` of the DataCenter types API (`https://[Turbo-URL]/api/v3/search?types=DataCenter`) into the format/columns expected by Envizi.
- This is a flow service which customizes and maps the request to the custom output. For example, `displayName` is mapped to `LOCATION`, `Country[]` is mapped to `COUNTRY`, and columns such as `ORGANIZATION` and `GROUP TYPE` are hard-coded as per the Envizi template.
- As shown in the above screenshot, the below columns have been mapped:
- `displayName`: `LOCATION`
- `Country[]` under tags: `COUNTRY`
- `Latitude[]` under tags: `LATITUDE Y`
- `Longitude` under tags: `LONGITUDE X`
- The rest of the columns are populated with the values Envizi provides. These columns will have to be overwritten as Envizi suggests:
- `ORGANIZATION`, `GROUP TYPE`, `GROUP HIERARCHY NAME`, `GROUP NAME 1`, `GROUP NAME 2`, `GROUP NAME 3`, `LOCATION TYPE`, `LOCATION REFERENCE`, `LOCATION REF NO`, `LOCATION ID`, `STREET ADDRESS`, `CITY`, `STATE PROVINCE`, `POSTAL CODE`, `LOCATION CLOSE DATE`
- As highlighted in the screenshot, the columns marked in orange need to be filled in with Envizi-provided inputs.
- The columns marked in green are mapped from the Turbonomic API. (A mapping sketch is shown after this list.)
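To make the mapping concrete, here is a rough Python sketch of what the flow service does per data center entity. The tag names follow the mapping above; the hard-coded organization value and the sample entity are examples you would replace with your own Envizi hierarchy and real API output.

```python
# Sketch of the mapRequest transformation: one Envizi location row per
# data center entity returned by the search API.
def to_envizi_location(entity: dict) -> dict:
    tags = entity.get("tags", {})  # assumes tags arrive as lists keyed by name
    return {
        "ORGANIZATION": "IN Bank",                    # hard-coded per the Envizi template
        "LOCATION": entity.get("displayName", ""),    # displayName -> LOCATION
        "COUNTRY": (tags.get("Country") or [""])[0],  # Country[] -> COUNTRY
        "LATITUDE Y": (tags.get("Latitude") or [""])[0],
        "LONGITUDE X": (tags.get("Longitude") or [""])[0],
        # Remaining columns (GROUP TYPE, GROUP NAME 1..3, LOCATION TYPE, ...)
        # are filled with the values Envizi provides for your organization.
    }

# Illustrative entity from the DataCenter search response
sample = {"displayName": "IBMC-WDC07-Ashburn VA",
          "tags": {"Country": ["US"], "Latitude": ["38.95"], "Longitude": ["-77.45"]}}
print(to_envizi_location(sample))
```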
- Mouse over the `mapRequest` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, drag and drop the `Query JSON` object into the `request` field of mapRequest.
- Click on the `Next` button.
- Click on the `Test` button to check whether the `mapRequest` is successful, and click on the `Done` button once it succeeds.
- Mouse over the `JSON to CSV` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, drag and drop `mapRequest` onto the value of the `Input JSON`.
- Header Type: select `key` from the drop-down list
- Click on `Next`.
- Click on the `Test` button to check whether the `JSON to CSV` conversion is successful, and click on the `Done` button once it succeeds.
- This is a customized connector which transforms the CSV format into XLSX format.
- Mouse over the `suscsvtoxl` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, drag and drop `value` from the Transform onto `CSV File(Base64)` in the `Convert CSV to XLSX input` as shown below.
- Click on `Next`.
- Click on the `Test` button to check whether the `suscsvtoxl` conversion is successful, and click on the `Done` button once it succeeds. (An equivalent conversion is sketched below.)
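The `suscsvtoxl` connector itself is custom; as a rough equivalent of its CSV-to-XLSX step, a Python sketch using `openpyxl` could look like this:

```python
import base64
import csv
import io

from openpyxl import Workbook

def csv_base64_to_xlsx_bytes(csv_b64: str) -> bytes:
    """Decode a Base64-encoded CSV payload and return the same rows as XLSX bytes."""
    text = base64.b64decode(csv_b64).decode("utf-8")
    wb = Workbook()
    ws = wb.active
    for row in csv.reader(io.StringIO(text)):
        ws.append(row)
    buf = io.BytesIO()
    wb.save(buf)
    return buf.getvalue()

# Tiny usage example with illustrative data
sample_csv = "LOCATION,COUNTRY\nIBMC-WDC07-Ashburn VA,US\n"
xlsx_bytes = csv_base64_to_xlsx_bytes(base64.b64encode(sample_csv.encode()).decode())
print(len(xlsx_bytes), "bytes of XLSX data")
```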
- Mouse over the `S3 Upload File` node and click on `Settings`.
- Click on `Next`.
- Fill in the details as below:
- Select action: `S3 Upload File`
- Name: `S3 Upload File`. The name can be updated as per need.
- Connect to Amazon Web Services: `AWS_1`. This is the AWS connection created in step 3.4.
- Click `Next`. The AWS S3 bucket details noted as part of the prerequisites are used here.
- Upload File: `Content`
- Raw Data: drag and drop the `XLSX Data(Binary)` under suscsvtoxl
- Bucket Name: S3 bucket name (noted as part of the prerequisites)
- File Name: Folder/filename provided in step 4. The file name format is `Envizi_SetupConfig_G5_YYYYMMDD.xlsx`.
- ACL: `bucket-owner-full-control`
- Region: region provided in step 4
- Content-Type: `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`
- Click `Next`.
- Click on the `Test` button to check whether the `S3 Upload File` is successful, and click on the `Done` button once it succeeds. (An equivalent upload is sketched below.)
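Outside webMethods, the equivalent upload can be sketched with `boto3`; the bucket, folder, credentials and local file name below are placeholders for the values noted in the prerequisites:

```python
from datetime import datetime, timezone

import boto3

# Placeholders -- use the Bucket, Folder, Access Key and Secret Access Key
# noted while creating the Envizi S3 bucket.
BUCKET = "your-envizi-bucket"
FOLDER = "your-envizi-folder"

s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
    region_name="us-east-1",  # the region noted in the prerequisites
)

# File name format expected by Envizi: Envizi_SetupConfig_G5_YYYYMMDD.xlsx
key = f"{FOLDER}/Envizi_SetupConfig_G5_{datetime.now(timezone.utc):%Y%m%d}.xlsx"

with open("locations.xlsx", "rb") as f:   # the XLSX produced by the suscsvtoxl step
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=f.read(),
        ACL="bucket-owner-full-control",
        ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    )
```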
- Toggle the workflow `ON` to activate it.
- Run the workflow to generate the location feed and push it to the AWS S3 bucket.
In this workflow, we will invoke Turbonomic APIs to fetch the energy consumption for each DataCenter location and transform the JSON API response into the CSV template expected by Envizi.
- Log in to your instance of webMethods Integration with your credentials.
- Name the project `Turbo_wM_Envizi` and leave `Source Control - Git server/account` as the default. Note: you can choose the project name as desired.
- Download the Workflow archive file here: Accounts.
- Click on `Import` and select the Workflow archive downloaded in the above step.
- Provide the `Workflow name` as `Sustainability Solution - Accounts` and a `Workflow description`. You can set the `Workflow name` and `Workflow description` as per your need.
- For the `Connect to Amazon Web Services` configuration details, click on the `+` symbol.
- Configure the `Add Account` AWS page with `Account Name`, `Access Key ID`, `Secret Access Key` and `Default Region`.
- Click on the `Import` button.
- Click on `Edit` by moving the mouse over the Workflow imported above.
- In this step, the Workflow node configurations need to be updated.
- Mouse over the `Turbonomic API Login` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Select HTTP Method: `POST`
- URL: `https://[Turbonomic-URL]/api/v3/login?hateoas=true`
- Under `URL Param 1`, update the Key and Value:
- Key: `hateoas`
- Value: `true`
- Set Body Type: `multipart-form-data`
- Note: the `username` and `password` used to access the Turbonomic APIs are created as a prerequisite.
- Under `Body 1`, update the Name and Value:
- Name: `username`
- Value: value of the username
- Under `Body 2`, update the Name and Value:
- Name: `password`
- Value: value of the password
- Leave the rest of the values as is.
- Click on `Next`.
- Click on the `Test` button to check whether the login is successful, and click on the `Done` button once it succeeds.
- Mouse over the `DataCentre Retrieve` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Select HTTP Method: `GET`
- URL: `https://[Turbonomic-URL]/api/v3/search?types=DataCenter`
- Provide the `Key` and `Value` under `URL Param 1`:
- Key: `types`
- Value: `DataCenter`
- Under `Headers > Headers 1`, provide the Key and Value:
- Key: `Cookie`
- Value: drag and drop the `set-cookie` from the `Turbonomic API Login` node as shown in the screen
- Click on `Next`.
- Click on the `Test` button to check whether the DataCentre retrieval is successful, and click on the `Done` button once it succeeds.
- Mouse over the `DC Accounts Stats` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Select HTTP Method: `POST`
- URL: `https://[Turbonomic-URL]/api/v3/entities/{{$a3.responseObject.0.uuid}}/stats`. Note: `{{$a3.responseObject.0.uuid}}` is the `uuid` from the previous API call, which can be dragged and dropped from `responseObject` under `DataCentre Retrieve` as shown below.
- Under `Headers`, provide the `Headers 1` Key and Value:
- Key: `Cookie`
- Value: drag and drop the `set-cookie` from the `Turbonomic API Login` node as shown in the screen
- Set Body Type: `JSON`
- Body: `{"data":{ "startDate":"2024-12-06 00:00:05", "endDate": "2024-12-10 23:59:59","statistics": [ { "name": "Energy", "filters": [ { "type": "relation", "value": "sold" }]}]}}`
- Please note: the `startDate` and `endDate` have to be updated to retrieve the stats for the desired period.
- Click on `Next`.
- Leave the rest of the values as is.
- Click on the `Test` button to check whether the DC Accounts Stats call is successful, and click on the `Done` button once it succeeds. (An equivalent API call is sketched below.)
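For reference, a minimal Python sketch of this stats call. The host, credentials and dates are placeholders, and the request body is copied from the node configuration above; depending on how the node serializes its body, a direct API call may only need the object inside `data`.

```python
import requests

TURBO_HOST = "https://turbonomic.example.com"  # placeholder

session = requests.Session()
session.verify = False  # only for self-signed certificates
session.post(f"{TURBO_HOST}/api/v3/login",
             params={"hateoas": "true"},
             data={"username": "demo_observer", "password": "changeit"})

# uuid of one data center, taken from the /search?types=DataCenter response
dc_uuid = session.get(f"{TURBO_HOST}/api/v3/search",
                      params={"types": "DataCenter"}).json()[0]["uuid"]

# Body as configured in the DC Accounts Stats node; adjust startDate and
# endDate to the reporting period you need.
body = {"data": {"startDate": "2024-12-06 00:00:05",
                 "endDate": "2024-12-10 23:59:59",
                 "statistics": [{"name": "Energy",
                                 "filters": [{"type": "relation", "value": "sold"}]}]}}

stats = session.post(f"{TURBO_HOST}/api/v3/entities/{dc_uuid}/stats", json=body)
stats.raise_for_status()
print(stats.json())
```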
- Mouse over the `Query JSON` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- JSON Data: drag and drop the `DC Accounts Stats` object onto `JSON Data`
- JSON Path Expression: `responseObject`
- Click on `Next`.
- Click on the `Test` button to check whether the `Query JSON` is successful, and click on the `Done` button once it succeeds.
- Mouse over the `AccountsMap` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Under `AccountsMap`:
- request: drag and drop the `Query JSON` object
- Click on `Next`.
- Click on the `Test` button to check whether the `AccountsMap` is successful, and click on the `Done` button once it succeeds.
- Mouse over the `JSON to CSV` node and click on `Settings`.
- Click on `Next`.
- In the `Action configure` page, choose as below:
- Under `JSON to CSV`:
- Input JSON: drag and drop `response` from AccountsMap
- Header Type: `Key`
- Leave the rest of the values as is.
- Click on `Next`.
- Click on the `Test` button to check whether the `JSON to CSV` conversion is successful, and click on the `Done` button once it succeeds.
- Mouse over the `S3 Upload File` node and click on `Settings`.
- Click on `Next`.
- Fill in the details as below:
- Select action: `S3 Upload File`
- Name: `S3 Upload File`. The name can be updated as per need.
- Connect to Amazon Web Services: `AWS_1`. This is the AWS connection created in step 4.4.
- Click `Next`. The AWS S3 bucket details noted as part of the prerequisites are used here.
- Upload File: `Content`
- Raw Data: drag and drop the `csv` under `JSON to CSV`
- Bucket Name: S3 bucket name (noted as part of the prerequisites)
- File Name: Folder/filename provided in step 4. The file name format is `Account_Setup_and_Data_Load_IBMCloud_electricity.csv`.
- ACL: `bucket-owner-full-control`
- Region: region provided in step 4
- Content-Type: `text/csv`
- Click `Next`.
- Click on the `Test` button to check whether the `S3 Upload File` is successful, and click on the `Done` button once it succeeds.
- Toggle the Workflow `ON` to activate it.
- Run the Workflow to push the DataCentre electricity consumption stats to Envizi.
- The flows pull the data from Turbonomic and push it to S3. You can see the data flow status in S3 like this.
- Envizi automatically pulls the data from S3 and processes it. The accounts and account summary pages look like this now.
The Locations and Accounts workflows can be scheduled for execution. Follow the steps below to define the schedule for workflow execution.
- Mouse over the `Trigger` node in the workflow and click on `Settings`.
- From the Trigger window, search for and select `Clock`, then click `Next`.
- Change the settings to define the schedule for flow execution and click `Done`.
- Save the workflow and it will execute automatically as per the defined schedule.
Let us create a local user in Turbonomic with the `Observer` role.
- Create a new local user in Turbonomic by choosing the menu option: Home > SETTINGS > Local User > New Local User
- The user name could be `demo_observer`; provide a password and choose the role `Observer`.
- Click the `Save` button.
- The user gets created.
- Turbonomic - Envizi Integration: https://ibm.github.io/IBM-Sustainability-Software-Portfolio-Connectors/turbonomic-envizi/
- Turbonomic - Envizi Integration flows (GitHub): https://github.com/IBM/turbonomic-envizi-appconnect-flows
- Creating the Envizi S3 bucket (refer Steps 1 and 2 here to create the bucket)
- Getting started with the Turbonomic REST API: https://www.ibm.com/docs/en/tarm/8.13.6?topic=reference-getting-started-turbonomic-rest-api
- IBM Envizi ESG Suite: https://www.ibm.com/docs/en/envizi-esg-suite
- Integrate your ESG Data into Envizi using Integration Hub: https://developer.ibm.com/tutorials/awb-envizi-integration-hub/
- Sign up for webMethods SaaS Trial: https://signup.softwareag.cloud/#/basic-b
#envizi #Sustainability #turbonomic
#ESG Data and Environmental Intelligence #sustainability-highlights-home