Taking Inventory of Your Google Cloud

Splunk Cloud Architect Paul Davies recently authored and released the GCP Application Template, a blueprint of visualizations, reports, and searches focused on Google Cloud use cases. Many of the reports included in his application require Google Cloud asset inventory data to be periodically generated and sent into Splunk. But how exactly do you craft that inventory generation pipeline so you can "light-up" Paul's application dashboards and reports?

In this blog post, I'll describe and compare three methods operators can use to ingest Google Cloud asset inventory data into Splunk. The first method leverages a "pull" data ingest strategy while the other two methods I cover are "push" based. For each ingest method, I'll provide detailed setup instructions or provide pointers to them. Finally, I'll make a personal recommendation on what method I would choose if it was my own environment.

Note: Export vs. Feed

Google provides both a batch export and a change-feed view of asset data. The GCP Application Template relies on data generated by the batch export API and does not support the TemporalAsset schema used by the asset change feed API.

Pull

Method #1 - Pull from bucket

The first method involves using Cloud Scheduler to trigger a Cloud Function on a regular schedule. This Cloud Function then sends a batch export request to the Asset Inventory API. This results in a bulk export of the current cloud asset inventory to a Cloud Storage bucket. Finally, the Splunk Add-on for Google Cloud Platform (GCP-TA) is configured to periodically monitor and ingest new files appearing in the Cloud Storage bucket. The following diagram illustrates this process.
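To make the moving pieces concrete, here is a minimal sketch of what the scheduled Cloud Function might look like, assuming the Python client library (google-cloud-asset). The entry-point name, project ID, and bucket are placeholders, not the values used in any particular repository.

```python
# requirements.txt: google-cloud-asset
from google.cloud import asset_v1


def export_assets(request):
    """HTTP-triggered Cloud Function invoked by Cloud Scheduler (names here are illustrative)."""
    client = asset_v1.AssetServiceClient()

    output_config = asset_v1.OutputConfig()
    # Write newline-delimited JSON files under this prefix; the GCP-TA GCS input
    # monitors the same bucket and ingests new objects as they appear.
    output_config.gcs_destination.uri_prefix = "gs://YOUR_EXPORT_BUCKET/asset-inventory"

    operation = client.export_assets(
        request={
            "parent": "projects/YOUR_PROJECT_ID",  # can also be a folder or organization
            "content_type": asset_v1.ContentType.RESOURCE,
            "output_config": output_config,
        }
    )
    # export_assets returns a long-running operation; there is no need to block on it here.
    return f"Asset export started: {operation.operation.name}"
```

The content_type parameter controls what gets exported (resource metadata in this sketch); your deployment may request other content types as well.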

Components

Pros

  • Simple setup
  • Pull solution avoids Splunk HEC whitelisting considerations
  • GCP-TA is developed and supported by Splunk
  • Minimal Google Cloud infrastructure cost to implement

Cons

  • Very large JSON export files may not load reliably via the GCP-TA GCS input

Setup

Please refer to the detailed setup instructions in the GitHub repository.

Push

Method #2 - Serverless push-to-Splunk

This method leverages Cloud Functions not only to trigger an export of asset inventory data, but also to deliver it to a Splunk HEC. Cloud Scheduler regularly triggers a Cloud Function which in turn initiates an Asset Inventory API bulk export to Cloud Storage. A second Cloud Function is configured to trigger on bucket object create/finalize events. This function splits the exported files into smaller files if necessary and delivers them directly to a Splunk HEC. Should delivery fail, the messages are placed into a Pub/Sub topic for later redelivery attempts. The following diagram illustrates this process.
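For illustration, a stripped-down version of the delivery function might look like the sketch below. This is not the code from the splunk-gcp-functions repository: the environment variable names, sourcetype, and batch size are assumptions, and the real function adds file splitting and Pub/Sub dead-lettering.

```python
import json
import os

import requests
from google.cloud import storage

HEC_URL = os.environ["HEC_URL"]      # e.g. https://splunk.example.com:8088/services/collector/event
HEC_TOKEN = os.environ["HEC_TOKEN"]
BATCH_SIZE = 500                     # events per HEC request; tune to your payload sizes


def hec_deliver(event, context):
    """Background Cloud Function triggered on object finalize in the export bucket."""
    blob = storage.Client().bucket(event["bucket"]).blob(event["name"])
    lines = blob.download_as_text().splitlines()

    headers = {"Authorization": f"Splunk {HEC_TOKEN}"}
    for i in range(0, len(lines), BATCH_SIZE):
        # HEC accepts multiple JSON event objects concatenated in a single request body.
        # The sourcetype below is a placeholder; match whatever your Splunk app expects.
        payload = "".join(
            json.dumps({"event": json.loads(line), "sourcetype": "google:gcp:assets"})
            for line in lines[i : i + BATCH_SIZE]
            if line.strip()
        )
        resp = requests.post(HEC_URL, headers=headers, data=payload.encode("utf-8"), timeout=30)
        # The real function publishes failed batches to a Pub/Sub topic for retry
        # instead of simply raising.
        resp.raise_for_status()
```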

Components

Pros

  • Solution should incur minimal Google Cloud infrastructure costs
  • Automated redelivery attempts of deadletters
  • The GCP Application Template is primarily tested with this method, as both are written by the same author

Cons

Setup

Details on how to configure this method can be found in the splunk-gcp-functions GitHub repository.

Method #3 - Dataflow

The final method leverages Dataflow batch and streaming jobs to deliver asset inventory data to a Splunk HEC. This approach uses Cloud Scheduler to regularly trigger a Cloud Function which in turn initiates an Asset Inventory API bulk export to Cloud Storage. Another Cloud Function receives an event trigger when the export operation is complete. This function then starts a batch Dataflow job which converts the newline-delimited JSON files into Pub/Sub messages and publishes them to a topic. In parallel, a streaming Dataflow pipeline subscribes to that topic and delivers the messages to a Splunk HEC.

The following diagram illustrates this process.
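To give a feel for what "starts a batch Dataflow job" involves, here is a rough sketch of how the Cloud Function that reacts to the completed export might launch the Google-provided GCS_Text_to_Cloud_PubSub template. The project, region, topic, job name, and file pattern are placeholders, and the streaming Cloud_PubSub_to_Splunk job on the other side of the topic is assumed to already be running.

```python
# requirements.txt: google-api-python-client
from googleapiclient.discovery import build

PROJECT = "YOUR_PROJECT_ID"
REGION = "us-central1"


def launch_export_to_pubsub(export_prefix):
    """Launch a batch job that republishes exported NDJSON files as Pub/Sub messages.

    A separately launched Cloud_PubSub_to_Splunk streaming job subscribes to the
    same topic and forwards the messages to the Splunk HEC.
    """
    dataflow = build("dataflow", "v1b3")
    return dataflow.projects().locations().templates().launch(
        projectId=PROJECT,
        location=REGION,
        gcsPath="gs://dataflow-templates/latest/GCS_Text_to_Cloud_PubSub",
        body={
            "jobName": "asset-inventory-to-pubsub",
            "parameters": {
                # Match however your export names its output objects.
                "inputFilePattern": f"{export_prefix}*",
                "outputTopic": f"projects/{PROJECT}/topics/asset-inventory",
            },
        },
    ).execute()
```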

Components

Pros

  • Dataflow templates developed and supported by Google
  • Natural choice for environments already running a streaming Dataflow pipeline to deliver logs to Splunk HEC
  • Dead letter topic support for undeliverable messages
  • Easy to monitor delivery backlog through Google Cloud Pub/Sub metrics (see the sketch after this list)
  • Encryption of HEC token using KMS
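As a concrete example of that monitoring point, here is a short sketch that reads the num_undelivered_messages metric for the Splunk delivery subscription using the Cloud Monitoring client library; the project and subscription IDs are placeholders.

```python
# requirements.txt: google-cloud-monitoring
import time

from google.cloud import monitoring_v3


def backlog(project_id, subscription_id):
    """Return recent (timestamp, undelivered message count) points for a subscription."""
    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 600}}
    )
    results = client.list_time_series(
        request={
            "name": f"projects/{project_id}",
            "filter": (
                'metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" '
                f'AND resource.labels.subscription_id = "{subscription_id}"'
            ),
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    return [
        (point.interval.end_time, point.value.int64_value)
        for series in results
        for point in series.points
    ]
```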

Cons

  • Both Dataflow templates are in the beta (pre-GA) launch stage
  • More complex setup required compared to other methods
  • Additional costs of running a Dataflow cluster

Setup

Please refer to the detailed setup instructions in the GitHub repository.

Recommendation

In this blog, I've described three methods for ingesting cloud asset inventory data into Splunk. But which one is "the best"? Like many things in cloud, there isn't one universally "right" answer.

If you're just getting started and want to get the reports up and running, I recommend trying the "pull from bucket" method described in option #1. The number of moving pieces is minimal and it is relatively easy to set up. It's also dirt cheap compared to the other approaches and will likely take you pretty far.

If you're already using Dataflow to stream Cloud Logging events into Splunk, then I recommend pursuing option #3. Customers usually find themselves turning to the Dataflow method for sending Cloud Logging events to Splunk as they scale out their infrastructure and need to ensure what they deploy is easy to monitor, horizontally-scalable, and fault-tolerant. Why not leverage that same infrastructure investment to deliver asset inventory data to Splunk?

If option #3 is not within your reach or you view it as "overkill," the "serverless push-to-Splunk" method described in option #2 may be preferable. It has some of the same fault-tolerant and horizontally scalable properties I praise when positioning Dataflow, but without the cost of running batch and streaming Dataflow jobs. Keep in mind, however, that neither Google nor Splunk support can assist in the operation of this method. You could find yourself "on your own" should things go wrong -- if you're building a production pipeline, skip this method. If you're having fun in the lab, go for it.

Conclusion

Whether you're just experimenting in a lab environment or building a production Dataflow pipeline, one of the methods described in this blog should have your cloud inventory ingest requirements covered. And once you've got that data into Splunk, you'll be generating dashboards and reports with the GCP Application Template in no time!

Resources