Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines.
The Dataflow SDK for Java is a distribution of a portion of the Apache Beam project. This repository hosts the code to build this distribution and any Dataflow-specific code/modules. The underlying source code is hosted in the Apache Beam repository.
General usage of Google Cloud Dataflow does not require use of this repository. Instead:
- depend directly on a specific version of the SDK in the Maven Central Repository by adding the following dependency to development environments like Eclipse or Apache Maven:

  ```xml
  <dependency>
    <groupId>com.google.cloud.dataflow</groupId>
    <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
    <version>version_number</version>
  </dependency>
  ```

- download the example pipelines from the separate DataflowJavaSDK-examples repository.
This branch is a work-in-progress for the Dataflow SDK for Java, version 2.0.0. It is currently supported on the Cloud Dataflow service in Beta.
The key concepts in this programming model are:
- `PCollection`: represents a collection of data, which could be bounded or unbounded in size.
- `PTransform`: represents a computation that transforms input `PCollection`s into output `PCollection`s.
- `Pipeline`: manages a directed acyclic graph of `PTransform`s and `PCollection`s that is ready for execution.
- `PipelineRunner`: specifies where and how the pipeline should execute.
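As a rough sketch of how these concepts fit together, the minimal pipeline below builds a bounded `PCollection`, applies a couple of `PTransform`s, and leaves the choice of `PipelineRunner` to the pipeline options. It is written against the `org.apache.beam` packages that this 2.0.0 distribution is based on; the class name and input values are illustrative.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class ConceptsSketch {
  public static void main(String[] args) {
    // PipelineOptions carry configuration, including which PipelineRunner to use.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();

    // A Pipeline manages the directed acyclic graph of PTransforms and PCollections.
    Pipeline p = Pipeline.create(options);

    // Create.of is a PTransform producing a bounded PCollection<String>;
    // the input values here are purely illustrative.
    PCollection<String> words = p.apply(Create.of("hello", "world", "hello"));

    // Count.perElement is a composite PTransform that yields a new PCollection.
    PCollection<KV<String, Long>> counts = words.apply(Count.perElement());

    // Nothing executes until the configured PipelineRunner runs the Pipeline.
    p.run().waitUntilFinish();
  }
}
```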
We provide two runners:
- The `DirectRunner` runs the pipeline on your local machine.
- The `DataflowRunner` submits the pipeline to the Dataflow Service, where it runs using managed resources in Google Cloud Platform (GCP).
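The runner is selected through the pipeline options, either from command-line flags (for example `--runner=DataflowRunner --project=<your-project> --tempLocation=gs://<your-bucket>/tmp`, picked up by `PipelineOptionsFactory.fromArgs`) or programmatically. The sketch below shows the programmatic route; it assumes the Dataflow runner classes from this distribution are on the classpath, and the project ID and bucket name are placeholders to replace with your own.

```java
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RunnerSelectionSketch {
  public static void main(String[] args) {
    // DataflowPipelineOptions extends PipelineOptions with Dataflow-specific settings.
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);

    // Choose the runner: DirectRunner for local runs, DataflowRunner for the service.
    options.setRunner(DataflowRunner.class);

    // Placeholder project ID and staging location -- replace with your own values.
    options.setProject("my-gcp-project");
    options.setTempLocation("gs://my-bucket/tmp");

    Pipeline p = Pipeline.create(options);
    // ... apply PTransforms here, then submit the job to the Dataflow Service:
    p.run();
  }
}
```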
The SDK is built to be extensible and support additional execution environments beyond local execution and the Google Cloud Dataflow Service. Apache Beam contains additional SDKs, runners, IO connectors, etc.
This repository consists of the following parts:
- The `sdk` module provides a set of basic Java APIs to program against.
- The `examples` module provides a few samples to get started. We recommend starting with the `WordCount` example.
The following command will build both the `sdk` and `examples` modules and install them in your local Maven repository:

```
mvn clean install
```
After building and installing, you can execute the `WordCount` and other example pipelines by following the instructions in this README.
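As an illustration only, a local `WordCount` run with the Maven exec plugin typically looks like the command below; the main class, output path, and pipeline arguments are placeholders, so treat that README as authoritative.

```
mvn compile exec:java -pl examples \
    -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
    -Dexec.args="--runner=DirectRunner --output=/tmp/wordcount/out"
```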
We welcome all usage-related questions on Stack Overflow tagged with `google-cloud-dataflow`.
Please use the issue tracker on GitHub to report any bugs, comments, or questions regarding SDK development.