Merge pull request #3130 from hzyi-google/spanner-gapic-migration
Spanner gapic migration
pongad authored Apr 5, 2018
2 parents b3b7f04 + 50c6b8b commit 75d1d0b
Showing 412 changed files with 23,491 additions and 2,082 deletions.
17 changes: 17 additions & 0 deletions .circleci/config.yml
Original file line number Diff line number Diff line change
@@ -79,6 +79,19 @@ jobs:
- run:
name: Run integration tests for google-cloud-bigquery
command: ./utilities/verify_single_it.sh google-cloud-bigquery

bigtable_it:
working_directory: ~/googleapis
<<: *anchor_docker
<<: *anchor_auth_vars
steps:
- checkout
- run:
<<: *anchor_run_decrypt
- run:
name: Run integration tests for google-cloud-bigtable
command: ./utilities/verify_single_it.sh google-cloud-bigtable -Dbigtable.env=prod -Dbigtable.table=projects/gcloud-devel/instances/google-cloud-bigtable/tables/integration-tests

compute_it:
working_directory: ~/googleapis
<<: *anchor_docker
@@ -220,6 +233,10 @@ workflows:
filters:
branches:
only: master
- bigtable_it:
filters:
branches:
only: master
- compute_it:
filters:
branches:
17 changes: 12 additions & 5 deletions README.md
@@ -13,6 +13,7 @@ Java idiomatic client for [Google Cloud Platform][cloud-platform] services.
- [Client Library Documentation][client-lib-docs]

This library supports the following Google Cloud Platform services with clients at a [GA](#versioning) quality level:
- [BigQuery](google-cloud-bigquery) (GA)
- [Stackdriver Logging](google-cloud-logging) (GA)
- [Cloud Datastore](google-cloud-datastore) (GA)
- [Cloud Natural Language](google-cloud-language) (GA)
@@ -22,7 +23,6 @@ This library supports the following Google Cloud Platform services with clients

This library supports the following Google Cloud Platform services with clients at a [Beta](#versioning) quality level:

- [BigQuery](google-cloud-bigquery) (Beta)
- [Cloud Data Loss Prevention](google-cloud-dlp) (Beta)
- [Stackdriver Error Reporting](google-cloud-errorreporting) (Beta)
- [Cloud Firestore](google-cloud-firestore) (Beta)
@@ -31,6 +31,7 @@ This library supports the following Google Cloud Platform services with clients
- [Cloud Spanner](google-cloud-spanner) (Beta)
- [Cloud Video Intelligence](google-cloud-video-intelligence) (Beta)
- [Stackdriver Trace](google-cloud-trace) (Beta)
- [Text-to-Speech](google-cloud-texttospeech) (Beta)

This library supports the following Google Cloud Platform services with clients at an [Alpha](#versioning) quality level:

@@ -58,22 +59,28 @@ If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud</artifactId>
<version>0.38.0-alpha</version>
<version>0.43.0-alpha</version>
</dependency>
```
If you are using Gradle, add this to your dependencies
```Groovy
compile 'com.google.cloud:google-cloud:0.38.0-alpha'
compile 'com.google.cloud:google-cloud:0.43.0-alpha'
```
If you are using SBT, add this to your dependencies
```Scala
libraryDependencies += "com.google.cloud" % "google-cloud" % "0.38.0-alpha"
libraryDependencies += "com.google.cloud" % "google-cloud" % "0.43.0-alpha"
```
[//]: # ({x-version-update-end})

You can just as well declare dependencies only on the specific clients that you need. See the README of
each client for instructions.

If you're using IntelliJ or Eclipse, you can add client libraries to your project using these IDE plugins:
* [Cloud Tools for IntelliJ](https://cloud.google.com/tools/intellij/docs/client-libraries)
* [Cloud Tools for Eclipse](https://cloud.google.com/eclipse/docs/libraries)

Besides adding client libraries, the plugins provide additional functionality, such as service account key management. Refer to the documentation for each plugin for more details.

These client libraries can be used on the App Engine standard environment for Java 8 and on App Engine flexible (including the Compat runtime). Most of the libraries do not work on the App Engine standard environment for Java 7; however, Datastore, Storage, and BigQuery should work.

If you are running into problems with version conflicts, see [Version Management](#version-management).
@@ -285,7 +292,7 @@ The easiest way to solve version conflicts is to use google-cloud's BOM. In Mave
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-bom</artifactId>
<version>0.38.0-alpha</version>
<version>0.43.0-alpha</version>
<type>pom</type>
<scope>import</scope>
</dependency>
6 changes: 2 additions & 4 deletions RELEASING.md
@@ -109,11 +109,9 @@ Go to the [releases page](https://github.com/GoogleCloudPlatform/google-cloud-ja

Ensure that the format is consistent with previous releases (for an example, see the [0.1.0 release](https://github.com/GoogleCloudPlatform/google-cloud-java/releases/tag/v0.1.0)). After adding any missing updates and reformatting as necessary, publish the draft.

11. Create a new draft for the next release. Note any commits not included in the release that have been submitted before the release commit, to ensure they are documented in the next release.
11. Run `python utilities/bump_versions.py next_snapshot patch` to include "-SNAPSHOT" in the current project version (Alternatively, update the versions in `versions.txt` to the correct versions for the next release.). Then, run `python utilities/replace_versions.py` to update the `pom.xml` files. (If you see updates in `README.md` files at this step, you probably did something wrong.)

12. Run `python utilities/bump_versions next_snapshot patch` to include "-SNAPSHOT" in the current project version (Alternatively, update the versions in `versions.txt` to the correct versions for the next release.). Then, run `python utilities/replace_versions.py` to update the `pom.xml` files. (If you see updates in `README.md` files at this step, you probably did something wrong.)

13. Create and merge in another PR to reflect the updated project version. For an example of what this PR should look like, see [#227](https://github.com/GoogleCloudPlatform/google-cloud-java/pull/227).
13. Create and merge in another PR to reflect the updated project version.

Improvements
============
24 changes: 24 additions & 0 deletions TESTING.md
@@ -3,6 +3,7 @@
This library provides tools to help write tests for code that uses the following google-cloud services:

- [BigQuery](#testing-code-that-uses-bigquery)
- [Bigtable](#testing-code-that-uses-bigtable)
- [Compute](#testing-code-that-uses-compute)
- [Datastore](#testing-code-that-uses-datastore)
- [DNS](#testing-code-that-uses-dns)
@@ -41,6 +42,29 @@ Here is an example that clears the dataset created in Step 3.
RemoteBigQueryHelper.forceDelete(bigquery, dataset);
```

### Testing code that uses Bigtable

Bigtable integration tests can run against either an emulator or a real Bigtable table. The
target environment is selected via the `bigtable.env` system property, which defaults to
`emulator`; the other option is `prod`.

To use the `emulator` environment, install the Google Cloud SDK and use it to install the
`cbtemulator` via `gcloud components install bigtable`.

To use the `prod` environment:
1. Set up the target table using `google-cloud-bigtable/scripts/setup-test-table.sh`.
2. Download the [JSON service account credentials file][create-service-account] from the Google
Developers Console.
3. Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the path of the credentials file.
4. Set the system properties `bigtable.env=prod` and `bigtable.table` (the full name of the table you
created earlier). Example:
```shell
mvn verify -am -pl google-cloud-bigtable \
-Dbigtable.env=prod \
-Dbigtable.table=projects/my-project/instances/my-instance/tables/my-table
```
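The property-selection logic described above can be sketched in plain Java. This is a hypothetical illustration of how a test harness might read the two system properties; the class and method names are invented for the example and are not part of the library:

```java
// Hypothetical sketch of the bigtable.env / bigtable.table selection logic.
// Not the library's actual harness; names here are illustrative only.
public class BigtableEnvSelector {

    /** Returns the configured environment, defaulting to "emulator". */
    static String targetEnv() {
        return System.getProperty("bigtable.env", "emulator");
    }

    /**
     * In prod mode, requires a full table name of the form
     * projects/&lt;p&gt;/instances/&lt;i&gt;/tables/&lt;t&gt;; in emulator mode returns null,
     * since the harness would create its own table against the emulator.
     */
    static String targetTable() {
        if (!"prod".equals(targetEnv())) {
            return null;
        }
        String table = System.getProperty("bigtable.table");
        if (table == null || !table.matches("projects/[^/]+/instances/[^/]+/tables/[^/]+")) {
            throw new IllegalStateException(
                "bigtable.env=prod requires -Dbigtable.table=projects/<p>/instances/<i>/tables/<t>");
        }
        return table;
    }

    public static void main(String[] args) {
        System.setProperty("bigtable.env", "prod");
        System.setProperty("bigtable.table",
            "projects/my-project/instances/my-instance/tables/my-table");
        System.out.println(targetTable());
    }
}
```

Branching on a system property like this keeps a single test suite usable both in CI (where the CircleCI job passes `-Dbigtable.env=prod`) and on a developer machine with the emulator.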


### Testing code that uses Compute

Currently, there isn't an emulator for Google Compute, so an alternative is to create a test
9 changes: 3 additions & 6 deletions google-cloud-bigquery/README.md
@@ -12,9 +12,6 @@ Java idiomatic client for [Google Cloud BigQuery][cloud-bigquery].
- [Product Documentation][bigquery-product-docs]
- [Client Library Documentation][bigquery-client-lib-docs]

> Note: This client is a work-in-progress, and may occasionally
> make backwards-incompatible changes.
Quickstart
----------
[//]: # ({x-version-update-start:google-cloud-bigquery:released})
@@ -23,16 +20,16 @@ If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-bigquery</artifactId>
<version>0.38.0-beta</version>
<version>1.25.0</version>
</dependency>
```
If you are using Gradle, add this to your dependencies
```Groovy
compile 'com.google.cloud:google-cloud-bigquery:0.38.0-beta'
compile 'com.google.cloud:google-cloud-bigquery:1.25.0'
```
If you are using SBT, add this to your dependencies
```Scala
libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "0.38.0-beta"
libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "1.25.0"
```
[//]: # ({x-version-update-end})

4 changes: 2 additions & 2 deletions google-cloud-bigquery/pom.xml
@@ -2,7 +2,7 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>google-cloud-bigquery</artifactId>
<version>0.38.1-beta-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigquery:current} -->
<version>1.25.1-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigquery:current} -->
<packaging>jar</packaging>
<name>Google Cloud BigQuery</name>
<url>https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-bigquery</url>
@@ -12,7 +12,7 @@
<parent>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-pom</artifactId>
<version>0.38.1-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-pom:current} -->
<version>0.43.1-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-pom:current} -->
</parent>
<properties>
<site.installationModule>google-cloud-bigquery</site.installationModule>
Original file line number Diff line number Diff line change
@@ -522,7 +522,7 @@ public int hashCode() {
* } catch (BigQueryException e) {
* // the dataset was not created
* }
* } </pre>
* }</pre>
*
* @throws BigQueryException upon failure
*/
@@ -538,7 +538,7 @@ public int hashCode() {
* String fieldName = "string_field";
* TableId tableId = TableId.of(datasetName, tableName);
* // Table field definition
* Field field = Field.of(fieldName, Field.Type.string());
* Field field = Field.of(fieldName, LegacySQLTypeName.STRING);
* // Table schema definition
* Schema schema = Schema.of(field);
* TableDefinition tableDefinition = StandardTableDefinition.of(schema);
@@ -553,6 +553,32 @@ public int hashCode() {
/**
* Creates a new job.
*
* <p>Example of loading a newline-delimited-json file with textual fields from GCS to a table.
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* String sourceUri = "gs://cloud-samples-data/bigquery/us-states/us-states.json";
* TableId tableId = TableId.of(datasetName, tableName);
* // Table field definition
* Field[] fields = new Field[] {
* Field.of("name", LegacySQLTypeName.STRING),
* Field.of("post_abbr", LegacySQLTypeName.STRING)
* };
* // Table schema definition
* Schema schema = Schema.of(fields);
* LoadJobConfiguration configuration = LoadJobConfiguration.builder(tableId, sourceUri)
* .setFormatOptions(FormatOptions.json())
* .setCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
* .setSchema(schema)
* .build();
* // Load the table
* Job remoteLoadJob = bigquery.create(JobInfo.of(configuration));
* remoteLoadJob = remoteLoadJob.waitFor();
* // Check the table
* System.out.println("State: " + remoteLoadJob.getStatus().getState());
* return ((StandardTableDefinition) bigquery.getTable(tableId).getDefinition()).getNumRows();
* }</pre>
*
* <p>Example of creating a query job.
* <pre> {@code
* String query = "SELECT field FROM my_dataset_name.my_table_name";
@@ -861,8 +887,7 @@ public int hashCode() {
* Lists the table's rows.
*
* <p>Example of listing table rows, specifying the page size.
*
* <pre>{@code
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* // This example reads the result 100 rows per RPC call. If there's no need to limit the number,
@@ -882,16 +907,15 @@ public int hashCode() {
* Lists the table's rows.
*
* <p>Example of listing table rows, specifying the page size.
*
* <pre>{@code
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* TableId tableIdObject = TableId.of(datasetName, tableName);
* // This example reads the result 100 rows per RPC call. If there's no need to limit the number,
* // simply omit the option.
* TableResult tableData =
* bigquery.listTableData(tableIdObject, TableDataListOption.pageSize(100));
* for (FieldValueList row : rowIterator.hasNext()) {
* for (FieldValueList row : tableData.iterateAll()) {
* // do something with the row
* }
* }</pre>
@@ -904,17 +928,16 @@ public int hashCode() {
* Lists the table's rows. If the {@code schema} is not {@code null}, it is available to the
* {@link FieldValueList} iterated over.
*
* <p>Example of listing table rows.
*
* <pre>{@code
* <p>Example of listing table rows with schema.
* <pre> {@code
* String datasetName = "my_dataset_name";
* String tableName = "my_table_name";
* Schema schema = ...;
* String field = "my_field";
* String field = "field";
* TableResult tableData =
* bigquery.listTableData(datasetName, tableName, schema);
* for (FieldValueList row : tableData.iterateAll()) {
* row.get(field)
* row.get(field);
* }
* }</pre>
*
Expand All @@ -927,9 +950,8 @@ TableResult listTableData(
* Lists the table's rows. If the {@code schema} is not {@code null}, it is available to the
* {@link FieldValueList} iterated over.
*
* <p>Example of listing table rows.
*
* <pre>{@code
* <p>Example of listing table rows with schema.
* <pre> {@code
* Schema schema =
* Schema.of(
* Field.of("word", LegacySQLTypeName.STRING),
@@ -1047,28 +1069,21 @@ TableResult listTableData(
* queries. Since dry-run queries are not actually executed, there's no way to retrieve results.
*
* <p>Example of running a query.
*
* <pre>{@code
* String query = "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare`";
* QueryJobConfiguration queryConfig = QueryJobConfiguration.of(query);
*
* // To run the legacy syntax queries use the following code instead:
* // String query = "SELECT unique(corpus) FROM [bigquery-public-data:samples.shakespeare]"
* // QueryJobConfiguration queryConfig =
* // QueryJobConfiguration.newBuilder(query).setUseLegacySql(true).build();
*
* <pre> {@code
* String query = "SELECT unique(corpus) FROM [bigquery-public-data:samples.shakespeare]";
* QueryJobConfiguration queryConfig =
* QueryJobConfiguration.newBuilder(query).setUseLegacySql(true).build();
* for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
* // do something with the data
* }
* }</pre>
*
* <p>Example of running a query with query parameters.
*
* <pre>{@code
* String query =
* "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` where word_count > ?";
* <pre> {@code
* String query = "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` where word_count > @wordCount";
* // Note, standard SQL is required to use query parameters. Legacy SQL will not work.
* QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query)
* .addPositionalParameter(QueryParameterValue.int64(5))
* .addNamedParameter("wordCount", QueryParameterValue.int64(5))
* .build();
* for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
* // do something with the data
@@ -1092,18 +1107,6 @@ TableResult query(QueryJobConfiguration configuration, JobOption... options)
* <p>See {@link #query(QueryJobConfiguration, JobOption...)} for examples on populating a {@link
* QueryJobConfiguration}.
*
* <p>The recommended way to create a randomly generated JobId is the following:
*
* <pre>{@code
* JobId jobId = JobId.of();
* }</pre>
*
* For a user specified job id with an optional prefix use the following:
*
* <pre>{@code
* JobId jobId = JobId.of("my_prefix-my_unique_job_id");
* }</pre>
*
* @throws BigQueryException upon failure
* @throws InterruptedException if the current thread gets interrupted while waiting for the query
* to complete