feat: add default_value_expression to TableFieldSchema (#5131)
PiperOrigin-RevId: 508443364
Source-Link: googleapis/googleapis@fc7d513
Source-Link: googleapis/googleapis-gen@6d4bae6
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI2ZDRiYWU2YTFjNjMwYzMxYzQ1YjBhNmQxYzUwZDY5YjVkYzg3ODgxIn0=

chore: update import paths for Go targets to match open source location
chore: update go_package in protos to match open source location
chore: add explicit release levels to Go gapic targets
PiperOrigin-RevId: 506711567
Source-Link: googleapis/googleapis@d02e582
Source-Link: googleapis/googleapis-gen@7f1c541
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI3ZjFjNTQxNTMxMjVlYjVhYmQ2MGEzMmRlNThjZmRhNmE3OThhNzBhIn0=

feat: Add location methods
PiperOrigin-RevId: 499263614
Source-Link: googleapis/googleapis@ba2db0d
Source-Link: googleapis/googleapis-gen@503d1e8
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI1MDNkMWU4ZWZmMDQ1MzVmYzVkNGI3NGZlMWM4OTRlNjdmZDU5NmIyIn0=

feat: add estimated number of rows to CreateReadSession response
PiperOrigin-RevId: 495122850
Source-Link: googleapis/googleapis@83b2baf
Source-Link: googleapis/googleapis-gen@6720765
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI2NzIwNzY1NDNjNDRmNmU0ZTE3NmQwNGZkN2YyZDY5NWNmYjY5ZWUzIn0=

chore(bazel): update gapic-generator-php to v1.6.2
PiperOrigin-RevId: 493134713
Source-Link: googleapis/googleapis@83d92bd
Source-Link: googleapis/googleapis-gen@223117b
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiIyMjMxMTdiNzJhZjEyYzZiOWFiNjExZjMxMDcyOWMzYTJmNDkzZjJmIn0=

chore(bazel): update gapic-generator-php to v1.6.1
PiperOrigin-RevId: 489247285
Source-Link: googleapis/googleapis@c40ef67
Source-Link: googleapis/googleapis-gen@64e38ac
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI2NGUzOGFjZTc2Mjk0ZjFkOGQ1YjQzNWZhMTBjNjc2NWMyYjQ0ZjQ5In0=

feat: add missing_value_interpretations to AppendRowsRequest
PiperOrigin-RevId: 488693558
Source-Link: googleapis/googleapis@43bf96f
Source-Link: googleapis/googleapis-gen@1532dc6
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiIxNTMyZGM2ZmQ1YjUyYTUzNjEzMzA0ZTc1YWFjMWI1ZGE0MDdiNmIzIn0=
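For context, `missing_value_interpretations` maps column names to how a missing value should be filled in when a row is appended. A plain-PHP sketch of the semantics (the column names, defaults, and helper below are hypothetical; the real resolution happens in the BigQuery backend, not in client code):

```php
<?php
// Sketch of AppendRowsRequest.missing_value_interpretations semantics:
// each entry maps a column name to how its missing value is resolved.
const NULL_VALUE = 'NULL_VALUE';
const DEFAULT_VALUE = 'DEFAULT_VALUE';

function resolveMissing(array $row, array $interpretations, array $columnDefaults): array
{
    foreach ($interpretations as $column => $mode) {
        if (array_key_exists($column, $row)) {
            continue; // value supplied by the writer, nothing to interpret
        }
        // DEFAULT_VALUE falls back to the column default; NULL_VALUE writes null.
        $row[$column] = $mode === DEFAULT_VALUE ? ($columnDefaults[$column] ?? null) : null;
    }
    return $row;
}

var_export(resolveMissing(['a' => 1], ['b' => DEFAULT_VALUE, 'c' => NULL_VALUE], ['b' => 7]));
```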

chore(bazel): update gapic-generator-php to v1.6.0
PiperOrigin-RevId: 487322191
Source-Link: googleapis/googleapis@2a470e2
Source-Link: googleapis/googleapis-gen@50976a5
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI1MDk3NmE1NzU5NzdmYzkyZDFhYTc5NDI4YzdiMzE2N2UxNzZkZmZmIn0=

docs: remove stale header guidance for AppendRows
PiperOrigin-RevId: 485941276
Source-Link: googleapis/googleapis@a5f5928
Source-Link: googleapis/googleapis-gen@61ebfaa
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI2MWViZmFhMzI1MTAxYmM5YjI5ZWUzNDkwMGI0NWIyZjBkMjM5ODFlIn0=

feat: add proto annotation for non-ascii field mapping
PiperOrigin-RevId: 473815595
Source-Link: googleapis/googleapis@46cfc4c
Source-Link: googleapis/googleapis-gen@955ca31
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI5NTVjYTMxZDgxMTAwZDgxZWQ3ZTBhZTA1MjIzNThhYzQ1ZDUzY2M5In0=

feat: add location to WriteStream and add WriteStreamView support
PiperOrigin-RevId: 472835596
Source-Link: googleapis/googleapis@1cf9407
Source-Link: googleapis/googleapis-gen@59d0f10
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI1OWQwZjEwYmFkZWM4Njc5MTNlYmMyZWI0YmFjMDIxZjc5OTA0NTE5In0=

feat: allow users to set Apache Avro output format options through avro_serialization_options param in TableReadOptions message
Through AvroSerializationOptions, users can set enable_display_name_attribute, which populates displayName for every Avro field with the original column name.
Improved documentation for selected_fields, added example for clarity.
PiperOrigin-RevId: 468290142
Source-Link: googleapis/googleapis@62ae1af
Source-Link: googleapis/googleapis-gen@732b7f9
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI3MzJiN2Y5MjI0NzdkMjUzMjhiOTIzNTlmMDY2N2ZlOTVkZTUyZmEyIn0=
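The `selected_fields` behavior that the improved documentation describes — column projection, with dot notation for nested fields — can be illustrated with a plain-PHP sketch (the field names and row shape below are made up; the real projection happens server-side before rows are sent):

```php
<?php
// Simulate TableReadOptions.selected_fields: keep only the named columns,
// addressing nested fields with dot notation (e.g. "address.city").
function projectRow(array $row, array $selectedFields): array
{
    $out = [];
    foreach ($selectedFields as $field) {
        $parts = explode('.', $field);

        // Walk down to the selected value; skip the field if any segment is absent.
        $value = $row;
        foreach ($parts as $part) {
            if (!is_array($value) || !array_key_exists($part, $value)) {
                continue 2;
            }
            $value = $value[$part];
        }

        // Rebuild the nested structure along the selected path.
        $ref = &$out;
        foreach ($parts as $part) {
            if (!isset($ref[$part]) || !is_array($ref[$part])) {
                $ref[$part] = [];
            }
            $ref = &$ref[$part];
        }
        $ref = $value;
        unset($ref);
    }
    return $out;
}

$row = ['name' => 'Ada', 'address' => ['city' => 'London', 'zip' => 'N1'], 'age' => 36];
var_export(projectRow($row, ['name', 'address.city']));
```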

docs: clarify size limitations for AppendRowsRequest
chore: add preferred_min_stream_count to CreateReadSessionRequest
chore: add write_stream to AppendRowsResponse
PiperOrigin-RevId: 463602530
Source-Link: googleapis/googleapis@d33b3fa
Source-Link: googleapis/googleapis-gen@90995f6
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI5MDk5NWY2NDMzZDBlY2QyOTBmMTg2MTY4Y2U5NTdkNmEwZGI5YzY4In0=

chore(bazel): update gapic-generator-php to v1.5.0
PiperOrigin-RevId: 463455928
Source-Link: googleapis/googleapis@c549596
Source-Link: googleapis/googleapis-gen@434d232
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI0MzRkMjMyNDExYmJkOGYwZTQxZDcxODhjMjE3ODNjMWI2ZWY2OGY0In0=

chore(bazel): update protobuf to v3.21.3
chore(bazel): update gax-java to 2.18.4
PiperOrigin-RevId: 463115700
Source-Link: googleapis/googleapis@52130a9
Source-Link: googleapis/googleapis-gen@6a4d9d9
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI2YTRkOWQ5YmIzYWZiMjBiMGY1ZmE0ZjVkOWY2NzQwYjFkMGViMTlhIn0=

fix: Modify client lib retry policy for CreateWriteStream with longer backoff, more error codes, and longer overall time
PiperOrigin-RevId: 457061436
Source-Link: googleapis/googleapis@8ff130b
Source-Link: googleapis/googleapis-gen@2eb0fac
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiIyZWIwZmFjYTcxN2Q5Y2Y0NGI4MzhiN2RiNWU4NjI0NTFiOGE4NmVmIn0=
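The retry-policy change above combines longer exponential backoff with a higher overall deadline. An illustrative backoff schedule of that kind (the initial delay, multiplier, and cap here are made-up values, not the library's actual configuration):

```php
<?php
// Illustrative exponential-backoff schedule of the kind a retry policy like
// CreateWriteStream's uses: each retry waits longer, up to a cap.
function backoffDelaysMs(int $attempts, int $initialMs, float $multiplier, int $maxMs): array
{
    $delays = [];
    $delay = $initialMs;
    for ($i = 0; $i < $attempts; $i++) {
        $delays[] = min($delay, $maxMs); // never exceed the cap
        $delay = (int) round($delay * $multiplier);
    }
    return $delays;
}

var_export(backoffDelaysMs(4, 100, 2.0, 500));
```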

feat: add fields to eventually contain row level errors
Committer: @gnanda
PiperOrigin-RevId: 456324780
Source-Link: googleapis/googleapis@f24b37a
Source-Link: googleapis/googleapis-gen@33f9d81
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiIzM2Y5ZDgxNDA4MjExNzExNmM0YjY4YTZmNWFhYzNmNDJiZWMzNWMyIn0=

chore(bazel): update gapic-generator-php to v1.4.8
PiperOrigin-RevId: 455243785
Source-Link: googleapis/googleapis@4fce83e
Source-Link: googleapis/googleapis-gen@f6ebeb8
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiJmNmViZWI4NGEwMDc2YjM5M2JmODZhYmM3MWUxYjgzMDlkZGQ3YTY2In0=

feat: Deprecate format specific `row_count` field in Read API
PiperOrigin-RevId: 438434001
Source-Link: googleapis/googleapis@727f08b
Source-Link: googleapis/googleapis-gen@2d62ab0
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiIyZDYyYWIwZDNkNjUwYjdhYmE0ZWM1ZTVhOTZkZDhjZGJhZTg5ZmI3In0=

docs: improve documentation for write client
feat: update default timeout/retry information
feat: update parent annotation for BatchCommitWriteStreamsRequest
feat: expose additional StorageError enum values
PiperOrigin-RevId: 431973595
Source-Link: googleapis/googleapis@25d691b
Source-Link: googleapis/googleapis-gen@0ca2187
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiIwY2EyMTg3OWFlMzEzNmEyZDcxZmE4MjRiNDlkZmFmOGEwYWUwZjJlIn0=

feat: add trace_id for Read API
PiperOrigin-RevId: 429809867
Source-Link: googleapis/googleapis@6052603
Source-Link: googleapis/googleapis-gen@840dd14
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiI4NDBkZDE0YzkxYTBlYzY2ZDliZDFhMDI4ZDAxYjVmZDAyOWNhMDJlIn0=

fix: remove bigquery.readonly auth scope
PiperOrigin-RevId: 429331987
Source-Link: googleapis/googleapis@4b2bd26
Source-Link: googleapis/googleapis-gen@fa17f03
Copy-Tag: eyJwIjoiQmlnUXVlcnlTdG9yYWdlLy5Pd2xCb3QueWFtbCIsImgiOiJmYTE3ZjAzYjVhZTk2MzE2Y2QwMmYyOTk3ZjJmZDExOTZlMDM0ZTVlIn0=
gcf-owl-bot[bot] authored Apr 25, 2023
1 parent a8211f2 commit 2fb00be
Showing 76 changed files with 6,943 additions and 175 deletions.
27 changes: 27 additions & 0 deletions BigQueryStorage/metadata/V1/Annotations.php


Binary file modified BigQueryStorage/metadata/V1/Arrow.php
Binary file not shown.
12 changes: 7 additions & 5 deletions BigQueryStorage/metadata/V1/Avro.php


31 changes: 31 additions & 0 deletions BigQueryStorage/metadata/V1/Protobuf.php


Binary file modified BigQueryStorage/metadata/V1/Storage.php
Binary file not shown.
Binary file modified BigQueryStorage/metadata/V1/Stream.php
Binary file not shown.
Binary file added BigQueryStorage/metadata/V1/Table.php
Binary file not shown.
@@ -0,0 +1,88 @@
<?php
/*
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

/*
* GENERATED CODE WARNING
* This file was automatically generated - do not edit!
*/

require_once __DIR__ . '/../../../vendor/autoload.php';

// [START bigquerystorage_v1_generated_BigQueryRead_CreateReadSession_sync]
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\BigQueryReadClient;
use Google\Cloud\BigQuery\Storage\V1\ReadSession;

/**
* Creates a new read session. A read session divides the contents of a
* BigQuery table into one or more streams, which can then be used to read
* data from the table. The read session also specifies properties of the
* data to be read, such as a list of columns or a push-down filter describing
* the rows to be returned.
*
* A particular row can be read by at most one stream. When the caller has
* reached the end of each stream in the session, then all the data in the
* table has been read.
*
* Data is assigned to each stream such that roughly the same number of
* rows can be read from each stream. Because the server-side unit for
* assigning data is collections of rows, the API does not guarantee that
* each stream will return the same number of rows. Additionally, the
* limits are enforced based on the number of pre-filtered rows, so some
* filters can lead to lopsided assignments.
*
* Read sessions automatically expire 6 hours after they are created and do
* not require manual clean-up by the caller.
*
* @param string $formattedParent The request project that owns the session, in the form of
* `projects/{project_id}`. Please see
* {@see BigQueryReadClient::projectName()} for help formatting this field.
*/
function create_read_session_sample(string $formattedParent): void
{
    // Create a client.
    $bigQueryReadClient = new BigQueryReadClient();

    // Prepare any non-scalar elements to be passed along with the request.
    $readSession = new ReadSession();

    // Call the API and handle any network failures.
    try {
        /** @var ReadSession $response */
        $response = $bigQueryReadClient->createReadSession($formattedParent, $readSession);
        printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
    } catch (ApiException $ex) {
        printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
    }
}

/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
    $formattedParent = BigQueryReadClient::projectName('[PROJECT]');

    create_read_session_sample($formattedParent);
}
// [END bigquerystorage_v1_generated_BigQueryRead_CreateReadSession_sync]
82 changes: 82 additions & 0 deletions BigQueryStorage/samples/V1/BigQueryReadClient/read_rows.php
@@ -0,0 +1,82 @@
<?php
/*
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

/*
* GENERATED CODE WARNING
* This file was automatically generated - do not edit!
*/

require_once __DIR__ . '/../../../vendor/autoload.php';

// [START bigquerystorage_v1_generated_BigQueryRead_ReadRows_sync]
use Google\ApiCore\ApiException;
use Google\ApiCore\ServerStream;
use Google\Cloud\BigQuery\Storage\V1\BigQueryReadClient;
use Google\Cloud\BigQuery\Storage\V1\ReadRowsResponse;

/**
* Reads rows from the stream in the format prescribed by the ReadSession.
* Each response contains one or more table rows, up to a maximum of 100 MiB
* per response; read requests which attempt to read individual rows larger
* than 100 MiB will fail.
*
* Each request also returns a set of stream statistics reflecting the current
* state of the stream.
*
* @param string $formattedReadStream Stream to read rows from. Please see
* {@see BigQueryReadClient::readStreamName()} for help formatting this field.
*/
function read_rows_sample(string $formattedReadStream): void
{
    // Create a client.
    $bigQueryReadClient = new BigQueryReadClient();

    // Call the API and handle any network failures.
    try {
        /** @var ServerStream $stream */
        $stream = $bigQueryReadClient->readRows($formattedReadStream);

        /** @var ReadRowsResponse $element */
        foreach ($stream->readAll() as $element) {
            printf('Element data: %s' . PHP_EOL, $element->serializeToJsonString());
        }
    } catch (ApiException $ex) {
        printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
    }
}

/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
    $formattedReadStream = BigQueryReadClient::readStreamName(
        '[PROJECT]',
        '[LOCATION]',
        '[SESSION]',
        '[STREAM]'
    );

    read_rows_sample($formattedReadStream);
}
// [END bigquerystorage_v1_generated_BigQueryRead_ReadRows_sync]
@@ -0,0 +1,82 @@
<?php
/*
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

/*
* GENERATED CODE WARNING
* This file was automatically generated - do not edit!
*/

require_once __DIR__ . '/../../../vendor/autoload.php';

// [START bigquerystorage_v1_generated_BigQueryRead_SplitReadStream_sync]
use Google\ApiCore\ApiException;
use Google\Cloud\BigQuery\Storage\V1\BigQueryReadClient;
use Google\Cloud\BigQuery\Storage\V1\SplitReadStreamResponse;

/**
* Splits a given `ReadStream` into two `ReadStream` objects. These
* `ReadStream` objects are referred to as the primary and the residual
* streams of the split. The original `ReadStream` can still be read from in
* the same manner as before. Both of the returned `ReadStream` objects can
* also be read from, and the rows returned by both child streams will be
* the same as the rows read from the original stream.
*
* Moreover, the two child streams will be allocated back-to-back in the
* original `ReadStream`. Concretely, it is guaranteed that for streams
* original, primary, and residual, that original[0-j] = primary[0-j] and
* original[j-n] = residual[0-m] once the streams have been read to
* completion.
*
* @param string $formattedName Name of the stream to split. Please see
* {@see BigQueryReadClient::readStreamName()} for help formatting this field.
*/
function split_read_stream_sample(string $formattedName): void
{
    // Create a client.
    $bigQueryReadClient = new BigQueryReadClient();

    // Call the API and handle any network failures.
    try {
        /** @var SplitReadStreamResponse $response */
        $response = $bigQueryReadClient->splitReadStream($formattedName);
        printf('Response data: %s' . PHP_EOL, $response->serializeToJsonString());
    } catch (ApiException $ex) {
        printf('Call failed with message: %s' . PHP_EOL, $ex->getMessage());
    }
}

/**
* Helper to execute the sample.
*
* This sample has been automatically generated and should be regarded as a code
* template only. It will require modifications to work:
* - It may require correct/in-range values for request initialization.
* - It may require specifying regional endpoints when creating the service client,
* please see the apiEndpoint client configuration option for more details.
*/
function callSample(): void
{
    $formattedName = BigQueryReadClient::readStreamName(
        '[PROJECT]',
        '[LOCATION]',
        '[SESSION]',
        '[STREAM]'
    );

    split_read_stream_sample($formattedName);
}
// [END bigquerystorage_v1_generated_BigQueryRead_SplitReadStream_sync]
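The primary/residual guarantee described in the docblock above — the primary stream covers the first part of the original and the residual covers the remainder — can be sketched with plain arrays (the fraction-based split point is an illustration; the server chooses the actual split position):

```php
<?php
// Sketch of the SplitReadStream guarantee: the primary stream yields rows
// [0, j) of the original and the residual yields rows [j, n), so reading
// both children to completion returns exactly the original rows, in order.
function splitRows(array $rows, float $fraction): array
{
    $j = (int) floor(count($rows) * $fraction);
    return [
        'primary' => array_slice($rows, 0, $j),
        'residual' => array_slice($rows, $j),
    ];
}

var_export(splitRows([1, 2, 3, 4], 0.5));
```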
