SQLAlchemy dialect and API client for BigQuery.
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *
engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(bind=engine), autoload=True)
print(select([func.count('*')], from_obj=table).scalar())
from pybigquery.api import ApiClient
api_client = ApiClient()
sqlstr = 'SELECT 1'  # any standard SQL query string works here
print(api_client.dry_run_query(query=sqlstr).total_bytes_processed)
The project in bigquery://project is used to instantiate the BigQuery client with the specified project ID. To infer the project from the environment, use bigquery:// without a project.
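For example, the following creates an engine whose project is inferred from the environment (it assumes application default credentials and a default project are already configured):
engine = create_engine('bigquery://')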
Follow the Google Cloud library guide for authentication. Alternatively, you can provide the path to a service account JSON file in create_engine():
engine = create_engine('bigquery://', credentials_path='/path/to/keyfile.json')
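If you have the service account key loaded as a dict rather than as a file on disk, the credentials_info parameter (described below) can be used instead; a minimal sketch, reusing the same hypothetical key file:
import json

with open('/path/to/keyfile.json') as f:
    credentials_info = json.load(f)

engine = create_engine('bigquery://', credentials_info=credentials_info)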
To specify the location of your datasets, pass location to create_engine():
engine = create_engine('bigquery://project', location="asia-northeast1")
To query tables from non-default projects or datasets, use the following format for the SQLAlchemy schema name: [project.]dataset, e.g.:
# If neither dataset nor project are the default
sample_table_1 = Table('natality', MetaData(bind=engine), autoload=True,
                       schema='bigquery-public-data.samples')
# If just the dataset is not the default
sample_table_2 = Table('natality', MetaData(bind=engine), autoload=True,
                       schema='samples')
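Once reflected this way, the public table can be queried like any other table (a sketch, reusing the engine created above):
print(select([func.count('*')], from_obj=sample_table_1).scalar())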
By default, arraysize is set to 5000. arraysize is used to set the batch size for fetching results. To change it, pass arraysize to create_engine():
engine = create_engine('bigquery://project', arraysize=1000)
If you want to have the Client use a default dataset, specify it as the "database" portion of the connection string.
engine = create_engine('bigquery://project/dataset')
When using a default dataset, don't include the dataset name in the table name, e.g.:
table = Table('table_name', MetaData(bind=engine), autoload=True)
Note that specifying a default dataset doesn't restrict execution of queries to that particular dataset when using raw queries, e.g.:
# Set default dataset to dataset_a
engine = create_engine('bigquery://project/dataset_a')
# This will still execute and return rows from dataset_b
engine.execute('SELECT * FROM dataset_b.table').fetchall()
There are many situations where you can't call create_engine directly, such as when using tools like Flask-SQLAlchemy. For situations like these, or for situations where you want the Client to have a default_query_job_config, you can pass many arguments in the query portion of the connection string.
The credentials_path, credentials_info, location, and arraysize parameters are used by this library, and the rest are used to create a QueryJobConfig.
Note that if you want to use query strings, it will be more reliable if you use three slashes, so 'bigquery:///?a=b' will work reliably, but 'bigquery://?a=b' might be interpreted as having a "database" of ?a=b, depending on the system being used to parse the connection string.
Here are examples of all the supported arguments. Any arguments not present are either for legacy SQL (which isn't supported by this library), or are too complex and have not been implemented.
engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_path=/some/path/to.json' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)
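For tools that build the engine themselves, such as Flask-SQLAlchemy mentioned above, the same query arguments can simply be embedded in the configured connection string instead of an explicit create_engine() call; a minimal sketch, assuming Flask-SQLAlchemy and the hypothetical project, dataset, and key path from the example above:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Everything create_engine() would need is encoded in the URI itself.
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'bigquery://some-project/some-dataset'
    '?credentials_path=/some/path/to.json'
    '&location=some-location'
    '&arraysize=1000'
)
db = SQLAlchemy(app)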
To add metadata to a table:
table = Table('mytable',
              ...,
              bigquery_description='my table description',
              bigquery_friendly_name='my table friendly name',
              bigquery_partition_by="partitioning_field",
              bigquery_cluster_by=["cluster_field_1", "cluster_field_2"],
              bigquery_require_partition_filter=False)
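A complete version of the same idea, with hypothetical column names added purely for illustration (the bigquery_* keyword arguments are the ones listed above, and the engine is one created earlier):
from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table

metadata = MetaData()
orders = Table(
    'orders', metadata,  # hypothetical table and columns
    Column('id', Integer),
    Column('customer', String),
    Column('created_at', DateTime),
    bigquery_description='my table description',
    bigquery_friendly_name='my table friendly name',
    bigquery_partition_by='created_at',
    bigquery_cluster_by=['customer', 'id'],
    bigquery_require_partition_filter=False,
)
metadata.create_all(engine)  # emits the CREATE TABLE statement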
To add metadata to a column:
Column('mycolumn', doc='my column description')
Install using:
pip install pybigquery
Load sample tables:
./scripts/load_test_data.sh
This will create a dataset test_pybigquery with tables named sample_one_row and sample.
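To spot-check that the load worked, a quick query sketch (assuming the default project is picked up from the environment):
engine = create_engine('bigquery://')
print(engine.execute('SELECT * FROM test_pybigquery.sample_one_row').fetchall())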
Set up an environment and run tests:
pyvenv .env
source .env/bin/activate
pip install -r dev_requirements.txt
pytest