Python Client for Google BigQuery¶
Querying massive datasets can be time-consuming and expensive without the right hardware and infrastructure. Google BigQuery solves this problem by enabling super-fast SQL queries against append-only tables, using the processing power of Google’s infrastructure.
Quick Start¶
In order to use this library, you first need to go through the following steps:

1. Select or create a Cloud Platform project.
2. Enable billing for your project.
3. Enable the Google Cloud BigQuery API.
4. Set up authentication.
Installation¶
Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.
With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.
Supported Python Versions¶
Python >= 3.5
Deprecated Python Versions¶
Python == 2.7. Python 2.7 support will be removed on January 1, 2020.
Mac/Linux¶
pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-bigquery
Windows¶
pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-cloud-bigquery
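With the package installed in either environment, a quick check like the following (assuming Application Default Credentials are already configured for your project) confirms the import works and a client can be constructed:

from google.cloud import bigquery

# Library version that pip installed.
print(bigquery.__version__)

# Constructing a client fails fast if no default credentials or project are found.
client = bigquery.Client()
print(client.project)  # the project new jobs will run under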
Example Usage¶
Perform a query¶
from google.cloud import bigquery

client = bigquery.Client()

# Perform a query.
QUERY = (
    'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
    'WHERE state = "TX" '
    'LIMIT 100')
query_job = client.query(QUERY)  # API request
rows = query_job.result()  # Waits for query to finish

for row in rows:
    print(row.name)
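Once result() has returned, the finished job also carries query statistics; continuing from the query_job above:

# Statistics are populated after the query completes.
print(query_job.total_bytes_processed)  # bytes scanned by the query
print(query_job.cache_hit)  # True when the result came from BigQuery's query cache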
Note
Because the BigQuery client uses the third-party requests library by default and the BigQuery Storage client uses the grpcio library, both are safe to share instances across threads. In multiprocessing scenarios, the best practice is to create client instances after multiprocessing.Pool or multiprocessing.Process invokes os.fork().
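A minimal sketch of that guidance is shown below, creating the client inside each worker process rather than in the parent; the helper name count_names and the example query are illustrative, not part of the library:

import multiprocessing

from google.cloud import bigquery


def count_names(state):
    # Construct the client in the worker, i.e. after os.fork() has happened.
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("state", "STRING", state)
        ]
    )
    query = (
        "SELECT COUNT(*) AS total "
        "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
        "WHERE state = @state"
    )
    rows = client.query(query, job_config=job_config).result()
    return state, next(iter(rows)).total


if __name__ == "__main__":
    with multiprocessing.Pool(processes=2) as pool:
        for state, total in pool.map(count_names, ["TX", "CA"]):
            print(state, total)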