Most analytical systems support importing data from S3 in Parquet format. See the documentation links below:

BigQuery

To import your data into BigQuery, see the Loading Parquet Data documentation, as well as the documentation on Hive-partitioned loads.
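As a minimal sketch, a BigQuery `LOAD DATA` statement can pull Parquet files from a bucket into a table. The dataset, table, and `gs://` URI below are placeholders (BigQuery loads from Google Cloud Storage, so this assumes the export landed in a GCS bucket):

```sql
-- Sketch only: <dataset>, <table>, <bucket>, and <prefix> are placeholders.
LOAD DATA INTO <dataset>.<table>
FROM FILES (
  format = 'PARQUET',
  uris = ['gs://<bucket>/<prefix>/*.parquet']
);
```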

Snowflake

You can load data into Snowflake from S3 by following the Load from Cloud documentation.
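A minimal sketch of that flow: create an external stage pointing at the bucket, then `COPY INTO` a table. The stage name, table name, bucket path, and credentials are all placeholders:

```sql
-- Sketch only: names, paths, and credentials are placeholders.
CREATE OR REPLACE STAGE my_parquet_stage
  URL = 's3://<bucket>/<prefix>/'
  CREDENTIALS = (AWS_KEY_ID = '<access_key_id>' AWS_SECRET_KEY = '<secret>');

COPY INTO my_table
FROM @my_parquet_stage
FILE_FORMAT = (TYPE = PARQUET)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```

In production, an `IAM` storage integration is generally preferred over inline credentials.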

Redshift

You can COPY data in Parquet format from S3 into Amazon Redshift by following the AWS COPY command documentation.
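A minimal sketch of the COPY command, assuming an IAM role with read access to the bucket; the table name, bucket path, and role ARN are placeholders:

```sql
-- Sketch only: table, path, and IAM role ARN are placeholders.
COPY my_table
FROM 's3://<bucket>/<prefix>/'
IAM_ROLE 'arn:aws:iam::<account_id>:role/<redshift_role>'
FORMAT AS PARQUET;
```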

ClickHouse

You can query data in S3 in Parquet format directly from ClickHouse, without loading it first. For example, if the data is in GCS, you can query it as follows:

```sql
SELECT count(DISTINCT id)
FROM s3('https://storage.googleapis.com/<bucket>/<prefix>/export_id=<export_id>/**',
        'access_key_id', 'access_secret', 'Parquet')
```

See the ClickHouse S3 Integration documentation for more information.

DuckDB

You can query the data from S3 in-memory with SQL using DuckDB, without loading it into a database first. See the S3 Import documentation.
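A minimal DuckDB sketch: load the `httpfs` extension, set S3 credentials, and read the Parquet files directly. The region, credentials, and bucket path are placeholders:

```sql
-- Sketch only: region, credentials, and path are placeholders.
INSTALL httpfs;
LOAD httpfs;

SET s3_region = '<region>';
SET s3_access_key_id = '<access_key_id>';
SET s3_secret_access_key = '<secret>';

SELECT count(DISTINCT id)
FROM read_parquet('s3://<bucket>/<prefix>/**/*.parquet');
```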