GitHub to Parquet in minutes

GitHub

The GitHub API lets you pull detailed information from a repository's entities, such as issues, pull requests, and commits. Entire repositories can be cloned using this connector.
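As a rough illustration (not Estuary's connector code), the sketch below pulls issue data for a placeholder repository straight from the GitHub REST API in Python; the owner and repository names are hypothetical, and a personal access token would be needed for private repositories or higher rate limits:

```python
import requests

# Placeholder repository; swap in your own owner/repo.
OWNER, REPO = "octocat", "hello-world"

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues",
    headers={"Accept": "application/vnd.github+json"},
    params={"state": "all", "per_page": 100},
    timeout=30,
)
resp.raise_for_status()

# Each issue is a detailed JSON document: title, labels, timestamps, author, etc.
for issue in resp.json():
    print(issue["number"], issue["title"], issue["updated_at"])
```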

Parquet

Apache Parquet is an open-source, column-oriented data storage format from the Hadoop ecosystem, designed for fast querying of large datasets. It is routinely used to build highly scalable data lakes that remain efficiently queryable, and it is comparable to other column-oriented file formats available in Hadoop.
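To make the column-oriented idea concrete, here is a minimal sketch using the pyarrow library (an assumption; any Parquet reader and writer behaves similarly) that writes a small table and then reads back only the columns a query needs, which is what keeps scans over large datasets fast:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Toy table with made-up values, stored column by column on disk.
table = pa.table({
    "repo": ["octocat/hello-world", "octocat/spoon-knife"],
    "stars": [2500, 300],
    "updated_at": ["2023-01-01", "2023-02-01"],
})
pq.write_table(table, "repos.parquet", compression="snappy")

# Column pruning: only the requested columns are read from disk.
subset = pq.read_table("repos.parquet", columns=["repo", "stars"])
print(subset)
```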

Estuary helps move data from GitHub to Parquet in minutes with millisecond latency.

Estuary integrates with an ecosystem of free, open-source connectors to extract data from GitHub with low latency, allowing you to replicate that data to various systems for both analytic and operational purposes. GitHub data can be organized into a data lake or loaded into warehouses, databases, or streaming systems.

Data can then be directed to Parquet using materializations, which are also open-source. Connectors push data as quickly as the destination can handle it. Parquet performs best with files of roughly 1 GB each, so even at high data volumes, Flow can keep your data lake up to date in near real time.
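As a rough sketch of why that file sizing matters (assuming pyarrow and a hypothetical incoming_records() stream; this is not Flow's materialization logic), records can be buffered until they approach the target size and then flushed as a single Parquet file:

```python
import pyarrow as pa
import pyarrow.parquet as pq

TARGET_BYTES = 1 << 30  # aim for roughly 1 GB per Parquet file

def flush(rows, index):
    """Write one buffered batch of records as a standalone Parquet file."""
    pq.write_table(pa.Table.from_pylist(rows), f"lake/github-{index:05d}.parquet")

buffer, approx_size, index = [], 0, 0
for record in incoming_records():    # hypothetical stream of captured GitHub documents
    buffer.append(record)
    approx_size += len(str(record))  # crude size estimate, for illustration only
    if approx_size >= TARGET_BYTES:
        flush(buffer, index)
        buffer, approx_size, index = [], 0, index + 1

if buffer:                           # flush any remainder
    flush(buffer, index)
```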

Learn more about your data systems and Estuary.

Free consultation