Decoded GA4 as a Rill source —
read straight from GCS,
skip the warehouse scan.

Rill is built for fast operational dashboards. It runs DuckDB under the hood and prefers to read Parquet files directly. Decode GA4 lands a Parquet copy of your events in GCS on every run, so Rill connects to the bucket and serves dashboards without ever touching a BigQuery slot.

Connection: GCS Parquet
Engine: DuckDB
Template: events_external
Cost: zero BigQuery scan

Rill was designed to read Parquet, not to negotiate UNNESTed event_params from a warehouse export. Decode GA4 hands it the file format it actually wants.

Why most BI tools struggle with the raw GA4 export

The native GA4 BigQuery export is nested. Every event_param sits inside a repeated record, which means any visualisation tool either runs UNNEST on the warehouse for every chart, or it caches a flattened copy somewhere else. Both are slow and both cost money. Rill is no different — point it at the raw export and the metrics view gets messy fast.

Why Rill is different

Rill prefers to operate over flat columnar files. Its DuckDB engine reads Parquet from GCS, S3, or local disk and produces dashboards in the sub-second range. The catch is that GA4 does not give you Parquet. You have to produce it yourself, on a schedule, partitioned correctly, and keep it current.

What changes with Decode GA4

Decode writes a flat, partitioned Parquet copy of your decoded events to a GCS bucket on every run. You point Rill at that bucket. The metrics view references columns directly — event_param.page_location, event_name, geo.country — with no UNNEST and no staging table. BigQuery is still there if you want it, but for operational dashboards you can skip it entirely.

Option A: Point Rill at the raw BigQuery export

Use the BigQuery connector. Every metrics view query then runs UNNEST(event_params) against the raw export. The dashboard works, but every panel costs you BigQuery slot-time, and the metrics YAML becomes hard to read once you have more than a couple of dimensions.

Slow, scans BigQuery on every load
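For a sense of what those UNNEST gymnastics look like, here is a sketch of a Rill source definition over the raw export. The connector keys follow Rill's BigQuery source documentation and may differ by version; the project and dataset IDs are placeholders, not real values.

```yaml
# sources/ga4_raw.yaml -- hypothetical sketch; IDs are placeholders.
# Every dashboard query against this source pays for a BigQuery scan,
# and each extracted parameter needs its own UNNEST subquery.
connector: bigquery
project_id: my-gcp-project
sql: |
  SELECT
    event_date,
    event_name,
    (SELECT value.string_value FROM UNNEST(event_params)
      WHERE key = 'page_location') AS page_location,
    (SELECT value.string_value FROM UNNEST(event_params)
      WHERE key = 'page_referrer') AS page_referrer
  FROM `my-gcp-project.analytics_123456.events_*`
```

Two parameters already need two subqueries; a dashboard with a dozen dimensions turns this SQL block into the unreadable YAML the paragraph above describes.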
Option B: Build your own Parquet exporter

Write a Cloud Function or a scheduled query that flattens GA4 to Parquet and lands it in GCS. Maintain partitioning. Handle late-arriving data. Update the script every time GA4 adds a new parameter. Now you operate a pipeline alongside Rill itself.

A second pipeline to maintain
Option C: Use a dbt staging layer in BigQuery

Run dbt to flatten events, materialise a clean table in BigQuery, then point Rill at that. You get a tidy source, but you are paying for BigQuery storage twice and you are still scanning the warehouse on every dashboard refresh. Not what Rill was built for.

Still paying BigQuery scan costs
Feature                     | Decode GA4 source      | Hand-built Rill source
Storage format Rill reads   | Parquet, partitioned   | Nested BigQuery rows
Query engine on dashboards  | DuckDB on Parquet      | BigQuery slot-time
Cost per dashboard refresh  | Effectively zero       | BigQuery bytes scanned
Schema drift handling       | Auto-detected upstream | Manual SQL update
Time to first dashboard     | Minutes after install  | Days of pipeline work
Pipelines to maintain       | Zero                   | One, and it will break

Install the Marketplace template, point Rill at the GCS bucket, define a metrics view. That is the whole flow.

  1. Subscribe via Google Cloud Marketplace

    Decode GA4 is a single Marketplace listing. Usage-based billing through your existing GCP invoice. The subscription takes about a minute and there is no monthly minimum.

  2. Deploy the events_external template

    The template lands flat, partitioned Parquet files in a GCS bucket you control. The same files back the BigQuery external table — Rill just reads them directly, skipping the warehouse layer.

  3. Install the Rill CLI and start a project

    Run curl -s https://cdn.rilldata.com/install.sh | bash, then rill start my-project. Add a GCS source pointing at the Decode bucket path, with a service account that has Storage Object Viewer.
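A minimal sketch of what that GCS source file might look like. The file name and bucket path are assumptions; use the bucket path from your Decode install, and check the exact key names against the Rill source reference for your version.

```yaml
# sources/decoded_events.yaml -- hypothetical names and paths.
# Point the URI at the bucket the events_external template writes to.
# Credentials resolve via the service account with Storage Object Viewer.
connector: gcs
uri: gs://YOUR-DECODE-BUCKET/events/*.parquet
```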

  4. Define a metrics view

    Reference the source in a YAML metrics view. Pick measures (count, distinct users) and dimensions (page_location, event_name, geo.country) directly from the flat columns. The dashboard is at http://localhost:9009 once rill start my-project is running.
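A sketch of such a metrics view, assuming column names from the decoded schema (event_timestamp, user_pseudo_id are assumptions); verify the keys against the Rill metrics view reference for your version.

```yaml
# metrics/ga4_overview.yaml -- hypothetical sketch.
# Columns are read directly from the flat Parquet schema; no UNNEST.
type: metrics_view
table: decoded_events            # the GCS source defined earlier
timeseries: event_timestamp     # assumption: a timestamp column exists
dimensions:
  - column: event_name
  - column: page_location
measures:
  - name: events
    expression: count(*)
  - name: users
    expression: count(distinct user_pseudo_id)
```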

Wire decoded GA4 into Rill in four steps. None of them are Rill-specific work — it is the same GCS-to-DuckDB pattern Rill uses for every Parquet source.

01 · GCP: Run the Decode GA4 installer with the events_external template. Note the destination GCS bucket path.

02 · GCP: Create a service account with Storage Object Viewer on the Decode GCS bucket.

03 · Rill: Run rill start my-project, then add a GCS source YAML pointing at the bucket.

04 · Rill: Define a metrics view with measures and dimensions, then load localhost:9009 in the running rill start session.

Rill reads the Parquet files directly from your GCS bucket using DuckDB. Nothing leaves your project. BigQuery sits idle for these dashboards — the same Decode events are still there if other tools need them.

01

Sub-second dashboard refresh

DuckDB on partitioned Parquet is fast. Operational dashboards refresh in the time it takes the chart to render, not the time it takes BigQuery to allocate slots.

02

No BigQuery scan costs

Rill reads the GCS bucket, not the warehouse. Storage costs stay where they always were. Compute costs disappear from your BigQuery bill for anything Rill is serving.

03

Direct event parameter columns

page_location, page_referrer, event_name and the rest are flat columns in the Parquet files. Rill metrics views reference them directly — no UNNEST, no nested-record gymnastics.

04

Partitioned by date out of the box

Decode lays the Parquet files out by partition_date. Rill picks the partitioning up automatically and prunes the files it does not need to read for a given dashboard time range.
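If the files land in Hive-style partition_date= folders, the source glob can make that layout explicit. A sketch with a placeholder bucket; the exact pruning behaviour depends on your Rill and DuckDB versions.

```yaml
# Hypothetical URI. DuckDB's Hive-partition handling lets a date
# filter skip folders outside the dashboard's selected time range.
uri: gs://YOUR-DECODE-BUCKET/events/partition_date=*/*.parquet
```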

05

Schema evolution without breakage

When GA4 adds a new event parameter, the next decode run includes it. The Parquet schema picks up the new column. Rill sees it on the next refresh.

06

Local dev that mirrors production

rill start runs the same DuckDB engine against the same GCS files locally. There is no separate staging stack — what works locally is what ships.

01

Operational marketing dashboards

Daily session, source, and landing page views that the marketing team checks every morning. Rill renders them sub-second from the Parquet files. The query never touches BigQuery, so the cost stays predictable regardless of how often anyone reloads.

02

Conversion funnel monitoring

view_item, add_to_cart, begin_checkout, and purchase arrive as plain values in the event_name column. A funnel metrics view in Rill reads the partitioned Parquet, adds a date filter on the dashboard, and gives an analyst something usable in under a minute.
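Those funnel stages can be expressed as filtered aggregates in the metrics view, since DuckDB supports FILTER on aggregate functions. A sketch only: the measure names are made up, and the event_name values assume the standard GA4 ecommerce events.

```yaml
# Hypothetical funnel measures for a Rill metrics view.
# Each stage is a filtered count over the flat event_name column.
measures:
  - name: item_views
    expression: count(*) filter (where event_name = 'view_item')
  - name: add_to_carts
    expression: count(*) filter (where event_name = 'add_to_cart')
  - name: checkouts
    expression: count(*) filter (where event_name = 'begin_checkout')
  - name: purchases
    expression: count(*) filter (where event_name = 'purchase')
```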

03

Embedded customer-facing analytics

Rill is small enough to run as part of a customer-facing tool. With Decode GA4 doing the upstream flattening, you point Rill at a per-tenant Parquet path and ship a tenant-aware dashboard without standing up a warehouse for each one.

Does Rill have to read from GCS, or can it use BigQuery?

It can do either. Rill ships with both connectors. We recommend GCS for operational dashboards — it is what the engine was designed for and it skips your BigQuery bill — but if you have a reason to keep everything in the warehouse, the BigQuery route works too. See setup →

What permissions does Rill need on the Decode bucket?

A GCP service account with Storage Object Viewer on the bucket Decode writes to. Rill reads the credentials in its standard way — environment variable, key file, or workload identity in cloud deploys. Nothing Rill-specific.

How fresh is the data Rill sees?

As fresh as your decode schedule. The default is daily; you can run more often if your use case needs it. Each run lands a new partition in GCS and Rill picks it up on the next dashboard refresh — no cache to invalidate.

Can I deploy this to Rill Cloud, not just locally?

Yes. The same GCS source YAML works in Rill Cloud. You provide the service account credentials in the Cloud project settings and the dashboards build against the same Parquet files. Full prerequisites →

Deploy in under 5 minutes

A Rill dashboard,
without the warehouse tax.

Subscribe via Google Cloud Marketplace, point the events_external template at a GCS bucket, and have Rill serving sub-second dashboards from decoded Parquet before lunch.

Get Started on Marketplace → Read the documentation

Google Cloud Marketplace · Usage-based · No monthly minimum