OpenCost Deployment for Google Cloud
This guide explains how to deploy OpenCost for a GKE cluster.
Prerequisites
Before you begin, ensure you have the following:
Infrastructure:
A GKE cluster in Standard mode (not Autopilot*) with Workload Identity enabled
Required cloud permissions (GCP IAM roles):
- roles/container.admin – required for GKE administration
- roles/iam.serviceAccountAdmin – required to manage service accounts
- roles/storage.admin – required to manage the GCS bucket
- roles/apikeys.admin – required to manage API keys
Required tools:
- Google Cloud CLI (gcloud)
- kubectl
- Helm (v3.x or later)
*Autopilot is not supported due to the managed Prometheus configuration.
Step 1. Set up access
Create a service account
In the Google Cloud console, navigate to IAM & Admin → Service Accounts.
Click +Create service account.
Set the following:
- Name: opencost-sa (the service account name used in this guide)
- Description: Service account for OpenCost
Assign the required role:
roles/compute.viewer
Click Done to create the account.
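The console steps above can also be sketched with the gcloud CLI. The name and description match this guide; `<GCP_PROJECT_ID>` is a placeholder for your project:

```shell
# Create the Google service account used by OpenCost
gcloud iam service-accounts create opencost-sa \
  --display-name="Service account for OpenCost" \
  --project <GCP_PROJECT_ID>

# Grant the required role at the project level
gcloud projects add-iam-policy-binding <GCP_PROJECT_ID> \
  --member="serviceAccount:opencost-sa@<GCP_PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/compute.viewer"
```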
Configure workload identity
Return to the Service Accounts list. Open the newly created service account.
Select the tab ‘Principals with access’. Click Grant access.
Add the following principal:
<YOUR_GCP_PROJECT_ID>.svc.id.goog[opencost/opencost-sa], where <YOUR_GCP_PROJECT_ID> is a placeholder for your GCP project ID and opencost-sa is the Kubernetes service account in the opencost namespace (created later by the OpenCost Helm chart). Assign the Workload Identity User role and save.
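Equivalently, the Workload Identity binding can be created from the CLI (a sketch; `<GCP_PROJECT_ID>` is a placeholder):

```shell
# Allow the Kubernetes service account opencost/opencost-sa to impersonate
# the Google service account via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  opencost-sa@<GCP_PROJECT_ID>.iam.gserviceaccount.com \
  --member="serviceAccount:<GCP_PROJECT_ID>.svc.id.goog[opencost/opencost-sa]" \
  --role="roles/iam.workloadIdentityUser" \
  --project <GCP_PROJECT_ID>
```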
Create a service account key
Navigate back to the Service Accounts list. Click on Actions (three-dot) menu → Manage keys.
Click Add Key → Create new key.
Select JSON and click Create.
Download and store the key securely. The key will be required later when creating the Kubernetes secret.
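If you prefer the CLI, a key can be created and downloaded in one step (the output path here is an example; store the file securely):

```shell
# Create and download a JSON key for the service account
gcloud iam service-accounts keys create ./service-account-key.json \
  --iam-account=opencost-sa@<GCP_PROJECT_ID>.iam.gserviceaccount.com \
  --project <GCP_PROJECT_ID>
```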
Create an API key
Navigate to APIs & Services → Credentials.
Click +Create Credentials and select API key.
Copy the generated key.
From the Actions (three-dot) menu of the new key, select Edit API key.
Under API restrictions, select Restrict key. Check the box ‘Cloud Billing API’ (cloudbilling.googleapis.com) → OK. Click Save.
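For reference, a restricted API key can also be created from the CLI. This is a sketch and assumes the API Keys service is enabled in your project and that your gcloud version includes the `services api-keys` command group:

```shell
# Enable the API Keys API (one-time, per project)
gcloud services enable apikeys.googleapis.com --project <GCP_PROJECT_ID>

# Create an API key restricted to the Cloud Billing API
gcloud services api-keys create \
  --display-name="opencost" \
  --api-target=service=cloudbilling.googleapis.com \
  --project <GCP_PROJECT_ID>
```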
Step 2. Set up the storage bucket
Navigate to Cloud Storage → Buckets.
Click +Create.
Configure the bucket:
- Name: e.g., opencost-bucket (the storage bucket name used in this guide)
- Location type: Region
- Storage class: Standard
Leave the remaining settings as is.
Click Create.
Navigate to Permissions → + Grant Access.
- Principal: opencost-sa@<YOUR_GCP_PROJECT_ID>.iam.gserviceaccount.com, where <YOUR_GCP_PROJECT_ID> is a placeholder for your GCP project ID
- Role: Storage Object User
Click Save.
Note that the storage bucket name must be globally unique.
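The bucket creation and permission grant above can also be sketched with the gcloud CLI (placeholders as elsewhere in this guide; pick a region and a globally unique bucket name):

```shell
# Create the bucket (Standard storage class, single region)
gcloud storage buckets create gs://opencost-bucket \
  --location=<GCP_REGION> \
  --default-storage-class=STANDARD \
  --project <GCP_PROJECT_ID>

# Grant the service account object access on the bucket
gcloud storage buckets add-iam-policy-binding gs://opencost-bucket \
  --member="serviceAccount:opencost-sa@<GCP_PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/storage.objectUser"
```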
Step 3. Create the cluster connection file
Navigate to Kubernetes Engine → Clusters.
Click the Actions (three-dot) menu next to the cluster and select Connect.
Copy the gcloud command and run it in your terminal. This updates your kubeconfig file with the credentials needed to access the cluster.
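The copied command typically has the following shape (a sketch; substitute your cluster name, region, and project):

```shell
# Fetch cluster credentials and update kubeconfig
gcloud container clusters get-credentials <CLUSTER_NAME> \
  --region <GCP_REGION> \
  --project <GCP_PROJECT_ID>
```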
Step 4. Deploy Prometheus
Add the Prometheus Helm repository:
```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
```

Create the Prometheus namespace:

```shell
kubectl create namespace prometheus-system
```

Install Prometheus:

```shell
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus-system \
  --set prometheus-pushgateway.enabled=false \
  --set alertmanager.enabled=false
```
If you use an external Prometheus instance, skip this step and review the notes in Step 5.
Step 5. Deploy OpenCost
Add the OpenCost Helm repository:
```shell
helm repo add opencost https://opencost.github.io/opencost-helm-chart
```

Create the OpenCost namespace:

```shell
kubectl create namespace opencost
```

Create a Kubernetes secret with the service account key. Replace /path/to/your/service-account-key.json with your file path.

```shell
kubectl create secret generic google-application-credentials \
  --from-file=config.json=/path/to/your/service-account-key.json \
  --namespace opencost
```

Install OpenCost. Replace <YOUR_PROJECT_ID> and <YOUR_API_KEY> in the command below:

```shell
helm --namespace opencost upgrade --install opencost opencost/opencost -f - <<EOF
podSecurityContexts:
  runAsUser: 1001
  runAsGroup: 1001
  fsGroup: 1001
extraVolumes:
  - name: configs
    emptyDir: {}
serviceAccount:
  create: true
  name: opencost-sa
  annotations:
    iam.gke.io/gcp-service-account: "opencost-sa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com"
opencost:
  prometheus:
    namespaceName: prometheus-system
  exporter:
    cloudProviderApiKey: <YOUR_API_KEY>
    extraVolumeMounts:
      - mountPath: /var/configs
        name: configs
        readOnly: false
podAnnotations:
  prometheus.io/path: /metrics
  prometheus.io/port: "9003"
  prometheus.io/scrape: "true"
EOF
```
The podAnnotations ensure that Prometheus scrapes metrics from the OpenCost pods, enabling label-based cost attribution in reports.
If your cluster already has an external Prometheus, configure OpenCost to use the existing Prometheus instance by setting the following Helm values (replace <YOUR_PROMETHEUS_URL> with the correct endpoint):
```yaml
opencost:
  prometheus:
    external:
      enabled: true
      url: <YOUR_PROMETHEUS_URL>
```
If your cluster uses prometheus-operator, enable ServiceMonitor; otherwise OpenCost metrics will not be scraped automatically. Set the following Helm values:
```yaml
opencost:
  metrics:
    serviceMonitor:
      enabled: true
```
Step 6. Deploy the Parquet exporter
Install the OpenCost Parquet Exporter.
To store reports in subfolders or use a naming prefix, use the command below, replacing <YOUR_CLUSTER_PREFIX>, <YOUR_GKE_CLUSTER_ID>* and opencost-bucket (the storage bucket name used in this guide) with the appropriate values.

```shell
helm install parquet-exporter opencost/opencost-parquet-exporter \
  --namespace opencost \
  --set schedule="0 */12 * * *" \
  --set existingServiceAccount=opencost-sa \
  --values - <<EOF
resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 100Mi
env:
  - name: OPENCOST_PARQUET_SVC_HOSTNAME
    value: "opencost.opencost.svc.cluster.local"
  - name: OPENCOST_PARQUET_STORAGE_BACKEND
    value: "gcp"
  - name: OPENCOST_PARQUET_FILE_KEY_PREFIX
    value: "<YOUR_CLUSTER_PREFIX>/<YOUR_GKE_CLUSTER_ID>"
  - name: OPENCOST_PARQUET_JSON_SEPARATOR
    value: "_"
  - name: OPENCOST_PARQUET_GCP_BUCKET_NAME
    value: "opencost-bucket"
  - name: OPENCOST_PARQUET_GCP_CREDENTIALS_JSON
    valueFrom:
      secretKeyRef:
        name: google-application-credentials
        key: config.json
EOF
```
*Note that for <YOUR_GKE_CLUSTER_ID>, Cloudaware expects the value in the format projects/<YOUR_PROJECT_ID>/zones/<YOUR_ZONE>/clusters/<YOUR_CLUSTER_NAME>. To retrieve this value, run the following command:

```shell
gcloud container clusters describe <CLUSTER_NAME> \
  --project <GCP_PROJECT_ID> \
  --region <GCP_REGION> \
  --format="value(selfLink)" | sed 's|.*/projects/|projects/|'
```
To store reports in the root folder, use the command below, replacing <YOUR_GKE_CLUSTER_ID>* and opencost-bucket (the storage bucket name used in this guide) with the appropriate values.

```shell
helm install parquet-exporter opencost/opencost-parquet-exporter \
  --namespace opencost \
  --set schedule="0 */12 * * *" \
  --set existingServiceAccount=opencost-sa \
  --values - <<EOF
resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 100Mi
env:
  - name: OPENCOST_PARQUET_SVC_HOSTNAME
    value: "opencost.opencost.svc.cluster.local"
  - name: OPENCOST_PARQUET_STORAGE_BACKEND
    value: "gcp"
  - name: OPENCOST_PARQUET_FILE_KEY_PREFIX
    value: "<YOUR_GKE_CLUSTER_ID>/"
  - name: OPENCOST_PARQUET_JSON_SEPARATOR
    value: "_"
  - name: OPENCOST_PARQUET_GCP_BUCKET_NAME
    value: "opencost-bucket"
  - name: OPENCOST_PARQUET_GCP_CREDENTIALS_JSON
    valueFrom:
      secretKeyRef:
        name: google-application-credentials
        key: config.json
EOF
```
*Note that for <YOUR_GKE_CLUSTER_ID>, Cloudaware expects the value in the format projects/<YOUR_PROJECT_ID>/zones/<YOUR_ZONE>/clusters/<YOUR_CLUSTER_NAME>. To retrieve this value, run the following command:

```shell
gcloud container clusters describe <CLUSTER_NAME> \
  --project <GCP_PROJECT_ID> \
  --region <GCP_REGION> \
  --format="value(selfLink)" | sed 's|.*/projects/|projects/|'
```
For multi-cluster environments, deploy the Parquet exporter separately on each cluster.
Step 7. Verify the deployment
Check that all pods are running:
```shell
kubectl get pods -n prometheus-system
kubectl get pods -n opencost
```

Access the OpenCost UI:

```shell
kubectl port-forward -n opencost service/opencost 9003:9003
```

Then open http://localhost:9003 in your browser.
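With the port-forward from the previous step still running, you can also spot-check the OpenCost allocation API directly (a sketch; the `window` parameter selects the reporting window):

```shell
# Query cost allocation for the last day via the forwarded port
curl -s "http://localhost:9003/allocation?window=1d"
```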
The first Parquet export may take up to 24 hours. To verify the export, check your storage bucket for newly created files.
Next steps
Return to the parent guide to proceed with the Kubernetes Billing setup in Cloudaware.