OpenCost Deployment for AWS
This guide explains how to deploy OpenCost for an Amazon EKS cluster.
Prerequisites
Before starting the deployment, ensure you have the following:
Infrastructure & permissions:
Amazon EKS cluster
Administrator access to the cluster
Tools:
aws CLI tool
kubectl CLI tool
helm CLI tool (v3.x or later)
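As a quick sanity check, you can confirm the tools are installed and on your PATH (output will vary with the installed versions):
```
aws --version
kubectl version --client
helm version --short
```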
Step 1. Set up S3 Bucket
Create an S3 bucket for OpenCost Parquet exports:
In AWS, navigate to the S3 console.
Click Create bucket.
Set Bucket name, e.g., opencost-exports. The name* must be unique and referenced consistently in later steps.
Leave the default settings as they are and click Create bucket.
The bucket must be created in the same AWS Region as your Amazon EKS cluster.
*To allow Cloudaware to read files from this bucket, update the Cloudaware billing policy in your AWS Account by specifying the bucket ARN.
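If you prefer the AWS CLI over the console, a bucket can also be created with a command like the sketch below. The bucket name and Region are placeholders; use your own values.
```
# Example only: replace the bucket name and Region with your own values.
aws s3api create-bucket \
  --bucket opencost-exports \
  --region us-east-1
# For Regions other than us-east-1, also add:
#   --create-bucket-configuration LocationConstraint=<your-region>
```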
Step 2. Set up IAM Role for Service Account
Create an IAM role that the OpenCost pod will assume to write data to S3.
Retrieve your Cluster’s OIDC Provider URL.
Navigate to the Amazon Elastic Kubernetes Service console -> Clusters. Select your cluster.
In the 'Overview' tab under the 'Details' section, locate the 'OpenID Connect provider URL' field. Save this value.
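The same value can be retrieved with the AWS CLI; the cluster name below is a placeholder:
```
# Prints the OIDC issuer URL, e.g. https://oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXX
aws eks describe-cluster \
  --name <YOUR_EKS_CLUSTER_NAME> \
  --query "cluster.identity.oidc.issuer" \
  --output text
```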
Create the IAM Policy. This policy grants permission for OpenCost to write to your S3 bucket.
Navigate to the IAM console -> Policies -> Create policy.
Switch to the 'JSON' tab and paste the following policy, replacing <YOUR_OPENCOST_S3_BUCKET_NAME> with your S3 bucket name:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<YOUR_OPENCOST_S3_BUCKET_NAME>/*"
    }
  ]
}
```
Name the policy opencost-s3-policy and create it.
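If you are scripting the setup, the same policy can be created with the AWS CLI from a local copy of the JSON above (the file name is illustrative):
```
# Assumes the policy JSON above was saved locally as opencost-s3-policy.json
aws iam create-policy \
  --policy-name opencost-s3-policy \
  --policy-document file://opencost-s3-policy.json
```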
Create the IAM Role.
Go back to the IAM console -> Roles -> Create role.
For Trusted entity type, select Web identity.
For Identity provider, choose the OIDC provider retrieved earlier (e.g., oidc.eks.us-east-1.amazonaws.com/id/YOUR_OIDC_ID).
For Audience, use sts.amazonaws.com.
Click Add condition and define the following:
Key: your EKS OIDC provider (the :sub condition key)
Condition: StringEquals
Value: system:serviceaccount:opencost:opencost-aws
Attach the opencost-s3-policy policy created in the previous step.
Name the role opencost-eks-role and create it.
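For reference, an equivalent role can be created from the CLI. The trust policy below is a sketch: the account ID, Region, and OIDC provider ID are placeholders, and the condition mirrors the console settings above.
```
# Placeholders: 123456789012 (account ID), us-east-1 (Region), <YOUR_OIDC_ID>
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/<YOUR_OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/<YOUR_OIDC_ID>:aud": "sts.amazonaws.com",
          "oidc.eks.us-east-1.amazonaws.com/id/<YOUR_OIDC_ID>:sub": "system:serviceaccount:opencost:opencost-aws"
        }
      }
    }
  ]
}
EOF

# Create the role with the trust policy, then attach the S3 policy created earlier
aws iam create-role \
  --role-name opencost-eks-role \
  --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
  --role-name opencost-eks-role \
  --policy-arn arn:aws:iam::123456789012:policy/opencost-s3-policy
```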
Step 3. Deploy Prometheus
Add the Prometheus Helm repository:
```
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
```
Create the Prometheus namespace:
```
kubectl create namespace prometheus-system
```
Install Prometheus:
```
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus-system \
  --set prometheus-pushgateway.enabled=false \
  --set alertmanager.enabled=false
```
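Before moving on, you can wait for the Prometheus server to become ready. The deployment name below assumes the default naming of the prometheus-community/prometheus chart with the release name prometheus:
```
# Blocks until the Prometheus server deployment reports a successful rollout
kubectl -n prometheus-system rollout status deployment/prometheus-server
```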
Step 4. Deploy OpenCost
Add the OpenCost Helm repository:
```
helm repo add opencost https://opencost.github.io/opencost-helm-chart
```
Create the OpenCost namespace:
```
kubectl create namespace opencost
```
Install OpenCost with an IRSA-enabled service account. The service account is annotated with the IAM role ARN created in the previous step; replace <YOUR_IAM_ROLE_ARN> with the ARN of the opencost-eks-role (e.g., arn:aws:iam::123456789012:role/opencost-eks-role) in the command below.
```
helm --namespace opencost install opencost opencost/opencost -f - <<EOF
serviceAccount:
  create: true
  name: opencost-aws
  annotations:
    eks.amazonaws.com/role-arn: "<YOUR_IAM_ROLE_ARN>"
opencost:
  prometheus:
    namespaceName: prometheus-system
podAnnotations:
  prometheus.io/path: /metrics
  prometheus.io/port: "9003"
  prometheus.io/scrape: "true"
EOF
```
The podAnnotations ensure that Prometheus scrapes metrics from the OpenCost pods, enabling label-based metadata in reports.
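To confirm that IRSA is wired up, you can inspect the service account annotation and check that EKS injected the AWS credentials environment variables into the OpenCost pod. The label selector below assumes the chart's default labels:
```
# Shows the eks.amazonaws.com/role-arn annotation on the service account
kubectl -n opencost get serviceaccount opencost-aws -o yaml

# AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE should appear in the pod environment
kubectl -n opencost describe pod -l app.kubernetes.io/name=opencost | grep -E "AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE"
```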
Step 5. Deploy Parquet Exporter
Install the component that exports OpenCost data to S3. The same service account (opencost-aws) will be used.
To store reports in subfolders or use a naming prefix, use the command below, replacing <YOUR_CLUSTER_PREFIX>, <YOUR_OPENCOST_S3_BUCKET_NAME> and <YOUR_CLUSTER_ARN> with the appropriate values.
```
helm install parquet-exporter opencost/opencost-parquet-exporter \
  --namespace opencost \
  --set schedule="0 */12 * * *" \
  --set existingServiceAccount=opencost-aws \
  --values - <<EOF
resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 100Mi
env:
  - name: OPENCOST_PARQUET_SVC_HOSTNAME
    value: "opencost.opencost.svc.cluster.local"
  - name: OPENCOST_PARQUET_STORAGE_BACKEND
    value: "aws"
  - name: OPENCOST_PARQUET_S3_BUCKET
    value: "<YOUR_OPENCOST_S3_BUCKET_NAME>"
  - name: OPENCOST_PARQUET_FILE_KEY_PREFIX
    value: "<YOUR_CLUSTER_PREFIX>/<YOUR_CLUSTER_ARN>"
  - name: OPENCOST_PARQUET_JSON_SEPARATOR
    value: "_"
EOF
```
To store reports in the root folder, use the command below, replacing <YOUR_OPENCOST_S3_BUCKET_NAME> and <YOUR_CLUSTER_ARN> with the appropriate values.
```
helm install parquet-exporter opencost/opencost-parquet-exporter \
  --namespace opencost \
  --set schedule="0 */12 * * *" \
  --set existingServiceAccount=opencost-aws \
  --values - <<EOF
resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 100Mi
env:
  - name: OPENCOST_PARQUET_SVC_HOSTNAME
    value: "opencost.opencost.svc.cluster.local"
  - name: OPENCOST_PARQUET_STORAGE_BACKEND
    value: "aws"
  - name: OPENCOST_PARQUET_S3_BUCKET
    value: "<YOUR_OPENCOST_S3_BUCKET_NAME>"
  - name: OPENCOST_PARQUET_FILE_KEY_PREFIX
    value: "/<YOUR_CLUSTER_ARN>"
  - name: OPENCOST_PARQUET_JSON_SEPARATOR
    value: "_"
EOF
```
*For multi-cluster environments, deploy the Parquet Exporter separately on each cluster.
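The exporter runs as a Kubernetes CronJob on the schedule set above (every 12 hours). If you want to check it, or trigger an ad-hoc export without waiting for the schedule, commands like the following can be used; the CronJob name is a placeholder, use the name returned by the first command:
```
# List the exporter CronJob created by the chart
kubectl -n opencost get cronjobs

# Trigger a one-off export run (replace <CRONJOB_NAME> with the actual name)
kubectl -n opencost create job --from=cronjob/<CRONJOB_NAME> parquet-export-manual
kubectl -n opencost logs job/parquet-export-manual
```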
Step 6. Verify Deployment
Verify that all pods are running:
```
kubectl get pods -n prometheus-system
kubectl get pods -n opencost
```
Access the OpenCost UI using port-forwarding:
```
kubectl port-forward -n opencost service/opencost 9090:9090
```
Open http://localhost:9090 in your browser.
The first Parquet export may take up to 24 hours. To verify the export, check your S3 bucket for newly created files.
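One way to check from the command line (the bucket name is a placeholder):
```
# Lists all objects in the export bucket, including any prefix/subfolders
aws s3 ls s3://<YOUR_OPENCOST_S3_BUCKET_NAME>/ --recursive
```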
Next Steps
Return to the parent guide to proceed with the Kubernetes Billing setup in Cloudaware.