The cloudwatch-exporter pod is using kube2iam to assume an AWS role with
the "AmazonRDSFullAccess" policy:
iam.amazonaws.com/role: k8s-ns-cloudwatch-exporter-role
  config.yml: |-
    ---
    region: us-east-1
    delay_seconds: 30
    metrics:
    - aws_namespace: AWS/RDS
      aws_metric_name: CPUUtilization
      aws_dimensions: [DBInstanceIdentifier]
      aws_statistics: [Average]
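For completeness, the role the pod assumes should grant at least the CloudWatch read actions the exporter uses (plus the Resource Groups Tagging API, since the output below shows tagging requests). A minimal policy sketch of what I believe is required (I'm not certain AmazonRDSFullAccess alone covers all of these):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData",
        "tag:GetResources"
      ],
      "Resource": "*"
    }
  ]
}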
When I run the AWS CLI with the same role inside a Kubernetes pod, the metric is clearly there:
bash-4.2# aws cloudwatch list-metrics --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=some-pg
{
"Metrics": [
{
"Namespace": "AWS/RDS",
"MetricName": "CPUUtilization",
"Dimensions": [
{
"Name": "DBInstanceIdentifier",
"Value": "some-pg"
}
]
}
]
}
localhost:9104/metrics only shows:
# HELP cloudwatch_requests_total API requests made to CloudWatch
# TYPE cloudwatch_requests_total counter
cloudwatch_requests_total{action="listMetrics",namespace="AWS/RDS",} 47.0
# HELP aws_resource_info AWS information available for resource
# TYPE aws_resource_info gauge
# HELP cloudwatch_exporter_scrape_duration_seconds Time this CloudWatch scrape took, in seconds.
# TYPE cloudwatch_exporter_scrape_duration_seconds gauge
cloudwatch_exporter_scrape_duration_seconds 0.068557439
# HELP cloudwatch_exporter_scrape_error Non-zero if this scrape failed.
# TYPE cloudwatch_exporter_scrape_error gauge
cloudwatch_exporter_scrape_error 0.0
# HELP tagging_api_requests_total API requests made to the Resource Groups Tagging API
# TYPE tagging_api_requests_total counter
while I was expecting something like:
aws_rds_cpuutilization_average
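(That's the name I'd expect from the exporter's usual convention of snake-casing the namespace, metric name, and statistic; roughly this simplified conversion, which is my approximation rather than the exporter's exact code:)

```python
import re

def exporter_metric_name(namespace: str, metric_name: str, statistic: str) -> str:
    """Approximate cloudwatch_exporter naming: lowercase, with
    non-alphanumeric runs collapsed to single underscores."""
    def snake(s: str) -> str:
        return re.sub(r"[^A-Za-z0-9]+", "_", s).lower().strip("_")
    return f"{snake(namespace)}_{snake(metric_name)}_{snake(statistic)}"

print(exporter_metric_name("AWS/RDS", "CPUUtilization", "Average"))
# aws_rds_cpuutilization_average
```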
What am I missing?