Spark leverages the Hadoop S3A connector:
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/connecting.html
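If you want in-process counters rather than AWS-side metrics, one option is the per-scheme storage statistics that the S3A connector registers with Hadoop's GlobalStorageStatistics registry. A minimal sketch, assuming a live SparkSession named `spark` and noting that each executor keeps its own counters (the driver's registry only reflects driver-side I/O), and that statistic names such as "object_put_requests" vary by Hadoop version:

```python
def s3a_counters(stats_iter):
    """Collect {name: value} from an iterator of Hadoop
    StorageStatistics LongStatistic-style objects (each exposing
    getName() and getValue())."""
    return {s.getName(): s.getValue() for s in stats_iter}

# With a real SparkSession you might reach the JVM-side registry via py4j:
#   jvm = spark._jvm
#   stats = jvm.org.apache.hadoop.fs.GlobalStorageStatistics.INSTANCE.get("s3a")
#   if stats is not None:  # None until an s3a:// path has been touched
#       counters = s3a_counters(stats.getLongStatistics())
```

This only sees I/O performed in the JVM whose registry you query, so it is most useful for driver-side checks or for debugging a single executor.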
Specifics of the S3-side metrics are documented by AWS:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/metrics-dimensions.html
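On the AWS side, per-request counts (GetRequests, PutRequests, etc.) come from CloudWatch request metrics, which only exist after you enable a metrics configuration on the bucket. A minimal sketch of building the CloudWatch query, assuming such a configuration exists; the bucket name and filter id here are placeholders:

```python
from datetime import datetime, timedelta, timezone

def s3_request_metric_query(bucket, filter_id, metric="GetRequests", hours=1):
    """Build the kwargs for CloudWatch get_metric_statistics for an S3
    request metric. Request metrics use the BucketName and FilterId
    dimensions; FilterId comes from the bucket's metrics configuration."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/S3",
        "MetricName": metric,
        "Dimensions": [
            {"Name": "BucketName", "Value": bucket},
            {"Name": "FilterId", "Value": filter_id},
        ],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 60,
        "Statistics": ["Sum"],
    }

# With boto3 you would then run the query against a real account:
#   import boto3
#   cw = boto3.client("cloudwatch")
#   resp = cw.get_metric_statistics(
#       **s3_request_metric_query("my-bucket", "EntireBucket"))
```

Note these count S3 API requests, not files: a single large object read through S3A can issue several GET requests.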
Hope this helps.
Best Regards
Soumasish Goswam
Hi,
I was looking for metrics specifying how many objects ("files") were read /
written when using Spark over S3.
The metrics listed at [
https://spark.apache.org/docs/3.5.1/monitoring.html#component-instance--executor]
do not include a count of objects written to / read from S3.
I do see the Hadoop de