Hi Daren, thanks for the FLIP.

Just a couple of questions and comments:

> Usable in both DataStream and Table API/SQL
What about the Python API? This is something we should consider ahead of
time, since the abstract element converter doesn't have a Flink type
mapping that can be used from Python; this is an issue we faced with the
DynamoDB connector before.

> Therefore, the connector will provide a CloudWatchMetricInput model that
user can use to pass as input to the connector. For example, in DataStream
API, it could be a MapFunction called just before passing to the sink as
follows:
I am not quite sure I follow: are you suggesting we introduce a specific
new converter class, or leave that to users? Also, since you mentioned
FLIP-171, are you suggesting we implement this sink as an extension of the
Async Sink? In that case it is not clear to me how the map function would
be used alongside the AsyncSink ElementConverter.
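To make the question concrete, here is how I currently read the proposal. All of these types are stand-ins based on the FLIP text, not real classes from the SDK or the connector, so please correct me where I have it wrong:

```java
// Hypothetical stand-in types based on my reading of the FLIP; none of
// these exist in the AWS SDK v2 or in flink-connector-aws today.
class Sample {
    String metricName;
    double value;
    Sample(String metricName, double value) {
        this.metricName = metricName;
        this.value = value;
    }
}

class CloudWatchMetricInput {
    String metricName;
    double value;
    CloudWatchMetricInput(String metricName, double value) {
        this.metricName = metricName;
        this.value = value;
    }
}

public class ConverterSketch {
    // Step 1: the user-supplied MapFunction from the FLIP, applied upstream
    // of the sink.
    static CloudWatchMetricInput map(Sample s) {
        return new CloudWatchMetricInput(s.metricName, s.value);
    }

    // Step 2: if the sink extends the FLIP-171 Async Sink, an
    // ElementConverter<CloudWatchMetricInput, MetricDatum> would still be
    // needed internally -- which is why I am asking whether the MapFunction
    // replaces it or merely feeds it.
    public static void main(String[] args) {
        CloudWatchMetricInput in = map(new Sample("latency", 42.0));
        System.out.println(in.metricName + "=" + in.value);
    }
}
```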

> public class SampleToCloudWatchMetricInputMapper implements MapFunction<Sample, CloudWatchMetricInput>

Is CloudWatchMetricInput a newly introduced model class? I couldn't find it
in SDK v2. If we are introducing it, it would be useful to add it to the
FLIP, since it is part of the public API.


> Supports both Bounded (Batch) and Unbounded (Streaming)

How do you propose to handle them differently? I can't find anything
specific about this in the FLIP.

Regarding the Table API:

> 'metric.dimension.keys' = 'cw_dim',

I am not in favor of this, as it will complicate schema validation on table
creation. Maybe we can instead use the whole schema as dimensions,
excluding the values and the count. Let me know your thoughts here.

> 'metric.name.key' = 'cw_metric_name',

So we are making the metric name part of the row data? Have we considered
instead having one table map to one metric, rather than to a namespace? It
might be easier to enforce validations on the dimension schema that way. Of
course, this would probably require introducing some intermediate class in
the model to hold the dimensions, values, and counts without the metric
name and namespace, which we would extract from the sink definition
instead. Let me know your thoughts here.
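For illustration, the intermediate class I have in mind would look roughly like this. All names here are made up and nothing below exists in the connector today:

```java
import java.util.Map;

// Hypothetical intermediate model for the one-table-per-metric
// alternative: the row carries only dimensions, value and count, while
// metric name and namespace come from the sink (table) definition.
public class MetricRowSketch {
    final Map<String, String> dimensions;
    final double value;
    final double count;

    MetricRowSketch(Map<String, String> dimensions, double value, double count) {
        this.dimensions = dimensions;
        this.value = value;
        this.count = count;
    }

    public static void main(String[] args) {
        MetricRowSketch row =
                new MetricRowSketch(Map.of("az", "us-east-1a"), 12.5, 1.0);
        // The metric name/namespace would be supplied by the sink, e.g.
        // from WITH ('metric.name' = 'latency', 'metric.namespace' = 'app'),
        // which is what would let us validate the dimension schema upfront.
        System.out.println(row.dimensions + " value=" + row.value);
    }
}
```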


> `cw_value` BIGINT,
Are we going to allow all numeric types for values?
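If we do, it is worth noting in the FLIP that CloudWatch's MetricDatum carries values as a Double (as far as I can tell from SDK v2), so every numeric SQL type would be widened, with possible precision loss for large BIGINTs. A quick sketch of what I mean:

```java
// Sketch only: if MetricDatum takes a Double, any numeric SQL type
// (BIGINT, INT, DOUBLE, DECIMAL, ...) has to be widened to double.
public class ValueCoercionSketch {
    static double toCloudWatchValue(Number n) {
        return n.doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(toCloudWatchValue(42L)); // 42.0

        // Longs above 2^53 are not exactly representable as doubles,
        // so a BIGINT value can silently lose precision:
        long big = (1L << 53) + 1;
        System.out.println((long) toCloudWatchValue(big) == big); // false
    }
}
```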

> protected void submitRequestEntries(
>         List<MetricDatum> requestEntries,
>         Consumer<List<MetricDatum>> requestResult)

nit: this method signature should be deprecated after 1.20. I hope the repo
is upgraded by the time we implement this.

> Error Handling
Aside from poison pills, what error handling are you suggesting? Are we
following in the footsteps of the other AWS connectors with error
classification? Is there any effort to abstract it on the AWS side?

And on the topic of poison pills: if I understand correctly, this has been
discussed for a while. It of course breaks the at-least-once semantics and
might be confusing to users. Additionally, since the CloudWatch API fails
the full batch, how do you suggest we identify the poison pills? I am
personally in favor of global failures in this case, but would love to hear
the feedback here.
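For reference, one way to isolate poison pills despite all-or-nothing batch failures would be to bisect the rejected batch, at the cost of extra API calls. This is only a sketch; `putBatch` is a stand-in for the actual PutMetricData call, and I am not advocating this over global failures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of isolating poison pills when the service rejects a whole
// batch: recursively bisect the batch until the failing entries are
// singletons. Worst case this costs O(k log n) extra calls for k poison
// entries in a batch of n.
public class BisectSketch {
    static <T> List<T> findPoison(List<T> batch, Predicate<List<T>> putBatch) {
        List<T> poison = new ArrayList<>();
        if (putBatch.test(batch)) {
            return poison; // whole batch accepted, nothing poisonous here
        }
        if (batch.size() == 1) {
            poison.addAll(batch); // isolated a single poison pill
            return poison;
        }
        int mid = batch.size() / 2;
        poison.addAll(findPoison(batch.subList(0, mid), putBatch));
        poison.addAll(findPoison(batch.subList(mid, batch.size()), putBatch));
        return poison;
    }

    public static void main(String[] args) {
        // Pretend negative values are poison and the API rejects any
        // batch containing one.
        Predicate<List<Integer>> fakePut = b -> b.stream().allMatch(v -> v >= 0);
        System.out.println(findPoison(List.of(1, -2, 3, -4), fakePut));
    }
}
```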



Best Regards
Ahmed Hamdy


On Mon, 7 Apr 2025 at 11:29, Wong, Daren <daren...@amazon.co.uk.invalid>
wrote:

> Hi Dev,
>
> I would like to start a discussion about FLIP: Amazon CloudWatch Metric
> Sink Connector
> https://docs.google.com/document/d/1G2sQogV8S6M51qeAaTmvpClOSvklejjEXbRFFCv_T-c/edit?usp=sharing
>
> This FLIP is proposing to add support for Amazon CloudWatch Metric sink in
> flink-connector-aws repo. Looking forward to your feedback, thank you
>
> Regards,
> Daren
>
