[ https://issues.apache.org/jira/browse/FLINK-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296706#comment-15296706 ]

ASF GitHub Bot commented on FLINK-3923:
---------------------------------------

Github user rmetzger commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2016#discussion_r64257599
  
    --- Diff: flink-streaming-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java ---
    @@ -160,12 +168,13 @@ public void setCustomPartitioner(KinesisPartitioner<OUT> partitioner) {
        public void open(Configuration parameters) throws Exception {
                super.open(parameters);
     
    -           KinesisProducerConfiguration config = new KinesisProducerConfiguration();
    -           config.setRegion(this.region);
    -           config.setCredentialsProvider(new StaticCredentialsProvider(new BasicAWSCredentials(this.accessKey, this.secretKey)));
    +           KinesisProducerConfiguration producerConfig = new KinesisProducerConfiguration();
    +
    +           producerConfig.setRegion(configProps.getProperty(KinesisConfigConstants.CONFIG_AWS_REGION));
    +           producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
                //config.setCollectionMaxCount(1);
                //config.setAggregationMaxCount(1);
    --- End diff ---
    
    Okay, nice.
    One thing that would be really helpful is additional testing. One big issue is that the Kinesis connector doesn't handle "flow control" nicely.
    If I have a Flink job producing data at a higher rate than the number of shards permits, I get a lot of failures. Ideally, the producer should only accept as much data as it can handle and block otherwise.
    Do you have any ideas on how to achieve that?
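One common way to get the blocking behavior described above is to cap the number of in-flight records and make the send call block once the cap is reached. This is only a sketch of that idea in plain Java with a semaphore; the `putRecordAsync` sink below is a hypothetical stand-in, not the PR's code (the real Kinesis Producer Library exposes `getOutstandingRecordsCount()` on `KinesisProducer`, which could serve the same purpose):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

// Sketch of producer-side flow control: block the caller once too many
// records are in flight, instead of failing when shard limits are exceeded.
public class ThrottledProducer {
    private final Semaphore inFlight;

    public ThrottledProducer(int maxInFlight) {
        this.inFlight = new Semaphore(maxInFlight);
    }

    // Blocks until a permit is available, hands the record to the
    // asynchronous sink, and releases the permit when the write completes.
    public CompletableFuture<Void> send(String record) {
        inFlight.acquireUninterruptibly(); // back-pressure point: caller blocks here
        CompletableFuture<Void> future = putRecordAsync(record);
        future.whenComplete((result, error) -> inFlight.release());
        return future;
    }

    // Hypothetical stand-in for the real asynchronous write
    // (e.g. KinesisProducer.addUserRecord in the KPL).
    protected CompletableFuture<Void> putRecordAsync(String record) {
        return CompletableFuture.completedFuture(null);
    }

    public int availablePermits() {
        return inFlight.availablePermits();
    }
}
```

With this pattern, an upstream Flink operator producing too fast simply stalls in `send` until capacity frees up, rather than surfacing a burst of rate-limit failures.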


> Unify configuration conventions of the Kinesis producer to the same as the 
> consumer
> -----------------------------------------------------------------------------------
>
>                 Key: FLINK-3923
>                 URL: https://issues.apache.org/jira/browse/FLINK-3923
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Kinesis Connector, Streaming Connectors
>    Affects Versions: 1.1.0
>            Reporter: Robert Metzger
>            Assignee: Abdullah Ozturk
>
> Currently, the Kinesis consumer and producer are configured differently.
> The producer expects a list of arguments for the access key, secret, region, 
> stream. The consumer is accepting properties (similar to the Kafka connector).
> The objective of this issue is to change the producer so that it also uses 
> a properties-based configuration (including an input validation step).
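The properties-based style the issue asks for would look roughly like the following. The key strings here are illustrative assumptions, not the real constants (those live in `KinesisConfigConstants`, e.g. `CONFIG_AWS_REGION` as seen in the diff above):

```java
import java.util.Properties;

// Sketch of a properties-based producer configuration, mirroring the
// consumer. Key names are hypothetical placeholders for the real
// KinesisConfigConstants entries.
public class ProducerConfigExample {

    static Properties buildConfig() {
        Properties config = new Properties();
        config.setProperty("aws.region", "eu-central-1");
        config.setProperty("aws.accesskeyid", "ACCESS_KEY");
        config.setProperty("aws.secretkey", "SECRET_KEY");
        return config;
    }

    // The input validation step mentioned in the issue: fail fast
    // on missing required keys instead of at first use.
    static void validate(Properties config) {
        for (String required : new String[] {"aws.region", "aws.accesskeyid", "aws.secretkey"}) {
            if (config.getProperty(required) == null) {
                throw new IllegalArgumentException("Missing required property: " + required);
            }
        }
    }
}
```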



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
