On Mon, Jun 15, 2020, at 13:45, M Singh wrote:
> Hi:
>
> I am using multiple (almost 30 and growing) Flink streaming applications that
> read from the same kinesis stream and get a
> ProvisionedThroughputExceededException, which fails the job.
> I have seen a reference:
> http://mail-archives.apache.org/mod_mbox/flink-user/201811.mbox/%3CCAJnSTVxpuOhCNTFTvEYd7Om4s=q2vz5-8+m4nvuutmj2oxu
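
As an aside, the behaviour described in the question is governed by the
consumer-side polling and backoff settings of the Flink Kinesis connector.
Every parallel subtask of every application issues its own GetRecords calls,
and all applications share the same per-shard read limits (5 GetRecords
calls/sec and 2 MB/sec), so polling less often and backing off longer instead
of failing the job is usually the first knob to turn. A minimal sketch follows
(assumptions: flink-connector-kinesis is on the classpath; the region, the
stream name "CSV" taken from the logs further down, and the numeric values are
placeholders, not what the original poster used):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class ThrottledKinesisSource {
    public static void main(String[] args) throws Exception {
        Properties consumerConfig = new Properties();
        consumerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1"); // placeholder region

        // Poll each shard less frequently and fetch more records per call.
        consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "2000");
        consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_MAX, "10000");

        // Retry throttled GetRecords calls with a longer exponential backoff
        // before giving up and failing the job.
        consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_RETRIES, "10");
        consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_BACKOFF_BASE, "1000");
        consumerConfig.put(ConsumerConfigConstants.SHARD_GETRECORDS_BACKOFF_MAX, "10000");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(new FlinkKinesisConsumer<>("CSV", new SimpleStringSchema(), consumerConfig))
           .print();
        env.execute("kinesis-consumer-backoff-sketch");
    }
}
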
Grebennikov | g...@dfdx.me wrote:
The RateLimit parameter is tracking the amount of bytes/records
written into a specific shard.
If the parallelism of your Sink is >1 (which is probably the case),
multiple tasks == multiple KPL instances which may be writing to the same
shard.
So for each individual KPL the RateLimit is not breached, but if multiple
parallel tasks are writing to the same shard the RateLimit gets breached
and a ProvisionedThroughputExceededException is being thrown.
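
For reference, a sketch of where that per-shard limit is set on the producer
side (assuming Flink's FlinkKinesisProducer; the region, stream name, and the
chosen value are placeholders, not what was actually used). The KPL "RateLimit"
property is passed straight through the producer Properties, and with N
parallel sink subtasks that can hit the same shard the aggregate put rate is
roughly N times the per-instance limit, so lowering it per instance is one way
to stay under the shard limit:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class ProducerConfigSketch {
    static FlinkKinesisProducer<String> buildProducer() {
        Properties producerConfig = new Properties();
        producerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1"); // placeholder region
        // KPL per-shard put rate as a percentage of the shard's capacity;
        // a value below the default leaves headroom for other parallel instances.
        producerConfig.put("RateLimit", "50");

        FlinkKinesisProducer<String> producer =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
        producer.setDefaultStream("CSV");   // placeholder stream name (from the logs below)
        producer.setDefaultPartition("0");
        return producer;
    }
}
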
What we've tried:

- Using a random partition key to spread the load evenly between the
  shards (a sketch of this follows below). This did not work for us...
- We tried to make records being written to ...
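
The random-partition-key idea from the first bullet, as a minimal sketch
(assuming Flink's KinesisPartitioner; the class name is hypothetical). Every
record gets a fresh random key, so writes spread uniformly over the shards,
although as noted above this does not help once the aggregate write rate
exceeds the stream's total capacity:

import java.util.UUID;
import org.apache.flink.streaming.connectors.kinesis.KinesisPartitioner;

// Hypothetical partitioner: a fresh random partition key per record spreads
// writes uniformly across all shards of the stream.
public class RandomKeyPartitioner extends KinesisPartitioner<String> {
    @Override
    public String getPartitionId(String element) {
        return UUID.randomUUID().toString();
    }
}

// attached to the sink with: producer.setCustomPartitioner(new RandomKeyPartitioner());
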
If it's running in parallel, aren't you just adding readers, which maxes out
your provisioned throughput? This probably doesn't belong here, since it's more
a Kinesis thing than a Flink one, but I suggest increasing your number of shards.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
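
If resharding is the route taken, a small sketch using the AWS SDK for Java
(the stream name "CSV" comes from the logs below; the target shard count and
credentials/region setup are placeholders). Each open shard provides roughly
1 MB/s of write and 2 MB/s of read capacity, and uniform scaling splits the
existing shards evenly to reach the target count:

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.ScalingType;
import com.amazonaws.services.kinesis.model.UpdateShardCountRequest;

public class ReshardStream {
    public static void main(String[] args) {
        // Uses the default credential/region chain; adjust as needed.
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();
        kinesis.updateShardCount(new UpdateShardCountRequest()
                .withStreamName("CSV")              // placeholder stream name
                .withTargetShardCount(8)            // placeholder target count
                .withScalingType(ScalingType.UNIFORM_SCALING));
    }
}
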
Log excerpt (from the November 2018 thread referenced above):

org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy -
Got recoverable SdkClientException. Backing off for 258 millis (Rate
exceeded for shard shardId- in stream CSV under account
x. (Service: AmazonKinesis; Status Code: 400; Error Code:
ProvisionedThroughputExceededException; Request ID:
e1c0caa4-8c4c-7738-b59f-4977bc762cf3))

flink-sbistl919-taskexecutor-0-CACSVML-15736.log:2018-11-09 07:46:16,844
WARN org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy -
Got recoverable SdkClientException. Backing off for 203 millis (Rate
exceeded