[jira] [Created] (FLINK-35819) Upgrade to AWS SDK 2.26.19

2024-07-11 Thread Burak Ozakinci (Jira)
Burak Ozakinci created FLINK-35819:
--

 Summary: Upgrade to AWS SDK 2.26.19
 Key: FLINK-35819
 URL: https://issues.apache.org/jira/browse/FLINK-35819
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / AWS
Affects Versions: aws-connector-4.4.0
Reporter: Burak Ozakinci
 Fix For: aws-connector-4.4.0


As part of the work in FLINK-31922, the AWS SDK needs to be upgraded so that the connectors can use the new 
RetryStrategy constructs instead of the deprecated RetryPolicy classes.
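For illustration, a minimal sketch of the kind of change this implies, assuming AWS SDK for Java 2.26.x (the Kinesis client and the exact builder calls are assumptions, not actual connector code):
{code:java}
// Hedged sketch: building an SDK client with the newer RetryStrategy API
// instead of the deprecated RetryPolicy. Class and method names are taken
// from AWS SDK for Java 2.26.x and may differ from the connector's own code.
import software.amazon.awssdk.awscore.retry.AwsRetryStrategy;
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;

public final class RetryStrategyMigrationSketch {
    public static KinesisAsyncClient buildClient() {
        ClientOverrideConfiguration overrides = ClientOverrideConfiguration.builder()
                // Before: .retryPolicy(RetryPolicy.defaultRetryPolicy())  // deprecated
                .retryStrategy(AwsRetryStrategy.standardRetryStrategy())
                .build();
        return KinesisAsyncClient.builder()
                .overrideConfiguration(overrides)
                .build();
    }
}
{code}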



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-36125) File not found exception on restoring state handles with file merging

2024-08-21 Thread Burak Ozakinci (Jira)
Burak Ozakinci created FLINK-36125:
--

 Summary: File not found exception on restoring state handles with 
file merging
 Key: FLINK-36125
 URL: https://issues.apache.org/jira/browse/FLINK-36125
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.20.0
Reporter: Burak Ozakinci
 Fix For: 2.0.0, 1.20.1


A Flink 1.20 app with checkpoint file merging and the across-checkpoint-boundary option enabled:

{*}execution.checkpointing.file-merging.enabled{*}: true

{*}execution.checkpointing.file-merging.across-checkpoint-boundary{*}: true

The app uses Kubernetes for HA, S3 (in place of HDFS) for checkpoint storage, and the RocksDB state backend.
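For reference, a minimal sketch of how a job can enable these options programmatically; string config keys are used to avoid assuming specific option constants, and the storage paths are placeholders:
{code:java}
// Illustrative setup only: enables checkpoint file merging across checkpoint
// boundaries; bucket name and checkpoint interval are placeholders.
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public final class FileMergingSetupSketch {
    public static StreamExecutionEnvironment createEnv() {
        Configuration conf = new Configuration();
        conf.setString("execution.checkpointing.file-merging.enabled", "true");
        conf.setString("execution.checkpointing.file-merging.across-checkpoint-boundary", "true");
        conf.setString("state.backend.type", "rocksdb");
        conf.setString("state.checkpoints.dir", "s3://<bucket>/checkpoints"); // placeholder bucket
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.enableCheckpointing(60_000L); // checkpoint every 60s (arbitrary)
        return env;
    }
}
{code}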

Summary of events:
 * App started to fail and restarted due to a Zookeeper connection failure.
 * While restoring, it tried to reuse the previous directory, logging:
{code:java}
Reusing previous directory s3://0825968657d315d337418c6a010371c3320d5792/6676b9fb581c449ea78d95efdd338ee1-021367244083-1724062514899/checkpoints/6676b9fb581c449ea78d95efdd338ee1/taskowned/job_6676b9fb581c449ea78d95efdd338ee1_tm_10.99.195.1-6122-34e093 for checkpoint file-merging. {code}
 * FileMergingSnapshotManager could not find the checkpoint file under the `taskowned` directory and the app started to fail over:
{code:java}
java.io.FileNotFoundException: No such file or directory: s3://0825968657d315d337418c6a010371c3320d5792/6676b9fb581c449ea78d95efdd338ee1-021367244083-1724062514899/checkpoints/6676b9fb581c449ea78d95efdd338ee1/taskowned/job_6676b9fb581c449ea78d95efdd338ee1_tm_10.99.196.233-6122-acb0af/5ce4c69f-f02a-4f91-a656-9abdfa9d47fd
    at org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$11(FileMergingSnapshotManagerBase.java:880)
    at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1134)
    at org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$12(FileMergingSnapshotManagerBase.java:861)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
    at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
    at java.base/java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:411)
    at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
    at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
    at java.base/java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:1033)
    at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
    at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEa
{code}

[jira] [Created] (FLINK-36947) Rate limiting the GetRecord call frequency

2024-12-20 Thread Burak Ozakinci (Jira)
Burak Ozakinci created FLINK-36947:
--

 Summary: Rate limiting the GetRecord call frequency
 Key: FLINK-36947
 URL: https://issues.apache.org/jira/browse/FLINK-36947
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / AWS, Connectors / Kinesis
Affects Versions: aws-connector-5.0.0
Reporter: Burak Ozakinci


The GetRecords call interval config from the old version of the connector is not 
present in the new one. This causes throttling in some cases due to excessive 
polling of the Kinesis stream, especially with multiple consumers.
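For context, a sketch of the legacy-connector setting being referred to; the stream name, region, and interval value are placeholders, and the constants come from the old FlinkKinesisConsumer API:
{code:java}
// Legacy flink-connector-kinesis sketch: the old consumer exposes a per-shard
// GetRecords polling interval that the new source currently lacks (per this issue).
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public final class LegacyGetRecordsIntervalSketch {
    public static FlinkKinesisConsumer<String> buildConsumer() {
        Properties config = new Properties();
        config.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1"); // placeholder region
        // Poll each shard at most once per second to stay under GetRecords limits.
        config.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "1000");
        return new FlinkKinesisConsumer<>("example-stream", new SimpleStringSchema(), config);
    }
}
{code}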



--
This message was sent by Atlassian Jira
(v8.20.10#820010)