The SST files generated by RocksDB's compaction are
affected by the compaction strategy and will not have a fixed size.
Of course, if you disable incremental checkpoints, the 'incremental
checkpoint size' will equal the 'checkpoint size'.
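For reference, incremental checkpointing is toggled on the state backend; a minimal flink-conf.yaml sketch (the `rocksdb` shorthand and the `state.backend.incremental` key are from the standard Flink configuration options):

```yaml
# Use the RocksDB state backend and disable incremental checkpoints,
# so every checkpoint is a full snapshot and the reported
# 'incremental checkpoint size' equals the full 'checkpoint size'.
state.backend: rocksdb
state.backend.incremental: false
```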
Hope this helps you.
Regards,
Xiangyu Feng
Salva Alcántara wrote on Wed, Jun 12, 2024 at 15:31:
> I have some jobs where I can configure the TTL duration for certain
> operator state. The problem I'm noticing is that when I make changes in the
> TTL configuration the new state descriptor becomes incompatible
Hope this helps you!
[1]
https://flink.apache.org/2023/05/25/apache-flink-1.17.1-release-announcement/
[2]
https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/
Thx,
Xiangyu Feng
Zhanghao Chen wrote on Mon, Jan 29, 2024 at 11:26:
> Hi Deepti,
>
> Regarding the life cycle fo
/thread/b8w5cx0qqbwzzklyn5xxf54vw9ymys1c
[3]
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/java_compatibility/
Regards,
Xiangyu Feng
Deepti Sharma S via user wrote on Wed, Jan 24, 2024 at 20:22:
> Hello Team,
>
>
>
> Can you please let me know the lifecycle for Flink 1.x
Hi Praveen,
I'm not sure what you need from Java 17. At ByteDance, we have tried to
compile Flink on Java 8/Java 11 and run it on Java 17. It works well in
production and also takes advantage of new Java 17 features.
You can try this approach if you have an urgent need for Java 17.
Hope this helps you.
R
strategy and might be
proportional to the overall size of the LSM Tree.
Hope this resolves your doubts.
Xiangyu Feng
Oscar Perez via user wrote on Mon, Nov 27, 2023 at 19:55:
> Hi,
>
> We have a long running job in production and we are trying to understand
> the metrics for this job, see attache
Hi Patricia,
Try to use this:
--add-opens=java.base/java.util=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED
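As one way to apply these (assuming you launch Flink via its standard scripts), the JVM flags can be set for all Flink processes in flink-conf.yaml through `env.java.opts.all`:

```yaml
# Open the needed java.base packages to unnamed modules for every
# Flink JVM process (client, JobManager, TaskManager).
env.java.opts.all: "--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED"
```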
Regards,
Xiangyu
patricia lee wrote on Tue, Nov 14, 2023 at 15:43:
> Hi,
>
>
> I upgraded the project to Flink 1.18.0 and Java 17. I am also using
> flink-kafka-connector 3.0.1-1.18 from
Hi Kean,
I would like to share with you our analysis of the pros and cons of
enabling BloomFilter in production.
Pros:
By enabling BloomFilter, RocksDB.get() can skip data files that definitely
do not contain the key and hence reduce some random disk reads. This
performance improvement is d
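If you do decide to enable it, Flink exposes a RocksDB option for this; a minimal flink-conf.yaml sketch (assuming the `state.backend.rocksdb.use-bloom-filter` key available in recent Flink releases):

```yaml
# Enable RocksDB's per-SST bloom filters so point lookups
# (RocksDB.get()) can skip files that definitely do not contain the key.
state.backend.rocksdb.use-bloom-filter: true
```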
Hi Zhuliang,
I would suggest reading the comments in
'ExternallyInducedSource.java'[1]:
"Sources that implement this interface do not trigger checkpoints when
receiving a
trigger message from the checkpoint coordinator, but when their input
data/events
indicate that a checkpoint should be tri
Hi Kirti,
AFAIK, you should pay attention to how the filesystems are mounted on the
pod instead of what should be configured as the tmp directory.
In common cases, users may mount a filesystem with a small space (less than
30GB) for the system and a filesystem with a large space (more than 200GB) to
stor
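As one hedged illustration (the mount path below is hypothetical), Flink's temporary directories can then be pointed at the large mount via `io.tmp.dirs`:

```yaml
# Direct Flink's spill/temp files to the large-capacity mount
# instead of the small root filesystem. The path is an example.
io.tmp.dirs: /mnt/large-disk/flink-tmp
```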
Hi Yifan,
AFAIK, if you want to query a job’s state from outside Flink, you can use
Queryable State[1].
Hope this helps.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/queryable_state/
Xiangyu
Yifan He via user wrote on Wed, Sep 6, 2023 at 13:10:
> Hi team,
>
t3.ts
AND event2.customer_id = event3.customer_id
GROUP BY
event2.customer_id;
Regards,
Xiangyu
Jiten Pathy wrote on Wed, Aug 23, 2023 at 16:38:
> Hi Xiangyu,
> Yes, that's correct. It is the requestId, we will have for each request.
>
> On Wed, 23 Aug 2023 at 13:47, xiangyu feng wrote:
>
>
Hi Pathy,
I want to know whether the 'id' in {id, customerId, amount, timestamp} stands
for 'requestId'. If not, how is this 'id' field generated, and can we add a
'requestId' field to the event?
Thx,
Xiangyu
Jiten Pathy wrote on Tue, Aug 22, 2023 at 14:04:
> Hi,
> We are currently evaluating Flink for our analyti
Hi David,
keyBy() is implemented with hash partitioning: if you use keyBy, all
records with a given key will be routed to the same downstream operator
subtask. See more in [1].
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/overview/#keyby
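To illustrate the idea, here is a minimal self-contained sketch of hash partitioning; note this is a simplification for intuition only (Flink actually routes keys through a murmur hash into key groups, not a plain modulo), and the class and method names are hypothetical:

```java
// Simplified sketch of the hash partitioning behind keyBy().
public class KeyBySketch {

    // Hypothetical helper: pick the downstream subtask for a key.
    static int subtaskFor(Object key, int parallelism) {
        // Non-negative hash, then modulo over the operator's parallelism.
        return (key.hashCode() & Integer.MAX_VALUE) % parallelism;
    }

    public static void main(String[] args) {
        int parallelism = 4;
        // The same key always lands on the same subtask, which is what
        // makes per-key state and ordering work in a keyed stream.
        System.out.println(subtaskFor("customer-42", parallelism));
        System.out.println(subtaskFor("customer-42", parallelism));
    }
}
```

The point of the sketch is determinism: because the routing is a pure function of the key, every record with that key reaches the same subtask.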
Regards,
Hi Tucker,
Can you describe your running job in more detail and how the trigger timer is
configured? It would also help if you could attach a FlameGraph showing the
CPU usage when the timer is triggered.
Best,
Xiangyu
Tucker Harvey via user wrote on Tue, Aug 1, 2023 at 05:51:
> Hello Flink community! My team
Hi Patricia,
JDK 17 will be supported in the 1.18 release. See more in this jira[1].
[1] https://issues.apache.org/jira/browse/FLINK-15736
Best,
Xiangyu
patricia lee wrote on Mon, Jul 31, 2023 at 16:25:
> Hi,
>
> I was advised to upgrade the JDK of our flink 1.7 to 17. However, in the
> documentation it only says