On Wed, Jun 12, 2024 at 7:13 PM Chetas Joshi wrote:
> Got it. Thanks!
>
> On Wed, Jun 12, 2024 at 6:49 PM Zhanghao Chen
> wrote:
>
>> > Does this mean it won't trigger a checkpoint before scaling up or
>> scaling down?
>>
>> The in-place rescaling
ade.
>
> [1]
> https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8/docs/custom-resource/job-management/#stateful-and-stateless-application-upgrades
>
>
> Best,
> Zhanghao Chen
> --
> *From:* Chetas Joshi
> *Sent:* Th
.8/docs/custom-resource/autoscaler/
>
>
> Best,
> Zhanghao Chen
> --
> *From:* Sachin Sharma
> *Sent:* Saturday, June 8, 2024 3:02
> *To:* Gyula Fóra
> *Cc:* Chetas Joshi ; Oscar Perez via user <
> user@flink.apache.org>
> *Subje
Hi Community,
I want to understand the following logs from the flink-k8s-operator
autoscaler. My Flink pipeline, running on Flink 1.18.0 with
flink-k8s-operator 1.8.0, is not scaling up even though the source vertex
is back-pressured.
2024-06-06 21:33:35,270 o.a.f.a.ScalingMetricCollector
[DEBUG]
Hello,
On a k8s cluster, I am running flink-k8s-operator 1.8 with the autoscaler
enabled (in-place) and a FlinkDeployment (application mode) running Flink
1.18.1.
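For reference, a minimal sketch of a FlinkDeployment with the operator 1.8 autoscaler and in-place rescaling; in-place rescaling relies on Flink 1.18's adaptive scheduler, and the resource name and jar path below are illustrative placeholders, not from the poster's setup:

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-app                                  # illustrative name
spec:
  flinkVersion: v1_18
  flinkConfiguration:
    job.autoscaler.enabled: "true"
    jobmanager.scheduler: adaptive              # needed for in-place rescaling
  job:
    jarURI: local:///opt/flink/usrlib/app.jar   # illustrative path
    upgradeMode: stateless
```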
The FlinkDeployment, i.e. the Flink streaming application, has a mock data
producer as the source. The source generates data points every
24 at 6:59 PM Chetas Joshi wrote:
Hello,
Set up
I am running my Flink streaming jobs (upgradeMode = stateless) on an AWS
EKS cluster. The node-type for the pods of the streaming jobs belongs to a
node-group that has an AWS ASG (auto scaling group).
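For context, a FlinkDeployment's pods can be pinned to such a node-group through the operator's pod template; a hedged sketch, assuming an EKS managed node group (the label value is an illustrative placeholder):

```yaml
spec:
  podTemplate:
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: flink-jobs-ng   # illustrative node-group name
```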
The streaming jobs are the FlinkDeployments managed by the
flink-k8s-operator (1.8
tecting these
> limits https://issues.apache.org/jira/browse/FLINK-33771
>
> You can set:
> kubernetes.operator.cluster.resource-view.refresh-interval: 5 min
>
> to turn this on. Alternatively a simpler approach would be to directly
> limit the parallelism of the scaling dec
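The two alternatives suggested above can be sketched as Flink configuration, assuming operator 1.8 autoscaler keys (the parallelism cap value is illustrative):

```yaml
# Option 1: let the autoscaler track cluster resource limits
kubernetes.operator.cluster.resource-view.refresh-interval: 5 min
# Option 2: cap the parallelism the autoscaler may choose per vertex
job.autoscaler.vertex.max-parallelism: 8
```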
Hello,
I am running a Flink job in application mode on k8s. It is deployed as a
FlinkDeployment whose life-cycle is managed by the flink-k8s-operator.
The autoscaler is enabled with the following config:
job.autoscaler.enabled: true
job.autoscaler.metrics.window: 5m
job.autoscaler.stabiliz
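For reference, a fuller set of commonly tuned autoscaler keys from the operator 1.8 docs; the values here are illustrative, not the poster's actual settings:

```yaml
job.autoscaler.enabled: true
job.autoscaler.metrics.window: 5m
job.autoscaler.stabilization.interval: 5m
job.autoscaler.target.utilization: 0.7
job.autoscaler.target.utilization.boundary: 0.3
```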
disabling the
> autoscaler and manually setting pipeline.jobvertex-parallelism-overrides in
> the flink config.
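The manual override mentioned above takes a comma-separated map of job-vertex IDs to parallelism; a sketch (the vertex IDs below are illustrative placeholders, real ones are the 32-hex-character JobVertexIDs from the job graph):

```yaml
pipeline.jobvertex-parallelism-overrides: "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4:4,f6e5d4c3b2a1f6e5d4c3b2a1f6e5d4c3:2"
```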
>
> Cheers,
> Gyula
>
> On Thu, May 2, 2024 at 3:49 AM Chetas Joshi
> wrote:
Hello,
We recently upgraded the operator to 1.8.0 to leverage the new autoscaling
features (
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8/docs/custom-resource/autoscaler/).
The FlinkDeployment (application cluster) is set to flink v1_18 as well. I
am able to observ
ers below:
>
>
> On Tue, Apr 16, 2024, 06:39 Chetas Joshi wrote:
Hello,
I am running a batch flink job to read an iceberg table. I want to
understand a few things.
1. How does the FlinkSplitPlanner decide which fileScanTasks (I think one
task corresponds to one data file) are grouped together within a
single split, and when does it create a new split?
2. Whe
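On (1): Iceberg's split planner bin-packs file scan tasks into combined splits up to a target size, starting a new split once the target is reached. These table read properties control that behavior; the values shown are Iceberg's documented defaults, quoted here from memory, so treat them as a hedge rather than gospel:

```yaml
read.split.target-size: 134217728      # 128 MB target per combined split
read.split.planning-lookback: 10       # bin-packing lookback window
read.split.open-file-cost: 4194304     # 4 MB cost charged per opened file
```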
> It could be found here:
>
> https://github.com/apache/iceberg/blob/main/flink/v1.18/flink/src/main/java/org/apache/iceberg/flink/source/reader/IcebergSourceReaderMetrics.java#L32-L39
>
> Added here:
> https://github.com/apache/iceberg/pull/5554
>
> I hope this helps,
> Pet
Hello,
I am using Flink to read Iceberg (S3). I have enabled all the metrics
scopes in my FlinkDeployment as below
metrics.scope.jm: flink.jobmanager
metrics.scope.jm.job: flink.jobmanager.job
metrics.scope.tm: flink.taskmanager
metrics.scope.tm.job: flink.taskmanager.job
metrics.scope.task: flin
Hello,
I am using iceberg-flink-runtime lib (1.17-1.4.0) and running the following
code to read an iceberg table in BATCH mode.
var source = FlinkSource
    .forRowData()
    .streaming(false)
    .env(execEnv)
    .tableLoader(tableLoader)
    .limit((long) operation.getLimit())
    .filters(bui
Hello all,
I am using the flink-iceberg-runtime lib to read an iceberg table into a
Flink DataStream. I am using Glue as the catalog. I use the Flink Table API
to build and query an iceberg table and then use toDataStream to convert it
into a DataStream. Here is the code:
Table table = streamTable