Hey Hang,
Thanks for the prompt response. Does this mean the Flink counters emitted via
statsd are cumulative then? From the spec
https://github.com/b/statsd_spec#counters
> A counter is a gauge calculated at the server. Metrics sent by the client
> increment or decrement the value of the gauge
Hi Yangxueyong,
Are you sure this is your Flink SQL job? This SQL statement looks very
strange: the table 'test_flink_res2' is used as both source and sink, and the
join key is null.
Best,
Shammon FY
On Wed, May 10, 2023 at 12:54 PM yangxueyong
wrote:
> flink1.16.1
>
> mysql8.0.33
>
> jdbc-3.1.0-1.16
Hi,
Thanks for the reply.
I don't think I can use IAM integration and avoid distributing keys to the
application, because my Flink application is running outside AWS EC2, on
native K8s cluster nodes, from which I access S3 services hosted on AWS.
If there is a procedure to still integr
flink1.16.1
mysql8.0.33
jdbc-3.1.0-1.16
I have a SQL statement:
insert into test_flink_res2(id,name,address)
select a.id,a.name,a.address from test_flink_res1 a left join test_flink_res2 b
on a.id=b.id where a.name='abc0.11317691217472489' and b.id is null;
Why does Flink SQL convert this statement into
Hi,
> if for some reason there exists a checkpoint by same name.
Could you give more details about your scenario here?
From your description, I guess this problem occurred when a job restarted;
was this restart triggered manually?
In a common restart process, the job will retrieve the latest checkpoint
Hello Shammon/Team,
We are using the same Flink version, which is 1.14.2; the only change is that earlier
we were using the Flink Kafka Consumer and now we are moving to the Kafka Source. I don't
see any difference in the job plan, but I see the Kafka Source is introducing more
latency while performing the Flink Kafka consumption
Hi, amenreet,
As Hangxiang said, we should use a new checkpoint dir if the new job has
the same jobId as the old one. Otherwise, you should not use a fixed jobId,
and then the checkpoint dirs will not conflict.
Best,
Hang
Hangxiang Yu wrote on Wed, May 10, 2023 at 10:35:
> Hi,
> I guess you used a fixed JOB_ID, and
Hi, Iris,
The metrics have already been calculated in Flink, so we only need to report
these metrics as gauges.
For example, suppose the metric `metricA` is a Flink counter that is increased from
1 to 2. The statsd gauge will now be 2. If we registered it as a statsd
counter, we would send 1 and 2 to the statsd server
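To make the difference concrete, here is a minimal, purely illustrative Java sketch
(not the actual Flink StatsD reporter code; the class and method names are made up)
of the two statsd wire formats. A statsd server adds up every value sent with the
counter type, so re-sending a Flink counter's absolute value as a statsd counter
would double count, while sending it as a gauge keeps the server value equal to the
Flink value:

// Illustrative only: shows why a Flink counter's absolute value is reported
// with the statsd gauge type instead of the statsd counter type.
public class StatsdLineSketch {

    // Gauge semantics: the server stores the value as-is.
    static String asGauge(String name, long value) {
        return name + ":" + value + "|g";   // e.g. "metricA:2|g"
    }

    // Counter semantics: the server adds each reported value.
    static String asCounter(String name, long value) {
        return name + ":" + value + "|c";   // e.g. "metricA:2|c"
    }

    public static void main(String[] args) {
        // Flink counter metricA goes from 1 to 2; each report sends the absolute value.
        System.out.println(asGauge("metricA", 1));   // server value: 1
        System.out.println(asGauge("metricA", 2));   // server value: 2 (matches Flink)
        System.out.println(asCounter("metricA", 1)); // server adds 1 -> 1
        System.out.println(asCounter("metricA", 2)); // server adds 2 -> 3 (double counted)
    }
}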
Hi,
I guess you used a fixed JOB_ID and configured the same checkpoint dir as
before? And you may also have started the job without the previous state?
The new job cannot know anything about the previous checkpoints; that's why the
new job will fail when it tries to generate a new checkpoint.
I'd like to suggest y
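As a rough sketch of one way to avoid that conflict (the bucket, path, and RUN_ID
environment variable below are hypothetical, and the same effect can also be achieved
via the state.checkpoints.dir option in flink-conf.yaml), each run can point at its
own checkpoint directory so that jobs reusing a fixed job id never collide:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointDirPerRun {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);

        // Hypothetical per-deployment suffix (release tag, timestamp, ...) so that two
        // runs sharing a fixed job id still write to different checkpoint directories.
        String runId = System.getenv().getOrDefault("RUN_ID",
                String.valueOf(System.currentTimeMillis()));
        env.getCheckpointConfig().setCheckpointStorage("s3://my-bucket/checkpoints/" + runId);

        // ... build the actual pipeline and call env.execute() here.
    }
}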
Hi Madan,
Could you give the old and new versions of Flink and provide the job plan?
I think it will help the community find the root cause.
Best,
Shammon FY
On Wed, May 10, 2023 at 2:04 AM Madan D via user
wrote:
> Hello Team,
>
> We have been using Flink Kafka consumer and recently we have bee
Hello Team,
We have been using the Flink Kafka consumer and recently we have been moving to
the Flink Kafka source to get more advanced features, but we have been observing
more rebalances, right after data is consumed and moved to the next operator, than
with the Flink Kafka consumer.
Can you please let us know what m
> Could you please open a jira
Done: https://issues.apache.org/jira/browse/FLINK-32041
> PR (in case you fixed this already)
Haven't fixed it yet! But if I find time to do it I will!
Thanks!
On Tue, May 9, 2023 at 4:49 AM Tamir Sagi
wrote:
> Hey,
>
> I also encountered something similar with
Hi all,
Is there any way to prevent the restart of a Flink job, or to override the
checkpoint metadata, if for some reason there already exists a checkpoint with
the same name? I get the following exception and my job restarts; I have been
trying to find a solution for a very long time but haven't found anything useful yet,
Hi Team,
I am deploying my job in application mode on Flink-1.16.0, but I have been
constantly receiving this error for a long time:
2023-04-10 13:48:39,366 INFO org.apache.kafka.clients.NetworkClient
[] - [Consumer clientId=event-executor-client-1,
groupId=event-executor-gr
Hi Anuj,
As Martijn said, IAM is the preferred option, but if you have no other way than
access keys, then environment variables are a better choice.
In that case the conf doesn't contain plain-text keys.
Just a side note: putting `s3a.access.key` into the Flink conf file is not
configuring Hadoop S3. The way how
Hi Anuj,
You can't provide the values for S3 in job code, since the S3 filesystems
are loaded via plugins. Credentials must be stored in flink-conf.yaml. The
recommended method for setting up credentials is by using IAM, not via
Access Keys. See
https://nightlies.apache.org/flink/flink-docs-master
Hi,
Thanks for the reply.
Yes, my Flink deployment is on K8s, but I am not using the Flink K8s operator.
If I understood correctly, even with an init-container the flink-conf.yaml
(inside the container) would finally contain unencrypted values for the access
tokens. We don't want to persist such sensitive data
Hey folks, I'm trying to troubleshoot why counter metrics are appearing as gauges
on my end. Is it expected that the StatsdMetricsReporter is reporting gauges for
counters as well?
Looking at this one:
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-statsd/src/main/java/org/a
Hey,
I also encountered something similar with a different error. I enabled HA with
RBAC.
org.apache.flink.kubernetes.shaded.io.fabric8.kubernetes.client.KubernetesClientException","message":"Failure
executing: GET at: https://172.20.0.1/api/v1/nodes. Message:
Forbidden!Configured service accou
Hi, Pritam,
I see Martijn has responded on the ticket.
The Kafka source (FLIP-27) will commit offsets in two places: the Kafka consumer's
auto commit, and invoking `consumer.commitAsync` when a checkpoint is completed.
- If checkpointing is enabled and commit.offsets.on.checkpoint = true, the
Kafka connector commits
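For reference, a minimal sketch of where those two commit paths are configured on
the FLIP-27 KafkaSource; the broker address, topic, and group id are placeholders,
and the pipeline is just a print sink:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceCommitSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Offsets are committed back to Kafka when a checkpoint completes,
        // so checkpointing must be enabled for that commit path to run.
        env.enableCheckpointing(30_000);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")                 // placeholder
                .setTopics("input-topic")                           // placeholder
                .setGroupId("my-consumer-group")                    // placeholder
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // Commit path 1: commit on completed checkpoints (default true).
                .setProperty("commit.offsets.on.checkpoint", "true")
                // Commit path 2: the Kafka consumer's own periodic auto commit.
                .setProperty("enable.auto.commit", "false")
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source").print();
        env.execute("kafka-source-commit-sketch");
    }
}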