Hi Peter,
Have you compared the DAT topologies in 1.15 / 1.14?
I think it's expected that "Records Received", "Bytes Sent" and "Records
Sent" are 0. These metrics trace the internal data exchanges between Flink
tasks. External data exchanges, i.e., sources reading from / sinks writing
to external systems, are not covered by these metrics.
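For reference, those UI columns correspond to the task metrics numRecordsIn,
numBytesOut and numRecordsOut, which can also be read via the REST API. A
minimal sketch (job and vertex ids are placeholders):

    import json
    import urllib.request

    # <job-id> and <vertex-id> are placeholders; they can be listed via
    # the /jobs and /jobs/<job-id> endpoints first.
    url = ("http://localhost:8081/jobs/<job-id>/vertices/<vertex-id>/metrics"
           "?get=numRecordsIn,numBytesOut,numRecordsOut")
    print(json.load(urllib.request.urlopen(url)))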
You have not configured the tumbling window at all. Please refer to [1] for
more details.
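For illustration, a minimal sketch in the Python Table API (table and column
names are hypothetical): a 10-second tumbling window over a table `source`
with a watermarked TIMESTAMP(3) column `ts`.

    from pyflink.table.expressions import col, lit
    from pyflink.table.window import Tumble

    # `source` is an existing Table with a watermarked TIMESTAMP(3) column
    # `ts` and a STRING column `line` (names are hypothetical).
    result = (
        source
        .window(Tumble.over(lit(10).seconds).on(col("ts")).alias("w"))
        .group_by(col("w"))
        .select(col("w").start, col("w").end, col("line").count)
    )

Equivalently, the same aggregation can be written as a GROUP BY window
directly in SQL, as described in [1].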
Regards,
Dian
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-agg/#group-window-aggregation
On Wed, Apr 6, 2022 at 10:46 PM ivan.ros...@agilent.com wrote:
Thank you for the clarification! After some discussion, I think we'll be
using processing time as an alternative for our use case.
Just for my education, if I really need ingestion time, it seems like I can
get it with either of the approaches below?
// 1. an ingestion time watermark strategy
new Wat
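The snippet is cut off above; as a rough sketch of that first approach (my
reconstruction in PyFlink, assuming the DataStream API), a strategy that
stamps each record with the wall-clock arrival time and derives watermarks
from those timestamps:

    import time

    from pyflink.common.watermark_strategy import (TimestampAssigner,
                                                   WatermarkStrategy)

    class IngestionTimeAssigner(TimestampAssigner):
        def extract_timestamp(self, value, record_timestamp):
            # Use the current system time as the record timestamp.
            return int(time.time() * 1000)

    ingestion_time_strategy = (
        WatermarkStrategy
        .for_monotonous_timestamps()
        .with_timestamp_assigner(IngestionTimeAssigner()))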
Hello,
We are using Flink 1.13.6, Application Mode, k8s HA. To upgrade the job, we use
POST, url=http://gsp-jm:8081/jobs//savepoints,
then we wait for up to 5 minutes for completion, periodically polling the status
(GET,
url=http://gsp-jm:8081/jobs/0
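For concreteness, a rough sketch of that trigger-and-poll sequence against
the Flink REST API, using only the Python standard library; the job id and
poll interval are placeholders:

    import json
    import time
    import urllib.request

    BASE = "http://gsp-jm:8081"
    JOB_ID = "<job-id>"  # placeholder

    # POST /jobs/:jobid/savepoints returns a trigger ("request") id.
    req = urllib.request.Request(
        f"{BASE}/jobs/{JOB_ID}/savepoints", data=b"{}",
        headers={"Content-Type": "application/json"}, method="POST")
    trigger_id = json.load(urllib.request.urlopen(req))["request-id"]

    # Poll GET /jobs/:jobid/savepoints/:triggerid for up to 5 minutes.
    for _ in range(60):
        with urllib.request.urlopen(
                f"{BASE}/jobs/{JOB_ID}/savepoints/{trigger_id}") as resp:
            status = json.load(resp)
        if status["status"]["id"] == "COMPLETED":
            break
        time.sleep(5)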
Hi there,
I just successfully upgraded our Flink cluster to 1.15.0 rc0, and the
corresponding job is now running on this version. Looks great so far!
In the Web UI I noticed some metrics are missing, especially "Records
Received", "Bytes Sent" and "Records Sent". Those were shown in v 1.14.4.
See a
Hello,
I'm trying to understand tumbling windows at the level of the Python Table API.
For this short example:
Input csv:
Print output:
2022-01-01 10:00:23.0, "data line 3"
2022-01-01 10:00:24.0, "data line 4"
2022-01-01 10:00:18.0, "data line 1"
2022-01-01 10:00:25.
Hi,
We have an in-house platform that we want to integrate with external
clients via HDFS. They have lots of existing files and they continuously
put more data into HDFS. Ideally, we would like to have a Flink job that
takes care of ingesting the data, as one of the requirements is to execute SQL
on top of it.
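In case it helps, a minimal sketch (hypothetical schema and path, not a
confirmed solution) of exposing such an HDFS directory to Flink SQL via the
filesystem connector; for continuously picking up newly added files, the
DataStream FileSource with monitorContinuously(...) is the usual building
block:

    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(
        EnvironmentSettings.new_instance().in_streaming_mode().build())

    # Hypothetical schema and path: a CSV directory on HDFS.
    t_env.execute_sql("""
        CREATE TABLE hdfs_input (
            ts TIMESTAMP(3),
            line STRING
        ) WITH (
            'connector' = 'filesystem',
            'path' = 'hdfs:///data/incoming',
            'format' = 'csv'
        )
    """)

    t_env.execute_sql("SELECT COUNT(*) FROM hdfs_input").print()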
Hi Kevin,
I have not reproduced this problem.
What is the impact of this problem? Is it that you can't get this parameter
correctly in the user main method?
Could you provide a screenshot of the JobManager configuration in the UI?
> On Apr 2, 2022, at 10:23 AM, Kevin Lee wrote:
>
> It's a typo
>
> I run this demo on a YARN cluster
Hey Matthew,
Thanks also for sharing the code that you're working on. What are your
plans with the connector? I could imagine that others would also be
interested, so perhaps you want to add it to https://flink-packages.org/?
Best regards,
Martijn Visser
https://twitter.com/MartijnVisser82
Happy to help.
Let us know if it helped in your use case.
On Tue, Apr 5, 2022 at 1:34 AM Yaroslav Tkachenko wrote:
> Hi Marios,
>
> Thank you, this looks very promising!
>
> On Mon, Apr 4, 2022 at 2:42 AM Marios Trivyzas wrote:
>
>> Hi again,
>>
>> Maybe you can use the
>> https://nightlies.ap
Thanks Guowei! I'll check it out.
Best,
Zhanghao Chen
From: Guowei Ma
Sent: Wednesday, April 6, 2022 16:01
To: Zhanghao Chen
Cc: user@flink.apache.org
Subject: Re: Why first op after union cannot be chained?
Hi Zhanghao
AFAIK, you might want to look at the `StreamingJo
Dear all,
I'm trying to submit SQL jobs to a session cluster on k8s, which need
external local UDF jars.
I have met some problems, listed below:
- 'pipeline.jars' will be overwritten by the '-j' option, which only accepts
one jar.
- 'pipeline.classpaths' will not be uploaded, so local files could not be
accessed by the cluster. (A workaround sketch follows below.)
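One possible workaround, sketched below with hypothetical jar paths: set
'pipeline.jars' programmatically as a semicolon-separated list before
submitting the SQL, rather than relying on the single-jar '-j' option.

    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(
        EnvironmentSettings.new_instance().in_streaming_mode().build())

    # 'pipeline.jars' accepts a semicolon-separated list of jar URLs.
    t_env.get_config().get_configuration().set_string(
        "pipeline.jars",
        "file:///opt/udfs/udf-a.jar;file:///opt/udfs/udf-b.jar")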
Hi Zhanghao
AFAIK, you might want to look at the `StreamingJobGraphGenerator`, not the
`JobGraphGenerator`, which is only used by the old Flink stream SQL stack.
From the comment of `StreamingJobGraphGenerator::isChainableInput`, a
union operator does not support chaining currently.
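A tiny sketch (hypothetical pipeline) that exhibits this: the map after the
union below starts a new chain rather than being chained to its inputs.

    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()
    a = env.from_collection([1, 2, 3])
    b = env.from_collection([4, 5, 6])
    # The operator after union() is not chained to the union inputs.
    a.union(b).map(lambda x: x * 2).print()
    env.execute("union-chaining-demo")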
Best,
Guowei
On We
I'm not quite familiar with the ES connector. However, I guess you could check
whether there is data going into the sink connector. One way to achieve this is
to set pipeline.operator-chaining to false; then you can see the
count of input elements for the sink operator.
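For example, a placeholder pipeline with chaining disabled;
disable_operator_chaining() has the same effect as setting
pipeline.operator-chaining to false:

    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()
    # Each operator, including the sink, now reports its own counts.
    env.disable_operator_chaining()
    env.from_collection(["a", "b"]).print()
    env.execute("chaining-disabled-demo")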
PS: Just removed the communi