Hi!
It seems that what you want is, for *each* row, to compute the sum of the last
five rows (including the current row). This is a use case for over
aggregation, not a sliding window. I don't know whether the Table API supports
this, but see [1] for over aggregation in Flink SQL.
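For illustration, a minimal sketch of such an over aggregation in Flink SQL,
written with PyFlink; the table name `bits`, the `bit` column, and the datagen
source are assumptions for a runnable sketch, not part of the original
question:

    # Sketch only: a throwaway source with a processing-time attribute.
    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
    t_env.execute_sql("""
        CREATE TEMPORARY TABLE bits (
            bit INT,
            proctime AS PROCTIME()
        ) WITH ('connector' = 'datagen')
    """)

    # For each row: the sum over that row and the four rows before it.
    result = t_env.sql_query("""
        SELECT bit,
               SUM(bit) OVER (
                   ORDER BY proctime
                   ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
               ) AS last_five_sum
        FROM bits
    """)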
But if you insist
Hi. Thank you for the clarification.
I updated my code as below and got the desired result.
result = table \
    .window(Slide.over(row_interval(WINDOW_SIZE))
                 .every(row_interval(WINDOW_SLIDE))
                 .on(col('proctime'))
                 .alias("w")) \
    .group_by(col('w')) \
    .select(call(read_raw_data, co
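For completeness, a hedged sketch of what the complete chain might look like;
`table` and `read_raw_data` (assumed to be a registered aggregate function)
come from earlier setup, and the column name and window sizes are
placeholders:

    from pyflink.table.expressions import call, col, row_interval
    from pyflink.table.window import Slide

    WINDOW_SIZE = 5   # rows per window (assumption)
    WINDOW_SLIDE = 1  # rows to advance per slide (assumption)

    # `table` is assumed to have columns `bit` and `proctime`.
    result = table \
        .window(Slide.over(row_interval(WINDOW_SIZE))
                     .every(row_interval(WINDOW_SLIDE))
                     .on(col('proctime'))
                     .alias('w')) \
        .group_by(col('w')) \
        .select(call(read_raw_data, col('bit')))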
Hi!
Executing a set of SQL statements with the SQL client has been supported since
Flink 1.13 [1]. Please consider upgrading your Flink version.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sqlclient/#execute-a-set-of-sql-statements
On Mon, Nov 1, 2021 at 8:31 PM 方汉云 wrote:
> Hi,
>
>
>
Hi!
You're not only grouping by the window but also by the value, so only
records with the same value end up in the same group. I guess this is not
intended.
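To illustrate the difference, a hedged sketch; the column name `bit` and the
window sizes are assumptions:

    from pyflink.table.expressions import col, row_interval
    from pyflink.table.window import Slide

    # `table` is assumed to have columns `bit` and `proctime`.
    w = Slide.over(row_interval(5)).every(row_interval(1)) \
             .on(col('proctime')).alias('w')

    # Grouping by window AND value: each group only ever sees identical
    # bits, so an aggregate can never mix the 0s and 1s of one window.
    per_value = table.window(w) \
        .group_by(col('w'), col('bit')) \
        .select(col('bit'), col('bit').sum.alias('s'))

    # Grouping by the window alone: one group per window, covering all
    # records that fall into it.
    per_window = table.window(w) \
        .group_by(col('w')) \
        .select(col('bit').sum.alias('s'))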
On Tue, Nov 2, 2021 at 3:05 AM Long Nguyễn wrote:
> I have set up a program that takes bits 0 and 1 from a Kafka topic and
> then uses Flink to create a sliding count window of size 5.
Hello David,
Thanks for the detailed explanation. This is really helpful!
I also had an offline discussion with Yang Wang. He also told me this could be
done in the future, but it is not part of the near-term plan.
As suggested, I think I can do the following things to achieve an auto-scaling
component
I have set up a program that takes bits 0 and 1 from a Kafka topic and then
uses Flink to create a sliding count window of size 5. In that window, I'd
like to output 1 if there are 3 or more of the bit 1, otherwise, output 0.
Currently, my approach is to calculate the sum of the bits in the window.
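A rough sketch of what I mean, with the column names and the processing-time
attribute assumed:

    from pyflink.table.expressions import col, if_then_else, lit, row_interval
    from pyflink.table.window import Slide

    # `table` is assumed to have an INT column `bit` and a processing-time
    # attribute `proctime`.
    result = table \
        .window(Slide.over(row_interval(5))    # 5 rows per window
                     .every(row_interval(1))   # slide by 1 row
                     .on(col('proctime'))
                     .alias('w')) \
        .group_by(col('w')) \
        .select(if_then_else(col('bit').sum >= 3, lit(1), lit(0)).alias('out'))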
Hi folks,
the exception says that
org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException
cannot be loaded. ProgramAbortException was moved to a different location in
Flink 1.11.0 [1]. Not sure what cloudflow.flink.Flinkstreamlet does,
but it seems to rely on some older
Hi,
I used the official flink-1.12.5 package, configured sql-client-defaults.yaml,
and ran bin/sql-client.sh embedded.
cat conf/sql-client-defaults.yaml
catalogs:
  # A typical catalog definition looks like:
  - name: myhive
    type: hive
    hive-conf-dir: /apps/conf/hive
    default-database: de
Any thoughts on these?
Thanks,
Prasanna.
On Sat, Oct 30, 2021 at 7:25 PM Prasanna kumar <
prasannakumarram...@gmail.com> wrote:
> Hi,
>
> We have the following Flink job that processes records from Kafka based on
> rules that we load from S3 files into broadcast state.
> Earlier we were able t
Hi,
If you are using sql-client, you can try:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#execute-a-set-of-sql-statements
If you are using TableEnvironment, you can use a statement set too:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/common/
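For the TableEnvironment route, a minimal sketch; the source and sink table
names are placeholders that would need to exist:

    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # sink_a, sink_b and source_table are placeholder names (assumptions).
    stmt_set = t_env.create_statement_set()
    stmt_set.add_insert_sql("INSERT INTO sink_a SELECT * FROM source_table")
    stmt_set.add_insert_sql("INSERT INTO sink_b SELECT * FROM source_table")

    # Both INSERTs are optimized together and submitted as a single job.
    stmt_set.execute().wait()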
Hi Deepa,
I cannot see the attached image. Maybe you can post a description of the
problem instead.
I am also moving this discussion to the user ML.
Cheers,
Till
On Mon, Nov 1, 2021 at 9:05 AM Sekaran, Sreedeepa <
sreedeepa.seka...@westpac.com.au> wrote:
> Hi Team,
>
>
>
> We are working with Flink
Hi,
It seems that there is a jar conflict. Please check your dependencies:
some Guava dependencies conflict with those pulled in by the corresponding
Hadoop version. You can try excluding all Guava dependencies.
Best,
Jingsong
On Mon, Nov 1, 2021 at 6:07 PM 方汉云 wrote:
>
> Hive version: 2.3.8
> Flink version: 1.12
Hive version: 2.3.8
Flink version: 1.12.5
Error message:
Exception in thread "main" org.apache.flink.table.client.SqlClientException:
Unexpected exception. This is a bug. Please consider filing an issue.
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:215)
Caused by: org.apach
Hey Puneet,
The reason for the multiple calls to the condition is, as mentioned before,
that it is evaluated once for the TAKE edge and a second time for the
IGNORE edge: every edge is evaluated independently. There is no joint
context or caching of conditions. I agree from a perspec
Thanks Fabian,
That was the information I was missing.
(Late reply ... same here, FlinkForward 😊 )
Thias
-----Original Message-----
From: Fabian Paul
Sent: Thursday, October 28, 2021 08:38
To: Schwalbe Matthias
Cc: Mason Chen ; user
Subject: Re: FlinkKafkaConsumer -> KafkaSource Stat
Hey everyone, we have a new two-part post published on the Apache Flink
blog about the sort-based blocking shuffle implementation in Flink. It
covers benchmark results, design and implementation details, and more! We
hope you like it and welcome any sort of feedback on it. :)
https://flink.apac