Jing Zhang created FLINK-25604:
--
Summary: Remove useless aggregate function
Key: FLINK-25604
URL: https://issues.apache.org/jira/browse/FLINK-25604
Project: Flink
Issue Type: Sub-task
Thanks everyone for voting.
If there are no objections, I'll close this vote and send a vote result mail:
- create a sub project named `flink-table-store`.
Best,
Jingsong
On Tue, Jan 11, 2022 at 2:51 PM Jingsong Li wrote:
>
> Hi Fabian,
>
> Thanks for your information.
>
> If gradle is mat
Hi all,
I'd like to start a vote on FLIP-199: Change some default config values of
blocking shuffle for better usability [1] which has been discussed in this
thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes.
[1]
https://cwiki.apache.org/con
Hi Fabian,
Thanks for your information.
If Gradle is mature later, it should not be too difficult to migrate
from Maven; we can consider it later.
Best,
Jingsong
On Tue, Jan 11, 2022 at 12:00 PM 刘首维 wrote:
>
> Thanks for driving this, Jingsong.
> +1 (non-binding) for separate repository.
>
>
>
lupan created FLINK-25602:
-
Summary: Make the BlobServer use AWS S3
Key: FLINK-25602
URL: https://issues.apache.org/jira/browse/FLINK-25602
Project: Flink
Issue Type: Improvement
Components
lupan created FLINK-25603:
-
Summary: Make the BlobServer use AWS S3
Key: FLINK-25603
URL: https://issues.apache.org/jira/browse/FLINK-25603
Project: Flink
Issue Type: Improvement
Components
Thank you Xingbo. I meanwhile also got my Azure pipeline working and
was able to build the artifacts. Although in general it would be nice
if not every release volunteer had to set up their separate Azure
environment.
Martijn,
The release is staged, except for the website PR:
https://issues.apac
Thanks for driving this, Jingsong.
+1 (non-binding) for separate repository.
Best Regards,
Shouwei
Ada Wong created FLINK-25601:
Summary: Update 'state.backend' in flink-conf.yaml
Key: FLINK-25601
URL: https://issues.apache.org/jira/browse/FLINK-25601
Project: Flink
Issue Type: Improvement
Hi Gen,
Thanks for your feedback.
I think you are talking about how we are going to store the cached
data. The first option is to write the data with a sink to an external
file system, much like the file store of the Dynamic Table. If I
understand correctly, it requires a distributed file system
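(For illustration, a minimal sketch of the first option described above: writing the data to be cached with a file sink to an external file system. The path, record type, and job name are placeholders, not part of the proposal; only the standard DataStream FileSink API is used.)

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CacheToFileSystemSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder stream standing in for the intermediate result to be cached.
        DataStream<String> toCache = env.fromElements("a", "b", "c");

        // Write the cached data to a distributed file system path (placeholder URI).
        FileSink<String> cacheSink = FileSink
                .forRowFormat(new Path("hdfs:///tmp/cache"), new SimpleStringEncoder<String>("UTF-8"))
                .build();

        toCache.sinkTo(cacheSink);
        env.execute("cache-to-filesystem-sketch");
    }
}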
Wenlong Lyu created FLINK-25600:
---
Summary: Support new statement set syntax in sql client and update
docs
Key: FLINK-25600
URL: https://issues.apache.org/jira/browse/FLINK-25600
Project: Flink
Zhuang Liu created FLINK-25599:
---
Summary: The description of taskmanager.memory.task.heap.size in
the official document is incorrect
Key: FLINK-25599
URL: https://issues.apache.org/jira/browse/FLINK-25599
Hi Fabian,
Thanks for the comments!
By "add a source mixin interface", are you suggesting to update
the org.apache.flink.api.connector.source.Source interface to add the API
"RecordEvaluator getRecordEvaluator()"? If so, it seems to add more
public API and thus more complexity than the solution i
Roman Khachatryan created FLINK-25598:
-
Summary: Changelog materialized state discarded on failure
Key: FLINK-25598
URL: https://issues.apache.org/jira/browse/FLINK-25598
Project: Flink
I
David Morávek created FLINK-25597:
-
Summary: Document which URI-based config options work with local
/ general filesystems
Key: FLINK-25597
URL: https://issues.apache.org/jira/browse/FLINK-25597
Proje
Thank you for starting the discussion. Being able to change the logging
level at runtime is very valuable in my experience.
Instead of introducing our own API (and eventually even persistence), could
we just periodically reload the log4j or logback configuration from the
environment/filesystem? I
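(For illustration, a minimal sketch of the periodic-reload idea with log4j2. Only LogManager.getContext and LoggerContext.reconfigure are actual log4j2 core API; the scheduling wrapper and interval are illustrative. log4j2 can also watch its configuration file natively via the monitorInterval setting, and logback via scan/scanPeriod.)

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class PeriodicLog4jReload {

    public static void start(long intervalSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> {
                    // Re-read the log4j2 configuration file the context was started with,
                    // picking up any logger-level changes made on the filesystem.
                    LoggerContext context = (LoggerContext) LogManager.getContext(false);
                    context.reconfigure();
                },
                intervalSeconds,
                intervalSeconds,
                TimeUnit.SECONDS);
    }
}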
Hi everyone,
Hope you enjoyed the Holiday Season.
I would like to start the discussion on the improvement proposal
FLIP-210 [1], which aims to provide a way to change log levels at
runtime to simplify the detection of issues and bugs as reported in the
ticket FLINK-16478 [2].
Firstly, thanks Xingxing Di a
+1 for the separate repository, and the name "flink-table-store".
Best,
Godfrey
Becket Qin wrote on Mon, Jan 10, 2022 at 22:22:
>
> Thanks for the FLIP, Jingsong.
>
> +1 (binding)
>
> Naming wise, I am also slightly leaning towards calling it
> "flink-table-store".
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On
Thanks for the FLIP, Jingsong.
+1 (binding)
Naming wise, I am also slightly leaning towards calling it
"flink-table-store".
Thanks,
Jiangjie (Becket) Qin
On Mon, Jan 10, 2022 at 7:39 PM Fabian Paul wrote:
> Hi all,
>
> I just wanted to give my two cents for the build system discussion. In
>
Jing Zhang created FLINK-25596:
--
Summary: Specify hash/sortmerge join in SQL hint
Key: FLINK-25596
URL: https://issues.apache.org/jira/browse/FLINK-25596
Project: Flink
Issue Type: Sub-task
Jing Zhang created FLINK-25595:
--
Summary: Specify hash/sort aggregate strategy in SQL hint
Key: FLINK-25595
URL: https://issues.apache.org/jira/browse/FLINK-25595
Project: Flink
Issue Type: Sub-
Jing Zhang created FLINK-25594:
--
Summary: Take parquet metadata into consideration when source is
parquet files
Key: FLINK-25594
URL: https://issues.apache.org/jira/browse/FLINK-25594
Project: Flink
Jing Zhang created FLINK-25593:
--
Summary: A redundant scan could be skipped if it is an input of
join and the other input is empty after partition prune
Key: FLINK-25593
URL: https://issues.apache.org/jira/browse/FLI
+1 (binding)
Cheers,
Till
On Mon, Jan 10, 2022 at 2:30 PM Etienne Chauchot
wrote:
> +1
>
> Best
>
> Etienne Chauchot
>
> On 10/01/2022 at 10:22, Till Rohrmann wrote:
> > Hi everyone,
> >
> > I'd like to start a vote on FLIP-201: Persist local state in working
> > directory [1] which has been
Hi Qin,
Thanks for bringing up this issue. AFAIK, there is no such mechanism in
Flink for dynamic task re-assignment at runtime, as states need to be
correctly re-distributed across the nodes, which is highly error-prone and
not well-suited for the current computation model.
However, if the data-
Jing Zhang created FLINK-25592:
--
Summary: Improvement of parser, optimizer and execution for Flink
Batch SQL
Key: FLINK-25592
URL: https://issues.apache.org/jira/browse/FLINK-25592
Project: Flink
+1
Best
Etienne Chauchot
On 10/01/2022 at 10:22, Till Rohrmann wrote:
Hi everyone,
I'd like to start a vote on FLIP-201: Persist local state in working
directory [1] which has been discussed in this thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not
Fabian Paul created FLINK-25591:
---
Summary: Use FileSource for StreamExecutionEnvironment.readFiles
Key: FLINK-25591
URL: https://issues.apache.org/jira/browse/FLINK-25591
Project: Flink
Issue T
Hi Dong,
Thank you for updating the FLIP and making it applicable to all
sources. I am a bit unsure about the implementation part. I would
propose to add a source mixin interface that declares
`getRecordEvaluator`, and sources that want to allow dynamic
stopping implement that interface.
An
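(For illustration, a minimal sketch of what such a source mixin interface could look like. Both interface names and the evaluator method are assumptions made for the example, not the API proposed in the FLIP.)

// Hypothetical evaluator that decides, per record, whether the source should stop consuming.
@FunctionalInterface
interface RecordEvaluator<T> {
    boolean isEndOfStream(T record);
}

// Hypothetical mixin: sources that want to support dynamic stopping implement this
// in addition to org.apache.flink.api.connector.source.Source, instead of adding a
// new method to the Source interface itself.
interface SupportsRecordEvaluator<T> {
    RecordEvaluator<T> getRecordEvaluator();
}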
Hi all,
I just wanted to give my two cents for the build system discussion. In
general, I agree with David's opinion to start new projects with
Gradle but during the development of the external connector
repository, we found some difficulties that still need to be solved. I
do not want to force an
Hi, Ronak,
We cannot set a specific 'value.deserializer' in the table options;
'key.deserializer' and 'value.deserializer' are always set to
'org.apache.kafka.common.serialization.ByteArrayDeserializer'.
If you want to implement a format, you could take a look at the code of
JsonFormatFactory.java in flink
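(For illustration, a skeleton of a custom format factory modeled loosely on JsonFormatFactory. The identifier, the empty option sets, and the omitted decoding logic are placeholders; only the factory interfaces and method signatures come from the Flink table API.)

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.connector.format.DecodingFormat;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.factories.DeserializationFormatFactory;
import org.apache.flink.table.factories.DynamicTableFactory;

import java.util.Collections;
import java.util.Set;

public class MyFormatFactory implements DeserializationFormatFactory {

    @Override
    public DecodingFormat<DeserializationSchema<RowData>> createDecodingFormat(
            DynamicTableFactory.Context context, ReadableConfig formatOptions) {
        // Build and return the DecodingFormat that creates your DeserializationSchema<RowData>;
        // the actual decoding logic is left out of this sketch.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }

    @Override
    public String factoryIdentifier() {
        // Used in the DDL as 'value.format' = 'my-format'; also register the class in
        // META-INF/services/org.apache.flink.table.factories.Factory for discovery.
        return "my-format";
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        return Collections.emptySet();
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }
}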
Anton Kalashnikov created FLINK-25590:
-
Summary: Logging warning of insufficient memory for all configured
buffers
Key: FLINK-25590
URL: https://issues.apache.org/jira/browse/FLINK-25590
Project:
I'm also in favour of "flink-table-store".
Best,
Jark
On Mon, 10 Jan 2022 at 16:18, David Morávek wrote:
> Hi Jingsong,
>
> the connector repository prototype I've seen is being built on top of
> Gradle [1], that's why I was referring to it (I think one idea was also to
> migrate the main repos
Alexander Preuss created FLINK-25589:
Summary: Update Chinese version of Elasticsearch connector docs
Key: FLINK-25589
URL: https://issues.apache.org/jira/browse/FLINK-25589
Project: Flink
There could be a problem with the fat image actually depending on the
slim version; I'm not sure if the official-images repo supports that.
We should, however, be able to generate two separate standalone Dockerfiles.
On 23/12/2021 11:16, Till Rohrmann wrote:
Hi David,
Thanks for starting this disc
Hi Hang,
My question is: can we use a specific 'value.deserializer' in the table options via
the Kafka connector, or is there no way to do so? I have already kept 'value.format' in
the code snippet below, so is that enough, and does it handle the deserializer by itself
internally?
How to create custom format can you please sha
What CI resources do you actually intend to use? Asking since the ASF GHA
resources are afaik quite overloaded.
On 05/01/2022 11:48, Martijn Visser wrote:
Hi everyone,
I wanted to summarise the email thread and see if there are any open items
that still need to be discussed, before we can finalis
Francesco Guardiani created FLINK-25588:
---
Summary: Add jdk8 and datetime module to jackson shaded
Key: FLINK-25588
URL: https://issues.apache.org/jira/browse/FLINK-25588
Project: Flink
Hi, Ronak,
I think you should implement a custom format yourself instead of
overriding the deserializer. 'value.format' is a required table option.
Best,
Hang
Ronak Beejawat (rbeejawa) wrote on Mon, Jan 10, 2022 at 17:09:
> Hi Team,
>
> Is there any way we use value.deserializer in Connector Options from kafka
> via
Correction: My previous vote in this thread is actually non-binding. 8)
On Wed, Jan 5, 2022 at 2:57 PM Matthias Pohl wrote:
> +1 (binding)
>
> - Verified the checksums
> - Checked the website PR
> - Diff'd the NOTICE files comparing it to 14.0 to check for anything
> suspicious
> - build Flink s
Hi,
There is already an on-going issue about it. (
https://issues.apache.org/jira/browse/FLINK-24456)
Best,
Hang
聂荧屏 wrote on Mon, Jan 10, 2022 at 10:06:
> hello
>
>
> Is there any plan to develop batch mode of Flink SQL Kafka connector?
>
> I would like to use kafka connector for daily/hourly/minute-by-min
Hi everyone,
I'd like to start a vote on FLIP-201: Persist local state in working
directory [1] which has been discussed in this thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes.
[1] https://cwiki.apache.org/confluence/x/wJuqCw
[2] https://
Hi Yun,
I assume that most people will use this feature with k8s-like
deployment environments. But in theory it works everywhere where you can
establish a stable relationship between volumes and Flink processes. If
Flink processes are restarted on different nodes, then of course you need
volumes t
Hi Team,
Is there any way to use value.deserializer in the Connector Options for Kafka via the
SQL API?
PFB the code snippet below:
tableEnv.executeSql("CREATE TABLE cmrTable (\r\n"
+ " org_id STRING\r\n"
+ " ,cluster_id STRING\r\n"
+ " ,globalcallid_callmanage
This is really great news. Thanks a lot for all the work Dong, Yun, Zhipeng
and others!
Cheers,
Till
On Fri, Jan 7, 2022 at 2:36 PM David Morávek wrote:
> Great job! <3 Thanks Dong and Yun for managing the release and big thanks
> to everyone who has contributed!
>
> Best,
> D.
>
> On Fri, Jan
I think this feature could indeed help recover faster in the case of node
failure.
It seems this feature could only work well with a k8s-like deployment environment?
Best,
Yun Tang
From: David Morávek
Sent: Wednesday, January 5, 2022 19:51
To: dev
Subject: Re:
Yun Gao created FLINK-25587:
---
Summary: HiveCatalogITCase crashed on Azure with exit code 239
Key: FLINK-25587
URL: https://issues.apache.org/jira/browse/FLINK-25587
Project: Flink
Issue Type: Bug
Hi Jingsong,
the connector repository prototype I've seen is being built on top of
Gradle [1], that's why I was referring to it (I think one idea was also to
migrate the main repository to Gradle eventually). I think Martijn / Fabian
may be a bit more familiar with the connectors repository effort a