Jiangjie Qin created FLINK-27554:
Summary: The asf-site does not build on Apple Silicon
Key: FLINK-27554
URL: https://issues.apache.org/jira/browse/FLINK-27554
Project: Flink
Issue Type: Impr
Hi Jark,
Thanks for the explanation; it answered my question well.
Just one thing: if the key row count is N and the value row count is M (N, M >
1), the cartesian product might not cover all use cases.
But I think we don't need to worry about that for now, since this case is
rare and we can discuss it
Zheng Hu created FLINK-27553:
Summary: Clarify the semantic of RecordWriter interface.
Key: FLINK-27553
URL: https://issues.apache.org/jira/browse/FLINK-27553
Project: Flink
Issue Type: Sub-task
João Boto created FLINK-27552:
Summary: Prometheus metrics
Key: FLINK-27552
URL: https://issues.apache.org/jira/browse/FLINK-27552
Project: Flink
Issue Type: Bug
Components: Runtime /
Hi Affe,
Regarding the implementation, from the interface of
`DeserializationSchema#deserialize(byte[], Collector)`, it might emit
multiple rows.
So this is just a more generic implementation instead of hard-coding the
dropping of rows.
That said, currently there is no built-in key format that will emi
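For context, the multi-row behavior described above can be sketched in a self-contained way. In the sketch below, `Collector` is a simplified stand-in for Flink's `org.apache.flink.util.Collector`, and `LineSplittingSchema` is a hypothetical format (not a real Flink class); it only illustrates how one `deserialize(byte[], Collector)` call may emit zero, one, or many rows:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class MultiRowDeserializeSketch {
    // Simplified stand-in for org.apache.flink.util.Collector<T>.
    interface Collector<T> {
        void collect(T record);
    }

    // Hypothetical format: one Kafka record expands into several rows,
    // mirroring the shape of DeserializationSchema#deserialize(byte[], Collector).
    static class LineSplittingSchema {
        void deserialize(byte[] message, Collector<String> out) {
            for (String line : new String(message, StandardCharsets.UTF_8).split("\n")) {
                if (!line.isEmpty()) {
                    out.collect(line); // zero, one, or many rows per record
                }
            }
        }
    }

    static List<String> deserializeAll(String payload) {
        List<String> rows = new ArrayList<>();
        new LineSplittingSchema().deserialize(payload.getBytes(StandardCharsets.UTF_8), rows::add);
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(deserializeAll("row1\nrow2\nrow3"));
    }
}
```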
Gyula Fora created FLINK-27551:
Summary: Consider implementing our own status update logic
Key: FLINK-27551
URL: https://issues.apache.org/jira/browse/FLINK-27551
Project: Flink
Issue Type: Imp
Hi JingSong.
Thanks for your feedback.
> reorganize the FLIP: what Pluggable Endpoint Discovery is, and how users
can add a new Endpoint, before introducing SQLGatewayService.
I updated the FLIP: reorganized the order and added more details.
> Then I have some doubts about the name SQLGatewayService,
Hi, Cao, thanks for your feedback.
> "Firstly, could table alias be supported in the hint?"
I think table aliases can't be supported in the hint, just as you say;
for now only table names and view names are supported in hints.
Currently, the alias will be ignored when the Sql
I see what you want; maybe something like DISTRIBUTE BY in Hive SQL.
The community is planning to support this feature but has not started yet.
@Godfrey will drive this work.
Best,
Jark
On Mon, 9 May 2022 at 13:45, lpengdr...@163.com wrote:
> Hi
> Thanks for your reply.
> The way I wan
Junfan Zhang created FLINK-27550:
Summary: Remove checking yarn queues before submitting job to Yarn
Key: FLINK-27550
URL: https://issues.apache.org/jira/browse/FLINK-27550
Project: Flink
Iss
While reading the source code of the Kafka SQL source connector, I noticed that
in DynamicKafkaDeserializationSchema[1], when the schema emits multiple
keys, the code computes a cartesian product of the key rows and value rows.
I know that in CDC, a format can emit multiple rows (UPDATE_BEFORE and
UPD
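The pairing described above can be sketched as follows. This is a self-contained illustration of the N-keys-times-M-values behavior, not the actual connector code; the `cartesianProduct` helper and the `"|"` join are assumptions for demonstration only:

```java
import java.util.ArrayList;
import java.util.List;

public class KeyValueCartesianSketch {
    // Pair every key row with every value row, as described for the
    // connector's output: N key rows x M value rows -> N * M joined rows.
    static List<String> cartesianProduct(List<String> keyRows, List<String> valueRows) {
        List<String> joined = new ArrayList<>();
        for (String key : keyRows) {
            for (String value : valueRows) {
                joined.add(key + "|" + value); // one output row per (key, value) pair
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        // 2 key rows and 3 value rows yield 6 combined rows.
        List<String> out = cartesianProduct(List.of("k1", "k2"), List.of("v1", "v2", "v3"));
        System.out.println(out.size());
    }
}
```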
Hi
Thanks for your reply.
What I want is not only for the hash lookup join; there are many
operators that need a hash operation to solve the skew problem. Lookup join is
just a special case.
So I hope there is an operator that could perform a shuffle. Maybe that's a way
to solve these problems?
Ran Tao created FLINK-27549:
Summary: Inconsistent bytecode version when enable jdk11 or higher
version
Key: FLINK-27549
URL: https://issues.apache.org/jira/browse/FLINK-27549
Project: Flink
Issu
Hi all,
Thanks for your review on flink-web PR.
I have resolved the comments; please take a look when you have time.
Hi Becket and Yu,
About quick-start, I created a JIRA to improve:
https://issues.apache.org/jira/browse/FLINK-27548
Hi Yun, thanks for your PR, I have merged it.
Best,
Jingsong
On Mon
Jingsong Lee created FLINK-27548:
Summary: Improve quick-start of table store
Key: FLINK-27548
URL: https://issues.apache.org/jira/browse/FLINK-27548
Project: Flink
Issue Type: Improvement
Ran Tao created FLINK-27547:
Summary: Hardcoding the java11 & java17 target Java version in the pom
may cause hidden errors
Key: FLINK-27547
URL: https://issues.apache.org/jira/browse/FLINK-27547
Project: Flin
Thanks Xuyang for driving.
zoucao also mentioned alias.
Can you explain in the FLIP why alias is not supported? What are the
difficulties? Maybe we can try to overcome them. Or, if we don't
support it, how should we report errors?
Best,
Jingsong
On Mon, May 9, 2022 at 10:53 AM Xuyang wrot
Hi,
If you are looking for the hash lookup join, there is an in-progress
FLIP-204[1] working for it.
Btw, I still can't see your picture. You can upload your picture to some
image service and share a link here.
Best,
Jark
[1]:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-204%3A+Introd
Thank Jingsong for the explanation.
+1(binding) from my side.
Best,
Jark
On Mon, 9 May 2022 at 11:37, Jingsong Li wrote:
> Hi Jark,
>
> > I have checked the LICENSE and NOTICE files but found that the uber jar
> (flink-table-store-dist) doesn't provide an appropriate NOTICE file to
> list the
Zheng Hu created FLINK-27546:
Summary: Add append only writer which implements the RecordWriter
interface.
Key: FLINK-27546
URL: https://issues.apache.org/jira/browse/FLINK-27546
Project: Flink
Hi Jark,
> I have checked the LICENSE and NOTICE files but found that the uber jar
(flink-table-store-dist) doesn't provide an appropriate NOTICE file to
list the bundled
dependencies.
Because the dependencies bundled in flink-table-store-dist are all
dependencies of the flink-sql jars, they already
Sorry!
The broken picture is attached.
lpengdr...@163.com
From: lpengdr...@163.com
Date: 2022-05-09 11:16
To: user-zh; dev
Subject: [Could we support DISTRIBUTE BY for Flink SQL]
Hello:
Right now we can't add a shuffle operation in a SQL job.
Sometimes, for example, I have a kafka-source(
Congrats Yang!
Best,
LuNing Wang
Dian Fu wrote on Saturday, May 7, 2022 at 17:21:
> Congrats Yang!
>
> Regards,
> Dian
>
> On Sat, May 7, 2022 at 12:51 PM Jacky Lau wrote:
>
> > Congrats Yang and well Deserved!
> >
> > Best,
> > Jacky Lau
> >
> > Yun Gao wrote on Saturday, May 7, 2022 at 10:44:
> >
> > > Congratulations Yang!
> >
Hello:
Right now we can't add a shuffle operation in a SQL job.
Sometimes, for example, I have a kafka-source(three partitions) with
parallelism three. Then I have a lookup-join function, and I want to process the
data distributed by id so that it can be split evenly across the three
parallel instances (The so
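What the question asks for is, conceptually, a hash shuffle: route each record to a parallel subtask by a hash of its id, so equal ids always land together. The sketch below is a minimal self-contained illustration of that routing idea only; the plain hashCode-modulo scheme is an assumption for demonstration and is not Flink's actual key-group assignment:

```java
import java.util.List;

public class HashShuffleSketch {
    // Route a record to one of `parallelism` subtasks by hashing its key.
    // Flink's real runtime assigns keys via key groups; this shows only the idea.
    static int subtaskFor(String id, int parallelism) {
        return Math.floorMod(id.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 3;
        for (String id : List.of("user-1", "user-2", "user-3", "user-1")) {
            // Records with the same id always go to the same subtask.
            System.out.println(id + " -> subtask " + subtaskFor(id, parallelism));
        }
    }
}
```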
Hi, Jark. Thanks for your review.
> Join Hint is a public API for SQL syntax. It should work for both streaming
and batch SQL.
I agree with your opinion. But currently, only in batch does the optimizer have
different join strategies; there is no choice of join strategy in streaming.
The
Dian Fu created FLINK-27545:
Summary: Update examples in PyFlink shell
Key: FLINK-27545
URL: https://issues.apache.org/jira/browse/FLINK-27545
Project: Flink
Issue Type: Bug
Components:
Chengkai Yang created FLINK-27544:
Summary: Example code in 'Structure of Table API and SQL Programs'
is out of date and cannot run
Key: FLINK-27544
URL: https://issues.apache.org/jira/browse/FLINK-27544
Hi Folks,
I have recently started using Flink and am excited to contribute to
the same. I am looking at two subcomponents:
https://issues.apache.org/jira/browse/FLINK-26041?jql=project%20%3D%20FLINK%20AND%20component%20%3D%20%22Runtime%20%2F%20Coordination%22%20AND%20status%20%3D%20Open
https://
It prints below INFO logs but not the logs that are part of flink classes.
[INFO] T E S T S
[INFO] ---
[INFO] Running org.apache.flink.streaming.api.graph.StreamGraphGeneratorTest
1830 [main] INFO
org.apache.flink.streaming.api.graph.StreamGrap
Hi Xuyang, thanks for driving this valuable discussion.
I think it makes sense to improve batch processing capability for FlinkSQL.
Using Query hints can make the optimization more flexible and accurate.
After going through the design doc, I have some points of confusion that maybe
you can help clarify.
First
Jingsong Lee created FLINK-27543:
Summary: Introduce StatsProducer to refactor code in
DataFileWriter
Key: FLINK-27543
URL: https://issues.apache.org/jira/browse/FLINK-27543
Project: Flink
You can try to modify resources/log4j2-test.properties.
`rootLogger.level = OFF` => `rootLogger.level = INFO`
Best,
Jingsong
On Sun, May 8, 2022 at 7:29 PM Prabhu Joseph wrote:
>
> Hi, I am trying to understand a Flink Unit Test case and so was checking
> the logs of it on my local machine. But
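For reference, the change Jingsong describes amounts to one line in the module's src/test/resources/log4j2-test.properties. Below is a sketch of what that file typically looks like in Flink modules with the level switched on; the exact appender names and pattern vary by module, so treat the details as an assumption:

```properties
# src/test/resources/log4j2-test.properties (sketch; exact contents vary by module)
# Changed from OFF to INFO to surface test logs:
rootLogger.level = INFO
rootLogger.appenderRef.test.ref = TestLogger

appender.testlogger.name = TestLogger
appender.testlogger.type = CONSOLE
appender.testlogger.target = SYSTEM_ERR
appender.testlogger.layout.type = PatternLayout
appender.testlogger.layout.pattern = %-4r [%t] %-5p %c %x - %m%n
```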
Hi, I am trying to understand a Flink Unit Test case and so was checking
the logs of it on my local machine. But I could not find the log file.
Could anyone tell me where/how to get the logs for the unit test case?
In Hadoop, under surefire-reports, the log, out and err file of test class
will be