Bowen Li created FLINK-15240:
Summary: is_generic key is missing for Flink table stored in
HiveCatalog
Key: FLINK-15240
URL: https://issues.apache.org/jira/browse/FLINK-15240
Project: Flink
Rui Li created FLINK-15239:
Summary: TM Metaspace memory leak
Key: FLINK-15239
URL: https://issues.apache.org/jira/browse/FLINK-15239
Project: Flink
Issue Type: Bug
Components: Table SQL
xiaojin.wy created FLINK-15238:
Summary: A SQL query can't generate a valid execution plan
Key: FLINK-15238
URL: https://issues.apache.org/jira/browse/FLINK-15238
Project: Flink
Issue Type: Bug
xiaodao created FLINK-15237:
Summary: CsvTableSource and referenced classes would be better extracted
into a separate module
Key: FLINK-15237
URL: https://issues.apache.org/jira/browse/FLINK-15237
Project: Flink
Congxian Qiu (klion26) created FLINK-15236:
Summary: Add a safety net for concurrent checkpoints on TM side
Key: FLINK-15236
URL: https://issues.apache.org/jira/browse/FLINK-15236
Project: Flink
Bowen Li created FLINK-15235:
Summary: Create a Flink distribution for Hive that includes all
Hive dependencies
Key: FLINK-15235
URL: https://issues.apache.org/jira/browse/FLINK-15235
Project: Flink
Bowen Li created FLINK-15234:
Summary: Hive table created from a Flink catalog table cannot have
null properties in parameters
Key: FLINK-15234
URL: https://issues.apache.org/jira/browse/FLINK-15234
Project: Flink
Jark Wu created FLINK-15233:
Summary: Improve Kafka connector properties to make append
update-mode the default
Key: FLINK-15233
URL: https://issues.apache.org/jira/browse/FLINK-15233
Project: Flink
Jingsong Lee created FLINK-15232:
Summary: Print match candidates to improve
NoMatchingTableFactoryException
Key: FLINK-15232
URL: https://issues.apache.org/jira/browse/FLINK-15232
Project: Flink
Zhenghua Gao created FLINK-15231:
Summary: Wrong HeapVector in AbstractHeapVector.createHeapColumn
Key: FLINK-15231
URL: https://issues.apache.org/jira/browse/FLINK-15231
Project: Flink
Hi all,
It seems that this discussion has gone idle, but I want to resume it because
I think this will make our community move faster and keep us on the safe
side.
I would like to summarize the discussion so far (please correct me if I'm
wrong):
1. we all agree to have a VOTE on the mailing list for such
Hi Timo,
I understand we need further discussion about syntax/dialect for 1.11. But
as Jark has pointed out, the current implementation violates the accepted
design of FLIP-63, which IMO qualifies as a bug. Given that it's a bug and
has a great impact on the usability of our Hive integration, do you
kevin created FLINK-15230:
Summary: Flink 1.9.1 Table API JSON schema array type exception
Key: FLINK-15230
URL: https://issues.apache.org/jira/browse/FLINK-15230
Project: Flink
Issue Type: Bug
Bowen Li created FLINK-15229:
Summary: DDL for Kafka connector following the documentation doesn't
work
Key: FLINK-15229
URL: https://issues.apache.org/jira/browse/FLINK-15229
Project: Flink
Seth Wiesman created FLINK-15228:
Summary: Drop vendor specific deployment documentation
Key: FLINK-15228
URL: https://issues.apache.org/jira/browse/FLINK-15228
Project: Flink
Issue Type: Improvement
Seth Wiesman created FLINK-15227:
Summary: Fix broken documentation build
Key: FLINK-15227
URL: https://issues.apache.org/jira/browse/FLINK-15227
Project: Flink
Issue Type: Bug
Biju Nair created FLINK-15226:
Summary: Running job with application parameters fails on a job
cluster
Key: FLINK-15226
URL: https://issues.apache.org/jira/browse/FLINK-15226
Project: Flink
Hi all,
There was recently a private report to the Flink PMC, as well as a public one
[1], about Flink's ability to execute arbitrary code. In scenarios where
Flink is accessible to somebody unauthorized, this can lead to issues.
The PMC received a similar report in November 2018.
I believe it would b
Hi Timo,
I am OK if you think they are not bugs and they should not be included in
1.10.
I think they have been accepted in FLIP-63, and there was no objection. It
has been more than three months since the discussion of FLIP-63, and it's
been six months since Flink added these two syntaxes.
But I can a
Hi Jingsong,
I will also add my opinion here for future discussions:
We had long discussions around SQL syntax in the past (e.g. for
WATERMARK or the concept of SYSTEM/TEMPORARY catalog objects) but in the
end all parties were happy and we came up with a good long-term solution
that is unlike
Chesnay Schepler created FLINK-15225:
Summary:
LeaderChangeClusterComponentsTest#testReelectionOfDispatcher occasionally
requires 30 seconds
Key: FLINK-15225
URL: https://issues.apache.org/jira/browse/FLINK-15225
Zhu Zhu created FLINK-15224:
Summary: Resource requirements are not respected when fulfilling a
slot request with unresolvedRootSlots from a SlotSharingManager
Key: FLINK-15224
URL: https://issues.apache.org/jira/browse/FLINK-15224
Jark Wu created FLINK-15223:
Summary: Csv connector should unescape the delimiter parameter
character
Key: FLINK-15223
URL: https://issues.apache.org/jira/browse/FLINK-15223
Project: Flink
Yu Li created FLINK-15222:
Summary: Move state benchmark utils into core repository
Key: FLINK-15222
URL: https://issues.apache.org/jira/browse/FLINK-15222
Project: Flink
Issue Type: Improvement
The upstream pull request [1] is open; usually they merge within a day and
the images are available shortly thereafter.
[1] https://github.com/docker-library/official-images/pull/7112
--
Patrick Lucas
On Thu, Dec 12, 2019 at 8:11 AM Hequn Cheng wrote:
> Hi Patrick,
>
> The release has been annou
chaiyongqiang created FLINK-15221:
Summary: Support exactly-once for the Table API
Key: FLINK-15221
URL: https://issues.apache.org/jira/browse/FLINK-15221
Project: Flink
Issue Type: Wish
Thanks for your explanation. I think the proposal is reasonable.
On Thu, Dec 12, 2019 at 3:32 AM Yangze Guo wrote:
> Thanks for the feedback, Gary.
>
> Regarding the WordCount test:
> - True. There is no test coverage increment compared to others.
> However, I think each test case better not hav
Paul Lin created FLINK-15220:
Summary: Add startFromTimestamp in KafkaTableSource
Key: FLINK-15220
URL: https://issues.apache.org/jira/browse/FLINK-15220
Project: Flink
Issue Type: Improvement
Arvid Heise created FLINK-15219:
---
Summary: LocalEnvironment is not initializing plugins
Key: FLINK-15219
URL: https://issues.apache.org/jira/browse/FLINK-15219
Project: Flink
Issue Type: Improvement
A quick idea is that we separate deployment from the user program, so that
deployment is always done outside the program. When the user program
executes, there is always a ClusterClient that communicates with
an existing cluster, remote or local. It will be another thread, so this is
just for your information.
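As a rough sketch of what I mean (host, port and jar path below are just
placeholders, and the cluster is assumed to have been deployed beforehand by
the CLI or a resource manager tool):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AttachToExistingCluster {
    public static void main(String[] args) throws Exception {
        // The cluster is deployed separately (CLI, YARN, Kubernetes, ...);
        // the user program only attaches to it through a client.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 8081, "/path/to/user-job.jar");

        env.fromElements("deployment", "is", "separated")
           .print();

        // execute() only submits the job graph via the client to the existing
        // cluster; it does not spin up a new one.
        env.execute("attach-to-existing-cluster");
    }
}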
Best,
ti
Hi Jingsong,
Thanks for the explanation, I think I misunderstood your point at the
beginning.
As FLIP-63 proposed, the INSERT OVERWRITE and INSERT PARTITION syntaxes are
added to Flink's
SQL syntax, but CREATE PARTITION TABLE should be limited to the Hive
dialect.
However, the current implementation is op
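To make the split concrete, here is a minimal sketch of the statements as I
read FLIP-63 (table names are made up and assumed to exist in the current
catalog; whether the default dialect should accept each of them is exactly
the question here, so treat this as illustration against the 1.10-era Table
API rather than a statement of current behavior):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;

public class Flip63SyntaxSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // DML that FLIP-63 adds to Flink's own SQL syntax:
        tEnv.sqlUpdate("INSERT OVERWRITE sink_table SELECT id, name FROM src");
        tEnv.sqlUpdate(
                "INSERT INTO sink_table PARTITION (dt = '2019-12-12') SELECT id, name FROM src");

        // DDL for partitioned tables, which per this discussion should only be
        // accepted under the Hive dialect:
        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tEnv.sqlUpdate(
                "CREATE TABLE part_table (id INT, name STRING) PARTITIONED BY (dt STRING)");
    }
}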
Hi Peter,
Another concern I realized recently is that with the current Executors
abstraction (FLIP-73),
I'm afraid the user program is designed to ALWAYS run on the client side.
Specifically,
we deploy the job in the executor when env.execute is called. This abstraction
possibly prevents
Flink from running user progr
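For reference, the pattern I mean is the plain one below: the pipeline is
built and execute() is called in main(), and with the executor abstraction
the deployment is triggered inside execute(), on whichever process runs
main() (the job itself is only a placeholder):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ClientSideProgram {
    public static void main(String[] args) throws Exception {
        // The job graph is built here, in the process that runs main(), i.e. the client.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3)
           .map(i -> i * 2)
           .returns(Types.INT)
           .print();

        // With FLIP-73, execute() hands the pipeline to the configured executor,
        // which deploys/submits the job from this same process.
        env.execute("client-side-program");
    }
}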
Hi Jark,
Let's recall FLIP-63.
We supported these syntaxes in the Hive dialect in 1.9. My whole reason for
launching FLIP-63 was to bring partition support to Flink itself.
Not only for batch; we also need streaming jobs to write partitioned files
today, which is also one of our very important
Hi Jingsong,
WATERMARK is not standard syntax; that's why we had a FLIP and a long
discussion to add it to
Flink's SQL syntax. I think if we want to add the INSERT OVERWRITE and
PARTITION syntax to
Flink's own syntax, we also need a FLIP or a VOTE, and this may not
happen soon (we should
hear more
Thanks Hequn for driving the release and everyone who makes this release
possible!
Thanks,
Zhu Zhu
On Thu, Dec 12, 2019 at 3:45 PM Wei Zhong wrote:
> Thanks Hequn for being the release manager. Great work!
>
> Best,
> Wei
>
> On Dec 12, 2019, at 15:27, Jingsong Li wrote:
>
> Thanks Hequn for driving this, 1.8.3 fixed