Hi, Ron
Thanks for the explanation,
+1 (binding) from my side
Best,
Godfrey
刘大龙 wrote on Fri, Apr 22, 2022 at 13:45:
>
>
> Hi, Godfrey
>
> Parsing of the add/delete jar syntax is currently supported on the table
> environment side, but the execution is implemented on the SqlClient side.
> After this FLIP, we will move
Hi, Godfrey
Parsing of the add/delete jar syntax is currently supported on the table
environment side, but the execution is implemented on the SqlClient side. After
this FLIP, we will move the execution to the table environment, so there is no
public API change. Moreover, I have updated the description in Core
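For illustration only, a minimal PyFlink sketch of how this could look once the
execution lives in the table environment; the jar path is a placeholder, and
issuing the statement through execute_sql is an assumption based on this FLIP
rather than the current release:

    from pyflink.table import EnvironmentSettings, TableEnvironment

    # Hypothetical usage once ADD JAR execution moves into the table environment.
    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Placeholder path to a user-provided jar (e.g. one containing UDFs).
    t_env.execute_sql("ADD JAR '/path/to/my-udfs.jar'")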
Chenyu Zheng created FLINK-27350:
Summary: JobManager doesn't bring up new TaskManager during
failure recovery
Key: FLINK-27350
URL: https://issues.apache.org/jira/browse/FLINK-27350
Project: Flink
Hi Ron,
I don't see any section that mentions `delete jar`; could you update it?
Best,
Godfrey
Jing Zhang wrote on Thu, Apr 21, 2022 at 17:57:
>
> Ron,
> +1 (binding)
>
> Thanks for driving this FLIP.
>
> Best,
> Jing Zhang
>
> Jark Wu wrote on Thu, Apr 21, 2022 at 11:31:
>
> > Thanks for driving this work @Ron,
> >
> > +1 (
Hi Paul,
Thanks for your feedback!
> SQL client would remain supporting REST endpoint only?
Yes, the SQL Client will only support the REST endpoint. The FLIP's goal is to
migrate the Flink Gateway into the Hive ecosystem. But I think we can leave
this as future work.
Best,
Shengkai
Paul Lam wrote on
Hi, Paul.
Thanks for your feedback!
> Is operation status exposed to users or just for internal usage to
represent the job status
Only with synchronized job submission does the operation status contain the
job status. When a user chooses synchronized job submission, it means
- the user expects to wa
Thanks for driving the effort, Sebastian. I think the motivation makes a
lot of sense. Just a few suggestions / questions.
1. I think watermark alignment is sort of a general use case, so should we
just add the related methods to SourceReader directly instead of
introducing the new interface of Wi
Hi Paul,
Sorry for the late response. Here are my thoughts.
1. I think the keyword QUERY may confuse users because the statement also
works for DML statements. I find that Snowflake [1] supports:
- CREATE TASK
- DROP TASK
- ALTER TASK
- SHOW TASKS
- DESCRIBE TASK
I think we can follow sno
Echo Lee created FLINK-27349:
Summary: The timeout of some methods such as
JobMaster#requestJobXXX(Time timeout) does not take effect
Key: FLINK-27349
URL: https://issues.apache.org/jira/browse/FLINK-27349
Ahmet Gürbüz created FLINK-27348:
Summary: Flink KafkaSource doesn't set groupId
Key: FLINK-27348
URL: https://issues.apache.org/jira/browse/FLINK-27348
Project: Flink
Issue Type: Bug
Jingsong Lee created FLINK-27347:
Summary: Create flink-table-store-hive module
Key: FLINK-27347
URL: https://issues.apache.org/jira/browse/FLINK-27347
Project: Flink
Issue Type: Sub-task
Jingsong Lee created FLINK-27346:
Summary: [umbrella] Introduce Hive reader for table store
Key: FLINK-27346
URL: https://issues.apache.org/jira/browse/FLINK-27346
Project: Flink
Issue Type:
+1 to publishing Flink Docker images for snapshots.
Best,
Jingsong
On Fri, Apr 22, 2022 at 2:23 AM Alexander Fedulov wrote:
>
> Hi everyone,
>
> in the scope of work on externalizing connectors [1] it became evident that
> we need to add the process of releasing SNAPSHOT (nightly) Docker images
> for
ChangZhuo Chen (陳昌倬) created FLINK-27345:
Summary: operator does not update related resource when
flinkConfiguration, logConfiguration are updated.
Key: FLINK-27345
URL: https://issues.apache.org/jira/brow
Paul Lin created FLINK-27344:
Summary: FLIP-222: Support full query lifecycle statements in SQL
client
Key: FLINK-27344
URL: https://issues.apache.org/jira/browse/FLINK-27344
Project: Flink
Issu
Hi Rohith,
I guess the `word_count.py` you tried to execute is from the Flink release-1.15
or master branch. For the PyFlink 1.14 you are using, you need to change
`t_env.get_config().set_string("parallelism.default", "1")` to
`t_env.get_config().get_configuration().set_string("parallelism.default",
"1")`
pengyusong created FLINK-27343:
--
Summary: Flink JDBC sink with default params leads to unordered buffering
of records in one batch
Key: FLINK-27343
URL: https://issues.apache.org/jira/browse/FLINK-27343
Project:
Hi,
I set up pyflink on my system for python version 3.7.5 and pyflink version
1.14.4.
When I try to run word_count.py using the command "python word_count.py", I get
this error
Traceback (most recent call last):
File "word_count.py", line 146, in
word_count(known_args.input, known_args.output)
Hi everyone,
in the scope of work on externalizing connectors [1] it became evident that
we need to add the process of releasing SNAPSHOT (nightly) Docker images
for Flink. Let me briefly explain why this is the case:
- currently, our container-based E2E tests rely on building Flink Docker
images
Thanks, Yun Tang, for your clarifications.
Let me keep my original structure and reply to these points...
3. Should we generalise the Temporal***State to offer arbitrary key types and
not just Long timestamps?
The use cases you detailed do indeed look similar to the ones we were
optimising in our
> However, a single source operator may read data from multiple
splits/partitions, e.g., multiple Kafka partitions, such that even with
watermark alignment the source operator may need to buffer an excessive amount
of data if one split emits data faster than another.
For this part from the motivation
Thanks for working on this!
I wonder if "supporting" split alignment in SourceReaderBase and then doing
nothing if the split reader does not implement AlignedSplitReader could be
misleading? Perhaps WithSplitsAlignment can instead be added to the
specific source reader (i.e. KafkaSourceReader) to
Chesnay Schepler created FLINK-27342:
Summary: Link to Apache privacy policy
Key: FLINK-27342
URL: https://issues.apache.org/jira/browse/FLINK-27342
Project: Flink
Issue Type: Technical D
Yun Gao created FLINK-27341:
---
Summary: TaskManager running together with JobManager is bound to
127.0.0.1
Key: FLINK-27341
URL: https://issues.apache.org/jira/browse/FLINK-27341
Project: Flink
Iss
Hi Sebastian, Hi Dawid,
As part of this FLIP, the `AlignedSplitReader` interface (aka the stop &
resume behavior) will be implemented for Kafka and Pulsar only, correct?
+1 in general. I believe it is valuable to complete the watermark alignment
story with this FLIP.
Cheers,
Konstantin
On
Sergey Nuyanzin created FLINK-27340:
---
Summary: [JUnit5 Migration] Module: flink-python
Key: FLINK-27340
URL: https://issues.apache.org/jira/browse/FLINK-27340
Project: Flink
Issue Type: Sub
+1 (binding)
* checked the license diff against my previous checks on rc2; this time
everything seems ok
* checked checksums and signatures, there are no binaries
* compiled from sources
* ran a standalone cluster, clicked through the UI
* ran StateMachineExample, took a savepoint in native format a
To be explicit, having worked on it, I support it ;) I think we can
start a vote thread soonish, as there are no concerns so far.
Best,
Dawid
On 13/04/2022 11:27, Sebastian Mattheis wrote:
Dear Flink developers,
I would like to open a discussion on FLIP 217 [1] for an extension of
Watermark
Ron,
+1 (binding)
Thanks for driving this FLIP.
Best,
Jing Zhang
Jark Wu wrote on Thu, Apr 21, 2022 at 11:31:
> Thanks for driving this work @Ron,
>
> +1 (binding)
>
> Best,
> Jark
>
> On Thu, 21 Apr 2022 at 10:42, Mang Zhang wrote:
>
> > +1
Chesnay Schepler created FLINK-27339:
Summary: Some classes don't have a package
Key: FLINK-27339
URL: https://issues.apache.org/jira/browse/FLINK-27339
Project: Flink
Issue Type: Technic
Hi Shengkai,
Thanks for starting the discussion.
As the FLIP doesn't mention the SQL client, I'm assuming that the SQL client
would remain supporting the REST endpoint only?
Best,
Paul Lam
> On Apr 21, 2022, at 14:45, Shengkai Fang wrote:
>
> Hi, Flink developers.
>
> I want to start a discussion about the FLIP-22
Hi Shengkai,
Good to see FLIP-91 is revisited after such a long time. Big +1 for the
proposal.
I've been using the SQL gateway for a while, so here are my 2 cents:
1. Is operation status exposed to users or just for internal usage to represent
the job status?
I’m assuming the latter, or e
luoyuxia created FLINK-27338:
Summary: Improve splitting files for Hive source
Key: FLINK-27338
URL: https://issues.apache.org/jira/browse/FLINK-27338
Project: Flink
Issue Type: Improvement
Yang Wang created FLINK-27337:
-
Summary: Prevent session cluster from being deleted when there are
running jobs
Key: FLINK-27337
URL: https://issues.apache.org/jira/browse/FLINK-27337
Project: Flink
Jingsong Lee created FLINK-27336:
Summary: Avoid merging when there is only one record
Key: FLINK-27336
URL: https://issues.apache.org/jira/browse/FLINK-27336
Project: Flink
Issue Type: Impro