I entered the code of "Monitoring the Wikipedia Edit Stream", which is a Flink
example, into IntelliJ IDEA. I can run it there without any problem, but when
I build a JAR file from it, the JAR does not run. To build the JAR file, I
follow this path:
File ---> Project Structure ---> Artifacts ---> jar ---> From modu
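The message is cut off above, so the exact failure is unknown. A common cause of this symptom (my assumption, not stated in the message) is that the IDE-built JAR lacks the job's transitive dependencies or a Main-Class manifest entry. A widely used alternative is building a fat JAR with the Maven Shade plugin; the snippet below is a minimal sketch, where `com.example.WikipediaAnalysis` is a placeholder main class, not the example's real one:

```xml
<!-- Minimal maven-shade-plugin sketch for a self-contained job JAR.
     "com.example.WikipediaAnalysis" is a hypothetical placeholder main class. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.WikipediaAnalysis</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn package` produces a JAR that bundles dependencies and declares the entry point, so it can be submitted to a Flink cluster directly.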
Bowen Li created FLINK-12549:
Summary: include exceptions thrown by HMS client in
CatalogException in HiveCatalogBase
Key: FLINK-12549
URL: https://issues.apache.org/jira/browse/FLINK-12549
Project: Flink
Leonid Ilyevsky created FLINK-12548:
Summary: FlinkKafkaConsumer issues configuring underlying
KafkaConsumer
Key: FLINK-12548
URL: https://issues.apache.org/jira/browse/FLINK-12548
Project: Flink
Hey all,
Short update on the Flink Forward Call For Presentations: We've extended
the submission deadline until May 31, so there's more time to finish the
talk abstracts.
Also, the organizers are now able to cover travel costs for speakers in
cases where an employer can not cover them.
On Fr
Haibo Sun created FLINK-12547:
Summary: Deadlock when the task thread downloads jars using
BlobClient
Key: FLINK-12547
URL: https://issues.apache.org/jira/browse/FLINK-12547
Project: Flink
Issu
Konstantin Knauf created FLINK-12546:
Summary: Base Docker images on `library/flink`
Key: FLINK-12546
URL: https://issues.apache.org/jira/browse/FLINK-12546
Project: Flink
Issue Type: Imp
Hi,
> 1. Renaming “Runtime / Operators” to “Runtime / Task” or something like
> “Runtime / Processing”. “Runtime / Operators” was confusing me, since it
> sounded like it covers concrete implementations of the operators, like
> “WindowOperator” or various join implementations.
>
I'm fine with this
Till Rohrmann created FLINK-12545:
Summary: TableSourceTest.testNestedProject failed on Travis
Key: FLINK-12545
URL: https://issues.apache.org/jira/browse/FLINK-12545
Project: Flink
Issue Ty
zhijiang created FLINK-12544:
Summary: Deadlock during releasing memory in SpillableSubpartition
Key: FLINK-12544
URL: https://issues.apache.org/jira/browse/FLINK-12544
Project: Flink
Issue Type:
Thanks for your reply.
For the first question, it's not strictly necessary. But I prefer not to
have a TableEnvironment argument in Estimator.fit() or
Transformer.transform(), since it is not part of the machine learning
concept and may make our API less clean and pretty than other systems'. I would l
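The API-shape concern above can be sketched with placeholder types. These interfaces are illustrative only, not the actual FLIP proposal; `Table` and `TableEnvironment` here are stand-ins for Flink's real classes:

```java
// Placeholder types standing in for Flink's Table API classes (illustrative only).
interface Table {}
interface TableEnvironment {}

// Variant A: fit() carries an explicit TableEnvironment, an argument that is
// not a machine-learning concept and clutters the signature.
interface EstimatorWithEnv<M> {
    M fit(TableEnvironment tEnv, Table input);
}

// Variant B: fit() takes only the input Table; the environment, if needed,
// is resolved internally, keeping the user-facing API clean.
interface Estimator<M> {
    M fit(Table input);
}
```

Variant B mirrors the signature shape of libraries like scikit-learn or Spark ML, where the execution context never appears in the user-facing training call.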
Andrey Zagrebin created FLINK-12543:
Summary: Consider naming convention for config options of shuffle
services
Key: FLINK-12543
URL: https://issues.apache.org/jira/browse/FLINK-12543
Project: Flin
Hi,
Why is it necessary to acquire a TableEnvironment from a Table?
I think you even said yourself what we should do: “I believe it's better to
make the api clean and hide the detail of implementation as much as
possible.” In my opinion, this means we can only depend on the generic
Table API mo
Thanks for your answer.
We also thought about this solution, but finally we rejected it.
In an additional use case, where we need to get the unique reporting devices
during a time range, this solution can't help us, because the following
could happen:
East-Site reported devices: 1,2,3
West-Site reported devices:
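If I read the (truncated) example right, the failure mode being described is that per-site distinct counts cannot simply be summed, because the same device may report to both sites. A minimal illustration, where the West-site device list is filled in hypothetically since the original message is cut off:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistinctDevicesAcrossSites {
    public static void main(String[] args) {
        // Reported device IDs; the West list is hypothetical (original is truncated).
        List<Integer> east = List.of(1, 2, 3);
        List<Integer> west = List.of(3, 4);

        // Naive approach: sum per-site distinct counts.
        int summed = east.size() + west.size(); // 5 -- wrong, device 3 counted twice

        // Correct approach: union the raw ID sets first, then count.
        Set<Integer> union = new HashSet<>(east);
        union.addAll(west);

        System.out.println("summed=" + summed + " distinct=" + union.size()); // summed=5 distinct=4
    }
}
```

This is why pre-aggregated per-site counts are not mergeable for distinct-count queries: the raw identifiers (or a mergeable sketch of them) must be retained.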
Hi Hwanju & Chesnay,
Regarding the various things that both of you mentioned, like accounting for
state restoration separately or batch scheduling, we can always acknowledge
some limitations of the initial approach and maybe address them later if we
find them worth the effort.
Generally sp
Hi Thomas and Hwanju,
thanks for starting this discussion. As far as I know, there has not been a
lot of prior discussion or related work with respect to this topic.
Somewhat related is the discussion about job isolation in a session cluster
[1].
Whenever there is a resource leak on Flink's side, w
I think you are right that some connectors will still need some special
metrics due to their peculiarities. I guess that this won't be addressed
with the FLIP but it could be a starting point.
Cheers,
Till
On Fri, May 17, 2019 at 8:26 AM Kailash Dayanand
wrote:
> Hello Till,
>
> Thanks a lot f
It's better not to depend on flink-table-planner indeed. It's currently
needed for three things: registering UDAGGs, determining whether the
TableEnvironment is batch or streaming, and converting a Table to a DataSet
to collect data. Most of these requirements can be fulfilled by
flink-table-api-java-bridge and
flink-table-api-scala-
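For reference, the dependency swap being discussed might look like the following in a Maven POM. The `${flink.version}` property and the Scala version suffix are assumptions on my part; verify the artifact coordinates against the Flink release you build with:

```xml
<!-- Assumed coordinates; check them against your Flink release. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-java-bridge_2.11</artifactId>
  <version>${flink.version}</version>
</dependency>
```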
Hi Piotrek,
Thanks for the insightful feedback; indeed, you caught most of the tricky
parts and concerns.
> 1. Do we currently account state restore as “RUNNING”? If yes, this might be
> incorrect from your perspective.
As Chesnay said, initializeState is called in StreamTask.invoke after
transitioning t
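The attribution issue described above (the message is cut off) can be sketched as a toy ordering. This is a simplification I'm adding, not Flink's actual code: because the transition to RUNNING happens before initializeState() is invoked, any time spent restoring state gets booked against RUNNING.

```java
import java.util.ArrayList;
import java.util.List;

// Toy reconstruction of the ordering described above (not Flink's real classes).
public class TaskLifecycleSketch {
    enum State { DEPLOYING, RUNNING }

    static List<String> run() {
        List<String> log = new ArrayList<>();
        State state = State.DEPLOYING;

        // The task transitions to RUNNING first...
        state = State.RUNNING;
        log.add("transition->" + state);

        // ...and only then restores state, so restore time counts as RUNNING.
        log.add("initializeState@" + state);
        return log;
    }

    public static void main(String[] args) {
        run().forEach(System.out::println);
    }
}
```

Under this ordering, an observer that measures time per state would see long restores inflate RUNNING time, which is the accounting concern raised in the thread.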