Biao Liu created FLINK-11137:
Summary: Unexpected RegistrationTimeoutException of TaskExecutor
Key: FLINK-11137
URL: https://issues.apache.org/jira/browse/FLINK-11137
Project: Flink
Issue Type: B
cailiuyang created FLINK-11138:
--
Summary: RocksDB's WAL seems to be useless in the
rocksdb-state-backend; disabling it would reduce the disk overhead
Key: FLINK-11138
URL: https://issues.apache.org/jira/browse/FLINK-11138
Hi Timo,
Thanks for your feedback! I'm glad to hear that you are already
thinking about the import issues!
1. I commented on the solution you mentioned in FLINK-11067. I have the
same questions as Dian Fu about the design of compatibility in the
Google doc; I look forward to your reply.
2. Ab
Dian Fu created FLINK-11136:
---
Summary: Fix the logic of merge for DISTINCT aggregates
Key: FLINK-11136
URL: https://issues.apache.org/jira/browse/FLINK-11136
Project: Flink
Issue Type: Test
Paul Lin created FLINK-11135:
Summary: Reduce priority of the deprecated Hadoop specific Flink
conf options
Key: FLINK-11135
URL: https://issues.apache.org/jira/browse/FLINK-11135
Project: Flink
Hi Till and Piotrek,
Thanks for the clarification. That resolves quite a bit of confusion. My
understanding of how cache works is the same as what Till described, i.e.
cache() is a hint to Flink, but it is not guaranteed that the cache always
exists, and it might be recomputed from its lineage.
Is this the core
Mark Cho created FLINK-11134:
Summary: Invalid REST API request should not log the full
exception in Flink logs
Key: FLINK-11134
URL: https://issues.apache.org/jira/browse/FLINK-11134
Project: Flink
Hi Timo,
Thanks a lot for sharing the solution so quickly. I have left some comments on
the JIRA page mainly about the backwards compatibility. Looking forward to your
reply.
Thanks,
Dian
> On Dec 11, 2018, at 10:48 PM, Timo Walther wrote:
>
> Hi Dian,
>
> I proposed a solution that should be backward
Mark Cho created FLINK-11133:
Summary: FsCheckpointStorage is unaware about S3 entropy when
creating directories
Key: FLINK-11133
URL: https://issues.apache.org/jira/browse/FLINK-11133
Project: Flink
Yes, thank you so much.
On Wed, Dec 12, 2018 at 9:48 AM Fabian Hueske wrote:
> Hi,
>
> Welcome to the Flink community.
> I gave you contributor permissions for Jira.
>
> Best, Fabian
>
> On Wed, Dec 12, 2018 at 02:28 shen lei <
> shenleifight...@gmail.com>:
>
> > Hi there,
> > I am interested in
Hi,
Welcome to the Flink community.
I gave you contributor permissions for Jira.
Best, Fabian
On Wed, Dec 12, 2018 at 02:28 shen lei <
shenleifight...@gmail.com>:
> Hi there,
> I am interested in Flink, and I want to find some easy Jira issues to
> study Flink. If possible, I hope
Hi there,
I am interested in Flink, and I want to find some easy Jira issues to
study Flink. If possible, I hope to make some contributions to Flink.
My Jira account id: shenlang. Thank you very much.
I like that we are having a general discussion about how to use Python and
Flink together in the future.
The current python support has some shortcomings that were mentioned
before, so we clearly need something better.
Parts of the community have worked together with the Apache Beam project,
which
Hi all,
Thanks for summarizing the discussion, @Shuyi. I think we need to include
the "table update mode" problem, as it might not be easy to change in the
future. Regarding "support row/map/array data type", I don't see a
reason why we should not support them now, as the data types are already
Edmond created FLINK-11132:
--
Summary: Restore From Savepoint on HA Setup
Key: FLINK-11132
URL: https://issues.apache.org/jira/browse/FLINK-11132
Project: Flink
Issue Type: Bug
Components:
Hequn Cheng created FLINK-11131:
---
Summary: Enable unused import checkstyle on flink-core and
flink-runtime tests
Key: FLINK-11131
URL: https://issues.apache.org/jira/browse/FLINK-11131
Project: Flink
Hi Dian,
I proposed a solution that should be backwards compatible and solves our
Maven dependency problems in the corresponding issue.
I'm happy about feedback.
Regards,
Timo
On 11.12.18 at 11:23, fudian.fd wrote:
Hi Timo,
Thanks a lot for your reply. I think the cause of this problem i
Did you take a look at Apache Beam? It already provides a comprehensive
Python SDK and can be used with Flink:
https://beam.apache.org/roadmap/portability/#python-on-flink
We are using it at Lyft for Python streaming pipelines.
Thomas
On Tue, Dec 11, 2018 at 5:54 AM Xianda Ke wrote:
> Hi Till,
shenlei created FLINK-11130:
---
Summary: Migrate flink-table runtime
KeyedCoProcessOperatorWithWatermarkDelay class
Key: FLINK-11130
URL: https://issues.apache.org/jira/browse/FLINK-11130
Project: Flink
Hi Till,
1. As far as I know, most of the users at Alibaba are using SQL. Some
users at Alibaba want to integrate Python libraries with Flink for stream
processing, and Jython is unusable.
2. Python UDFs for SQL:
* declaring a Python UDF based on Alibaba's internal DDL syntax.
* start a Python
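The mechanism sketched above (a Python UDF declared through a DDL statement and then invoked from SQL) can be illustrated with a small toy registry. Everything here is hypothetical: the actual DDL syntax is Alibaba-internal and not shown in the thread, and `register_udf`/`call_udf` are illustrative names, not Flink APIs.

```python
import re

# Toy registry illustrating the idea of declaring a Python UDF through a
# DDL-like statement. The DDL grammar and function names are hypothetical.
UDFS = {}

def register_udf(ddl: str) -> None:
    """Parse a hypothetical 'CREATE FUNCTION name AS module.attr LANGUAGE PYTHON'."""
    m = re.match(r"CREATE FUNCTION (\w+) AS (\S+) LANGUAGE PYTHON", ddl)
    if not m:
        raise ValueError(f"unsupported DDL: {ddl}")
    name, path = m.groups()
    module_name, attr = path.rsplit(".", 1)
    module = __import__(module_name)     # resolve the Python callable
    UDFS[name] = getattr(module, attr)

def call_udf(name: str, *args):
    """Stand-in for the SQL runtime invoking the registered UDF."""
    return UDFS[name](*args)

# Example: register the standard-library function math.sqrt as a UDF.
register_udf("CREATE FUNCTION my_sqrt AS math.sqrt LANGUAGE PYTHON")
print(call_udf("my_sqrt", 9.0))  # -> 3.0
```

In a real system the declared function would of course run in a separate Python process with a serialization boundary, which is what the "start a Python ..." step in the email hints at.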
Hi Qianjin,
I've given you the required permissions.
Cheers,
Till
On Tue, Dec 11, 2018 at 10:54 AM qianjin xu wrote:
> Hi there:
>
> I have added two subtasks for "Migrate flink-table runtime classes", and I
> want to apply for Flink contributor permission. The issue ids are as follows:
>FLINK
Hi Becket,
I was aiming at semantics similar to 1. I actually thought that `cache()`
would tell the system to materialize the intermediate result so that
subsequent queries don't need to reprocess it. This means that the usage of
the cached table in this example
{
val cachedTable = a.cache()
va
Hi Jeff,
what you are proposing is to provide the user with better programmatic job
control. There was actually an effort to achieve this, but it was never
completed [1]. However, there are some improvements in the code base now.
Look, for example, at the NewClusterClient interface, which offers a
Hi Timo,
Thanks a lot for your reply. I think the cause of this problem is that
TableEnvironment.getTableEnvironment() returns the actual TableEnvironment
implementations instead of an interface or an abstract base class. Even when the
porting of FLINK-11067 is done, I'm afraid that the problem may
Hi Becket,
> {
> val cachedTable = a.cache()
> val b = cachedTable.select(...)
> val c = a.select(...)
> }
>
> Semantic 1. b uses cachedTable as user demanded so. c uses original DAG as
> user demanded so. In this case, the optimizer has no chance to optimize.
> Semantic 2. b uses cachedTable
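The difference between the two semantics under discussion can be sketched with a toy table that tracks how often the original lineage is executed. This is a minimal illustrative model, not Flink's Table API; `ToyTable` and its methods are hypothetical names. The sketch implements Semantic 1: `b` reads the materialized result, while `c` re-runs the original DAG.

```python
# Toy model of the cache() debate: cache() materializes the table's rows,
# operations on the returned handle read that materialized copy, and
# operations on the original table re-execute its lineage from scratch.

class ToyTable:
    def __init__(self, compute):
        self._compute = compute          # lineage: how to (re)compute the rows

    def rows(self):
        return self._compute()

    def select(self, predicate):
        # a derived table whose lineage filters this table's rows
        return ToyTable(lambda: [r for r in self.rows() if predicate(r)])

    def cache(self):
        materialized = self.rows()       # eager materialization, for simplicity
        return ToyTable(lambda: list(materialized))

base_runs = {"count": 0}

def scan():
    base_runs["count"] += 1              # counts executions of the source scan
    return [1, 2, 3, 4]

a = ToyTable(scan)
cached = a.cache()                       # runs the lineage once
b = cached.select(lambda r: r > 1)       # reads the materialized rows
c = a.select(lambda r: r > 2)            # re-runs the original DAG

assert b.rows() == [2, 3, 4]
assert base_runs["count"] == 1           # b did not re-run the scan
assert c.rows() == [3, 4]
assert base_runs["count"] == 2           # c re-executed the lineage
```

Under Semantic 2 the optimizer would be free to serve `c` from the cache as well, which is exactly the trade-off the thread is weighing.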
lining created FLINK-11129:
--
Summary: Dashboard for job which contains important information
Key: FLINK-11129
URL: https://issues.apache.org/jira/browse/FLINK-11129
Project: Flink
Issue Type: Improvem
Hi there:
I have added two subtasks for "Migrate flink-table runtime classes", and I
want to apply for Flink contributor permission. The issue ids are as follows:
FLINK-11097
FLINK-11099
jira account id:
x1q1j1
Thanks,
qianjin Xu
Denys Fakhritdinov created FLINK-11128:
--
Summary: Methods of org.apache.flink.table.expressions.Expression
are private[flink]
Key: FLINK-11128
URL: https://issues.apache.org/jira/browse/FLINK-11128
Hi folks,
I am trying to integrate Flink into Apache Zeppelin, which is an interactive
notebook, and I hit several issues that are caused by the Flink client API. So
I'd like to propose the following changes to the Flink client API.
1. Support nonblocking execution. Currently, ExecutionEnvironment#execut
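The blocking-vs-nonblocking distinction can be sketched in a few lines: a blocking execute() returns only when the job finishes, while a nonblocking variant hands back a future immediately. This is a toy model built on a thread pool; `execute_async`, the `JobResult` string, and the timings are hypothetical stand-ins, not the actual Flink client API.

```python
import time
from concurrent.futures import Future, ThreadPoolExecutor

# Toy sketch of the proposal: today ExecutionEnvironment#execute blocks
# until the job finishes; a nonblocking variant would return a handle
# immediately, letting the caller poll, cancel, or join later.

_executor = ThreadPoolExecutor(max_workers=2)

def run_job() -> str:
    time.sleep(0.1)                    # stand-in for a running Flink job
    return "JobResult(netRuntime=100ms)"

def execute_blocking() -> str:
    return run_job()                   # caller is stuck until completion

def execute_async() -> Future:
    return _executor.submit(run_job)   # caller gets a handle immediately

handle = execute_async()
print("submitted; doing other work while the job runs")
print(handle.result())                 # join later, like a blocking execute()
```

An interactive notebook like Zeppelin needs exactly this shape: submit the job, keep the cell responsive, and attach to the result (or cancel) afterwards.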
Ufuk Celebi created FLINK-11127:
---
Summary: Make metrics query service establish connection to
JobManager
Key: FLINK-11127
URL: https://issues.apache.org/jira/browse/FLINK-11127
Project: Flink
Hi Aljoscha,
thanks for your feedback. I also don't like the fact that an API depends
on runtime. I will try to come up with a better design while
implementing a PoC. The general goal should be to make table programs
still runnable in an IDE. So maybe there is a better way of doing it.
Regar