Hi guys,
Could anyone kindly grant me contributor permissions? My JIRA ID is
kevin.cyj.
Thanks,
Yingjie
Yingjie Cao created FLINK-11000:
---
Summary: Introduce Resource Blacklist Mechanism
Key: FLINK-11000
URL: https://issues.apache.org/jira/browse/FLINK-11000
Project: Flink
Issue Type: Improvement
ideal-hp created FLINK-10999:
Summary: Adding time multiple times causes a runtime error:
java.sql.Timestamp cannot be cast to java.lang.Long
Key: FLINK-10999
URL: https://issues.apache.org/jira/browse/FLINK-10999
Hi Becket,
Oops, sorry, I didn't notice that you intend to reuse the existing `TableFactory`. I
don't know why, but I assumed that you wanted to provide an alternate way of
writing the data.
Now that I hopefully understand the proposal, maybe we could rename `cache()`
to
void materialize()
or going
Chesnay Schepler created FLINK-10998:
Summary: flink-metrics-ganglia has LGPL dependency
Key: FLINK-10998
URL: https://issues.apache.org/jira/browse/FLINK-10998
Project: Flink
Issue Type:
Hi Piotrek,
For the cache() method itself, yes, it is equivalent to creating a BUILT-IN
materialized view with a lifecycle. That functionality is missing today,
though. I'm not sure I understand your question. Do you mean we already have
the functionality and just need syntactic sugar?
What's more
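To make the "built-in materialized view with a lifecycle" analogy concrete, here is a rough, purely hypothetical sketch in the Scala Table API (cache() is only a proposal, and the session-scoped cleanup described in the comments is an assumption about the design, not settled behavior):

import org.apache.flink.table.api.scala._          // Scala expression DSL ('field syntax)

// Hypothetical sketch only: cache() is a proposed method, not an existing API.
val orders = tEnv.scan("Orders")                   // assume an Orders table is registered
val hot = orders.filter('amount > 100)
hot.cache()   // conceptually: materialize `hot` as a built-in view whose
              // lifecycle is bound to the table environment / session

// Later queries would read the materialized data instead of recomputing `hot`:
val byUser = hot.groupBy('user).select('user, 'amount.sum as 'total)
val byDay  = hot.groupBy('day).select('day, 'amount.count as 'cnt)
// When the session ends, the materialized data would be dropped automatically.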
Chesnay Schepler created FLINK-10997:
Summary: Avro-confluent-registry does not bundle any dependencies
Key: FLINK-10997
URL: https://issues.apache.org/jira/browse/FLINK-10997
Project: Flink
Hi Piotr,
Sorry for the late response.
First of all, I think the Flink runtime can assign a thread to each StreamTask,
similar to the 'Actor' model. The number of threads for a StreamTask should
not be proportional to the number of operators or other factors. This would give
Flink the ability to scale horizontally.
Something like:
val x = tab.window(Tumble ... as 'w)
  .groupBy('w, 'k1, 'k2)
  .flatAgg(tableAgg('a)).as('w, 'k1, 'k2, 'col1, 'col2)
x.insertInto("sinkTable") // fails because the result schema has changed from
                          // ((start, end, rowtime), k1, k2, col1, col2) to
                          // ((start, end, rowtime, newProperty), k1, k2, col1, col2)
Hi,
Interesting idea. I'm trying to understand the problem. Isn't the `cache()`
call equivalent to writing data to a sink and later reading from it, where
the sink has a limited scope/lifetime? And the sink could be implemented
as an in-memory sink or a file sink?
If so, what’s the problem w
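To phrase that question concretely, here is a rough pseudo-sketch (every name below, e.g. registerScopedSink, is made up purely for illustration; none of this is an existing or proposed API):

// Pseudo-sketch of the "sink with a limited lifetime" reading of cache():
// `table` is some intermediate Table, `tEnv` its TableEnvironment.
val tmpName = registerScopedSink()   // made-up helper: an in-memory or file sink
                                     // whose data lives only for this session
table.insertInto(tmpName)            // cache() ~= write the intermediate result once
val cached = tEnv.scan(tmpName)      // ...and later read it back from that sink
// When the scope ends, the sink and its data would be cleaned up.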
Hi Fabian,
I don't fully understand the point you mentioned:
Any query that relies on the composite type with three fields will fail
after adding a fourth field.
I would appreciate it if you could give some detailed examples.
Regards,
Jincheng
On Fri, Nov 23, 2018 at 4:41 PM, Fabian Hueske wrote:
> Hi,
>
> My
Hi everyone,
thanks for the great feedback so far. I have updated the document with the
input I received.
@Fabian: I moved the porting of the flink-table-runtime classes up in the list.
@Xiaowei: Could you elaborate on what "interface only" means to you? Do you
mean a module containing pure Java `inter
Hi Timo,
Thanks for writing this down. +1 from my side :)
> I'm wondering whether we can have a rule in the interim, while Java and
> Scala coexist, that dependencies can only go one way. I found that in the
> current code base there are cases where a Scala class extends a Java one
> and vice versa. Th
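As a purely illustrative sketch of such a one-way rule (the class and method names below are made up, not actual Flink classes), the allowed direction would be Scala code extending Java types, never the reverse:

// Hypothetical Java interface, living in a Java-only module:
//   public interface ExpressionVisitor { void visit(String expr); }

// Scala implementation in the Scala module (allowed: Scala depends on Java):
class PrintingVisitor extends ExpressionVisitor {
  override def visit(expr: String): Unit = println(expr)
}
// A Java class extending a Scala class/trait would be forbidden, because it
// would make the Java module depend on the Scala module.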
aitozi created FLINK-10996:
--
Summary: Enable state TTL in CEP
Key: FLINK-10996
URL: https://issues.apache.org/jira/browse/FLINK-10996
Project: Flink
Issue Type: Improvement
Components: CEP
Hi,
My concerns are about the case when there is no additional select() method,
i.e.,
tab.window(Tumble ... as 'w)
  .groupBy('w, 'k1, 'k2)
  .flatAgg(tableAgg('a)).as('w, 'k1, 'k2, 'col1, 'col2)
In this case, 'w is a composite field consisting of three fields (end,
start, rowtime).
Once we
zhijiang created FLINK-10995:
Summary: Copy intermediate serialization results only once for
broadcast mode
Key: FLINK-10995
URL: https://issues.apache.org/jira/browse/FLINK-10995
Project: Flink