To add, Hive integration depends on a few features that are under active development.
If the completion of those features doesn't leave enough time for us to
integrate, then our work could potentially slip beyond the proposed date.
Just wanted to point out that such a dependency adds uncertainty.
Thanks,
Xuefu
+1 on the idea. This will certainly help promote Flink in Chinese industry. On
a side note, it would be great if anyone on the list could help channel ideas, bug
reports, and feature requests to the dev@ list and/or JIRA so that they gain broader
attention.
Thanks,
Xuefu
---
Hi Stephan,
Thanks for bringing up the discussion. I'm +1 on the merging plan. One
question, though: since the merge will not be completed for some time and there
might be users trying the blink branch, what's the plan for development in
that branch? Personally, I think we may discourage big c
>> On Wed, Jan 2, 2019 at 1:35 PM Eron Wright <eronwri...@gmail.com> wrote:
>>
>> I propose that the community review and merge the PRs that I
>> posted, and then evolve the design through 1.8 and beyond. I
>
- https://github.com/apache/flink/pull/7390
- https://github.com/apache/flink/pull/7392
- https://github.com/apache/flink/pull/7393
Thanks and enjoy 2019!
Eron W
On Sun, Nov 18, 2018 at 3:04 PM Zhang, Xuefu wrote:
Hi Xiaowei,
Thanks for bringing up the question. In the current design, the properties for
meta objects are
Hi Jincheng,
Thanks for bringing this up. It seems to make good sense to me. However, one
concern I have is about backward compatibility. Could you clarify whether
existing user programs will break with the proposed changes?
The answer to that question would largely determine when this can be intr
> strategy and rowtime field selection (i.e. which existing field is used to
> generate the watermark) in one clause, so that we can define multiple
Hi Wenhui,
Thanks for bringing the topics up. Both make sense to me. For higher-order
functions, I'd suggest you come up with a list of things you'd like to add.
Overall, Flink SQL is weak in handling complex types. Ideally we should have a
doc covering the gaps and provide a roadmap for enhanc
> > (columnName [, columnName]* )
> >
> > rowTimeColumn ::=
> >   columnName
> >
> > tableOption ::=
> >   property=value
> >
> > offset ::=
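For illustration only, here is a minimal sketch of how a CREATE TABLE statement combining a rowtime column with a watermark strategy might be submitted through Flink's Java Table API. The table name, fields, and connector option are made up, and the WATERMARK clause follows the syntax Flink SQL later settled on rather than the exact grammar quoted above; the offset term of the grammar corresponds roughly to the INTERVAL expression.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class WatermarkDdlSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());

            // Hypothetical table 'orders': 'ts' is the rowtime column, and the watermark
            // strategy trails the observed rowtime by a fixed 5-second offset.
            tEnv.executeSql(
                    "CREATE TABLE orders (" +
                    "  order_id BIGINT," +
                    "  amount DOUBLE," +
                    "  ts TIMESTAMP(3)," +
                    "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
                    ") WITH (" +
                    // tableOption pairs (property = value) from the quoted grammar
                    "  'connector' = 'datagen'" +
                    ")");
        }
    }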
uvaH39Y/edit?usp=sharing
>
> On Mon, Nov 26, 2018 at 7:01 PM Zhang, Xuefu
> wrote:
>
>> Hi Shuyi,
>>
>> I'm wondering if you folks still have the bandwidth to work on this.
>>
>> We have some dedicated resour
Hi Shuyi,
I'm wondering if you folks still have the bandwidth to work on this.
We have some dedicated resources and would like to move this forward. We can
collaborate.
Thanks,
Xuefu
--
From: wenlong.lwl
Date: 2018-11-05 11:15:35
To:
Hi Timo,
Thanks for the effort and the Google writeup. During our external catalog
rework, we found much confusion between Java and Scala, and this Scala-free
roadmap should greatly mitigate that.
I'm wondering whether we can have a rule in the interim, while Java and Scala
coexist, that depen
LES' statement
- SHOW FUNCTIONS [FROM schema/catalog.schema] - show functions from the
current or a specified schema. Add 'FROM schema' to the existing 'SHOW TABLES'
statement.
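As a usage sketch (not part of the quoted proposal), the two statements could be issued through the Java Table API as follows. The catalog.schema name 'myhive.sales' is made up, and the FROM variants are the extensions under discussion here rather than syntax that existed at the time.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class ShowStatementsSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inBatchMode().build());

            // Existing form: list functions visible in the current catalog/schema.
            tEnv.executeSql("SHOW FUNCTIONS").print();

            // Proposed extension: scope the listing to a specific catalog.schema.
            tEnv.executeSql("SHOW FUNCTIONS FROM myhive.sales").print();

            // The same 'FROM schema' addition applied to the existing SHOW TABLES statement.
            tEnv.executeSql("SHOW TABLES FROM myhive.sales").print();
        }
    }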
Thanks, Bowen
On Wed, Nov 14, 2018 at 10:39 PM Zhang, Xuefu
wrote:
> Than
or "can edit" mode?
Thanks, Bowen
On Mon, Nov 12, 2018 at 9:51 PM Zhang, Xuefu wrote:
Hi Piotr,
I have extracted the API portion of the design, and the Google doc is here.
Please review and provide your feedback.
Thanks,
Xuefu
--
Maybe someone else can also suggest whether we can split it further? Maybe the
changes in the interface in one doc, reading from the Hive metastore in another,
and finally storing our meta information in the Hive metastore?
Piotrek
> On 9 Nov 2018, at 01:44, Zhang, Xuefu wrote:
>
> Hi Piotr,
>
> That seems to be a good idea!
>
> Since the Google doc for the design is currently
ew. The current thread goes to both the dev and user lists.
>>>
>>> This email thread is more like validating the general idea and direction
>>> with the community, and it's been pretty long and crowded so far. Since
>>> everyone is in favor of the idea, we can move fo
Hi there,
As communicated in an email thread, I'm proposing Flink-Hive metastore
integration. I have a draft design doc that I'd like to convert to a FLIP.
Thus, it would be great if anyone could grant me write access to
Confluence. My Confluence ID is xuefu.
@Timo Walther and @Fabian
mentations. What do you think?
Shuyi
On Tue, Oct 30, 2018 at 11:32 AM Zhang, Xuefu wrote:
Hi all,
I have also shared a design doc on Hive metastore integration that is attached
here and also to FLINK-10556[1]. Please kindly review and share your feedback.
Thanks,
Xuefu
[1] https://issues.apache.org/jira/browse/FLINK-10556
might need to work with the Calcite
community, and a recent effort called babel
(https://issues.apache.org/jira/browse/CALCITE-2280) in Calcite might help here.
Thanks
Shuyi
On Wed, Oct 10, 2018 at 8:02 PM Zhang, Xuefu wrote:
Hi Fabian/Vino,
Thank you very much for your encouragement and inquiry. Sorry that I didn't see
Fabian's email until I read Vino's response just now. (Somehow Fabian's went to
to it to move FlinkSQL's adoption and ecosystem even further.
Thanks,
Bowen
On Oct 12, 2018, at 3:37 PM, Jörn Franke wrote:
Thank you, very nice. I fully agree with that.
On Oct 11, 2018, at 7:31 PM, Zhang, Xuefu wrote:
Hi Jörn,
Thanks for your feedback. Yes, I think Hive on Flink makes sense
(xuefuz) Contributor permissions for Flink's
Jira.
You can now assign issues to yourself.
Best, Fabian
On Fri, Oct 12, 2018 at 1:18 AM, Zhang, Xuefu wrote:
Hi there,
Could anyone kindly add me as a contributor to the Flink project?
Thanks,
Xuefu
batch;
This way we could completely get rid of Flink SQL configuration files.
Thanks,
Taher Koitawala
Integrating
On Fri, Oct 12, 2018, 2:35 AM Zhang, Xuefu wrote:
Hi Rong,
Thanks for your feedback. Some of my earlier comments might have addressed some
of your points, so here I'd like t
Endeavour as independent projects (hive
engine, connector) to avoid too tight coupling with Flink. Maybe in a more
distant future, if the Hive integration is heavily demanded, one could then
integrate it more tightly if needed.
What is meant by 11?
On Oct 11, 2018, at 5:01 AM, Zhang, Xuefu wrote:
Hi Fabian/Vino,
Thank you very much for your encouragement and inquiry. Sorry that I d
Can you go into details of what you are proposing? I can think of a couple of ways
to improve Flink in that regard:
* Support for Hive UDFs
* Support for Hive metadata catalog
* Support for HiveQL syntax
* ???
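Regarding the second bullet (a Hive metadata catalog), a minimal sketch of what this could look like from a user's perspective is below, written against the HiveCatalog API that later shipped with Flink. The catalog name, default database, and hive-site.xml directory are placeholders, and the exact constructor arguments vary between Flink versions.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.catalog.hive.HiveCatalog;

    public class HiveCatalogSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().build());

            // Placeholder values: the catalog name, default Hive database, and the
            // directory containing hive-site.xml depend on the local Hive installation.
            HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive/conf");

            // Register the catalog and make it current so that tables stored in the
            // Hive Metastore become directly visible to Flink SQL.
            tEnv.registerCatalog("myhive", hive);
            tEnv.useCatalog("myhive");

            tEnv.executeSql("SHOW TABLES").print();
        }
    }

Once such a catalog is the current one, existing Hive tables can be queried without re-declaring their schemas in Flink, which is in line with the metastore integration being discussed in this thread.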
Best, Fabian
On Tue, Oct 9, 2018 at 7:22 PM, Zhang, Xuefu wrote:
Hi all,
Along wit
Hi all,
Along with the community's effort, inside Alibaba we have explored Flink's
potential as an execution engine not just for stream processing but also for
batch processing. We are encouraged by our findings and have initiated our
effort to make Flink's SQL capabilities full-fledged. When c