Thanks Chesnay, so if I read it correctly it shouldn't be too long (at
least less time than between regular 1.x releases).

On Mon, Jan 29, 2018 at 4:24 PM, Chesnay Schepler <ches...@apache.org>
wrote:

> As of right now there is no specific date, see also
> https://flink.apache.org/news/2017/11/22/release-1.4-and-1.5-timeline.html
> .
>
>
> On 29.01.2018 13:41, Christophe Jolif wrote:
>
> Thanks a lot. Is there any timeline for 1.5 by the way?
>
> --
> Christophe
>
> On Mon, Jan 29, 2018 at 11:36 AM, Tzu-Li (Gordon) Tai <tzuli...@apache.org
> > wrote:
>
>> Hi Christophe,
>>
>> Thanks a lot for the contribution! I’ll add reviewing the PR to my
>> backlog.
>> I would like to, and will try to, take a look at the PR by the end of this
>> week, after some 1.4.1 blockers I'm still busy with.
>>
>> Cheers,
>> Gordon
>>
>>
>> On 29 January 2018 at 9:25:27 AM, Fabian Hueske (fhue...@gmail.com)
>> wrote:
>>
>> Hi Christophe,
>>
>> great! Thanks for your contribution.
>> I'm quite busy right now, but I agree that we should have support for ES
>> 5.3 and ES 6.x in the next minor release, 1.5.
>>
>> Best,
>> Fabian
>>
>>
>> 2018-01-26 23:09 GMT+01:00 Christophe Jolif <cjo...@gmail.com>:
>>
>>> Ok, I got it "done". I have a PR for ES 5.3 (FLINK-7386) that simply
>>> rebases the original one that was never merged (#4675), and adds ES 6.x
>>> support through the RestHighLevelClient on top (FLINK-8101). This is:
>>> https://github.com/apache/flink/pull/5374. And believe it or not, someone
>>> else submitted a PR for those two as well today! See:
>>> https://github.com/apache/flink/pull/5372. So it looks like there is some
>>> traction to get it done? It would really be good if a committer could look
>>> at those PRs and let us know which one is closer to being merged so we
>>> focus on it instead of duplicating work ;)
>>>
>>> Thanks,
>>> --
>>> Christophe
>>>
>>> On Fri, Jan 26, 2018 at 1:46 PM, Christophe Jolif <cjo...@gmail.com>
>>> wrote:
>>>
>>>> Fabian,
>>>>
>>>> Unfortunately I need more than that :) But this PR is definitely a
>>>> first step.
>>>>
>>>> My real need is Elasticsearch 6.x support through the RestHighLevelClient.
>>>> FYI, Elastic has deprecated the TransportClient that the Flink connector
>>>> leverages, and it will be removed in Elasticsearch 8 (presumably ~1.5 years
>>>> from now at their current release pace). Also, the TransportClient does not
>>>> work with hosted versions of Elasticsearch such as Compose.io. So I think
>>>> it makes a lot of sense to start introducing a sink based on the
>>>> RestHighLevelClient. I'll be looking at creating a PR for that.
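>>>>
>>>> To give an idea of what I mean, here is a rough, self-contained sketch
>>>> against the Elasticsearch 6.x Java high-level REST client (this is not
>>>> code from the PR; the class, host, index and type names are made up).
>>>> The RestHighLevelClient talks to the cluster over HTTP on port 9200
>>>> rather than the binary transport protocol on port 9300, which is also
>>>> why it works with hosted offerings:
>>>>
>>>> import java.util.Collections;
>>>> import org.apache.http.HttpHost;
>>>> import org.elasticsearch.action.index.IndexRequest;
>>>> import org.elasticsearch.client.RestClient;
>>>> import org.elasticsearch.client.RestHighLevelClient;
>>>>
>>>> public class RestHighLevelSample {
>>>>   public static void main(String[] args) throws Exception {
>>>>     // High-level client built on top of the low-level REST client (HTTP, port 9200).
>>>>     RestHighLevelClient client = new RestHighLevelClient(
>>>>         RestClient.builder(new HttpHost("localhost", 9200, "http")));
>>>>
>>>>     // Index a single document into a made-up index/type.
>>>>     IndexRequest request = new IndexRequest("my-index", "my-type", "1")
>>>>         .source(Collections.singletonMap("data", "hello"));
>>>>     client.index(request);
>>>>
>>>>     // Closing the high-level client also closes the underlying REST client.
>>>>     client.close();
>>>>   }
>>>> }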
>>>>
>>>> Thanks,
>>>>
>>>> --
>>>> Christophe
>>>>
>>>> On Fri, Jan 26, 2018 at 10:11 AM, Fabian Hueske <fhue...@gmail.com>
>>>> wrote:
>>>>
>>>>> Great, thank you!
>>>>> Hopefully, this pushes the PR forward.
>>>>>
>>>>> Thanks, Fabian
>>>>>
>>>>> 2018-01-25 22:30 GMT+01:00 Christophe Jolif <cjo...@gmail.com>:
>>>>>
>>>>>> Hi Fabian,
>>>>>>
>>>>>> FYI I rebased the branch and tested it and it worked OK on a sample.
>>>>>>
>>>>>> --
>>>>>> Christophe
>>>>>>
>>>>>> On Mon, Jan 22, 2018 at 2:53 PM, Fabian Hueske <fhue...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Adrian,
>>>>>>>
>>>>>>> thanks for raising this issue again.
>>>>>>> I agree, we should add support for newer ES versions.
>>>>>>> I've added 1.5.0 as target release for FLINK-7386 and bumped the
>>>>>>> priority up.
>>>>>>>
>>>>>>> In the meantime, you can try Flavio's approach (he responded to the
>>>>>>> mail thread you linked) and fork and fix the connector.
>>>>>>> You could also try the PR for FLINK-7386 [1] and comment on the pull
>>>>>>> request whether it works for you or not.
>>>>>>>
>>>>>>> Best, Fabian
>>>>>>>
>>>>>>> [1] https://github.com/apache/flink/pull/4675
>>>>>>>
>>>>>>>
>>>>>>> 2018-01-22 13:54 GMT+01:00 Adrian Vasiliu <vasi...@fr.ibm.com>:
>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> With a local run of Flink 1.4.0, the ElasticsearchSink fails for me
>>>>>>>> against local runs of Elasticsearch 5.6.4 and 5.2.1, while the same
>>>>>>>> code (with adjusted dependency versions) works fine with
>>>>>>>> Elasticsearch 2.x (tried 2.4.6).
>>>>>>>> I get:
>>>>>>>> java.lang.NoSuchMethodError: org.elasticsearch.action.bulk.BulkProcessor.add(Lorg/elasticsearch/action/ActionRequest;)Lorg/elasticsearch/action/bulk/BulkProcessor;
>>>>>>>>
>>>>>>>> (env: Mac OSX 10.13.2, oracle jdk 1.8.0_112)
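>>>>>>>>
>>>>>>>> For reference, the sink wiring is essentially the standard
>>>>>>>> flink-connector-elasticsearch5 setup; a simplified, self-contained
>>>>>>>> sketch (class, cluster, host, index and type names are placeholders,
>>>>>>>> not my actual job) looks roughly like this:
>>>>>>>>
>>>>>>>> import java.net.InetAddress;
>>>>>>>> import java.net.InetSocketAddress;
>>>>>>>> import java.util.*;
>>>>>>>> import org.apache.flink.api.common.functions.RuntimeContext;
>>>>>>>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>>>>>>> import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
>>>>>>>> import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
>>>>>>>> import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;
>>>>>>>> import org.elasticsearch.client.Requests;
>>>>>>>>
>>>>>>>> public class EsSinkSample {
>>>>>>>>   public static void main(String[] args) throws Exception {
>>>>>>>>     StreamExecutionEnvironment env =
>>>>>>>>         StreamExecutionEnvironment.getExecutionEnvironment();
>>>>>>>>
>>>>>>>>     Map<String, String> config = new HashMap<>();
>>>>>>>>     config.put("cluster.name", "elasticsearch"); // must match the cluster name
>>>>>>>>     config.put("bulk.flush.max.actions", "1");   // flush after every record
>>>>>>>>
>>>>>>>>     List<InetSocketAddress> addresses = new ArrayList<>();
>>>>>>>>     addresses.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300));
>>>>>>>>
>>>>>>>>     env.fromElements("a", "b", "c")
>>>>>>>>        .addSink(new ElasticsearchSink<>(config, addresses,
>>>>>>>>            new ElasticsearchSinkFunction<String>() {
>>>>>>>>              @Override
>>>>>>>>              public void process(String element, RuntimeContext ctx,
>>>>>>>>                                  RequestIndexer indexer) {
>>>>>>>>                Map<String, String> json = new HashMap<>();
>>>>>>>>                json.put("data", element);
>>>>>>>>                indexer.add(Requests.indexRequest()
>>>>>>>>                    .index("my-index")   // placeholder index name
>>>>>>>>                    .type("my-type")     // placeholder type name
>>>>>>>>                    .source(json));
>>>>>>>>              }
>>>>>>>>            }));
>>>>>>>>
>>>>>>>>     env.execute("es sink sample");
>>>>>>>>   }
>>>>>>>> }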
>>>>>>>>
>>>>>>>> Now, this looks similar to the issue referred to in
>>>>>>>> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Elasticsearch-Sink-Error-td15246.html
>>>>>>>> which points to
>>>>>>>> "Flink Elasticsearch 5 connector is not compatible with
>>>>>>>> Elasticsearch 5.2+ client"
>>>>>>>> https://issues.apache.org/jira/browse/FLINK-7386
>>>>>>>>
>>>>>>>> Side remark: when trying Elasticsearch 5.6.4 via a Docker container,
>>>>>>>> for some reason the error I get is different: "RuntimeException:
>>>>>>>> Client is not connected to any Elasticsearch nodes!" (while
>>>>>>>> Elasticsearch 2.4.6 works fine via Docker too).
>>>>>>>>
>>>>>>>> Given that FLINK-7386 <https://issues.apache.org/jira/browse/FLINK-7386>
>>>>>>>> has been pending since August 2017, does that mean there is currently
>>>>>>>> still no way to make Flink 1.4.0's sink work with Elasticsearch 5.2+?
>>>>>>>> My use case involves Compose for Elasticsearch 5.6.3, shared by
>>>>>>>> different apps, and I can't really downgrade its Elasticsearch version.
>>>>>>>> Or are there signs it will be fixed in Flink 1.5.0?
>>>>>>>>
>>>>>>>> Any pointers welcome.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Adrian
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>


-- 
Christophe
