Hi Madhu
Would you be able to share your use case for Elasticsearch with Flink here?
Thanks
Deepak
On Sat, Dec 5, 2015 at 1:25 AM, Madhukar Thota
wrote:
> Sure. I can submit the pull request.
>
> On Fri, Dec 4, 2015 at 12:37 PM, Maximilian Michels
> wrote:
>
>> Hi Madhu,
>>
>> Great. Do you want to contribute it back via a GitHub pull request?
I think we need to find a solution for this problem soon.
Another user is most likely affected:
http://stackoverflow.com/q/34090808/568695
I've filed a JIRA for the problem:
https://issues.apache.org/jira/browse/FLINK-3121
On Mon, Nov 30, 2015 at 5:58 PM, Aljoscha Krettek
wrote:
> Maybe. In th
Wouldn't it be better to have both connectors for ES? One for 1.x and another
for 2.x?
On 4 Dec 2015 20:55, "Madhukar Thota" wrote:
> Sure. I can submit the pull request.
>
> On Fri, Dec 4, 2015 at 12:37 PM, Maximilian Michels
> wrote:
>
>> Hi Madhu,
>>
>> Great. Do you want to contribute it back via a GitHub pull request?
Sure. I can submit the pull request.
On Fri, Dec 4, 2015 at 12:37 PM, Maximilian Michels wrote:
> Hi Madhu,
>
> Great. Do you want to contribute it back via a GitHub pull request? If
> not, that's also fine. We will try to look into the 2.0 connector next
> week.
>
> Best,
> Max
>
> On Fri, Dec 4, 2
Hi Max,
I forgot to include the flink-storm-examples dependency in the application to
use BoltFileSink.
However, the file created by BoltFileSink is empty. Is there anything else I
need to do to write the output to a file using BoltFileSink?
I am using the same code that you mentioned,
bui
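For reference, below is a minimal sketch of how BoltFileSink is typically wired into a topology that runs on Flink's Storm compatibility layer. The package names, the single-argument BoltFileSink constructor, and the FlinkTopologyBuilder/FlinkLocalCluster usage are assumptions based on the flink-storm and flink-storm-examples modules of that time, and sourceSpout/countBolt stand in for your own components:

import backtype.storm.Config;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.IRichSpout;

import org.apache.flink.storm.api.FlinkLocalCluster;
import org.apache.flink.storm.api.FlinkTopologyBuilder;
import org.apache.flink.storm.util.BoltFileSink;

public class BoltFileSinkSketch {

    // Wires an existing spout and count bolt to a BoltFileSink so the counts land in a file.
    public static void run(IRichSpout sourceSpout, IRichBolt countBolt) throws Exception {
        FlinkTopologyBuilder builder = new FlinkTopologyBuilder();
        builder.setSpout("source", sourceSpout);
        builder.setBolt("count", countBolt).shuffleGrouping("source");
        // If no sink bolt subscribes to "count", nothing is ever written to the file.
        builder.setBolt("sink", new BoltFileSink("/tmp/wordcount-output.txt"))
               .shuffleGrouping("count");

        // Run the topology locally through the compatibility layer.
        FlinkLocalCluster cluster = FlinkLocalCluster.getLocalCluster();
        cluster.submitTopology("word-count", new Config(), builder.createTopology());
        cluster.shutdown();
    }
}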
Hi Welly,
Those two resources are really great. Thanks a lot for sharing.
On Fri, Dec 4, 2015 at 7:48 PM, Welly Tambunan wrote:
> Hi Madhu,
>
> You can also check this page for the details on internals
>
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Internals
> http://www.slideshare.net/KostasTzoumas/flink-internals
Hi All
Sorry for spamming your inbox.
I am really keen to work on a big data project full time (preferably remote
from India); if not, I am open to volunteering as well.
Please let me know if any such opportunity is available.
--
Thanks
Deepak
Thanks for the comments, everyone. For my part, I'm most interested in using
Hadoop's OutputFormats for writing out data at the end of a streaming job.
I also agree that while these "convenience methods" make for good example
code in slide decks, they're often not helpful for "real" applications. T
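To make that concrete, here is a rough sketch (not from this thread) of wrapping a plain Hadoop mapreduce TextOutputFormat in Flink's HadoopOutputFormat and handing it to writeUsingOutputFormat on a DataStream. The path and toy data are made up, it assumes the Hadoop dependencies are on the classpath, and whether the format is properly finalized at the end of a streaming job is exactly the open question here:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class StreamToHadoopOutputFormat {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy stream of (word, 1) pairs using Hadoop Writable types.
        DataStream<Tuple2<Text, LongWritable>> words = env
            .fromElements("to", "be", "or", "not", "to", "be")
            .map(new MapFunction<String, Tuple2<Text, LongWritable>>() {
                @Override
                public Tuple2<Text, LongWritable> map(String word) {
                    return new Tuple2<>(new Text(word), new LongWritable(1L));
                }
            });

        // Wrap a plain Hadoop mapreduce TextOutputFormat and use it as a streaming sink.
        Job job = Job.getInstance();
        TextOutputFormat.setOutputPath(job, new Path("/tmp/stream-hadoop-output"));
        words.writeUsingOutputFormat(
            new HadoopOutputFormat<Text, LongWritable>(new TextOutputFormat<Text, LongWritable>(), job));

        env.execute("DataStream to Hadoop OutputFormat");
    }
}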
Hi Madhu,
Great. Do you want to contribute it back via a GitHub pull request? If
not, that's also fine. We will try to look into the 2.0 connector next
week.
Best,
Max
On Fri, Dec 4, 2015 at 4:16 PM, Madhukar Thota wrote:
> i have created working connector for Elasticsearch 2.0 based on
> elasticse
Hi Max,
Yeah, I did route the "count" bolt output to a file and I see the output.
I can see the Storm and Flink output matching.
However, I am not able to use the BoltFileSink class in the 1.0-SNAPSHOT
which I built. I think it's better to wait for a day for the Maven sync to
happen so that I can
I have created a working connector for Elasticsearch 2.0 based on the
elasticsearch-flink connector. I am using it right now, but I want an official
connector from Flink.
ElasticsearchSink.java
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.configuration.Configuration;
impor
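Since the listing got cut off above, here is a rough sketch of the general shape such a hand-rolled 2.x sink can take and how it is attached with addSink. The index, type, cluster name, host, and port are placeholders, and the Elasticsearch 2.x client calls are written from memory, so treat it as an outline rather than the actual connector code:

import java.net.InetAddress;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class Elasticsearch2SinkSketch extends RichSinkFunction<String> {

    private transient TransportClient client;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Elasticsearch 2.x style transport client; cluster name and address are placeholders.
        Settings settings = Settings.settingsBuilder().put("cluster.name", "my-cluster").build();
        client = TransportClient.builder().settings(settings).build()
            .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
    }

    @Override
    public void invoke(String json) throws Exception {
        // Index each record as a JSON document.
        client.prepareIndex("my-index", "my-type").setSource(json).get();
    }

    @Override
    public void close() throws Exception {
        if (client != null) {
            client.close();
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("{\"message\":\"hello\"}", "{\"message\":\"flink\"}")
           .addSink(new Elasticsearch2SinkSketch());
        env.execute("Elasticsearch 2.x sink sketch");
    }
}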
Hi Madhu,
Not yet. The API has changed slightly. We'll add one very soon. In the
meantime I've created an issue to keep track of the status:
https://issues.apache.org/jira/browse/FLINK-3115
Thanks,
Max
On Thu, Dec 3, 2015 at 10:50 PM, Madhukar Thota
wrote:
> is the current elasticsearch-flink connector compatible with Elasticsearch 2.x?
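For comparison, this is roughly how the existing 1.x connector is used (following the 0.10 documentation from memory), which is the API that needs adjusting for 2.x; the cluster name, index, and field names are placeholders:

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSink;
import org.apache.flink.streaming.connectors.elasticsearch.IndexRequestBuilder;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class Elasticsearch1xSinkUsage {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Map<String, String> config = new HashMap<>();
        config.put("cluster.name", "my-cluster");
        // Flush after every element; handy while testing.
        config.put("bulk.flush.max.actions", "1");

        env.fromElements("hello", "flink", "elasticsearch")
           .addSink(new ElasticsearchSink<String>(config, new IndexRequestBuilder<String>() {
               @Override
               public IndexRequest createIndexRequest(String element, RuntimeContext ctx) {
                   Map<String, Object> json = new HashMap<>();
                   json.put("data", element);
                   return Requests.indexRequest().index("my-index").type("my-type").source(json);
               }
           }));

        env.execute("Elasticsearch 1.x sink usage");
    }
}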
Hi Madhu,
You can also check this page for the details on internals
https://cwiki.apache.org/confluence/display/FLINK/Flink+Internals
http://www.slideshare.net/KostasTzoumas/flink-internals
Cheers
On Fri, Dec 4, 2015 at 10:14 AM, madhu phatak wrote:
> Hi,
> Thanks a lot for the resources.
> O
Hi Kien Truong,
This behavior is intentional. The "parallelism.default" config entry
refers to the default parallelism used when submitting Flink
programs. Since Flink programs are assembled on the client (your
machine), it is set to 1 there. When you submit from the cluster, it
picks up the corr
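In case it helps, one way to avoid the surprise is to set the parallelism explicitly in the program, so the job does not depend on which machine's "parallelism.default" the submitting client reads; the numbers below are just examples:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitParallelism {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Job-wide parallelism, independent of the client-side "parallelism.default".
        env.setParallelism(4);

        env.fromElements(1, 2, 3, 4, 5)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) {
                   return value * 2;
               }
           })
           // Individual operators can override the job-wide setting.
           .setParallelism(2)
           .print();

        env.execute("Explicit parallelism");
    }
}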
Hi Naveen,
Were you using Maven before? The syncing of changes in the master
always takes a while for Maven. The documentation happened to be
updated before Maven synchronized. Building and installing manually
(what you did) solves the problem.
Strangely, when I run your code on my machine with t
Thanks Welly!
We have already corrected that in the snapshot documentation at
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#transformations
I also fixed it for the 0.10 documentation.
Best,
Max
On Fri, Dec 4, 2015 at 6:24 AM, Welly Tambunan wrote:
>
> H
Hi Maximilian,
I just downloaded the version from your Google Drive and used that to run
my test topology that accesses HBase.
I deliberately started it twice to double the chance to run into this
situation.
I'll keep you posted.
Niels
On Thu, Dec 3, 2015 at 11:44 AM, Maximilian Michels wrote