Hi,
I think for 2) you can use a Kafka consumer and push messages to the Vert.x
event bus, which already has a REST implementation (vertx-jersey).
I would say a Vert.x cluster can be used to receive data irrespective of
topic and then publish to a particular Kafka topic. Then consume messages
from kafka by
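Something along these lines might work (rough, untested sketch; the topic
name, event bus address and Vert.x 3 API usage are my assumptions):

    import io.vertx.core.Vertx;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.util.Collections;
    import java.util.Properties;

    public class KafkaToEventBusBridge {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();

            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumption
            props.put("group.id", "eventbus-bridge");           // assumption
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("web-events"));  // assumed topic

            // Poll Kafka and re-publish each record on the Vert.x event bus, where a
            // vertx-jersey resource (or any other verticle) can pick it up.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    vertx.eventBus().publish("kafka." + record.topic(), record.value());
                }
            }
        }
    }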
Hi,
For (1), and perhaps even for (2) where distribution/filtering at scale is
required, I would look at using Apache Storm with Kafka.
For (3), it seems you just need REST services wrapping Kafka
consumers/producers. I would start with the usual suspects like Jersey.
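For the producer side of (3), a JAX-RS resource wrapping a Kafka producer can
be as small as this (sketch only; broker address, serializers and the lack of
error handling are assumptions, not a recommendation):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;
    import java.util.Properties;

    @Path("/topics/{topic}")
    public class KafkaIngestResource {

        // One shared producer for the whole service; KafkaProducer is thread-safe.
        private static final KafkaProducer<String, String> PRODUCER = createProducer();

        private static KafkaProducer<String, String> createProducer() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // assumption
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            return new KafkaProducer<>(props);
        }

        // POST /topics/<topic> with the message in the request body.
        @POST
        @Consumes(MediaType.TEXT_PLAIN)
        public Response publish(@PathParam("topic") String topic, String body) {
            PRODUCER.send(new ProducerRecord<>(topic, body));
            return Response.accepted().build();
        }
    }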
regards
On Tue, Mar 24, 2015 a
Avro seems to be the standard at LinkedIn.
I know JSON and protobuf are used in a few places.
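If you go with Avro, writing a message value typically looks roughly like this
(sketch; the schema and field names are made up for the example):

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    public class AvroExample {
        // Hypothetical schema for a web usage event.
        private static final Schema SCHEMA = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
            + "{\"name\":\"url\",\"type\":\"string\"},"
            + "{\"name\":\"timestamp\",\"type\":\"long\"}]}");

        public static byte[] encodePageView(String url, long timestamp) throws IOException {
            GenericRecord record = new GenericData.Record(SCHEMA);
            record.put("url", url);
            record.put("timestamp", timestamp);

            // Serialize to bytes; this byte[] becomes the Kafka message value.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(SCHEMA).write(record, encoder);
            encoder.flush();
            return out.toByteArray();
        }
    }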
On Tue, Mar 24, 2015 at 11:49 PM, Rendy Bambang Junior <
rendy.b.jun...@gmail.com> wrote:
> Hi,
>
> I'm a new Kafka user. I'm planning to send web usage data from application
> to S3 for EMR and MongoDB
Hi,
I'm a new Kafka user. I'm planning to send web usage data from application
to S3 for EMR and MongoDB using Kafka.
What is the common format for messages written to Kafka in a data ingestion
use case? I did a little homework and found Avro to be one of the options.
Thanks.
Rendy
Thank you for the explanation.
Patch submitted https://issues.apache.org/jira/browse/KAFKA-2048
On Wed, Mar 25, 2015 at 8:29 AM, Jiangjie Qin
wrote:
> It should be another ticket. This is an AbstractFetcherThread issue rather
> than a mirror maker issue.
>
> I kind of think this case you saw was
It should be another ticket. This is an AbstractFetcherThread issue rather
than a mirror maker issue.
I kind of think the case you saw was a special case, as it's not actually
a runtime error but a coding bug. The fetcher thread should not die by design.
So I don't think we have a way to restart fetche
The other question I have is that the consumer client is unaware of
the health status of the underlying fetcher thread. If the fetcher thread dies,
as in the case I encountered, is there a way for the consumer to restart the
fetcher thread or release ownership of partitions so that other consumers
can
Thanks Jiangjie. Can I reuse KAFKA-1997 or should I create a new ticket?
On Wed, Mar 25, 2015 at 7:58 AM, Jiangjie Qin
wrote:
> Hi Xiao,
>
> I think the fix for IllegalStateException is correct.
> Can you also create a ticket and submit a patch?
>
> Thanks.
>
> Jiangjie (Becket) Qin
>
> On 3/24/
Hi Xiao,
I think the fix for IllegalStateException is correct.
Can you also create a ticket and submit a patch?
Thanks.
Jiangjie (Becket) Qin
On 3/24/15, 4:31 PM, "tao xiao" wrote:
>Hi community,
>
>I wanted to know if the solution I supplied can fix the
>IllegalMonitorStateException
>issue.
Hi community,
I wanted to know if the solution I supplied can fix the
IllegalMonitorStateException
issue. Our work is waiting on this and we'd like to proceed ASAP. Sorry to
bother you.
On Mon, Mar 23, 2015 at 4:32 PM, tao xiao wrote:
> I think I worked out the answer to question 1.
> java.lan
Yes, Kafka uses replicas to tolerate node failures. Depending on the level
of durability and availability guarantees you need, you might need
different settings on the broker and producer. The Kafka cluster will
automatically take care of node failures for you.
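As an example of the settings involved (the values are illustrative, not
recommendations): on the topic/broker side something like replication.factor=3,
min.insync.replicas=2 and unclean.leader.election.enable=false, and on the
producer side roughly:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import java.util.Properties;

    public class DurableProducerConfig {
        public static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
            // Wait for all in-sync replicas to acknowledge before a send is considered successful.
            props.put("acks", "all");
            // Retry transient failures such as a leader re-election after a node dies.
            props.put("retries", 3);
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            return new KafkaProducer<>(props);
        }
    }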
Jiangjie (Becket) Qin
On 3/2
Thanks, Clark!
On Tue, Mar 24, 2015 at 1:55 PM, Clark Haskins wrote:
> I just bumped the limit to 300. There will be room, but we will likely run
> out of food so come early!
>
> -Clark
>
> On Tue, Mar 24, 2015 at 1:53 PM, Ed Yakabosky <
> eyakabo...@linkedin.com.invalid> wrote:
>
> >
> >
> > Th
I just bumped the limit to 300. There will be room, but we will likely run
out of food so come early!
-Clark
On Tue, Mar 24, 2015 at 1:53 PM, Ed Yakabosky <
eyakabo...@linkedin.com.invalid> wrote:
>
>
> The 200-person limit is based on # of seats, but there is a big empty
> space at the back of
The 200-person limit is based on # of seats, but there is a big empty
space at the back of the auditorium. At a minimum, there will be standing
room.
On 3/24/15, 1:40 PM, "Patrick Lucas" wrote:
>On Mon, Mar 23, 2015 at 1:23 PM, Clark Haskins wrote:
>>
>> Just a reminder about the Meetup to
On Mon, Mar 23, 2015 at 1:23 PM, Clark Haskins wrote:
>
> Just a reminder about the Meetup tomorrow night @ LinkedIn.
>
It looks like the meetup is at capacity—congratulations!
Before I make the trek down from SF, could you speak to whether you
generally overbook these things, or if being 10th o
If the entire Kafka cluster is down, then a mirror cluster can be a solution
for disaster recovery. If a hardware failure corrupts the disk of a node
in the Kafka cluster, and there are enough replicas configured for each
topic partition, would that be a solution for disaster reco
Hi Guozhang,
Yeah, the main motivation is to not require de-serialization but still allow
the consumer to de-serialize into objects if they really want to. Another
motivation for iterating over the ByteBuffer on the fly is that we can
avoid copies altogether. This has an added implication thoug
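To illustrate the idea (a purely hypothetical sketch, not the actual consumer
API): a record type that hands out the raw ByteBuffer and only materializes an
object when the caller explicitly asks for one, e.g.:

    import org.apache.kafka.common.serialization.Deserializer;
    import java.nio.ByteBuffer;

    // Hypothetical record wrapper: keeps the raw bytes, no eager deserialization, no copy.
    public class RawConsumerRecord {
        private final String topic;
        private final ByteBuffer value;  // slice into the fetched response buffer

        public RawConsumerRecord(String topic, ByteBuffer value) {
            this.topic = topic;
            this.value = value;
        }

        // Callers that just forward bytes (e.g. a mirroring pipeline) never pay to deserialize.
        public ByteBuffer rawValue() {
            return value.duplicate();
        }

        // Callers that want objects opt in explicitly and pay the copy + decode cost here.
        public <T> T value(Deserializer<T> deserializer) {
            ByteBuffer dup = value.duplicate();
            byte[] bytes = new byte[dup.remaining()];
            dup.get(bytes);
            return deserializer.deserialize(topic, bytes);
        }
    }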
Hi guys,
we have three Kafka use cases for which we have written our own PoC
implementations,
but I am wondering whether there might be a fitting open source
solution/tool/framework out there.
Maybe some of you have ideas/pointers? :)
1) Message routing/distribution/filter tool
We
Hi Rajiv,
Just want to clarify: the main motivation for iterating over the byte
buffer directly, instead of iterating over the records, is to not enforce
de-serialization, right? I think that can be done by passing the
deserializer class info into the consumer record instead of putting the
d