No, no trick question, I'm just excited to have maven artifacts.
We'll finally be able to get rid of our forked version (both internally and
in Druid) that I haven't been able to get away from because I apparently
fail at making sbt do things ;).
--Eric
On Thu, Jan 17, 2013 at 11:09 PM, Neha Narkhede wrote:
Is that a trick question, Eric? The answer is yes :-)
Thanks,
Neha
On Thu, Jan 17, 2013 at 3:08 PM, Eric Tschetter wrote:
> Neha,
>
> Will the beta release include artifacts in maven? :)
>
> --Eric
>
>
> On Thu, Jan 17, 2013 at 4:55 PM, Neha Narkhede wrote:
>
> > We think we should be able to share a stable 0.8 beta with the community
> > by the end of this month.
Neha,
Will the beta release include artifacts in maven? :)
--Eric
On Thu, Jan 17, 2013 at 4:55 PM, Neha Narkhede wrote:
> We think we should be able to share a stable 0.8 beta with the community by
> the end of this month. This is also the time we will deploy 0.8 in
> production at LinkedIn, to ensure that we catch and resolve any bugs that
> get introduced at scale before the official open source release of 0.8.
We think we should be able to share a stable 0.8 beta with the community by
the end of this month. This is also the time we will deploy 0.8 in
production at LinkedIn, to ensure that we catch and resolve any bugs that
get introduced at scale before the official open source release of 0.8,
which might
Hi all,
We're starting to play with Kafka for an ambitious project which may be in
production within the next 6 months.
Kafka 0.8 is very promising and answers almost all our needs. The point is
that we really want to go with 0.8, but should we consider that a stable
version is only a question of time?
That may be a feasible alternative approach. You can
call ConsumerConnector.shutdown() to close the consumer cleanly.
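For what it's worth, here is a minimal sketch of the clean-shutdown pattern
(this assumes the 0.8-style Java high-level consumer API; the ZooKeeper
address, group id, and topic name below are made up, and in 0.7 the property
names are "zk.connect" and "groupid" instead):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class CleanShutdownExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // made-up address
        props.put("group.id", "my-consumer-group");       // made-up group

        final ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // shutdown() closes the consumer cleanly: the blocking iterator
        // below stops waiting and hasNext() returns false.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                connector.shutdown();
            }
        });

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("topicA", 1));
        ConsumerIterator<byte[], byte[]> it =
                streams.get("topicA").get(0).iterator();

        while (it.hasNext()) { // blocks until a message arrives or shutdown() runs
            byte[] message = it.next().message();
            // hand the message to whatever does the actual work
        }
    }
}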
Thanks,
Jun
On Thu, Jan 17, 2013 at 6:20 AM, navneet sharma wrote:
> That makes sense.
>
> I tried an alternate approach: I am using the high-level consumer and going
> through the Hadoop HDFS APIs to push data into HDFS.
What version of Kafka are you using?
Thanks,
Jun
On Wed, Jan 16, 2013 at 10:10 PM, Bo Sun wrote:
> I've got a problem like this.
> 1. I used the group name "GourpA" to consume the Kafka topic "topicA".
> Several days later, we could not get new data from the consumer.
> 2. Then I used the group
That makes sense.
I tried an alternate approach: I am using the high-level consumer and going
through the Hadoop HDFS APIs to push data into HDFS.
I am not creating any jobs for that.
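Roughly, the HDFS side could look like this (a sketch using the standard
Hadoop FileSystem API; the class name, path, and newline delimiter are just
illustrative):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsMessageWriter {
    private final FSDataOutputStream out;

    public HdfsMessageWriter(String pathName) throws IOException {
        // Reads core-site.xml etc. from the classpath for the namenode address.
        FileSystem fs = FileSystem.get(new Configuration());
        out = fs.create(new Path(pathName)); // e.g. "/kafka/topicA/part-0"
    }

    // Called from the consumer loop for every message.
    public void write(byte[] message) throws IOException {
        out.write(message);
        out.write('\n'); // naive record delimiter
    }

    // Called once after the consumer iterator exits.
    public void close() throws IOException {
        out.close();
    }
}

The consumer loop would call write() for each message and close() after the
iterator returns.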
The only problem I am seeing here is that the consumer is designed to run
forever, which means I need to find out how to stop it cleanly.