he code to
> start. You'll thank me when you hit your next offset-related issue.
>
> Philip
>
> On Nov 18, 2013, at 11:10 PM, Oleg Ruchovets wrote:
>
> > Hi Philip.
> >
> > It looks like this is our case:
> > https://github.com/nathanmarz/storm-contrib/pu
> and then ask Kafka for the earliest -- or latest offset (take your pick) --
> and then re-issue the fetch using the returned offset.
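The recovery Philip describes can be sketched in plain Java. This is only an illustrative, self-contained sketch: the earliest/latest values are assumed to come from a SimpleConsumer offset request, and the helper names here are made up, not the real 0.7 API.

```java
// Sketch of the offset-recovery step: on OffsetOutOfRange, ask the broker
// for the valid range and pick a new offset before re-issuing the fetch.
// The earliest/latest inputs stand in for a SimpleConsumer offset request.
public class OffsetRecovery {
    // Clamp a requested offset into the broker's valid range.
    static long recoverOffset(long requested, long earliest, long latest,
                              boolean preferEarliest) {
        if (requested >= earliest && requested <= latest) {
            return requested; // offset still valid, no recovery needed
        }
        // Out of range: take your pick of earliest (replay) or latest (skip ahead)
        return preferEarliest ? earliest : latest;
    }

    public static void main(String[] args) {
        // Simulate an OffsetOutOfRange: we stored offset 100 but the log
        // was cleaned and now starts at 500.
        System.out.println(recoverOffset(100, 500, 900, true));  // 500
        System.out.println(recoverOffset(100, 500, 900, false)); // 900
        System.out.println(recoverOffset(600, 500, 900, true));  // 600
    }
}
```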
>
> Are you using a separate path in ZK for the second topology? Is it of a
> completely different nature than the first?
>
> Philip
>
>
>
We are working with Kafka (0.7.2) + Storm.
1) We deployed a first topology subscribed to a Kafka topic; it has already
been working fine for a couple of weeks.
2) Yesterday we deployed a second topology subscribed to the same Kafka
topic, but the second topology immediately failed with an exception:
*What c
Hello.
We are planning to go to production in a couple of months.
Can the community share best practices?
What should be configured? What do we need to pay special attention to?
We are using the Kafka 0.7.2 release.
Thanks
Oleg.
Yes, I am definitely interested in such capabilities. We are also using
kafka 0.7.
Guys, I already asked, but nobody answered: what is the community using to
consume from Kafka to HDFS?
My assumption was that if Camus supports only Avro it will not be suitable
for everyone, but people transfer from kafka to ha
I am also interested in Hadoop+Kafka capabilities. I am using Kafka 0.7,
so my question is: what is the best way to consume content from Kafka and
write it to HDFS? At this point I only need the consuming functionality.
thanks
Oleg.
On Wed, Aug 7, 2013 at 7:33 PM, wrote:
> Hi all,
>
> Over at
t it only checkpoints the consumer
> offset after all messages before that offset have been processed
> successfully. Could you confirm this from the Storm guys?
>
> Thanks,
>
> Jun
>
>
> On Thu, Aug 1, 2013 at 4:31 AM, Oleg Ruchovets
> wrote:
>
> > I try to resol
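The rule Jun asks about (only checkpoint the consumer offset once every message before it has been processed successfully) can be sketched in plain Java. The class and method names below are illustrative, not storm-kafka's actual API.

```java
import java.util.TreeSet;

// Sketch of ack-based offset checkpointing: the stored offset only advances
// across a contiguous run of acknowledged messages, so a crash never skips
// an unprocessed message. Names are illustrative, not storm-kafka's API.
public class OffsetCheckpointer {
    private long committed;                                   // next offset safe to checkpoint
    private final TreeSet<Long> acked = new TreeSet<Long>();  // acks ahead of committed

    public OffsetCheckpointer(long start) { this.committed = start; }

    public void ack(long offset) {
        acked.add(offset);
        // Advance the checkpoint only while there are no gaps
        while (acked.contains(committed)) {
            acked.remove(committed);
            committed++;
        }
    }

    public long committedOffset() { return committed; }

    public static void main(String[] args) {
        OffsetCheckpointer cp = new OffsetCheckpointer(0);
        cp.ack(1); cp.ack(2);                     // offset 0 not yet processed
        System.out.println(cp.committedOffset()); // 0
        cp.ack(0);                                // gap filled
        System.out.println(cp.committedOffset()); // 3
    }
}
```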
It is possible to consume the same message more than once with the same
> consumer. However, WHAT you actually do with the message (such as idempotent
> writes) is the trickier part.
>
> Regards
> Milind
>
>
>
> On Wed, Jul 31, 2013 at 8:22 AM, Oleg Ruchovets wrote:
>
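One common way to get the idempotent writes Milind mentions is to key every message by a unique id and drop ids that were already written, so redelivery is harmless. A minimal self-contained sketch; the message ids and the in-memory store are illustrative assumptions, a real sink would deduplicate in its own storage.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of an idempotent write path: duplicates are detected by message id
// and skipped, so consuming the same Kafka message twice does no harm.
public class IdempotentWriter {
    private final Set<String> seenIds = new HashSet<String>();
    private final List<String> store = new ArrayList<String>();

    // Returns true if the message was written, false if it was a duplicate.
    public boolean write(String messageId, String payload) {
        if (!seenIds.add(messageId)) {
            return false; // already written: redelivery is a no-op
        }
        store.add(payload);
        return true;
    }

    public int size() { return store.size(); }

    public static void main(String[] args) {
        IdempotentWriter w = new IdempotentWriter();
        w.write("msg-1", "hello");
        w.write("msg-1", "hello"); // redelivered by Kafka, dropped
        w.write("msg-2", "world");
        System.out.println(w.size()); // 2
    }
}
```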
Hi,
I just don't know which mailing list is the correct one for this question (Storm
or Kafka)? Sorry for the cross-post.
I just read the documentation describing guaranteed message
processing with Storm:
https://github.com/nathanmarz/storm/wiki/Guaranteeing-message-processing.
The question actua
occurred and that is
exactly how it happens in my situation.
Thanks
Oleg.
On Tue, Jul 23, 2013 at 12:54 PM, Oleg Ruchovets wrote:
> Ok, got it, so the problem actually came from ZooKeeper. Can someone
> point me to how I can clean up ZooKeeper to get rid of these messages?
>
> T
3 at 9:36 AM, Neha Narkhede wrote:
>
> > If the console producer/consumer works fine, it would be safe to assume
> > the broker is up.
> >
> > Thanks,
> > Neha
> >
> >
> > On Tue, Jul 23, 2013 at 8:44 AM, Oleg Ruchovets wrote:
> >
>
, Jul 23, 2013 at 11:44 AM, Oleg Ruchovets wrote:
> Hi Jun ,
>
>I ran these tests:
> *bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic
> test*
>
> This is a message
> This is another message
>
> *> bin/kafka-console-consu
otherwise?
>
> Thanks,
>
> Jun
>
>
> On Tue, Jul 23, 2013 at 7:46 AM, Oleg Ruchovets wrote:
>
> > Hi All.
> >
> >I have on one machine kafka installation. I needed to move it to
> another
> > machine and I copied a kafka folder to that machine
Hi All.
I have a Kafka installation on one machine. I needed to move it to another
machine, so I copied the Kafka folder there.
When I started Kafka on the new machine I got this output:
[2013-07-23 17:03:29,858] INFO Got user-level KeeperException when
processing sessionid:0x1400bd6e29600
Hi,
I need to produce/consume JSON to/from Kafka.
Can you please point me to an example of how to do it?
I am using Java and Kafka 0.7.2.
Thanks
Oleg.
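Kafka itself has no JSON type: a message is opaque bytes, so the usual approach on 0.7 in Java is to serialize JSON to a String (with a library such as Jackson or json-simple) and send it with kafka.serializer.StringEncoder. A self-contained sketch of the payload side; buildJson is a hand-rolled stand-in for a real JSON library, and the commented send line assumes the 0.7 ProducerData API.

```java
// JSON over Kafka 0.7 is just a String on the wire: encode with a JSON
// library, send via kafka.serializer.StringEncoder, parse on the consumer
// side. buildJson below is a hand-rolled stand-in for a real JSON library
// (Jackson, json-simple) so this sketch stays self-contained.
public class JsonMessageDemo {
    // Build a flat one-field JSON object; a real app would use a JSON library.
    static String buildJson(String key, String value) {
        return "{\"" + key + "\":\"" + value + "\"}";
    }

    public static void main(String[] args) {
        String payload = buildJson("event", "click");
        // producer.send(new ProducerData<String, String>("my-topic", payload));
        System.out.println(payload); // {"event":"click"}
    }
}
```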
Can you please share the Kafka version you've used for the tests?
On Thu, May 23, 2013 at 8:57 PM, Jason Weiss wrote:
> Folks,
>
> As I posted to the group here yesterday, my 3 server test in AWS produced
> an average of 273,132 events per second with a fixed-size 2K message
> payload. (Please see
Has Kafka 0.8 become an official beta?
On Mon, Apr 29, 2013 at 8:52 AM, Jun Rao wrote:
> We have updated the 0.8 documentation in our website (
> http://kafka.apache.org/index.html). Please review the docs. We have the
> following blockers for the 0.8 beta release:
>
> additional docs:
> * exampl
Sounds good :-)
On Sun, Apr 28, 2013 at 8:24 PM, Neha Narkhede wrote:
> You can ask that question on the camus mailing list.
>
> Thanks,
> Neha
> On Apr 28, 2013 10:14 AM, "Oleg Ruchovets" wrote:
>
> > Thank you Neha.
> > Is Camus stable eno
r 28, 2013 4:30 AM, "Oleg Ruchovets" wrote:
>
> > Hi ,
> >I am looking for simple way to transfer from kafka to hadoop.
> >
> >I found such solutions in Github:
> > https://github.com/linkedin/camus
> > https://github.com/miniway/
Hi,
I am looking for a simple way to transfer data from Kafka to Hadoop.
I found these solutions on GitHub:
https://github.com/linkedin/camus
https://github.com/miniway/kafka-hadoop-consumer
https://github.com/kafka-dev/kafka/tree/master/contrib/hadoop-consumer
Question:
What
> Thanks,
> >
> > Jun
> >
> >
> > On Fri, Apr 26, 2013 at 8:19 AM, Oleg Ruchovets wrote:
> >
> > > Hi.
> > >I have simple kafka producer/consumer application. I have one
> producer
> > > and 2 consumers. consume
roups.
>
>
> On Fri, Apr 26, 2013 at 11:28 AM, Jun Rao wrote:
>
> > Have you looked at #4 in http://kafka.apache.org/faq.html ?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Fri, Apr 26, 2013 at 8:19 AM, Oleg Ruchovets wrote:
>
no synchronization required between those
> two consumers.
>
> In other words, what you want to do is fine. Please read the Kafka
> design doc if you have not done so:
>
> http://kafka.apache.org/design.html
>
> Philip
>
> On Sun, Apr 21, 2013 at 9:16 AM, Oleg Ruc
'Toole wrote:
> On Sun, Apr 21, 2013 at 8:53 AM, Oleg Ruchovets
> wrote:
> > Hi Philip.
> >Does that mean storing the same data twice, each time to a different
> > partition? I tried to save the data only once. Does using two partitions mean
> > storing the data twice?
Read the design doc on the Kafka site.
>
> The short answer is to use two partitions for your topic.
>
> Philip
>
> On Apr 21, 2013, at 12:37 AM, Oleg Ruchovets wrote:
>
> > Hi,
> > I have one producer for kafka and have 2 consumers.
> > I want to consume pr
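To make Philip's point concrete: the producer writes each message once, to exactly one of the topic's partitions, so two partitions never means storing the data twice. A pure-Java sketch of that assignment; the modulo partitioner here is only illustrative of partitioner.class-style key hashing, not Kafka 0.7's default behavior.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: each message lands in exactly one of the topic's partitions,
// so adding partitions spreads the data rather than duplicating it.
public class PartitionDemo {
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 2;
        List<String> p0 = new ArrayList<String>();
        List<String> p1 = new ArrayList<String>();
        for (String m : new String[] {"a", "b", "c", "d"}) {
            // one write per message, to exactly one partition
            if (partitionFor(m, numPartitions) == 0) p0.add(m); else p1.add(m);
        }
        // every message is stored exactly once across the two partitions
        System.out.println(p0.size() + p1.size()); // 4
    }
}
```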
Hi,
I have one producer for Kafka and two consumers.
I want to consume the produced events into HDFS and Storm. The copy to HDFS
will run every hour, but Storm will consume every 10 seconds.
Question: is this supported by Kafka? Where can I read how to organize one
producer with two consumers?
Thanks
Oleg.
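For the record, the answer given elsewhere in this thread is consumer groups: each group gets its own copy of the stream and tracks its own offsets in ZooKeeper, so an hourly HDFS loader and a 10-second Storm consumer can run independently with no coordination. A sketch of the two consumer configs; the group names are made up, and the property names assume the 0.7 high-level consumer.

```properties
# Consumer A: hourly batch copy to HDFS
zk.connect=localhost:2181
groupid=hdfs-loader

# Consumer B: near-real-time feed into Storm
zk.connect=localhost:2181
groupid=storm-feed
```

Because the two groupid values differ, each consumer receives every message on the topic; consumers sharing a groupid would instead split the partitions between them.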
maintain or write my own client: is there
documentation which explains how to write clients for Kafka?
Thanks
Oleg.
On Thu, Apr 18, 2013 at 7:41 PM, Jun Rao wrote:
> Can you use the java client on Windows?
>
> Thanks,
>
> Jun
>
>
> On Thu, Apr 18, 2013 at 12:11 AM
Hi.
I am working on a project:
The project has a producer which runs on Windows (C#). The consumer is Java
on Linux.
Question:
What is the way to write to Kafka from Windows? As I understand it, the C#
client is legacy. So what is the way to write to Kafka from Windows? Can I
use the C/C++ client from Wind
on
disc?
What is the serialization policy?
In case I want to delete/remove a topic, how can I do it using the API?
Thanks
Oleg.
On Mon, Apr 8, 2013 at 6:38 AM, Swapnil Ghike wrote:
> Was a kafka broker running when your producer got this exception?
>
> Thanks,
> Swapnil
>
> On 4/7
I tried to execute Kafka 0.7.2 and got this exception:
log4j:WARN No appenders could be found for logger
(org.I0Itec.zkclient.ZkConnection).
log4j:WARN Please initialize the log4j system properly.
Exception in thread "main" java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.
I am executing simple code like this:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

public class FirstKafkaTester {
    public Producer<String, String> initProducer() {
        Properties props = new Properties();
        // 0.7 takes either zk.connect or broker.list, not both;
        // broker.list entries must include the broker id: id:host:port
        props.put("broker.list", "0:localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        return new Producer<String, String>(new ProducerConfig(props));
    }
}
ndencies.
> Website and docs will be updated as part of that.
>
> https://issues.apache.org/jira/browse/KAFKA-833
>
> Thanks,
> Neha
>
> On Thu, Apr 4, 2013 at 7:45 AM, Oleg Ruchovets
> wrote:
> > Thanks Jun , When does 0.8 have to be released? Also I didn't find AP
Thanks Jun. When is 0.8 due to be released? Also, I couldn't find the API
docs for the 0.8 version. Is there a link to them?
Thanks
Oleg.
On Thu, Apr 4, 2013 at 5:26 PM, Jun Rao wrote:
> Not yet, but will be for the 0.8 release.
>
> Thanks,
>
> Jun
>
>
> On Thu, Apr 4, 20
Hi, is there a public repository for the Kafka distribution?
Thanks
Oleg.
be super hard to
> write a basic one (that doesn't require zookeeper for example). C# has a
> lot of great features that would make it great for a solid kafka client.
>
>
> On Wed, Apr 3, 2013 at 3:58 PM, Oleg Ruchovets
> wrote:
>
> > Yes , I agree. So there is
so you can then use the
> official clients?
>
> -David
>
>
> On 4/3/13 3:22 PM, Oleg Ruchovets wrote:
>
>> I see. Is it a good idea to use the Node.js client? C# would produce messages
>> to Node.js, and Node.js would push them to Kafka?
>> Is there pote
ting one and a jira as well: KAFKA-639
>
> Joel
>
> On Wed, Apr 3, 2013 at 9:46 AM, Oleg Ruchovets
> wrote:
>
> > Hi ,
> >Is there a stable C# client for Kafka? Is there a rest API for Kafka?
> >
> > Thanks
> > Oleg.
> >
>
Hi,
Is there a stable C# client for Kafka? Is there a REST API for Kafka?
Thanks
Oleg.
>
> You may also want to checkout Kafka->Hadoop pipeline at LinkedIn:
> https://github.com/linkedin/camus
>
> Thanks,
>
> Jun
>
> On Wed, Apr 3, 2013 at 3:40 AM, Oleg Ruchovets
> wrote:
>
> > Hi ,
> >I want to install it to the existing hadoop
Hi,
I want to install it into an existing Hadoop environment.
I want to read data from Kafka and put it into HDFS.
I use a Hortonworks distribution.
Questions:
I didn't find any guide on how to install Kafka (I read the quick start at
http://kafka.apache.org/quickstart.html). Do I only need to