http://www.philipotoole.com
On Saturday, February 28, 2015 9:33 PM, Guozhang Wang wrote:
Is this what you are looking for?
http://kafka.apache.org/07/documentation.html
On Fri, Feb 27, 2015 at 7:02 PM, Philip O'Toole <
philip.oto...@yahoo.com.invalid> wrote:
There used to be a very lucid page available describing Kafka 0.7, its design,
and the rationale behind certain decisions. I last saw it about 18 months ago,
but I can't find it now. Is it still available? I can find the 0.8 version;
it's up there on the site.
Any help? Any links?
Philip
--
To answer your question, I was thinking ephemerals with replication, yes.
With a reservation, it's pretty easy to get e.g. two i2.xlarge for an
amortized cost below a single m2.2xlarge with the same amount of EBS
storage and provisioned IOPs.
On Mon, Sep 29, 2014 at 9:40 P
If only Kafka had rack awareness, you could run 1 cluster and set up the
replicas in different AZs.
https://issues.apache.org/jira/browse/KAFKA-1215
As for your question about ephemeral versus EBS, I presume you are proposing to
use ephemeral *with* replicas, right?
Philip
---
Yes, IMHO, that is going to be way too many topics. Use a smaller number of
topics, and embed attributes like "tag" and "user" in the messages written
to Kafka.
Philip
-
http://www.philipotoole.com
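To make that concrete, a sketch against the 0.8 Java producer API (the topic
name, broker address, and JSON fields here are made up for illustration):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class EmbeddedAttributes {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // assumed broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // One shared topic; "tag" and "user" travel inside each message,
        // rather than being encoded in thousands of per-user topic names.
        String payload = "{\"tag\":\"login\",\"user\":\"u42\",\"body\":\"...\"}";
        producer.send(new KeyedMessage<String, String>("events", "u42", payload));
        producer.close();
    }
}

Consumers then filter on the embedded attributes, instead of subscribing to a
topic per user.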
On Friday, September 5, 2014 4:21 AM, Sharninder wrote:
Agreed. I can't see this being a good use for Kafka.
Philip
-
http://www.philipotoole.com
On Thursday, September 4, 2014 9:57 PM, Sharninder wrote:
Since you want all chats and mail history persisted all the time, I
personally wouldn't recommend Kafka for this.
>> Only problem is the number of connections to Kafka is increased.
*Why* is it a problem?
Philip
You could spin up one ConsumerConnector per partition (each with a single
ConsumerStream) and use the commitOffset API to commit all partitions managed
by each ConsumerConnector after the thread has finished processing the messages.
Does that solve the problem, Bhavesh?
Gwen
On Tue, Sep 2, 2014 at 5:47 PM, Philip O'Toole
wrote:
> Yeah, from reading that I suspect
Thanks,
Bhavesh
On Tue, Sep 2, 2014 at 5:20 PM, Philip O'Toole <
philip.oto...@yahoo.com.invalid> wrote:
> No, you'll need to write your own failover.
>
> I'm not sure I follow your second question, but the high-level Consumer
> should be able to do what you want if you
only when the batch is done.
Thanks,
Bhavesh
On Tue, Sep 2, 2014 at 4:43 PM, Philip O'Toole <
philip.oto...@yahoo.com.invalid> wrote:
Either use the SimpleConsumer, which gives you much finer-grained control, or
(this worked with 0.7) spin up a ConsumerConnector (this is a high-level
consumer concept) per partition, and turn off auto-commit.
Philip
-
http://www.philipotoole.com
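For anyone finding this in the archives, a sketch of the second option against
the 0.8 high-level consumer API (ZK address, group, and topic are made up).
commitOffsets() commits every partition the connector owns, which is why you
run one connector per partition:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // assumed ZK address
        props.put("group.id", "mygroup");           // assumed group
        props.put("auto.commit.enable", "false");   // we commit explicitly below

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("mytopic", 1));

        for (MessageAndMetadata<byte[], byte[]> msg : streams.get("mytopic").get(0)) {
            process(msg.message());    // your processing logic goes here
            connector.commitOffsets(); // commit only after processing is complete
        }
    }

    static void process(byte[] message) { /* ... */ }
}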
On Tuesd
Retention is per topic, per Kafka broker; it has nothing to do with the
Producer. You do not need to restart the Producer for retention changes to take
effect. You do, however, need to restart the broker. Once restarted,
all messages will then be subject to the new policy.
Philip
> consumer = SimpleConsumer(client=kafka,
>     group="wombat.%s" % socket.gethostname(),
>     topic=topic, partitions=partitions,
>     fetch_size_bytes=1024 * 1024,
>     auto_commit=False, buffer_size=256 * 1024,
>     max_buffer_size=2048 * 1024)
>
> if options.offset:
>
On Wed, Aug 20, 2014 at 10:04 AM, Philip O'Toole wrote:
Nice work. That tool I put together was getting a bit old. :-)
I updated the Kafka "ecosystem" page with details of both tools.
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
Philip
-
http://www.philipotoole.com
On Wednesday, August 2
It's not a bug, right? It's the way the system works (if I have been following
the thread correctly) -- when the retention time passes, the message is gone.
Either consume your messages sooner, or increase your retention time. Kafka is
not magic, it can only do what it's told.
In practice I hav
If you haven't studied the docs yet, you should, as this is a broad question
which needs background to understand the answer.
But in summary, the high-level Consumer does more for you and, importantly,
provides balancing between Consumers. The SimpleConsumer does less for you, but
gives you more control.
Kafka can ingest any kind of data, and connect to many types of systems. Much
work exists in this area already, for hooking a wide variety of systems to
Kafka. If your system isn't supported, then you write a Kafka Producer to pull
(or receive) messages from your system, and write them to Kafka.
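As a simplified illustration of such a bridge (0.8 Java producer; stdin stands
in for whatever system you are pulling from, and the broker address and topic
are made up):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class BridgeProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // assumed broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        // Pull messages from "your system" -- stdin here -- and write them to Kafka.
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            producer.send(new KeyedMessage<String, String>("bridged-topic", line));
        }
        producer.close();
    }
}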
>> - we have low data traffic compared to your figures: around 30 GB a
day. Will it be an issue?
I have personal experience that Kafka deals extremely well with very
low volumes, as well as very high. I have used Kafka for small integration-test
setups, as well as large production systems. Kaf
I am not familiar with "JMSXGroupID", but it sounds like you could just use the
Producer API, which allows your code to choose the partition to which a given
message is sent -- perform some modulo math on your GroupID, given your number
of partitions ("partitioner.class"). And since only 1 Consumer in a group
consumes a given partition, all messages with the same GroupID will be handled
by the same Consumer.
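A sketch of that partitioner for the 0.8 Java producer (the class name and the
modulo-on-GroupID policy are the made-up parts; Partitioner and
"partitioner.class" are the real API):

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Routes every message with the same group ID to the same partition.
public class GroupIdPartitioner implements Partitioner {
    public GroupIdPartitioner(VerifiableProperties props) {} // constructor Kafka requires

    @Override
    public int partition(Object key, int numPartitions) {
        // key is the message key -- here, the group ID. Mask off the sign bit
        // so a negative hashCode can't produce a negative partition number.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}

Wire it up with props.put("partitioner.class", "com.example.GroupIdPartitioner")
and send each message with its GroupID as the key.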
Todd -- can you share details of the ZK cluster you are running to support
this scale? Is it one single Kafka cluster? Are you using a single ZK cluster?
Thanks,
Philip
-
http://www.philipotoole.com
On Monday, August 11, 2014 9:32 PM, Todd Palino wrote:
maintenance overhead. My thought is to use the
> existing kafka cluster, with the hope that the topic deletion api will be
> available soon. Meantime, just have a cron job cleaning up the outdated
> topics from zookeeper.
>
> Let me know what you think,
> Thanks,
> Chen
>
>
> would rather go this path instead of setting up another queue system.
>
> Chen
>
>
> On Mon, Aug 11, 2014 at 6:07 PM, Philip O'Toole <
> philip.oto...@yahoo.com.invalid> wrote:
>
>> It's still not clear to me why you need to create s
Kafka use case.
>
> Chen
>
>
>> On Mon, Aug 11, 2014 at 5:01 PM, Philip O'Toole
>> wrote:
I'd love to know more about what you're trying to do here. It sounds like
you're trying to create topics on a schedule, to make it easy to locate
data for a given time range? I'm not sure it makes sense to use Kafka in this
manner.
Can you provide more detail?
Philip
---
> thought the zk listeners are in separate async threads (and that's what it
> looks like looking at the kafka consumer code).
>
> Maybe I should increase the zk session timeout and see if that helps.
>
>
>> On Thu, Aug 7, 2014 at 2:56 PM, Philip O'Toole
>
Policies for which messages to drop, retain, etc., seem like something you
should code in your application. I personally would not like to see this extra
complexity added to Kafka.
Philip
--
http://www.philipotoole.com
> On Aug 7, 2014, at 2:44 PM, Bhavesh Mistry
Fluentd might work, or simply configure rsyslog or syslog-ng on the box to
watch the Apache log files and send them to a suitable Producer. (For example,
I wrote something that will accept messages from a syslog client and stream
them to Kafka: https://github.com/otoolep/syslog-gollector)
More
A big GC pause in your application, for example, could do it.
Philip
-
http://www.philipotoole.com
On Thursday, August 7, 2014 11:56 AM, Philip O'Toole wrote:
I think the question is what in your consuming application could cause it not
to check in with ZK for longer than the timeout.
-
http://www.philipotoole.com
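If GC pauses are the cause, one mitigation is to lengthen the consumer's ZK
session timeout so that pauses shorter than the timeout no longer expire the
session. A sketch with 0.8 property names (the values are arbitrary):

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class TimeoutConfig {
    public static ConsumerConfig build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");         // assumed ZK address
        props.put("group.id", "mygroup");                   // assumed group
        props.put("zookeeper.session.timeout.ms", "30000"); // default is 6000
        props.put("zookeeper.connection.timeout.ms", "30000");
        return new ConsumerConfig(props);
    }
}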
On Thursday, August 7, 2014 8:16 AM, Jason Rosenberg wrote:
Well, it's possible that when
Brokers can host multiple partitions for the same topic without any problems.
Philip
-
http://www.philipotoole.com
On Wednesday, July 23, 2014 2:15 PM, Kashyap Mhaisekar
wrote:
Hi,
Is the maximum no. of partitions for a topic dependent on the no. of
How many partitions in your topic? Are you talking about Producing or
Consuming? All those factors will determine the number of TCP connections to
your Kafka cluster.
In any event, Kafka can support lots, and lots, and lots of connections (I've
run systems with hundreds of connections to a 3-broker cluster).
that is changing), and
its simplicity is very appealing vis-à-vis the 0.8 series.
Philip
On Fri, Jul 18, 2014 at 11:46 PM, Philip O'Toole
wrote:
Thanks Jay -- some good ideas there.
I agree strongly that fewer, more solid, non-Java clients are better than many
shallow ones. Interesting that you feel we could do some more work in this
area, as I thought it was well served (even if they have proliferated).
One area I would like to see docume
discussion.
-Jay
On Thu, Jul 17, 2014 at 9:28 PM, Philip O'Toole
wrote:
First things first. I friggin' think Kafka rocks. It's a system that has given
me a lot of joy, and I've spent a lot of fun hours (and sometimes not so fun)
looking at consumer lag metrics. I'd like to give back, beyond spreading the
gospel about it architecturally and operationally.
My only
You should find code here that will help you get an HTTP app server together
that writes to Kafka on the back end.
https://cwiki.apache.org/confluence/display/KAFKA/Clients
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
On Wednesday, July 16, 2014 9:36 PM, Philip O'
FWIW, I happen to know Subodh -- we worked together many years back. We
discussed this a little off-the-list, but perhaps my thoughts might be of wider
interest.
Kafka, in my experience, works best when Producers have a persistent TCP
connection to the Broker(s) (and possibly Zookeeper). I gues
I went looking for a Syslog Collector, written in Go, which would stream to
Kafka. I couldn't find any, so put one together myself -- others might be
interested. It optionally performs basic parsing of an RFC5424 header too,
before streaming the messages to Kafka. As always, YMMV.
https://github.com/otoolep/syslog-gollector
>>>> The link resets every so often, and
>>>> definitely did today.
>>>>
>>>> Assuming it is this, are you surprised the thread went down? Perhaps we need
>>>> to catch this?
>>>>
>>>> Philip
>>>>
alidation failed. Is there any issue
> with the network?
>
> Thanks,
>
> Jun
>
>
>> On Mon, Feb 10, 2014 at 5:00 PM, Philip O'Toole wrote:
>>
I should say we *think* this exception brought down the Consumer thread. The
problematic partition on our system was 2-29, so this is definitely the
related thread.
Philip
On Mon, Feb 10, 2014 at 5:00 PM, Philip O'Toole wrote:
Saw this thrown today, which brought down a Consumer thread -- we're using
Consumers built on the High-level consumer framework. What may have
happened here? We are using a custom C++ Producer which does not do
compression, and which hasn't changed in months, but this error is
relatively new to us,
C++ writes bytes to kafka, and java reads bytes from kafka.
>
> Is there something special about the way the messages are being serialized
> in C++?
>
> --Tom
>
>
> On Fri, Jan 31, 2014 at 2:36 PM, Philip O'Toole wrote:
>
Is this a Kafka C++ lib you wrote yourself, or some open-source library?
What version of Kafka?
Philip
On Fri, Jan 31, 2014 at 1:30 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi,
>
> If Kafka Producer is using a C++ Kafka lib to produce messages, how can
> Kafka Consumers writt
dropped and
> data will be lost.
>
> Do you have by any chance a pointer to an existing implementation of a
> such producer?
>
> Thanks
>
>
> On Jan 30, 2014, at 15:13, Philip O'Toole wrote:
>
What exactly are you struggling with? Your question is too broad. What you want
to do is eminently possible; I've done it myself from scratch.
Philip
> On Jan 30, 2014, at 6:00 AM, Thibaud Chardonnens wrote:
>
> Hello — I am struggling with how to design a robust implementation of a
> pro
http://kafka.apache.org/07/configuration.html
Hello -- I can look at the code too, but how does this setting interact
with compression? After all, a Producer doing compression doesn't know the
size of a "message" on the wire it will send to a Kafka broker until after
it has been compressed. An
We use Zookeeper, as is standard with Kafka.
Our systems are idempotent, so we only store offsets when the message is
fully processed. If this means we occasionally replay a message due to some
corner-case, or simply a restart, it doesn't matter.
Philip
On Mon, Dec 9, 2013 at 12:28 PM, S Ahmed wrote:
OK, I am only familiar with 0.72.
Philip
On Mon, Dec 9, 2013 at 4:54 AM, Sanket Maru wrote:
> I am using kafka 0.8.0
>
>
> On Mon, Dec 9, 2013 at 6:09 PM, Philip O'Toole wrote:
What version are you running?
Philip
On Mon, Dec 9, 2013 at 4:30 AM, Sanket Maru wrote:
> I am working on a small project and discovered that our consumer hasn't
> been executed for over a month now.
>
> How can I check the unprocessed events? From which date the events are
> available and wh
Take apart the hard disk, and flip the magnets in the motors so it spins in
reverse. The Kafka software won't be any the wiser. That should give you
exactly what you need, combined with high-performance sequential reads.
:-D
> On Dec 6, 2013, at 7:43 AM, Joe Stein wrote:
>
> hmmm, I just rea
Sweet -- thanks Jun.
On Thu, Dec 5, 2013 at 9:25 PM, Jun Rao wrote:
> That's right. Remove the local log dir from brokers that you don't want to
> have the topic.
>
> Thanks,
>
> Jun
>
>
> On Thu, Dec 5, 2013 at 9:22 PM, Philip O'Toole wrote:
>
>
If a topic already exists on at least one broker in a cluster, it
> won't be created on newly added brokers.
>
> Thanks,
>
> Jun
>
>
> On Thu, Dec 5, 2013 at 4:29 PM, Philip O'Toole wrote:
>
We're running 0.72.
Thanks,
Philip
On Thu, Dec 5, 2013 at 4:29 PM, Philip O'Toole wrote:
Hello,
Say we are using Zookeeper-based Producers, and we specify a topic to be
written to. Since we don't specify the actual brokers, is there a way to
prevent a topic from appearing on a specific broker? What if we set the
topic's partition count to 0 on the broker where we don't want it to appear?
Simple tool I wrote to monitor 0.7 consumers.
https://github.com/otoolep/stormkafkamon
On Wed, Dec 4, 2013 at 12:49 PM, David DeMaagd wrote:
> You can use either the MaxLag MBean (0.8):
>
> http://kafka.apache.org/documentation.html#monitoring
>
> Or the ConsumerOffsetChecker (0.7 or 0.8, can't
at 5:49 PM, S Ahmed wrote:
>>>>
>>>> Interesting. So Twitter Storm is used to basically process the messages on
>>>> kafka? I'll have to read up on storm b/c I always thought the use case
>>>> was a bit different.
>>>>
>>>>
A couple of us here at Loggly recently spoke at AWS re:Invent on how we use
Kafka 0.72 in our ingestion pipeline. The slides are at the link below, and may
be of interest to people on this list.
http://www.slideshare.net/AmazonWebServices/infrastructure-at-scale-apache-kafka-twitter-storm-elast
retry code structured? Have you open sourced it?
>
>> On Nov 28, 2013, at 16:08, Philip O'Toole wrote:
Parra wrote:
>
> Philip, what about if the broker goes down?
> I may be missing something.
>
> Diego.
> On 28/11/2013 21:09, "Philip O'Toole" wrote:
There are many options. Another simple consumer could read from it, and write
to a second broker.
Philip
> On Nov 28, 2013, at 4:18 PM, Steve Morin wrote:
>
> Philip,
> How would do you mirror this to a main Kafka instance?
> -Steve
>
>> On Nov 28, 2013, at 16:14, Philip
I should add that in our custom producers we buffer in RAM if required, so
Kafka can be restarted, etc. But I would never code streaming to disk now. I
would just run a Kafka instance on the same node.
Philip
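A sketch of the buffering idea (not our actual code; the topic, queue bound,
and retry interval are arbitrary): a bounded in-memory queue in front of the
producer, so a Kafka restart stalls the drain thread instead of blocking
callers or losing everything.

import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class BufferedProducer {
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<String>(100000);
    private final Producer<String, String> producer;

    public BufferedProducer(Properties props) {
        producer = new Producer<String, String>(new ProducerConfig(props));
        Thread drain = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String msg = buffer.take(); // blocks until something is buffered
                        while (true) {
                            try {
                                producer.send(new KeyedMessage<String, String>("mytopic", msg));
                                break;
                            } catch (Exception e) {
                                Thread.sleep(1000); // Kafka down? RAM holds the backlog; retry
                            }
                        }
                    }
                } catch (InterruptedException e) {
                    // shutting down
                }
            }
        });
        drain.setDaemon(true);
        drain.start();
    }

    // Callers never block on Kafka; false means the buffer is full and the message is dropped.
    public boolean offer(String msg) {
        return buffer.offer(msg);
    }
}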
On Nov 28, 2013, at 4:08 PM, Philip O'Toole wrote:
By FS I guess you mean file system.
In that case, if one is that concerned, why not run a single Kafka broker on
the same machine, and connect to it over localhost? And disable ZK mode too,
perhaps.
I may be missing something, but I never fully understood why people try really
hard to build their own buffering and spooling outside of Kafka.
The brokers and consumers have the logic for handling ZK session
> expirations. So, they should recover automatically. The issue is that if
> there is a real failure in the broker/consumer while the VPN is down, the
> failure may not be detected.
>
> Thanks,
>
> Jun
>
>
> On Wed, Nov
establish a new ZK session and new connections to the brokers.
>
> Thanks,
>
> Jun
>
>
> On Tue, Nov 26, 2013 at 9:33 PM, Philip O'Toole wrote:
>
> > I want to use a ZK cluster for my Kafka cluster, which is only available
> > over a cross-country VPN tunnel. Th
I want to use a ZK cluster for my Kafka cluster, which is only available over a
cross-country VPN tunnel. The VPN tunnel is prone to resets, every other day or
so, perhaps down for a couple of minutes at a time.
Is this a concern? Any setting changes I should make to mitigate any potential
issues?
what should be changed.
>
> Thanks
> Oleg.
>
>
>
> On Tue, Nov 19, 2013 at 11:51 AM, Philip O'Toole wrote:
>
Don't get scared, this is perfectly normal and easily fixed. :-) The second
topology attempted to fetch messages from an offset in Kafka that does not
exist. This could happen due to Kafka retention policies (messages
deleted) or a bug in your code. Your code needs to catch this exception,
and then reset the consumer to a valid offset.
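A sketch of the recovery step against the 0.8 SimpleConsumer API (the 0.7
classes differ, but the idea is the same): when a fetch fails with an
out-of-range offset, ask the broker for the earliest (or latest) valid offset
and resume from there.

import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetRequest;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetReset {
    // whichTime: kafka.api.OffsetRequest.EarliestTime() or LatestTime()
    public static long validOffset(SimpleConsumer consumer, String topic,
                                   int partition, long whichTime, String clientId) {
        TopicAndPartition tp = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> info =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        info.put(tp, new PartitionOffsetRequestInfo(whichTime, 1));
        OffsetResponse response = consumer.getOffsetsBefore(
                new OffsetRequest(info, kafka.api.OffsetRequest.CurrentVersion(), clientId));
        return response.offsets(topic, partition)[0]; // first entry is a valid offset
    }
}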
We use 0.72 -- I am not sure if this matters with 0.8.
Why would one choose a specific partition, as opposed to a random partition
choice? What design pattern(s) would call for choosing a partition? When is it
a good idea?
Any feedback out there?
Thanks,
Philip
D'oh. Bad config on our part. Something we thought we had fixed long ago, but
it crept back in.
Make sure fetch sizes are big enough!
Philip
On Oct 31, 2013, at 7:18 PM, "Philip O'Toole" wrote:
We suddenly started seeing these messages from one of our consumers tonight.
What do they mean? Our high-level consumers -- most of them -- are not
moving.
What has happened?
Philip
2013-11-01 01:58:09,033 [ERROR] [FetcherRunnable.error] error in
FetcherRunnable for pro
cessed:6-6: fetched offset =
On Wed, Oct 30, 2013 at 8:13 PM, Lee, Yeon Ok (이연옥) wrote:
> Hi, all.
> I am just curious why Apache Kafka is better than any other Message
> System in terms of throughput and durability.
>
Because it's brilliant, that's why. :-)
> What factors give Kafka better performance?
>
Be
You have two choices.
-- Do what you say, and write your own consumer, based on the
SimpleConsumer. Handle all commits, ZK accesses, and balancing yourself.
-- Use a ConsumerConnector for every partition, and call commitOffsets()
explicitly when you have processed a message. This does a commit for all
partitions managed by that ConsumerConnector, which is why you run one per
partition.
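For the first option, the core fetch loop with the 0.8 SimpleConsumer looks
something like this sketch (host, topic, partition, and sizes are made up;
offset persistence, error checking, and leader discovery are all up to you):

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class SimpleFetchLoop {
    public static void main(String[] args) {
        // host, port, socket timeout, buffer size, client ID
        SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "myClient");
        long offset = 0L; // you track and persist this yourself
        while (true) {
            FetchRequest req = new FetchRequestBuilder()
                    .clientId("myClient")
                    .addFetch("mytopic", 0, offset, 100000) // topic, partition, offset, fetchSize
                    .build();
            FetchResponse resp = consumer.fetch(req); // check resp.hasError() in real code
            for (MessageAndOffset mo : resp.messageSet("mytopic", 0)) {
                // process mo.message() here, then advance
                offset = mo.nextOffset();
            }
        }
    }
}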
I would like to second that. It would be real useful.
Philip
On Oct 8, 2013, at 9:31 AM, Jason Rosenberg wrote:
> What I would like to see is a way for inactive topics to automatically get
> removed after they are inactive for a period of time. That might help in
> this case.
>
> I added a c
Is this with 0.7 or 0.8?
On Wed, Oct 2, 2013 at 12:59 PM, Joe Stein wrote:
> Are you sure the consumers are behind? could the pause be because the
> stream is empty and producing messages is what is behind the consumption?
>
> What if you shut off your consumers for 5 minutes and then start the
> wrote:
>
>> Yes I understand that. I am letting the producer/consumer use zookeeper to
>> discover brokers.
>> I can clearly see in the logs (brokers) that both the brokers create a new
>> topic log for the same topic.
>>
> The brokers are in different availability zones. Does that matter?
> Suchi
>
>
> On Fri, Sep 20, 2013 at 4:20 PM, Philip O'Toole wrote:
>
Seems to me you are confusing partitions and brokers. Partition count has
nothing to do with the number of brokers to which a message is sent -- just
the number of partitions into which that message's topic is split when it gets
to a broker.
You need to explicitly set the destination brokers in the Producer
configuration.
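In 0.7 terms that means setting the producer's static broker list instead of
zk.connect. A sketch (host names made up; note the 0.7 format is
brokerid:host:port):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

public class StaticBrokerList {
    public static void main(String[] args) {
        // Bypass ZK-based discovery and name the destination brokers explicitly.
        Properties props = new Properties();
        props.put("broker.list", "0:broker1:9092,1:broker2:9092"); // brokerid:host:port
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        // ... send as usual, then ...
        producer.close();
    }
}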
Are you using 0.7 or 0.8?
>
> Jun
>
>
> On Mon, Sep 9, 2013 at 12:49 PM, Philip O'Toole wrote:
>
>
> Cheers,
>
> -Jay
>
>
> On Mon, Sep 9, 2013 at 12:49 PM, Philip O'Toole wrote:
>
Hello Kafka users and developers,
We at Loggly launched our new system last week, and Kafka is a critical
part. I just wanted to say a sincere thank-you to the Kafka team at
LinkedIn who put this software together. It's really, really great, and has
allowed us to build a solid, performant system.
What options is Kafka running with?
On Thu, Aug 29, 2013 at 3:59 PM, Mark wrote:
> I tried changing the ports and still no luck. Does it work with JConsole
> and/or do I need anything in my class path?
>
>
> On Aug 29, 2013, at 3:44 PM, Surendranauth Hiraman <
> suren.hira...@sociocast.com> wro
On Aug 29, 2013, at 1:04 PM, Philip O'Toole wrote:
>
> > On Thu, Aug 29, 2013 at 11:09 AM, Mark
> wrote:
> >
On Thu, Aug 29, 2013 at 11:11 AM, Mark wrote:
> Also, are the consumer offsets stored in Kafka or Zookeeper?
>
Zookeeper.
>
> On Aug 29, 2013, at 11:09 AM, Mark wrote:
>
On Thu, Aug 29, 2013 at 11:09 AM, Mark wrote:
> 1) Should a producer be aware of which broker to write to or is this
> somehow managed by Kafka itself. For example, if I have 2 brokers with a
> configured partition size of 1 will my messages be written in a round-robin
> type of fashion to each b
It means the first.
Philip
On Thu, Aug 29, 2013 at 8:55 AM, Mark wrote:
> If I have 3 brokers with 3 partitions does that mean:
>
> 1) I have 3 partitions per broker so I can have up to 9 consumers
>
> or
>
> 2) There is only 1 partition per brokers which means I can have only 3
> consumers
>
Well, you can only store data in Kafka; you can't put application logic in
there.
Storm is good for processing data, but it is not a data store, so that is out.
Redis might work, but it is only an in-memory store (seems like it does have
persistence, but I don't know much about that).
You cou
Yes, the Kafka team has told me that this is how it works (at least for 0.72).
Philip
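For what it's worth, the blocking can be bounded. A sketch using the
consumer.timeout.ms property (ZK address, group, and topic are made up): left
unset, hasNext() blocks forever; set, it throws ConsumerTimeoutException when
nothing arrives in time.

import java.util.Collections;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class BoundedBlockingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // assumed
        props.put("group.id", "mygroup");           // assumed
        props.put("consumer.timeout.ms", "5000");   // default -1 means block forever

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        KafkaStream<byte[], byte[]> stream = connector
                .createMessageStreams(Collections.singletonMap("mytopic", 1))
                .get("mytopic").get(0);

        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        try {
            while (it.hasNext()) {
                byte[] message = it.next().message();
                // process message...
            }
        } catch (ConsumerTimeoutException e) {
            // no message within 5s -- the iterator un-blocked instead of waiting forever
        }
    }
}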
On Fri, Aug 23, 2013 at 7:53 AM, Yu, Libo wrote:
> Hi team,
>
> Right now, from a stream, an iterator can be obtained which has a blocking
> hasNext().
> So what is the implementation behind the iterator? I as
offset for a partition (and would
> need to persist this list of offsets to allow resume after failure).
>
> The assumption of guaranteed order is essential for the throughput the
> application achieves.
>
> Thanks,
> Ross
>
>
>
> On 23 August 2013 14:36, Philip O
I am curious. What is it about your design that requires you to track order so
tightly? Maybe there is another way to meet your needs instead of relying on
Kafka to do it.
Philip
On Aug 22, 2013, at 9:32 PM, Ross Black wrote:
> Hi,
>
> I am using Kafka 0.7.1, and using the low-level SyncProduc
No, there isn't, not at the very start when there is no state in
Zookeeper. Once there is state, the Kafka team have told me that
rebalancing will not result in any dupes.
However, if there is no state in Zookeeper and your partitions are
empty, simply wait until all consumers have balanced before
1 topic.
I don't understand the second question.
Philip
On Aug 21, 2013, at 9:52 AM, Tom Brown wrote:
> Philip,
>
> How many topics per broker (just one?) And what is the read/write profile
> of your setup?
>
> --Tom
>
>
> On Wed, Aug 21, 2013 at
ore, but no
> solution there.
> http://mail-archives.apache.org/mod_mbox/kafka-users/201211.mbox/%3CCANZjK9i87enoPY15rzh2Bg4D8+H1jvkSCkro=f3EROjn_4T=r...@mail.gmail.com%3E
>
> If any one can offer any help, I'll really appreciate it.
>
> -
>
> Frank Yao
> @V
We usually run 3 brokers in production
environments, giving us a total of 24 partitions. Throughput has been superb.
For integration testing however, we usually use just 1 or 2 partitions.
Philip
>
> Thanks in advance!
>
> --Tom
--
Philip O'Toole
Senior Developer
Loggly
from
an actual Kafka topic). Yes, the queue may need synchronization to
ensure each job only gets pulled off the queue once, but you said it's
low volume so performance shouldn't be a concern.
Philip
On Tue, Aug 13, 2013 at 7:13 PM, Eric Sites wrote:
> Responses inline
>
> On
My experience is solely with 0.72. More inline.
On Tue, Aug 13, 2013 at 6:47 PM, Eric Sites wrote:
> Hello everyone,
>
> I have a very low volume topic that has 2 consumers in the same group. How do
> I get each consumer to only consume 1 message at a time, and if the first
> consumer is bus
In that case, the '/chroot' is only appended to the end of the
> list of host/port pairs, e.g.:
>
> host1.xyz.com:1234,host2.xyz.com:1234,host3.xyz.com:1234/chroot
>
> This is often not obvious to readers of the docs.
>
> Jason
>
>
>
>
> On Fri, A
Have you read the docs? They are well written. It's all there, including the
paths.
Philip
On Aug 9, 2013, at 3:24 PM, Vadim Keylis wrote:
> I am trying to set up the kafka service and connect to zookeeper that would be
> shared with other projects. Can someone advise how to configure the namespace
If I understand what you are asking, I have dealt successfully with the
same type of issue. It can take more than one Boost async_write() over a
broken connection before the client software notices that the connection is
gone.
The best way to detect if a connection is broken is not by detecting the
failure of a write.