Hi, Edward..
yup .. you are correct .. when we got to a little over 1000 messages, the
program was failing with the exception stack trace I included below.
I fixed the test so it passes as long as the consumer gets all messages
sent by the producer, even if an exception is thrown during shutdown.
We don't have an official client api for creating topics yet. You can try
instantiating kafka.admin.CreateTopicCommand$ and use the following method:
createTopic(zkClient: ZkClient, topic: String, numPartitions: Int = 1,
replicationFactor: Int = 1, replicaAssignmentStr: String = "")
To instantiat
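For illustration, here is a hedged Java sketch of the argument list that the CreateTopicCommand.main() entry point expects. The flag names (--zookeeper, --topic, --partition, --replica) are assumptions based on the 0.8-era CLI tool, not confirmed by this thread; verify them against your Kafka version before relying on them:

```java
// Builds the argv that kafka.admin.CreateTopicCommand.main() parses.
// Flag names are assumptions from the 0.8-era command-line tool;
// check them against the Kafka version you are actually running.
public class CreateTopicArgs {
    static String[] buildArgs(String zkConnect, String topic,
                              int partitions, int replicationFactor) {
        return new String[] {
            "--zookeeper", zkConnect,
            "--topic", topic,
            "--partition", String.valueOf(partitions),
            "--replica", String.valueOf(replicationFactor)
        };
    }
    // Then, with the Kafka jars on the classpath:
    // kafka.admin.CreateTopicCommand.main(
    //     buildArgs("localhost:2181", "my-topic", 1, 1));
}
```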
I've checked it; it's per partition. What I'm talking about is more of a
global log size limit. If I have only 200 GB, I want to set a global
limit on log size, not per partition, so I won't have to change it
later if I add topics/partitions.
Thanks.
On Tue, Nov 5, 2013 at 8:37 PM, Neha Narkhede
You are probably looking for log.retention.bytes. Refer to
http://kafka.apache.org/documentation.html#brokerconfigs
On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane wrote:
> Hello,
>
> What would happen if disk is full? Does it make sense to have
> additional variable to set the maximum size for all logs combined?
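Worth noting when sizing this: log.retention.bytes is enforced per partition, so total disk usage grows with the partition count. An illustrative server.properties fragment (the values are examples only, not recommendations):

```properties
# Example values only: cap each partition's log at 1 GiB.
# Total usage is roughly this value x (partitions hosted on the broker),
# e.g. ~200 GiB for 200 partitions.
log.retention.bytes=1073741824
# Time-based retention also applies; whichever limit is hit first wins.
log.retention.hours=168
```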
> Could a fetch request from a consumer cause a Topic creation request
> (seems implausible)?
Yes, that seems like a way the broker can get into this situation.
Thanks,
Neha
On Tue, Nov 5, 2013 at 4:58 PM, Jason Rosenberg wrote:
> I don't know if I have a way to see the access logs on the
I don't know if I have a way to see the access logs on the LB..(still
trying to track that down).
One thing I do see though, is that there are fetch requests from consumers,
that are then followed by these Topic creation log messages, e.g. (replaced
some of the specific strings in this log li
Hello,
What would happen if disk is full? Does it make sense to have
additional variable to set the maximum size for all logs combined?
Thanks.
Shafaq is right, except the default value is 1000000 bytes, so a little bit
less than 1 MB.
On Tue, Nov 5, 2013 at 2:31 PM, Shafaq wrote:
> The message is sent as serialized over wire so there is no concept of width
> but only length.
>
> The message size is specified in the broker config and de
Hi Shafaq,
"Yes" to your first question.
For the second question, that depends on what you would like to do when your
request's offset is not valid. This is possible, for example, if your logs
get deleted according to the retention policy and your last committed offset
in ZK falls within that deleted range.
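The decision can be sketched in plain Java (a toy illustration, not the Kafka consumer API; all names here are made up): when the committed offset falls outside the log's valid range, reset it to the earliest or latest available offset.

```java
// Toy illustration: resolving a committed offset that may have become
// invalid after retention deleted the log segments it pointed into.
public class OffsetReset {
    static long resolveOffset(long committed, long logStart, long logEnd) {
        if (committed < logStart) return logStart; // data deleted; skip to earliest
        if (committed > logEnd)   return logEnd;   // past the end; fall back to latest
        return committed;                          // still valid; resume as-is
    }
}
```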
The message is sent as serialized over wire so there is no concept of width
but only length.
The message size is specified in the broker config and default is 10 MB off
the top of my head to prevent OOM.
On Nov 5, 2013 1:06 PM, "Jason Pohl" wrote:
> I am new to pub-sub architecture and am looki
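For reference, the broker setting being described here is message.max.bytes; in this era its default was 1000000 bytes, i.e. just under 1 MB. An illustrative server.properties fragment (values are examples only):

```properties
# Example: broker-side cap on the serialized size of a single message.
message.max.bytes=1000000
# Consumers must be able to fetch at least one full message, so the
# consumer's fetch size should be >= message.max.bytes.
```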
I tried catching RuntimeException as well as catching TopicExistsException, but
the problem seems to be that the catch block is not being invoked at all.
Is it possible that the scala code needs to have an @throws annotation?
sgg
On Nov 5, 2013, at 2:24 PM, wrote:
> Hi All:
> I am trying t
Actually that will not help. The main method probably swallows the exception.
On Tue, Nov 5, 2013 at 4:13 PM, Edward Capriolo wrote:
> Maybe try catching RuntimeException
>
>
> On Tue, Nov 5, 2013 at 2:24 PM, wrote:
>
>> Hi All:
>> I am trying to programmatically create Topics from a Java client.
>>
>
Thanks for clarifying, I see what you are saying. I am able to
cross-publish all the listed Scala versions to my local Maven repo with
"sbt +publish"
crossScalaVersions := Seq("2.8.0","2.8.2", "2.9.1", "2.9.2", "2.10.1"),
Bob
On 11/5/13, 11:05 AM, "Joe Stein" wrote:
>When we are talking ab
Maybe try catching RuntimeException
On Tue, Nov 5, 2013 at 2:24 PM, wrote:
> Hi All:
> I am trying to programmatically create Topics from a Java client.
>
> I am using a suggestion from
> http://stackoverflow.com/questions/16946778/how-can-we-create-a-topic-in-kafka-from-the-ide-using-api/18480
I am new to pub-sub architecture and am looking at Kafka to be a data hub.
My question: is there any limit to how wide a message should be in terms of
the number of fields? Not so much an absolute limit or a theoretical one, but
more of a practical limit.
What is the "widest" message that you typi
Hi All:
I am trying to programmatically create Topics from a Java client.
I am using a suggestion from
http://stackoverflow.com/questions/16946778/how-can-we-create-a-topic-in-kafka-from-the-ide-using-api/18480684#18480684
Essentially invoking the CreateTopicCommand.main().
String [] ar
In our case, jmxtrans returns:
ReplicaManager.ISRShrinksPerSec.Count
ReplicaManager.ISRShrinksPerSec.MeanRate
ReplicaManager.ISRExpandsPerSec.OneMinuteRate
ReplicaManager.ISRExpandsPerSec.Count
ReplicaManager.ISRExpandsPerSec.MeanRate
ReplicaManager.ISRExpandsPerSec.OneMinuteRate
ReplicaManager.
Using a custom partitioner lets you do a "gather" step and exploit data
locality.
Example use case: a consumer splits topic messages by customer id.
Each customer id has its own database table. With a custom partitioner,
you can send all data for a given customer id to the same partition and
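A minimal sketch of the idea in plain Java (the real Kafka Partitioner interface differs in signature, and the key type here is an assumption): hash the customer id so every message for that id lands on the same partition.

```java
// Toy partitioner: all messages for a given customer id map to the
// same partition, deterministically.
public class CustomerPartitioner {
    static int partition(String customerId, int numPartitions) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (customerId.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```

Because the mapping is deterministic, the consumer owning partition p sees every message for the customer ids that hash there, which is what enables the per-customer "gather" step.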
We use 0.7.2 -- I am not sure if this matters with 0.8.
Why would one choose a partition, as opposed to a random partition choice?
What design pattern(s) would mean choosing a partition? When is it a good
idea?
Any feedback out there?
Thanks,
Philip
When we are talking about the setting in Build.scala there are two uses for
it and we need to think of them separately.
The first is building and running the broker. The default for this is 2.8.0
The second is for building your client library (for producers/consumers) to
work with the Kafka brok
Attached patch to upgrade sbt-assembly.
On Tue, Nov 5, 2013 at 10:59 AM, Ngu, Bob wrote:
> Just to clarify, I also already have it working with 2.10.2 via this
> setting change in Build.scala
> scalaVersion := "2.10.2"
>
> My question is if this should be the default setting going forward. A
Just to clarify, I also already have it working with 2.10.2 via this
setting change in Build.scala
scalaVersion := "2.10.2"
My question is if this should be the default setting going forward. Also
since I am using Maven and not sbt, I am not sure that the cross
publishing works as expected, d
The 0.7.2 producer should already do that on a clean shutdown of the
producer.
On Tue, Nov 5, 2013 at 9:09 AM, Sriram Ramarathnam wrote:
> I am using kafka 0.7.2. I want to make sure that the producer queue is
> flushed before I can call a close on the producer. Is there a way to
> achieve this?
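The "flush before close" behavior described above can be sketched with a plain Java queue standing in for the producer's async buffer (a toy model, not the Kafka producer API; all names here are made up):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model: a producer-like object that drains its pending queue on
// close(), mimicking a clean shutdown that flushes buffered messages.
public class DrainingProducer {
    private final BlockingQueue<String> pending = new ArrayBlockingQueue<>(1024);
    private int sentCount = 0;

    void send(String msg) {
        pending.offer(msg); // enqueue; a background sender would drain this
    }

    // close() flushes everything still queued before returning.
    void close() {
        while (pending.poll() != null) {
            sentCount++; // stand-in for actually writing to the broker
        }
    }

    int sent() { return sentCount; }
}
```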
I am using kafka 0.7.2. I want to make sure that the producer queue is
flushed before I can call a close on the producer. Is there a way to
achieve this?
Thanks,
Sriram
Hi,
I wanted to control the commitOffset signal to ZK from the high-level Kafka
consumer. This will enable the consumer to process the message, and in case
of failure the offset is not moved.
If I do
auto.commit.enable=false
and then use the commitOffsets API of the high-level consumer, is that the way to
a
You might find this helpful -
http://kafka.apache.org/documentation.html#operations
On Mon, Nov 4, 2013 at 11:00 PM, Oleg Ruchovets wrote:
> Hello.
> We are planning to go production in a couple of months.
>
> Can the community share the best practices.
> What should be configured? What we ne
Thanks Neha! I guess auto-commit it is for now...
On Tue, Nov 5, 2013 at 5:08 AM, Neha Narkhede wrote:
> Currently, the only way to achieve that is to use the SimpleConsumer API.
> We are considering the feature you mentioned for the 0.9 release -
>
> https://cwiki.apache.org/confluence/display/
That I can manage. Thanks so much.
On Tue, Nov 5, 2013 at 6:46 AM, Neha Narkhede wrote:
> Yes, it will commit offsets only for the partitions that the consumer owns.
> But over time, the set of partitions that a consumer owns can change.
>
> Thanks,
> Neha
> On Nov 5, 2013 12:17 AM, "Vadim Keyli
Kane,
Would you mind attaching a patch to
https://issues.apache.org/jira/browse/KAFKA-1116 to enable cross
compilation to scala 2.10.2?
Thanks,
Jun
On Mon, Nov 4, 2013 at 9:53 PM, Kane Kane wrote:
> I'm using it with scala 2.10.2.
>
> On Mon, Nov 4, 2013 at 9:41 PM, Jun Rao wrote:
> > Do yo
Ok, so this can happen, even if the node has not been placed back into
rotation, at the metadata vip?
Hmmm... if the broker is not placed in the metadata vip, how did it end up
receiving metadata requests? You may want to investigate that by checking
the public access logs.
Thanks,
Neha
On Nov 4,
Can this unintentional topic creation be avoided by setting
auto.create.topics.enable=false?
Yes, but that is not the right fix.
Thanks,
Neha
On Nov 4, 2013 10:21 PM, "Priya Matpadi"
wrote:
> Can this unintentional topic creation be avoided by setting
> auto.create.topics.enable=false?
>
>
> On
Yes, it will commit offsets only for the partitions that the consumer owns.
But over time, the set of partitions that a consumer owns can change.
Thanks,
Neha
On Nov 5, 2013 12:17 AM, "Vadim Keylis" wrote:
> I am creating Consumer.createJavaConsumerConnector (kafka 0.8) for
> each topic/p
Guozhang,
I'll try to come up with some steps to replicate today. I didn't notice
anything obvious in the logs.
Joe
Sent from my Droid Charge on Verizon 4G LTE Guozhang Wang wrote:
Hello Joe,
Do you see any exceptions in the controller or state-change logs?
Where are these two topics original
I am creating Consumer.createJavaConsumerConnector (kafka 0.8) for
each topic/partition. Would it be safe to assume that the commit offset will
apply only to the stream/partition managed by that connector?
Thanks,
Vadim
On Mon, Nov 4, 2013 at 8:43 PM, Neha Narkhede wrote:
> You need to set "auto