Hi,
I am trying to enable gzip compression for my events, but after I switched
compression.codec to "1" I found the produced events were not even
persisted to the disk log file. Of course, the consumer could not receive
any compressed events. I sent 10,000 or more events, but the broker's log file
corresponding errors in
> the broker logs? With the configuration below, I don't think any errors
> will be reported back to the producer.
>
> You could also try setting request.required.acks=1 to see if errors are
> reported back to the client.
>
> On 8/29/13 4:40 AM, "Lu Xuechao" wrote:
final static String p_request_required_acks = "1";
final static String p_producer_type = "async";
final static String p_batch_num = "100";
final static String p_compression_codec = "1";
final static String p_message_send_retries = "3";
final static
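For reference, here is a sketch of how the constants above appear to map onto the 0.8 producer properties; the property names are assumed from the 0.8 producer configuration, not taken from the poster's code:

```properties
# hedged mapping of the constants above to 0.8 producer properties
request.required.acks=1
producer.type=async
batch.num.messages=100
compression.codec=1
message.send.max.retries=3
```

In 0.8, compression.codec accepts 0 (none), 1 (gzip), or 2 (snappy).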
Jun
>
>
> On Thu, Aug 29, 2013 at 6:29 AM, Lu Xuechao wrote:
>
> > Let me post my test code here. I could see producer.send(data) returned
> > with no error.
> >
> > public class TestProducer extends Thread {
> > private final Producer producer;
Update: Sending compressed events with the console producer works:
kafka-console-producer.bat --broker-list localhost:9092 --sync --topic
topic1 --compress
I am working on Windows 7.
On Fri, Aug 30, 2013 at 8:40 AM, Lu Xuechao wrote:
> After I sent 1,000 compressed events, I saw these messages
No.
On Fri, Aug 30, 2013 at 11:57 AM, Jun Rao wrote:
> These are the metadata requests. Do you see Producer requests from your
> client?
>
> Thanks,
>
> Jun
>
>
> On Thu, Aug 29, 2013 at 5:40 PM, Lu Xuechao wrote:
>
> > After I sent 1,000 compressed events
Hi Jun,
Thanks for your help. Finally, I found the reason by enabling producer-side
DEBUG output: the snappy jar was not included in the classpath. Adding it
fixed the problem.
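For anyone hitting the same issue, a sketch of what the fix looks like; the jar names, versions, and paths below are assumptions for illustration, not the exact ones used here (note the `;` classpath separator on Windows):

```shell
# illustrative only: make sure snappy-java is on the producer's classpath
java -cp "kafka_2.8.0-0.8.0.jar;snappy-java-1.0.4.1.jar;." TestProducer
```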
Thanks again.
On Fri, Aug 30, 2013 at 12:53 PM, Lu Xuechao wrote:
> No.
>
>
> On Fri, Aug 30, 2013 at 1
Hi, Joe. wiki updated. Hope it helps.
On Fri, Aug 30, 2013 at 3:22 PM, Joe Stein wrote:
> I feel like this is maybe a usual case, as we have heard it a few times
> before now
>
> Lu Xuechao would you mind updating the FAQ
> https://cwiki.apache.org/confluence/display/KAFKA/FA
Hi Team,
I have some questions regarding Kafka partitions:
1. Based on my understanding, the partitions on the same broker contend for
disk IO. Say, if I have 10 hard drives, can I specify that the partitions be
spread evenly across those drives?
2. If I configure default.replication.factor=2, the
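A sketch of what question 1 is asking about, assuming the 0.8 broker config names: `log.dirs` takes a comma-separated list, and the broker spreads partition logs across those directories, so pointing one directory at each physical drive gives per-drive parallelism (the paths below are assumptions):

```properties
# hedged sketch: one log directory per physical drive
log.dirs=/disk1/kafka-logs,/disk2/kafka-logs,/disk3/kafka-logs
num.partitions=10
default.replication.factor=2
```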
> On Wed, Sep 11, 2013 at 9:59 PM, Lu Xuechao wrote:
>
> > Hi Team,
> >
> > I have some questions regarding Kafka partitions:
> >
> > 1. Based on my understanding, the partitions of the same broker have
> > contention on disk IO. Say If I have 10 hard d
Hi,
I wonder whether it is supported to enable compression for storage on the
brokers while the producer sends messages uncompressed and the consumer
receives messages uncompressed?
thanks.
Hi,
We observed the following correlation between Kafka configuration and
performance data:
1. The producer insertion rate drops when compression is enabled, especially
gzip;
2. When the producer batch size is below 200, we see high CPU and network IO
on the brokers, and high network IO on the producers/consumers;
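Observation 1 is at least partly explained by per-batch compression overhead: gzip carries fixed header and dictionary-warmup costs, so compressing many small payloads separately is far less effective than compressing one larger batch. A self-contained sketch using plain `java.util.zip` (no Kafka dependency; the payload and counts are made up) illustrates the effect:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class BatchCompression {
    // gzip a byte array and return the compressed size in bytes
    static int gzipSize(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.size();
    }

    public static void main(String[] args) throws Exception {
        String msg = "sample event payload 0123456789";
        // compress 200 messages one at a time vs. as a single batch
        int perMessageTotal = 0;
        StringBuilder batch = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            perMessageTotal += gzipSize(msg.getBytes("UTF-8"));
            batch.append(msg);
        }
        int batched = gzipSize(batch.toString().getBytes("UTF-8"));
        // batching the payloads compresses dramatically better
        System.out.println("per-message total: " + perMessageTotal
                + " bytes, batched: " + batched + " bytes");
    }
}
```

This is why a larger batch.num.messages tends to improve throughput when compression is on.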
What's th
?
>
> Thanks,
> Neha
>
>
> On Mon, Oct 21, 2013 at 12:01 PM, Lu Xuechao wrote:
>
> > Hi,
> >
> > We observed below correlation between kafka configuration and performance
> > data:
> >
> > 1. producer insertion rate drops as compression enabled,
Hi,
I am following the
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
When I send KeyedMessage with StringEncoder, I can retrieve the
messages I sent:
for (MessageAndOffset messageAndOffset :
        fetchResponse.messageSet(m_topic, m_partition)) {
    // handle messages
}
But whe
It seems the reason is that I enabled gzip compression.
What would the code look like to consume compressed messages?
thanks.
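For what it's worth, the gzip framing itself is plain `java.util.zip`; a self-contained round-trip sketch follows (independent of the Kafka API, which in 0.8 is expected to decompress fetched message sets transparently as you iterate them):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {
    // compress a byte array with gzip
    static byte[] gzip(byte[] plain) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(plain);
        }
        return bos.toByteArray();
    }

    // decompress a gzip-framed byte array
    static byte[] gunzip(byte[] compressed) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "compressed event payload".getBytes("UTF-8");
        byte[] restored = gunzip(gzip(original));
        System.out.println(new String(restored, "UTF-8"));
        // prints "compressed event payload"
    }
}
```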
On Thu, Oct 31, 2013 at 11:26 AM, Lu Xuechao wrote:
> Hi,
>
> I am following the
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Exampl
A/FAQ#FAQ-Whybrokersdonotreceiveproducersentmessages%3F
> ?
>
> Thanks,
>
> Jun
>
>
> On Thu, Oct 31, 2013 at 2:23 PM, Lu Xuechao wrote:
>
> > It seems the reason is that I enabled gzip compression.
> >
> > What would the code look like to consume compressed messages?
> >
> > thanks
I enabled gzip compression. Each topic has 10 partitions, and each partition
is handled by one simple consumer thread. All consumers stop iterating after
the first several responses. The responses still return bytes, but they
cannot be iterated.
On Thu, Oct 31, 2013 at 9:59 PM, Lu Xuechao wrote
I checked fetchResponse.hasError() but there was no error.
On Fri, Nov 1, 2013 at 7:45 AM, Jun Rao wrote:
> Did you check the error code associated with each partition in the fetch
> response?
>
> Thanks,
>
> Jun
>
>
> On Thu, Oct 31, 2013 at 9:59 PM, Lu Xuechao wrote:
The consumer starts from offset 0. Yes, there is data in the kafka log dir.
On Fri, Nov 1, 2013 at 4:06 PM, Jun Rao wrote:
> Which offset did you use for fetching? Is there data in the kafka log dir?
>
> Thanks,
>
> Jun
>
>
> On Fri, Nov 1, 2013 at 11:48 AM, Lu Xuechao wrote:
>
> &g