@Ewen added license.txt (boost v1.0)
thanks
svante
2015-03-24 2:15 GMT+01:00 Ewen Cheslack-Postava :
> You don't get edit permission by default, you need to get one of the admins
> to give you permission.
>
> @Daniel, I've added libkafka-asio.
>
> @svante I started to add csi-kafka, but couldn't find a license?
RecordAccumulator is actually not part of the public api since it's
internal. The public apis are only those in
http://kafka.apache.org/082/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
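For reference, everything a client application needs is reachable through that
interface; a minimal sketch against the 0.8.2 java client (broker address,
topic and serializer choices below are just placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
props.put("key.serializer",
    "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
    "org.apache.kafka.common.serialization.StringSerializer");

// Only public types are touched here; RecordAccumulator, Sender, etc.
// stay hidden behind the KafkaProducer facade.
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("my-topic", "key", "value"));
producer.close();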
Thanks,
Jun
On Mon, Mar 23, 2015 at 9:23 PM, Grant Henke wrote:
> Thanks for validating that.
Thanks for validating that. I was thinking of solving it in the same
fashion, though I was unsure whether there is (or would be) a use case for
multiple CompressionTypes in the same RecordAccumulator, since the API was
originally created this way.
I would be happy to file a jira and can take on making t
Hi, Grant,
The append api does seem a bit weird. The compression type is a
producer-level config. Instead of passing it in for each append, we probably
should just pass it in once when creating the RecordAccumulator. Could you
file a jira to track this?
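Roughly, the change would move the setting from the call site to construction
time, something like the following (hypothetical signatures only, since
RecordAccumulator is internal and its real constructor takes more arguments):

// Today (sketch): the compression type travels with every append call
// accumulator.append(tp, key, value, CompressionType.GZIP, callback);

// Suggested (sketch): fix it once, taken from the producer-level config
RecordAccumulator accumulator = new RecordAccumulator(
    batchSize, totalMemorySize, CompressionType.GZIP, lingerMs /* ... */);
accumulator.append(tp, key, value, callback);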
Thanks,
Jun
On Mon, Mar 23, 2015 at
I am reading over the new producer code in an effort to understand the
implementation more thoroughly and had some questions/feedback.
Currently org.apache.kafka.clients.producer.internals.RecordAccumulator
append method accepts the compressionType on a per record basis. It looks
like the code wou
I think what you are saying is that in RequestChannel, we can start
generating header/body for new request types and leave requestObj null. For
existing requests, header/body will be null initially. Gradually, we can
migrate each type of request by populating header/body instead of
requestObj. Th
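In other words, the parsing step would dispatch on the api key and populate
only one of the two representations. An illustrative sketch (written here in
java; the real RequestChannel is scala and the helper names below are
stand-ins, not actual broker code):

// Hypothetical sketch of the migration strategy, not the broker code itself.
RequestHeader header = null;
Struct body = null;
RequestOrResponse requestObj = null;

if (migratedApiKeys.contains(apiKey)) {
    // New path: generic header/body parsing driven by the protocol schema
    header = RequestHeader.parse(buffer);
    body = ProtoUtils.currentRequestSchema(apiKey).read(buffer);
} else {
    // Old path: keep the existing request object until this api key
    // has been migrated
    requestObj = legacyParsers.get(apiKey).apply(buffer);
}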
You don't get edit permission by default, you need to get one of the admins
to give you permission.
@Daniel, I've added libkafka-asio.
@svante I started to add csi-kafka, but couldn't find a license?
On Sun, Mar 22, 2015 at 8:29 AM, svante karlsson wrote:
> Cool, looks nice. I was looking for
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32422/
---
Review request for kafka.
Bugs: KAFKA-1554
https://issues.apache.org/jira/b
[
https://issues.apache.org/jira/browse/KAFKA-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377009#comment-14377009
]
Mayuresh Gharat commented on KAFKA-1554:
Created reviewboard https://reviews.apach
[
https://issues.apache.org/jira/browse/KAFKA-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mayuresh Gharat updated KAFKA-1554:
---
Status: Patch Available (was: Open)
> Corrupt index found on clean startup
>
[
https://issues.apache.org/jira/browse/KAFKA-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mayuresh Gharat updated KAFKA-1554:
---
Attachment: KAFKA-1554.patch
> Corrupt index found on clean startup
>
[
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376977#comment-14376977
]
Prasad Mujumdar commented on KAFKA-1688:
[~bosco] and [~parth.brahmbhatt] With my
I'm thinking of a different approach that will not fix everything, but
will allow adding new requests without code duplication (and therefore
unblock KIP-4):
RequestChannel.request currently takes a buffer and parses it into an "old"
request object. Since the objects are byte-compatible, we shoul
The transferTo stuff is really specialized for sending a fetch response
from a broker. Since we can't get rid of the scala FetchResponse
immediately, we can probably keep the way that fetch responses are sent
(through FetchResponseSend) right now until the protocol definition is
extended.
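For context, the zero-copy path essentially hands the log segment file
straight to the socket, e.g. (plain java NIO sketch, not the actual
FetchResponseSend code):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

class ZeroCopySketch {
    // Bytes move from the page cache to the socket without being copied
    // into user space, which is why this path is hard to express through
    // the generic request/response objects.
    static long sendLogRegion(FileChannel log, SocketChannel socket,
                              long position, long count) throws IOException {
        return log.transferTo(position, count, socket);
    }
}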
Thanks,
[
https://issues.apache.org/jira/browse/KAFKA-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Benoy Antony updated KAFKA-2041:
Attachment: kafka-2041-001.patch
In the attached patch, a _Keyer_ trait is defined. This trait has a
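The patch itself is not quoted here; a purely hypothetical java rendering of
the general idea (a pluggable strategy that maps a logging event to a message
key) could look like the following, where the name and signature are guesses
rather than the contents of kafka-2041-001.patch:

// Hypothetical illustration only; the real trait in the patch may differ.
public interface Keyer {
    // Derive the Kafka message key from the log4j event being appended.
    String key(org.apache.log4j.spi.LoggingEvent event);
}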
Benoy Antony created KAFKA-2041:
---
Summary: Add ability to specify a KeyClass for KafkaLog4jAppender
Key: KAFKA-2041
URL: https://issues.apache.org/jira/browse/KAFKA-2041
Project: Kafka
Issue Ty
[
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Abhishek Nigam updated KAFKA-1888:
--
Attachment: KAFKA-1888_2015-03-23_11:54:25.patch
> Add a "rolling upgrade" system test
> ---
[
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Abhishek Nigam updated KAFKA-1888:
--
Status: Patch Available (was: Open)
> Add a "rolling upgrade" system test
> ---
[
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376399#comment-14376399
]
Abhishek Nigam commented on KAFKA-1888:
---
Updated reviewboard https://reviews.apache.
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/
---
(Updated March 23, 2015, 6:54 p.m.)
Review request for kafka.
Bugs: KAFKA-188
Navina Ramesh created KAFKA-2040:
Summary: Update documentation with the details of async producer
Key: KAFKA-2040
URL: https://issues.apache.org/jira/browse/KAFKA-2040
Project: Kafka
Issue T
[
https://issues.apache.org/jira/browse/KAFKA-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376040#comment-14376040
]
Sriharsha Chintalapani commented on KAFKA-1507:
---
[~jkreps] Since create/upda
[
https://issues.apache.org/jira/browse/KAFKA-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372199#comment-14372199
]
Arsenii Krasikov edited comment on KAFKA-2036 at 3/23/15 1:58 PM:
--
[
https://issues.apache.org/jira/browse/KAFKA-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Arsenii Krasikov updated KAFKA-2036:
Attachment: (was: patch)
> Consumer and broker have different networks
> ---
[
https://issues.apache.org/jira/browse/KAFKA-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Arsenii Krasikov updated KAFKA-2036:
Attachment: patch
simplified patch
> Consumer and broker have different networks
>
[
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375884#comment-14375884
]
Tommy Becker commented on KAFKA-873:
I think it would be nice. The status of zkclient
I'm using kafka 0.8.2.0.
I'm working on a C++ client library and I'm adding consumer offset
management to the client (https://github.com/bitbouncer/csi-kafka).
I know that the creation of zookeeper "paths" is not handled by the kafka
broker, so I've manually created
/consumers/consumer_offset_sample/of
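For reference, the layout that the 0.8.x ZooKeeper-based consumer offset
storage expects is roughly (group name taken from the message above,
topic/partition are placeholders):

/consumers/consumer_offset_sample/offsets/<topic>/<partition>  -> committed offset
/consumers/consumer_offset_sample/owners/<topic>/<partition>   -> id of the owning consumer
/consumers/consumer_offset_sample/ids/<consumer id>            -> consumer registration info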