Thanks Kaufman. Yes, that was the issue. Now I can delete.
On 15 October 2016 at 00:50, Kaufman Ng wrote:
> The "chroot" is in ZK path is really a placeholder. It's not a real path
> in ZK.
>
> Try the default ZK path: zookeeper.xx.com:2181/
>
>
> On Thu, Oct 13, 2016 at 9:06 PM, Ratha v wrote
Yeah, this works! The consumer API is able to update the consumer offset.
The only downside is having to stop the real consumers.
On Fri, Oct 14, 2016 at 2:47 PM, Kevin A wrote:
> I didn't find an off-the-shelf tool to do this when I needed to a few weeks
> ago (which was kind of surprising).
>
> I use
Hello Kafka users, developers and client-developers,
One more RC for 0.10.1.0. We're hoping this is the final one so that we can
meet the release target date of Oct. 17 (Monday). Please let me know as
soon as possible if you find any major problems.
Release plan: https://cwiki.apache.org/confluen
I didn't find an off-the-shelf tool to do this when I needed to a few weeks
ago (which was kind of surprising).
I used the kafka-python library (my company's wrappers around it, actually)
to pretend I was in the consumer group I wanted to update and called commit
with the offsets I wanted.
First
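The "pretend consumer" trick Kevin describes can be modeled without a broker. With kafka-python you would create a KafkaConsumer with the target group_id and call consumer.commit() with a TopicPartition-to-offset mapping; everything below (the dict, the topic name, the offsets) is an illustrative stand-in for that, not the real API:

```python
# A minimal, broker-free model of the trick: join the consumer group
# (here just a dict standing in for the group's committed offsets) and
# commit the offsets you want. All names and numbers are hypothetical.

committed = {}  # (topic, partition) -> offset, as the broker would store it

def commit(offsets):
    """Overwrite the group's stored offsets, like KafkaConsumer.commit()."""
    committed.update(offsets)

commit({("orders", 0): 2500, ("orders", 1): 2400})  # current positions
commit({("orders", 0): 100})                        # manual rewind of one partition

print(committed)  # {('orders', 0): 100, ('orders', 1): 2400}
```

The key point is that a commit simply replaces the stored offset for each (topic, partition) it names and leaves the others untouched, which is why committing from a stand-in consumer rewinds the group.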
Hi Jeff,
Could you explain how you send messages to __consumer_offsets to overwrite
offsets? Thanks!
Yifan
On Fri, Oct 14, 2016 at 9:55 AM, Jeff Widman wrote:
> I also would like to know this.
>
> Is the solution to just use a console producer against the internal topics
> that store the offse
I also would like to know this.
Is the solution to just use a console producer against the internal topics
that store the offsets?
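On the question of writing to the internal topic: __consumer_offsets is a log-compacted topic keyed by (group, topic, partition), and after compaction only the latest record per key survives, which is why a newer record effectively overwrites the committed offset. A minimal sketch of that semantics (illustrative code, not broker internals):

```python
# Log compaction keeps only the last value seen for each key.

def compact(records):
    """Return the latest value per key, like Kafka's log cleaner."""
    latest = {}
    for key, value in records:
        latest[key] = value
    return latest

log = [
    (("my-group", "orders", 0), 500),
    (("my-group", "orders", 0), 750),  # later commit wins after compaction
]
print(compact(log)[("my-group", "orders", 0)])  # 750
```

Note that the real __consumer_offsets records use an internal binary key/value schema, so producing plain strings to it with the console producer would not yield valid offset records.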
On Wed, Oct 12, 2016 at 2:26 PM, Yifan Ying wrote:
> Hi,
>
> In the old consumer, we used the following command-line tool to manually
> update offsets stored in ZK:
>
>
Using the kafka-topics.sh script, simply set the retention low enough to remove
the messages:
kafka-topics.sh --zookeeper <host:port> --alter --config retention.ms=<ms>
--topic <topic>
Altering configs via kafka-topics.sh is actually deprecated (in favor of
kafka-configs.sh), but still works in Kafka 0.10.0. Note:
cleanup.policy=delete is required for this. This policy will only e
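The retention trick above is just a time-based cutoff: with cleanup.policy=delete, records older than retention.ms become eligible for deletion. A broker-free sketch of what the log cleaner effectively does (all names and timestamps here are illustrative, not Kafka code):

```python
# Model of time-based retention: drop messages older than the window.

def purge_expired(messages, retention_ms, now_ms):
    """Keep only (timestamp_ms, payload) pairs inside the retention window."""
    cutoff = now_ms - retention_ms
    return [(ts, payload) for ts, payload in messages if ts >= cutoff]

now = 1_000_000
msgs = [(now - 60_000, "old"), (now - 1_000, "fresh")]
print(purge_expired(msgs, retention_ms=10_000, now_ms=now))  # [(999000, 'fresh')]
```

In practice, remember to restore retention.ms to its original value once the cleaner has purged the unwanted messages, or new data will keep expiring just as quickly.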
WRT performance, yes, changing message size will affect the performance of
producers and consumers. Please study the following to understand the
relationship between message size and performance (graphs at the bottom
visualize the relationship nicely):
https://engineering.linkedin.com/kafka/ben
Hello Daniccan. I apologize for the dumb question, but did you also check
"message.max.bytes" on the broker? The default is about 1 MB (1000012 bytes)
for Kafka 0.10.0. If you need to publish larger messages, you will need to
adjust that on the brokers and then restart them.
-David
On 10/14/16,
Hi,
I have a question about kafka, could you please help to have a look?
I want to send a message from the producer with the snappy compression codec,
so I ran the command "bin/kafka-console-producer.sh --compression-codec snappy
--broker-list localhost:9092 --topic test". After that I checked the dat
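With --compression-codec, the console producer compresses message batches before sending them. snappy has no Python stdlib binding, so the sketch below uses zlib purely to illustrate the effect the codec has on a batch of repetitive messages; the choice of codec is orthogonal to the point:

```python
import zlib

# zlib stands in for snappy here (snappy isn't in the stdlib): a batch of
# similar messages compresses well before it goes over the wire.

batch = b"\n".join(b"test message %d" % i for i in range(100))
compressed = zlib.compress(batch)
print(len(compressed) < len(batch))  # True: repetitive batches shrink a lot
```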
Hi Team,
I need help with my query:
Is there any way to remove messages from a Kafka topic without stopping
ZooKeeper, the brokers, or the cluster?
Thanks,
Rudra
Hi,
I'd like to request help with a question regarding the "max.request.size"
configuration that we use in the Kafka producer. I sometimes get the following
exceptions in my project.
org.apache.kafka.common.errors.RecordTooLargeException: The request included a
message larger than the max message s
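RecordTooLargeException is raised on the producer side when a serialized record exceeds max.request.size (the broker separately enforces message.max.bytes). A hedged sketch of that pre-flight size check; the function and error message are illustrative, though the 1048576-byte default does match max.request.size's documented default:

```python
MAX_REQUEST_SIZE = 1_048_576  # bytes; producer default for max.request.size

def check_record_size(payload: bytes, max_size: int = MAX_REQUEST_SIZE):
    """Reject a record too large to send, like the producer's own check."""
    if len(payload) > max_size:
        raise ValueError(
            "record of %d bytes exceeds max.request.size (%d)"
            % (len(payload), max_size)
        )
    return True

print(check_record_size(b"x" * 100))  # True
```

The usual fixes are to raise max.request.size on the producer (and message.max.bytes plus the consumers' fetch sizes to match), or to shrink the payloads, e.g. via compression.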
Hi,
I've a problem with the MirrorMaker, while trying to replicate a few topics
to another Kafka cluster.
Source cluster is a 4 node Kafka cluster with static broker IDs from 0 to
3. Target cluster is a 15 node Kafka cluster with dynamic broker IDs,
assigned from 1000 or 1001 (whatever). Producer
The "chroot" is in ZK path is really a placeholder. It's not a real path
in ZK.
Try the default ZK path: zookeeper.xx.com:2181/
On Thu, Oct 13, 2016 at 9:06 PM, Ratha v wrote:
> Hi Jianbin;
> I tried like this, where I provided my zookeeper host, but it says [1]. I
> use Kafka 0.10. And I see
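The chroot is just an optional path suffix on the ZooKeeper connect string, after host and port. A quick sketch of splitting it out (an illustrative helper for a single-host connect string, not part of any Kafka tool; comma-separated host lists would need more handling):

```python
def split_chroot(connect):
    """Split 'host:port/chroot' into (host:port, chroot path)."""
    host, sep, chroot = connect.partition("/")
    return host, ("/" + chroot) if sep else "/"

print(split_chroot("zookeeper.xx.com:2181/kafka"))  # ('zookeeper.xx.com:2181', '/kafka')
print(split_chroot("zookeeper.xx.com:2181/"))       # ('zookeeper.xx.com:2181', '/')
```

With no suffix (or a bare trailing slash) the client uses the ZooKeeper root, which is why the default path `zookeeper.xx.com:2181/` works when no chroot was configured.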