With four Kafka nodes, I get these errors.
With one node, it works well.
2015-11-25 14:52 GMT+08:00 Fredo Lee :
The content below is the error report from Kafka.
When I try to fetch the coordinator broker, I get 6 forever.
[2015-11-25 14:48:28,638] ERROR [KafkaApi-1] error when handling request
Name: FetchRequest; Version: 1; CorrelationId: 643; ClientId: ReplicaFetcherThread-0-4; ReplicaId: 1; MaxWait: 500 ms; Min
Thanks, that helps! We are using the new consumer, but we aren't sure yet
whether it will be easier in our case to reset the offsets directly in the
consumers or do it externally, so we wanted to experiment with both.
-Jack
On Tue, Nov 24, 2015 at 7:18 PM Jason Gustafson wrote:
The consumer metadata request was renamed to group coordinator request
since the coordinator plays a larger role in 0.9 for managing groups, but
its protocol format is exactly the same on the wire.
As Gwen suggested, I would recommend trying the new consumer API which
saves the trouble of accessin
With acks=1, if your code does something like:
producer1.send(msg1).get() // get() blocks until a response is received.
producer2.send(msg2).get() // get() blocks until a response is received.
Then the ordering is guaranteed, though under a broker failure the acked
messages may be lost if they have not been replicated.
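For the archives, a minimal runnable sketch of that blocking pattern with the new Java producer (broker address, topic, and the use of a single producer are my assumptions, not from the original mail; exception handling omitted):

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  Properties props = new Properties();
  props.put("bootstrap.servers", "broker1:9092"); // placeholder address
  props.put("acks", "1");                         // leader-only ack
  props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer");
  props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer");

  KafkaProducer<String, String> producer = new KafkaProducer<>(props);
  // get() blocks until the leader responds, so msg2 is only sent after
  // msg1 is acked; that is what preserves the ordering.
  producer.send(new ProducerRecord<>("my-topic", "msg1")).get();
  producer.send(new ProducerRecord<>("my-topic", "msg2")).get();
  producer.close();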
Can you provide some more detail? What version of Kafka are you using?
Which consumer are you using? Are you getting errors in the consumer logs?
It would probably be helpful to see your consumer configuration as well.
-Jason
On Tue, Nov 24, 2015 at 7:18 AM, Kudumula, Surender <
surender.kudum...
Is it safe to run this on an active production topic? A topic was created
with a replication factor of 1, and I want to increase it to 2 to have
fault tolerance.
http://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor
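For anyone searching the archives: the linked procedure boils down to writing a JSON file that lists the desired replicas per partition and feeding it to kafka-reassign-partitions.sh. A sketch with a made-up topic name and broker ids:

  increase-replication.json:
  {"version":1,"partitions":[
    {"topic":"my-topic","partition":0,"replicas":[1,2]},
    {"topic":"my-topic","partition":1,"replicas":[2,3]}
  ]}

  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file increase-replication.json --execute

The reassignment runs in the background, so it is generally safe on a live topic; the main cost is the extra network and disk IO while the new replicas copy data (see Gwen's comments below).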
I see, thanks. Yes, re-assignment. Got my terminology off.
On Tue, Nov 24, 2015 at 4:45 PM, Gwen Shapira wrote:
ah, re-assignment!
If your re-assignment involves moving leaders to other servers, there will
be a tiny downtime when the new leader election happens.
Otherwise the main risk is copying the partition over to the new node - it
can use lots of network and IO. On a busy system we recommend scripting
Not adding. Taking some of the partitions from one Kafka server and
spreading them to another.
On Mon, Nov 23, 2015 at 5:40 PM, Gwen Shapira wrote:
> By re-partition you mean adding partitions to existing topics?
>
> There are two things to note in that case:
> 1. It is "hitless" because all
Cool. Thanks Gwen for your response.
Regards,
Amit
On 11/24/15, 3:19 PM, "Gwen Shapira" wrote:
Excellent! We are planning to use Kafka 0.9.0, so your last point is very
useful information. Can you please point me to some documentation or code
where I can understand how this auto-generation works?
In the 0.9.0 documentation I see that the default value for broker.id is
-1. That means it will
The new producer is async by default.
You can see few examples of how to use it here:
https://github.com/gwenshap/kafka-examples/tree/master/SimpleCounter/src/main/java/com/shapira/examples/producer/simplecounter
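For the archives, the core of an async send looks roughly like this (broker address and topic are placeholders; see the linked repo for complete examples):

  import java.util.Properties;
  import org.apache.kafka.clients.producer.*;

  Properties props = new Properties();
  props.put("bootstrap.servers", "broker1:9092");
  props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer");
  props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer");

  Producer<String, String> producer = new KafkaProducer<>(props);
  // send() returns immediately; the callback fires when the broker
  // responds or the send fails.
  producer.send(new ProducerRecord<>("my-topic", "value"),
      new Callback() {
          public void onCompletion(RecordMetadata metadata, Exception e) {
              if (e != null) e.printStackTrace();
          }
      });
  producer.close();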
On Tue, Nov 24, 2015 at 10:40 AM, Amit Karyekar
wrote:
You should definitely use the same id if you still have the data - it makes
life so much better.
There are 3 common ways to do it:
1. Use the last 3 digits of the IP as the broker ID (assuming Docker gives
you the same IP when the container relaunches)
2. Use a deployment manager that can register
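A sketch of option 1 as it might appear in a container bootstrap script (the paths are hypothetical, and it assumes hostname -i returns a single IPv4 address whose last octet is unique across brokers):

  # derive broker.id from the last octet of the container's IP
  BROKER_ID=$(hostname -i | awk -F. '{print $4}')
  echo "broker.id=${BROKER_ID}" >> /opt/kafka/config/server.properties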
Hi there.
I have a similar question, but it is related to this scenario:
A Docker server running in an EC2 instance with an EBS volume attached to it.
Kafka runs in a Docker container on this host, with server.properties
"autogenerated" using a bootstrap script.
Part of the bootstrapping process is to
Are you using the new consumer API (KafkaConsumer) or the older
ZookeeperConnector?
KafkaConsumer has a seek() API allowing you to replay events from any point.
You can also manually commit a specific offset.
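A minimal sketch of both, assuming consumer is an already-configured 0.9 KafkaConsumer and the topic, partition, and offset values are placeholders:

  import java.util.*;
  import org.apache.kafka.clients.consumer.OffsetAndMetadata;
  import org.apache.kafka.common.TopicPartition;

  TopicPartition tp = new TopicPartition("my-topic", 0);
  consumer.assign(Arrays.asList(tp));

  // Replay: rewind to any offset and start fetching from there.
  consumer.seek(tp, 0L);

  // Or commit a specific offset directly.
  Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
  offsets.put(tp, new OffsetAndMetadata(42L));
  consumer.commitSync(offsets);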
Gwen
On Tue, Nov 24, 2015 at 2:11 PM, Jack Lund
wrote:
We’re running Kafka 0.9.0, and storing our offsets in Kafka. We need a way
to reset the offsets for a consumer so that we can replay the stream. We
looked at this wiki page:
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka,
but the code doesn’t see
Congratulations on this release! Happy to see the security related
features which we are going to start using soon.
-Jaikiran
On Tuesday 24 November 2015 10:46 PM, Jun Rao wrote:
Hi folks,
We are working with Kafka 0.8.2.2 and want to use producer.type=async for
sending messages to the broker.
In Kafka 0.8.2.2, some new producer properties have been introduced. However,
there is no new name for the producer.type property mentioned in the
documentation.
We have the following
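For completeness, the old (Scala) producer that still understands producer.type would be configured roughly like this; with the new producer the knob is gone because sends are asynchronous by default, as Gwen notes above (broker address and topic are placeholders):

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  Properties props = new Properties();
  props.put("metadata.broker.list", "broker1:9092");
  props.put("serializer.class", "kafka.serializer.StringEncoder");
  props.put("producer.type", "async"); // batches sends on a background thread

  Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
  producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
  producer.close();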
Hi Prabhjot,
Thanks for your quick response. I never get any response to my queries :(.
Anyway, my issue here is that I have Kafka running as part of an HDP cluster.
I have my producer in the same cluster producing to a topic in Kafka, and a
consumer on another node trying to consume the messages
Hi friends,
We're seeing that our producers often have to resend messages, and we were
wondering why that might be happening.
I'm attaching the image, and here's the link to the chart in case the image
doesn't go through:
https://apps.sematext.com/spm-reports/s/ZE4Q8aD2gO
We're run
Hi Surender,
Please elaborate on your design.
Consumers don't talk to producers directly; Kafka is a brokered system, and
Kafka sits between producers and consumers.
Also, consumers consume from partitions of a topic, and producers write to
partitions in a topic.
These partitions and the logical abstr
Hi all,
Is there any way we can ensure in 0.8 that a Kafka remote producer and remote
consumer work with the same groupId? My Java consumer cannot consume messages
from the remote producer. Thanks
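In 0.8 the group is purely a consumer-side concept: producers have no group.id at all, so there is nothing to match on the producer side. A sketch of the high-level consumer config (ZooKeeper address and group name are placeholders):

  import java.util.Properties;
  import kafka.consumer.Consumer;
  import kafka.consumer.ConsumerConfig;
  import kafka.javaapi.consumer.ConsumerConnector;

  Properties props = new Properties();
  props.put("zookeeper.connect", "zk1:2181");
  props.put("group.id", "my-group"); // only consumers carry a group id
  ConsumerConnector connector =
      Consumer.createJavaConsumerConnector(new ConsumerConfig(props));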
Thanks a lot Prabhjot!
The issue is mitigated by running the preferred replica leader election
tool! Before that, I noticed that it simply could not do leader election:
when I created a new topic, that topic was not available for a long time
until the preferred replica leader election finished.
For th
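For reference, the tool is invoked like this (ZooKeeper address is a placeholder); with no --path-to-json-file argument it runs the election for all partitions:

  bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181

Setting auto.leader.rebalance.enable=true on the brokers makes this rebalancing happen automatically in the background.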
The Apache Kafka community is pleased to announce the release for
Apache Kafka 0.9.0.0. This is a major release that includes (1)
authentication (through SSL and SASL) and authorization, (2) a new
java consumer, (3) a Kafka connect framework for data ingestion and
egression, and (4) quotas. It also in
Thanks everyone for making the 0.9.0.0 release happen, we have 523 tickets
contributed from 87 contributors in the community!
Guozhang
On Mon, Nov 23, 2015 at 8:49 PM, Jun Rao wrote:
> Thanks everyone for voting.
>
> The following are the results of the votes.
>
> +1 binding = 4 votes (Neha Nar
Thanks to everyone that contributed to this release! It has been a long
time in the works, with some really great new additions for folks waiting
with excitement for the new consumer, security, and Connect (Copycat), and
everything else baked in.
Thanks again!
~ Joe Stein
- - - - - - - - - - - -
Hi all,
Any help will be appreciated, guys.
Regards
Surender Kudumula
Big Data Consultant - EMEA
Analytics & Data Management
surender.kudum...@hpe.com
M +44 7795970923
Hewlett-Packard Enterprise
Cain Rd,
Bracknell
RG12 1HN
UK
-----Original Message-----
From: Kudumula, Surender
Sent: 24 No
You can store offsets wherever you prefer; it's separate from the processes
you mentioned. Unfortunately, custom offset storage support has to be
entirely on the client side; one cannot (easily) extend the Kafka broker
with support for different offset storage. As a consequence,
existing Kafka
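A sketch of the client-side pattern with the 0.9 consumer: disable auto-commit, persist offsets yourself, and seek on startup. OffsetStore and process() are hypothetical placeholders, not Kafka classes; consumer is assumed to be configured with enable.auto.commit=false:

  import java.util.Arrays;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.common.TopicPartition;

  TopicPartition tp = new TopicPartition("my-topic", 0);
  consumer.assign(Arrays.asList(tp));
  consumer.seek(tp, offsetStore.load(tp)); // resume from the external store

  while (true) {
      ConsumerRecords<String, String> records = consumer.poll(100);
      for (ConsumerRecord<String, String> record : records)
          process(record);                         // hypothetical handler
      offsetStore.save(tp, consumer.position(tp)); // persist externally
  }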
Thank you for the detailed explanation.
Is it essential that offsets be stored in Kafka, or could they be stored
outside the Kafka/ZooKeeper system? Will it affect how logs are managed,
and when “older” messages are purged? Or are they two independent systems?
On 11/2/15, 03:51, "Stevo Slavić"
There is a broker configuration, "max.connections.per.ip", and the default
is Int.MaxValue.
Here's the doc: https://kafka.apache.org/documentation.html#brokerconfigs
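So capping connections is just a matter of setting it explicitly in server.properties (the value below is only an example):

  max.connections.per.ip=100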
2015-11-24 18:55 GMT+08:00 Muqtafi Akhmad :
Hello,
I am trying to send compressed messages to Kafka. The topic/broker
configurations are the defaults; I have set "compression.type" to "snappy"
on the Kafka producer. The uncompressed message size is 1160668 bytes, and
the error I get is
*org.apache.kafka.common.errors.RecordTooLargeException: The message i
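Not from the original mail, but the numbers are suggestive: 1160668 bytes is just over the producer's default max.request.size of 1048576, and the size check happens on the uncompressed record before compression. If that is the cause, the usual fix is to raise the limits on both sides, for example:

  # producer config
  max.request.size=2000000

  # broker config (server.properties)
  message.max.bytes=2000000
  replica.fetch.max.bytes=2000000   # keep >= message.max.bytes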
Hello guys, currently I am trying to figure out some things about Kafka
broker connections. There are two things I am wondering:
1. Is there any limit to the number of Kafka broker connections?
2. Is there any configuration to limit the number of Kafka broker
connections?
Any clue or information is apprec
Hi all,
I have been trying to think through why it's happening. Can anyone point me
in the right direction in terms of the config I am missing? The producer is
on another node in the same cluster, and the consumer is on a different node.
But as I said, the command-line client works and consumes all the messages. If I
Thank you.
1. Is this Scala code indeed the source code of the consumer that I am using?
(import kafka.consumer.*)
kafka.consumer.Consumer.createJavaConsumerConnector(consumerConfig).createMessageStreams(topicCountMap,
decoder, decoder).get(topic).get(0).iterator();
2. Even if that's so, can y