Hi Neha,
Yes, I understand that, but when transmitting a single message I cannot set a
list of topics, only a single one. So I would have to add the same message to
the buffer once per topic. If the Kafka protocol allowed adding multiple
topics, the message would not have to be re-transmitted over the wire to a
Not really. You need producers to send data to Kafka.
On Mon, Oct 20, 2014 at 9:05 PM, Bhavesh Mistry wrote:
> Hi Kafka Team,
>
>
> I would like to send a single message to multiple topics (two for now)
> without re-transmitting the message from producer to brokers. Is this
> possible?
>
> Both
Hi Kafka Team,
I would like to send a single message to multiple topics (two for now)
without re-transmitting the message from producer to brokers. Is this
possible?
Neither the Scala producer nor the Java producer allows this. I do not need to
do this all the time, only under certain application conditions.
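Since the producer API takes one topic per send, the usual workaround is to fan the message out client-side, which is exactly the re-transmission being discussed. A minimal sketch (the `send` callback here is a stand-in, not the actual Kafka producer API):

```python
# Sketch of client-side fan-out: the same payload is handed to the
# producer once per topic, so it goes over the wire once per topic.
def fan_out(send, topics, key, value):
    """Call send(topic, key, value) once for each topic."""
    for topic in topics:
        send(topic, key, value)

# Usage with a stand-in producer that just records what it would send:
sent = []
fan_out(lambda t, k, v: sent.append((t, k, v)),
        ["topic-a", "topic-b"], b"k1", b"hello")
print(sent)  # both topics receive the same message
```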
Hi,
I have a question about 'topic.metadata.refresh.interval.ms' configuration.
As I know, the default value of it is 10 minutes.
Does it mean that the producer will switch partitions every 10 minutes?
What I am experiencing is that the producer does not switch to another
partition every 10 minutes.
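For context, the setting in question on the old (Scala) producer side; as I understand the 0.8 producer, with a null message key it sticks to one randomly chosen partition and only reconsiders the choice when metadata is refreshed, so this interval gates when a switch *can* happen rather than forcing one:

```properties
# Old (Scala) producer config: how often topic metadata is refreshed.
# Default is 600000 ms (10 minutes).
topic.metadata.refresh.interval.ms=600000
```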
I am running a performance test, and from what I am seeing, messages
are taking about 100 ms to pop from the queue itself, which is making the
test slow. I am looking for pointers on how to troubleshoot this issue.
There seems to be plenty of CPU and IO available. I am running 22 producers
It has been replaced with:
log.retention.check.interval.ms
We will update the docs. Thanks for reporting this.
Joel
On Mon, Oct 20, 2014 at 11:08:38AM -0400, Libo Yu wrote:
> http://kafka.apache.org/documentation.html#brokerconfigs
>
> In section 6.3, there is an example production server
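A minimal broker-config sketch of the rename (the 5-minute value below is what I believe the default to be; check your distribution's defaults):

```properties
# Old name (pre-0.8.1): log.cleanup.interval.mins
# New name: frequency of the log-retention check, in milliseconds
log.retention.check.interval.ms=300000
```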
As Neha mentioned, with rep factor 2x, this shouldn't normally cause
an issue.
Taking the broker down will cause the leader to move to another
replica; consumers and producers will rediscover the new leader; no
rebalances should be triggered.
When you bring the broker back up, unless you run a pr
This is another potential use-case for message metadata. i.e., if we
had a DC/environment field in the header you could easily set up a
two-way mirroring pipeline. The mirror-maker can just filter out
messages that originated in the source cluster. For this to be
efficient the mirror maker should r
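That origin filtering can be sketched in a few lines; note the `origin_dc` field is the hypothetical header discussed above, not something Kafka 0.8 messages carry today:

```python
# Sketch: two-way mirroring loop prevention, assuming messages carried
# a hypothetical origin-DC field in their metadata.
def should_mirror(message, local_dc):
    """Mirror only messages that did not originate in the target DC."""
    return message.get("origin_dc") != local_dc

msgs = [{"origin_dc": "dc1", "value": "a"},
        {"origin_dc": "dc2", "value": "b"}]
# Mirroring dc2 -> dc1: drop anything that started life in dc1.
to_copy = [m for m in msgs if should_mirror(m, "dc1")]
print([m["value"] for m in to_copy])  # ['b']
```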
Hi Kyle,
I added new documentation, which will hopefully help. Please take a look here:
https://issues.apache.org/jira/browse/KAFKA-1555
I've heard rumors that you are very very good at documenting, so I'm
looking forward to your comments.
Note that I'm completely ignoring the acks>1 case since
Hi everyone,
We run a 3-node Kafka cluster using 0.8.1.1, with all topics having a replication
factor of 3, meaning every broker has a replica of every partition.
We recently ran into this issue (
https://issues.apache.org/jira/browse/KAFKA-1028) and saw data loss within
Kafka. We understand why it happ
If the whole cluster is down and you allow unclean leader election on the
broker, some exposed messages on the broker could be lost when restarting
the brokers. When that happens, the consumers may need to reset their
offset since the current offsets may no longer be valid. By default, the
offset w
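On the consumer side, the reset behavior when a stored offset becomes invalid is governed by a setting like this (names per the 0.8 high-level consumer; defaults may vary by version):

```properties
# 0.8 high-level consumer: what to do when the fetched offset is out of
# range, e.g. after data loss from an unclean leader election.
# "largest" (the default) jumps to the newest data; "smallest" replays
# from the oldest available offset.
auto.offset.reset=largest
```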
Yes I did. It is set to 2.
On Oct 20, 2014 5:38 PM, "Neha Narkhede" wrote:
> Did you ensure that your replication factor was set higher than 1? If so,
> things should recover automatically after adding the killed broker back
> into the cluster.
>
> On Mon, Oct 20, 2014 at 1:32 AM, Shlomi Hazan w
I may be out of date, but I believe security measures are only in the
proposal stage. Your use case most likely involves sending data from the
internet at large to the Kafka instance. This will result in all data sent
to the Kafka instance being consumable by the internet at large. This is
unlikely
>> What is the use case requiring that?
I'm looking for an open source library that I can use in Android and iOS,
instead of hand rolling my own.
On Mon, Oct 20, 2014 at 10:21 AM, Joe Stein wrote:
> What is the use case requiring that? If you try to integrate kafka in the
> two different mobile
What is the use case requiring that? If you try to integrate Kafka into the
two different mobile platforms you will get many separate development
cycles, and neither will work in many mobile network environments. You can
HTTP/HTTPS POST the same Avro objects (or Thrift or ProtoBuf) from each
platform.
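A sketch of what that looks like from the client side, assuming a hypothetical HTTPS collection endpoint (the URL and JSON payload shape are illustrative; a real setup might POST Avro, Thrift, or Protobuf bytes instead):

```python
# Sketch: mobile clients POST serialized events to an HTTP endpoint
# fronting Kafka, instead of embedding a Kafka producer in the app.
import json
import urllib.request

def build_request(endpoint, event):
    """Build a POST request carrying one serialized event."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "application/json"}, method="POST")

req = build_request("https://collector.example.com/events",
                    {"type": "click", "ts": 1413820800})
print(req.get_method(), req.get_full_url())
```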
Thanks for the tip. I would like to avoid hand rolling any code if
possible. For example, on Android I would like to ask if people are able to
include and use the kafka jars with no problem? And on iOS, if there is a
way to include any C or other relevant code.
On Mon, Oct 20, 2014 at 8:49 AM, Har
Hi Josh,
Why not have a REST API service running where you post messages
from your mobile clients? The REST API can run Kafka producers
that accept these messages and push them into the Kafka brokers. Here
is an example where we did a similar service for Kafka
https:/
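A minimal sketch of the service side, with a stand-in for the Kafka producer (the handler shape and topic name are illustrative, not taken from the service linked above; a real deployment would wrap the Java/Scala producer or a client library):

```python
# Sketch: an HTTP handler that hands each posted body to Kafka.
def handle_post(body, topic, produce):
    """Validate the body and forward it via produce(topic, body)."""
    if not body:
        return 400  # reject empty payloads
    produce(topic, body)
    return 200

# Usage with a stand-in producer that records what it would publish:
queue = []
status = handle_post(b'{"event":"signup"}', "mobile-events",
                     lambda t, v: queue.append((t, v)))
print(status, queue)
```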
Hi,
Is it possible for iOS and Android to run the code needed for Kafka
producers? I want to have mobile clients connect to a Kafka broker.
Thanks,
Josh
http://kafka.apache.org/documentation.html#brokerconfigs
In section 6.3, there is an example production server configuration.
> Date: Mon, 20 Oct 2014 07:49:56 -0700
> Subject: Re: log.cleanup.interval.mins still valid for 0.8.1?
> From: neha.narkh...@gmail.com
> To: users@kafka.apache.org
Which example are you referring to?
On Mon, Oct 20, 2014 at 7:47 AM, Libo Yu wrote:
> Hi all,
>
>
> This config property does not appear in the table of broker config
> properties. But it appears in the example on the Web page. So I wonder if
> this is still a valid config property for 0.8.1. Th
Hi all,
This config property does not appear in the table of broker config properties.
But it appears in the example on the Web page. So I wonder if this is still a
valid config property for 0.8.1. Thanks.
Libo
Another way to set up this kind of mirroring is by deploying 2 clusters in
each DC - a local Kafka cluster and an aggregate Kafka cluster. The mirror
maker copies data from both the DC's local clusters into the aggregate
clusters. So if you want access to a topic with data from both DC's, you
subsc
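As a rough sketch, the mirror-maker invocation for one DC in that layout might look like this (config file names and the topic whitelist are placeholders, not a tested command):

```
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config dc1-local-consumer.properties \
  --producer.config dc1-aggregate-producer.properties \
  --whitelist '.*'
```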
Did you ensure that your replication factor was set higher than 1? If so,
things should recover automatically after adding the killed broker back
into the cluster.
On Mon, Oct 20, 2014 at 1:32 AM, Shlomi Hazan wrote:
> Hi,
>
> Running some tests on 0811 and wanted to see what happens when a brok
Mohit,
I wonder if it is related to
https://issues.apache.org/jira/browse/KAFKA-1585. When zookeeper expires a
session, it doesn't delete the ephemeral nodes immediately. So if you end
up trying to recreate ephemeral nodes quickly, it could either be in the
valid latest session or from the previou
Hi,
We have 2 data centers that produce events. Each DC has to process events from
both DCs.
I had the following in mind:
DC 1 | DC 2
events | events
[rest of ASCII flow diagram truncated]
You can run with a single-node ZooKeeper cluster as well.
See
http://zookeeper.apache.org/doc/r3.3.4/zookeeperStarted.html#sc_InstallingSingleMode
Cheers,
Erik.
On 9 Oct 2014, at 22:52, S Ahmed wrote:
> I want kafka features (w/o the redundancy) but don't want to have
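The linked guide boils down to a minimal standalone config; a sketch of conf/zoo.cfg (the data directory path is illustrative):

```
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
```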
Dear Experts,
We recently updated to Kafka v0.8.1.1 with ZooKeeper v3.4.5. I have a
topic with 30 partitions and 2 replicas. We are using the high-level consumer
API.
Each consumer process, which is a Storm topology, has 5 streams which
connect to 1 or more partitions. We are not using Storm's inbuilt
Hi,
Running some tests on 0.8.1.1 and wanted to see what happens when a broker is
taken down with 'kill'. I bumped into the situation in the subject line, where
launching the broker again left it a bit out of the game, as far as I could
see using Stackdriver metrics.
Trying to rebalance with "verify consum