Hi Jay,
Thanks!! Can you please share the contact person to get this included on
the Confluent Connector Hub page.
Regards,
Surendra M
-- Surendra Manchikanti
On Fri, Apr 22, 2016 at 4:32 PM, Jay Kreps wrote:
> This is great!
>
> -Jay
>
> On Fri, Apr 22, 2016 at 2:28 PM, Surendra Manchikanti <
> sur
To answer my own question (partially), I have learned that
max.partition.fetch.bytes, which defaults to a very large number, will
affect the number of records returned by each call to poll().
I also learned that seekToBeginning is a partition-level thing, but
props.put("auto.offset.r
This is great!
-Jay
On Fri, Apr 22, 2016 at 2:28 PM, Surendra Manchikanti <
surendra.manchika...@gmail.com> wrote:
> Hi,
>
> I have implemented a Kafka Connector for Solr. Please find the GitHub
> link below.
>
> https://github.com/msurendra/kafka-connect-solr
>
> The initial release having SolrS
Two producers writing to the same topic should not be a problem; there can
be multiple producers and consumers of the same Kafka topic.
I am not sure what could be wrong here. I can check this at my end if you
can share the producer code and any non-default topic or broker config
that you changed.
Please also check
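For a quick sanity check, here is a minimal two-producer sketch against one
topic (broker address, topic name, and client ids are placeholders); if
this runs cleanly, the problem is likely in the changed config:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TwoProducersSketch {
    private static KafkaProducer<String, String> newProducer(String clientId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("client.id", clientId); // distinct per producer
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    public static void main(String[] args) {
        // Two producers, same topic: a fully supported pattern.
        KafkaProducer<String, String> p1 = newProducer("producer-1");
        KafkaProducer<String, String> p2 = newProducer("producer-2");
        p1.send(new ProducerRecord<>("shared-topic", "k1", "from producer 1"));
        p2.send(new ProducerRecord<>("shared-topic", "k2", "from producer 2"));
        p1.close();
        p2.close();
    }
}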
Hi,
I have implemented a Kafka Connector for Solr. Please find the GitHub link
below.
https://github.com/msurendra/kafka-connect-solr
The initial release has the SolrSinkConnector only; the SolrSourceConnector
is under development and will be added soon.
Regards,
Surendra M
Hi. I've not set that value. My producer properties are as follows:
acks=all
retries=0
bath.size=1638
linger.ms=1
buffer.memory=33554432
compression.type=gzip
client.id=sds-merdevl
I have this running on two hosts with the same config. I thought that having
the same client.id on each would just
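If the shared client.id turns out to matter, a minimal sketch of making it
unique per host, assuming the local hostname is resolvable, would be:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;

public class UniqueClientIdSketch {
    public static void main(String[] args) throws UnknownHostException {
        Properties props = new Properties();
        // Suffix the shared base id with the local hostname so the two
        // hosts stop presenting identical client.ids to the brokers.
        props.put("client.id",
            "sds-merdevl-" + InetAddress.getLocalHost().getHostName());
        System.out.println(props.getProperty("client.id"));
    }
}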
Generally a proactive metadata refresh request is sent by the producer and
consumer every 5 minutes, but this interval can be overridden with the
property "metadata.max.age.ms", which has a default value of 300000 ms,
i.e. 5 minutes. Check whether you have set this property very low in your
producer.
On Fri, Apr 22, 2016
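To rule that out, a minimal producer-config sketch with the property set
back to its 300000 ms default (broker address is a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class MetadataAgeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // Default is 300000 ms (5 minutes); a very low value here makes
        // the client refresh metadata almost continuously and can flood
        // DEBUG logs with metadata requests.
        props.put("metadata.max.age.ms", "300000");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}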
I am VERY new to Confluent, and need to switch Kafka over from the old
ZooKeeper offset storage to the 0.9 offset storage. I would like to follow
this process http://kafka.apache.org/documentation.html#offsetmigration but
do not know where Confluent stores its consumer configuration.
How can
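For what it is worth, the process in that link boils down to two rolling
bounces of the consumers. A sketch of the consumer.properties changes (in
a typical Confluent Platform layout the sample file sits at
etc/kafka/consumer.properties, but verify against your own deployment):

First rolling bounce, commit offsets to both Kafka and ZooKeeper:
offsets.storage=kafka
dual.commit.enabled=true

Second rolling bounce, once every consumer has picked up the change:
offsets.storage=kafka
dual.commit.enabled=false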
Do I understand correctly that poll() will return a subset of the messages
in a topic each time it is called? So if I want to replay all messages, I
would seek to the beginning and call poll in a loop? Not easily knowing
when I was done, without a high watermark
https://issues.apache.org/jira/brow
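One hedged sketch of a bounded replay, assuming manual partition assignment
is acceptable: capture the end offset with seekToEnd()/position() as a
stand-in for the high watermark, rewind, and poll until you reach it
(broker, topic, and partition are placeholders):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReplayAllSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "replay-sketch");           // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
        consumer.assign(Arrays.asList(tp));

        // Snapshot the log end offset as a stand-in for the high watermark;
        // records appended after this point are deliberately ignored.
        consumer.seekToEnd(tp);          // 0.9 varargs signature
        long end = consumer.position(tp);
        consumer.seekToBeginning(tp);

        while (consumer.position(tp) < end) {
            for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
                if (record.offset() >= end) break; // grew past the snapshot
                System.out.printf("offset=%d value=%s%n",
                    record.offset(), record.value());
            }
        }
        consumer.close();
    }
}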
Hi all,
I'm developing an AMQP - Kafka bridge and I'm facing a significant
limitation of the current 0.9.1 Kafka client APIs.
The consumer.poll() method is synchronous and, as we know, it's needed in
order to send the heartbeat even if no records are available. It means that
poll() needs to be called
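A common workaround sketch, not verified against 0.9.1 specifically: keep
calling poll() on the consumer thread and pause() the assigned partitions
while the bridge is busy, so poll() keeps heartbeating without delivering
records (pause()/resume() here use the 0.9 varargs signatures; newer
clients take a Collection):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class BridgeHeartbeatSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "amqp-bridge");             // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("bridge-topic")); // placeholder

        boolean bridgeBusy = false; // stand-in for real backpressure state
        while (true) {
            TopicPartition[] assigned =
                consumer.assignment().toArray(new TopicPartition[0]);
            if (bridgeBusy) {
                consumer.pause(assigned);  // poll() still heartbeats,
            } else {                       // but returns no records
                consumer.resume(assigned);
            }
            ConsumerRecords<String, String> records = consumer.poll(100);
            // hand records (empty while paused) to the AMQP side here
        }
    }
}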
I'm testing a Kafka install and using the Java client. I have a topic set
up and it appears to work great, but after a while I noticed my log
starting to fill up with what appears to be some kind of loop of metadata
updates.
Example:
2016-04-22 15:43:55,139 DEBUG s=s-root_out env="md"
[kaf
Hi,
We are using Kafka MirrorMaker to replicate Kafka between two data centers.
Kafka has 15 topics with 5 partitions each. MirrorMaker has 5 producers and
5 streams, with queue.enqueue.timeout.ms=-1, and it runs in the target data
center. The normal lag in offsets between source and target Kafka
is
Hi Phil,
Regarding pause and resume, I have not tried this approach, but I think it
may not be feasible. If your consumer no longer has the partition assigned
from which the record being processed was fetched, or even if that
partition is somehow assigned to the consumer again, you may still not be
abl
Thanks for the good suggestions, but I have postponed the idea of setting
up Apache Kafka due to, unfortunately, too many issues with the Kafka setup
and my very limited experience with distributed architecture / Kafka /
ZooKeeper.
On Fri, Apr 22, 2016 at 3:34 PM, Lohith Samaga M
wrote:
> Hi,
> Please set up a Kafk
Hi,
Please set up a Kafka cluster, so you can get high throughput as well
as high availability.
Best regards / Mit freundlichen Grüßen / Sincères salutations
M. Lohith Samaga
-Original Message-
From: Gaurav Agarwal [mailto:gaurav130...@gmail.com]
Sent: Friday, April 22, 2016 1
Hi, we use kafka_2.10-0.8.2.1, and our VM config is 4 cores, 8 GB.
Our cluster consists of three brokers, and our broker config is the default
with 2 replicas. Our broker load is often very high once in a while;
load is greater than 1.5 per core on average.
We have about 70 topics on this cluster.
When we use Top
Hi
You can have one or two instances of Kafka, but you can have one or two
Kafka topics dedicated to each application according to the need.
Partitions will help you increase throughput, and the consumer group id can
help you use a topic either as a queue or as a pub/sub topic, as in the
sketch below.
On Apr 22, 2016 12:37 PM, "Kuldeep Kamboj"
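To make the group id point concrete, a minimal sketch (group ids and topic
name are made up): consumers sharing a group.id split the partitions like a
queue, while a consumer in a different group sees every message, like
pub/sub:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupSemanticsSketch {
    static KafkaConsumer<String, String> newConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", groupId);
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> c = new KafkaConsumer<>(props);
        c.subscribe(Arrays.asList("app-events")); // placeholder topic
        return c;
    }

    public static void main(String[] args) {
        // Queue semantics: same group, partitions divided between members.
        KafkaConsumer<String, String> worker1 = newConsumer("app-a-workers");
        KafkaConsumer<String, String> worker2 = newConsumer("app-a-workers");
        // Pub/sub semantics: a different group gets its own full copy.
        KafkaConsumer<String, String> auditor = newConsumer("app-a-audit");
        // ... each consumer would then poll() on its own thread.
    }
}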
Thanks for the reply,
I understand your point, but my whole strategy depends on the first issue,
which is how I can integrate the apps into the architecture. Partitions /
consumer groups serve a different purpose. Do I need to set up three Kafka
instances, one for each app?
On Fri, Apr 22, 2016 at 12:30 PM, Lohith Sam
Hi,
It is better NOT to share topics among applications. You may have a
wrapper application reading from the queue/topic and routing it to the correct
application, but it is simpler for each application to read from its own topic.
Best regards / Mit freundlichen Grüßen / Sincères salutat