Static IP. Buying a static IP may help. I am not an AWS expert.
On Wed, Nov 14, 2018 at 12:47 PM Srinivas Rapolu wrote:
> Hello Kafka experts,
>
> We are running Kafka on AWS; the main question is: what is the best way to
> retain the broker.id on a new instance spun up in place of a failed
> instance/broker?
>
>
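One way to make this deterministic, assuming you manage server.properties per instance (the values below are only illustrative):

```properties
# server.properties on the replacement instance:
# reuse the failed broker's id so the new broker takes over that broker's
# partition assignments once it rejoins the cluster
broker.id=2
# turn off automatic id generation so a fresh instance never mints a new id
broker.id.generation.enable=false
```

With a fixed broker.id per "slot", the replacement instance rejoins the cluster as the same logical broker.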
Does this help?
https://stackoverflow.com/questions/42546501/the-retention-period-for-offset-topic-of-kafka/44277227#44277227
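The linked answer revolves around the broker-side offset retention settings; a sketch of the relevant server.properties keys (values here are only illustrative, not recommendations):

```properties
# how long committed offsets are kept after a group goes inactive
offsets.retention.minutes=10080
# how often the broker checks for expired offsets
offsets.retention.check.interval.ms=600000
```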
On Mon, Oct 1, 2018 at 2:31 PM Kaushik Nambiar
wrote:
> Hello,
> Any updates on the issue?
>
>
> Regards,
> Kaushik Nambiar
>
> On Wed, Sep 26, 2018, 12:37 PM Kaushik Nambiar wrote:
odes" I meant being able to connect to
> the Kafka brokers port on those machines, that would be enough to use
> the java admin client as described above.
>
> Hope this helps.
>
> Best regards,
> Sönke
>
>
> On Mon, Feb 26, 2018 at 5:25 PM, naresh Goud
> wrote:
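The connectivity check Sönke describes could be sketched like this (the host name and port are assumptions; adjust to your actual listeners):

```shell
# if this succeeds from a department machine, the Java AdminClient can be
# pointed at the same address via bootstrap.servers
nc -vz broker1.example.com 9092
```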
It should always require a ZooKeeper connection, because internally the Kafka
brokers interact with ZooKeeper for all metadata about topics.
But it's interesting: how would you give departments access to Kafka nodes?
@Sönke,
Could you please shed some light on giving departments access to Kafka
nodes?
Hi Pravin,
You're correct.
You can run the application multiple times so that the instances are started
in multiple JVMs (run 1: java yourclass, which runs in one JVM;
run 2: java yourclass, which runs in another JVM),
or else
you can run the application on multiple machines, i.e. multiple application
instances.
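A minimal sketch of the multiple-JVM option, assuming the class is packaged in a hypothetical app.jar:

```shell
# each invocation below is a separate JVM; consumers in the same consumer
# group will split the topic's partitions between them
java -cp app.jar yourclass &
java -cp app.jar yourclass &
wait
```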
KafkaUtils.scala
>
> FYI
>
> On Sun, Feb 18, 2018 at 5:17 PM, naresh Goud
> wrote:
>
>> Hello Team,
>>
>> I see the "KafkaUtils.createStream()" method is not available in Spark 2.2.1.
>>
>> Can someone please confirm whether these methods are removed?
Hello Team,
I see "KafkaUtils.createStream() " method not available in spark 2.2.1.
Can someone please confirm if these methods are removed?
below is my pom.xml entries.
<properties>
  <scala.version>2.11.8</scala.version>
  <scala.tools.version>2.11</scala.tools.version>
</properties>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_${scala.tools.version}</artifactId>
  <version>2.2.1</version>
  <scope>provided</scope>
</dependency>
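For what it's worth: in Spark 2.2.x the receiver-based createStream() only ships in the 0.8-compatible connector, while the newer 0-10 connector exposes only the direct-stream API. If the old API is required, a dependency along these lines should still resolve (coordinates assume the Scala 2.11 build):

```xml
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
  <version>2.2.1</version>
</dependency>
```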
It's a good tool for your requirement.
You probably need to look at the Kafka Connect / Kafka Streams APIs.
Thank you,
Naresh
On Fri, Feb 2, 2018 at 8:50 PM, Matan Givo wrote:
> Hi,
>
> My name is Matan Givoni and I am a team leader in a small startup company.
>
> We are starting a development on a c
This is the topic created and used by Kafka internally to store consumer
offsets while consumer programs are running.
Thank you,
Naresh
On Sun, Feb 4, 2018 at 1:38 PM Ted Yu wrote:
> Which Kafka version are you using ?
> Older versions of kafka (0.10 and prior) had some bugs in the log-cleaner
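The internal offsets topic described above can be inspected with the stock tooling; a sketch for a 0.9/0.10-era broker (the ZooKeeper address is an assumption):

```shell
# show partition count, replication factor and per-partition leaders for
# the internal offsets topic
kafka-topics.sh --describe --topic __consumer_offsets --zookeeper localhost:2181
```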
Can you check whether JIRA KAFKA-3894 helps?
Thank you,
Naresh
On Tue, Jan 16, 2018 at 10:28 AM Shravan R wrote:
> We are running Kafka 0.9 and I am seeing large __consumer_offsets on some
> of the partitions, on the order of 100 GB or more. I see some of the log and
> index files are more than a year
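If a stopped log cleaner is the culprit, here is a hedged sketch of the server.properties settings involved (values are illustrative):

```properties
# compaction must be enabled for __consumer_offsets to be cleaned at all;
# it was off by default on early 0.9 builds
log.cleaner.enable=true
# KAFKA-3894: the cleaner thread can crash when a segment does not fit in
# the dedupe buffer; a larger buffer reduces that risk
log.cleaner.dedupe.buffer.size=268435456
```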