Hi All,
When trying to reconfigure the Kafka standalone connector I am getting the
exception below. Could you please help with this?
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not
deserialize instance of java.lang.String out of START_ARRAY token
Error payload:
{"connector.class"
Hi Team,
I am trying to integrate Mulesoft and Kafka.
I am getting the below error while running the project --
org.apache.kafka.common.errors.ApiException: The configured groupId is
invalid
And there is no standard field to set groupId in Mulesoft.
Can you please help if it has something to do with the
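In case it helps whoever picks this up: the broker returns INVALID_GROUP_ID
("The configured groupId is invalid") when group.id is missing or empty on
the consumer. A minimal sketch of the consumer properties to pass through,
assuming the Mule connector lets you supply a consumer properties file; the
file name and the pass-through mechanism are assumptions, not confirmed
Mulesoft behaviour:

    # consumer.properties (hypothetical file wired into the connector config)
    bootstrap.servers=localhost:9092
    # must be non-empty, or the group coordinator rejects requests
    group.id=mule-consumer-group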
John,
AFAIK no; however, this was suggested as part of the following JIRA:
https://issues.apache.org/jira/browse/KAFKA-3726
Feel free to upvote.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On October 4, 2016 at 12:17:08 AM, John Vines (jvi...@gmail.com) wrote:
Obligatory sorry if I missed it, but I looked and couldn't find anything.
Does Kafka support any sort of actions to perform when an event gets
truncated due to the retention policy?
Background: I'm looking into using kafka to augment our existing rigid data
flow by using it as a messaging system,
Hi,
My Kafka version is 0.8.2.2, the replication factor is 2, and
auto.leader.rebalance.enable=true.
I stopped a broker in my cluster. After a few minutes I started this broker
again. It was busy catching up on a huge lag and reached the 120MB/s disk
write limit. Additionally there are 23 partitions whose only undead
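(To watch which partitions are still catching up, the stock tooling can list
them, e.g. bin/kafka-topics.sh --zookeeper <zk> --describe
--under-replicated-partitions, where <zk> is a placeholder for your
ZooKeeper connect string.)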
Shri,
SSL in 0.9.0.1 is not beta and can be used in production. If you want
to put an authorizer on top of SSL to enable ACLs for clients and topics,
that's possible too.
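For example, something along these lines in server.properties (host, ports,
paths and passwords are placeholders):

    listeners=SSL://broker1:9093
    security.inter.broker.protocol=SSL
    ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
    ssl.keystore.password=changeit
    ssl.key.password=changeit
    ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
    ssl.truststore.password=changeit
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

With the authorizer enabled, the ACLs themselves can then be managed with
bin/kafka-acls.sh.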
Thanks,
Harsha
On Mon, Oct 3, 2016 at 8:30 AM Shrikant Patel wrote:
> We are on 0.9.0.1 and want to use SSL for ACL and
Newbie question, but what exactly does log.cleaner.enable=true do, and how
do I know if I need to set it to be true?
Also, if config changes like that need to be made once a cluster is up and
running, what's the recommended way to do that? Do you killall -12 kafka
and then make the change, and the
Yes, offset topic compaction is just the normal compaction.
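Assuming the cleaner really is off on your brokers, setting it in
server.properties and doing a rolling restart should let the internal
__consumer_offsets topic (which is created with cleanup.policy=compact) get
compacted:

    # server.properties; a broker config, so it needs a restart to take effect
    log.cleaner.enable=true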
Thanks
Tom Crayford
Heroku Kafka
On Monday, 3 October 2016, Tobias Adamson wrote:
> Hi
> We are using Kafka 0.10.1 with offset commits being stored inside of Kafka
> After a while these topics become extremely large and we are wonder
Hi
We are using Kafka 0.10.1 with offset commits being stored inside of Kafka.
After a while these topics become extremely large and we are wondering if we
need to enable log.cleaner.enable=true (currently false) to make sure the
internal offset topics get compacted and keep their size down?
Reg
I have pushed a hotfix to both trunk and 0.10.1; could you check whether the
issue is resolved now?
On Mon, Oct 3, 2016 at 7:18 AM, Hamidreza Afzali <
hamidreza.afz...@hivestreaming.com> wrote:
> Thanks Guozhang. We use ProcessorTopologyTestDriver for unit tests.
>
> Hamid
>
>
> > On 28 Sep 2016, a
We are on 0.9.0.1 and want to use SSL for ACL and for securing communication
between broker, producer and consumer.
Was / is the SSL-based ACL in beta for this version of Kafka?
We don't want to upgrade to 0.10.x unless it is absolutely needed.
Thanks,
Shri
They will be automatically added and removed.
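A minimal sketch with the plain consumer API, in case it helps (the broker
address, group id and topic regex are examples only; the same idea applies
to a pattern subscription in Streams):

    import java.util.Collection;
    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PatternSubscribe {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Topics matching the regex join (and leave) the subscription
        // automatically as the cluster metadata refreshes; no restart needed.
        consumer.subscribe(Pattern.compile("events-.*"),
            new ConsumerRebalanceListener() {
              public void onPartitionsAssigned(Collection<TopicPartition> p) {}
              public void onPartitionsRevoked(Collection<TopicPartition> p) {}
            });
        while (true) {
          consumer.poll(1000).forEach(r -> System.out.println(r.value()));
        }
      }
    }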
On Mon, 3 Oct 2016 at 14:59 Gary Ogden wrote:
> What if topics are created or deleted after the application has started?
> Will they be added/removed automatically, or do we need to restart the
> application to pick up the changes?
>
> On 1 October 2
Thanks Guozhang. We use ProcessorTopologyTestDriver for unit tests.
Hamid
> On 28 Sep 2016, at 11:48 AM, Hamidreza Afzali
> wrote:
>
> Hi,
>
> We are using the latest Kafka 0.10.1 branch. The combination of
> ProcessorTopologyTestDriver and WindowedStreamPartitioner is resulting in a
> div
What if topics are created or deleted after the application has started?
Will they be added/removed automatically, or do we need to restart the
application to pick up the changes?
On 1 October 2016 at 04:42, Damian Guy wrote:
> That is correct.
>
> On Fri, 30 Sep 2016 at 18:00 Gary Ogden wrote:
I have a use case, and I'm wondering if it's possible to do this with Kafka.
Let's say we will have customers that will be uploading JSON to our system,
but the JSON layout will differ from customer to customer. They are able
to define the schema of the JSON being uploaded.
They will then be a
I think, but don't know for sure, that it doesn't matter for consumers, since
the messages you read are still 'old' images. I would expect errors when you
use an old producer, and/or when consuming the record from the old producer.
On Mon, Oct 3, 2016 at 7:09 AM Nikhil Goyal wrote:
> Hi guys,
>
> I cr