Hi -
I would like to use the embedded API of Kafka Connect as per
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767.
But I cannot find enough detail regarding the APIs and implementation
models. Is there a sample that gives enough detail about
embedded Kafka Connect?
Our application that we use with Kafka requires that we know the request has
reliably reached the leader and one in-sync replica (we use min.insync.replicas=2 and
producer acks=all) before we continue with our DB transaction. To do this we
call the future object's get with a timeout of 5 seconds and r
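A minimal sketch of that pattern (the bootstrap server, topic, and exception handling here are illustrative assumptions, not from the original message):

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DurableSend {
    // Producer settings for the durability guarantee described above:
    // acks=all makes the leader wait for all in-sync replicas, and the
    // broker-side min.insync.replicas=2 ensures at least two replicas
    // must acknowledge before the write succeeds.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumption
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    static void sendBeforeCommit(KafkaProducer<String, String> producer,
                                 String topic, String key, String value)
            throws Exception {
        try {
            // Block until the broker acknowledges the write, or give up
            // after 5 seconds and abort the surrounding DB transaction.
            producer.send(new ProducerRecord<>(topic, key, value))
                    .get(5, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            throw new IllegalStateException("write not acknowledged in time", e);
        }
    }
}
```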
I would just look at an example:
https://github.com/confluentinc/kafka-connect-jdbc
https://github.com/confluentinc/kafka-connect-hdfs
On 7/12/17, 8:27 AM, "Debasish Ghosh" wrote:
Hi -
I would like to use the embedded API of Kafka Connect as per
https://cwiki.apache.org/con
Hi,
received a first answer today in
https://issues.apache.org/jira/browse/KAFKA-1499 from Manikumar.
So it looks like a topic with a mixed compression type can result in the
described scenario.
I wrote a follow-up question in the ticket whether it could be prevented by
post-configuring a
After some debugging, I figured it out. The name of my custom store
was "mapStore", so the store tried to log changes to a topic with that
name, but my test case never created such a topic. Unsurprisingly,
the producer couldn't get metadata for a nonexistent topic, so it
failed with a timeout.
W
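For reference, Kafka Streams derives a store's changelog topic name from the application id and the store name, so a test either has to create that topic or rely on auto-creation. A tiny helper showing the naming convention (the application id "my-app" below is made up):

```java
public class ChangelogNaming {
    // Kafka Streams logs changes for a store named S under the topic
    // "<application.id>-<S>-changelog".
    static String changelogTopic(String applicationId, String storeName) {
        return applicationId + "-" + storeName + "-changelog";
    }
}
```

With application.id set to my-app and the "mapStore" store from the message above, the producer would be looking for the topic my-app-mapStore-changelog.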
Oops. That is definitely a bug in the test. Thanks for pointing it out!
Do you wanna open a PR for it? If not, I can take care of it, too.
-Matthias
On 7/12/17 8:48 AM, Joel Dice . wrote:
> After some debugging, I figured it out. The name of my custom store
> was "mapStore", so the store tried to log chan
Yeah, I'll open a PR.
2017-07-12 11:07 GMT-06:00 Matthias J. Sax :
> Oops. That is definitely a bug in the test. Thanks for pointing it out!
>
> Do you wanna open a PR for it? If not, I can take care of it, too.
>
> -Matthias
>
> On 7/12/17 8:48 AM, Joel Dice . wrote:
>> After some debugging, I figured it ou
Thanks .. I found this blog post from Confluent
https://www.confluent.io/blog/hello-world-kafka-connect-kafka-streams/ ..
It's possibly pre-KIP 26, as the code base says ..
/**
 * This is only a temporary extension to Kafka Connect runtime until there
 * is an Embedded API as per KIP-26
 */
Has KIP-
Nothing painful, really, at this point.
We were just evaluating Kafka and noticed the discrepancy. I was wondering
what the 'design way' of using that tool is.
It's unlikely we'll use it for prod monitoring at all. Just for initial
experiments.
On Tue, Jul 11, 2017 at 1:23 PM, Jan Filipiak
wrote:
>
Hi Vahid,
Thanks for your response. Below are more details:
1. I do not have a JAAS file created. The setup I have on the 3-node Kafka
cluster is 2-way SSL. I am not using plaintext or SASL, as I have not enabled
Kerberos or Sentry.
2. All 3 nodes' server.properties files have:
authorizer.class.name...
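For comparison, a typical server.properties fragment for 2-way SSL with ACLs in that Kafka generation looked roughly like this (the DN value is illustrative; note that with SSL and no principal-mapping rules, the ACL principal is the client certificate's full distinguished name, not a short username):

```properties
# ACL authorizer shipped with Kafka 0.10.x / 0.11.x
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Require client certificates (2-way SSL)
security.inter.broker.protocol=SSL
ssl.client.auth=required
# Brokers themselves must be authorized, e.g. as super users by certificate DN
super.users=User:CN=kafka-broker,OU=Example,O=Example,C=US
```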
Hello Jan,
Thanks for your feedback. Let me try to clarify a few things about the
problems that we are trying to resolve and the motivations behind the current
proposals.
As Matthias mentioned, one issue that we are trying to tackle is to reduce
the number of overloaded functions in the DSL due to
Hi Karan,
Why are you trying to submit a class that does not exist? Don't you want to
create it first?
Jozef
Sent from [ProtonMail](https://protonmail.ch), encrypted email based in
Switzerland.
> Original Message
> Subject: Re: Kafka-Spark Integration - build failing with sbt
> Loc
Hi Jozef
- The class does exist (but in a different location, i.e. under src/main).
By the way, I was able to resolve the issue.
sbt by default considers src/main/scala the source location, and
I had changed the location to a different one.
I changed the build.sbt to point to the required location, that
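For anyone hitting the same thing, overriding the default source directory in build.sbt (sbt 0.13.x syntax; the path here is a made-up example) looks like:

```scala
// Point the Scala source directory somewhere other than the
// default src/main/scala
scalaSource in Compile := baseDirectory.value / "custom" / "main" / "scala"
```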
Guys, would anyone know about this?
On Tue, Jul 11, 2017 at 6:20 AM, Raghav wrote:
> Hi
>
> I followed https://kafka.apache.org/documentation/#security to create
> keystore and trust store using Java Keytool. Now, I am looking to do the
> same stuff programmatically using Java. I am struggling to
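The truststore half of that is straightforward with only the JDK, as sketched below (paths, alias, and method names are placeholders I made up). The keystore half additionally needs a generated key pair and a self-signed certificate, for which the plain JDK has no public API, so a library such as Bouncy Castle is commonly used for that part:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class TrustStoreBuilder {
    // Create an empty keystore of the given type, e.g. "JKS" or "PKCS12".
    static KeyStore newEmptyStore(String type) throws Exception {
        KeyStore ks = KeyStore.getInstance(type);
        ks.load(null, null);  // a null stream initializes an empty store
        return ks;
    }

    // Programmatic equivalent of:
    //   keytool -import -alias <alias> -file <certPath> -keystore <store>
    static void importTrustedCert(KeyStore ks, String alias, String certPath)
            throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream(certPath)) {
            Certificate cert = cf.generateCertificate(in);
            ks.setCertificateEntry(alias, cert);
        }
    }

    // Write the store to disk, protected by the given password.
    static void save(KeyStore ks, String storePath, char[] password)
            throws Exception {
        try (FileOutputStream out = new FileOutputStream(storePath)) {
            ks.store(out, password);
        }
    }
}
```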
Hello Eno,
I have to think through this approach. I could split the messages using the
source attribute. However, one challenge is that I would need to do
many joins. The example I gave is simplified. The real problem has about 10
sources of data. And there are various possible matches.
Hey! I was hoping I could get some input from people more experienced with
Kafka Streams to determine if they'd be a good use case/solution for me.
I have multi-tenant clients submitting data to a Kafka topic that they want
ETL'd to a third party service. I'd like to batch and group these by
tena
Hi
Can I have one consumer group with automatic subscription and one group
with manual assignment of partitions? To explain the scenario more: I have
a topic1, and several consumer processes are using group1, and each of the
consumers in the group got partitions assigned automatically by Kafka.
For
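In general, group management via subscribe() and manual assign() are independent choices made per consumer instance, so the two populations can coexist on the same topic; the manually assigned consumers should use a different group.id (or manage offsets themselves) to avoid offset-commit conflicts with the rebalancing group. A sketch of the two styles (topic, group names, and bootstrap server are made-up examples):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class TwoConsumerStyles {
    static Properties props(String groupId) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");  // assumption
        p.put("group.id", groupId);
        p.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return p;
    }

    public static void main(String[] args) {
        // Style 1: partitions of topic1 are balanced across group1 by Kafka.
        KafkaConsumer<String, String> auto = new KafkaConsumer<>(props("group1"));
        auto.subscribe(Collections.singletonList("topic1"));

        // Style 2: a fixed partition, with no group coordination involved.
        KafkaConsumer<String, String> manual = new KafkaConsumer<>(props("group2"));
        manual.assign(Arrays.asList(new TopicPartition("topic1", 0)));

        auto.close();
        manual.close();
    }
}
```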