So, a bit of feedback as well.
I hope Kafka Connect could work the following way (just a proposal):
you send a configuration which points to a class, but also a version for
that class (connector). Kafka Connect then has some sort of capability to
pull that class from a dependency repository and load it.
If I do KStream.leftJoin(KTable), this article (
https://www.confluent.io/blog/distributed-real-time-joins-and-aggregations-on-user-activity-events-using-kafka-streams/)
seems to suggest that I could have one:many (KTable:KStream).
Accurate?
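A minimal sketch of that shape, assuming the 0.10.x Streams API and made-up
topic/store names: many stream records share a key with a single row in the
table, so each stream record joins against the one current table row.

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

KStreamBuilder builder = new KStreamBuilder();

// One row per user in the table (names here are illustrative)...
KTable<String, String> userRegions = builder.table("user-regions", "user-regions-store");
// ...but many click events per user in the stream.
KStream<String, String> userClicks = builder.stream("user-clicks");

// Every click for a user joins the single current region for that user:
// one table row, many stream records.
KStream<String, String> clicksWithRegion =
    userClicks.leftJoin(userRegions, (click, region) -> region + "/" + click);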
On Mon, Jan 30, 2017 at 4:35 PM, Matthias J. Sax
wrote:
see inline.
best, koert
On Tue, Jan 31, 2017 at 1:56 AM, Ewen Cheslack-Postava
wrote:
> On Mon, Jan 30, 2017 at 8:24 AM, Koert Kuipers wrote:
>
> > I have been playing with Kafka Connect in standalone and distributed
> > mode.
> >
> > I like standalone because:
> > * I get to configure it using a
Hello.
I've got a local virtual development environment with:
- Kafka 0.10.1.1
- Java version "1.8.0_121"
I don't need anything special; this is just for trial, so I set up ZooKeeper,
Kafka, and the stream processor to use /tmp for data, logs, and state.
It's not persistent, but I can always try
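For a throwaway setup like that, the relevant settings are roughly the
following (the paths are assumptions; any writable directory works):

# zookeeper.properties
dataDir=/tmp/zookeeper

# server.properties
log.dirs=/tmp/kafka-logs

# Kafka Streams application config (StreamsConfig.STATE_DIR_CONFIG)
state.dir=/tmp/kafka-streams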
We use the following method to deserialize the message consumed using the
Simple Consumer:
DatumReader<T> datumReader = new SpecificDatumReader<>(className);
ByteArrayInputStream inputStream = new ByteArrayInputStream(byteArray);
Decoder decoder = DecoderFactory.get().binaryDecoder(inputStream, null);
T obj = datumReader.read(null, decoder);
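For reference, a self-contained sketch of that pattern with the needed Avro
imports (the generic helper signature is mine, not from the original message):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificRecordBase;

public static <T extends SpecificRecordBase> T deserialize(Class<T> clazz,
        byte[] byteArray) throws IOException {
    // Reader for the generated Avro class; decodes the binary-encoded payload.
    DatumReader<T> datumReader = new SpecificDatumReader<>(clazz);
    ByteArrayInputStream inputStream = new ByteArrayInputStream(byteArray);
    Decoder decoder = DecoderFactory.get().binaryDecoder(inputStream, null);
    return datumReader.read(null, decoder);
}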
I think this might be an issue related to having
auto.create.topics.enable=true (the default).
Try setting auto.create.topics.enable=false in server.properties.
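For clarity, that is a broker-side setting; the relevant server.properties
line would be:

# server.properties
# Don't create topics implicitly on first use; create them explicitly instead.
auto.create.topics.enable=false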
On Tue, 31 Jan 2017 at 17:29 Peter Kopias wrote:
> Hello.
>
> I've got a local virtual development environment, with:
> - kafka 0.
Yes.
See my answer below: it highlights the difference between joining two
KTables (a primary-key join only) and a KStream-KTable join (even though I
did mention the KStream-GlobalKTable join for this case).
However, KStream-KTable joins are not symmetric, because updates to the
KTable do not trigger a join computation.
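A small sketch of that asymmetry (topic names are illustrative): output is
produced only when a record arrives on the stream side; a table-side update
just changes the state that the next stream record will see.

KStream<String, String> events  = builder.stream("events");
KTable<String, String> profiles = builder.table("profiles", "profiles-store");

events.leftJoin(profiles, (event, profile) -> event + " @ " + profile)
      .to("enriched-events");

// Timeline for key "alice":
//   t1: profiles <- ("alice", "EU")    -> no join output, table state updated
//   t2: events   <- ("alice", "click") -> output ("alice", "click @ EU")
//   t3: profiles <- ("alice", "US")    -> no join output; later events see "US"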
The auto.create.topics.enable setting was intentionally left at true.
Also, auto topic creation should not cause warnings/exceptions on either
the client or the server side, I believe.
The odd correlation id error messages don't describe the situation very
well, and the "no leader" messages should never ever happen in a 1-node setup.
Hi Peter,
We are doing heavy microservice development (many of which communicate with
Kafka), and a typical service has an integration test suite that runs
under docker-compose. We found the fastest way to get our service up
and running is to disable topic auto-create and use the topic-creation
p
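A sketch of that kind of explicit setup step, using the stock CLI that ships
with 0.10.x (the ZooKeeper address and topic names are placeholders):

# Run once before the test suite starts, e.g. from a docker-compose init step:
bin/kafka-topics.sh --create --zookeeper zookeeper:2181 \
  --replication-factor 1 --partitions 1 --topic service-input
bin/kafka-topics.sh --create --zookeeper zookeeper:2181 \
  --replication-factor 1 --partitions 1 --topic service-output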
I am not sure I understand the complete scenario yet.
> I need to delete all items from that source that
> don't exist in the latest CSV file.
I cannot follow here. I thought your CSV files provide the data you want
to process. But it seems you also have a second source?
How does your Streams
Sorry for not being clear. Let me explain by example. Let's say I have two
sources, S1 and S2. The application that I need to write will load the files
from these sources every 24 hours. The result will be a KTable K.
For day 1:
S1=[A,B,C] => the result K = [A,B,C]
S2=[D,E,F] => K will be [A,B,C,D,E,F]
Hi All,
I am using Kafka 2.10-0.8.1 (Scala 2.10, Kafka 0.8.1) and I am seeing issues
while consuming messages using the Simple Consumer API.
As new messages are produced, they are not retrieved by the Simple Consumer
API, and the Kafka server console shows the following:
[2017-01-31 10:19:47,007] INFO Closing socket connection to /12
Thanks for the update.
What is not clear to me: why do you only need to remove C, but not
D, E, F, too, since source S2 does not deliver any data on day 2?
Furthermore, Interactive Queries (IQ) are designed to be used outside of
your Streams code, and thus, you should not use them in a SourceTask (not
sure if this would even be possible).
Sorry for the confusion: I stopped the example before processing the file
from S2.
So on day 2, if we get
S2=[D,E,Z], we will have to remove F and add Z; K = [A,B,D,E,Z].
To elaborate more: A, B, and C belong to S1 (items have a field stating
their source). Processing files from S1 should never delete items that
belong to S2.
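If I follow, the per-source replacement semantics look roughly like this (a
hypothetical sketch; the names, types, and S1's day-2 file [A,B] are my
assumptions, not from the thread):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Materialized view K: item -> the source it came from.
static final Map<String, String> k = new HashMap<>();

// A new snapshot for one source replaces only that source's items.
static void applySnapshot(String source, Set<String> items) {
    k.values().removeIf(source::equals);          // drop this source's old items
    items.forEach(item -> k.put(item, source));   // install the new snapshot
}

// Day 1: applySnapshot("S1", {A,B,C}); applySnapshot("S2", {D,E,F})
//        -> K = [A,B,C,D,E,F]
// Day 2: applySnapshot("S1", {A,B});   applySnapshot("S2", {D,E,Z})
//        -> C and F dropped, Z added: K = [A,B,D,E,Z]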
Hi,
I want to collect feedback on the idea of publishing docs for the current
trunk version of Apache Kafka.
Currently, docs are only published for official releases. Other projects
also have docs for the current SNAPSHOT version. So the question arises
whether this would be helpful for the Kafka community, too.
+1
On 1 Feb 2017 07:27, "Matthias J. Sax" wrote:
> Hi,
>
> I want to collect feedback about the idea to publish docs for current
> trunk version of Apache Kafka.
>
> Currently, docs are only published for official release. Other projects
> also have docs for current SNAPSHOT version. So the ques
Hi,
I'm using the Confluent Kafka images, and whenever I turn off a broker I get
the following messages:
[2017-02-01 02:44:48,310] WARN Failed to send SSL Close message
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
+1
On Tue, Jan 31, 2017 at 5:57 PM, Matthias J. Sax wrote:
> Hi,
>
> I want to collect feedback about the idea to publish docs for current
> trunk version of Apache Kafka.
>
> Currently, docs are only published for official release. Other projects
> also have docs for current SNAPSHOT version. So
Thanks for bringing this up, Matthias.
+1
On Wed, Feb 1, 2017 at 8:15 AM, Gwen Shapira wrote:
> +1
>
> On Tue, Jan 31, 2017 at 5:57 PM, Matthias J. Sax
> wrote:
> > Hi,
> >
> > I want to collect feedback about the idea to publish docs for current
> > trunk version of Apache Kafka.
> >
> > Curr