x27;\x00\x00\x00\x00\x00\x00', headers=[], checksum=None,
serialized_key_size=4, serialized_value_size=6, serialized_header_size=-1)
With a binary key and value (the key and value seem to always be the same). Any
ideas?
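A minimal sketch for inspecting such a record, assuming the 4-byte key was written by something like Kafka's IntegerSerializer (which emits a big-endian 32-bit int); the 6-byte value does not match a standard numeric serializer, so the producer's serializer configuration is worth checking:

```python
import struct

def decode_int_key(raw: bytes) -> int:
    """Decode a 4-byte key as written by Kafka's IntegerSerializer
    (big-endian signed 32-bit int). Assumption: the producer actually
    used that serializer -- all-zero bytes decode to 0 either way."""
    return struct.unpack(">i", raw)[0]

# Raw key bytes as reported by the ConsumerRecord above
raw_key = b"\x00\x00\x00\x00"
print(decode_int_key(raw_key))  # -> 0
```

If the decoded numbers look nonsensical, the bytes were probably produced by a different (e.g. Avro or custom) serializer.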
Thanks
Pirow Engelbrecht
System Engineer
E. pirow.engelbre...@etion.co.
I’ve added a KeyValueProcessor that puts incoming Kafka key-value pairs into
the store. The problem is that if the application starts for the first time, it
does not process any key-value pairs already in the Kafka topic. Is there a way
around this?
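One common cause (assuming no offsets have been committed yet for the application's consumer group) is the `auto.offset.reset` setting: with the default of `latest`, a first-time consumer group starts at the end of the topic and skips everything already in it. A minimal config sketch:

```properties
# Sketch: make a consumer group with no committed offsets start from
# the beginning of the topic instead of the end
auto.offset.reset=earliest
```

In a Streams topology the same policy can also be set per source topic, e.g. via `Consumed.with(...).withOffsetResetPolicy(Topology.AutoOffsetReset.EARLIEST)`.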
Thanks
Pirow Engelbrecht
vents the topic from being consumed from the start. I also cannot use
streams.cleanUp() as this will reset all the sinks in my topology (my other sink
ingests records from the input topic).
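If the goal is to re-consume only one input topic without wiping the rest of the application's state, the Streams application reset tool can scope the reset to specific topics (by default it resets the named input topics to their earliest offsets). A command sketch — `<app-id>` and `<input-topic>` are placeholders, and the tool must be run while the application is stopped:

```shell
# Sketch: reset committed offsets for one input topic only;
# other source topics of the application are left untouched
bin/kafka-streams-application-reset.sh \
  --application-id <app-id> \
  --bootstrap-servers localhost:9092 \
  --input-topics <input-topic>
```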
Thanks
Pirow Engelbrecht
ports will be above 3
* There are several different ways of exposing a port externally, e.g.
through a load balancer
* Do you have a DNS entry that makes the Kafka service resolvable externally?
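Beyond the Service itself, an externally reachable Kafka broker also has to advertise an address that clients can resolve from outside the cluster. A minimal broker config sketch — `kafka.example.com` and the NodePort `32094` are placeholder values:

```properties
# Sketch: the broker listens on all interfaces but advertises the
# externally reachable DNS name and NodePort to clients
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://kafka:9092,EXTERNAL://kafka.example.com:32094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

Without this, clients connect to the NodePort but are then redirected to the broker's internal, unroutable address.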
Kind regards
Pirow Engelbrecht | Senior Design Engineer
Tel +27 12 678 9740 (ext. 9879)
Hi Sönke,
Thank you for the clarification – much appreciated.
Kind regards
Pirow Engelbrecht | Senior Design Engineer
Hi Sönke,
OK, thanks, so compacted topics are supported, but an exact replica (i.e. Kafka
records at the same offsets) of the original topic is not possible, as there is
nothing to replicate. Is my understanding correct?
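The "nothing to replicate" point can be illustrated with a toy model of log compaction: only the latest record per key survives, surviving records keep their original offsets, and the intermediate offsets are simply gone — so a replicating consumer can only ever observe the post-compaction view. A runnable sketch (simplified; real compaction operates on closed log segments):

```python
def compact(log):
    """Toy log compaction: keep only the latest (offset, key, value)
    per key. Surviving records keep their original offsets, leaving
    gaps where older values for a key used to be."""
    latest = {}
    for offset, key, value in log:
        latest[key] = (offset, key, value)
    # return in offset order, as the records would appear in the log
    return sorted(latest.values())

log = [(0, "a", 1), (1, "b", 2), (2, "a", 3)]
print(compact(log))  # offset 0 is gone; offsets 1 and 2 survive
```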
Thanks
Pirow Engelbrecht | Senior Design Engineer
an be used to replicate
compacted topics? I am especially interested in the case where a cluster has
been off-line for a few hours and has to catch up via replication, and the
topic has been compacted since it was last seen by the off-line cluster.
Thanks
Pirow Engelbrecht | Senior Design Engineer
Hello Gérald,
I have the exact same problem. The Mirrormaker 2.0 Javadoc documentation is only
slated for release 2.5.0 (see https://issues.apache.org/jira/browse/KAFKA-8930).
I am also prototyping Mirrormaker 2.0 and I have successfully run the
Mirrormaker 2.0 scripts (connect-mirror-maker.sh) u
.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
	at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45) ~[kafka-clients-2.3.0.jar!/:na]
	at org.apache.kafka.common.internals.KafkaFutureImpl.acc
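The InvalidReplicationFactorException above is consistent with Mirrormaker 2's internal topics defaulting to a replication factor of 3 while the target cluster has only one broker. For a single-broker test setup the factors can be lowered in the MM2 properties file — a sketch, with every factor set to 1 only because this is a test cluster:

```properties
# Sketch for a single-broker test cluster: lower the replication
# factor of replicated and internal topics from the default of 3
replication.factor=1
checkpoints.topic.replication.factor=1
heartbeats.topic.replication.factor=1
offset-syncs.topic.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
config.storage.replication.factor=1
```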
: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  type: NodePort
  ports:
  - name: nginx-port
    port: 80
    nodePort: 32080
    targetPort: 80
  selector:
    app: nginx
  externalIPs:
  - 172.17.8.220
Thanks
Pirow Engelbrecht | Senior Design Engineer