Hi users,
My project is built on Spring 3.0.5.RELEASE, and we are planning to use
Kafka for new requirements. I am trying to use spring-kafka (from 1.0.x to
1.1.x), but it does not work with Spring 3.0.5.RELEASE. The application is
throwing "class not found exception
org.springframework.
If you want independent Connect clusters backed by the same Kafka cluster,
they need independent values for the config/offset/status topics. The Kafka
Connect framework doesn't provide any namespacing of its own, so if you
used the same topics in the same Kafka cluster, the values from different
Connect clusters would collide.
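For illustration, a minimal sketch of how two distributed workers might be
kept separate (the group.id values and topic names below are made up for
the example):

# worker-a.properties
group.id=connect-cluster-a
offset.storage.topic=connect-offsets-a
config.storage.topic=connect-configs-a
status.storage.topic=connect-status-a

# worker-b.properties
group.id=connect-cluster-b
offset.storage.topic=connect-offsets-b
config.storage.topic=connect-configs-b
status.storage.topic=connect-status-b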
Dear *
I am facing an issue with Kafka on Windows: the log directory size keeps
increasing day by day.
Currently the traffic on my Kafka setup is minimal, but if this log
directory growth is not fixed it is going to become a serious issue.
My Kafka setup:
- 2 Windows servers
- ka
If I were to run multiple distributed Connect clusters (so with different
group.id values), does each Connect cluster need its own
offset.storage.topic, config.storage.topic and status.storage.topic? Or can
they safely be shared between the clusters?
thanks!
koert
Ah okay, that makes sense. It also explains why for a distributed source I
actually had to set it twice:
security.protocol=SASL_PLAINTEXT
producer.security.protocol=SASL_PLAINTEXT
If anyone runs into this issue and just wants it to work, this is what is
in my configs now:
security.protocol=SASL_PLAINTEXT
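For context, a hedged sketch of the relevant section of a distributed
worker config with SASL; only security.protocol and the producer-prefixed
override are confirmed by this thread, the consumer-prefixed override and
the sasl.mechanism lines are assumptions added for illustration:

# worker-level security for the internal config/offset/status clients
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
# override for the producers used by source connectors
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.mechanism=PLAIN
# override for the consumers used by sink connectors
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.mechanism=PLAIN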
I would start here: http://docs.confluent.io/3.1.0/streams/index.html
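To give a rough idea of questions 1 and 3: the local store is queried
through a Java API (not SQL), with point lookups and range scans. A minimal
sketch against the 0.10.1 interactive-queries API (the store name
"word-counts" and the types are made up for the example):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// "streams" is assumed to be an already-running KafkaStreams instance
ReadOnlyKeyValueStore<String, Long> store =
    streams.store("word-counts", QueryableStoreTypes.<String, Long>keyValueStore());

// point lookup by key
Long count = store.get("kafka");

// range scan over a key interval
KeyValueIterator<String, Long> range = store.range("a", "m");
while (range.hasNext()) {
    KeyValue<String, Long> next = range.next();
    System.out.println(next.key + " -> " + next.value);
}
range.close();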
On 11/26/16, 8:27 PM, "Alan Kash" wrote:
Hi,
New to Kafka land.
I am looking into the Interactive Queries feature, which transforms topics
into tables with history - neat!
1. What kind of queries can we run on the store? Point or range?
2. Is indexing supported? Primary or secondary?
3. Query language - SQL? Custom Java native query?
Avro JSON encoding is a wire-level format. The AvroConverter accepts Java
runtime data (e.g. primitive types like Strings & Integers, Maps, Arrays,
and Connect Structs).
The component that most closely matches your needs is Confluent's REST
proxy, which supports the Avro JSON encoding when receiving
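To illustrate what "Java runtime data" means for the AvroConverter, a small
hedged sketch of building a Connect Struct (the schema and field names are
invented for the example):

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

// define a schema for the record value
Schema valueSchema = SchemaBuilder.struct().name("example.User")
    .field("name", Schema.STRING_SCHEMA)
    .field("age", Schema.INT32_SCHEMA)
    .build();

// build a Struct matching that schema; this runtime object is what a
// converter such as the AvroConverter serializes, not Avro JSON text
Struct value = new Struct(valueSchema)
    .put("name", "alice")
    .put("age", 30);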
I think you're seeing one of the confusing name changes between old and new
consumers. A quick grep suggests that you are correct that the parameter
for the old consumer is fetch.wait.max.ms, but the parameter for the new
consumer is fetch.max.wait.ms. Since the link you gave is for the new
consumer
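For quick reference, the two names side by side (the 500 ms value is just
an example, not necessarily the default for both consumers):

# old (Scala) consumer
fetch.wait.max.ms=500
# new (Java) consumer
fetch.max.wait.ms=500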
Kevin,
Generally you're right that mirroring, whether with MirrorMaker or
Confluent's Replicator, shouldn't be too expensive wrt CPU. However, do be
aware that in both cases, if you are using compression you'll need to
decompress and recompress due to the way they copy data. This could
possibly in
Are you sure you have not produced any other data into that topic, e.g.
perhaps you were testing the regular kafka-console-producer before? This
would cause it to fail on the non-Avro messages (as Dayong says, because
the initial magic byte mismatches).
Can you try starting the consumer first with
The REST proxy cannot guarantee that if there are messages in Kafka it will
definitely return them. There will always be some latency between the
request to the REST proxy and fetching data from Kafka, and because of the
way the Kafka protocol works this could be delayed by the fetch timeout.
The r
Yes, that's correct. For reference you can just take a look at the
DefaultPartitioner, which does nearly the same (with additional logic to do
round robin when there isn't a key):
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartit
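As a rough illustration of the idea (a simplified sketch, not the actual
DefaultPartitioner code, which uses murmur2 hashing and a round-robin
fallback for null keys):

import java.util.Arrays;

public class SimpleKeyPartitioner {
    // hash the key bytes and map the result onto the partition count
    public static int partition(byte[] keyBytes, int numPartitions) {
        int hash = Arrays.hashCode(keyBytes);
        // mask off the sign bit so the modulo result is non-negative
        return (hash & 0x7fffffff) % numPartitions;
    }
}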
Jens,
Sorry, I'm very late to this thread but figure it might be worth following
up since I think this is a cool feature of the new consumer but isn't well
known. You actually have *quite* a bit of flexibility in controlling how
partition assignment happens. The key hook is the
partition.assignment.strategy config.
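For example, a one-line sketch using one of the assignors that ship with
the new consumer:

partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor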
When a key is available, you generally include it because you want all
messages with the same key to always end up in the same partition. This
allows all messages with the same key to be processed by the same consumer
(e.g. allowing you to aggregate all data for a single user if you key on
user ID)
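A tiny sketch of producing with a key (the topic name and value are
illustrative):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// "producer" is assumed to be an already-configured KafkaProducer<String, String>.
// Keying every event by userId means all of this user's events go to the
// same partition, so a single consumer sees them in order.
String userId = "user-42";
producer.send(new ProducerRecord<String, String>("user-events", userId, "clicked-checkout"));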
Koert,
I think what you're seeing is that there are actually 3 different ways
Connect can interact with Kafka. For both standalone and distributed mode,
you have producers and consumers that are part of the source and sink
connector implementations, respectively. Security for these is configured
I know that the Kafka team is working on a new way to reason about time. My
team's solution was not to use punctuate... but this only works if you have
guarantees that all of the tasks will receive messages, if not all the
partitions. Another solution is to periodically send canaries to all
partitions.
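As a rough sketch of the "don't rely on punctuate" idea (an illustration
against the 0.10.x Processor API, not the poster's actual code): do the
periodic work from process(), driven by record timestamps, instead of
depending on punctuate() being invoked:

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class TimestampDrivenProcessor implements Processor<String, String> {
    private static final long INTERVAL_MS = 60_000L;
    private ProcessorContext context;
    private long lastEmitMs = -1L;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        long recordTs = context.timestamp();
        if (lastEmitMs < 0) {
            lastEmitMs = recordTs;
        }
        // do the periodic work whenever enough record time has passed
        if (recordTs - lastEmitMs >= INTERVAL_MS) {
            // ... emit/flush aggregated state here ...
            lastEmitMs = recordTs;
        }
        // ... per-record processing ...
    }

    @Override
    public void punctuate(long timestamp) {
        // intentionally unused in this approach
    }

    @Override
    public void close() { }
}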
Our community does not have a roadmap as such, but there are a few
initiatives that are currently being worked on and are likely to be
included in 2017 releases:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
If members of the community have additional improvement
Hello,
I was wondering if anyone from Kafka (users/dev or commits) can send or
direct me to the Kafka software roadmap for 2017 and onwards.
Let me know if I need to subscribe or if this is available somewhere on
your website.
Greatly appreciated!
Costa Tsirbas
514.443.1439
Hi Frank,
If you are running on a single node then the RocksDB state should be
re-used by your app. However, it relies on the app being cleanly shut down
and on the existence of ".checkpoint" files in the state directory for the
store, i.e., /tmp/kafka-streams/application-id/0_0/.checkpoint. If the fi
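For reference, a hedged sketch of the settings that determine that path
(the values shown are the defaults implied by the message):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// the application id becomes part of the state path: <state.dir>/<application.id>/<task>/
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "application-id");
// default state directory; override it to keep state somewhere more durable than /tmp
props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams");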