Hi,
I am using confluent-3.0.0-2.11, with Kafka and Kafka Streams versions
org.apache.kafka:kafka_2.11:0.10.0.0-cp1 and
org.apache.kafka:kafka-streams:0.10.0.0-cp1 respectively. The problem seems
to be with null keys, because the original messages are not produced with
keys, and I am creating a key value
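A minimal sketch of one workaround, assuming the key can be derived from the value itself; the class name and the deriveKey helper are hypothetical:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class NullKeyWorkaround {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        // Source messages were produced without keys, so the key is null here.
        KStream<String, String> source =
            builder.stream(Serdes.String(), Serdes.String(), "input-topic");
        // Derive a non-null key from the value before any keyed operation,
        // so downstream (de)serializers never see a null key.
        KStream<String, String> keyed =
            source.map((key, value) -> new KeyValue<>(deriveKey(value), value));
        keyed.to(Serdes.String(), Serdes.String(), "keyed-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "null-key-workaround");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // 0.10.0.x Streams still requires a ZooKeeper connect string.
        props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
        new KafkaStreams(builder, new StreamsConfig(props)).start();
    }

    // Hypothetical helper: extract whatever field should act as the key.
    private static String deriveKey(String value) {
        return value.split(",")[0];
    }
}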
Hi Ismael,
thanks for the pointer to the latest WebSphere documentation - I wasn’t aware
of that release.
We currently have customers that run our software frontend on an older
WebSphere version that runs on Java 7 and push data to Kafka brokers in the
backend. Replacing Kafka brokers wouldn’t
Hi,
I want to create a Kafka cluster for HA.
Do we need to create 3 brokers, or is it okay if we create only 2? We are
using only 1 partition for every topic, so there is no parallelism while
fetching data.
Please suggest.
Thanks,
Snehalata
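For HA, the usual guidance is 3 brokers, so a topic can have replication factor 3 and keep accepting acks=all writes with one broker down. A sketch of creating such a topic programmatically, assuming the 0.9-era AdminUtils.createTopic(zkUtils, topic, partitions, replicationFactor, config) signature; host names are placeholders:

import java.util.Properties;
import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;

public class CreateHaTopic {
    public static void main(String[] args) {
        ZkUtils zkUtils = ZkUtils.apply("zk1:2181", 30000, 30000, false);
        try {
            Properties topicConfig = new Properties();
            // With 3 brokers and replication factor 3, one broker can fail
            // while min.insync.replicas=2 still allows acks=all writes.
            topicConfig.put("min.insync.replicas", "2");
            // Note: 0.10.x adds a RackAwareMode parameter to this method.
            AdminUtils.createTopic(zkUtils, "sensor-data", 1, 3, topicConfig);
        } finally {
            zkUtils.close();
        }
    }
}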
Hi All,
We are using Kafka 2.10_0.9 (the new version), but for the consumer we are
still using the old high-level and low-level APIs.
I am trying to fetch the earliest valid offset for a topic, but it returns
the latest offset if the data (log) has been deleted after a certain
interval (which is configured in the server properties).
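A sketch of fetching the earliest offset with the old low-level API, following the classic SimpleConsumer pattern; broker host, topic, and partition are placeholders. Note that once retention has deleted every segment, the earliest valid offset legitimately equals the latest offset, since both point past the deleted data:

import java.util.Collections;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class EarliestOffsetFetcher {
    public static void main(String[] args) {
        SimpleConsumer consumer =
            new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "offset-client");
        try {
            TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
            kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                Collections.singletonMap(tp,
                    new PartitionOffsetRequestInfo(
                        kafka.api.OffsetRequest.EarliestTime(), 1)),
                kafka.api.OffsetRequest.CurrentVersion(), "offset-client");
            OffsetResponse response = consumer.getOffsetsBefore(request);
            // After full retention-based deletion, this equals the latest offset;
            // that is expected behavior, not a bug.
            long earliest = response.offsets("my-topic", 0)[0];
            System.out.println("earliest valid offset = " + earliest);
        } finally {
            consumer.close();
        }
    }
}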
We have a lot of tooling that's still dependent on offsets being in
ZooKeeper, but we were hoping to upgrade to the new consumer to solve
another issue and would prefer not to have to do both at the same time.
On Tue, Jun 21, 2016 at 1:17 AM Gerard Klijs
wrote:
> No, why would you want to store the o
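One possible stopgap, sketched below: keep committing through the new consumer, but mirror each committed offset to the old ZooKeeper path that the existing tooling reads. This is an assumption-laden workaround rather than any built-in mechanism; the path layout is the one the old high-level consumer used:

import java.nio.charset.StandardCharsets;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkMarshallingError;
import org.I0Itec.zkclient.serialize.ZkSerializer;

public class ZkOffsetMirror {
    private final ZkClient zk = new ZkClient("zk1:2181", 30000, 30000,
        new ZkSerializer() {
            // The old high-level consumer stored offsets as plain strings.
            public byte[] serialize(Object data) throws ZkMarshallingError {
                return data.toString().getBytes(StandardCharsets.UTF_8);
            }
            public Object deserialize(byte[] bytes) throws ZkMarshallingError {
                return new String(bytes, StandardCharsets.UTF_8);
            }
        });

    // Call this right after KafkaConsumer#commitSync succeeds.
    public void mirror(String group, String topic, int partition, long offset) {
        String path = "/consumers/" + group + "/offsets/" + topic + "/" + partition;
        if (!zk.exists(path)) {
            zk.createPersistent(path, true); // create parent nodes as needed
        }
        zk.writeData(path, Long.toString(offset));
    }
}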
+1
On Tue, 21 Jun 2016 at 09:59 Marcus Gründler
wrote:
> Hi Ismael,
>
> thanks for the pointer to the latest WebSphere documentation - I wasn’t
> aware
> of that release.
>
> We currently have customers that run our software frontend on an older
> WebSphere version that runs on Java 7 and push d
Hi all,
We've got a problem with high CPU usage on a 0.9 client. We've got a monitoring
system that polls Kafka topics for metadata (to get the last message offset)
every so often, and this has started using very high CPU continuously. We're
seeing the following being spammed in the logs every
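For what it's worth, on 0.9 the last offset can also be read without repeated metadata requests, via assign/seekToEnd/position on a throwaway consumer. A sketch, with placeholder broker and topic names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LastOffsetProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "monitoring");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        try {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToEnd(tp); // 0.9/0.10.0 signature takes varargs
            // position() forces the actual offset lookup against the broker.
            long lastOffset = consumer.position(tp);
            System.out.println("log end offset = " + lastOffset);
        } finally {
            consumer.close();
        }
    }
}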
Now I changed the test producer call like this.
C:\development\kafka\kafka_2.11-0.10.0.0\kafka_2.11-0.10.0.0\bin\windows>.\kafka-console-producer.bat --broker-list localhost:8393 --topic test --producer.config ..\..\config\producer.properties
and updated producer.properties like this
security.pr
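Assuming the truncated setting is SSL-related, a sketch of the equivalent programmatic producer configuration; the broker address matches the command above, while the truststore path and password are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:8393");
        // Assumed SSL settings; adjust to whatever producer.properties contains.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "C:/ssl/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "hello"));
        }
    }
}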
Could you share your stack trace upon failure?
On Tue, Jun 21, 2016 at 12:05 AM, Unmesh Joshi
wrote:
> Hi,
>
> I am using confluent-3.0.0-2.11, with Kafka and Kafka Streams versions
> org.apache.kafka:kafka_2.11:0.10.0.0-cp1 and
> org.apache.kafka:kafka-streams:0.10.0.0-cp1 respectively. The problem
While working on upgrading from 0.8.2.1 to 0.10.0.0, I found out that
AdminUtils has changed -- and not in a backwards-compatible manner. I
gather this is not a public API since I can't find any Javadoc for it. So,
in 0.10.0.0 are there public replacements for:
AdminUtils.fetchTopicMetadataFro
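The incompatibility is mostly in the arguments: 0.8.2's AdminUtils.fetchTopicMetadataFromZk took an org.I0Itec.zkclient.ZkClient, while 0.9+ takes kafka.utils.ZkUtils, and in 0.10.0 the return type is MetadataResponse.TopicMetadata. A sketch of the 0.10.0 call, with those signatures assumed and a placeholder ZooKeeper address:

import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;
import org.apache.kafka.common.requests.MetadataResponse;

public class FetchMetadata {
    public static void main(String[] args) {
        // 0.8.2 took a ZkClient directly; 0.9+ wraps it in ZkUtils.
        ZkUtils zkUtils = ZkUtils.apply("zk1:2181", 30000, 30000, false);
        try {
            MetadataResponse.TopicMetadata metadata =
                AdminUtils.fetchTopicMetadataFromZk("my-topic", zkUtils);
            System.out.println(metadata);
        } finally {
            zkUtils.close();
        }
    }
}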
I'm using 0.9.0.1 consumers on 0.9.0.1 brokers. In a single Java service,
we have 4 producers and 5 consumers. They are all KafkaProducer and
KafkaConsumer instances (the new consumer.)
Since the 0.9 upgrade, this service is now OOMing after being up for a
few minutes. Heap dumps show >80MB of o
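If the heap dumps show the memory going to fetch buffers, one knob worth trying is the per-partition fetch size: each consumer can buffer up to max.partition.fetch.bytes for every assigned partition, which multiplies quickly with 5 consumers in one JVM. A sketch of the relevant setting (the cause here is only a guess):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedFetchConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "my-service");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Cap the per-partition buffer; the 0.9 default is 1048576 (1 MB),
        // so many partitions times many consumers can dominate the heap.
        props.put("max.partition.fetch.bytes", "262144");
        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // ... subscribe and poll as usual ...
        consumer.close();
    }
}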
Hi Chris,
Yes, `AdminUtils` is not public API. The plan is to introduce `AdminClient`
as part of KIP-4.
The Kafka protocol additions for `createTopic` and `deleteTopic` are
currently being discussed and it looks like they will be part of the next
Kafka release based on current progress.
The API
Ismael:
What is KIP-4? Until this is available, I think I'm stuck with
AdminUtils. Is there any (java)doc available for it, or am I going to
have to dig through the scala files and figure out what has changed? The
new REST interface is a possibility, but, if I recall, it does not
support a
On Wed, Jun 22, 2016 at 12:32 AM, Chris Barlock wrote:
>
> What is KIP-4?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
> Until this is available, I think I'm stuck with
> AdminUtils. Is there any (java)doc available for it, o
I could reproduce it with following steps. Adding Stacktrace in the end.
1. Create a stream and consume it without Windowing.
KTable aggregation = locationViews
    .map((key, value) -> {
        GenericRecord parsedRecord = parse(value);
        String parsedKey = parsedRecord.get("
A bit more investigation shows that because logging is always enabled in
both RocksDBKeyValueStoreSupplier and RocksDBWindowStoreSupplier, the
aggregated key/values get written to a topic in Kafka. RocksDBWindowStore
always stores keys with timestamp attached. RocksDBStore stores raw keys.
If the
Anybody have any idea on this?
Thanks
Pari
On 20 June 2016 at 14:36, Pariksheet Barapatre
wrote:
> Hello All,
>
> I have data coming from sensors into a Kafka cluster, in text format
> delimited by commas.
>
> How do I offload this data from Kafka to Hive periodically? I guess Kafka
> Connect should
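Kafka Connect is indeed one option: the Confluent HDFS sink connector can write topic data to HDFS and register the Hive partitions. A sketch of a possible connector config, with placeholder URLs; note the connector expects schema'd records, so comma-delimited text would first need a converter or a parsing step upstream:

name=hdfs-hive-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=sensor-data
hdfs.url=hdfs://namenode:8020
flush.size=1000
hive.integration=true
hive.metastore.uris=thrift://metastore:9083
schema.compatibility=BACKWARD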