Hi everyone,
I have a question about the KafkaConsumer API with regard to committing
offsets (kafka-clients 0.9.0.1).
*The scenario -*
I am running with auto commit disabled because I want to implement a
retry mechanism and eventually transfer the message to another topic that
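A minimal sketch of that setup against the 0.9 Java client (the topic,
group id, and handle() method are made up for illustration); with auto
commit off, the offset is committed only after the polled records have
been handled:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "retry-demo");
            props.put("enable.auto.commit", "false"); // we commit manually below
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records)
                        handle(record); // on failure, bail out before committing
                    consumer.commitSync(); // marks the whole polled batch as consumed
                }
            }
        }

        static void handle(ConsumerRecord<String, String> record) { /* application logic */ }
    }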
On Friday 12 August 2016 08:45 PM, Oleg Zhurakousky wrote:
It hangs indefinitely in any container.
I don't think that's accurate. We have been running Kafka brokers and
consumers/producers in Docker containers for a while now, and they are
functional. Of course, you need to make sure you use t
Hi all,
we just had a case with Kafka 0.9 where an index rebuild for ~200M
segments took 45 seconds per segment on average. All indexes of the
partition were corrupt. There were 13 segments, so the full rebuild took
about 10 minutes (13 × 45 s ≈ 585 s).
After the rebuild, these are representative sizes:
% ll -h /data/xyz-0
-rw-r--r-- 1
Hi,
Each consumed record carries the topic, partition, offset, and the message
itself.
https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
You may use this to get the current offsets for a topic-partition combination.
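For example (a fragment assuming an already-configured KafkaConsumer):

    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        // every record carries its own coordinates
        System.out.printf("topic=%s partition=%d offset=%d value=%s%n",
                record.topic(), record.partition(), record.offset(), record.value());
    }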
Thanks
Sudev
On Tue, Aug 16, 2016 at 2:20 PM, Amir Z
This is not the case in my situation.
I am using an older version that doesn't have the subscribe model, and
processing messages can occur concurrently in multiple threads.
If I use the KafkaConsumer with multiple threads and partitions -
kafkaConsumer.run(5) //5 thread count -
How can I comm
Haven't used cmake in over 10 years, so I'm a bit lost..
cmake -G "Visual Studio 12 Win64" -DGFLAGS=1 -DSNAPPY=1 -DJEMALLOC=1 -DJNI=1
CMake Error: Could not create named generator Visual Studio 12 Win64
Please advise
Martin
> From: mathieu.fenn...@r
Hi,
Yesterday my company completed a “successful” migration from Kafka brokers
v0.9.0.1 to Kafka 0.10.0.
However, the migration can’t be considered completely successful, because
we accidentally lost our offsets. Fortunately our apps are designed to be
able to replay from the beginning of the topic.
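For anyone hitting the same situation: as far as I know, whether an app
replays from the beginning when its offsets are missing is governed by
auto.offset.reset, and the accepted values differ between the two consumers:

    # new Java consumer (0.9+)
    auto.offset.reset=earliest
    # old Scala consumer
    auto.offset.reset=smallest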
clj-kafka uses the old consumer APIs and offset storage in ZK. If I were
you I'd migrate to https://github.com/weftio/gregor which wraps the new
consumer API and stores offsets in Kafka.
I'm going to assume you didn't migrate the ZK state, based on this?
On 16 August 2016 at 12:15, Javier Holg
It is accurate, since it’s an API/implementation problem and therefore container
independent. Sure, if everything is configured correctly and the broker is
accessible then things do work, but try to shut down a consumer when the broker
is not accessible. And by shut down I am not implying shutting
Hey Martin,
I had to modify the -G argument to that command to include the visual
studio year. If you run "cmake /?", it will output all the available
generators. My cmake looked like:
cmake -G "Visual Studio 12 2013 Win64" -DJNI=1 ..
I think this is probably a change in cmake since the ro
Hello,
I have a very basic question.
I created a kafka topic and produced 10 messages using the
kafka-console-producer utility. When I consume messages from this topic, it
consumes 10 messages, which is fine. However, it reports that I have
processed a total of 11 messages, one more than the number of messages I
produced.
Hi Radek,
No, I'm not familiar with these tools. I see that Curator's TestingServer
looks pretty straightforward, but I'm not really sure what
kafka.utils.TestUtils
is. I can't find any documentation referring to it, and it doesn't seem
to be a part of any published maven artifacts in the Kaf
> From: mathieu.fenn...@replicon.com
> Date: Tue, 16 Aug 2016 06:57:16 -0600
> Subject: Re: DLL Hell
> To: users@kafka.apache.org
>
> Hey Martin,
>
> I had to modify the -G argument to that command to include the visual
> studio year. If you run "cmake /?", it will output all the available
>
Hey Harsha,
I noticed that you proposed that Storm should drop support for Java 7 in
master:
http://markmail.org/message/25do6wd3a6g7cwpe
It's useful to know what other Apache projects are doing in this regard, so
I'm interested in the timeline being proposed for Storm's transition. I
could not
Hi Guozhang,
Thanks for the feedback. What would you think about
including ProcessorTopologyTestDriver in a released artifact from Kafka
Streams in a future release? Or alternatively, what other approach would
you recommend for incorporating it into another project's tests? I can copy
it wholesa
I really don't know anything about how cmake works, so I can't explain
that error for you. Maybe it needs VS installed to be able to generate
those files, but I don't know.
You will definitely need to have native Windows build tools installed. You
can probably avoid having Visual Studio (the I
Mathieu,
The TestUtils class is available with the following artifact:
[org.apache.kafka/kafka_2.xx "0.10.0.0" :classifier "test"]
where 2.xx is your desired Scala version.
Usage (roughly translated from Clojure to Scala; quotes need redoing, copy
/ paste won't work):
val brokerCfg = TestUtils.
Hi Sam,
Thanks for your quick answer.
We are passing “offsets.storage = kafka” to clj-kafka.consumer.zk/consumer (
https://github.com/pingles/clj-kafka/blob/master/src/clj_kafka/consumer/zk.clj#L24
)
I would expect then that offsets would be stored in both Kafka and ZK
(according to the Kafka documentation).
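For reference, the old-consumer settings in play, as I read the
documentation (defaults may differ by version):

    offsets.storage=kafka
    # with offsets.storage=kafka, commit offsets to ZooKeeper as well
    dual.commit.enabled=true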
Amir,
We had a similar requirement to consume every message reliably; the approach
I picked was to push any message with unsuccessful consumption to a
secondary topic for later handling; in our case the messages/events
were non-dependent, so we would make a second attempt at consumption, and on
any
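In sketch form, with the Java producer (the topic name and handle() method
are hypothetical):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // On a failed consumption, hand the record to a secondary topic
    // so it can be attempted again later.
    static void consumeWithRetryTopic(ConsumerRecord<String, String> record,
                                      KafkaProducer<String, String> producer) {
        try {
            handle(record); // application logic
        } catch (Exception e) {
            producer.send(new ProducerRecord<>("events-retry", record.key(), record.value()));
        }
    }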
BTW we used 0.8.x
On 8/16/16, 9:51 AM, "Ajay Sharma" wrote:
>Amir,
>We had a similar requirement to consume every message reliably; the approach
>I picked was to push any message with unsuccessful consumption to a
>secondary topic for later handling; in our case the messages/events
>were non-d
Hi,
I’ve read, and become somewhat indoctrinated by, the possibility discussed in
"I Heart Logs" of having an event stream written to Kafka feeding a process
that loads a database. That process should, I think, store the last offset it
has processed in its downstream database rather than in ZooKeeper.
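A sketch of that pattern with the 0.9 consumer, where loadOffsetFromDb and
saveInOneTransaction are hypothetical placeholders for the database side:
assign the partition manually, seek just past the last offset recorded in
the database, and store each row together with its offset in one transaction:

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.TopicPartition;

    TopicPartition tp = new TopicPartition("events", 0);
    consumer.assign(Collections.singletonList(tp));
    consumer.seek(tp, loadOffsetFromDb(tp) + 1); // resume after the last processed offset
    while (true) {
        for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
            // the row and its offset commit or roll back together
            saveInOneTransaction(record.value(), record.offset());
        }
    }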
Mathieu,
FWIW here are some pointers to run embedded Kafka/ZK instances for
integration testing. The second block of references below uses Curator's
TestingServer for running embedded ZK instances. See also the relevant
pom.xml for how the integration tests are being run (e.g. disabled JVM
reuse).
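The Curator part really is tiny; the gist (with the embedded Kafka broker
setup elided):

    import org.apache.curator.test.TestingServer;

    // Throwaway embedded ZooKeeper for integration tests; picks a free port.
    try (TestingServer zk = new TestingServer()) {
        String zkConnect = zk.getConnectString();
        // ... start an embedded Kafka broker against zkConnect, run the test ...
    }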
Addendum:
> Unfortunately, Apache Kafka does not publish these testing facilities as
> maven artifacts -- that's why everyone is rolling their own.
Some testing facilities (like kafka.utils.TestUtils) are published via
maven, but other helpful testing facilities are not.
Since Radek provided a sni
About moving some streams test utils into a separate package: I think this
has been requested before with a filed JIRA
https://issues.apache.org/jira/browse/KAFKA-3625
Guozhang
On Tue, Aug 16, 2016 at 10:18 AM, Michael Noll wrote:
> Addendum:
>
> > Unfortunately, Apache Kafka does not publish
Java 7 is end of life. http://www.oracle.com/technetwork/java/eol-135779.html
+1
On Tue, Aug 16, 2016 at 6:43 AM Ismael Juma wrote:
> Hey Harsha,
>
> I noticed that you proposed that Storm should drop support for Java 7 in
> master:
>
> http://markmail.org/message/25do6wd3a6g7cwpe
>
> It's usef
Hi, I have a problem for which I am not able to find the solution. Below is
the problem statement:
I have two Kafka Streams processors, say P1 and P2; both want to read the
same message(s) from the same topic, say T. Topic T has only one partition
and contains some configuration information and
For your information, I am using Confluent 3.0.0 (which includes the Kafka 0.10 Streams API)
On Wed, Aug 17, 2016 at 10:23 AM, Deepak Pengoria wrote:
> Hi, I have a problem for which I am not able to find the solution. Below
> is the problem statement:
>
> I have two Kafka Streams processors, say P1 and P2
You could create another partition in topic T, and publish the same message to
both partitions. You would have to configure P2 to read from the other
partition. Or you could have P1 write the message to another topic and
configure P2 to listen to that topic.
-David
On 8/16/16, 11:54 PM, "Dee
Hi,
I am wondering: why not have two different group IDs for the processors? This
would ensure that both P1 and P2 read each message from the topic.
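Illustrated with the plain Java consumer (group names made up); in Kafka
Streams the analogous knob is application.id, from which the consumer group
is derived:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties common = new Properties();
    common.put("bootstrap.servers", "localhost:9092");
    common.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    common.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    Properties p1 = new Properties(); p1.putAll(common); p1.put("group.id", "processor-p1");
    Properties p2 = new Properties(); p2.putAll(common); p2.put("group.id", "processor-p2");

    // Two consumers in different groups each receive every message from T,
    // even though T has a single partition.
    new KafkaConsumer<String, String>(p1).subscribe(Collections.singletonList("T"));
    new KafkaConsumer<String, String>(p2).subscribe(Collections.singletonList("T"));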
- Tanay
On Wed 17 Aug, 2016 11:06 am David Garcia, wrote:
> You could create another partition in topic T, and publish the same
> message to both par