Thank you Apostolis Glenis.
On Thu, Dec 14, 2017 at 7:30 PM, Apostolis Glenis
wrote:
> I have also created a monitoring application for Kafka that uses
> prometheus.
> You can look at the source code here:
>
> https://github.com/aglenis/kafka_monitoring_pandas
>
> 2017-12-13 9:53 GMT+02:00 Irtiz
Hello,
I’ve been trying most of the afternoon to get Kafka installed and running the
basic quick start.
I am running into the following errors related to firing up zookeeper. From
the kafka directory:
Andreas-iMac:kafka_2.11-1.0.0 Andrea$ bin/zookeeper-server-start.sh
config/zookeeper.proper
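For reference, the quickstart sequence from the Kafka documentation is the following (assuming the stock config files shipped inside the Kafka distribution directory):

```shell
# Start ZooKeeper first, using the config shipped with the distribution
bin/zookeeper-server-start.sh config/zookeeper.properties

# Then, in a second terminal, start the Kafka broker
bin/kafka-server-start.sh config/server.properties
```

If the broker is started before ZooKeeper is up, it will fail with connection errors, so the order matters.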
I am afraid that atm there is no good support for this... :(
However, we plan to release official test utilities soon (planned for
v1.1) that should contain proper support for punctuations.
So stay tuned.
-Matthias
On 12/15/17 7:31 AM, Tom Wessels wrote:
> Howdy. I've been using the Process
It's not recommended to write a custom partitioner because it's pretty
difficult to write a correct one. There are many dependencies and you
need deep knowledge of Kafka Streams internals to get it right.
Otherwise, your custom partitioner breaks Kafka Streams.
That is the reason why it's not docu
Hello, I want to get the start and end offsets of a partition from Kafka via an
API such as AdminClient. How can I do this?
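One way to get this from the command line (not AdminClient, but it answers the same question) is the GetOffsetShell tool shipped with Kafka; `--time -2` returns the earliest offset and `--time -1` the latest. Broker address and topic name below are placeholders:

```shell
# earliest (start) offset per partition
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -2

# latest (end) offset per partition
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -1
```

Programmatically, the Java consumer exposes the same information via `KafkaConsumer#beginningOffsets` and `KafkaConsumer#endOffsets`.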
Howdy. I've been using the ProcessorTopologyTestDriver for testing my Kafka
Streams topologies, and it's worked great. But now I'd like to test the
punctuation functionality in my topology, and I don't see anything in
ProcessorTopologyTestDriver that allows for that. The KStreamTestDriver has
a pun
I am using Kafka 0.11.0.1 and Kafka seems to have recovered by itself.
I will post my findings.
> On Dec 14, 2017, at 2:18 PM, Ted Yu wrote:
>
> Can you look at the log from the controller to see if there is some clue
> w.r.t. partition 82?
> Was unclean leader election enabled?
>
> BTW which rel
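A sketch of how one might check the unclean-leader-election setting being asked about (file paths, broker address, and topic name here are assumptions):

```shell
# broker-level default (path to server.properties is an assumption)
grep unclean.leader.election.enable config/server.properties

# per-topic override, if any, shows up under "Configs:" in the describe output
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic
```

The controller log (`controller.log` under the broker's log directory) is where leader-election decisions for individual partitions are recorded.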
Hi,
After upgrading to 1.0 we're getting strange producer/broker behaviour not
experienced on <1.0.
As a test we run a single-threaded producer just sending "TEST" against our
cluster with the following producer settings, on a topic with replicas=3
and min.isr=2:
linger.ms=10
acks=all
retries=10
OK. tnx!
On Fri, 15 Dec 2017 at 15:08 Damian Guy wrote:
> I believe that just controls when the segment gets deleted from disk. It is
> removed from memory before that. So I don't believe that will help.
>
> On Fri, 15 Dec 2017 at 13:54 Wim Van Leuven <
> wim.vanleu...@highestpoint.biz>
> wrote:
I believe that just controls when the segment gets deleted from disk. It is
removed from memory before that. So I don't believe that will help.
On Fri, 15 Dec 2017 at 13:54 Wim Van Leuven
wrote:
> So, in our setup, to provide the historic data on the platform, we would
> have to define all topic
So, in our setup, to provide the historic data on the platform, we would
have to define all topics with a retention period of the business time we
want to keep the data. However, on the intermediate topics, we would only
require the data to be there as long as necessary to be able to process the
da
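If shorter retention on just the intermediate topics is acceptable, it can be set per topic without touching the broker-wide default; the topic name and the 1-hour value below are only examples:

```shell
# set retention.ms only on the intermediate topic
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-intermediate-topic \
  --add-config retention.ms=3600000
```

Topic-level `retention.ms` overrides the broker's `log.retention.hours`, so the source topics keep their long business retention while intermediate data is purged quickly.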
Is it really? I checked some records on Kafka topics using command-line
consumers to print keys and timestamps, and the timestamps were logged as
CreateTime:1513332523181
But that would explain the issue. I'll adjust the retention on the topic
and rerun.
Thank you already for the insights!
-wim
On Fri,
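For reference, the console consumer can print the record timestamp alongside each record, which is presumably how the `CreateTime:` output above was produced (broker address and topic name are placeholders):

```shell
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --from-beginning \
  --property print.timestamp=true --property print.key=true
```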
Hi,
It is likely due to the timestamps you are extracting and using as the
record timestamp. Kafka uses the record timestamps for retention. I suspect
this is causing your segments to roll and be deleted.
Thanks,
Damian
On Fri, 15 Dec 2017 at 11:49 Wim Van Leuven
wrote:
> Hello all,
>
> We are
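One knob relevant to the diagnosis above: a topic can be switched from `CreateTime` to `LogAppendTime`, so retention is driven by the broker's append time rather than by extracted record timestamps (topic name is a placeholder):

```shell
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config message.timestamp.type=LogAppendTime
```

With `LogAppendTime`, old embedded timestamps can no longer cause freshly written segments to be considered expired.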
Hello all,
We are running some Kafka Streams processing apps over Confluent OS
(v3.2.0) and I'm seeing unexpected but 'consistent' behaviour regarding
segment and index deletion.
So, we have a topic 'input' that contains about 30M records to ingest. A
1st processor transforms and pipes the data on
You would add new brokers to the cluster, and then do a partition
reassignment to move some partitions to the new broker.
In the simplest example:
Say you have 1 topic with 3 partitions.
partition 0: brokers: 1,2
partition 1: brokers: 2,3
partition 2: brokers: 3,1
If you added 3 more brokers, y
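The reassignment itself is driven by a JSON file fed to the reassignment tool. A minimal sketch for moving one partition onto new brokers (broker ids 4 and 5 and the topic name are examples, not taken from the thread):

```shell
cat > reassign.json <<'EOF'
{"version": 1,
 "partitions": [
   {"topic": "my-topic", "partition": 0, "replicas": [4, 5]}
 ]}
EOF

# start the data movement
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --execute

# check progress / completion
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --verify
```

The tool can also generate a candidate plan for you (`--generate` with a topics-to-move file) if you prefer not to hand-write the JSON.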
Hi,
I want to use a custom partitioner in Streams, but I couldn't find it in
the documentation. I want to make sure that during the map phase, the keys
produced adhere to the customized partitioner.
-Sameer.
Another interesting datapoint:
Taking a deeper look at partition 21:
brann@kafka1:/data/kafka/logs/__consumer_offsets-21$ ls -la
total 20176
drwxr-xr-x    2 kafka kafka   4096 Dec 15 08:11 .
drwxr-xr-x 1605 kafka kafka 167936 Dec 15 08:31 ..
-rw-r--r--    1 kafka kafka      0 Dec 15 08:03 0
on `kafka_2.11-1.0.1-d04daf570` we are upgrading the log format from
0.9.0.1 to 0.11.0.1 and after the upgrade have set
inter.broker.protocol.version=1.0
log.message.format.version=0.11.0.1
We have applied this upgrade to 5 clusters by upgrading broker 1, leaving
it for a day, then coming back wh