When I remove/comment the parent dependency, like in Module A's pom.xml, it
seems OK, and spark-streaming-kafka depends on kafka_2.11:0.8.2.1.
Module A's tree:
[INFO] +- org.apache.spark:spark-core_2.11:jar:2.1.0:provided
[INFO] | +- org.apache.avro:avro-mapred:jar:hadoop2:1.7.7:provided
[INFO] |
Can you please post the full output of maven dependency:tree from the
parent POM in both scenarios?
Thanks,
Liam Clarke
On Tue, 18 Dec. 2018, 3:26 pm big data wrote:
> Hi,
>
> No other dependency includes Kafka's jar.
>
> The project structure is:
>
> pom.xml
>
> |---Module A
>
>|--pom.xml
>
> |---M
Hi,
No other dependency includes Kafka's jar.
The project structure is:
pom.xml
|---Module A
|--pom.xml
|---Module B
|---Module C
When spark-streaming-kafka_2.11:1.6.3 is in Module A's pom.xml, the
dependency tree shows that it depends on kafka_2.11:0.8.2.1.
When we move spark-streaming
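Whether the modules still see spark-streaming-kafka after such a move depends on which section of the parent POM it lands in. Below is a minimal sketch with placeholder coordinates (both variants shown in one file only for illustration): a dependency under <dependencies> is inherited by every child module, while one listed only under <dependencyManagement> just pins the version and is pulled in only by modules that declare it themselves.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>          <!-- placeholder coordinates -->
  <artifactId>demo-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <modules>
    <module>module-a</module>
    <module>module-b</module>
    <module>module-c</module>
  </modules>

  <!-- Variant 1: every child module inherits this dependency -->
  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka_2.11</artifactId>
      <version>1.6.3</version>
    </dependency>
  </dependencies>

  <!-- Variant 2: only the version is managed; a module gets the
       dependency only if it declares it itself (without a version) -->
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.11</artifactId>
        <version>1.6.3</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>

Running mvn dependency:tree inside Module A under each variant should make it obvious whether kafka_2.11:0.8.2.1 is still pulled in transitively.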
Hi,
Is it a transitive dependency of any of your other dependencies?
Cheers,
Liam Clarke
On Tue, 18 Dec. 2018, 2:57 pm big data wrote:
> Hi,
> our project includes this dependency as follows:
>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming-kafka_2.11</artifactId>
>     <version>1.6.3</version>
> </dependency>
>
> From dependency tree, we can see it depende
Hi,
our project includes this dependency as follows:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka_2.11</artifactId>
    <version>1.6.3</version>
</dependency>
From the dependency tree, we can see it depends on kafka_2.11:0.8.2.1.
But when we move this dependency to the project's parent pom file, the de
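If the underlying concern is which Kafka version ends up on the classpath, one common approach (a sketch, not something taken from this thread) is to exclude the transitive kafka_2.11 artifact from spark-streaming-kafka_2.11 and declare the Kafka version you want explicitly; whether another Kafka version is actually compatible with Spark Streaming 1.6.3 still needs testing.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.11</artifactId>
  <version>1.6.3</version>
  <exclusions>
    <!-- drop the transitive kafka_2.11:0.8.2.1 -->
    <exclusion>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- then declare org.apache.kafka:kafka_2.11 yourself at the version you
     want, e.g. in the parent's <dependencyManagement>, and verify the
     result with mvn dependency:tree -->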
Hi, Everyone,
This is a reminder about the proposal deadline this Thursday.
Thanks,
Jun
On Tue, Dec 4, 2018 at 1:49 PM Jun Rao wrote:
> Hi, Everyone,
>
> We have two upcoming Kafka Summits, one in NYC and another in London. The
> deadline for submitting proposals is Dec 20 for both events.
Hi Claudia,
Anything useful in the log cleaner log files?
Cheers,
Liam Clarke
On Tue, 18 Dec. 2018, 3:18 am Claudia Wegmann wrote:
> Hi,
>
> thanks for the quick response.
>
> My problem is not that no new segments are created, but that segments
> with old data do not get compacted.
> I had to restart
Hi,
Can you please remove me from this mailing list.
Regards,
Komal Babu
Sourcing Administrator
ko...@qbs.co.uk
020 8733 7139
http://www.qbsd.co.uk
QBS Distribution,
7 Wharfside,
Rosemont Road,
Wembley, HA0 4QB
United Kingdom
-Original Message-
From: Claudia Wegmann
Sent: 17
Hi,
thanks for the quick response.
My problem is not that no new segments are created, but that segments with old
data do not get compacted.
I had to restart one broker because there was no disk space left. After
recreating all indexes etc., the broker recognized the old data and compacted it
c
Hello!
Please check whether the segment.ms configuration on the topic will help you
solve your problem.
https://kafka.apache.org/documentation/
https://stackoverflow.com/questions/41048041/kafka-deletes-segments-even-before-segment-size-is-reached
Regards,
Florin
segment.ms: This configuration
Dear kafka users,
I've got a problem on one of my Kafka clusters. I use this cluster with Kafka
Streams applications. Some of these stream apps use a Kafka state store.
Therefore, a changelog topic is created for those stores with cleanup policy
"compact". One of these topics is running wild for