Are you using ZooKeeper 3.4.11?
Please take a look at ZOOKEEPER-2976.
Can you pastebin more of the log before the NoRouteToHostException?
Cheers
On Thu, Feb 8, 2018 at 11:39 AM, Raymond, Shawn P CTR USARMY NETCOM (US) <
shawn.p.raymond@mail.mil> wrote:
> Afternoon all,
>
> I was wondering if any
For #1, there is the record-size-avg metric
Not sure about #2
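In case it helps, a minimal sketch (the producer and its types are illustrative, not from your setup) of reading that metric off a live producer; the same value is also exposed over JMX under the producer-metrics group:

  import org.apache.kafka.clients.producer.KafkaProducer
  import scala.collection.JavaConverters._

  // Sketch: scan the producer's metrics map for record-size-avg, which
  // reports the average record size in bytes (divide by 1024 for KB).
  def printAvgRecordSize(producer: KafkaProducer[String, String]): Unit =
    producer.metrics().asScala.foreach { case (name, metric) =>
      if (name.name == "record-size-avg")
        println(s"${name.group}/${name.name} = ${metric.metricValue}")
    }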
On Thu, Feb 8, 2018 at 10:28 AM, Pawan K wrote:
> Hi,
> I am currently trying to research answers for the following questions. Can
> you please let me know where/how I could find these in the configuration.
>
> 1) Average record size (KB)
Hi -
I was going through this example at
https://github.com/confluentinc/kafka-streams-examples/blob/3.3.x/src/test/scala/io/confluent/examples/streams/StreamToTableJoinScalaIntegrationTest.scala,
especially the leftJoin part
https://github.com/confluentinc/kafka-streams-examples/blob/3.3.x/src/te
I'm not sure I can answer your question, but may I pose another in
return: why do you feel having a memory mapped log file would be a
good thing?
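(For anyone following along, a minimal sketch of what memory mapping a segment file means at the java.nio level -- the file name below is illustrative; Kafka's own mmap usage for index files lives in kafka.log.AbstractIndex:)

  import java.io.RandomAccessFile
  import java.nio.channels.FileChannel

  // Map the file's pages into the process address space, so reads become
  // plain memory access instead of read() syscalls.
  val raf = new RandomAccessFile("00000000000000000000.index", "r")
  val mmap = raf.getChannel.map(FileChannel.MapMode.READ_ONLY, 0, raf.length)
  val relativeOffset = mmap.getInt(0) // first 4 bytes of an offset-index entry
  raf.close()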
On 09/02/2018, YuFeng Shen wrote:
> Hi Experts,
>
> We know that Kafka uses memory mapped files for its index files; however,
> its log files don't use
From: Raymond, Shawn P CTR USARMY NETCOM (US)
Sent: Thursday, February 8, 2018 2:39 PM
To: users@kafka.apache.org
Subject: Question Related to Zookeeper and kafka java error message.
Afternoon all,
I was wondering if anyone has seen the following messages appear
If I read the code correctly, the operation on this line prepares the input
for the (stringSerde, stringSerde) specified on line 142:
  .leftJoin(userRegionsTable, (clicks: Long, region: String) =>
    (if (region == null) "UNKNOWN" else region, clicks))
FYI
On Sat, Feb 10, 2018 at 11:00 AM, Debasish Ghosh wrote:
Looking at
https://github.com/confluentinc/kafka-streams-examples/blob/3.3.x/src/test/scala/io/confluent/examples/streams/StreamToTableJoinScalaIntegrationTest.scala#L148,
it seems that the leftJoin generates a KStream[String, (String, Long)],
which means the value is a tuple of (String, Long) .. I
Please read the javadoc:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/Consumed.java
and correlate with the sample code.
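In short, a minimal sketch against the 1.0 API (the topic name is made up): Consumed.with pins the source serdes instead of falling back to the config defaults:

  import org.apache.kafka.common.serialization.Serdes
  import org.apache.kafka.streams.{Consumed, StreamsBuilder}

  val builder = new StreamsBuilder()
  // Key and value serdes are fixed here, overriding default.key.serde /
  // default.value.serde from the streams configuration.
  val userClicks = builder.stream("user-clicks",
    Consumed.`with`(Serdes.String(), Serdes.Long()))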
Thanks
On Sat, Feb 10, 2018 at 1:10 PM, Debasish Ghosh
wrote:
> Looking at
> https://github.com/confluentinc/kafka-streams-
> exa
Hi all,
In Kafka: The Definitive Guide, pages 200-201, two parameters of
kafka.tools.DumpLogSegments appear to not really work ...
the --index-sanity-check argument
the --print-data-log
example:
./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
logs/kafka-0/customer-topic-0/0
For --index-sanity-check, according to dumpIndex():
if (indexSanityOnly) {
  index.sanityCheck
  println(s"$file passed sanity check.")
}
Do you see the print above?
Which release of Kafka are you using?
Cheers
On Sat, Feb 10, 2018 at 1:54 PM, adrien ruffie
wrote:
> Hi all,
Not really ... just the same output, like this:
kafka_2.11-1.0.0$ ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
logs/kafka-1/customer-topic-0/.log --index-sanity-check
Dumping logs/kafka-1/customer-topic-0/.log
Starting offset: 0
baseOffset: 0
I think this was due to the type of file you fed to the tool.
To use --index-sanity-check, you need to supply a file with the following
suffix:
val IndexFileSuffix = ".index"
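Something like this should exercise the check (the zero-padded base offset below is just the usual name for a first segment; use whatever segment name ls actually shows):

  ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files \
    logs/kafka-1/customer-topic-0/00000000000000000000.index --index-sanity-check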
On Sat, Feb 10, 2018 at 2:09 PM, adrien ruffie
wrote:
> Not really ... just the same output, like this:
>
>
> kafka_2.11
Yes, you're right! But in the book, the .log and .index files were both
provided on the command line.
But for me I get this output ...
/home/adryen/Java/kafka_2.11-1.0.0/bin/kafka-run-class.sh
kafka.tools.DumpLogSegments --files
./.index,.log --index-sanity-check
userClicksJoinRegion is never serialized...
It's the result of the join, and the join only (de)serializes its input in
the internal stores.
The output is forwarded in-memory to a consecutive map and returns
`clicksByRegion`, which is [String, Long].
-Matthias
On 2/10/18 1:17 PM, Ted Yu wrote:
> Please
The inputs to the leftJoin are the stream with [String, Long] and the table
with [String, String]. Is the default serializer (I mean from the config)
used for [String, String]? Then how does the [String, Long] serialization
work?
I guess the basic issue that I am trying to understand is how the
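(For what it's worth, a small sketch of the 1.0-era config those defaults come from; any operator not handed explicit serdes, e.g. a leftJoin without a Joined, falls back to these:)

  import java.util.Properties
  import org.apache.kafka.common.serialization.Serdes
  import org.apache.kafka.streams.StreamsConfig

  val props = new Properties()
  // Used wherever an operator is not given explicit serdes.
  props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
    Serdes.String().getClass.getName)
  props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
    Serdes.String().getClass.getName)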