What happens when the new producer that is getting 70 MB/s is started on a
machine that is not part of the Kafka cluster?
Can you include your topic description/configuration, producer
configuration, and broker configuration?
On 9/24/15, 1:44 AM, "Prabhjot Bharaj" wrote:
>Hi,
>
>I would like to
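(For reference, a minimal sketch of the kind of producer configuration being
asked about, using the new Java producer from Scala. The broker list, topic
name, and tuning values are all placeholders, not taken from the thread:)

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    // Placeholder settings -- substitute your own brokers and serializers.
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092,broker2:9092")
    props.put("acks", "1")
    props.put("batch.size", "16384")
    props.put("linger.ms", "5")
    props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

    val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)
    producer.send(new ProducerRecord[Array[Byte], Array[Byte]]("my-topic", "payload".getBytes("UTF-8")))
    producer.close()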
Hi,
Thanks, very helpful.
It might be good to have only one tool, including a section in the
documentation about how to use it in a complex distributed environment.
Regards,
Markus
On 23.09.2015 13:51, Ben Stopford wrote:
Both classes work OK. I prefer the Java one simply because it has better out
Hello Apache Kafka team,
I would like to subscribe to your general questions mailing list.
Thank You,
Best Regards,
Dasun Polwatta
Hello there,
Hope you're all having a great time with Kafka. I am just about to start with
it. I am looking for an API that acts as a connection manager in my app: say,
getting a producer instance and publishing, getting the list of all possible
brokers, and so on. I know we have snippets for all of this in the documentation. But
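(A hedged sketch of what such a connection manager could look like in Scala,
sharing one lazily created producer across the app; every name and setting
below is illustrative, not from the thread:)

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    // Illustrative connection manager: one shared, lazily created producer.
    object KafkaConnectionManager {
      private lazy val producer: KafkaProducer[String, String] = {
        val props = new Properties()
        props.put("bootstrap.servers", "broker1:9092,broker2:9092") // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        new KafkaProducer[String, String](props)
      }

      def publish(topic: String, message: String): Unit =
        producer.send(new ProducerRecord[String, String](topic, message))

      def shutdown(): Unit = producer.close()
    }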
Hi Dasun Polwatta,
You can subscribe to these emails by sending an email to
users-subscr...@kafka.apache.org
On Thu, Sep 24, 2015 at 11:10 AM, Dasun Polwatta
wrote:
> Hello Apache Kafka team,
>
> I would like to subscribe to your general questions mailing li
Can anyone help with the issue below?
Thanks.
-Original Message-
From: Hema Bhatia [mailto:hema.bha...@apollo.edu]
Sent: Tuesday, September 22, 2015 12:57 PM
To: users@kafka.apache.org
Subject: ZkClient throwing NPEs
Hi,
I keep getting the exception below when Kafka tries to connect t
Hey Hema,
I'm not too familiar with ZkClient, but I took a look at the code and it
seems like there may still be a race condition around reconnects which
could cause the NPE you're seeing. I left a comment on the github issue on
the slim chance I'm not wrong:
https://github.com/sgroschupf/zkclient
Thanks Jason.
So what is the temporary workaround for this until it's fixed? For now, we just
restart the app server having this issue, but we keep seeing it time and
again.
-Original Message-
From: Jason Gustafson [mailto:ja...@confluent.io]
Sent: Thursday, September 24, 201
Hi,
I have ported receiver-less Spark Streaming for Kafka to Spark 1.2 and am
trying to run a Spark Streaming job to consume data from my broker, but I
am getting the following error:
15/09/24 20:17:45 ERROR BoundedByteBufferReceive: OOME with size 352518400
java.lang.OutOfMemoryError: Java heap
Hey Hema,
I'm a little surprised you're seeing this so frequently. Do you know why
the zookeeper connection is so unstable? If not, then it might be
worthwhile investigating a little bit to see if there are other ways to
mitigate the problem. Other than that, maybe we can give the ZkClient devs
a
Adding Cody and Sriharsha
On Thu, Sep 24, 2015 at 1:25 PM, Sourabh Chandak
wrote:
> Hi,
>
> I have ported receiver-less Spark Streaming for Kafka to Spark 1.2 and am
> trying to run a Spark Streaming job to consume data from my broker, but I
> am getting the following error:
>
> 15/09/24 20:17:4
That looks like the OOM is in the driver, when getting partition metadata
to create the direct stream. In that case, executor memory allocation
doesn't matter.
Allocate more driver memory, or put a profiler on it to see what's taking
up heap.
On Thu, Sep 24, 2015 at 3:51 PM, Sourabh Chandak
w
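(A hedged illustration of the advice above; "4g" is an arbitrary example value
and the app name is made up. Note that spark.driver.memory is only honored if
set before the driver JVM starts, so with spark-submit in client mode you
would pass --driver-memory instead:)

    import org.apache.spark.SparkConf

    // Raise driver memory; effective via SparkConf only in cluster deploy mode.
    val conf = new SparkConf()
      .setAppName("kafka-direct-stream") // placeholder
      .set("spark.driver.memory", "4g")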
Thanks Jason!
This exception is occurring mostly in our pre-prod environment, where we have
more network blips. However, I expected the application to come back up
gracefully; requiring restarts is kind of bad. For now, to handle
this scenario, I plan to monitor the zk connection time
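(One hedged way to watch the session state from the application, assuming the
two-method IZkStateListener of the 0.x ZkClient releases; the connect string,
timeouts, and log lines are placeholders:)

    import org.I0Itec.zkclient.{IZkStateListener, ZkClient}
    import org.apache.zookeeper.Watcher.Event.KeeperState

    val zkClient = new ZkClient("zk1:2181", 30000, 30000) // placeholder connect string/timeouts

    zkClient.subscribeStateChanges(new IZkStateListener {
      // Called on Disconnected/SyncConnected/Expired -- hook in alerting here.
      override def handleStateChanged(state: KeeperState): Unit =
        println("ZK state changed: " + state)

      // Called after a new session is created; re-register ephemeral nodes if needed.
      override def handleNewSession(): Unit =
        println("ZK session re-established")
    })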
We are having issues with producers and consumers frequently fully
disconnecting (from both the brokers and ZK) and reconnecting without any
apparent cause. On our production systems it can happen anywhere from every
10-15 seconds to 15-20 minutes. On our less beefy test systems and
developer lapto
I was able to get past this issue: I was pointing at the SSL port, whereas
SimpleConsumer should point to the PLAINTEXT port. But after fixing that I
am getting the following error:
Exception in thread "main" org.apache.spark.SparkException: java.nio.BufferUnderflowException
        at org.apache.spar
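(A hedged sketch of the fix described, constructing the old SimpleConsumer
against the broker's PLAINTEXT listener; host, port, timeouts, and client id
are placeholders:)

    import kafka.consumer.SimpleConsumer

    // The old SimpleConsumer speaks only the plaintext protocol, so it must
    // target the PLAINTEXT listener, not the SSL one.
    val consumer = new SimpleConsumer("broker1", 9092, 30000, 64 * 1024, "metadata-lookup")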
At the outset, this isn't about challenging the work that has been done,
primarily by folks at Confluent, on wrapping Kafka in a REST API. Clearly,
there is a use case for a REST service and they rose to the challenge.
That said, I am trying to evaluate the limitations of a REST service around
Hi
I am trying to prepare a Debian package for Kafka (kafka_2.11-0.8.3-SNAPSHOT).
I am using pdebuild to build it in a chrooted environment.
I am encountering an error when running the task ':core:compileScala'.
At the outset it looks like a permission error or path problem, but I am
unable to find a
Here is the code snippet, starting at line 365 in KafkaCluster.scala:

    type Err = ArrayBuffer[Throwable]

    /** If the result is right, return it, otherwise throw SparkException */
    def checkErrors[T](result: Either[Err, T]): T = {
      result.fold(
        errs => throw new SparkException(errs.mkString("\n")),
        ok => ok
      )
    }
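(For context, a hedged usage sketch; the getPartitions call and the kc
KafkaCluster instance are assumed from the surrounding code, and the topic
name is made up:)

    // Unwrap the Either, throwing a SparkException that aggregates every
    // accumulated error if the metadata lookup failed.
    val partitions = checkErrors(kc.getPartitions(Set("my-topic")))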
Not out of the box, no - I don't think you can use an attribute of the
posted JSON to specify topics for Kafka or a folder for HDFS.
For dynamically creating topics in Kafka, you would have to write some kind
of custom Kafka producer - the Kafka channel or sink in Flume requires a
Kafka topic to be
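(A hedged sketch of what such a custom producer could look like, routing each
event to a topic taken from one of its attributes; all names here are
illustrative:)

    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    // Route each event to the topic named in its headers, falling back to a
    // default topic when the attribute is absent.
    def routeEvent(producer: KafkaProducer[String, String],
                   headers: Map[String, String],
                   body: String): Unit = {
      val topic = headers.getOrElse("topic", "default-topic")
      producer.send(new ProducerRecord[String, String](topic, body))
    }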
Well, in general you can't currently use compressed messages in any topic
that has compaction turned on, regardless of whether or not you are using
Kafka-committed offsets. The log compaction thread will die either way.
There's only one compaction thread for the broker that runs on all topics
that
> On Sep 24, 2015, at 8:11 PM, Todd Palino wrote:
>
> Well, in general you can't currently use compressed messages in any topic
> that has compaction turned on, regardless of whether or not you are using
> Kafka-committed offsets. The log compaction thread will die either way.
> There's only one c
For now, that's the way it is. Historically, we've only monitored the lag
for our infrastructure applications. Other users are responsible for their
own checking, typically using the MaxLag MBean or some application-specific
metric. Besides the core, we've probably got a dozen or so consumers moved
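(For anyone scripting that check, a hedged sketch of reading a gauge MBean
from inside the consumer's JVM; the exact MaxLag object name varies by
consumer version and client id, so the one below is a guess -- verify it with
jconsole:)

    import java.lang.management.ManagementFactory
    import javax.management.ObjectName

    // Read the old high-level consumer's MaxLag gauge; the clientId property
    // is a placeholder.
    val mbs = ManagementFactory.getPlatformMBeanServer
    val name = new ObjectName("kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=myClient")
    val maxLag = mbs.getAttribute(name, "Value")
    println("MaxLag = " + maxLag)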