Hi Matthias,
thanks again for your reply. I'll try to take some time now and make some
more exhaustive remarks.
On 09.07.2017 21:48, Matthias J. Sax wrote:
> I think we do have a very good discussion and people openly share their
> ideas. So I am not sure why you are frustrated (at least I get this
> impression).
> Maybe it might be best if you propose an API change by yourself similar
> to what Damian and Guozhang did (to whatever extent your time constraint
I have posted a comment in the JIRA area for KAFKA-1194. It would be
appreciated if any user/dev could comment on whether this solution is
sustainable.
This is ONLY for Windows - so minor checks/modifications using
Os.isWindows() are probably required.
On 7 July 2017 at 09:21, Manikumar wrote:
> pl check
I'll try to answer this for you. I'm going to assume you are using the
pre-packaged Kafka Connect distro from Confluent.
org.apache.kafka.connect.data.Schema is an abstraction of the type
definition for the data being passed around. How that is defined
generally falls to the connector being used.
Hi Ismael,
Gotcha, will do. Okay, reading the docs you linked, that may explain what
we're seeing. When we upgraded to 0.10.0, we did not upgrade the clients
from 0.9.0.1, so while the message format is the default--in this case,
0.10.0--the message format expected by the consumers is pre-0.10.0.
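This mismatch is exactly the case the upgrade notes cover: until all clients are on 0.10.x, the broker can be told to keep writing the older message format so it does not have to down-convert every fetch for 0.9.x consumers. A hedged server.properties sketch based on the 0.10.0 upgrade documentation (the version values are illustrative for this thread's setup):

```properties
# Speak the new inter-broker protocol after the rolling upgrade...
inter.broker.protocol.version=0.10.0
# ...but keep the on-disk message format at the pre-upgrade version until
# all consumers are on 0.10.x, avoiding down-conversion on every fetch.
log.message.format.version=0.9.0
```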
Hi John,
Please read the upgrade documentation for the relevant versions:
http://kafka.apache.org/documentation.html#upgrade
Also, let's try to keep the discussion in one thread. I asked some
questions in the related "0.10.1 memory and garbage collection issues"
thread that you started.
Ismael
Hey Ismael,
Thanks a bunch for responding so quickly--really appreciate the follow-up!
I will have to get those details tomorrow when I return to the office.
Thanks again, will forward details ASAP tomorrow.
--John
On Sun, Jul 9, 2017 at 10:41 AM, Ismael Juma wrote:
Hi John,
We would need more details to be able to help. What is the version of your
producers and consumers? Is compression being used (and if so, which
compression type)? And what is the broker/topic message format version?
Ismael
On Sun, Jul 9, 2017 at 1:13 PM, John Yost wrote:
Hi Everyone,
Ever since we upgraded from 0.9.0.1 to 0.10.0, our five-node Kafka
cluster has been unstable. Specifically, whereas before a 6 GB memory heap
worked fine, following the upgrade all five brokers crashed with
out-of-memory errors within an hour of the upgrade. I boosted the memory
heap to 10 GB.
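For completeness, the heap bump described here is usually applied through the KAFKA_HEAP_OPTS environment variable that the stock kafka-server-start.sh wrapper picks up; a minimal sketch (the exact mechanism used is not stated in the thread):

```shell
# Hypothetical sketch, assuming the stock kafka-server-start.sh wrapper:
# it reads KAFKA_HEAP_OPTS, so raising the broker heap from 6 GB to 10 GB is
export KAFKA_HEAP_OPTS="-Xms10g -Xmx10g"
echo "$KAFKA_HEAP_OPTS"
```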
BTW, we tried the following Confluent-recommended settings and one broker
crashed after 30 minutes with an out-of-memory error:
-Xms6g -Xmx6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-XX:MinMetaspaceFree
Hey Everyone,
When we originally upgraded from 0.9.0.1 to 0.10.0 with the exact same
settings we immediately observed OOM errors. I upped the heap size from 6
GB to 10 GB and that solved the OOM issue. However, I am now seeing that
the ISR count for all partitions goes from 3 to 1 after about an hour.
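The ISR shrinkage is consistent with long GC pauses on a too-full heap: a follower that stalls past the replica lag timeout is dropped from the ISR, and a broker that stalls past its ZooKeeper session timeout is declared dead. A sketch of the 0.10.x broker settings involved (server.properties; the values shown are the documented defaults, quoted for reference, not a recommendation):

```properties
# A follower that has not caught up to the leader's log end within this
# window is removed from the ISR (0.10.x default: 10 seconds).
replica.lag.time.max.ms=10000
# A GC pause longer than the ZooKeeper session timeout makes the broker
# appear dead to the controller (0.10.x default: 6 seconds).
zookeeper.session.timeout.ms=6000
```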