Please give us more information:
- release of Kafka
Did your consumer get any errors?
Have you inspected the broker log(s)?
Cheers
On Sun, Sep 3, 2017 at 11:08 PM, Sagar Nadagoud <
sagar.nadag...@wildjasmine.com> wrote:
> Hi,
>
> Please help me in this issue i am unable to read the message from top
The logs didn't go through. Consider using pastebin.
For steps 1 and 2, it seems you used some image which didn't go through.
Please use pastebin instead.
Where did you get the docker image ?
Cheers
On Fri, Sep 8, 2017 at 4:42 AM, Nick <394299...@qq.com> wrote:
> Dears,
>
> May I seek your hel
From MirrorMaker.scala :

    // Defaults to no data loss settings.
    maybeSetDefaultProperty(producerProps, ProducerConfig.RETRIES_CONFIG, Int.MaxValue.toString)
    maybeSetDefaultProperty(producerProps, ProducerConfig.MAX_BLOCK_MS_CONFIG, Long.MaxValue.toString)
I think the settings wo
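The effect of those defaults can be sketched in plain Java. maybeSetDefaultProperty is re-implemented here for illustration; the key strings "retries" and "max.block.ms" are the configs behind ProducerConfig.RETRIES_CONFIG and ProducerConfig.MAX_BLOCK_MS_CONFIG:

```java
import java.util.Properties;

public class MirrorMakerDefaults {
    // Analogue of MirrorMaker's maybeSetDefaultProperty: only apply the
    // default if the user has not already configured the key.
    static void maybeSetDefaultProperty(Properties props, String key, String value) {
        if (!props.containsKey(key)) {
            props.setProperty(key, value);
        }
    }

    public static void main(String[] args) {
        Properties producerProps = new Properties();
        // No-data-loss defaults: retry forever and block indefinitely
        // rather than drop records.
        maybeSetDefaultProperty(producerProps, "retries", String.valueOf(Integer.MAX_VALUE));
        maybeSetDefaultProperty(producerProps, "max.block.ms", String.valueOf(Long.MAX_VALUE));
        System.out.println(producerProps.getProperty("retries"));       // 2147483647
        System.out.println(producerProps.getProperty("max.block.ms"));  // 9223372036854775807
    }
}
```

A user-supplied value always wins over the default, which is why these are safe to apply unconditionally at startup.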
Wouldn't KAFKA-5494 make remote produce more reliable?
Original message From: Todd Palino Date:
9/14/17 6:53 PM (GMT-08:00) To: users@kafka.apache.org Subject: Re: Kafka
MirrorMaker - target or source datacenter deployment
Always in the target datacenter. While you can set up
bq. at com.mytest.csd.kafka.KafkaDeduplicator.getQuotaStore(KafkaDeduplicator.java:147)

Can you show us the relevant code snippet for the above method ?
On Fri, Sep 15, 2017 at 2:20 AM, Jari Väimölä
wrote:
> Hello all,
> I have an apache kafka stream application running in docker container. It
>
> On Fri, Sep 15, 2017 at 10:33 PM, dev loper wrote:
>
>> Hi Ted,
>>
>> What should I be lo
might
>> have
>> > resulted in the "CommitFailedException" due to non-availability of
>> > processing processors.
>> >
>> > After some time the issue got propagated to other servers, I have
>> attached
>> > the relevant logs with this
Looking at the calculation of totalTimeTakenToStoreRecords, it covers the store.put() call.
Can you tell us more about what the put() does ?
Does it involve external key value store ?
Are you using 0.11.0.0 ?
Thanks
On Sat, Sep 16, 2017 at 6:14 AM, dev loper wrote:
> Hi Kafka Streams Users,
>
> I
Hi,
Were you using 0.11.0.0 ?
I ask this because some related fixes, KAFKA-5167 and KAFKA-5152, are only
in 0.11.0.1
Mind trying out the 0.11.0.1 release to see whether the problem persists ?
On Fri, Sep 15, 2017 at 12:52 PM, Ted Yu wrote:
> bq. 1) Reduced MAX_POLL_RECORDS_CONFIG to 5
eds to be
> done on kafka broker side as well ?
>
> On Sat, Sep 16, 2017 at 5:18 AM, Ted Yu wrote:
>
> > Hi,
> > Were you using 0.11.0.0 ?
> >
> > I ask this because some related fixes, KAFKA-5167 and KAFKA-5152, are
> only
> > in 0.11.0.1
> >
>
Have you looked at https://github.com/confluentinc/kafka-connect-jdbc ?
On Sat, Sep 16, 2017 at 1:39 PM, M. Manna wrote:
> Sure. But all these are not available via Kafka open source (requires
> manual coding), correct? Only Confluent seems to provide some
> off-the-shelf connector but Confluen
000 and the processors were getting closed and started which might
>> have
>> > resulted in the "CommitFailedException" due to non-availability of
>> > processing processors.
>> >
>> > After some time the issue got propagated to other servers, I have
: EBS
>
> Kafka Streams Instance : 3 Kafka Streams Application Instances (Current
> CPU Usage 8%- 24%)
>
> Instance Type : AWS M4 Large
> Machine Configuration : 2 VCPU;s, 8gb Ram, Storage : EBS (Dedicated
> EBS bandwidth 450 mbps)
> Thanks
>
> Dev
We're using RocksDB 5.3.6.
It would make more sense to perform the next round of experiments using RocksDB 5.7.3, which is the latest.
Cheers
On Mon, Sep 18, 2017 at 5:00 AM, Bill Bejeck wrote:
> I'm following up from your other thread as well here. Thanks for the info
> above, that is helpful.
>
> I th
Looks like the screen shots didn't come through.
Consider pasting the text.
Thanks
Original message From: Yogesh Sangvikar
Date: 9/19/17 4:33 AM (GMT-08:00) To:
users@kafka.apache.org Cc: Sumit Arora ,
Bharathreddy Sodinapalle ,
asgar@happiestminds.com Subject: Re: Data
Kafka has been evolving rapidly.
Are there any areas in which you're particularly interested ?
You can find all the relevant KIPs under:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
Normally KIPs contain a fair amount of technical detail.
From each KIP, you can find J
Please follow instructions on http://kafka.apache.org/contact
On Thu, Sep 21, 2017 at 1:30 PM, Daniele Ascione
wrote:
> hi, I would like to subscribe
>
The attachment didn't come through.
Have you read
https://cwiki.apache.org/confluence/display/KAFKA/Security
especially the
https://cwiki.apache.org/confluence/display/KAFKA/Security#Security-ImplementingthePermissionManager
section ?
On Fri, Sep 22, 2017 at 2:45 AM, Pooppillikudiyil, Joby <
joby.poo
bq. 1 topic with replication factor 1
There is no fault tolerance for the above setup.
Related: please read KIP-113 .
Cheers
On Mon, Sep 25, 2017 at 11:56 AM, Anu P wrote:
> Hi All,
>
> I would like to request inputs - pros and cons of setting up a 2 broker
> Kafka cluster in production.
>
>
Have you seen this comment ?
https://issues.apache.org/jira/browse/KAFKA-5122?focusedCommentId=15984467&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15984467
On Wed, Sep 27, 2017 at 12:44 PM, Stas Chizhov wrote:
> Hi,
>
> I am running a simple kafka streams app
Parth :
bq. an application which could take some real time data
Can you be a bit more specific on what your goal is ?
It would help narrow down the choices you have.
Cheers
On Thu, Sep 28, 2017 at 8:01 AM, David Garcia wrote:
> If you’re on the AWS bandwagon, you can use Kinesis-Analytics (
>
Looks like Ralph logged KAFKA-4946 for this already.
On Fri, Sep 29, 2017 at 12:40 AM, Dong Lin wrote:
> Hi Kafka users,
>
> I am wondering if anyone is currently using feature from MX4J loader. This
> feature is currently enabled by default. But if kafka_mx4jenable is
> explicitly set to true i
See instruction at https://kafka.apache.org/contact
On Fri, Sep 29, 2017 at 7:01 AM, Alex.Chen wrote:
> subscription
>
I think in producer.properties you should use:
security.protocol=SASL_PLAINTEXT
FYI
On Tue, Oct 3, 2017 at 7:17 AM, Pekka Sarnila wrote:
> Hi,
>
> kafka_2.11-0.11.0.0
>
> If I try to give --security-protocol xyz (xyz any value e.g.
> SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error
>
> s
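For reference, a minimal producer.properties sketch. security.protocol is the setting from this thread; sasl.mechanism is an assumed addition for a typical SASL/PLAIN setup:

```properties
# Assumed minimal SASL settings for the producer; adjust to your broker's
# listener configuration. security.protocol is the key from this thread.
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
```

Note that security.protocol is a client config, not a command-line flag, which is why --security-protocol is rejected.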
I did a quick search in the code base - there doesn't seem to be caching as
you described.
On Tue, Oct 3, 2017 at 6:36 AM, Kristopher Kane
wrote:
> If using a Byte SerDe and schema registry in the consumer configs of a
> Kafka streams application, does it cache the Avro schemas by ID and version
Have you taken a look at
streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewUntypedDemo.java ?

    final Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
    final Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);
final Co
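To illustrate what Serdes.serdeFrom does conceptually, here is a hypothetical stand-in in plain Java (Pair and serdeFrom are made-up names for illustration; the real API returns a Serde):

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

// A Serde is conceptually just a serializer paired with its matching
// deserializer; serdeFrom bundles the two into one object.
public class SerdePairExample {
    static final class Pair<T> {
        final Function<T, byte[]> serializer;
        final Function<byte[], T> deserializer;
        Pair(Function<T, byte[]> s, Function<byte[], T> d) {
            serializer = s;
            deserializer = d;
        }
    }

    // Hypothetical analogue of Serdes.serdeFrom(serializer, deserializer)
    static <T> Pair<T> serdeFrom(Function<T, byte[]> s, Function<byte[], T> d) {
        return new Pair<>(s, d);
    }

    public static void main(String[] args) {
        Pair<String> stringSerde = serdeFrom(
                v -> v.getBytes(StandardCharsets.UTF_8),
                b -> new String(b, StandardCharsets.UTF_8));
        // Round-trip: serialize then deserialize returns the original value.
        System.out.println(stringSerde.deserializer.apply(
                stringSerde.serializer.apply("page-view")));
    }
}
```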
From the example at:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Authorization+Command+Line+Interface
it seems that following 'User:', the format is te...@example.com
Can you double check ?
On Wed, Oct 4, 2017 at 8:55 PM, Awadhesh Gupta
wrote:
> Hi,
>
> I am working on Kafka Author
Looking at
streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java :

    throw new UnsupportedOperationException("Change log is not supported for store " + this.name + " since it is TTL based.");
    // TODO: support TTL with change log?
Past thread related to TTL:
http://search-hadoop.com/m/Kafka/uyzND1RLg4VOJ84U?subj=Re+Streams+TTLCacheStore
On Thu, Oct 5, 2017 at 9:54 AM, Ted Yu wrote:
> Looking at
> streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java
> :
>
>
The graph image didn't come through.
Consider using a third-party site to host the image.
On Fri, Oct 6, 2017 at 6:46 AM, Alexander Petrovsky
wrote:
> Hello!
>
> I observe the follow strange behavior in my kafka graphs:
>
>
> As you can see, the topic __consumer_offsets have very bit rate, is it
> oka
What's the value for auto.offset.reset ?
Which release are you using ?
Cheers
On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy Vsekhvalnov
wrote:
> Hi all,
>
> we several time faced situation where consumer-group started to re-consume
> old events from beginning. Here is scenario:
>
> 1. x3 broker kaf
> On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com>
> wrote:
>
> > Hi Ted,
> >
> > Broker: v0.11.0.0
> >
> > Consumer:
> > kafka-clients v0.11.0.0
> > auto.offset.reset = earliest
> >
> >
. Also in general with current
> semantics of offset reset policy IMO using anything but none is not really
> an option unless it is ok for the consumer to lose some data (latest) or
> reprocess it a second time (earliest).
>
> fre 6 okt. 2017 kl. 17:44 skrev Ted Yu :
>
> &g
I assume you have read
https://github.com/facebook/rocksdb/wiki/Building-on-Windows
Please also see https://github.com/facebook/rocksdb/issues/2531
BTW your question should be directed to the RocksDB forum.
On Fri, Oct 6, 2017 at 6:39 AM, Valentin Forst wrote:
> Hi there,
>
> We have Kafka 0.11.
ring
> > > > next rebalance it can commit old stale offset- can this be the case?
> > > >
> > > >
> > > > fre 6 okt. 2017 kl. 17:59 skrev Dmitriy Vsekhvalnov <
> > > > dvsekhval...@gmail.com
> > > > >:
Was there any consumer interceptor involved ?
Cheers
On Sun, Oct 8, 2017 at 6:29 AM, Michael Keinan
wrote:
> Hi
> Using Kafka 0.10.2.0
> I get a NPE while iterating the records after polling them using poll
> method.
> - Any idea where does it come from ?
> - How can I open an issue to Kafka te
Can you check the health of zookeeper quorum ?
Cheers
On Tue, Oct 10, 2017 at 3:35 PM, Kannappan, Saravanan (Contractor) <
saravanan_kannap...@comcast.com> wrote:
> Hello, Someone can you help me kafka server not starting after rebooting ,
> the below is the error message
>
> [2017-10-10 22:25:2
bq. There are 7 connectors configured with each connector configured with 8
tasks (Averaging about 4 tasks per connector)
Pardon. I don't quite understand the above setup. Can you describe in more
detail ?
Which version of connector are you using ?
Cheers
On Mon, Oct 16, 2017 at 11:30 PM, Dhawa
Can you take a look at KAFKA-5470 ?
On Tue, Oct 17, 2017 at 2:32 AM, 杨文 wrote:
> Hi Kafka Users,
> We are using kafka 0.9.0.1 and frequently we have seen below exception
> which causes the broker
> to die.We even increased the MaxDirectMemory to 1G but still see this.
> 2017-02-16 00:55:57,750]
The images didn't come through.
Consider using a third-party website.
On Tue, Oct 17, 2017 at 9:36 PM, Pavan Patani
wrote:
> Hello,
>
> Previously I was using old version of Kafka-manager and it was showing
> "Producer Message/Sec and Summed Recent Offsets" parameters in topics as
> below.
>
> [image: I
Considering Ewen's response, you can open a JIRA for applying the
suggestion toward FileStreamSinkConnector.
Cheers
On Wed, Oct 18, 2017 at 10:39 AM, Marina Popova
wrote:
> Hi,
> I wanted to give this question a second try as I feel it is very
> important to understand how to control error
Here is the related code:

    log.debug("Using older server API v{} to send {} {} with correlation id {} to node {}",
        header.apiVersion(), clientRequest.apiKey(), request, clientRequest.correlationId(), nodeId);

2147483647 is the node Id.
On Fri, Oct 20, 2017 at 2:
See code in NetworkClient :

    int latestClientVersion = clientRequest.apiKey().latestVersion();
    if (header.apiVersion() == latestClientVersion) {
        ...
    } else {
        log.debug("Using older server API v{} to send {} {} with correlation id {} to node {
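The decision in that branch can be reduced to a small, self-contained sketch; chooseApiVersion is a hypothetical name for what NetworkClient effectively does when the broker supports only an older API version:

```java
// If the broker's maximum supported API version is older than the client's
// latest, the client falls back to the broker's version; otherwise it uses
// its own latest version. chooseApiVersion is an illustrative name only.
public class ApiVersionFallback {
    static short chooseApiVersion(short latestClientVersion, short brokerMaxVersion) {
        return brokerMaxVersion < latestClientVersion ? brokerMaxVersion : latestClientVersion;
    }

    public static void main(String[] args) {
        // Broker is behind the client: fall back to the broker's version.
        System.out.println(chooseApiVersion((short) 5, (short) 3)); // 3
        // Broker is current: use the client's latest version.
        System.out.println(chooseApiVersion((short) 5, (short) 5)); // 5
    }
}
```

The "Using older server API" debug line is logged only when the fallback branch is taken.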
This should fix what you observed:
https://github.com/apache/kafka/pull/4127
On Tue, Oct 24, 2017 at 12:21 PM, Vishal Shukla wrote:
> Hi All,
> I took latest source code from kafka git repo and tried to setup my local
> env. in Eclipse.
> I am getting a compilation error in 1 file in streams pr
JDK 1.8.0_91
Cheers
On Tue, Oct 24, 2017 at 1:23 PM, Guozhang Wang wrote:
> Hmm.. which Java version were you using?
>
>
> Guozhang
>
> On Tue, Oct 24, 2017 at 1:09 PM, Ted Yu wrote:
>
> > This should fix what you observed:
> >
> > https://github.com/apa
Eric:
I wonder if it is possible to load up 1.0.0 RC3 on a test cluster and see what the new behavior is ?
Thanks
On Tue, Oct 24, 2017 at 5:41 PM, Eric Lalonde wrote:
>
> >>> Could it be, that the first KafkaStreams instance was still in status
> >>> "rebalancing" when you started the second/thir
Do you mind providing a bit more information ?
Which release of Kafka do you use ?
Is there any difference between data1_log and the other, normal topic ?
Also check the broker log where data1_log is hosted - there may be some clue.
Thanks
On Wed, Oct 25, 2017 at 12:11 PM, Dan Markhasin wrote:
> I'm tryi
lly reset the consumer group's offset to a few
> minutes before I restarted the broker, only to discover this strange
> behavior where no matter which datetime value I provided, it kept resetting
> to the latest offset.
>
>
> On 25 October 2017 at 22:48, Ted Yu wrote:
>
Clarification: my most recent reply was w.r.t. the strange situation Dan
described, not the offset resetting.
On Wed, Oct 25, 2017 at 1:24 PM, Ted Yu wrote:
> I wonder if you have hit KAFKA-5600.
>
> Is it possible that you try out 0.11.0.1 ?
>
> Thanks
>
> On Wed, Oct 25,
Have you seen the email from Onur a moment ago which uses the same KIP number ?
Looks like there was a race condition in modifying the wiki.
Please consider bumping the KIP number.
Thanks
On Wed, Oct 25, 2017 at 4:14 PM, Jan Filipiak
wrote:
> Hello Kafka-users,
>
> I want to continue with the devel
Can you provide a bit more information ?
Release of Kafka
Java / Scala version
Thanks
On Wed, Oct 25, 2017 at 6:40 PM, Susheel Kumar
wrote:
> Hello Kafka Users,
>
> I am trying to run below sample code mentioned in Kafka documentation under
> Automatic Offset Committing for a topic with 1 part
t: 254410834
> > > timestamp: 1508978015677 offset: 254410854
> > > timestamp: 1508978016980 offset: 254410870
> > > timestamp: 1508978017212 offset: 254410883
> > >
> > >
> > > On 26 October 2017 at 07:26, Elyahou Ittah
Please fill in thread link and JIRA number.
* prefix of the output key wich is the same as
serializing K
Typo: wich
* @param joiner
Please add explanation for joiner (in both APIs)
Please add javadoc for valueOtherSerde and joinValueSerde
* an event in this KTabl
This seems related to your question:
https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
On Fri, Oct 27, 2017 at 8:22 AM, Hemambika Hema
wrote:
> we are using kafka_2.10-0.10.0.1 version
>
> On Fri, Oct 27, 2017 at 8:45 PM, Hemambika Hema >
> wrote
Please take a look at KAFKA-4477
On Mon, Oct 30, 2017 at 2:06 AM, Yuanjia wrote:
> Hi,
> My cluster version is 0.10.0.0, 10 nodes. One server has many error
> log, like that
> java.io.IOException: Connection to 6 was disconnected before the response
> was read
> at kafka.utils.Networ
, and all clients work well.
> I notice KAFKA-5153, it maybe same to mine. But it doesn't update for
> a long time.
>
>
> From: Ted Yu
> Date: 2017-10-30 17:13
> To: users
> Subject: Re: Servers Getting disconnected
> Please take a look at KAFKA-4477
>
> On M
bq. attached screenshots from the log viewer
The screenshots didn't go through. Consider using a 3rd-party site.
On Wed, Nov 1, 2017 at 9:18 AM, Elmar Weber wrote:
> Hello,
>
> I had this morning the issue that a client offset got deleted from a
> broker as it seems.
>
> (Kafka 0.11.0.1 with patc
Neither the error nor properties files went through.
Can you use pastebin or some similar site ?
On Wed, Nov 1, 2017 at 5:31 AM, mascarenhas, jewel <
mascarenhas.je...@atos.net> wrote:
> Hi,
> I am trying to configure apache kafka version 0.11.0.1 with ssl, but am
> facing the following error whi
See:
https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java
On Thu, Nov 2, 2017 at 5:51 AM, diane wrote:
> Hi
>
> I was trying to look at the documentation for the AdminClient API , but
> the link from page
> https://kafka.apache.org/docume
Have you seen this thread ?
http://search-hadoop.com/m/Kafka/uyzND1ppGvmNscWc?subj=Re+Data+loss+while+upgrading+confluent+3+0+0+kafka+cluster+to+confluent+3+2+2
On Thu, Nov 2, 2017 at 1:46 PM, Karim Lamouri wrote:
> Hi,
>
> I wanted to know in which cases those warning arise and how to reduce t
Can you pastebin relevant logs from client and broker ?
Thanks
On Fri, Nov 3, 2017 at 1:37 PM, Manan G wrote:
> Hello,
>
> I am using 0.11.0.0 version of Kakfa broker and Java client library. My
> consumer code tracks offsets for each assigned partition and at some time
> interval manually comm
.com/yfJDSGPA
> >
> > server.log: https://pastebin.com/QKpk0zLn
> > controller.log: https://pastebin.com/9T0niwEw
> > state-change.log: https://pastebin.com/nrftHPC9
> >
> >
> > On Fri, Nov 3, 2017 at 1:53 PM, Ted Yu wrote:
> >
> >> Can you pas
Alex:
In the future, please use pastebin if the log is not too large.
When people find this thread in the mailing list archive, the attachment
won't be there.
Thanks
On Tue, Nov 7, 2017 at 8:32 AM, Matthias J. Sax
wrote:
> Alex,
>
> I am not sure, but maybe it's a bug. I noticed that you read t
I think how to use GDAX's API is orthogonal to using Kafka.
The Kafka client has support for Java and Python.
On Tue, Nov 7, 2017 at 12:31 PM, Taha Arif wrote:
> Hello,
>
>
> I want to build a project that accesses the Gdax websocket in a real time
> stream, and push that data into Kafka to reforma
Did you use G1GC ?
Thanks
Original message From: John Yost Date:
11/8/17 5:48 AM (GMT-08:00) To: users@kafka.apache.org Cc: ja...@scholz.cz
Subject: Re: Kafka JVM heap limit
In addition, in my experience, a memory heap > 8 GB leads to long GC pauses
which causes the ISR statu
Have you seen this thread ?
http://search-hadoop.com/m/Kafka/uyzND1Q6wyNBj42g?subj=Kafka+Monitoring
On Wed, Nov 8, 2017 at 5:10 PM, chidigam . wrote:
> Hi All,
> What is the simplest way of monitoring the metrics in Kafka brokers?
> Is there any opensource available?
> Any help in this regards i
PropertyConfigurator is used here:
https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/VerifiableLog4jAppender.java#L233
On Thu, Nov 9, 2017 at 2:17 PM, Arunkumar
wrote:
> Hi All
> We have a requirement to migrate log4J 1.x to log4j 2 for our kafka
> brokers us
bq. an older timestamp that allowed
I guess you meant 'than allowed'
Cheers
On Tue, Nov 21, 2017 at 2:57 PM, Matthias J. Sax
wrote:
> This is possible, but I think you don't need the time-based index for it :)
>
> You will just buffer up all messages for a 5 minute sliding-window and
> maintai
Can you provide more information (such as a pastebin of relevant logs) ?
Cheers
On Wed, Nov 22, 2017 at 1:55 AM, Linux实训项目 wrote:
> Hi:
>    The Kafka node in zookeeper is lost after a period of time. What causes the
> node to be lost in zookeeper?
>
>
>environment:
>zookeeper
There is KAFKA-3317 which is still open.
Have you seen this ?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOLOGIES) wrote:
> We have a cluster w
In the guide, note the following: on Windows, use the scripts under bin/windows and change the script extension to .bat.
On Mon, Dec 11, 2017 at 2:57 AM, kish babu wrote:
> Hi,
> I am trying to setup Kafka on my windows box which has no cygwin.
>
> I am following instructions from
>
> https://kafka.apache.org/quickstart
>
> Fi
From the stack trace, it seems you hit KAFKA-6349 (where Damian provided a
PR).
FYI
On Mon, Dec 11, 2017 at 11:29 PM, Mr.Wang <1282183...@qq.com> wrote:
> Hi~
> We found this error when we read kafka data using storm:
> java.util.ConcurrentModificationException at
> java.util.LinkedHash
Have you checked the s3 connector issue list ?
Cheers
On Wed, Dec 13, 2017 at 2:38 AM, Pratik Shah wrote:
> Hi All,
> I am using kafka s3 sink connector ( version 4.0) to backup the
> _schemas topic for its backup.
> The issue is that the content stored in S3 is only the valu
Which release of Kafka did you download ?
The error seems to be (in English):
unable to find or load the main class
I searched the Kafka source code but didn't find a reference to OpenLink.
Can you give the complete error ?
Thanks
On Wed, Dec 13, 2017 at 12:55 AM, Thomas JASSEM
wrote:
> Hello,
>
Interesting.
Looks like disconnection resulted in the stack overflow.
I think the following would fix the overflow:
https://pastebin.com/Pm5g5V2L
On Thu, Dec 14, 2017 at 7:40 AM, Jörg Heinicke
wrote:
>
> Hi everyone,
>
> We recently switched to Kafka 1.0 and are facing an issue which we have
>
In StreamsConfig.java , CACHE_MAX_BYTES_BUFFERING_CONFIG is defined as:

    .define(CACHE_MAX_BYTES_BUFFERING_CONFIG,
            Type.LONG,
            10 * 1024 * 1024L,

I think using a numeral should be accepted (as shown by the Demo.java classes).
On Thu, Dec 14, 201
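A quick plain-Java check that a long literal round-trips through a Properties object the way a Streams config would accept it; "cache.max.bytes.buffering" is the config string behind CACHE_MAX_BYTES_BUFFERING_CONFIG:

```java
import java.util.Properties;

public class CacheConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The config is Type.LONG, so a numeric literal works; note the L
        // suffix so the arithmetic is done in long, not int.
        props.put("cache.max.bytes.buffering", 10 * 1024 * 1024L);
        System.out.println(props.get("cache.max.bytes.buffering")); // 10485760
    }
}
```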
Can you look at the log from the controller to see if there is some clue w.r.t. partition 82 ?
Was unclean leader election enabled ?
BTW which release of Kafka are you using ?
Cheers
On Thu, Dec 14, 2017 at 11:49 AM, Tarun Garg wrote:
> I checked log.dir of the all nodes and found index, log and t
Can you capture stack trace on the broker and pastebin it ?
Broker log may also provide some clue.
Thanks
On Mon, Dec 18, 2017 at 4:46 AM, HKT wrote:
> Hello,
>
> I was testing the transactional message on kafka.
> but I get a problem.
> the producer always blocking at second commitTransaction
data from __transaction_state-15
> (kafka.coordinator.transaction.TransactionStateManager)
> [2017-12-19 08:26:08,471] INFO [Transaction State Manager 0]: Loading
> transaction metadata from __transaction_state-18
> (kafka.coordinator.transaction.TransactionStateManager)
> [2017-12-19 08:2
ProducerRecord<>("test", 0, (long) 0, Long.toString(0));
> producer.send(record);
> producer.commitTransaction();
> producer.beginTransaction();
> record = new ProducerRecord<>("test", 0, (long)0,
> Long.
Since you're using a vendor's distro, can you post on their community page ?
BTW do you notice any difference in settings between the working cluster
and this cluster ?
Cheers
On Thu, Dec 21, 2017 at 12:27 PM, sham singh
wrote:
> Hello All -
> I'm getting this error, when publishing messages t
Sahil:
I did a quick search in 0.11.0 branch and trunk for getEndOffsets but
didn't find any occurrence.
Mind giving us the location (and class) where getEndOffsets is called ?
Thanks
On Fri, Dec 22, 2017 at 11:29 PM, sahil aggarwal
wrote:
> Fixed it by some code change in ConsumerGroupCommand
Was the broker running on 52.194 encountering an error ?
Can you check the broker log on 52.194 ?
Thanks
On Mon, Dec 25, 2017 at 5:03 PM, G~D~Lunatic <747620...@qq.com> wrote:
> I'm a beginner with kafka. I used kafka_2.11-0.9.0.1. My kafka is built on
> two virtual machines whose IPs are 52.193 and 52.1
Cheers
On Mon, Dec 25, 2017 at 6:08 PM, G~D~Lunatic <747620...@qq.com> wrote:
> thank you , what's the file path of broker log. i have no idea about
> checking log
>
>
>
>
>
> -- Original Message --
> From: "Ted Yu";;
> Sent
replica Map() (kafka.controller.KafkaController)
> is this normal?
>
>
>
>
>
> -- Original Message --
> From: "Ted Yu";;
> Sent: Tuesday, December 26, 2017, 10:19 AM
> To: "users";
>
> Subject: Re: kafka configure problem
>
>
>
>
>> 4f77ef99d4db15373/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L467
>>
>> On 23 December 2017 at 13:07, Ted Yu wrote:
>>
>>> Sahil:
>>> I did a quick search in 0.11.0 branch and trunk for getEndOffsets but
>>> didn't find
Have you seen
https://examples.javacodegeeks.com/java-basics/exceptions/java-lang-illegalmonitorstateexception-how-to-solve-illegalmonitorstateexception/
?
You didn't include the whole code w.r.t. shadowKafkaProducer
If you need more help, please consider including more of your code.
Cheers
On W
Ted for the link. I got my issue; there was a synchronization issue.
On Thu, Dec 28, 2017 at 8:57 AM, Ted Yu wrote:
> Have you seen
> https://examples.javacodegeeks.com/java-basics/exceptions/java-lang-
> illegalmonitorstateexception-how-to-solve-illegalmonitorstateexception/
> ?
&g
Please check https://github.com/xerial/snappy-java for how to build /
install snappy-java.
On Thu, Dec 28, 2017 at 5:29 AM, Debraj Manna
wrote:
> Hi
>
> I am seeing an warning like below and my kafka java producer client is not
> able to write to kafka broker. (Kafka version 0.10.0 both client &
Can you take a look at KAFKA-5337 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-169+-+Lag-Aware+Partition+Assignment+Strategy)
?
Cheers
On Thu, Dec 28, 2017 at 11:17 PM, 赖剑清 wrote:
> Hi, all
>
> I met a problem while using Kafka as a message queue.
>
> I have 10 consumer servers and 3
Looking at https://issues.apache.org/jira/browse/KAFKA-5686 , it seems you
should have specified LZ4.
FYI
On Fri, Dec 29, 2017 at 5:00 AM, Sven Ludwig wrote:
> Hi,
>
> we thought we have lz4 applied as broker-side compression on our Kafka
> Cluster for storing measurements, but today I looked i
29 December 2017 at 14:45
> From: Manikumar
> To: users@kafka.apache.org
> Subject: Re: Problem to apply Broker-side lz4 compression even in fresh
> setup
> Is this config added after sending some data? Can you verify the latest
> logs?
> This won't recompress existing messag
For #1, fetcher.getTopicMetadata() is called.
If you have time, you can read getTopicMetadata(). It is a blocking call with a given timeout.
For #2, I don't see any mechanism for metadata sharing.
FYI
On Fri, Dec 29, 2017 at 8:25 AM, Viliam Ďurina
wrote:
> Hi,
>
> I use KafkaConsumer.partitionsF
I verified what Brett said through this code:

    val (partitionsToBeReassigned, replicaAssignment) = ReassignPartitionsCommand.parsePartitionReassignmentData(
        "{\"version\":1,\"partitions\":[{\"topic\":\"metrics\",\"partition\":0,\"replicas\":[1,2]},{\"topic\":\"metrics\",\"partition\":1,\
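Unescaped, the JSON passed to parsePartitionReassignmentData looks like the following; the replicas for partition 1 are truncated in the snippet above, so [2,3] here is an assumption for illustration:

```json
{"version": 1,
 "partitions": [
   {"topic": "metrics", "partition": 0, "replicas": [1, 2]},
   {"topic": "metrics", "partition": 1, "replicas": [2, 3]}]}
```

This is the same format kafka-reassign-partitions.sh accepts via --reassignment-json-file.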
> Thanks Brett and Ted!
>
> On Sun, Dec 31, 2017 at 6:29 PM, Ted Yu wrote:
>
> > I verified that Brett said thru this code:
> >
> > val (partitionsToBeReassigned, replicaAssignment) =
> > ReassignPartitionsCommand.parsePartitionReassignmentData(
> >
>
bq. zookeeper.connect = localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:*9092*

Why did 9092 appear in the zookeeper setting ?
Cheers
On Tue, Jan 2, 2018 at 2:18 AM, M. Manna wrote:
> Hi All,
>
> Firstly a very Happy New Year!
>
> I set up my 3 node configuration where each of the broke
Did you intend to attach pictures following the two solutions ?
It seems the pictures didn't come through.
FYI
On Wed, Jan 3, 2018 at 8:39 PM, Tony Liu wrote:
> Hi All,
>
> This post here is aimed to ask experience about what did you do migration
> `Kafka/zookeeper` ? :)
>
> All of Kafka/zookeep
Looks like the .checkpoint file was generated from this code in
ProcessorStateManager :

    // write the checkpoint file before closing, to indicate clean shutdown
    try {
        if (checkpoint == null) {
            checkpoint = new OffsetCheckpoint(new File(baseDir, CHECKPO
Which Kafka release are you using ?
Most likely /var/lib/kafka/test-0 was still being referenced by some thread.
There have been fixes in this area recently.
Cheers
On Fri, Jan 5, 2018 at 4:28 AM, Alex Galperin
wrote:
> Hi,
> I host Kafka in Docker container in Windows. I mounted volume for s
bq. WARN Found a corrupted index file due to requirement failed: Corrupt
index found, index file
(/data/kafka/data-processed-15/54942918.index)

Can you search backward for 54942918.index in the log to see if we can find the cause of the corruption ?
This part of code was rece