heap size (in case of a silent OOM exception), doesn't
seem to matter.
I'm kinda out of ideas.
On Tue, Jun 19, 2018 at 11:02 AM Frank Lyaruu wrote:
> We've tried running a fresh version with yesterday morning's trunk
> version, with the same result.
> We're runnin
h using the
> trunk version?
>
>
> Guozhang
>
> On Mon, Jun 18, 2018 at 9:03 AM, Frank Lyaruu wrote:
>
> > Yes, here it is:
> >
> > https://gist.github.com/flyaruu/84b65d52a01987946b07d21bafe0d5f1
> >
> > It ran completely fine for the
ted code from OneToManyGroupedProcessor?
>
> Thanks
>
> On Mon, Jun 18, 2018 at 4:29 AM, Frank Lyaruu wrote:
>
> > Hi, I've upgraded our 0.11 based stream application to the trunk version,
> > and I get an intermittent NPE. It's quite a big topology, and I
>
Hi, I've upgraded our 0.11 based stream application to the trunk version,
and I get an intermittent NPE. It's quite a big topology, and I haven't
succeeded in reproducing it on a simpler topology.
It builds the topology, starts Kafka Streams, runs for about 20 seconds, and
then it terminates.
It seems
Do you commit the received messages? Either by doing it manually or setting
enable.auto.commit and auto.commit.interval.ms?
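(For anyone finding this thread later: a minimal consumer sketch showing both commit styles; the broker address, group id, and topic name are placeholders, not from this thread.)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        // Option 1: periodic automatic commits.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.println(record.offset() + ": " + record.value());
                // Option 2: set enable.auto.commit=false above and commit manually:
                // consumer.commitSync();
            }
        }
    }
}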
On Wed, Nov 29, 2017 at 11:15 PM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:
> I'm using Kafka 0.10.0.
>
> I'm reading messages from a single topic (20 partitions
key and value you are trying to write?
> >>
> >> Thanks,
> >> Apurva
> >>
> >> On Thu, Jun 15, 2017 at 2:28 AM, Frank Lyaruu
> wrote:
> >>
> >>> Hey people, I see an error I haven't seen before. It is on a
> lowlevel-AP
It seems to happen when using Streams 0.11.1 snapshot against a 0.10.2
(release) broker, the problem disappeared after I upgraded the broker.
On Thu, Jun 15, 2017 at 11:28 AM, Frank Lyaruu wrote:
> Hey people, I see an error I haven't seen before. It is on a lowlevel-API
> ba
Hey people, I see an error I haven't seen before. It is on a lowlevel-API
based streams application. I've started it once, then it ran fine, then did
a graceful shutdown and since then I always see this error on startup.
I'm using yesterday's trunk.
It seems that the MemoryRecordsBuilder overflow
should happen. Not sure why you don't observe this. And thus,
> the producer should use this timestamp to write the records.
>
> How did you verify the timestamps that are set for your output records?
>
>
> -Matthias
>
>
> On 6/5/17 6:15 AM, Frank Lyaruu wrote:
>
timestamp semantics. We need to revisit how timestamps can be maintained
> across the topology in Streams.
>
> Guozhang
>
> On Sat, Jun 3, 2017 at 10:54 AM, Frank Lyaruu wrote:
>
> > Hi Matthias,
> >
> > Ok, that clarifies quite a bit. I never really went into
implement a custom timestamp extractor and adjust the
> timestamps accordingly ("stream time" is based on whatever the timestamp
> extractor returns)
>
> However, if you have records with old timestamps, I am wondering why
> they are not truncated in your input topic? Do you no
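For later readers, a minimal sketch of such a custom extractor against the 0.11-era TimestampExtractor interface; the Payload type and its getEventTime() accessor are made up for illustration:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class PayloadTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        Object value = record.value();
        if (value instanceof Payload) {              // hypothetical value type
            return ((Payload) value).getEventTime(); // hypothetical accessor
        }
        return record.timestamp(); // fall back to the record's own timestamp
    }
}

It is registered through the timestamp.extractor config (renamed
default.timestamp.extractor in later releases).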
Hi Kafka people,
I'm running an application that pushes database changes into a Kafka topic.
I'm also running a Kafka Streams application that listens to these topics,
groups them using the high level API, and inserts them into another database.
All topics are compacted, with the exception of t
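The topology described above has roughly this shape; a sketch against the current DSL, with placeholder topic names:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class ReplicationTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> changes = builder.stream("db-changes"); // placeholder
        changes.groupByKey()
               .reduce((oldValue, newValue) -> newValue) // keep the latest change per key
               .toStream()
               .to("db-sink"); // placeholder; a sink connector reads from here
        // builder.build() -> new KafkaStreams(...), omitted for brevity
    }
}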
Hi Kafka people,
I've configured my brokers to use one hour (3,600,000 ms) for
min.compaction.lag.ms.
In the logs I see that this is picked up in the 'created log' line. I add a
bit over a million messages, and I see the new log segments appearing
('Rolling new log segment') but after a few min
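For reference, this is the kind of per-topic setup being tested here, as a sketch with the Java AdminClient; the topic name, partition count, and replication factor are placeholders:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            configs.put("cleanup.policy", "compact");
            configs.put("min.compaction.lag.ms", "3600000"); // one hour
            NewTopic topic = new NewTopic("my-compacted-topic", 1, (short) 1) // placeholders
                    .configs(configs);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}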
)
> > .build();
> >
> >
> > Does that help?
> >
> > Thanks,
> > Damian
> >
> >
> >
> > On Wed, 17 May 2017 at 06:09 Frank Lyaruu wrote:
> >
> >> Hi Kafka people,
> >>
> >> I'm using the low
Hi Kafka people,
I'm using the low level API that often creates a simple state store based
on a (single partition, compacted) topic (it feeds all messages and simply
stores them as-is). Now all these stores get their own
'changelog' topic, to back the state store.
This is fine, but
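For what it's worth, with the newer StoreBuilder API the changelog can be switched off, so the already-compacted source topic doubles as the recovery log; a sketch with a placeholder store name and serdes:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class StoreWithoutChangelog {
    // Persistent store without a backing changelog topic; only safe when the
    // source topic itself is compacted and can be replayed on restore.
    static final StoreBuilder<KeyValueStore<String, String>> STORE =
        Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("my-store"), // placeholder name
                Serdes.String(),
                Serdes.String())
            .withLoggingDisabled();
}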
Hi,
I'm using quite a lot of topics, and the speed on each topic isn't extreme,
so I generally use a single partition.
In this situation, the high level API of Kafka Streams still makes
repartition topics, but I don't see their use.
Wouldn't it make sense to skip repartition topics in this case,
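For context, the repartition topic only appears when a key-changing operation precedes the grouping; a sketch against the recent DSL, with a placeholder topic name:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class RepartitionExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input"); // placeholder

        // selectKey() marks the stream for repartitioning, so this count()
        // goes through an internal repartition topic:
        input.selectKey((k, v) -> v).groupByKey().count();

        // Grouping on the unchanged source key needs no repartition topic:
        input.groupByKey().count();
    }
}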
> of this JIRA I believe: https://issues.apache.org/jira/browse/KAFKA-4861
>
> Thanks
> Eno
> > On 14 May 2017, at 23:09, Frank Lyaruu wrote:
> >
> > Hi Kafka people...
> >
> > After a bit of tuning a
Hi Kafka people...
After a bit of tuning and an upgrade to Kafka 0.10.1.2, this error starts
showing up and the whole thing kind of dies.
2017-05-14 18:51:52,342 | ERROR | hread-3-producer | RecordCollectorImpl
| 91 - com.dexels.kafka.streams - 0.0.115.201705131415 | task
[1_0] Error s
ncrease the retention minutes to something large?
>
> Thanks
> Eno
> > On 21 Mar 2017, at 19:13, Frank Lyaruu wrote:
> >
> > Hi Kafka people,
> >
> > We have a Kafka Streams application that replicates a database, and
> > transforms it to a different data model.
Hi Kafka people,
We have a Kafka Streams application that replicates a database, and
transforms it to a different data model. Some tables/topics move fast, with
many changes a second, while others might be dormant for months. For those slow
moving topics, we have some trouble with the 'offsets.retention.m
> design is particularly challenging and why Global KTables was chosen
> instead. I'm not sure if you still want to pursue that original design,
> since it is not proven to work.
>
> Guozhang, perhaps we need to add a note saying that Global KTables is the
> new design?
Hi all,
I'm trying to implement joining two Kafka tables using a 'remote' key,
basically as described here:
https://cwiki.apache.org/confluence/display/KAFKA/Discussion%3A+Non-key+KTable-KTable+Joins
Under the "Implementation Details" there is one line I don't know how to
do:
1. First of al
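For later readers: the Global KTables route mentioned in the reply above looks roughly like this, joining a stream against a global table on a "remote" (foreign) key. Order, Customer, and EnrichedOrder are made-up stand-in types, and the topic names are placeholders:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class ForeignKeyJoinSketch {
    // Minimal stand-in types for the example.
    static class Order { String customerId; }
    static class Customer { }
    static class EnrichedOrder {
        EnrichedOrder(Order o, Customer c) { }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Order> orders = builder.stream("orders");                    // placeholder
        GlobalKTable<String, Customer> customers = builder.globalTable("customers"); // placeholder

        // Join on the "remote" key: map each order to the customers key.
        orders.join(
            customers,
            (orderId, order) -> order.customerId, // foreign-key selector
            (order, customer) -> new EnrichedOrder(order, customer));
    }
}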
most
> informative!
> Is there anything obvious in the RocksDB LOG?
>
> Thanks,
> Damian
>
>
> On Wed, 15 Feb 2017 at 06:38 Frank Lyaruu wrote:
>
> > Hi Kafka crew,
> >
> > I'm rolling out a Kafka Streams application on a pretty large dataset,
> a
Hi Kafka crew,
I'm rolling out a Kafka Streams application on a pretty large dataset, and
I see some exceptions that worry me, which I haven't seen with smaller data
sets
(Using Kafka streams trunk of a few days ago)
2017-02-15 15:25:14,431 | INFO | StreamThread-42 | StreamTask
| 86 -
ted transition so the first error is the cause
> probably. However, I can't tell what happened before since a normal
> shutdown also looks the same.
>
> Eno
> > On 16 Dec 2016, at 11:35, Frank Lyaruu wrote:
> >
> > Hi people,
> >
> > I'm run
Hi people,
I'm running a Kafka Streams app (trunk build from a few days ago), and I see
this error from time to time:
Exception in thread "StreamThread-18" java.lang.NullPointerException
at org.apache.kafka.streams.processor.internals.StreamThread$5.
apply(StreamThread.java:998)
at org.apache.kafka.
Hi people,
I've just deployed my Kafka Streams / Connect (I only use a connect sink to
mongodb) application on a cluster of four instances (4 containers on 2
machines) and now it seems to get into a sort of rebalancing loop, and I
don't get much in mongodb, I've got a little bit of data at the beg
the RocksDB memory settings, yes the off heap memory usage does
> > sneak under the radar. There is a memory management story for Kafka
> Streams
> > that is yet to be started. This would involve limiting the off-heap
> memory
> > that RocksDB uses.
> >
> > Thanks,
>
default in streams is 3
>
> However, i'm no expert on RocksDB and i suggest you have look at
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide for more
> info.
>
> Thanks,
> Damian
>
> On Fri, 25 Nov 2016 at 13:02 Frank Lyaruu wrote:
>
> > @
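For later readers, Streams exposes those RocksDB knobs through a config setter hook; a sketch, where the concrete values are purely illustrative (the "3" mentioned above is Streams' default write buffer count):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        options.setMaxWriteBufferNumber(2);          // illustrative value
        options.setWriteBufferSize(8 * 1024 * 1024); // 8 MB memtables, illustrative
    }
}

It is registered via the rocksdb.config.setter property
(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG).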
son wrote:
> What kind of disk are you using for the rocksdb store? ie spinning or ssd?
>
> 2016-11-25 12:51 GMT+01:00 Damian Guy :
>
> > Hi Frank,
> >
> > Is this on a restart of the application?
> >
> > Thanks,
> > Damian
> >
> >
Hi y'all,
I have a reasonably simple Kafka Streams application, which merges about 20
topics a few times.
The thing is, some of those topic datasets are pretty big, about 10M
messages. In total I've got about 200 GB worth of state in RocksDB; the
largest topic is 38 GB.
I had set the MAX_POLL_INTERV
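For reference, these are the knobs in play while a large store restores; a sketch where the application id, broker address, and values are placeholders, not recommendations:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class RestoreFriendlyConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Give the consumer more headroom between poll() calls while
        // hundreds of GB of RocksDB state are being restored:
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.MAX_VALUE);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        return props;
    }
}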
I have a remote Kafka cluster, to which I connect using a VPN and a
not-so-great WiFi network.
That means that sometimes the Kafka client briefly loses connectivity.
When it regains a connection after a while, I see:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be
c
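For anyone else on a flaky link: with manual commits this exception can be treated as non-fatal, since the next poll() rejoins the group. A sketch, with placeholder broker, group, and topic names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FlakyNetworkLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "remote:9092"); // placeholder
        props.put("group.id", "vpn-consumer");         // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                consumer.poll(100); // process records here
                try {
                    consumer.commitSync();
                } catch (CommitFailedException e) {
                    // The group rebalanced while the connection was gone; the
                    // commit is lost and records since the last successful
                    // commit will be redelivered after the next poll().
                }
            }
        }
    }
}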
ing the bug, and many thanks to Damian for the
> > quick catch!
> >
> > On Thu, Oct 13, 2016 at 12:30 PM, Frank Lyaruu
> wrote:
> >
> >> The issue seems to be gone. Amazing work, thanks...!
> >>
> >> On Thu, Oct 13, 2016 at 6:56 PM, Damian Guy
>
2016 at 17:46 Damian Guy wrote:
>
> > Hi Frank,
> >
> > Thanks for reporting. Can you provide a sample of the join you are
> > running?
> >
> > Thanks,
> > Damian
> >
> > On Thu, 13 Oct 2016 at 16:10 Frank Lyaruu wrote:
> >
> >
Hi Kafka people,
I'm joining a bunch of Kafka Topics using Kafka Streams, with the Kafka
0.10.1 release candidate.
It runs ok for a few thousand of messages, and then it dies with the
following exception:
Exception in thread "StreamThread-1" java.lang.NullPointerException
at
org.apache.kafka.str