Punctuations are event-time based, not wall-clock time based.
We might add wall-clock based punctuations in the next release though.
Cf.
https://docs.confluent.io/current/streams/developer-guide.html#defining-a-stream-processor
-Matthias
On 9/21/17 8:01 PM, 805930...@qq.com wrote:
> this is a kafka stre
cc'ed Daniele :)
On 9/21/17 1:59 PM, Ted Yu wrote:
> Please follow instructions on http://kafka.apache.org/contact
>
> On Thu, Sep 21, 2017 at 1:30 PM, Daniele Ascione
> wrote:
>
>> hi, I would like to subscribe
>>
>
This is a Kafka Streams question! I call context.schedule(60 * 1000L); but the
punctuate function runs about once every 3 seconds on average.
805930...@qq.com
Team,
Please help me set up Kafka MirrorMaker automation for our Kafka cluster.
Thanks
Saravanan
I completely missed that in the docs (for me, I think it's because this is
the only operation I know of that requires that kind of confirmation).
Thanks!
-shargan
On Thu, Sep 21, 2017 at 12:41 Hans Jespersen wrote:
> Did you add the --execute flag?
>
> -hans
>
> > On Sep 21, 2017, at 11:37 AM,
Please follow instructions on http://kafka.apache.org/contact
On Thu, Sep 21, 2017 at 1:30 PM, Daniele Ascione
wrote:
> hi, I would like to subscribe
>
hi, I would like to subscribe
Did you add the --execute flag?
-hans
> On Sep 21, 2017, at 11:37 AM, shargan wrote:
>
> Testing kafka-consumer-groups.sh in my dev environment, I'm unable to reset
> offsets even when CURRENT-OFFSET is in bounds. Again, it returns as if the
> change took effect but describe still shows the orig
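For reference, kafka-consumer-groups.sh only previews a reset unless the --execute flag is passed (a sketch; the bootstrap server, group, and topic names below are placeholders):

```shell
# Dry run: prints the proposed new offsets but does NOT apply them.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest

# The same command with --execute actually commits the new offsets.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
```

Without --execute the tool behaves as a dry run, which matches the "returns as if the change took effect but describe still shows the original value" symptom.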
Testing kafka-consumer-groups.sh in my dev environment, I'm unable to reset
offsets even when CURRENT-OFFSET is in bounds. Again, it returns as if the
change took effect but describe still shows the original value, and client
behavior bears this out.
-shargan
On Wed, Sep 20, 2017 at 13:40 shargan
Oh wow, okay, not sure what it is then.
On Thu, Sep 21, 2017 at 11:57 AM, Elliot Crosby-McCullough <
elliot.crosby-mccullo...@freeagent.com> wrote:
> I cleared out the DB directories so the cluster is empty and no messages
> are being sent or received.
>
> On 21 September 2017 at 16:44, John Yost
Have you checked the EBS burst balance on the disks that the streams
application is running on?
On 21 September 2017 at 04:28, dev loper wrote:
> Hi Bill,
>
> I will repeat my tests with RocksDB enabled and I will get back to you with
> details. It might take 1-2 days to get back to you with deta
I cleared out the DB directories so the cluster is empty and no messages
are being sent or received.
On 21 September 2017 at 16:44, John Yost wrote:
> The only thing I can think of is message format...do the client and broker
> versions match? If the clients are a lower version than brokers (i.e
The only thing I can think of is message format...do the client and broker
versions match? If the clients are a lower version than brokers (i.e.,
0.9.0.1 client, 0.10.0.1 broker), then I think there could be message
format conversions both for incoming messages as well as for replication.
--John
This commit says that GPLv2 is an alternative license, so it's all good, I
believe:
https://github.com/facebook/rocksdb/commit/d616ebea23fa88cb9c2c8588533526a566d9cfab
Ismael
On Thu, Sep 21, 2017 at 4:21 PM, Ismael Juma wrote:
> LevelDB is New BSD (and the license in the first commit you linke
LevelDB is New BSD (and the license in the first commit you linked to is
definitely not GPL2):
https://github.com/google/leveldb/blob/master/LICENSE
The second commit you referenced adds GPL2 to the list of licenses in the
pom, but it's not clear why.
Ismael
On Thu, Sep 21, 2017 at 3:44 PM, St
The LevelDB GPL2 notice seems to have been added in RocksDB 5.7.1, but it
likely applies to older versions too.
https://github.com/facebook/rocksdb/commit/4a2e4891fe4c6f66fb9e8e2d29b04f46ee702b52#diff-7e1d2c46cd6eacd9a8d864450a128218
https://github.com/facebook/rocksdb/commit/6e3ee015fb1ce03e47838e9a3
Nothing, that value (that group of values) was at the default when we started
the debugging.
On 21 September 2017 at 15:08, Ismael Juma wrote:
> Thanks. What happens if you reduce num.replica.fetchers?
>
> On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
> elliot.crosby-mccullo...@freeage
Thanks. What happens if you reduce num.replica.fetchers?
On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
elliot.crosby-mccullo...@freeagent.com> wrote:
> 551 partitions, broker configs are:
> https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
>
> We tweaked it a bit fro
551 partitions, broker configs are:
https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
We tweaked it a bit from standard recently but that was as part of the
debugging process.
After some more experimentation I'm seeing the same behaviour at about half
the CPU after creating one 50
Hello Apache Kafka community,
Is it on purpose that kafka-streams 0.11.0.1 depends on
org.rocksdb:rocksdbjni:5.0.1 and not on the newer 5.7.3, because 5.0.1 has an
Apache 2 license while 5.7.3 also has GPL 2.0 parts?
Kind regards,
Stevo Slavic.
A couple of questions: how many partitions in the cluster and what are your
broker configs?
On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
elliot.crosby-mccullo...@freeagent.com> wrote:
> Hello,
>
> We've been trying to debug an issue with our kafka cluster for several days
> now and
Hello,
We've been trying to debug an issue with our kafka cluster for several days
now and we're close to out of options.
We have 3 kafka brokers associated with 3 zookeeper nodes and 3 registry
nodes, plus a few streams clients and a ruby producer.
Two of the three brokers are pinning a core an
Hi,
We are running the Confluent S3 connector (3.2.0) and we observed a sink task
not being able to commit offsets after a rebalance for about a week. It spits
"WorkerSinkTask:337 - Ignoring invalid task provided offset -- partition
not assigned" every time a new file was written to S3. Eventually after 7
da
Thanks!
2017-09-20 11:37 GMT+02:00 Stas Chizhov :
> Hi!
>
> I am wondering if there are broker/client metrics for:
> - client version (to keep track of clients that need an upgrade)
> - committed offsets (to detect situations when commits fail systematically
> with everything else being ok)
>
>
You can try the DumpLogSegments tool to verify messages from the log files. It
will give the compression type for each message.
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-
DumpLogSegment
On Thu, Sep 21, 2017 at 1:38 PM, Vincent Dautremont <
vincent.dautrem...@olamobile.com.
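A minimal invocation of the tool might look like this (a sketch; the segment file path is a placeholder for one of your topic-partition directories):

```shell
# Dump each message's offset, size, and compression codec from a log
# segment file. --print-data-log also prints the message payloads.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log \
  --print-data-log
```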
Hi,
Snappy keeps a lot of the input in plain text: look at the example where only
"pedia" is encoded/tokenized in the sentence.
https://en.wikipedia.org/wiki/Snappy_(compression)
> Wikipedia is a free, web-based, collaborative, multilingual encyclopedia
> project.
your data is then probably compresse
Hi,
If you want the Kafka broker to present the whole chain, you have to use the
chain when creating the PKCS12 file (use the chain instead of just the host
certificate). As you mentioned, the chain should be in the order 1) server
cert, 2) intermediate cert, and 3) root cert. It will be then automatical