If a Streams application is shut down via KafkaStreams#close() it should
be able to reuse its local state stores on restart.
As you mention that your application fails, I assume that some exception
happens within Streams and you catch this exception with an uncaught
exception handler to trigger some shutdown fo
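A minimal sketch of that pattern (Streams 0.10.x API; builder, props, and
LOGGER are assumed to exist elsewhere, and the handler body is illustrative):

KafkaStreams streams = new KafkaStreams(builder, props);
streams.setUncaughtExceptionHandler((thread, ex) -> {
    // close() joins the stream threads, so trigger it from a separate
    // thread rather than from inside the failing stream thread itself
    LOGGER.error("Uncaught exception in " + thread.getName(), ex);
    new Thread(streams::close).start();
});
streams.start();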
Hi
I had a question.
Say the stream application fails. We handle the shutdown gracefully.
We fix the issue and simply restart it without any application reset, so we
don't delete any internal changelog topics or local state stores.
Once it is restarted, does it create a new internal store by replaying
Hi,
We saw the following FATAL error on 2 of our brokers (3 in total; the active
controller doesn’t have this) and they crashed at the same time.
[2016-12-16 16:12:47,085] FATAL [ReplicaFetcherThread-0-3], Halting because log
truncation is not allowed for topic __consumer_offsets, Current lead
Hi,
What is the value of acks set for the Kafka internal topic __consumer_offsets?
I know the default replication factor for __consumer_offsets is 3, and
we are using version 0.9.0.1, and set min.insync.replicas = 2 in our
server.properties.
We noticed some partitions of __consumer_offsets have an ISR with 1
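(For reference, a sketch of the broker-side settings involved, as I
understand the 0.9 defaults; the values are illustrative:)

# server.properties
offsets.topic.replication.factor=3   # replication factor for __consumer_offsets
min.insync.replicas=2                # broker-wide default, internal topics included
offsets.commit.required.acks=-1      # default: offset commits wait for the full ISR

So the internal offsets topic is effectively written with acks=-1 (all)
unless offsets.commit.required.acks is overridden.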
Brokers failed repeatedly, leaving page cache behind in memory, which caused
broker restarts to fail with OOM every time.
After manually cleaning up the page cache, I was able to restart the broker.
However, I'm still wondering what could have caused this state in the first place.
Any ideas?
-Zakee
Thank you Rajini, your suggestion is really helpful.
[2016-12-16 21:55:36,720] DEBUG Principal =
User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown is
Allowed Operation = Create from host = 172.28.89.63 on resource =
Cluster:kafka-cluster (kafka.authorizer.logger)
Finally I am
Hello,
Is it correct that producers do not fail new connection establishment when
it exceeds the request timeout?
Running on AWS, we've encountered a problem where certain very low-volume
producers end up with metadata that's sufficiently stale that they attempt
to establish a connection to a bro
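The producer knobs involved, as a sketch (the ProducerConfig constants are
real; the broker address and values are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
// refresh metadata more often than the 5-minute default, so a low-volume
// producer is less likely to dial a broker that has since been replaced
props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "60000");
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");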
Looking for reasons why my installations seem to be generating so many
issues:
Starting an app which is
stream->aggregate->filter->foreach
While it's running, the system in question (AWS) averages >10% I/O wait with
spikes to 60-70%. The CPU load average is in the range of 3/2/1 (8-core
machine w/ 16G RAM
I guess. It's bugs, so it's always hard to be 100% sure.
We know about a null-pointer bug in task assignment/creation -- so I
assume that is what you see.
-Matthias
On 12/16/16 11:19 AM, Jon Yeargers wrote:
> And these bugs would cause the behaviors I'm seeing?
>
> On Fri, Dec 16, 2016 at 10:45 AM, Matthias J. Sax wrote:
Good question! Here's my understanding.
The streams API has a config num.standby.replicas. If this value is set to
0 (the default), then the local state will have to be recreated by
re-reading the relevant Kafka partition and replaying it into the state
store, and as you point out this will take
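A sketch of enabling a standby (the StreamsConfig constant is real; the app
id and broker address are illustrative):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
// keep one warm replica of each state store on another instance, so a
// failover does not need to replay the whole changelog
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);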
And these bugs would cause the behaviors I'm seeing?
On Fri, Dec 16, 2016 at 10:45 AM, Matthias J. Sax
wrote:
> We just discovered a couple of bugs with regard to standby tasks... Not
> all bug fix PRs got merged yet.
>
> You can try running on trunk to get those fixes. Should only be a few
> days until the fixes get merged.
Hi everyone,
I've been reading a lot about new features in Kafka Streams and everything
looks very promising. There is even an article on Kafka and Event Sourcing:
https://www.confluent.io/blog/event-sourcing-cqrs-stream-processing-apache-kafka-whats-connection/
There are a couple of things that
You need to set ssl.client.auth="required" in server.properties.
Regards,
Rajini
On Wed, Dec 14, 2016 at 12:12 AM, Raghu B wrote:
> Hi All,
>
> I am trying to enable ACLs in my Kafka cluster along with the SSL
> protocol.
>
> I tried each and every parameter, but no luck, so I need help
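A server.properties sketch of the usual SSL + ACL combination (paths,
passwords, and the super-user DN are placeholders):

listeners=SSL://:9093
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=/path/to/server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/server.truststore.jks
ssl.truststore.password=changeit
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:CN=kafka,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown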
We just discovered a couple of bugs with regard to standby tasks... Not
all bug fix PRs got merged yet.
You can try running on trunk to get those fixes. Should only be a few
days until the fixes get merged.
-Matthias
On 12/16/16 9:10 AM, Jon Yeargers wrote:
> Have started having this issue with
Hi Eno,
in this example it doesn't make much sense, but I could add a .mapValues
that transforms the values and then use .through to materialize a table with
the updated values.
It is possible to access the new table as a KeyValueStore also, but it
is much less convenient and I also expected t
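Something like this sketch (0.10.1-era KTable API; the topic and store
names are made up, and tableOne is assumed to be a KTable<String, Long>):

KTable<String, Long> updated = tableOne
    .mapValues(count -> count * 2)           // transform the values
    .through("updated-counts", "tableTwo");  // re-materialize under a new store name

"tableTwo" could then be queried by name, though as noted that is less
convenient than querying tableOne directly.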
Thanks, Hans, for the insight. Will use a compacted topic.
On Thu, Dec 15, 2016 at 3:53 PM, Hans Jespersen wrote:
> for #2, definitely use a compacted topic. Compaction will remove old
> messages and keep the last update for each key. To use this feature you
> will need to publish messages as Key/V
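A sketch of the publishing side, assuming the topic was created with
cleanup.policy=compact (serializers, topic, and key are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // compaction keeps the latest record per key, so every update for a
    // given entity must be published with the same key
    producer.send(new ProducerRecord<>("latest-values", "user-42", "state-v3"));
}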
Thanks Gerard & Derar for your valuable suggestions, but I am able to send
and receive messages with SSL (without ACL configuration).
I used only the SSL port on 9093 and enabled inter-broker communication as
SSL, but if I enable ACLs it creates issues.
Anyway, let me try once again from side fr
Hi Frank,
Is this happening when you are shutting down the app, or is the app shutting
down unexpectedly?
The state transition error is not the main issue; it's largely a side effect of
the unexpected transition, so the first error is probably the cause. However, I
can't tell what happened befo
Hi Mikael,
Currently that is not possible. Could you elaborate on why you'd need that,
since you can query from tableOne?
Thanks
Eno
> On 16 Dec 2016, at 10:45, Mikael Högqvist wrote:
>
> Hi,
>
> I have a small example topology that counts words per minute (scala):
>
> words
> .map { (key
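Querying tableOne would look roughly like this (0.10.1 interactive-queries
API; assumes streams is the running KafkaStreams instance and "tableOne" is
the store name; the key and window bounds are illustrative):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

ReadOnlyWindowStore<String, Long> store =
    streams.store("tableOne", QueryableStoreTypes.<String, Long>windowStore());
long now = System.currentTimeMillis();
try (WindowStoreIterator<Long> it = store.fetch("hello", now - 5 * 60 * 1000L, now)) {
    while (it.hasNext()) {
        KeyValue<Long, Long> entry = it.next(); // key is the window start timestamp
        System.out.println(entry.key + " -> " + entry.value);
    }
}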
I've started having this issue with another KStream-based app. Digging
through the logs I ran across this message:
When I've seen it before it certainly does kill the application. At the end
of the SNIP you can see the exit process starting.
2016-12-16 17:04:51,507 [StreamThread-1] DEBUG
o.a.k.s.p.i
Hi Ewen,
Thanks for the support. I ran MirrorMaker on the source cluster (0.8.1.0) and
that was successful.
Thanks, Vijayanand.
Sent from my iPhone
> On Dec 10, 2016, at 6:45 PM, Ewen Cheslack-Postava wrote:
>
> It's tough to read that stacktrace, but if I understand what you mean by
> "runn
> On Dec 15, 2016, at 16:39, Avi Flax wrote:
>
> Hi all,
>
> I apologize for flooding the list with questions lately. I guess I’m having a
> rough week.
>
> I thought my app was finally running fine after Damian’s help on Monday, but
> it turns out that it hasn’t been (successfully) consumin
Hi,
thank you for the nice clarification.
This is also described indirectly here, which I had not found before:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Partitioningandbootstrapping
The log level of this could perhaps be considered
Hi,
Can someone look at the question below and see if you can help? I've
tried to explain the issue on Stack Overflow:
http://stackoverflow.com/q/41177614/3705186
--Dhyan
FWIW I put a .warn in my shutdown hook - to make sure it wasn't being
called unknowingly.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        LOGGER.warn("ShutdownHook");
        kafkaStreams.close();
    } catch (Exception e) {
        // completion is an assumption: log and swallow so the JVM can exit
        LOGGER.error("Error closing streams in shutdown hook", e);
    }
}));
Hi people,
I'm running a Kafka Streams app (trunk build from a few days ago), and I see
this error from time to time:

Exception in thread "StreamThread-18" java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.StreamThread$5.apply(StreamThread.java:998)
    at org.apache.kafka.
Hi people,
I've just deployed my Kafka Streams / Connect application (I only use a
Connect sink to MongoDB) on a cluster of four instances (4 containers on 2
machines), and now it seems to get into a sort of rebalancing loop. I don't
get much in MongoDB; I've got a little bit of data at the beg
Hi,
I have a small example topology that counts words per minute (scala):

words
  .map { (key, word) =>
    new KeyValue(word, Long.box(1L))
  }
  .groupByKey(Serdes.String, Serdes.Long)
  .count(TimeWindows.of(5 * 60 * 1000L), tableOne)
  .through(new WindowedSerde, Se
Hi all,
we have a 0.9.0.0 cluster with 3 nodes and a topic with 3 replicas; we send
with acks = -1, and our average send latency is 7 ms. I am preparing to
optimize the performance of the cluster by adjusting some params.
We found our brokers have the config items below set:
log.flush.inte
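(The flush settings usually discussed here, as a sketch; these values are
illustrative, not the poster's actual configuration:)

# server.properties
log.flush.interval.messages=10000   # fsync after this many messages
log.flush.interval.ms=1000          # ...or after this many milliseconds
# by default both are effectively unset: Kafka relies on replication and
# the OS page cache rather than explicit fsyncs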
I'm seeing instances where my apps are exiting (gracefully, mind you)
without any obvious errors or cause. I have debug logs from many instances
of this and have yet to find a reason to explain what's happening.
- nothing in the app log
- nothing in /var/log/messages (IE not OOM killed)
- not being
Create a proper JKS with a certificate issued by a CA that the Kafka
brokers trust; the brokers will then see a principal with the DN from
your client cert. Spend more time on getting this done correctly and things
will work fine.
On Thu, Dec 15, 2016 at 9:11 PM, Gerard Klijs wrote:
> Most
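A rough keytool recipe for that (standard JDK tool; aliases, filenames,
and the DN are placeholders):

# 1. create the client key pair
keytool -genkeypair -alias client -keyalg RSA -keystore client.keystore.jks \
  -dname "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
# 2. create a signing request and have the CA the brokers trust sign it
keytool -certreq -alias client -keystore client.keystore.jks -file client.csr
# 3. import the CA certificate, then the signed client certificate
keytool -importcert -alias CARoot -keystore client.keystore.jks -file ca-cert.pem
keytool -importcert -alias client -keystore client.keystore.jks -file client-signed.pem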