kafka_2.11 is the Kafka server code plus the old Scala clients (built
against Scala 2.11). kafka-clients is the new Java client library.
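For reference (versions are illustrative; both artifacts live under the
groupId org.apache.kafka):

    org.apache.kafka:kafka_2.11:0.10.0.0     <- broker + old Scala clients
    org.apache.kafka:kafka-clients:0.10.0.0  <- new Java producer/consumer

If you only need to produce and consume from a Java application,
kafka-clients is the dependency you want.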
Thanks,
Grant
On Thu, Jun 23, 2016 at 9:25 PM, BYEONG-GI KIM wrote:
> Hello.
>
> I wonder what the difference is between kafka_2.11 and kafka-client on
> Maven Repo.
>
> Thank you in advance!
>
> B
In case it helps anyone else, we open-sourced our Nagios health check for
monitoring consumer group health using Burrow:
https://github.com/williamsjj/kafka_health
-J
Hello Ryan,
On the DSL layer, there is currently no support for record windows yet; we
are discussing adding such support in the future, maybe session windows
first and then others.
On the Processor API layer, you can definitely implement this "record
window" feature yourself by keeping tra
Hi
Kafka version used: 0.8.2.1
ZooKeeper version: 3.4.6
We have a scenario where the Kafka broker's node under the ZooKeeper path
/brokers/ids just disappears.
We see the ZooKeeper connection active and no network issue.
The ZooKeeper connection timeout is set to 6000 ms in server.properties.
Hence Kafka not part
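In case it helps the investigation: the entries under /brokers/ids are
ephemeral znodes, so they vanish whenever a broker's ZooKeeper session
expires. A quick way to watch the registrations directly (a sketch,
assuming ZooKeeper is reachable at localhost:2181):

    bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

Correlating the disappearance with session-expiry messages in the broker
and ZooKeeper logs usually narrows down the cause.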
Hello,
Say I have a stream, and want to determine whether or not a given "density"
of records matches a given condition. For example, let's say I want to know
how many of the last 10 records have a numerical value greater than 100.
Does the Kafka Streams DSL (or Processor API) provide a way to do th
That should work then. I would take some messages off the queue and verify
that they have the correct magic byte (byte 0).
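For example (a minimal sketch, assuming you fetch the raw value as a
byte[] with ByteArrayDeserializer; the Confluent wire format is the magic
byte 0x0, a 4-byte schema id, then the Avro payload):

    import java.nio.ByteBuffer;

    // Inspect a raw message value for the Confluent Avro wire format.
    static void inspect(byte[] value) {
        ByteBuffer buf = ByteBuffer.wrap(value);
        byte magic = buf.get();        // should be 0
        int schemaId = buf.getInt();   // big-endian schema registry id
        System.out.printf("magic=%d schemaId=%d%n", magic, schemaId);
    }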
-Dave
Dave Tauzell | Senior Software Engineer | Surescripts
O: 651.855.3042 | www.surescripts.com | dave.tauz...@surescripts.com
I'm also interested in knowing if other people have run into this problem
of different consumption speeds across consumers, and how they've dealt
with it. I've run into this in 0.7, 0.8, both beta and release, and now
0.9.0.1. It doesn't seem to be partition-specific, but consumer-specific.
In ou
So, I am just consuming from an already existing Kafka queue and topics.
According to our internal documentation, when we put event data into the
Kafka queue, each message has:
1 magic byte
4 bytes of Schema ID
Then Avro serialized data
And we have our Schema Registry server running.
Sungwook
On Thu,
How are you putting data onto the Topic? The HdfsSink expects that you used
the KafkaAvroSerializer
(http://docs.confluent.io/1.0/schema-registry/docs/serializer-formatter.html)
which prepends a null byte and schema registry id to the front of the
serialized avro data. If you just put avro on
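For reference, a sketch of the producer settings that emit that format
(the broker and registry addresses are placeholders):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9092");
    props.put("key.serializer",
        "io.confluent.kafka.serializers.KafkaAvroSerializer");
    props.put("value.serializer",
        "io.confluent.kafka.serializers.KafkaAvroSerializer");
    props.put("schema.registry.url", "http://schema-registry:8081");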
Hi,
I am testing kafka connect and got this error,
Exception in thread "WorkerSinkTask-local-file-sink-0"
org.apache.kafka.connect.errors.DataException: Failed to deserialize data
to Avro:
at
io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
at
org.apac
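This usually means the converter and the on-topic format don't match:
AvroConverter expects the Confluent wire format (magic byte + 4-byte
schema id + Avro payload). A sketch of the relevant worker settings,
assuming a Schema Registry at localhost:8081:

    key.converter=io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url=http://localhost:8081
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081

If the topic actually holds plain JSON instead,
org.apache.kafka.connect.json.JsonConverter would be the converter to use.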
Gwen,
I have selected priority 'minor' and component 'core', and have assigned no
labels.
Jira link: https://issues.apache.org/jira/browse/KAFKA-3895.
I have also added a question to the Jira issue, along with a rough approach
that I have in mind.
It would be great if you can have a look and provide comm
See the Keyhole Software blog, particularly John Boardman's presentation of
a sample app with a responsive web client using WebSockets connecting to a
Netty embedded web server that itself uses producer and consumer clients
with a Kafka infrastructure (@johnwboardman). On first look, it seems like
a val
That's a pretty cool feature, if anyone feels like opening a JIRA :)
On Thu, Jun 23, 2016 at 8:46 AM, Christian Posta
wrote:
> Sounds like something a traditional message broker (i.e., ActiveMQ) would be
> able to do with a TTL setting and expiry. Expired messages get moved to a
> DLQ.
>
> On Thu, J
Tom,
When you say this:
"Deletion can happen at different times on the different replicas of the
log, and to different messages. Whilst a consumer will only be reading from
the lead broker for any log at any one time, the leader can and will change
to handle broker failure."
basically it means tha
Well, we are already using Kafka and would like to get this feature.
How hard can it be to hack it and use a custom kafka!? ;)
Let me look up the source code (never have checked it) and see what can be
done.
Thanks Tom and Christian, for helping me decide fast.
--
κρισhναν
On Thu, Jun 23, 2016
Hey Kafka experts,
After having read Jay Kreps' awesome Kafka article (
https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying)
I have a question.
For communication between browsers (let's say collaborative editing, chat,
etc.) is
Sounds like something a traditional message broker (i.e., ActiveMQ) would be
able to do with a TTL setting and expiry. Expired messages get moved to a
DLQ.
On Thu, Jun 23, 2016 at 2:45 AM, Krish wrote:
> Hi,
> I am trying to design a real-time application where message timeout can be
> as low as a
No, there's no control over that. The right way to do this is to keep up
with the head of the topic and decide on "old" yourself in the consumer.
Deletion can happen at different times on the different replicas of the
log, and to different messages. Whilst a consumer will only be reading from
the
Thanks Tom.
Is there any way a consumer can be triggered when the message is about to
be deleted by Kafka?
--
κρισhναν
On Thu, Jun 23, 2016 at 6:16 PM, Tom Crayford wrote:
> Hi,
>
> A pretty reasonable thing to do here would be to have a consumer that
> moved "old" events to another topic.
>
Hi,
A pretty reasonable thing to do here would be to have a consumer that moved
"old" events to another topic.
Kafka has no concept of an expired queue; the only thing it can do once a
message has aged out is delete it. The deletion is done in bulk and is
typically set to 24h or even higher (Linke
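A minimal sketch of such a consumer (the topic names, broker address, and
one-minute cutoff are placeholders; it assumes 0.10-style record
timestamps, otherwise you would read an event time out of the message
itself):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Collections;
    import java.util.Properties;

    public class ExpiredMover {
        public static void main(String[] args) {
            Properties cp = new Properties();
            cp.put("bootstrap.servers", "broker:9092");
            cp.put("group.id", "expired-mover");
            cp.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            cp.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            Properties pp = new Properties();
            pp.put("bootstrap.servers", "broker:9092");
            pp.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            pp.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

            long maxAgeMs = 60_000L;   // "stale" after one minute

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(cp);
                 KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(pp)) {
                consumer.subscribe(Collections.singletonList("events"));
                while (true) {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                    for (ConsumerRecord<byte[], byte[]> rec : records) {
                        // Republish anything older than the cutoff.
                        if (System.currentTimeMillis() - rec.timestamp() > maxAgeMs) {
                            producer.send(new ProducerRecord<>(
                                "events-expired", rec.key(), rec.value()));
                        }
                    }
                }
            }
        }
    }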
That particular tool doesn't seem to support SSL, at least not in the 0.10
version.
On Thu, Jun 23, 2016 at 9:17 AM Radu Radutiu wrote:
> I have read the documentation and I can connect the consumer and producer
> successfully with SSL. However I have trouble running other scripts like
>
> bin/kafka
Hi,
I am trying to design a real-time application where the message timeout can
be as low as a minute or two (messages can go stale really fast).
In the rare chance that the consumers lag too far behind in processing
messages from the broker, is there a concept of expired message queue in
Kafka?
I woul
/tmp is not a good location for storing files. It will get cleaned up
periodically, depending on your Linux distribution.
Radu
On 22 June 2016 at 19:33, Misra, Rahul wrote:
> Hi Madhukar,
>
> Thanks for your quick response. The path is "/tmp/kafka-logs/". But the
> servers have not been restart
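For example, point log.dirs at a dedicated data directory in
server.properties (the path below is just an illustration):

    log.dirs=/var/lib/kafka-logs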
I have read the documentation and I can connect the consumer and producer
successfully with SSL. However I have trouble running other scripts like
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list
{brokerUrl} --topic {topicName} --time -2
if the broker is configured with SSL only.
R