This looks correct. Sorry, not sure what else it could be.
On Sat, Jul 30, 2016 at 4:24 AM, Sean Morris (semorris)
wrote:
> Kafka 0.9.0.1
> Zookeeper 3.4.6
> Zkclient 0.7
>
> I have verified I only have one zkclient.jar in my class path.
>
> Thanks,
> Sean
>
>
Hi,
You know the famous "Powered by Kafka" page?
https://cwiki.apache.org/confluence/display/KAFKA/Powered+By
Where the cool companies are showing off their use of Kafka?
We want to do the same for Kafka Connect and Kafka Streams - showcase
the early adopters of the technology.
If you are using e
No. If you want automatic update, you need to use the same broker id.
Many deployments use EBS to store their broker data. The
auto-generated id is stored with the data, so if a broker dies they
install a new machine and connect it to the existing EBS volume and
immediately get both the old id and
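For illustration, the generated id is persisted in a meta.properties file
under each log directory, which is why attaching the old EBS volume is
enough. A simplified sketch (the id value here is made up; generated ids
start above reserved.broker.max.id, 1000 by default):

  # <log.dir>/meta.properties
  version=0
  broker.id=1001

Auto-generation itself is controlled by broker.id.generation.enable=true
in the broker config.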
Can you define a DNS name that round-robins to multiple IP addresses?
This way ZKClient will cache the name and you can rotate IPs behind
the scenes with no issues.
On Wed, Aug 3, 2016 at 7:22 AM, Zuber wrote:
> Hello –
>
> We are planning to use Kafka as Event Store in a system which is being
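On the DNS idea above, the connection string would then reference a single
name instead of listing every ensemble member. A sketch, with a made-up
hostname:

  # zk.example.com resolves round-robin to the ensemble members
  zookeeper.connect=zk.example.com:2181

The IPs behind zk.example.com can then be rotated without touching client
configuration.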
MirrorMaker actually doesn't have a default - it uses what you
configured in the consumer.properties file you use.
Either:
auto.offset.reset = latest (biggest in old versions)
or
auto.offset.reset = earliest (smallest in old versions)
So you can choose what happens when MirrorMaker first comes up, if
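For example, a minimal consumer.properties for MirrorMaker might look like
this (assuming the Java consumer; the broker address and group name are
placeholders):

  bootstrap.servers=source-cluster:9092
  group.id=mirror-maker-group
  auto.offset.reset=earliest

With earliest, a brand-new MirrorMaker group copies the full retained
history; with latest, it only mirrors records produced after it starts.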
luent.io/job/system-test-kafka-0.10.0/138/
> <https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/>*
>
> Thanks,
> Ismael
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ctor-hdfs in HDP? I won't be able to install
> Confluent platform there though... I would appreciate any pointers. Thanks.
>
> -B
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
kafka-connect-hdfs/quickstart-hdfs.properties
>
> And here is my quickstart-hdfs.properties:
>
> name=hdfs-sink
> connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
> tasks.max=1
> topics=hdfs
> hdfs.url=hdfs://sandbox.hortonworks.com:8020
> flush.size=3
>
> Thanks,
>
,
> Damian Guy, Dustin Cote, Edoardo Comar, Eno Thereska, Ewen
> Cheslack-Postava, Flavio Junqueira, Florian Hussonnois, Geoff Anderson,
> Grant Henke, Greg Fodor, Guozhang Wang, Gwen Shapira, Henry Cai, Ismael
> Juma, Jason Gustafson, Jeff Klukas, Jendrik Poloczek, Jeyhun Karimov,
> Liq
hods of
> kafka-connect? For example I want to put thread.sleeps in kafka-streams
> side while transferring data and see the behaviour in kafka side or in
> application side. You can think of as simulation of load.
>
> Cheers
> Jeyhun
>
>
> --
> -Cheers
>
> Jey
Well deserved, Jason. Looking forward to your future contributions :)
On Tue, Sep 6, 2016 at 3:29 PM, Guozhang Wang wrote:
> Welcome, and really happy to have you onboard Jason!
>
>
> Guozhang
>
> On Tue, Sep 6, 2016 at 3:25 PM, Neha Narkhede wrote:
>
>> The PMC for Apache Kafka has invited Jaso
rg.apache.kafka.connect.json.JsonConverter.asConnectSchema(JsonConverter.java:493)
> at
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:344)
> at
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:334)
> at
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:266)
> at
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:175)
> at
> org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
> at
> org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
> at
> org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
>
> Thanks,
> Sri
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ah, never mind - I just noticed you do use a schema... Maybe you are
running into this? https://issues.apache.org/jira/browse/KAFKA-3055
On Thu, Sep 15, 2016 at 4:20 PM, Gwen Shapira wrote:
> Most people use JSON without schema, so you should probably change
> your configurat
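For reference, schemaless JSON in Connect is a worker (or connector)
setting along these lines:

  key.converter=org.apache.kafka.connect.json.JsonConverter
  key.converter.schemas.enable=false
  value.converter=org.apache.kafka.connect.json.JsonConverter
  value.converter.schemas.enable=false

With schemas.enable=true, the converter expects every message to be an
envelope with "schema" and "payload" fields, which is a common source of
the exception quoted above.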
there.
Comments, improvements, and contributions are welcome and encouraged.
--
Gwen Shapira
this lag data at the
> server level.
>
> I am looking for best way to get the lag value and monitor it using kibana or
> grafana.
>
> Please suggest what is the best approach for this.
>
> Thanks and Regards
> Vikas Bhatia
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
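For reference, a common way to pull per-partition lag is the consumer
groups tool (group name and broker address are placeholders; exact flags
differ a bit across versions):

  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-group

The output includes a LAG column per partition, which can be scraped on a
schedule and shipped to Elasticsearch for Kibana/Grafana dashboards.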
>>>> >>> >> >>>>> be
>>>> >>> >> >>>>> used, it must have a custom backend + front-end.
>>>> >>> >> >>>>>
>>>> >>> >> >>>>> Thanks for the recommendation of Flume. Do you think this
>>>> >>> >> >>>>> will
>>>> >>> >> >>>>> work:
>>>> >>> >> >>>>>
>>>> >>> >> >>>>> - Spark Streaming to read data from Kafka
>>>> >>> >> >>>>> - Storing the data on HDFS using Flume
>>>> >>> >> >>>>> - Using Spark to query the data in the backend of the web
>>>> >>> >> >>>>> UI?
>>>> >>> >> >>>>>
>>>> >>> >> >>>>>
>>>> >>> >> >>>>>
>>>> >>> >> >>>>> On Thu, Sep 29, 2016 at 7:08 PM, Mich Talebzadeh
>>>> >>> >> >>>>> wrote:
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> You need a batch layer and a speed layer. Data from Kafka
>>>> >>> >> >>>>>> can
>>>> >>> >> >>>>>> be
>>>> >>> >> >>>>>> stored on HDFS using flume.
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> - Query this data to generate reports / analytics (There
>>>> >>> >> >>>>>> will
>>>> >>> >> >>>>>> be a
>>>> >>> >> >>>>>> web UI which will be the front-end to the data, and will
>>>> >>> >> >>>>>> show
>>>> >>> >> >>>>>> the
>>>> >>> >> >>>>>> reports)
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> This is basically batch layer and you need something like
>>>> >>> >> >>>>>> Tableau
>>>> >>> >> >>>>>> or
>>>> >>> >> >>>>>> Zeppelin to query data
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> You will also need spark streaming to query data online for
>>>> >>> >> >>>>>> speed
>>>> >>> >> >>>>>> layer. That data could be stored in some transient fabric
>>>> >>> >> >>>>>> like
>>>> >>> >> >>>>>> ignite or
>>>> >>> >> >>>>>> even druid.
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> HTH
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> Dr Mich Talebzadeh
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> LinkedIn
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> http://talebzadehmich.wordpress.com
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> Disclaimer: Use it at your own risk. Any and all
>>>> >>> >> >>>>>> responsibility
>>>> >>> >> >>>>>> for
>>>> >>> >> >>>>>> any loss, damage or destruction of data or any other
>>>> >>> >> >>>>>> property
>>>> >>> >> >>>>>> which
>>>> >>> >> >>>>>> may
>>>> >>> >> >>>>>> arise from relying on this email's technical content is
>>>> >>> >> >>>>>> explicitly
>>>> >>> >> >>>>>> disclaimed. The author will in no case be liable for any
>>>> >>> >> >>>>>> monetary
>>>> >>> >> >>>>>> damages
>>>> >>> >> >>>>>> arising from such loss, damage or destruction.
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> On 29 September 2016 at 15:01, Ali Akhtar
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>> wrote:
>>>> >>> >> >>>>>>>
>>>> >>> >> >>>>>>> It needs to be able to scale to a very large amount of
>>>> >>> >> >>>>>>> data,
>>>> >>> >> >>>>>>> yes.
>>>> >>> >> >>>>>>>
>>>> >>> >> >>>>>>> On Thu, Sep 29, 2016 at 7:00 PM, Deepak Sharma
>>>> >>> >> >>>>>>> wrote:
>>>> >>> >> >>>>>>>>
>>>> >>> >> >>>>>>>> What is the message inflow ?
>>>> >>> >> >>>>>>>> If it's really high, definitely Spark will be of great
>>>> >>> >> >>>>>>>> use.
>>>> >>> >> >>>>>>>>
>>>> >>> >> >>>>>>>> Thanks
>>>> >>> >> >>>>>>>> Deepak
>>>> >>> >> >>>>>>>>
>>>> >>> >> >>>>>>>>
>>>> >>> >> >>>>>>>> On Sep 29, 2016 19:24, "Ali Akhtar"
>>>> >>> >> >>>>>>>>
>>>> >>> >> >>>>>>>> wrote:
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> I have a somewhat tricky use case, and I'm looking for
>>>> >>> >> >>>>>>>>> ideas.
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> I have 5-6 Kafka producers, reading various APIs, and
>>>> >>> >> >>>>>>>>> writing
>>>> >>> >> >>>>>>>>> their
>>>> >>> >> >>>>>>>>> raw data into Kafka.
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> I need to:
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> - Do ETL on the data, and standardize it.
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> - Store the standardized data somewhere (HBase /
>>>> >>> >> >>>>>>>>> Cassandra /
>>>> >>> >> >>>>>>>>> Raw
>>>> >>> >> >>>>>>>>> HDFS / ElasticSearch / Postgres)
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> - Query this data to generate reports / analytics (There
>>>> >>> >> >>>>>>>>> will be
>>>> >>> >> >>>>>>>>> a
>>>> >>> >> >>>>>>>>> web UI which will be the front-end to the data, and will
>>>> >>> >> >>>>>>>>> show
>>>> >>> >> >>>>>>>>> the reports)
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> Java is being used as the backend language for
>>>> >>> >> >>>>>>>>> everything
>>>> >>> >> >>>>>>>>> (backend
>>>> >>> >> >>>>>>>>> of the web UI, as well as the ETL layer)
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> I'm considering:
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> - Using raw Kafka consumers, or Spark Streaming, as the
>>>> >>> >> >>>>>>>>> ETL
>>>> >>> >> >>>>>>>>> layer
>>>> >>> >> >>>>>>>>> (receive raw data from Kafka, standardize & store it)
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> - Using Cassandra, HBase, or raw HDFS, for storing the
>>>> >>> >> >>>>>>>>> standardized
>>>> >>> >> >>>>>>>>> data, and to allow queries
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> - In the backend of the web UI, I could either use Spark
>>>> >>> >> >>>>>>>>> to
>>>> >>> >> >>>>>>>>> run
>>>> >>> >> >>>>>>>>> queries across the data (mostly filters), or directly
>>>> >>> >> >>>>>>>>> run
>>>> >>> >> >>>>>>>>> queries against
>>>> >>> >> >>>>>>>>> Cassandra / HBase
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> I'd appreciate some thoughts / suggestions on which of
>>>> >>> >> >>>>>>>>> these
>>>> >>> >> >>>>>>>>> alternatives I should go with (e.g, using raw Kafka
>>>> >>> >> >>>>>>>>> consumers vs
>>>> >>> >> >>>>>>>>> Spark for
>>>> >>> >> >>>>>>>>> ETL, which persistent data store to use, and how to
>>>> >>> >> >>>>>>>>> query
>>>> >>> >> >>>>>>>>> that
>>>> >>> >> >>>>>>>>> data store in
>>>> >>> >> >>>>>>>>> the backend of the web UI, for displaying the reports).
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>>
>>>> >>> >> >>>>>>>>> Thanks.
>>>> >>> >> >>>>>>>
>>>> >>> >> >>>>>>>
>>>> >>> >> >>>>>>
>>>> >>> >> >>>>>
>>>> >>> >> >>>>
>>>> >>> >> >>>
>>>> >>> >> >>
>>>> >>> >> >>
>>>> >>> >> >>
>>>> >>> >> >> --
>>>> >>> >> >> Thanks
>>>> >>> >> >> Deepak
>>>> >>> >> >> www.bigdatabig.com
>>>> >>> >> >> www.keosha.net
>>>> >>> >> >
>>>> >>> >> >
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> -
>>>> >>> >> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>>> >>> >>
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> > --
>>>> >>> > Thanks
>>>> >>> > Deepak
>>>> >>> > www.bigdatabig.com
>>>> >>> > www.keosha.net
>>>> >>
>>>> >>
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > Thanks
>>>> > Deepak
>>>> > www.bigdatabig.com
>>>> > www.keosha.net
>>>
>>>
>>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Hi Team Kafka,
I just merged PR 20 to our website - which gives it a new (and IMO
pretty snazzy) look and feel. Thanks to Derrick Or for contributing
the update.
I had to do a hard-refresh (shift-f5 on my mac) to get the new look to
load properly - so if stuff looks off, try it.
Comments and con
?
>
> What would be a good way to generate keys in this case, to ensure even
> partition spread?
>
> Thanks.
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
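For context, the default partitioner hashes the key bytes and takes the
result modulo the partition count, so keys with high cardinality spread
evenly. A rough sketch of the logic (simplified; murmur2Hash stands in for
Kafka's internal Utils.murmur2):

  // keyed records: the same key always lands on the same partition
  int partition = (murmur2Hash(keyBytes) & 0x7fffffff) % numPartitions;

Records with a null key are distributed across partitions instead of
being hashed.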
public
> benchmarking for 10GBe, I'd be happy to run benchmarks /publish results on
> this hardware if we can get it tuned up properly.
>
> What kind of broker/producer/consumer settings would you recommend?
>
> Thanks!
> - chris
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
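Generic starting points for throughput testing on fast networks (these are
illustrative values, not recommendations from this thread):

  # producer
  acks=1
  compression.type=lz4
  batch.size=262144
  linger.ms=10
  # broker
  num.network.threads=8
  num.io.threads=16

From there, measure and adjust batch.size and linger.ms against the
observed network utilization.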
ction-invert-key-value-the) which obviously it doesn't
>> > find.
>> > > >
>> > > > Does somebody have a solution to delete it?
>> > > >
>> > > > Thanks in advance.
>> > > >
>> > > >
>> > > > Hamza
>> > > >
>> > > >
>> > >
>> >
>> > --
>> >
>> >
>> > This email, including attachments, is private and confidential. If you
>> have
>> > received this email in error please notify the sender and delete it from
>> > your system. Emails are not secure and may contain viruses. No liability
>> > can be accepted for viruses that might be transferred by this email or
>> any
>> > attachment. Any unauthorised copying of this message or unauthorised
>> > distribution and publication of the information contained herein are
>> > prohibited.
>> >
>> > 7digital Limited. Registered office: 69 Wilson Street, London EC2A 2BB.
>> > Registered in England and Wales. Registered No. 04843573.
>> >
>>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
er implementation in Apache Kafka.
>
> Thanks,
> Harsha
>
--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
Oops. Sorry, didn't notice the 72h voting period has passed. You can
disregard.
Gwen
On Sat, Oct 29, 2016 at 4:29 PM, Gwen Shapira wrote:
> -1
>
> Kafka's development model is a good fit for critical path and
> well-established APIs. It doesn't work as well for add
as a committer and look forward to your
> continued participation!
>
> Joel
>
--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
itter : @ppatierno<http://twitter.com/ppatierno>
> Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> Blog : DevExperience<http://paolopatierno.wordpress.com/>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
> real-time data pipelines on Apache Kafka.
>
> Thx!!
> Kenny Gorman
> Founder
> www.eventador.io
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ze);
>
>
>
>
> From: Henry Kim
> Sent: Wednesday, November 2, 2016 2:46:27 PM
> To: users@kafka.apache.org
> Subject: HDFS Connector Compression?
>
>
> Is it possible to add compression to the HDFS Connector out of the box? Or
> doe
)) {
> // WARNING: if there is a rebalance, this call may return some records!!!
> consumer.poll(0);
> Uninterruptibles.sleepUninterruptibly(pauseWait, TimeUnit.MILLISECONDS);
> }
>
> consumer.resume(consumer.assignment().toArray(EMPTYTPARRAY));
>
>
> Thanks,
>
> Paul
>
>
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
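The quoted loop is partial; a self-contained sketch of the same
pause/poll/resume pattern (0.10.x-era API, where pause() and resume() take
a Collection<TopicPartition>; stillWaiting() and pauseWait are
placeholders):

  consumer.pause(consumer.assignment());   // stop fetching, keep the group session alive
  while (stillWaiting()) {
      consumer.poll(0);                    // caution: may return records right after a rebalance
      Uninterruptibles.sleepUninterruptibly(pauseWait, TimeUnit.MILLISECONDS);
  }
  consumer.resume(consumer.assignment()); // resume the current assignment, not a stale copy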
end, here's the community
discount code: KS18Comm25
Looking forward to your amazing abstracts and to see you all there.
Gwen Shapira
so sometimes you are waiting for a new segment to get created before
an old one is deleted.
>
> Thanks for the help!
> Simon Cooper
>
--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
>
>
> * Protocol:
>
> http://kafka.apache.org/20/protocol.html
>
>
> * Successful Jenkins builds for the 2.0 branch:
>
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/
>
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/
> 27/
>
>
> /**
>
>
> Thanks,
>
>
> Rajini
>
--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
Congrats Dong Lin! Well deserved!
On Mon, Aug 20, 2018, 3:55 AM Ismael Juma wrote:
> Hi everyone,
>
> Dong Lin became a committer in March 2018. Since then, he has remained
> active in the community and contributed a number of patches, reviewed
> several pull requests and participated in numerou
> * Documentation:
> http://kafka.apache.org/20/documentation.html
>
> * Protocol:
> http://kafka.apache.org/20/protocol.html
>
> * Successful Jenkins builds for the 2.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/177/
>
> /**
Congrats, Vahid. Thank you for all your contributions!
On Tue, Jan 15, 2019, 2:45 PM Jason Gustafson wrote:
> Hi All,
>
> The PMC for Apache Kafka has invited Vahid Hashemian as a project
> committer and
> we are
> pleased to announce that he has accepted!
>
> Vahid has made numerous contributions to the K
tag:
> https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>
> * Jenkins builds for the 2.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>
> Thanks to everyone who tested the earlier RCs.
>
> cheers,
> Colin
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Yay!
Thanks for running the release Colin, and to everyone who reported and
fixed bugs :)
On Tue, Feb 19, 2019, 3:37 PM Colin McCabe wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.1.1.
>
> This is a bugfix release for Kafka 2.1.0. All of the changes
; Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.2/
>
> /**
>
> Thanks,
>
> -Matthias
>
>
--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
c1/javadoc/
> >
> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.2.1-rc1
> >
> > * Documentation:
> > https://kafka.apache.org/22/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/22/protocol.html
> >
> > * Successful Jenkins builds for the 2.2 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
> >
> > Thanks!
> > --Vahid
> >
>
>
> --
>
> Thanks!
> --Vahid
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
n:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~cmccabe/kafka-2.3.0-rc3/javadoc/
> >
> > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.3.0-rc3
> >
> > best,
> > Colin
> >
> > C.
> >
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
David,
Why do we have two site-doc packages, one for each Scala version? It
is just HTML, right? IIRC, in previous releases we only packaged the
docs once?
Gwen
On Fri, Oct 4, 2019 at 6:52 PM David Arthur wrote:
>
> Hello all, we identified a few bugs and a dependency update we wanted to
> get
Congratulations Mickael! Well deserved!
On Thu, Nov 7, 2019 at 1:38 PM Jun Rao wrote:
>
> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer Mickael
> Maison.
>
> Mickael has been contributing to Kafka since 2016. He proposed and
> implemented multiple KIPs. He
+1 (binding)
Validated signatures, tests and ran some test workloads.
Thank you so much for driving this. Mani.
On Mon, Dec 9, 2019 at 9:32 AM Manikumar wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is the fifth candidate for release of Apache Kafka 2.4.0.
>
> This re
Hi everyone,
I'm happy to announce that Colin McCabe, Vahid Hashemian and Manikumar
Reddy are now members of Apache Kafka PMC.
Colin and Manikumar became committers on Sept 2018 and Vahid on Jan
2019. They all contributed many patches, code reviews and participated
in many KIP discussions. We app
Oh wow, I love this checklist. I don't think we'll have time to create one for
this release, but it will be great to track this via JIRA and see if we can get
all those contributed before 2.6...
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | bl
ccessful Jenkins builds for the 2.6 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/101/
> System tests: (link to follow)
>
>
> Thanks,
> Randall Hauch
>
--
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Apache Kafka community.
--
Gwen Shapira
>
> Thanks for all the contributions, Sophie!
>
>
> Please join me to congratulate her!
> -Matthias
>
--
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
releases/tag/2.7.0-rc3
>
> * Documentation:
> https://kafka.apache.org/27/documentation.html
>
> * Protocol:
> https://kafka.apache.org/27/protocol.html
>
> * Successful Jenkins builds for the 2.7 branch:
> Unit/integration tests: (link to follow)
> System tests: (link to follow)
>
> Thanks,
> Bill
--
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ilds for the 2.7 branch:
> Unit/integration tests:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/81/
>
> Thanks,
> Bill
--
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
horsten Hake, Tom
> Bentley, tswstarplanet, vamossagar12, Vikas Singh, vinoth chandar, Vito
> Jeng, voffcheg109, xakassi, Xavier Léauté, Yuriy Badalyantc, Zach Zhang
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
> Bill Bejeck
--
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
PMC.
Congratulations, David!
Gwen Shapira, on behalf of Apache Kafka PMC
Exciting! Thanks for driving the release, David.
On Mon, Jan 24, 2022 at 9:04 AM David Jacot wrote:
>
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.1.0.
>
> It is a major release that includes many new features, including:
>
> * Apache Kafka supports Java 17
> ________
> From: Gwen Shapira
> Sent: Monday, November 07, 2016 3:34:39 PM
> To: Users
> Subject: Re: consumer client pause/resume/rebalance
>
> I think the current behavior is fairly reasonable. Following a
> rebalance the entire state of the cons
ully handling this instead of
> restarting the brokers?
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
>> documentation, if you want to upgrade ZooKeeper, you cannot avoid downtime.
>>
>> Thank you,
>> Amit
>>
>> On Thursday, November 10, 2016, ZHU Hua B
>> wrote:
>>
>> > Hi,
>> >
>> >
>> > For a rolling upgra
> would be a simpler user interface for this common use case.
>
>
> --
> Cheers,
> Andrew
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Thank you, Vahid!
On Wed, Nov 16, 2016 at 1:53 PM, Vahid S Hashemian
wrote:
> I'll open a JIRA.
>
> Andrew, let me know if you want to take over the implementation.
> Otherwise, I'd be happy to work on it.
>
> Thanks.
> --Vahid
>
>
>
>
> From:
uld not find it.
>
> Thanks.
>
> --
> Raghav
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
are intended solely for the use of the individual
> or entity to whom they are addressed. If you have received this e-mail in
> error, please notify the sender by reply e-mail immediately and destroy all
> copies of the e-mail and any attachments.
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
on your
> website.
>
> Greatly appreciated!
>
> Costa Tsirbas
> 514.443.1439
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
Hey Kafka Community,
I'm trying to take a pulse on the current state of the Kafka clients ecosystem.
Which languages are most popular in our community? What does the
community value in clients?
You can help me out by filling in the survey:
https://goo.gl/forms/cZg1CJyf1PuqivTg2
I will lock the s
technical debt if the costs weren't too high. If
> there are major issues then I can take on the client upgrade as well.
>
> Thanks in advance!
>
> --
>
> In Christ,
>
> Timmy V.
>
> http://blog.twonegatives.com/
> http://five.sentenc.es/ -- Spend less time
y by electronic mail and delete this message and
> all copies and backups thereof. Thank you. Greenway Health.
> This e-mail and any files transmitted with it are confidential, may contain
> sensitive information, and are intended solely for the use of the individual
> or entity to whom t
y. We don’t
>> want to presume any particular message size, and may not want to cache
>> the entire message in memory while processing it. Is there an
>> interface where we can consume messages via a stream, so that we can
>> read chunks of a message and process them based on some kind of batch
>> size that will allow us better control over memory usage?
>>
>>
>>
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ance to fix this such
> that a restart within the heartbeat interval does not lead to a re-balance?
> Would a well defined client.id help?
>
> Regards
> Harald
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
f I
> remove com.sun.management.jmxremote it takes 6 seconds but this is still
> much longer than I would have expected.
>
> Any suggestions on how to speed things up?
>
--
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
org/confluence/display/KAFKA/Contributing+Website+
> Documentation+Changes
>
>
> We are trying to do the same for Connect, Ops, Configs, APIs etc in the
> near future. Any comments, improvements, and contributions are welcome and
> encouraged.
>
>
> --
> -- Guozhang
>
pon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=c3638376708ee6c02dfe4e57747acae0126fa6e7
>
>
> Thanks,
> Guozhang
>
> --
> -- Guozhang
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
is waiting for your abstracts :)
--
Gwen Shapira
port in future minor releases.
>>
>>
>> * Javadoc:
>> http://home.apache.org/~guozhang/kafka-0.10.1.1-rc1/javadoc/
>>
>> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> c3638376708ee6c02dfe4e57747acae0126fa6e7
>>
>>
>> Thanks,
>> Guozhang
>>
>> --
>> -- Guozhang
>>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
+-+Alter+Replication+Protocol+to+use+Leader+Epoch+rather+than+High+Watermark+for+Truncation
>
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-101+-+Alter+Replication+Protocol+to+use+Leader+Epoch+rather+than+High+Watermark+for+Truncation>
>
> Please let us know your v
contains both key and
> value.
>
> Can some one suggest/outline the general guidelines for keys to be used
> with K-V store from the SinkRecord.
>
> What should be the key for external K-V store to be used to store a records
> from kafka topics to external K-V store.
>
>
> columns: id and json. The id column is basically topic+partition+offset (to
>> guarantee idempotence on upserts), and the json column is basically the
>> json document
>>
>> Is that feasible using the out of the box JDBC connector? I didn’t see any
>> support for “json type” fields
>>
>> Thanks,
>> Stephane
>>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
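For what it's worth, the per-record coordinates are available on the
SinkRecord itself, so an idempotent id can be built like this (a sketch):

  // topic+partition+offset uniquely identifies a record within a cluster
  String id = record.topic() + "+" + record.kafkaPartition() + "+" + record.kafkaOffset();

Upserting on that id makes redelivery after a rebalance or restart
harmless.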
ctAppender.File=${kafka.logs.dir}/connect.log
> log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.connectAppender.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
>
> Thank you!
> Eric
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
l all enjoy this? :)
On Tue, Jan 10, 2017 at 10:23 PM, Gwen Shapira wrote:
> Is your goal to simply log connect to file rather than to the console?
> In this case your configuration is almost right. Just change the first
> line in connect-log4j.properties to:
>
> log4j.roo
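The truncated line above presumably sets the root logger to write to the
file appender, something like:

  log4j.rootLogger=INFO, connectAppender

(i.e., replacing the default console appender with the connectAppender
defined earlier in connect-log4j.properties).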
5;
>
>
> Can someone help me understand if this value is supposed to be 'average
> partition count per topic' or 'total partition count for all topics'?
>
> I want to have separate JMX metric for partition count of each topic. Can
> someone point me to configu
Admin protocol. Throughout this, he
displayed great technical judgment, high-quality work and willingness
to contribute where needed to make Apache Kafka awesome.
Thank you for your contributions, Grant :)
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
>
>>>
>>> This message is for the designated recipient only and may contain
>>> privileged, proprietary, or otherwise confidential information. If you have
>>> received it in error, please notify the sender immediately and delete the
>>> original. Any other use of the e-mail by you is prohibited. Thank you in
>>> advance for your cooperation.
>>>
>>>
>>
>>
>>
>>
>>
>>
>> This message is for the designated recipient only and may contain
>> privileged, proprietary, or otherwise confidential information. If you have
>> received it in error, please notify the sender immediately and delete the
>> original. Any other use of the e-mail by you is prohibited. Thank you in
>> advance for your cooperation.
>>
>>
>>
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
> within aggregations. Instead, we can just send a "tombstone" without
>>>>>>>>>
>>>>>>>> the
>>>>>>
>>>>>>> old value and we do not need to calculate joins twice (one more time
>>>>>>
rent SNAPSHOT version. So the question arises, if
> this would be helpful for the Kafka community, too.
>
> The idea would be, to update SNAPSHOT docs (web page and JavaDocs) on a
> daily basis based on trunk (of course, fully automated).
>
>
> Looking forward to your feedback.
>
ith cost per year?
>
> Thanks
> Lincu
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ssage to initiate that
>>> process.
>>>>> I
>>>>>> might need to do the clean-up as part of the Connect code instead, or
>>>>> there
>>>>>> is a better way of doing that?
>>>>>>
>>>>>> Thanks,
>>>>>> Eric
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Jan 29, 2017 at 4:37 PM, Matthias J. Sax <
>>> matth...@confluent.io>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> currently, a Kafka Streams application is designed to "run forever"
>>> and
>>>>>>> there is no notion of "End of Batch" -- we have plans to add this
>>>>>>> though... (cf.
>>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>>>>>> 95%3A+Incremental+Batch+Processing+for+Kafka+Streams)
>>>>>>>
>>>>>>> Thus, right now you need to stop your application manually. You would
>>>>>>> need to observe the application's committed offsets (and lag) using
>>>>>>> bin/kafka-consumer-groups.sh (the application ID is used as the group ID)
>>> to
>>>>>>> monitor the app's progress to see when it is done.
>>>>>>>
>>>>>>> Cf.
>>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/
>>>>>>> Kafka+Streams+Data+%28Re%29Processing+Scenarios
>>>>>>>
>>>>>>>
>>>>>>> -Matthias
>>>>>>>
>>>>>>>
>>>>>>> On 1/28/17 1:07 PM, Eric Dain wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I'm pretty new to Kafka Streams. I am using Kafka Streams to ingest
>>>>> large
>>>>>>>> csv file. I need to run some clean-up code after all records in the
>>>>> file
>>>>>>>> are processed. Is there a way to send "End of Batch" event that is
>>>>>>>> guaranteed to be processed after all records? If not is there
>>>>> alternative
>>>>>>>> solution?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Eric
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
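For the monitoring step mentioned above, the Streams application ID doubles
as the consumer group name, so progress can be checked like this (app id
and broker address are placeholders; exact flags vary by version):

  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-streams-app

When the LAG column reaches 0 for all input partitions, the batch has been
fully processed.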
eb 3, 2017 at 3:33 PM, Matthias J. Sax wrote:
> Hi All,
>
> I did prepare a KIP to do some cleanup some of Kafka's Streaming API.
>
> Please have a look here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-120%3A+Cleanup+Kafka+Streams+builder+API
>
> Loo
mentioned in a previous reply, we plan to have at least one more KIP
> to clean up DSL -- this future KIP should include exact this change.
>
>
> -Matthias
>
>
> On 2/6/17 4:26 PM, Gwen Shapira wrote:
>> I like the cleanup a lot :)
>>
>> The cleaner lines betwee
se consider my contribution and hopefully you all like it and agree that
>> it should be merged into 0.10.3 :)
>> If not, be gentle, this is my first KIP!
>>
>> Happy Monday,
>> Steven
>>
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ve.
>
> Below is the probable avro schema (schema.txt) for reference (actually very
> complex what is available to process):
>
> {
> "type" : "record",
> "namespace" : "mynamespace",
> "name" : "myname",
> "fields" : [{
> "name":"field1",
> "type":{
> "type":"record",
> "name":"Eventfield1",
> "fields":[{.}]
> }]
> ]
> }
>
> Please help to implement the same.
>
> Regards,
> Kush
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
>
>>>> Hi,
>>>>
>>>> I like the proposal, thank you. I have found it frustrating myself not to
>>>> be able to understand simple things, like how many records have been
>>>> currently processed. The peek method would allow those kinds of
>
est id=" + id + " command=" + command);
> command.setId(9);
>
> return new KeyValue<>(UUID.randomUUID().toString(), command);
> })
> .through(Serdes.String(), testSpecificAvroSerde, "test2");
>
>
> *test.avsc*
> {
> "type": "
would like to propose a KIP to Add a tool to Reset Consumer Group Offsets.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+a+tool+to+Reset+Consumer+Group+Offsets
>
> Please, take a look at the proposal and share your feedback.
>
> Thanks,
> Jorge.
>> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 122%3A+Add+a+tool+to+Reset+Consumer+Group+Offsets
>> >
>> > Please, take a look at the proposal and share your feedback.
>> >
>> > Thanks,
>> > Jorge.
>>
>>
>>
>> --
>> Gwen Shapira
>> Product Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter | blog
>>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ng, such that consumer will seek to the newly committed offset and
> start consuming from there?
>
> Not sure about this. I would recommend keeping it simple and asking users to
> stop consumers first. But I would consider it if the trade-offs are
> clear.
>
> @Matthias
>
>
-execute option and --reset-file (path to JSON)
>>
>> Reset based on file
>>
>> 4. Only with --verify option and --reset-file (path to JSON)
>>
>> Verify file values with current offsets
>>
>> I think we can remove --generate-and-execute because is a b
Just to clarify, we'll need to allow specifying topic and partition. I
don't think we want this on ALL partitions at once.
On Wed, Feb 8, 2017 at 3:35 PM, Gwen Shapira wrote:
> That's what I'd like to see. For example, suppose a Connect task fails
> because it can
rg/confluence/display/KAFKA/KIP-121%3A+Add+KStream+peek+method
>
> I believe the PR attached is already in good shape to consider merging:
>
> https://github.com/apache/kafka/pull/2493
>
> Thanks!
> Steven
>
--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
ny
>> Stubbs, Apurva Mehta, Arun Mahadevan, Ashish Singh, Balint Molnar, Ben
>> Stopford, Bernard Leach, Bill Bejeck, Colin P. Mccabe, Damian Guy, Dan
>> Norwood, Dana Powers, dasl, Derrick Or, Dong Lin, Dustin Cote, Edoardo
>> Comar, Edward Ribeiro, Elias Levy, Emanuele Cesena,
ssing something?
>
> On 23 February 2017 at 9:21:08 am, Gwen Shapira (g...@confluent.io) wrote:
>
> I saw them in Maven yesterday?
>
> On Wed, Feb 22, 2017 at 2:15 PM, Stephane Maarek
> wrote:
> > Awesome thanks a lot! When should we expect the dependencies to be
>
apache.org/0102/protocol.html
Thanks,
Gwen Shapira
Thank you for testing!!!
On Mon, Apr 10, 2017 at 7:36 AM, Mathieu Fenniak
wrote:
> Hi Gwen,
>
> +1, looks good to me. Tested broker upgrades, and connect & streams
> applications.
>
> Mathieu
>
>
> On Fri, Apr 7, 2017 at 6:12 PM, Gwen Shapira wrote:
>
gested?
Either way, may be worthwhile to start a different discussion thread
about RC releases in Maven. Perhaps more knowledgeable people will see
it and jump in.
Gwen
On Tue, Apr 11, 2017 at 4:31 PM, Steven Schlansker
wrote:
>
>> On Apr 7, 2017, at 5:12 PM, Gwen Shapira wrote:
FYI: I just updated the upgrade notes with Streams changes:
http://kafka.apache.org/documentation/#gettingStarted
On Fri, Apr 7, 2017 at 5:12 PM, Gwen Shapira wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for the release of Apache Kafka
Wrong link :)
http://kafka.apache.org/documentation/#upgrade
and
http://kafka.apache.org/documentation/streams#streams_api_changes_0102
On Tue, Apr 11, 2017 at 5:57 PM, Gwen Shapira wrote:
> FYI: I just updated the upgrade notes with Streams changes:
> http://kafka.apache.org/documen