Re: Apache Kafka website "videos" page clarification

2020-10-13 Thread Ben Stopford

Re: Apache Kafka website "videos" page clarification

2020-09-10 Thread Ben Stopford

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-20 Thread Ben Stopford
On Tuesday, 18 August, 2020, Guozhang Wang wrote:
> I'm leaning towards design B primarily because it reminds me of the Firefox logo which I like a lot. But I also share Adam's concern that it should better not obscure the Kafka logo --- so if we can tweak a bit to fix it my vote goes to B, otherwise A :) Guozhang

On Tue, Aug 18, 2020 at 9:48 AM Bruno Cadonna wrote:
>> Thanks for the KIP! I am +1 (non-binding) for A. I would also like to hear opinions whether the logo should be colorized or just black and white. Best, Bruno

On 15.08.20 16:05, Adam Bellemare wrote:
>>> I prefer Design B, but given that I missed the discussion thread, I think it would be better without the Otter obscuring any part of the Kafka logo.

On Thu, Aug 13, 2020 at 6:31 PM Boyang Chen wrote:
>>>> Hello everyone, I would like to start a vote thread for KIP-657: https://cwiki.apache.org/confluence/display/KAFKA/KIP-657%3A+Add+Customized+Kafka+Streams+Logo This KIP is aiming to add a new logo for the Kafka Streams library, and we prepared two candidates with a cute otter. You could look up the KIP to find those logos. Please post your vote against these two customized logos. For example, I would vote for *design-A (binding)*. This vote thread shall be open for one week to collect enough votes to call for a winner. Still, feel free to post any question you may have regarding this KIP, thanks!

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-19 Thread Ben Stopford
On Tue, Aug 18, 2020 at 9:48 AM Bruno Cadonna wrote:
>> Thanks for the KIP! I am +1 (non-binding) for A.

Re: New Website Layout

2020-08-12 Thread Ben Stopford
…" section is rendering as $1 class="anchor-heading">$8$9$10 <https://kafka.apache.org/documentation.html#$4>. Similar thing in https://kafka.apache.org/documentation.html#design_quotas. The source HTML…

Re: New Website Layout

2020-08-10 Thread Ben Stopford
…meaning you can't click on the link to copy and paste the URL. Could the old behaviour be restored? Thanks, Tom

On Wed, Aug 5, 2020 at 12:43 PM Luke Chen wrote:
> When entering stream…

Re: New Website Layout

2020-08-05 Thread Ben Stopford
…console. I opened a PR to remove the console.log() call: https://github.com/apache/kafka-site/pull/278

On Wed, Aug 5, 2020 at 9:45 AM Ben Stopford wrote:
> The new website layout has gone live as you may have seen. There are a couple of rendering issues in the strea…

Re: New Website Layout

2020-08-05 Thread Ben Stopford
The new website layout has gone live as you may have seen. There are a couple of rendering issues in the streams developer guide that we're getting addressed. If anyone spots anything else could they please reply to this thread. Thanks Ben On Fri, 26 Jun 2020 at 11:48, Ben Stopford

New Website Layout

2020-06-26 Thread Ben Stopford
Hey folks We've made some updates to the website's look and feel. There is a staged version in the link below. https://ec2-13-57-18-236.us-west-1.compute.amazonaws.com/ username: kafka password: streaming Comments welcomed. Ben

Re: kafka compacted topic

2017-11-30 Thread Ben Stopford
The CPU/IO required to complete a compaction phase will grow as the log grows, but you can manage this via the cleaner's various configs. Check out the properties starting with log.cleaner in the docs (https://kafka.apache.org/documentation). All databases that implement LSM storage have a similar overhead.
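A sketch of the log.cleaner properties referred to above; the values are illustrative assumptions, not recommendations:

```properties
# Illustrative values only -- tune for your workload.
log.cleaner.enable=true
# Number of background compaction threads.
log.cleaner.threads=2
# Cap the cleaner's I/O rate (bytes/sec) so compaction doesn't starve normal traffic.
log.cleaner.io.max.bytes.per.second=10485760
# Memory for the key->offset dedupe map; a larger buffer means more deduplication per scan.
log.cleaner.dedupe.buffer.size=134217728
```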

Re: GDPR appliance

2017-11-28 Thread Ben Stopford
You should also be able to manage this with a compacted topic. If you give each message a unique key you'd then be able to delete or overwrite specific records. Kafka will delete them from disk when compaction runs. If you need to partition for ordering purposes you'd need to use a custom partitioner…
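The delete-via-compaction idea can be sketched as a toy model. Kafka keeps the latest record per key, and a record with a null value (a "tombstone") marks the key for eventual removal. This is a simulation of the semantics, not Kafka code:

```python
# Toy model of log-compaction semantics: keep the latest value per key,
# then drop tombstoned keys (value=None) entirely, as the cleaner does.

def compact(log):
    """Return the compacted view of a list of (key, value) records."""
    latest = {}
    for key, value in log:
        latest[key] = value  # later records overwrite earlier ones
    # Tombstones are removed from disk once compaction runs.
    return {k: v for k, v in latest.items() if v is not None}

log = [
    ("user-42", {"email": "a@example.com"}),
    ("user-7",  {"email": "b@example.com"}),
    ("user-42", None),  # GDPR delete request: tombstone for user-42
]
print(compact(log))  # {'user-7': {'email': 'b@example.com'}}
```

Writing the tombstone is just producing a message with the existing key and a null value; the actual removal happens when the cleaner next runs.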

Re: Replication throttling

2017-10-05 Thread Ben Stopford
Typically you don't want replication throttling enabled all the time: if a broker drops out of the ISR for whatever reason, catch-up will be impeded. Having said that, this may not be an issue if the throttle is quite mild and your max write rate is well below the network limit, but it is saf…
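A sketch of applying and then removing a temporary throttle with the kafka-configs tool; broker id, rates, and the ZooKeeper address are illustrative assumptions:

```
# Throttle replication traffic on broker 0 to ~10 MB/s for the duration of a
# reassignment or catch-up, then remove the throttle afterwards.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type brokers --entity-name 0 \
  --add-config 'leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760'

bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type brokers --entity-name 0 \
  --delete-config 'leader.replication.throttled.rate,follower.replication.throttled.rate'
```

Removing the throttle once catch-up completes avoids the ISR re-entry problem described above.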

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-24 Thread Ben Stopford
No worries Jose ;-) So there are a few ways you could do this, but I think it’s important that you manage a single “stock level” state store, backed by a changelog. Use this for validation, and keep it up to date at the same time. You should also ensure the input topic(s) are partitioned by produc

Re: Event sourcing with Kafka and Kafka Streams. How to deal with atomicity

2017-07-21 Thread Ben Stopford
Hi Jose If I understand your problem correctly, the issue is that you need to decrement the stock count when you reserve it, rather than splitting it into a second phase. You can do this via the DSL with a Transformer. There's a related example below. Alternatively you could do it with the processor…

Re: Resetting offsets

2017-05-03 Thread Ben Stopford
Hu is correct, there isn't anything currently, but there is an active proposal: https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+Reset+Consumer+Group+Offsets+tooling On Wed, May 3, 2017 at 1:23 PM Hu Xi wrote: > Seems there is no command line out of box, but if you could write a
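For context, the tooling this proposal eventually produced (in later Kafka releases) is the --reset-offsets mode of kafka-consumer-groups; a sketch, with group, topic, and bootstrap address as illustrative assumptions:

```
# Preview the reset without applying it...
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --dry-run

# ...then apply it (the group must not be actively consuming).
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
```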

Re: Kafka running in VPN

2017-03-23 Thread Ben Stopford
The bootstrap servers are only used to make an initial connection. From there, the clients request metadata, which provides them with a 'map' of the cluster. The addresses in the metadata are those registered in Zookeeper by each broker. They don't relate to the bootstrap list in any way. You can c…

Re: Relationship fetch.replica.max.bytes and message.max.bytes

2017-03-23 Thread Ben Stopford
Hi Kostas - The docs for replica.fetch.max.bytes should be helpful here: "The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum; if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress can be made."

Re: KIP-122: Add a tool to Reset Consumer Group Offsets

2017-02-08 Thread Ben Stopford
Yes - using a tool like this to skip a set of consumer groups over a corrupt/bad message is definitely appealing. B On Wed, Feb 8, 2017 at 9:37 PM Gwen Shapira wrote: > I like the --reset-to-earliest and --reset-to-latest. In general, > since the JSON route is the most challenging for users, we

Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Ben Stopford
Congrats Grant!! On Wed, 11 Jan 2017 at 20:01, Ismael Juma wrote: > Congratulations Grant, well deserved. :) > > Ismael > > On 11 Jan 2017 7:51 pm, "Gwen Shapira" wrote: > > > The PMC for Apache Kafka has invited Grant Henke to join as a > > committer and we are pleased to announce that he has a

[VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Ben Stopford
Looks like there was a good consensus on the discuss thread for KIP-106 so lets move to a vote. Please chime in if you would like to change the default for unclean.leader.election.enabled from true to false. https://cwiki.apache.org/confluence/display/KAFKA/%5BWIP%5D+KIP-106+-+Change+Default+uncl

Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-11 Thread Ben Stopford
OK - my mistake was mistaken! There is consensus. This KIP has been accepted. On Wed, Jan 11, 2017 at 6:48 PM Ben Stopford wrote: > Sorry - my mistake. Looks like I still need one more binding vote. Is > there a committer out there that could add their vote? > > B > > On Wed,

Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-11 Thread Ben Stopford
Sorry - my mistake. Looks like I still need one more binding vote. Is there a committer out there that could add their vote? B On Wed, Jan 11, 2017 at 6:44 PM Ben Stopford wrote: > So I believe we can mark this as Accepted. I've updated the KIP page. > Thanks for the input everyone.

Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-11 Thread Ben Stopford
So I believe we can mark this as Accepted. I've updated the KIP page. Thanks for the input everyone. On Fri, Jan 6, 2017 at 9:31 AM Ben Stopford wrote: > Thanks Joel. I'll fix up the pics to make them consistent on nomenclature. > > > B > > On Fri, Jan 6, 2017

Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-06 Thread Ben Stopford
> 1) In OffsetForLeaderEpochResponse, start_offset probably should be end_offset since it's the end offset of that epoch.
> 3) That's fine. We can fix KAFKA-1120 separately.
> Jun
> On Thu, Jan 5, 20…

Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-05 Thread Ben Stopford
> …/[partitionId]/state in ZK?
> 4. Since there are a few other KIPs involving message format too, it would be useful to consider if we could combine the message format changes in the same release.
> Thanks, Jun
> On Wed, Jan 4, 2017 at 9:24 AM, Ben Stopfo…

[VOTE] Vote for KIP-101 - Leader Epochs

2017-01-04 Thread Ben Stopford
Hi All We’re having some problems with this thread being subsumed by the [Discuss] thread. Hopefully this one will appear distinct. If you see more than one, please use this one. KIP-101 should now be ready for a vote. As a reminder the KIP proposes a change to the replication protocol to rem

Re: Questions about single consumer per partition approach

2016-12-21 Thread Ben Stopford
Hi Alexi Typically you would use a key to guarantee that messages with the same key have a global ordering, rather than using manual assignment. Kafka will send all messages with the same key to the same partition. If you need global ordering, spanning all messages from a single producer, you can
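The key-to-partition mapping described above can be sketched as follows. The real Java client hashes the key with murmur2; crc32 stands in here purely for illustration, and the function names are hypothetical:

```python
import zlib

# Toy sketch of key-based partitioning: hash the key, take it modulo the
# partition count. Every message with the same key lands on the same
# partition, so it gets a total order there without manual assignment.

def partition_for(key: str, num_partitions: int) -> int:
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# All events for one order key map to one partition, preserving their order.
p1 = partition_for("order-123", 6)
p2 = partition_for("order-123", 6)
assert p1 == p2
```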

Re: Halting because log truncation is not allowed for topic __consumer_offsets

2016-12-19 Thread Ben Stopford
Hi Jun This should only be possible in situations where there is a crash or something happens to the underlying disks (assuming clean leader election). I've not come across others. The assumption, as I understand it, is that the underlying issue stems from KAFKA-1211

Re: Training Kafka and ZooKeeper - Monitoring and Operability

2016-10-11 Thread Ben Stopford
…Kafka and ZooKeeper work. I uploaded it on SlideShare, I thought it might be useful to other people: http://fr.slideshare.net/NicolasMotte/training-kafka-and-zookeeper-monitoring-and-operability In the description you will get a link to the version with audio des…

Re: rate-limiting on rebalancing, or sync from non-leaders?

2016-07-04 Thread Ben Stopford
Hi Charity There will be a KIP for this coming out shortly. All the best B > On 4 Jul 2016, at 13:14, Alexis Midon wrote: > > Same here at Airbnb. Moving data is the biggest operational challenge > because of the network bandwidth cannibalization. > I was hoping that rate limiting would appl

Re: Coordinator lost for consumer groups

2016-07-01 Thread Ben Stopford
You might try increasing the log.cleaner.dedupe.buffer.size. This should increase the deduplication yield for each scan. If you haven’t seen them there are some notes on log compaction here: https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction

Re: How many connections per consumer/producer

2016-06-30 Thread Ben Stopford
Hi Dhiaraj That shouldn’t be the case. As I understand it both the producer and consumer hold a single connection to each broker they need to communicate with. Multiple produce requests can be sent through a single connection in the producer (the number being configurable with max.in.flight.req

Re: Setting max fetch size for the console consumer

2016-06-24 Thread Ben Stopford
It’s actually more than one setting: http://stackoverflow.com/questions/21020347/kafka-sending-a-15mb-message B > On 24 Jun 2016, at 14:31, Tauzell, Dave wrote: > > How do I set the maximum fetch size for the console co

Re: is kafka the right choice

2016-06-24 Thread Ben Stopford
correction: elevates => alleviates > On 24 Jun 2016, at 11:13, Ben Stopford wrote: > > Kafka uses a long poll > <http://kafka.apache.org/documentation.html#design_pull>. So requests > effectively block on the server, if there is insufficient data available. > This

Re: is kafka the right choice

2016-06-24 Thread Ben Stopford
Kafka uses a long poll . So requests effectively block on the server, if there is insufficient data available. This elevates many of the issues associated with traditional polling approaches. Service-based applications often require direc

Re: Quotas feature Kafka 0.9.0.1

2016-06-09 Thread Ben Stopford
Hi Liju Alas we can’t use quotas directly to throttle replication. The problem is that, currently, fetch requests from followers include critical traffic (the replication of produce requests) as well as non critical traffic (brokers catching up etc) so we can’t apply the current quotas mechanis

Re: Rebalancing issue while Kafka scaling

2016-06-01 Thread Ben Stopford
> Hafsa
> 2016-06-01 12:57 GMT+02:00 Ben Stopford:
>> Hi Hafsa
>> If you create a topic with replication-factor = 2, you can lose one of them without losing data, so long as they were "in sync". Replicas can fall out of sync if…

Re: Dynamic bootstrap.servers with multiple data centers

2016-06-01 Thread Ben Stopford
Hey Danny Currently the bootstrap servers are only used when the client initialises (there’s a bit of discussion around the issue in the jira below if you’re interested). To implement failover you’d need to catch a timeout exception in your client code, consulting your service discovery mechani
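The failover approach described above (catch the timeout, consult service discovery, rebuild the client) can be sketched generically. Here `make_client` and the cluster list are hypothetical stand-ins for your client constructor and discovery mechanism:

```python
# Sketch of client-side failover between data centers: try each cluster's
# bootstrap list in turn, falling through on connection timeout.

class ConnectTimeout(Exception):
    pass

def connect_with_failover(clusters, make_client):
    """Return the first client that connects; raise the last error if none do."""
    last_err = None
    for bootstrap_servers in clusters:
        try:
            return make_client(bootstrap_servers)
        except ConnectTimeout as err:
            last_err = err  # fall through to the next cluster
    raise last_err

# Usage with a stub factory: the primary times out, the DR site succeeds.
def stub_factory(servers):
    if servers == "primary:9092":
        raise ConnectTimeout("primary unreachable")
    return f"client({servers})"

print(connect_with_failover(["primary:9092", "dr-site:9092"], stub_factory))
# client(dr-site:9092)
```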

Re: Rebalancing issue while Kafka scaling

2016-06-01 Thread Ben Stopford
Hi Hafsa If you create a topic with replication-factor = 2, you can lose one of them without losing data, so long as they were "in sync". Replicas can fall out of sync if one of the machines runs slow. The system tracks in-sync replicas. These are exposed by JMX too. Check out the docs on replic…

Failing between mirrored clusters

2016-05-11 Thread Ben Stopford
Hi I’m looking at failing-over from one cluster to another, connected via mirror maker, where the __consumer_offsets topic is also mirrored. In theory this should allow consumers to be restarted to point at the secondary cluster such that they resume from the same offset they reached in the pr

Re: Tune Kafka offsets.load.buffer.size

2016-04-20 Thread Ben Stopford
If you have a relatively small number of consumers you might further reduce offsets.topic.segment.bytes. The active segment is not compacted. B > On 18 Apr 2016, at 23:45, Muqtafi Akhmad wrote: > > dear Kafka users, > > Is there any tips about how to configure *offsets.load.buffer.size* > conf

Re: steps path to kafka mastery

2016-03-29 Thread Ben Stopford
Not sure which book you read but, based on the first few chapters, this book is a worthy investment. B
> On 29 Mar 2016, at 03:40, S Ahmed wrote: Hello, This may be a silly question for some but here goes :) Without real product…

Re: Kafka Streams scaling questions

2016-03-22 Thread Ben Stopford
Hi Kishore In general I think it’s up to you to choose keys that keep related data together, but also give you reasonable load balancing. I’m afraid that I’m not sure I fully followed your explanation of how storm solves this problem more efficiently though. I noticed you asked: "How would th

Re: Would Kafka streams be a good choice for a collaborative web app?

2016-03-21 Thread Ben Stopford
It sounds like a fairly typical pub-sub use case where you’d likely be choosing Kafka because of its scalable data retention and built in fault tolerance. As such it’s a reasonable choice. > On 21 Mar 2016, at 17:07, Mark van Leeuwen wrote: > > Hi Sandesh, > > Thanks for the suggestions. I

Re: Question regarding compression of topics in Kafka

2016-03-19 Thread Ben Stopford
…on disk. In both cases data stored on disk in log files had the same size as the data sent to Kafka. How do I verify that compression is being used and that data stored on disk has savings in space due to compression? Thanks, R P

Re: Question regarding compression of topics in Kafka

2016-03-19 Thread Ben Stopford
Yes it will compress the data stored on the file system if you specify compression in the producer. You can check whether the data is compressed on disk by running the following command in the data directory. kafka-run-class kafka.tools.DumpLogSegments --print-data-log --files latest-log-file.l

Re: question about time-delay

2016-03-16 Thread Ben Stopford
Kafka’s defaults are set for low latency so that’s probably a reasonable measure of your lower bound latency for that message size. B
> On 15 Mar 2016, at 16:26, 杜俊霖 wrote: Hello, I have some questions when I use Kafka to transfer data. In my test, I create a producer and a consumer an…

Re: Kafka Applicability - Large Messages

2016-03-14 Thread Ben Stopford
Becket did a good talk at the last Kafka meetup on how Linked In handle the large message problem. http://www.slideshare.net/JiangjieQin/handle-large-messages-in-apache-kafka-58692297 > On 14 Mar 2016, at 0

Re: Kafka topics with infinite retention?

2016-03-14 Thread Ben Stopford
A couple of things: - Compacted topics provide a useful way to retain meaningful datasets inside the broker, which don’t grow indefinitely. If you have an update-in-place use case, where the event sourced approach doesn’t buy you much, these will keep the reload time down when you regenerate ma

Re: Exactly-once publication behaviour

2016-02-19 Thread Ben Stopford
Hi Andrew There are plans to add exactly once behaviour. This will likely be a little more than Idempotent producers with the motivation being to provide better delivery guarantees for Connect, Streams and Mirror Maker. B > On 19 Feb 2016, at 13:54, Andrew Schofield > wrote: > > When pu

Re: Kafka response ordering guarantees

2016-02-17 Thread Ben Stopford
So long as you set max.inflight.requests.per.connection = 1, Kafka should provide strong ordering within a partition (so use the same key for messages that should retain their order). There is a bug currently raised against this feature though, where there is an edge case that can cause ordering i…
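A sketch of producer settings for this per-partition ordering guarantee; values are illustrative:

```properties
# One in-flight request per connection prevents retries from reordering batches.
max.in.flight.requests.per.connection=1
# Retry transient failures rather than dropping and reordering.
retries=5
# Wait for all in-sync replicas to acknowledge.
acks=all
```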

Re: Replication Factor and number of brokers

2016-02-17 Thread Ben Stopford
If you create a topic with more replicas than brokers it should throw an error but if you lose a broker you'd have under replicated partitions. B On Tuesday, 16 February 2016, Alex Loddengaard wrote: > Hi Sean, you'll want equal or more brokers than your replication factor. > Meaning, if your r

Re: Rebalancing during the long-running tasks

2016-02-16 Thread Ben Stopford
I think you’ll find some useful context in this KIP Jason wrote. It’s pretty good. https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records > On 16 Feb 2016, at 07:15, Насыров Р

Re: Kafka as master data store

2016-02-15 Thread Ben Stopford
Ted - it depends on your domain. More conservative approaches to long lived data protect against data corruption, which generally means snapshots and cold storage. > On 15 Feb 2016, at 21:31, Ted Swerve wrote: > > HI Ben, Sharninder, > > Thanks for your responses, I appreciate it. > > Ben

Re: Kafka as master data store

2016-02-15 Thread Ben Stopford
Hi Ted This is an interesting question. Kafka has similar resilience properties to other distributed stores such as Cassandra, which are used as master data stores (obviously without the query functions). You’d need to set unclean.leader.election.enable=false and configure sufficient replicat

Re: Kafka 0.8.2.0 Log4j

2016-02-12 Thread Ben Stopford
Check you’re setting the Kafka log4j properties. -Dlog4j.configuration=file:config/log4j.properties B > On 12 Feb 2016, at 07:33, Joe San wrote: > > How could I get rid of this warning? > > log4j:WARN No appenders could be found for logger > (kafka.utils.VerifiableProperties). > log4j:WARN

Re: How to retrieve the HighWaterMark

2016-02-11 Thread Ben Stopford
As an aside - you should also be able to validate this against the replication-offset-checkpoint file for each topic partition, server side. > On 11 Feb 2016, at 09:02, Ben Stopford wrote: > > Hi Florian > > I think you should be able to get it by calling consumer.seekToEnd()

Re: How to retrieve the HighWaterMark

2016-02-11 Thread Ben Stopford
Hi Florian I think you should be able to get it by calling consumer.seekToEnd() followed by consumer.position() for each topic partition. B > On 10 Feb 2016, at 09:23, Florian Hussonnois wrote: > > Hi all, > > I'm looking for a way to retrieve the HighWaterMark using the new API. > > Is th

Re: Communication between Kafka clients & Kafka

2016-01-22 Thread Ben Stopford
Hey Praveen Kafka uses a binary protocol over TCP. You can find details of specifics here if you’re interested. All the best B > On 22 Jan 2016, at 08:00, praveen S wrote: > > Do Kafka clients(producers & cons

Re: Possible WAN Replication Setup

2016-01-17 Thread Ben Stopford
Jason Don’t forget that Kafka relies on redundant replicas for fault tolerance rather than disk persistence, so your single instances might lose messages straight out of the box if they’re not terminated cleanly. You could set flush.messages to 1 though. Don’t forget about Zookeeper either. Th

Re: reassign __consumer_offsets partitions

2015-12-17 Thread Ben Stopford
Hi Damian The reassignment should treat the offsets topic as any other topic. I did a quick test and it seemed to work for me. Do you see anything suspicious in the controller log? B > On 16 Dec 2015, at 14:51, Damian Guy wrote: > > Hi, > > > We have had some temporary nodes in our kafka cl

Re: kafka connection with zookeeper

2015-12-12 Thread Ben Stopford
Hi Sadanand Kafka secures its connection with Zookeeper via SASL, and it's a little different to the way brokers secure connections between themselves and with clients. There's more info here: http://docs.confluent.io/2.0.0/kafka/zookeeper-authentication.html

Re: SSL - kafka producer cannot publish to topic

2015-12-11 Thread Ben Stopford
Yes - that’s correct Ismael. I think what Shri was saying was that he got it working when he added the SSL properties to the file he passed into the Console Producer. > On 11 Dec 2015, at 17:06, Ismael Juma wrote: > > Hi Shrikant, > > On Thu, Dec 10, 2015 at 9:03 PM, Shrikant Patel wrote: >

Re: SSL - kafka producer cannot publish to topic

2015-12-10 Thread Ben Stopford
That it does. Thanks for the update Shri. B
> On 10 Dec 2015, at 21:03, Shrikant Patel wrote: Figured it out. I was adding the ssl properties to producer.properties. We need to add this to a separate file and provide that file as input to the producer bat\sh script --producer.config cli…

Re: Error while sending data to kafka producer

2015-12-09 Thread Ben Stopford
Hi Ritesh You config on both sides looks fine. There may be something wrong with your truststore, although you should see exceptions in either the client or server log files if that is the case. As you appear to be running locally, try creating the JKS files using the shell script included he

Re: Error while sending data to kafka producer

2015-12-09 Thread Ben Stopford
what is your server config? > On 9 Dec 2015, at 18:21, Ritesh Sinha > wrote: > > Hi, > > I am trying to send message to kafka producer using encryption and > authentication.After creating the key and everything successfully.While > passing the value through console i am getting this error: >

Re: Doubt regarding Encryption and Authentication using SSL

2015-12-09 Thread Ben Stopford
Hi Ritesh You just need to create yourself a text file called client-ssl.properties or similar in the directory you're running from. In that file you put your SSL client information. Something like this: security.protocol = SSL ssl.truststore.location = "/var/private/ssl/kafka.client.truststore.

Re: producer-consumer issues during deployments

2015-11-26 Thread Ben Stopford
Hi Prabhjot I may have slightly misunderstood your question so apologies if that’s the case. The general approach to releases is to use a rolling upgrade where you take one machine offline at a time, restart it, wait for it to come online (you can monitor this via JMX) then move onto the next.

Re: kafka java producer security to access kerberos

2015-11-09 Thread Ben Stopford
Hi Surrender Try using the producer-property option to specify the relevant ssl properties as a set of key-value pairs. There is some helpful info here too B > On 9 Nov 2015, at 15:48, Kudumula, Surender wrote: >

Re: New Consumer - discover consumer groups

2015-10-12 Thread Ben Stopford
I double checked with Jun but there is currently no direct API for consumer group discovery. I expect you already know this but you can get a consumer’s offset in the new API. You could also derive the info you need from the offsets topic. B > On 12 Oct 2015, at 17:09, Damian Guy wrote: >

Re: Getting started with Kafka using Java client

2015-10-12 Thread Ben Stopford
Hi Tarun There is an examples section in the kafka project here which shows the Consumer, SingleConsumer and Producer. These are just clients so you’ll need ZK and a Kafka server running to get them going. You probably don’t need to worry ab

Re: kafka metrics emitting to graphite

2015-10-11 Thread Ben Stopford
Hi Sunil Try using JMXTrans to expose Kafka’s internal JMX metrics to graphite. https://github.com/jmxtrans/jmxtrans B > On 11 Oct 2015, at 11:19, sunil kalva wrote: > > How to configure, to emit kafka broker metrics to graphite. > > t > SunilKalva

Re: [!!Mass Mail]Re: Dual commit with zookeeper and kafka

2015-10-10 Thread Ben Stopford
> …what happened if client crash between this 2 operations (commit 2 kafka and commit 2 zookeeper)?
> 3. What happened if broker crash and my client support only zookeeper? (I guess that I read message from beggining) Is it correct? Or broker explicitly sync kafka commit with…

Re: Kafka Mirror to consume data from beginning of topics

2015-10-09 Thread Ben Stopford
Hi Leo Set auto.offset.reset=smallest in your consumer.config B > On 8 Oct 2015, at 18:47, Clelio De Souza wrote: > > Hi there, > > I am trying to setup a Kafka Mirror mechanism, but it seems the consumer > from the source Kafka cluster only reads from new incoming data to the > topics, i.e.

Re: Dual commit with zookeeper and kafka

2015-10-09 Thread Ben Stopford
Hi Alexey Whether you commit offsets to Kafka itself (stored in offsets topic) or ZK or both depends on the settings in two properties: offset.storage and dual.commit.enabled (here ). Currently ZK is the default. Commits happen eith

Re: number of topics given many consumers and groups within the data

2015-09-30 Thread Ben Stopford
…would go with something similar to your second idea, but have a consumer read the single topic and split the data out into 400 separate topics in Kafka (no need for Cassandra or Redis or anything else). Then your real consumers can all consume their separate topics. Reading and…

Re: number of topics given many consumers and groups within the data

2015-09-30 Thread Ben Stopford
Hi Shaun You might consider using a custom partition assignment strategy to push your different "groups" to different partitions. This would allow you to walk the middle ground between "all consumers consume everything" and "one topic per consumer" as you vary the number of partitions in the topic…
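The middle-ground layout described above can be sketched as two pieces: a rule routing each logical "group" in the data to a partition, and a range-style assignment giving each consumer a contiguous chunk of partitions. Names and the group-to-partition rule are illustrative assumptions:

```python
# Toy model: custom partitioning by group id, plus range-style assignment
# of partitions to consumers (contiguous chunks, like Kafka's range assignor).

def partition_for_group(group_id: int, num_partitions: int) -> int:
    return group_id % num_partitions

def assign_ranges(partitions, consumers):
    """Split the partition list into contiguous chunks, one per consumer."""
    n, k = len(partitions), len(consumers)
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        # The first (n % k) consumers take one extra partition.
        size = n // k + (1 if i < n % k else 0)
        assignment[consumer] = partitions[start:start + size]
        start += size
    return assignment

print(assign_ranges(list(range(8)), ["c1", "c2", "c3"]))
# {'c1': [0, 1, 2], 'c2': [3, 4, 5], 'c3': [6, 7]}
```

Varying the partition count then slides you between the two extremes: one partition per group at one end, everything in a few shared partitions at the other.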

Re: Which perf-test tool?

2015-09-23 Thread Ben Stopford
Both classes work ok. I prefer the Java one simply because has better output and it does less overriding of default values. However, in both cases you probably need to tweak settings to suit your use case. Most notably: acks batch.size linger.ms based on whether you are interested in latency or

Re: 0.8.x

2015-08-26 Thread Ben Stopford
…face to producers. That should let you balance load. Also it may be worth adding that the code all uses non-blocking IO. I don't have hard numbers here though. Has anyone else worked with 0.8.x at this level of load? B
> On 26 Aug 2015, at 10:57, Ben Stopford wrote: Hi D…

Re: 0.8.x

2015-08-26 Thread Ben Stopford
Hi Damian Just clarifying - you’re saying you currently have Kafka 0.7.x running with a dedicated broker addresses (bypassing ZK) and hitting a VIP which you use for load balancing writes. Is that correct? Are you worried about something specific in the 0.8.x way of doing things (ZK under that

Re: is SSL support feature ready to use in kafka-truck branch

2015-08-21 Thread Ben Stopford
Hi Qi Trunk seems fairly stable. There are guidelines here which includes how to generate keys https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka Your server config needs these properties (also

Re: thread that handle client request in Kafka brokers

2015-08-17 Thread Ben Stopford
Hi Tao This is unlikely to be a problem. The producer is threadsafe (see here ) so you can happily share it between your pool of message producers. Kafka also provides a range of facilities for

Re: Spooling support for kafka publishers !

2015-08-07 Thread Ben Stopford
Yes - that Jira needs completing, but I expect it is what you are looking for. You are welcome to pick it up if you wish. Otherwise I can pick it up. B > On 7 Aug 2015, at 10:52, sunil kalva wrote: > > -- Forwarded message -- > From: sunil kalva > Date: Fri, Aug 7, 2015 at

Re: kafka log flush questions

2015-08-07 Thread Ben Stopford
Hi Tao
1. I am wondering if the fsync operation is called by the last two routines internally? => Yes
2. If log.flush.interval.ms is not specified, is it true that Kafka lets the OS handle pagecache flush in the background? => Yes
3. If we specify ack=1 and ack=-1 in the new producer, do those request onl…
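A sketch of the flush-related broker settings discussed above; the values are illustrative, and leaving both unset (the default) defers flushing to the OS page cache, with durability coming from replication instead:

```properties
# Force an fsync after every N messages written to a log partition...
log.flush.interval.messages=10000
# ...or at most every N milliseconds, whichever comes first.
log.flush.interval.ms=1000
```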

Re: Broker side consume-request filtering

2015-08-06 Thread Ben Stopford
Yes - this is basically what is termed selectors in JMS or routing-keys in AMQP. My guess is that a KIP to implement this kind of server-side filtering would not make it through. Kafka is a producer-centric firehose; server-side filtering wouldn't really fit well with the original desi…

Re: message filterin or "selector"

2015-08-06 Thread Ben Stopford
I think short answer here is that, if you need freeform selectors semantics as per JMS message selectors then you’d need to wrap the API yourself (or get involved in adding the functionality to Kafka). As Gwen and Grant say, you could synthesise something simpler using topics/partitions to pro