Hi Diego,
Confluent offers support for Apache Kafka.
https://www.confluent.io/
Cheers,
Roger
On Wed, Apr 12, 2017 at 11:14 AM, Diego Paes Ramalho Pereira <
diego.pere...@b3.com.br> wrote:
> Hello,
>
>
>
> I work for a Stock Exchange in Brazil and we are looking for a company
> that can provid
Yes
On Fri, Feb 17, 2017 at 10:06 PM, Jeff Widman wrote:
> Will this new release use a new consumer?
>
> On Feb 16, 2017 11:33 PM, wrote:
>
> > You can't integrate 3.1.1 REST Proxy with a secure cluster because it uses
> > the old consumer API (hence zookeeper dependency). The 3.2 REST Proxy
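For anyone following along: the new consumer talks to the brokers directly
instead of going through ZooKeeper, which is why it can pick up the cluster's
security settings. A rough sketch of the relevant plain Kafka consumer
properties (the REST Proxy wraps these under its own config names, so check
its docs; hostnames and paths below are placeholders):

bootstrap.servers=broker1:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit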
This is great. Thanks, Ismael.
On Fri, Feb 3, 2017 at 7:35 AM, Grant Henke wrote:
> Looks good to me. Thanks for handling the KIP.
>
> On Fri, Feb 3, 2017 at 8:49 AM, Damian Guy wrote:
>
> > Thanks Ismael. Makes sense to me.
> >
> > On Fri, 3 Feb 2017 at 10:39 Ismael Juma wrote:
> >
> > > Hi
Are you using snappy compression? There was a bug with snappy that caused
corrupt messages.
Sent from my iPhone
> On Mar 29, 2016, at 8:15 AM, sunil kalva wrote:
>
> Hi
> Do we store the message CRC on disk as well, and does the server verify it
> when we are reading messages back from disk?
> And how to
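As far as I know, yes: the CRC is part of the on-disk message format. It is
checked on the produce path and again by the consumer when it reads; the
broker does not normally re-verify on fetch since it serves data with
sendfile. To inspect what is actually on disk, DumpLogSegments prints the CRC
and validity of each message; something like this, with the log path being
just an example:

bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log \
  --deep-iteration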
Hi Li,
You might take a look at Apache Samza. It's conceptually simple but powerful
and makes heavy use of Kafka.
Best,
Roger
Sent from my iPhone
> On Sep 12, 2015, at 10:34 PM, Li Tao wrote:
>
> Hi Hackers,
>
> This is Lee, a learner of Kafka. I have read the original paper on Kafka,
> a
Issue is here: https://github.com/linkedin/Burrow/issues/3
On Fri, Jun 12, 2015 at 11:34 AM, Roger Hoover
wrote:
> Will do. Thanks
>
> Sent from my iPhone
>
> > On Jun 12, 2015, at 10:43 AM, Todd Palino wrote:
> >
> > Can you open an issue on the github page
Will do. Thanks
Sent from my iPhone
> On Jun 12, 2015, at 10:43 AM, Todd Palino wrote:
>
> Can you open an issue on the github page please, and we can investigate
> further there?
>
> -Todd
>
> On Fri, Jun 12, 2015 at 10:22 AM, Roger Hoover
> wrote:
>
have you set up ACLs within it? I'm
> not able to see this on our ZK (3.4.6 with no ACLs).
>
> -Todd
>
> On Fri, Jun 12, 2015 at 9:34 AM, Roger Hoover
> wrote:
>
> > Hi,
> >
> > I was trying to give burrow a try and got a ZK error "i
Hi,
I was trying to give burrow a try and got a ZK error "invalid ACL
specified". Any suggestions on what's going wrong?
1434044348908673512 [Critical] Cannot get ZK notifier lock: zk: invalid ACL
specified
Here's my config:
[general]
logdir=log
logconfig=logging.cfg
pidfile=burrow.pid
c
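In case it helps someone else hitting the same error, the ZooKeeper CLI can
show which ACLs are actually set on the path Burrow tries to lock; the
/burrow paths here are only a guess at the configured lock path:

bin/zkCli.sh -server localhost:2181
getAcl /burrow
getAcl /burrow/notifier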
Are you using snappy compression? I ran into an issue with message
corruption with the new producer, snappy compression, and broker restart.
On Mon, May 4, 2015 at 12:55 AM, scguo wrote:
> Hi
>
>
>
> Here is my questions.
>
>
>
> kafka.message.InvalidMessageException: Message is corrupt (store
Oops. I originally sent this to the dev list but meant to send it here.
Hi,
>
> When using Samza 0.9.0 which uses the new Java producer client and snappy
> enabled, I see messages getting corrupted on the client side. It never
> happens with the old producer and it never happens with lz4, gzip,
> -XX:+PrintGCDetails
> -XX:+PrintGCDateStamps
> -XX:+PrintTenuringDistribution
> -Xloggc:logs/gc.log
> -XX:ErrorFile=logs/hs_err.log
>
> -Jon
>
> On Mar 17, 2015, at 10:26 AM, Roger Hoover wrote:
>
> > Resurrecting an old thread. Are people running Kafka on Java 8 now?
> >
> >
Resurrecting an old thread. Are people running Kafka on Java 8 now?
On Sun, Aug 10, 2014 at 11:44 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Just curious if you saw any issues with Java 1.8 or if everything went
> smoothly?
>
> Otis
> --
> Performance Monitoring * Log Analytics
Hi Jonathan,
TCP will take care of re-ordering the packets.
On Wed, Mar 4, 2015 at 6:05 PM, Jonathan Hodges wrote:
> Thanks James. This is really helpful. Another extreme edge case might be
> that the single producer is sending the database log changes and the
> network causes them to reach K
> using Kafka client.
>
> I guess this latency is there at the Logstash end and perhaps we need to
> look for an alternative to the same.
>
> Do let me know your observation and understanding as well.
>
> Thanks!
>
>
>
> On Thu, Mar 5, 2015 at 1:13 PM, Roger Hoover
> w
mention what throughput you are reaching.
>
> Thanks!
>
> On Thu, Mar 5, 2015 at 12:56 PM, Roger Hoover
> wrote:
>
> > Hi Vineet,
> >
> > Try enabling compression. That improves throughput 3-4x usually for me.
> > Also, you can use async mode if you're
Hi Vineet,
Try enabling compression. That improves throughput 3-4x usually for me.
Also, you can use async mode if you're willing to trade some chance of
dropping messages for more throughput.
kafka {
  codec => 'json'
  broker_list => "localhost:9092"
  topic_id => "blah"
  # suggested (verify option names against your logstash-kafka plugin version):
  compression_codec => "snappy"
  producer_type => "async"
}
Joseph,
That's great! Thank you for writing that plugin.
Cheers,
Roger
On Mon, Dec 15, 2014 at 7:24 AM, Joseph Lawson wrote:
>
> Kafka made some headlines with Logstash announcing their latest beta
> version (1.5), which includes a Kafka input and output plugin by default. Good
> stuff. htt
"It also makes it possible to do validation on the server
side or make other tools that inspect or display messages (e.g. the various
command line tools) and do this in an easily pluggable way across tools."
I agree that it's valuable to have a standard way to plug in serialization
across many tool
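For context, in the new clients this ended up as the
org.apache.kafka.common.serialization.Serializer interface, configured via
key.serializer / value.serializer. A bare-bones sketch (the User type and the
"id,name" encoding are made up for illustration):

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;

// Toy domain object used by the example serializer below.
class User {
    final long id;
    final String name;
    User(long id, String name) { this.id = id; this.name = name; }
}

// Minimal custom serializer: encodes a User as "id,name" in UTF-8.
public class UserSerializer implements Serializer<User> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public byte[] serialize(String topic, User user) {
        if (user == null) {
            return null;
        }
        return (user.id + "," + user.name).getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public void close() { }
}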
Hi Daniel,
Thanks for sharing this. Looks like a great project. I probably don't
know enough to give a great answer but will throw in my 2c anyway.
I think Kafka prioritizes throughput and Aeron prioritizes latency. As you
mentioned, maybe Aeron could replace the current Kafka TCP protocol. T
Just a guess but could it be a firewall issue? Did you enable connections
to port 9092 from outside EC2 in a security group? Can you telnet to each
broker IP and port?
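A quick way to check reachability from the client machine (the broker
hostname below is an example):

telnet ec2-203-0-113-10.compute-1.amazonaws.com 9092
# or
nc -vz ec2-203-0-113-10.compute-1.amazonaws.com 9092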
On Tue, Oct 28, 2014 at 10:01 AM, Sameer Yami wrote:
> There was a typo earlier.
>
> This is the output -
>
> Topic:Test Parti
with someone who is and wanted to ask people about this so that we
> can learn what works and what doesn't.
>
> ___
> From: Roger Hoover [roger.hoo...@gmail.com]
> Sent: Friday, October 17, 2014 12:26 PM
> To: users@kafka.apache.org
> Sub
Casey,
Could you describe a little more about how these would help manage a
cluster?
My understanding is that Consul provides service discovery and leader
election. Kafka already uses ZooKeeper for brokers to discover each other
and elect partition leaders. Kafka high-level consumers use ZK to
At least two including the leader?
On Fri, Oct 17, 2014 at 8:12 AM, Guozhang Wang wrote:
> Hi Balaji,
>
> You could do a rolling bounce of the brokers to do the in-place upgrade if
> your partitions have at least two replicas. After that you may probably
> need to rebalance the leaders if they a
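If the leaders do end up skewed after the rolling bounce, the preferred
replica election tool will move them back (assuming
auto.leader.rebalance.enable isn't already handling it); the ZK address is an
example:

bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181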
the
> kafka-topics tool should suffice.
>
> On Tue, Oct 14, 2014 at 1:32 PM, Roger Hoover
> wrote:
>
> > I still have a question though. Is there a definitive way to tell if a
> > topic is configured for compaction? The way it seems to work now is that
> > the ZK con
d
the broker defaults for leader?
Thanks,
Roger
On Tue, Oct 14, 2014 at 12:17 PM, Roger Hoover
wrote:
> Oh, duh. I see it in the kafka-topics tool as well. Sorry for the
> distraction.
>
> kafka-topics.sh --zookeeper localhost:2181 --describe --topic foo
>
> Top
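For reference, the describe output includes a Configs column, so a compacted
topic looks roughly like this (topic name and broker ids are illustrative):

Topic:foo  PartitionCount:1  ReplicationFactor:1  Configs:cleanup.policy=compact
    Topic: foo  Partition: 0  Leader: 0  Replicas: 0  Isr: 0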
, 2014 at 12:05 PM, Roger Hoover
wrote:
> I found this way:
>
> zookeeper-shell.sh localhost:2181 get /config/topics/foo
>
> {"version":1,"config":{"cleanup.policy":"compact"}}
>
> cZxid = 0x57
>
> ctime = Tue Oct 14 12:02:54
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 51
numChildren = 0
On Tue, Oct 14, 2014 at 11:58 AM, Roger Hoover
wrote:
> Hi,
>
> How do I check if a topic is configured for compaction? Is there a
> command-line tool to see topic metadata like cleanup.policy=compact?
>
> Thanks,
>
> Roger
>
Hi,
How do I check if a topic is configured for compaction? Is there a
command-line tool to see topic metadata like cleanup.policy=compact?
Thanks,
Roger
> > I believe I downloaded from trunk and compiled a jar from that. The hardest
> > part of that seemed to be configuring gradle to sign the jar having never
> > done it before.
> >
> > Christian
> >
> >
> > On Fri, Aug 15, 2014 at 11:00 AM
ded from trunk and compiled a jar from that. The hardest
> part of that seemed to be configuring gradle to sign the jar having never
> done it before.
>
> Christian
>
>
> On Fri, Aug 15, 2014 at 11:00 AM, Roger Hoover
> wrote:
>
> > Hi,
> >
15, 2014 at 11:00 AM, Roger Hoover
> wrote:
>
> > Hi,
> >
> > I want to try out the new producer api
> > (org.apache.kafka.clients.producer.KafkaProducer) but found that it's not
> > in the published jar.
> >
> > What's the best way to ge
Hi,
I want to try out the new producer api
(org.apache.kafka.clients.producer.KafkaProducer) but found that it's not
in the published jar.
What's the best way to get it? Build from source from the 0.8.1.1 tag?
Any flags I need to set to include the new producer in the jar?
Thanks,
Roger
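For anyone else wanting to try it, the rough steps I'd expect for building
the clients jar from trunk (I believe the project README describes the gradle
bootstrap; output paths may differ by version):

git clone https://github.com/apache/kafka.git
cd kafka
gradle          # bootstraps the gradle wrapper, per the README
./gradlew jar   # the new producer ends up in the clients jar under clients/build/libs/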
Actually, there wasn't a way to do it prior to 0.8.1
On Fri, Jun 13, 2014 at 3:30 PM, Roger Hoover
wrote:
> Yes, I believe that prior to Kafka 0.8 there was no easy way for external
> clients to talk to Kafka brokers running in a cloud environment.
>
> I wrote a blog p
> Best regards,
> James
>
>
>
> > On 2014/6/14 at 12:32 AM, Roger Hoover wrote:
> >
> > I wouldn't say that Kafka's making it difficult. The cloud environment is
> > making it difficult. The VM that the Kafka broker is running o
I wouldn't say that Kafka's making it difficult. The cloud environment is
making it difficult. The VM that the Kafka broker is running on can only
see its private IP (at the OS level), so you have to add the
advertised.host.name config so that it knows what public IP is assigned to
it.
On Fri,
I think setting these is not a good idea b/c they only apply to the specific
client where you've set up the tunnel. Other clients cannot use these
settings:
advertised.host.name=localhost
advertised.port=19092
You probably need to figure out another way such as
1) Setting up a local mapping on your pro
Thanks, Jay. Great write up.
I noticed a bad link to the docs for basic operations (
http://localhost/documentation.html#basic_ops). It's in the paragraph that
starts with "We also improved a lot of operational activities...".
Roger
On Wed, Mar 12, 2014 at 8:57 PM, Jay Kreps wrote:
> Hi guy
allbacks per call and adding this to the
> send rather than the method invocation, but both added some complexity and
> it seemed both could be implemented using the api provided without too much
> trouble.
>
> -Jay
>
>
> On Tue, Jan 28, 2014 at 12:33 PM, Roger Hoover >wrote:
> > > > > > > ld have the same name (I was a bit sloppy about that so I'll fix
> > > > > > > any errors there). There are a few new things and we sh
ly
> connect to bootstrap urls sequentially until one succeeds when the producer
> is first created and fail fast if we can't establish a connection. This
> would not be wasted work as we could use the connection for the metadata
> request when the first message is sent. I like this
A couple comments:
1) Why does the config use a broker list instead of discovering the brokers
in ZooKeeper? It doesn't match the HighLevelConsumer API.
2) It looks like broker connections are created on demand. I'm wondering
if sometimes you might want to flush out config or network connectivi
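For anyone following the thread, a minimal sketch of how the new producer
gets wired up, which is where the broker-list question comes from (config
names are as they appear in later releases; broker addresses are
placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The bootstrap brokers are only used for the initial metadata request;
        // after that the client discovers the full broker list from the cluster.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // The record key determines the partition, i.e. routing happens in the client.
        producer.send(new ProducerRecord<>("my-topic", "some-key", "some-value"), new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    exception.printStackTrace();
                }
            }
        });
        producer.close();
    }
}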
I'll give you my take in case it helps.
Kafka achieves great throughput + durability because
1) The broker does minimal work
a) Routing is done by producers
b) State management is done by consumers
Other messaging brokers typically have to keep track of which messages
have been dispatch
e can look at making the zookeeper config change on the producer
> soon. But that is something to discuss on a JIRA.
>
> Thanks,
> Neha
>
>
> On Wed, Oct 30, 2013 at 9:53 AM, Roger Hoover >wrote:
>
> > Hi,
> >
> > I'm still getting started with Kafka
Hi,
I'm still getting started with Kafka and was curious why there is an
asymmetry between the producer and consumer APIs. Why does the producer
config take a list of brokers whereas the consumer config takes a
ZooKeeper connection string?
Thanks,
Roger
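To make the asymmetry concrete, the two 0.8-era configs look roughly like
this (hostnames are placeholders):

# old producer: bootstraps from a broker list
metadata.broker.list=broker1:9092,broker2:9092

# high-level consumer: bootstraps from ZooKeeper
zookeeper.connect=zk1:2181,zk2:2181
group.id=my-group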
let me know if anything else is needed.
Cheers,
Roger
On Fri, Oct 25, 2013 at 4:05 PM, Roger Hoover wrote:
> Ok. I'm working on it.
>
>
> On Thu, Oct 24, 2013 at 10:02 AM, Timothy Chen wrote:
>
>> Hi Folks/Roger,
>>
>> Unfortunately I don't have legal cle
ilure it isn't very helpful in
> saying exactly where the test failed; my environment is probably
> messed up but I know of one or two others who are having similar
> issues.
>
> Joel
>
>
> On Fri, Oct 25, 2013 at 4:24 PM, Roger Hoover
> wrote:
> > Hi,
> &
Hi,
I'm new to Scala but working on a simple patch for a configuration change
and want to run just my unit tests. When I run ./sbt test-only, it
executes all sorts of other tests but not the one I want. Is there an easy
way to run a single test? Any help is appreciated.
$ ./sbt test-only kafka
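What usually works is quoting the whole command so the test name is passed to
test-only rather than treated as a separate sbt command (the test class below
is just an example):

$ ./sbt "test-only kafka.server.KafkaConfigTest"
$ ./sbt "test-only *KafkaConfigTest"     # wildcards work too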
atch.
>
> Thanks!
>
> Tim
>
>
>
>
> On Mon, Oct 21, 2013 at 11:17 AM, Roger Hoover >wrote:
>
> > Agreed. Tim, it would be very helpful if you could provide a patch.
> > Otherwise, I may be willing to create one.
> >
> >
> > On Thu, Oct 17,
> > This is also needed for deploying Kafka into Azure.
> >
> > I also created zkHost.port since the internal and external ports that are
> > exposed might be different as well.
> >
> > Tim
> >
> >
> > On Thu, Oct 17, 2013 at 3:13 PM,
Hi all,
I'm getting started experimenting with Kafka and ran into a configuration
issue.
Currently, in server.properties, you can configure host.name, which gets
used for two purposes: 1) to bind the socket and 2) to publish the broker
details to ZK for clients to use.
There are times when these two
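The shape of the fix that eventually landed is a second pair of settings, so
the bind address and the published address can differ; something like this in
server.properties (values are examples):

# used to bind the socket (leave unset to bind all interfaces)
host.name=10.0.0.5
port=9092

# published to ZK for clients to use (the externally reachable address)
advertised.host.name=broker1.example.com
advertised.port=9092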
Thank you, Jay.
When talking about flush rates, I think you mean the opposite of what was
said here:
"However very high application flush rates can lead to high latency when
the flush does occur."
should be
However very low application flush rates (infrequent flushes) can lead to
high latency w