regards,
Jim
> On 13 Jan 2025, at 21:09, Sankaran, Sailesh
> wrote:
>
> Hello Team,
>
> Please help us with the procedure.
>
> Sailesh Sankaran CISSP
> Digital Plant – Next Generation Services
>
>
in sync. I’m letting this run.
Jim
On 2/25/22, 7:09 PM, "Jim Langston" wrote:
Hi all, after upgrading to 2.8.1 and restarting the brokers, one of the
brokers is continually logging messages similar to this
Feb 25 23:58:17 bigdata-worker2.dc.res0.
stop the messages (I have
found several blog/user-group references to the message, but none have
successfully stopped it). I have started
and stopped the brokers several times.
Thanks,
Jim
> Hi Hans,
>
> Is there some documentation or an example with source code where I can
> learn more about this feature and how it is implemented?
>
> Thanks,
> Jim
>
By the way I tried this...
echo "key1:value1" | ~/kafka/bin/kafka-console-p
> wrote:
> >
> > Does Kafka have something that behaves like a unique key so a producer
> > can’t write the same value to a topic twice?
>
Hi Hans,
Is there some documentation or an example with source code where I can
learn more about this feature and how it is implemented?
Thanks,
Jim
Does Kafka have something that behaves like a unique key so a producer can’t
write the same value to a topic twice?
After reading the blog from Neha
https://www.confluent.io/blog/apache-kafka-goes-1-0/
I moved to the 1.0 platform in production ..
Jim
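Kafka itself has no unique-key constraint on a topic: the idempotent producer (`enable.idempotence=true`, brokers 0.11+) only deduplicates broker-side retries of the same produce request, not an application deliberately sending the same value twice. A minimal sketch of the usual workaround, application-level dedup in front of the producer (all names here are hypothetical; `send` stands in for a real producer call):

```python
# Hypothetical application-level dedup before producing: Kafka has no
# unique-key constraint, so uniqueness must be enforced by the app.
# send() stands in for a real producer call.

def make_deduping_sender(send):
    """Wrap a send(key, value) callable so repeated keys are dropped."""
    seen = set()

    def dedup_send(key, value):
        if key in seen:
            return False  # duplicate: skip the produce call
        seen.add(key)
        send(key, value)
        return True

    return dedup_send

sent = []
send = make_deduping_sender(lambda k, v: sent.append((k, v)))
send("key1", "value1")   # produced
send("key1", "value1")   # dropped as duplicate
send("key2", "value2")   # produced
```

In practice the `seen` set would need to be bounded or persisted (e.g. a compacted topic or an external store), since an in-memory set does not survive a restart.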
On 3/27/18, 7:07 AM, "chou.fan" wrote:
Hi, we are planning to upgrade our kafka cluster to benefit from the new
feature of tra
I think this is an open-ended question we are all starting to ask as
we move to K8s and Docker environments. Do we stand
up a Kafka cluster by itself, or do we pull Kafka under the K8s umbrella?
Jim
On 3/22/18, 3:28 PM, "Jason Turim" wrote:
What container orchestration package do y
original name, sounds painful.
Thoughts, suggestions,
Thanks,
Jim
+1
> On May 27, 2017, at 9:27 PM, Vahid S Hashemian
> wrote:
>
> Sure, that sounds good.
>
> I suggested that to keep command line behavior consistent.
> Plus, removal of ACL access is something that can be easily undone, but
> topic deletion is not reversible.
> So, perhaps a new follow-up JI
> On May 26, 2017, at 1:10 PM, Vahid S Hashemian
> wrote:
>
> Gwen, thanks for the KIP.
> It looks good to me.
>
> Just a minor suggestion: It would be great if the command asks for a
> confirmation (y/n) before deleting the topic (similar to how removing ACLs
> works).
>
+1 (or some sort
Yeah, let's figure out the "best" action to take...
Looks like something I'd like to get a handle on.
> On Aug 31, 2016, at 4:05 PM, Jason Gustafson wrote:
>
> Hi Achintya,
>
> We have a JIRA for this problem: https://issues.apache.org/jira/browse/KAFKA-3834. Do you expect the client to rai
Looks good here: +1
> On Aug 4, 2016, at 9:54 AM, Ismael Juma wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for the release of Apache Kafka 0.10.0.1. This
> is a bug fix release and it includes fixes and improvements from 53 JIRAs
> (including a
> Is this a case where multiple logical messages (when combined together) are
> treated by Kafka as a single message, and it's up to the consumer to
> separate them?
Yes.
-- Jim
On 6/6/16, 7:12 AM, "Tom Brown" wrote:
>How would it be possible to encrypt an
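The batching scheme confirmed above (several logical messages combined into one Kafka message, with the consumer splitting them apart) can be sketched with simple length-prefix framing. The helper names are hypothetical, not any Kafka API:

```python
import struct

# Hypothetical length-prefix framing: pack several logical messages into
# one Kafka record body; the consumer splits them back apart.

def pack(messages):
    """Prefix each payload with a 4-byte big-endian length."""
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)

def unpack(blob):
    """Invert pack(): walk the blob, reading one length-prefixed payload at a time."""
    out, pos = [], 0
    while pos < len(blob):
        (n,) = struct.unpack_from(">I", blob, pos)
        pos += 4
        out.append(blob[pos:pos + n])
        pos += n
    return out

batch = pack([b"event-1", b"event-2", b"event-3"])
assert unpack(batch) == [b"event-1", b"event-2", b"event-3"]
```

A delimiter-based scheme works too, but length prefixes avoid escaping issues when payloads can contain the delimiter.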
MG>Jim can we assume you only implement Asymmetric Cryptography?
As described and depicted in the blog post, we used asymmetric
cryptography as the basis for trust, with symmetric crypto doing the heavy
lifting. Specifically, for each "envelope", we include a randomly
gene
compromised a decrypting system).
-- Jim
On 6/2/16, 2:56 AM, "Tom Crayford" wrote:
>Filesystem encryption is transparent to Kafka. You don't need to use SSL,
>but your encryption requirements may cause you to need SSL as well.
>
>With regards to compression, without
MG>curious if Jim tested his encryption/decryption scenario on Kafka's
stateless broker?
MG>Jim's idea could work if you want to implement a new
serializer/deserializer for every new supported cipher
Not sure if I understand. We didn't modify Kafka at all.
I definitely recommend
://symc.ly/1pC2CEG )
-- Jim
On 4/25/16, 11:39 AM, "David Buschman" wrote:
>Kafka handles messages which are composed of an array of bytes. Kafka does
>not care what is in those byte arrays.
>
>You could use a custom Serializer and Deserializer to encrypt and decrypt
>the data fro
where they left off. That's just the start of idea for a
possible approach; it would have to be thought through more carefully.
Not sure, but you may need to handle cases where messages get re-ordered.
-- Jim
On 1/20/16, 11:31 AM, "Josh Wo" wrote:
>Jim,
>So I guess the pr
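The re-ordering concern raised above is usually handled with per-key sequence numbers and a small reorder buffer on the consumer side. A minimal sketch (hypothetical names, assuming a dense sequence starting at 0):

```python
# Hypothetical reorder buffer: deliver messages in sequence-number order,
# holding back any that arrive early.

def make_reorderer(deliver, first_seq=0):
    pending = {}
    state = {"next": first_seq}

    def receive(seq, payload):
        pending[seq] = payload
        # Flush every consecutive message that is now deliverable.
        while state["next"] in pending:
            deliver(pending.pop(state["next"]))
            state["next"] += 1

    return receive

delivered = []
receive = make_reorderer(delivered.append)
receive(1, "b")   # arrives early: buffered
receive(0, "a")   # releases 0, then the buffered 1
receive(2, "c")
assert delivered == ["a", "b", "c"]
```

A production version would also need a bound on the buffer and a policy for gaps that never fill (timeout, dead-letter, etc.).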
network traffic, with the amount
depending on the size of the topic and your replication factor.
-- Jim
On 1/20/16, 10:37 AM, "Josh Wo" wrote:
>Hi Jens,
>I got your point but some of our use case cannot just rely on TTL. We try
>to have long expiry for message and rather compact
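The replication-traffic point above reduces to simple arithmetic: every byte produced to a topic is fetched once by each of the `replication_factor - 1` follower brokers. A back-of-envelope sketch (numbers purely illustrative):

```python
# Back-of-envelope inter-broker replication traffic (illustrative numbers):
# each produced byte is re-fetched by replication_factor - 1 followers.

def replication_traffic_mb_per_s(produce_mb_per_s, replication_factor):
    return produce_mb_per_s * (replication_factor - 1)

# e.g. producing 10 MB/s at replication factor 3 adds ~20 MB/s of
# follower-fetch traffic on top of the producer traffic itself.
assert replication_traffic_mb_per_s(10, 3) == 20
```

This ignores consumer fetch traffic and protocol overhead, which add on top.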
ption algorithm will make the encrypted
message appear random. Random data will not really compress. If it is
reliably compressing after encryption, then your encryption is not as
secure as it should be. Also discussed here:
http://security.stackexchange.com/a/19970.
-- Jim
On 1/15/16, 6:39 AM,
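The compression point above is easy to demonstrate: ciphertext from a sound cipher looks like random bytes, and random bytes do not compress, while repetitive plaintext compresses well. A self-contained check using `os.urandom` as a stand-in for ciphertext:

```python
import os
import zlib

# Random-looking bytes (as ciphertext should be) do not compress;
# repetitive plaintext does. os.urandom stands in for ciphertext here.

random_like = os.urandom(10_000)       # stand-in for encrypted data
repetitive = b"hello kafka " * 1_000   # stand-in for plaintext

comp_random = zlib.compress(random_like, 9)
comp_repeat = zlib.compress(repetitive, 9)

assert len(comp_random) >= len(random_like) * 0.99  # essentially incompressible
assert len(comp_repeat) < len(repetitive) * 0.05    # compresses very well
```

This is why compress-then-encrypt is the usual ordering when both are needed.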
what you
did. In our tests, the encryption didn't add as much overhead as we
thought it would.
-- Jim
--
Jim Hoagland, Ph.D.
Sr. Principal Software Engineer
Big Data Analytics Team
Cloud Platform Engineering
On 1/14/16, 2:23 PM, "Bruno Rassaerts" wrote:
>Hello,
>
>
Kafka brokers what to host after a service failure and
restart?
Thanks,
Jim
have any thoughts or questions.
Thanks,
Jim
--
Jim Hoagland, Ph.D.
Sr. Principal Software Engineer
Big Data Analytics Team
Cloud Platform Engineering
Symantec Corporation
http://cpe.symantec.com
These are my rough notes on building 0.8.1
building 0.8.1
https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
git clone https://github.com/apache/kafka.git
cd kafka
git checkout --track -b 0.8.1 origin/0.8.1
vim gradle.properties
change to: scalaVersion=2.10.1
./gradlew idea
Open
her places as well. Option 3 is essentially what the high level
> consumer does under the covers already. It prefetches data in batches from
> the server to provide high throughput.
>
>
> On Wed, Aug 13, 2014 at 2:20 AM, Anand Nalya
> wrote:
>
> > Hi Jim,
> >
Are you using the random partitioner or a custom partitioner in your
producer?
Is your producer picking up all the available partitions?
What producer client are you using?
On 8/1/14, 7:33 AM, "François Langelier" wrote:
>HI all!
>
>I think I already saw this question on the mailing list, but I'
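The partitioner questions above come down to how the producer maps a key to a partition: Kafka's Java client hashes the record key (murmur2) modulo the partition count, and messages with a null key are spread across partitions instead. A simplified, self-contained sketch (md5 stands in for murmur2 here purely to keep the example dependency-free):

```python
import hashlib

# Simplified sketch of key-based partitioning. Kafka's Java client uses
# murmur2; md5 stands in here only to make the example self-contained.

def partition_for(key: bytes, num_partitions: int) -> int:
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key always lands on the same partition.
p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
assert p1 == p2
assert 0 <= p1 < 6
```

If only some partitions are receiving data, the usual suspects are a skewed key distribution or (in older producers) null-key messages being pinned to one partition between metadata refreshes.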
balance high/full
throughput and fully committed events.
On Thu, Jul 31, 2014 at 8:16 AM, Guozhang Wang wrote:
> Hi Jim,
>
> Whether to use high level or simple consumer depends on your use case. If
> you need to manually manage partition assignments among your consumers, or
> you need
Curious on a couple questions...
Are most people (are you?) using the simple consumer vs. the high level
consumer in production?
What is the common processing paradigm for maintaining a full pipeline for
kafka consumers for at-least-once messaging? E.g. you pull a batch of 1000
messages and:
opti
ent re-try flooding.
What are some patterns around this that people are using currently to
handle message failures at scale with kafka?
Pardon if this is a frequent question, but the
http://search-hadoop.com/kafka server
is down so I can't search historicals at the moment.
thanks,
Jim
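The at-least-once batch pattern asked about above is usually "commit after process": poll a batch, handle every message, and commit offsets only once the whole batch has succeeded, so a crash replays the batch (duplicates possible, loss not). A sketch with an in-memory stand-in for a Kafka consumer (all class and function names hypothetical):

```python
# Commit-after-process sketch for at-least-once delivery. FakeConsumer is
# an in-memory stand-in for a real Kafka consumer, not a Kafka API.

class FakeConsumer:
    def __init__(self, messages):
        self.messages = messages
        self.committed = 0   # offset a restarted consumer would resume from
        self.position = 0

    def poll(self, max_records):
        batch = self.messages[self.position:self.position + max_records]
        self.position += len(batch)
        return batch

    def commit(self):
        self.committed = self.position

def process_all(consumer, handle, max_records=1000):
    while True:
        batch = consumer.poll(max_records)
        if not batch:
            break
        for msg in batch:
            handle(msg)       # if this raises, nothing below commits
        consumer.commit()     # commit only after the full batch succeeds

consumer = FakeConsumer(["m1", "m2", "m3"])
out = []
process_all(consumer, out.append, max_records=2)
assert out == ["m1", "m2", "m3"]
assert consumer.committed == 3
```

Failed messages are then typically parked on a retry or dead-letter topic rather than retried inline, which is one way to avoid the retry flooding mentioned above.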