Hi Gerard,
When trying to reproduce this, did you use the Go sarama client Safique
mentioned?
On Fri, Jun 3, 2016 at 5:10 AM, Gerard Klijs
wrote:
> I assume you use a replication factor of 3 for the topics? When I ran some
> tests with producers/consumers in a dockerized setup, there were only f
ut 25% more cost, we doubled our VMs, using twice as
many half-sized EBS volumes.
-Christian
On Fri, Jul 8, 2016 at 12:07 PM Krish wrote:
> Thanks, Christian.
> I am currently reading about kafka-on-mesos.
> I will hack something this weekend to see if I can bring up a kafka
> schedul
Keeper. Setting it to false gives me this timeout, but only when I
also set the -Djava.security.auth... property.
I know, I'm missing a small thing.
Thanks,
Christian
set that var to false and I got
past the problem.
On Sat, Jan 7, 2017 at 7:54 AM, Christian wrote:
> Hi,
>
> I'm trying to set up SASL_PLAINTEXT authentication between the
> producer/consumer clients and the Kafka brokers only. I am not too worried
> about the broker to broker
from the server side.
Is this expected behavior?
Thanks,
Christian
's going on.
>
>
> Some reading material can be found at:
> https://github.com/gerritjvv/kafka-fast/blob/master/kafka-clj/Kerberos.md
>
> and if you want to see or need for testing a vagrant env with kerberos +
> kafka configured see
> https://github.com/gerritjvv/kafka-fast/blob/mast
un.security.krb5.debug=true and
> > > -Djava.security.debug=gssloginconfig,configfile,
> > configparser,logincontext
> > > to see debug info about what's going on.
> > >
> > >
> > > Some reading material can be found at:
> &g
ne out there help me? Is the
Kafka SASL implementation not meant for such a complicated scenario or am I
just thinking about it all wrong?
Thanks,
Christian
Thank you Harsha!
On Sun, Feb 26, 2017 at 10:27 AM, Harsha Chintalapani
wrote:
> Hi Christian,
> Kafka client connections are long-lived connections,
> hence the authentication part comes up during connection establishment and
> once we authenticate regular kafka p
uld not be able to mark jobs as completed except in a strict order
(while maintaining a processed-successfully-at-least-once guarantee).
This is not to say it cannot be done, but I believe your workqueue would
end up working a bit strangely if built with Kafka.
Christian
On 10/09/2014 06:13 AM, Will
ly, that's why we are evaluating whether Kafka alone is enough.
> Because if Storm gives us the same benefits as Kafka, it's better to stick
> with only one technology to keep everything as simple as possible.
>
I think it is more a question of will using Storm make managing your
consumers (two different applications) you would want each to have 4
threads (4*2 = 8 threads total).
There are also considerations depending on which consumer code you are
using (a topic I'm decidedly not well informed on).
Christian
On Wed, Jan 28, 2015 at 1:28 PM, Ri
a 'failed
messages' store somewhere else and have code that looks in there to make
retries happen (assuming you want the failure/retry to persist beyond the
lifetime of the process).
Christian
On Wed, Jan 28, 2015 at 7:00 PM, Guozhang Wang wrote:
> I see. If you are using the high-l
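The "failed messages" store suggested above can be sketched in a few lines. This is a toy model under stated assumptions: the names (`process_with_retries`, `failed_store`, `flaky_handler`) are mine, not any Kafka API, and the list stands in for a separate retry topic or database table.

```python
def process_with_retries(messages, handler, failed_store, max_attempts=3):
    """Try each message up to max_attempts times; anything that still fails is
    parked in failed_store (a stand-in for a separate retry topic or table) so
    retries can outlive the lifetime of this process."""
    for msg in messages:
        for attempt in range(1, max_attempts + 1):
            try:
                handler(msg)
                break  # processed successfully
            except Exception as exc:
                if attempt == max_attempts:
                    failed_store.append({"msg": msg, "error": str(exc)})

failed = []

def flaky_handler(msg):
    # hypothetical handler that cannot process one particular message
    if msg == "bad":
        raise ValueError("cannot process")

process_with_retries(["ok", "bad", "also-ok"], flaky_handler, failed)
```

A separate job can then drain `failed` and re-publish, keeping the main consumer loop moving.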
d in a Kafka focused discussion might be dealt with by
covering disk encryption and how the conversations between Kafka instances
are protected.
Christian
On Wed, Feb 25, 2015 at 11:51 AM, Jay Kreps wrote:
> Hey guys,
>
> One thing we tried to do along with the product release was start
us towards stream centric land).
Christian
On Wed, Feb 25, 2015 at 3:57 PM, Jay Kreps wrote:
> Hey Christian,
>
> That makes sense. I agree that would be a good area to dive into. Are you
> primarily interested in network level security or encryption on disk?
>
> -Jay
>
>
hing to be
fairly separate from Kafka even if there's a handy optional layer that
integrates with it.
Christian
On Wed, Feb 25, 2015 at 5:34 PM, Julio Castillo <
jcasti...@financialengines.com> wrote:
> Although full disk encryption appears to be an easy solution, in our case
> that
two with a single producer you would not expect to see all
partitions be hit.
Christian
On Mon, Mar 2, 2015 at 4:23 PM, Yang wrote:
> thanks. Just checked the code below; the line that calls
> Random.nextInt() seems to be called only *a few times*, and all the rest
&g
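The behavior described above (Random.nextInt() running only a few times) is consistent with the old producer picking a random partition for key-less messages and sticking with it until the metadata refresh interval expires. Here is a toy simulation of that sticky-random scheme; all names are illustrative, not the actual producer code, and a message counter stands in for elapsed time.

```python
import random

class StickyRandomPartitioner:
    """Toy model (not the real producer code): for key-less messages the old
    producer picked a random partition and kept reusing it until the metadata
    refresh interval expired, so the random call ran only rarely."""

    def __init__(self, num_partitions, refresh_every):
        self.num_partitions = num_partitions
        self.refresh_every = refresh_every  # messages between re-picks (stand-in for time)
        self._current = None
        self._sent = 0

    def partition(self):
        if self._current is None or self._sent % self.refresh_every == 0:
            self._current = random.randrange(self.num_partitions)  # the rare random call
        self._sent += 1
        return self._current

p = StickyRandomPartitioner(num_partitions=8, refresh_every=1000)
partitions_hit = {p.partition() for _ in range(1000)}
# a single producer touches only one of the 8 partitions in this window
```

This is why, with one or two producers, you would not expect to see all partitions hit evenly over a short interval.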
Do you have anything on the number of voters, or audience breakdown?
Christian
On Wed, Mar 4, 2015 at 8:08 PM, Otis Gospodnetic wrote:
> Hello hello,
>
> Results of the poll are here!
> Any guesses before looking?
> What % of Kafka users are on 0.8.2.x already?
> What % o
Hi Everyone,
I have been experimenting with the libraries listed below and experienced the
same problems.
I have not found any other Node clients. I am interested in finding a
Node solution as well.
Happy to contribute on a common solution.
Christian Carollo
On Apr 24, 2013, at 10
According to the Kafka 0.8 documentation, under broker configuration there
are these parameters and their definitions:
log.retention.bytes -1 The maximum size of the log before deleting it
log.retention.bytes.per.topic "" The maximum size of the log for some
specific topic before deleting it
I'm cu
Hi Jun,
Thank you for your reply. I'm still a little fuzzy on the concept.
Are you saying I can have topics A, B and C with
log.retention.bytes.per.topic.A = 15MB
log.retention.bytes.per.topic.B = 20MB
log.retention.bytes = 30MB
And thus topic C will get the value 30MB? Since it's not define
Jun,
For my first example, is that syntax correct? I.e.
log.retention.bytes.per.topic.A = 15MB
log.retention.bytes.per.topic.B = 20MB
I totally guessed there and was wondering if I guessed right. Otherwise, is
there a document with the proper formatting to fill out this map?
Thank you,
Paul
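For what it's worth, on brokers from roughly 0.8.1 onward the per-topic cap is set as a topic-level config rather than a dotted broker property; a sketch of that form (the broker address and byte values are illustrative, not from this thread):

```shell
# Cap topic A at ~15 MB via the topic-level retention.bytes config
# (this replaced the old log.retention.bytes.per.topic broker setting)
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name A \
  --add-config retention.bytes=15728640
```

The broker-wide log.retention.bytes then acts as the default for topics without an override.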
Neha,
Correct, that is my question. We want to investigate capping our disk usage
so we don't fill up our hard disks. If you have any recommended configurations
or documents on these settings, please let us know.
Thank you,
Paul
On Tue, Aug 20, 2013 at 6:16 AM, Paul Christian
wrote:
> Jun,
is EventHubs. Before using
Kafka or any system in production you'll want to be sure you understand the
operational aspects of it.
Christian
>
> Thanks for any comments too. :)
>
>
>
>
> On Mon, May 4, 2015 at 9:03 AM, Mayuresh Gharat <
gharatmayures...@gmail.com>
Wonder if you can listen to the zkPath for topics via a zk watch (
https://zookeeper.apache.org/doc/r3.3.3/api/org/apache/zookeeper/Watcher.html)
to let you know when the structure of the tree changes (ie, add/remove)?
The zkPath for topics is "/brokers/topics"
https://github.com/chris
kafka-0.9.0.1-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
> >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
> >
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> >
> > Thanks,
> >
> > Jun
> >
> >
>
--
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io
t;group.id", groupId)
> > props.put("auto.commit.enabled", "false")
> > // this timeout is needed so that we do not block on the stream!
> > props.put("consumer.timeout.ms", "1")
> > props.put("zookeeper.sync.time.ms", "200")
> >
>
setting in my email above is responsible for
> this auto reconnect mechanism?
>
> On Wed, Feb 17, 2016 at 8:04 PM, Christian Posta <
> christian.po...@gmail.com>
> wrote:
>
> > Yep, assuming you haven't completely partitioned that client from the
> > cluster,
0 1
>
> Can someone help me clarify or point me at a doc that explains what is
> getting counted here? You can shoot me if you like for attempting the
> hack-ish solution of re-setting the offset through the Zookeeper API, but I
> would still like to understand what, exactly, is represented by that number
> 30024.
>
> I need to hand off to IT for the Disaster Recovery portion and saying
> "trust me, it just works" isn't going to fly very far...
>
> Thanks.
>
I believe so. Happy to be corrected.
On Wed, Feb 17, 2016 at 12:31 PM, Joe San wrote:
> So if I use the High Level Consumer API, using the ConsumerConnector, I get
> this automatic zookeeper connection for free?
>
> On Wed, Feb 17, 2016 at 8:25 PM, Christian Posta <
> christi
Awesome, glad to hear. Thanks Jun!
On Wed, Feb 17, 2016 at 12:57 PM, Jun Rao wrote:
> Christian,
>
> Similar to other Apache projects, a vote from a committer is considered
> binding. During the voting process, we encourage non-committers to vote as
> well. We will cancel the re
yep! http://www.apache.org/dev/release-publishing.html#voted
On Wed, Feb 17, 2016 at 1:05 PM, Gwen Shapira wrote:
> Actually, for releases, committers are non-binding. PMC votes are the only
> binding ones for releases.
>
> On Wed, Feb 17, 2016 at 11:57 AM, Jun Rao wrote:
>
Can someone add Karma to my user id for contributing to the wiki/docs?
userid is 'ceposta'
thanks!
in case you are aware of any issue that might cause it?
> I'm chasing this leak for several days, and managed to track it down to the
> code writing to Kafka, so I'm a little desperate :) any help will do.
>
> Thanks!
>
> Asaf
>
; consumer level?
>
Would love to have the docs in gitbook/markdown format so they can easily
be viewed from the source repo (or mirror, technically) on github.com. They
can also be easily converted to HTML, have a side-navigation ToC, and still
be versioned along with the src code.
Thoughts?
For sure! Will take a look!
On Wednesday, March 2, 2016, Gwen Shapira wrote:
> Hey!
>
> Yes! We'd love that too! Maybe you want to help us out with
> https://issues.apache.org/jira/browse/KAFKA-2967 ?
>
> Gwen
>
> On Wed, Mar 2, 2016 at 2:39 PM, Christian Posta
>
even if the security parts
work out for you.
Christian
On Wed, Mar 2, 2016 at 9:52 PM, Jan wrote:
> Hi folks;
> does anyone know of Kafka's ability to work over satellite links? We have an
> IoT telemetry application that uses satellite communication to send data from
> remote
b.com/christian-posta/kafka/tree/ceposta-doco
On Thu, Mar 3, 2016 at 6:28 AM, Marcos Luis Ortiz Valmaseda <
marcosluis2...@gmail.com> wrote:
> I love that too
> +1
>
> 2016-03-02 21:15 GMT-05:00 Christian Posta :
>
> > For sure! Will take a look!
> >
> > On W
e a message and to ensure it makes it to Mongo. (Redelivery is
> ok)
>
> Thanks for any help or pointers in the right direction.
>
> Michael
>
able to demonstrate that addressing
> this case solves my stuck consumer problem.
>
> How do I submit a bug report for this issue, or does this email constitute
> a bug report?
>
> --Larkin
>
> Thanks Christian,
> We would want to retry indefinitely, or at
> least for, say, x minutes. If we don't poll, how do we keep the heartbeat
> alive to Kafka? We never want to lose this message and only want to commit
> to Kafka when the message i
page
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
> is
> outdated..
>
> Thanks in advance..
>
.b
> less likely.
>
> -Jason
>
>
>
> On Fri, Mar 11, 2016 at 1:03 AM, Michael Freeman
> wrote:
>
> > Thanks Christian,
> > Sending a heartbeat without having to poll
> > would also be useful when using a large max
> -Jason
>
> On Mon, Mar 14, 2016 at 11:21 AM, Christian Posta <
> christian.po...@gmail.com
> > wrote:
>
> > Jason,
> >
> > Can you link to the proposal so I can take a look? Would the "sticky"
> > proposal prefer to keep partitions assig
to this list to learn and implement Kafka.
>
> Thanks,
> Punya
>
>
> bookmarks8 0 [0, 1] [0, 1]
>
> bookmarks9 1 [1, 0] [1, 0]
>
> We monitor the disk writes and I only see writes at broker 0, and broker 1
> sees none (not comparable at all). I do see comparable network traffic at
&
You need to send a mail to users-subscr...@kafka.apache.org
http://kafka.apache.org/contact.html
On Sat, Mar 19, 2016 at 4:14 AM, Andreas Thoelke
wrote:
> Hi,
>
> please add me to the Kafka list.
>
> Andreas
>
speeded up? I use this in a test and would like to make that
test faster.
Christian
--
Christian Schneider
http://www.liquid-reality.de
Computer Scientist
http://www.adobe.com
? From the long delay it looks a bit like a
reverse DNS issue, but I don't know if these can happen with Kafka or what to
configure to avoid the issue.
Christian
> > > end
> > > up with bugs in production.
> > >
> > > I can volunteer for the release management of the LTS release but as a
> > > community, can we follow the rigour of back-porting the bug-fixes to
> the
> > > LTS branch?
> > >
> > > --
> > > Regards
> > > Vamsi Subhash
> > >
> >
> >
> >
> > --
> > Regards
> > Vamsi Subhash
> >
>
ides.
>
> I think the overhead which happens while establishing connection from
> consumer/producer to kafka broker(s) seems a little heavy.
>
> Thanks in advance!
>
> Best regards
>
> bgkim
>
consumer. The respective
> consumers do their fetches only on their assigned partitions but the old
> consumer still commits the same last offset it had when it was holding on
> to that partition. At the same time, the added consumer commits the correct
> offsets.
>
> Any inputs on what co
a.tools.JmxTool --object-name
> "kafka.consumer:type=FetchRequestAndResponseMetrics,name=FetchRequestRateAndTimeMs,clientId=ReplicaFetcherThread*,brokerHost=hostname*.
> cluster.com,brokerPort=*" --jmx-url
> service:jmx:rmi:///jndi/rmi://`hostname`:/jmxrmi
>
> There may
u, Mar 31, 2016 at 4:36 PM, craig w wrote:
> >
> > > Including jolokia would be great, I've used for kafka and it worked
> well.
> > > On Mar 31, 2016 6:54 PM, "Christian Posta"
> > > wrote:
> > >
> > > > What if we added some
From what I know of previous discussions encryption at rest can be
handled with transparent disk encryption. When that's sufficient it's
nice and easy.
Christian
On Thu, Apr 21, 2016 at 2:31 PM, Tauzell, Dave
wrote:
> Has there been any discussion or work on at rest encr
re knowledge of the implementation
comment further on what would be required.
Christian
On Mon, May 2, 2016 at 9:41 PM, Bruno Rassaerts
wrote:
> We did try indeed the last scenario you describe as encrypted disks do not
> fulfil our requirements.
> We need to be capable of changing en
6, 2016 at 9:41 AM, Mudit Kumar wrote:
> How can I get the list of all the class names I can run through
> ./kafka-run-class.sh [class-name] command?
>
> Thanks,
> Mudit
1, initiating session
> (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:03,337] INFO Unable to read additional data from
> server sessionid 0x1549b308dd20002, likely server has closed socket,
> closing socket connection and attempting reconnect
> (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:05,121] INFO Opening socket connection to server
> 10.0.0.184/10.0.0.184:2181. Will not attempt to authenticate using SASL
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:05,121] INFO Socket connection established to
> 10.0.0.184/10.0.0.184:2181, initiating session
> (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:05,122] INFO Unable to read additional data from
> server sessionid 0x1549b308dd20002, likely server has closed socket,
> closing socket connection and attempting reconnect
> (org.apache.zookeeper.ClientCnxn)
> > > >
> > > > You can see when the first zookeeper dies and connection is lost ...
> and all the retries by kafka server in order to connect to the new one
> (same IP, same port).
> > > >
> > > > Why the zookeeper server closes the connection (I can see FIN ACK
> frames on Wireshark)
> > > >
> > > > Thanks,
> > > > Paolo.
> > > >
> > > > Paolo Patierno, Senior Software Engineer (IoT) @ Red Hat
> > > > Microsoft MVP on Windows Embedded & IoT, Microsoft Azure Advisor
> > > > Twitter : @ppatierno
> > > > Linkedin : paolopatierno
> > > > Blog : DevExperience
>
>
tps://twitter.com/anas24aj> [image: linkedin]
> > <http://in.linkedin.com/in/anas24aj> [image: googleplus]
> > <https://plus.google.com/u/0/+anasA24aj/>
> > +917736368236
> > anas.2...@gmail.com
> > Bangalore
> >
>
If you're using KafkaConnect, it does it for you!
Basically, you set the sourceRecord's "sourcePartition" and "sourceOffset"
fields (
https://github.com/christian-posta/kafka/blob/8db55618d5d5d5de97feab2bf8da4dc45387a76a/connect/api/src/main/java/org/apache/kafka/co
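What the snippet is pointing at: a source connector tags every record with a sourcePartition/sourceOffset map, and the framework persists the latest offset per partition so a restarted task can resume where it left off. A toy stand-in for that bookkeeping (this is not the Connect API; the class and method names are mine):

```python
class SourceOffsetStore:
    """Toy model of Connect's source-offset bookkeeping: remember the last
    sourceOffset committed for each sourcePartition."""

    def __init__(self):
        self._offsets = {}

    def commit(self, source_partition, source_offset):
        self._offsets[source_partition] = source_offset

    def resume_from(self, source_partition):
        # resume just past the last committed offset; new partitions start at 0
        last = self._offsets.get(source_partition)
        return 0 if last is None else last + 1

store = SourceOffsetStore()
store.commit(("file", "a.log"), 41)  # e.g. last record index read from a file
```

In real Connect the framework does this persistence for you in its offsets topic, which is exactly the "it does it for you" point above.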
> > > > internal services which need to process the event to be "done"
> > > > processing before returning a response.
> > >
> > >
> > >
> > > In theory that's possible - the producer can return the offset of the
> > > message
ther projects, I know that
> without the initial pitch / discussion, it could be difficult to get such
> feature in. I can create a jira in the morning, no electricity again
> tonight :-/
>
> Get Outlook for iOS
>
>
>
>
> On Tue, May 17, 2016 at 4:53 PM -0700, &qu
s? Let's say I want to push 3 messages
> > > > atomically but the producer process crashes after sending only 2
> > > messages,
> > > > is it possible to "rollback" the first 2 messages (e.g. "all or
> > nothing"
> > > > semantics)?
> > > >
> > > > 3) Does it support request/response style semantics or can they be
> > > > simulated? My system's primary interface with the outside world is an
> > > HTTP
> > > > API so it would be nice if I could publish an event and wait for all
> > the
> > > > internal services which need to process the event to be "done"
> > > > processing before returning a response.
> > > >
> > > > PS: I'm a Node.js/Go developer so when possible please avoid Java
> > centric
> > > > terminology.
> > > >
> > > > Thanks!
> > > >
> > > > - Oli
> > > >
> > > > --
> > > > - Oli
> > > >
> > > > Olivier Lalonde
> > > > http://www.syskall.com <-- connect with me!
> > > >
> > >
> >
>
>
>
a – considering that different consumers within the group
> commit to either kafka or Zk ?
>
> Regards
> Sathya,
>
72.17.0.1/172.17.0.1
> mirrormaker, local.general.example, 0, unknown, 0, unknown,
> mirrormaker-0_172.17.0.1/172.17.0.1
>
> On Wed, May 18, 2016 at 2:36 PM Christian Posta >
> wrote:
>
> > Maybe give it a try with the kafka-consumer-groups.sh tool and the
> >
apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka_2.10-0.10.0.0
> > .tgz
> >
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka_2.11-0.10.0.0
> > .tgz
> >
> > A big thank you for the following people who have contributed to the
> > 0.10.0.0 rele
he multi-threaded to one-thread and subscribing multi-topics?... I'm
> just
> > wonder whether a KafkaConsumer can handle that volume of data without
> > performance degradation.
> >
> > Thanks in advance!
> >
> > Best regards
> >
> > KIM
> >
>
Kafka topics for now. So, as Gwen
> > mentioned, Connect would be the way to go to bring the data to a Kafka
> > Topic first.
>
> Got it — thank you!
>
>
> props.put("autooffset.reset", "smallest"); //
> props.put("autocommit.enable", false); KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
> consumer.subscribe(Arrays.asList("say-hello-test1")); final int
&g
Might be worth describing your use case a bit to see if there's another way
to help you?
On Tue, Jun 14, 2016 at 5:29 AM, Mudit Kumar wrote:
> Hey,
>
> How can I delete particular messages from a particular topic? Is that
> possible?
>
> Thanks,
> Mudit
>
>
it in some
> topic till as such time that a service can dequeue, process it and/or
> investigate it.
>
> Thanks.
>
> Best,
> Krish
>
; resources; I can also ensure that it runs on a dedicated machine.
>
> Thanks.
>
> Best,
> Krish
>
Hi all,
I'll first describe a simplified view of relevant parts of our setup (which
should be enough to repro), describe the behavior we're seeing, and then
note some information I've come across after digging in a bit.
We have a kafka stream application, and one of our transform steps keeps a
st
e a good way to efficiently query all of the most
recent data. Note that since the healthcheck punctuator needs to aggregate
on all the recent values, it has to do a *fetchAll(start, end) *which is
how these duplicates are affecting us.
On Fri, Jun 29, 2018 at 7:32 PM, Guozhang Wang wrote:
> Hello Ch
m `true`?
>
> Thanks,
> Damian
>
> On Mon, 2 Jul 2018 at 17:29 Christian Henry
> wrote:
>
> > We're using the latest Kafka (1.1.0). I'd like to note that when we
> > encounter duplicates, the window is the same as well.
> >
> > My original code was a b
Any other ideas here? Should I create a bug?
On Tue, Jul 3, 2018 at 1:21 PM, Christian Henry wrote:
> Nope, we're setting retainDuplicates to false.
>
> On Tue, Jul 3, 2018 at 6:55 AM, Damian Guy wrote:
>
>> Hi,
>>
>> When you create your window store do
gestions we would be very interested in your opinion. We are also
interested in general about experiences in implementing GDPR compliance in
Kafka, especially when dealing with multiple, interconnected systems.
Kind regards,
--
Christian Apolloni
Disclaimer: The contents of this email and a
On 2020/08/19 16:15:40, Nemeth Sandor wrote:
> Hi Christian,
Hi, thanks for your reply.
> depending on how your Kafka topics are configured, you have 2 different
> options:
>
> a) if you have a non-log-compacted topic then you can set the message retention
> on the
e whether our understanding is correct and
whether it's a bug or not.
In general, I think part of the issue is that the system receives the delete
order at the time that it has to be performed: we don't deal with the
processing of the required waiting periods, that's what happe
As alternative solution we also investigated encryption: encrypting all
messages with an individual key and removing the key once the "deletion" needs
to be performed.
Has anyone experience with such a solution?
--
Christian Apolloni
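The per-key encryption idea raised in this thread is often called "crypto-shredding": one key per data subject, and deleting the key makes that subject's messages unrecoverable without rewriting the Kafka log. A toy illustration follows; it is purely illustrative (the SHA-256 XOR keystream below, reused per subject, is not secure; a real system would use AES-GCM with unique nonces from a vetted crypto library, and a KMS rather than an in-memory dict).

```python
import hashlib
import os

class CryptoShredder:
    """Toy sketch of crypto-shredding: one key per data subject; dropping the
    key approximates GDPR erasure while the ciphertext stays in the log."""

    def __init__(self):
        self._keys = {}  # subject -> key (a real system would use a KMS)

    def _keystream(self, key, n):
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt(self, subject, plaintext):
        key = self._keys.setdefault(subject, os.urandom(32))
        return bytes(a ^ b for a, b in zip(plaintext, self._keystream(key, len(plaintext))))

    def decrypt(self, subject, ciphertext):
        if subject not in self._keys:
            raise KeyError("key shredded: data is unrecoverable")
        return self.encrypt(subject, ciphertext)  # XOR keystream is symmetric

    def shred(self, subject):
        # the "deletion": drop the key; the bytes in Kafka remain but are garbage
        self._keys.pop(subject, None)

shredder = CryptoShredder()
token = shredder.encrypt("alice", b"personal data")
```

The operational cost moves from rewriting topics to managing and securely deleting keys, which is the trade-off worth evaluating.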
it-lon19/handling-gdpr-apache-kafka-comply-freaking-out/
That's what sparked our interest in such a solution.
Kind regards,
--
Christian Apolloni
think in worst case we can make this happen by encrypting the messages
but it would be great if we could filter on broker side.
Christian
do we still need a custom
producer partitioner, or is it enough to simply assign to the topic as
described above?
Christian
Am Mi., 8. Dez. 2021 um 11:19 Uhr schrieb Luke Chen :
> Hi Christian,
> Answering your question below:
>
> > Let's assume we just have one topic wi
kafka broker side.
Christian
thread-1) new file processed
Thanks for any input!
Christian
s will work and behave like with
IN_MEMORY StoreType, as it is straightforward to use.
Do you see a chance to get InteractiveQueryV2 work with GlobalKTable?
Kind regards,
Christian
-Original Message-
From: Sophie Blee-Goldman
Sent: Wednesday, November 22, 2023 1:51 AM
To: christ
Hi there,
when trying to install Kafka using the command tar -xzf kafka_2.13-3.7.1.tgz,
my Mac says no and produces the following error message: tar: Error opening
archive: Failed to open 'kafka_2.13-3.7.1.tgz'
Any idea how I can still install Kafka?
Thanks for helping.
Christian
ur company.
Christian
On Apr 11, 2017 11:10, "IT Consultant" <0binarybudd...@gmail.com> wrote:
Thanks for your response .
We aren't allowed to hard-code passwords in any of our programs
On Apr 11, 2017 23:39, "Mar Ian" wrote:
> Since it is a Java property y
described above to address the durability issue for more critical
data were realized?
Many thanks,
--
Christian Schuhegger
r/KafkaProducer.java#L151
for reference)?
Looking around it seems plausible the language in the documentation
might refer to a separate sort of callback that existed in 0.7 but not
0.8. In our use case we have something useful to do if we can detect
messages failing to be sent.
Christian
signa
On 05/01/2014 07:22 PM, Christian Csar wrote:
> I'm looking at using the java producer api for 0.8.1 and I'm slightly
> confused by this passage from section 4.4 of
> https://kafka.apache.org/documentation.html#theproducer
> "Note that as of Kafka 0.8.1 the asy
of an Async callback.
Christian
On 06/23/2014 04:54 PM, Guozhang Wang wrote:
> Hi Kyle,
>
> We have not fully completed the test in production yet for the new
> producer, currently some improvement jiras like KAFKA-1498 are still open.
> Once we have it stabilized in production at
ing a given conversion. That way you will avoid losing
information, particularly if you expect any of your conversion tools to
be of more general use.
Christian
On 08/25/2014 05:36 PM, Gwen Shapira wrote:
> Personally, I like converting data before writing to Kafka, so I can
> easily s
. My callback ends up putting information about the call to
beanstalk into another executor service for later processing.
Christian
On 08/26/2014 12:35 PM, Ryan Persaud wrote:
> Hello,
>
> I'm looking to insert log lines from log files into kafka, but I'm concerned
> with
make aspects of building such a chat system much much
easier (you can avoid writing your own message replication system) but
it is definitely not plug and play using topics for users.
Christian
On 09/05/2014 09:46 AM, Jonathan Weeks wrote:
> +1
>
> Topic Deletion with 0.8.1.1 is extremely
feature be GA or considered stable?
Best regards,
Christian A. Mathiesen
I would like to share my @apachekafka <https://twitter.com/apachekafka>
@Docker <https://twitter.com/Docker> image with all of you! The
documentation is a work in progress!
https://hub.docker.com/r/christiangda/kafka/
Regards,
Christian
tside
kafka itself configuration.
it was not tested on Kubernetes, but I expect to do that soon.
feel free to let me know your feedback on the GitHub repository
Regards,
Christian
El mié., 29 nov. 2017 9:09 PM, Christian F. Gonzalez Di Antonio <
christian...@gmail.com> escribió:
> uhh, so sorry, I forgot it.
>
> Docker Hub: https://hub.docker.com/r/christiangda/kafka/
>
> Github: https://github.com/christiangda/docker-kafka
>
> Regards,
>
&