> listeners=PLAINTEXT://:9092
> listeners=PLAINTEXT://98.1.96.147:9092
>
> Link to article:
> Kafka Listeners - Explained
>
> How to connect clients to Kafka hosted
about the environment but would appreciate
any pointers in the meantime.
-Val
On Tue, Apr 28, 2020 at 12:07 AM Suresh Chidambaram wrote:
> Hi Val,
>
> Could you share the server.properties and zookeeper.properties?
>
> Thanks
> C Suresh
>
>
> On Tuesday, Apr
Greetings to the Kafka Community!
I'm new to Kafka and only recently went beyond the local installation
described in the Quickstart. I have run into a weird issue that I can't
explain.
I want to deploy across two machines:
- Machine #1 runs ZooKeeper and a single Kafka broker. I use default
configura
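A minimal sketch of the listener settings such a two-machine setup usually
needs; the hostname kafka1.example.com is a placeholder, not taken from the
original message:

# server.properties on Machine #1 (runs ZooKeeper and the single broker)
broker.id=0
zookeeper.connect=localhost:2181
# bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# advertised.listeners is what clients on Machine #2 are told to connect to,
# so it must be resolvable and reachable from that machine
advertised.listeners=PLAINTEXT://kafka1.example.com:9092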
>
> On 10.12.2019 at 06:36, Valentin wrote:
>
> Hi Chao,
>
> I suppose you would like to know, within a consumer group, which message
> is coming from which partition, since partitions correspond to brokers and
> a broker maps to an IP, right?
>
> Well, if you really wa
called context()
However, keep in mind that brokers and IP addresses can change. I would not
recommend building business logic on top of IP addresses.
Within a producer, you can control the destination partition / broker.
Regards
Valentin
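A sketch of how a plain Java consumer can see each record's partition and
look up that partition's current leader broker; the topic and the
pre-configured consumer are placeholders:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

// consumer is an already-configured, subscribed KafkaConsumer<String, String>
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
for (ConsumerRecord<String, String> record : records) {
    for (PartitionInfo info : consumer.partitionsFor(record.topic())) {
        if (info.partition() == record.partition()) {
            // leader() is the broker currently hosting the partition;
            // it changes on failover, so do not persist this mapping
            System.out.printf("partition %d -> leader %s:%d%n",
                record.partition(), info.leader().host(), info.leader().port());
        }
    }
}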
--
> On 10.12.2019 at 04:48, 朱超 wrote:
>
alse",
> "connection.url": "jdbc:oracle:thin:@X:16767/XXX",
> "key.converter.schema.registry.url": "http://localhost:8081";,
> "pk.mode": "record_value",
> "pk.fields": "ID"
>
>
> when I use "insert.mode": "insert" everything works fine, but when I switch
> to "upsert" I am only able to insert binary files (BLOBs) up to 32767 bytes.
> Otherwise I am getting:
> SQLException: ORA-01461: can bind a LONG value only for insert into a LONG
> column
>
> Do you have an idea how to solve this issue?
Many thx in advance
Valentin
Hi Karishma,
you can definitely use Avro without the Confluent Schema Registry. Just write
your own serializer/deserializer. However, you need to share the Avro schema
version between your producer and consumer somehow, and also think about how
you will handle changes to your Avro schema.
Greets
Valentin
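A minimal sketch of such a registry-free pair, assuming the Avro schema is
shared between producer and consumer out of band; class names are
illustrative:

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// Serializes Avro records with a schema both sides already know.
public class SharedSchemaAvroSerializer implements Serializer<GenericRecord> {
    private final Schema schema;
    public SharedSchemaAvroSerializer(Schema schema) { this.schema = schema; }

    @Override
    public byte[] serialize(String topic, GenericRecord record) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
            encoder.flush();
            return out.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException("Avro serialization failed", e);
        }
    }
}

// The matching deserializer must be built with the same schema.
class SharedSchemaAvroDeserializer implements Deserializer<GenericRecord> {
    private final Schema schema;
    SharedSchemaAvroDeserializer(Schema schema) { this.schema = schema; }

    @Override
    public GenericRecord deserialize(String topic, byte[] data) {
        try {
            BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(data, null);
            return new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
        } catch (Exception e) {
            throw new RuntimeException("Avro deserialization failed", e);
        }
    }
}

Schema evolution is the main catch: without a registry you need your own
convention, for example a version byte prefixed to each payload.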
> On 06.10.2017 at 18:30, Ted Yu wrote:
>
> I assume you have read
> https://github.com/facebook/rocksdb/wiki/Building-on-Windows
>
> Please also see https://github.com/facebook/rocksdb/issues/2531
>
> BTW your question should be directed to rocksdb forum.
ently I am getting the following error:
UnsatisfiedLinkError … Can’t find dependency library. Trying to find
rocksdbjni,…dll
Thanks in advance
Valentin
Hi Sean,
Thanks a lot for this info!
Are you running DC/OS in prod?
Regards
Valentin
> On 03.10.2017 at 15:29, Sean Glover wrote:
>
> Hi Valentin,
>
> Kafka is available on DC/OS in the Catalog (aka Universe) as part of the
> `kafka` package. Mesosphere has put a
Hi Avinash,
Thanks for this hint.
It would be great if someone could share their experience of running this
framework in a production environment.
Thanks in advance
Valentin
> On 02.10.2017 at 19:39, Avinash Shahdadpuri wrote:
>
> There is a native Kafka framework which runs on
have a different DC/OS
cluster
3. Kafka cluster on its own
Cheers,
Valentin
> On 02.10.2017 at 16:35, David Garcia wrote:
>
> I’m not sure how your requirements for Kafka are related to your requirements
> for Marathon. Kafka is a streaming-log system and Marathon is a schedul
dedicated DC/OS instance for our Kafka cluster? Or a Kafka cluster
on its own?
- Is there anything else we should consider when using Kafka on DC/OS
+ Marathon?
Thanks in advance for your time.
Valentin
0.10.1.1
not considered stable yet? I'm not sure about using it... Maybe downgrade
would work?
Re: restarting the faulty broker. As I understand, to avoid losing data,
I'd have to shut down other parts of the cluster first, right?
-Valentin
On Thu, Dec 22, 2016 at 9:01 PM, Jan Omar
Hello,
I have a three-broker Kafka setup (broker ids 1 and 2 on Kafka 0.10.1.0, and
1001 on Kafka 0.10.0.0). After a failure of two of them, a lot of the
partitions have the third one (1001) as their leader. It looks like this:
Topic: userevents0.open  Partition: 5  Leader: 1  Replicas: 1,2,1001
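For reference, the kind of commands that produce output like the above and
list only the problem partitions; the ZooKeeper address is a placeholder:

# describe one topic
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic userevents0.open
# list partitions whose ISR has shrunk below the replica count
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions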
http://kafka.apache.org/documentation.html#java
2016-01-28 16:58 GMT+01:00 Muresanu A.V. (Andrei Valentin) <
andrei.mures...@ing.ro>:
> Hi all,
>
> what is the oracle jdk version that is "supported" b
Hi all,
what is the Oracle JDK version that is "supported" by Kafka 0.9.0?
6/7/8...
uot;partition": 1, "replicas": [2,4,6] },
{ "topic": "T5", "partition": 2, "replicas": [1,3,5] },
{ "topic": "T5", "partition": 3, "replicas": [2,4,6] },
{ "topic": "T5", "parti
Valentin
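For context, a sketch of the complete file shape this fragment comes from,
and the classic tool that consumes it; the file name and ZooKeeper address
are placeholders:

{ "version": 1,
  "partitions": [
    { "topic": "T5", "partition": 1, "replicas": [2,4,6] },
    { "topic": "T5", "partition": 2, "replicas": [1,3,5] }
  ]
}

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --execute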
-rw-r--r-- 1 kafka kafka 0 Apr 6 10:56 0266.log
To fix the issue we had to change unclean.leader.election.enable from
false to true and restart all 3 brokers.
Is that really the intended approach in such a scenario?
Greetings
Valentin
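The change described above, as it would look in each broker's
server.properties (a sketch; it is a broker-level setting, hence the
restart of all three brokers):

# allow an out-of-sync replica to become leader; this trades durability for
# availability, so consider reverting it once the cluster has recovered
unclean.leader.election.enable=true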
level and TLS
for transport encryption.
Offset tracking is done by the connected systems, not by the
broker/zookeeper/REST API.
We expect a high message volume in the future here, so performance would
be a key concern.
Greetings
Valentin
Hi Jun,
ok, I created:
https://issues.apache.org/jira/browse/KAFKA-1655
Greetings
Valentin
On Sat, 27 Sep 2014 08:31:01 -0700, Jun Rao wrote:
> Valentin,
>
> That's a good point. We didn't have this use case in mind when designing the
> new consumer API. A straightforwar
well as my current
SimpleConsumer approach :|
Or am I missing something here?
Greetings
Valentin
On Wed, 24 Sep 2014 17:44:15 -0700, Jun Rao wrote:
> Valentin,
>
> As Guozhang mentioned, to use the new consumer in the SimpleConsumer way,
> you would subscribe to a set of topic partiti
ionale is for deprecating the
SimpleConsumer API if there are use cases which just work much better
using it.
Greetings
Valentin
On 23/09/14 18:16, Guozhang Wang wrote:
> Hello,
>
> For your use case, with the new consumer you can still create a new
> consumer instance for each topi
ood
approach.
All in all, the planned design of the new consumer API just doesn't seem
to fit my use case well, which is why I am a bit anxious about the
SimpleConsumer API being deprecated.
Or am I missing something here? Thanks!
Greetings
Valentin
> On Mon, Sep 22, 2014 at 8:10 AM,
eConsumer.
Greetings
Valentin
> 2) I am not very familiar with the HTTP wrapper on the clients; could
> someone who has done so comment here?
>
> 3) The new KafkaConsumer has not been fully implemented, but you can take a
> look at its JavaDoc for examples of using the new client.
>
onsumer to switch topics/partitions to the
currently needed set or would this be a very expensive operation (i.e.
because it would fetch metadata from Kafka to identify the leader for each
partition)?
Greetings
Valentin
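For readers finding this thread later: a sketch of how this ended up looking
in the released consumer API (assuming a modern Kafka client; topic names and
the pre-configured consumer are placeholders). assign() replaces the previous
assignment, and leader metadata for the new partitions is fetched as needed
on the next poll:

import java.time.Duration;
import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// consumer is an already-configured KafkaConsumer<String, String>
consumer.assign(Arrays.asList(new TopicPartition("topicA", 0)));
consumer.poll(Duration.ofMillis(100));

// Switch to the currently needed set: this replaces the old assignment
consumer.assign(Arrays.asList(new TopicPartition("topicB", 2),
                              new TopicPartition("topicB", 3)));
consumer.poll(Duration.ofMillis(100));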