Yes, thanks Manikumar! I just tested this and it is indeed all in and working
great in 0.11! I thought I would have to wait until 1.0 to be able to use and
recommend this in production.
I published 100 messages:
seq 100 | ./bin/kafka-console-producer.sh --broker-list localhost:9092
--topic
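To verify, the messages can be read back with the console consumer (the topic name "test" is assumed here):
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning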
I guess it's kinda late, since I am already in transit for work.
Is there any plan to do something in Europe e.g. London or some other place?
On 18 Aug 2017 4:41 pm, "Gwen Shapira" wrote:
> Hi,
>
> I figured everyone in this list kinda cares about Kafka, so just making
> sure you all know.
>
> Kafka Summit SF happens in about a week:
Hi,
I figured everyone in this list kinda cares about Kafka, so just making
sure you all know.
Kafka Summit SF happens in about a week:
https://kafka-summit.org/events/kafka-summit-sf/
August 28 in San Francisco. It is not too late to register.
The talks are pretty great (and very relevant to everyone on this list).
I'm interested in knowing if there's any plan or idea to add transactions to
Connect.
We make use of the JDBC source connector and its bulk extract mode. It
would be great if the connector could create a transaction around the
entire extraction, in order to ensure the entire table's data made it into Kafka.
+ dev experts for input.
--Senthil
On Fri, Aug 18, 2017 at 9:15 PM, SenthilKumar K
wrote:
> Hi Users, we are planning to use Kafka for one of our use cases: to collect
> data from different servers and persist it into a message bus.
>
> The flow would be:
> Source --> Kafka --> Streaming Engine --> Reports
Alternatively, you can set a topic-level override for retention.bytes. By
turning down file.delete.delay.ms, the change should take effect almost
immediately after the next log cleanup cycle.
# Apply topic config override
$ kafka-configs --alter --entity-type topics --entity-name test --zookeeper
localhost:32181 --add-config retention.bytes=<bytes>,file.delete.delay.ms=0
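If the override is only needed temporarily, it can be removed again with --delete-config once the data is gone, e.g.:
$ kafka-configs --alter --entity-type topics --entity-name test --zookeeper
localhost:32181 --delete-config retention.bytes,file.delete.delay.ms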
This feature was released in Kafka 0.11.0.0. You can
use the kafka-delete-records.sh script to delete data.
On Sun, Aug 13, 2017 at 11:27 PM, Hans Jespersen wrote:
> This is an area that is being worked on. See KIP-107 for details.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+
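A sketch of its usage, assuming a topic named "test" from which everything before offset 50 in partition 0 should go; the file name is illustrative:
$ cat offsets.json
{"version": 1, "partitions": [{"topic": "test", "partition": 0, "offset": 50}]}
$ ./bin/kafka-delete-records.sh --bootstrap-server localhost:9092 --offset-json-file offsets.json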
Hi Users, we are planning to use Kafka for one of our use cases: to collect
data from different servers and persist it into a message bus.
The flow would be:
Source --> Kafka --> Streaming Engine --> Reports
We'd like to store different types of data in the same topic; at the same
time, the data should be accessed easily
We are also collecting consumer group metrics from Kafka - we didn't want
to add extra unnecessary dependencies (such as Burrow, which is also
overkill for what we need), so we just run a script every minute on the
brokers that parses the output of kafka-consumer-groups.sh and uploads it
to an HTTP endpoint.
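The script in question is presumably a thin wrapper around something like the line below; the group name is illustrative, and the awk column numbers depend on the exact output format of your Kafka version:
$ ./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group | awk 'NR > 1 {print $1, $2, $5}'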
Yes, the Confluent SerDes support nested Avro records. Under the covers
they use Avro classes (DatumReader and DatumWriter) to carry out those
operations. So, as long as you're sending valid Avro data to be produced or
consumed, the Confluent SerDes will handle it just fine.
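For instance, a nested record built with Avro's GenericData API can be handed straight to the Avro serializer; the schema and field names below are made up for illustration:
import org.apache.avro.Schema
import org.apache.avro.generic.GenericData

// Illustrative schema: a User record with a nested Address record.
val schema = new Schema.Parser().parse(
  """{"type": "record", "name": "User", "fields": [
    {"name": "id", "type": "string"},
    {"name": "address", "type": {"type": "record", "name": "Address",
      "fields": [{"name": "city", "type": "string"}]}}]}""")

val address = new GenericData.Record(schema.getField("address").schema())
address.put("city", "Oslo")
val user = new GenericData.Record(schema)
user.put("id", "u1")
user.put("address", address) // DatumWriter walks the nested record recursively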
You're welcome. I'm glad it was helpful. I think it is a good idea to maintain
a schema that can be evolved per topic, and to configure the schema registry with
the type of Avro evolution rules that fits your use case. While it is possible to
have many different incompatible schemas per topic, it's usually better not to.
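If you go that route, the compatibility level can be set per subject through the Schema Registry REST API; the subject name below is illustrative:
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"compatibility": "BACKWARD"}' \
  http://localhost:8081/config/my-topic-value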
Hi,
If the userData value is null then that would usually mean that there
wasn't a record with the provided key in the global table. So you should
probably check if you have the expected data in the global table and also
check that your KeyMapper is returning the correct key.
Thanks,
Damian
On
Hi everyone,
When using a left join, I can't get the value of the GlobalKTable record in
the ValueJoiner parameter (3rd parameter). Here is my code:
val userTable: GlobalKTable[String, UserData] =
builder.globalTable(Serdes.String(), userDataSerde, userTopic, userDataStore)
val jvnStream: KStream[String, JVNModel] = sourceStream.leftJoin(userTable,
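For reference, the general shape of a stream-to-GlobalKTable left join is sketched below; the types are simplified stand-ins, not the real model above:
import org.apache.kafka.streams.kstream.{GlobalKTable, KStream, KeyValueMapper, ValueJoiner}

def withUser(events: KStream[String, String],
             users: GlobalKTable[String, String]): KStream[String, String] =
  events.leftJoin(
    users,
    // KeyValueMapper: derives the global-table lookup key from each record
    new KeyValueMapper[String, String, String] {
      override def apply(key: String, value: String): String = key
    },
    // ValueJoiner: on a left join, the table-side value is null when there is no match
    new ValueJoiner[String, String, String] {
      override def apply(event: String, user: String): String =
        if (user == null) s"$event|<missing>" else s"$event|$user"
    })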
Hi,
I have a question on the Kafka transactional.id config, related to the atomic
writes feature of Kafka 0.11. If I have multiple producers across different
JVMs, do I need to set transactional.id differently for each JVM? Does
transactional.id control the beginning and ending of transactions?
If it's not set uniquely
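For context, a minimal sketch of a transactional producer; deriving transactional.id from an instance id, as done here, is just one possible convention:
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
// Must be unique per producer instance; a second producer reusing the same id
// fences off the first.
props.put("transactional.id", "my-app-" + sys.env.getOrElse("INSTANCE_ID", "0"))

val producer = new KafkaProducer[String, String](props)
producer.initTransactions()
producer.beginTransaction() // transactions are delimited explicitly here...
producer.send(new ProducerRecord[String, String]("out-topic", "key", "value"))
producer.commitTransaction() // ...not by transactional.id itself
producer.close()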
OK, I got it, thank you Damian, Eno.
On Fri, Aug 18, 2017 at 4:30 PM, Damian Guy wrote:
> Duy, if it is in your logic then you need to handle the exception yourself.
> If you don't then it will bubble out and kill the thread.
>
> On Fri, 18 Aug 2017 at 10:27 Duy Truong
> wrote:
>
> > Hi Eno,
> >
Duy, if it is in your logic then you need to handle the exception yourself.
If you don't then it will bubble out and kill the thread.
On Fri, 18 Aug 2017 at 10:27 Duy Truong wrote:
> Hi Eno,
>
> Sorry for the late reply, it's not a deserialization exception, it's a pattern
> matching exception in my logic.
Hi Eno,
Sorry for the late reply, it's not a deserialization exception, it's a pattern
matching exception in my logic.
val jvnStream: KStream[String, JVNModel] = sourceStream.leftJoin(userTable,
(eventId: String, datatup: (DataLog, Option[CrawlData])) => {
datatup._1.rawData.userId
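One way to keep that from killing the stream thread is to catch the failure inside the record-level logic; DataLog and CrawlData are stubbed out below to make the sketch self-contained:
import scala.util.Try

// Stand-ins for the real types in the code above.
case class RawData(userId: String)
case class DataLog(rawData: RawData)
case class CrawlData()

def safeUserId(datatup: (DataLog, Option[CrawlData])): String =
  // Catch the failure here instead of letting it bubble out of the join.
  Try(datatup._1.rawData.userId).getOrElse("unknown")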
Broker is 100% running. ZK path shows /brokers/ids/1
On Fri, Aug 18, 2017 at 1:02 AM, Yang Cui wrote:
> Please use a ZK client to check the path /brokers/ids in ZK.
>
> Sent from my iPhone
>
> > On 18 Aug 2017, at 3:14 PM, Raghav wrote:
> >
> > Hi
> >
> > I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1.
Please use a ZK client to check the path /brokers/ids in ZK.
Sent from my iPhone
> On 18 Aug 2017, at 3:14 PM, Raghav wrote:
>
> Hi
>
> I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1.
> I am trying to create a topic using the command below:
>
> "bin/kafka-topics.sh --create --zookeeper local
Your broker is not running.
Sent from my iPhone
> On 18 Aug 2017, at 3:14 PM, Raghav wrote:
>
> Hi
>
> I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1.
> I am trying to create a topic using the command below:
>
> "bin/kafka-topics.sh --create --zookeeper localhost:2181
> --replication-facto
Hi
I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1.
I am trying to create a topic using the command below:
"bin/kafka-topics.sh --create --zookeeper localhost:2181
--replication-factor 1 --partitions 16 --topic topicTest04"
It fails with the error below. Just wondering why.
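Before retrying, it may be worth confirming that the connection to ZooKeeper works at all, for example by listing existing topics (default port assumed):
$ bin/kafka-topics.sh --list --zookeeper localhost:2181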
Hello,
Could you tell me if Burrow or Remora is compatible with SSL Kafka clusters?
Gabriel.
2017-08-16 15:39 GMT+02:00 Gabriel Machado:
> Hi Jens and Ian,
>
> Very useful projects :).
> What's the difference between the two tools?
> Do they support Kafka SSL clusters?
>
> Thanks,
> Gab