I agree with what Matthias has said w.r.t failing fast. There are plenty of
times when you don't want to fail-fast and must attempt to make progress.
The dead-letter queue is exactly for these circumstances. Of course if
every record is failing, then you probably do want to give up.
On Fri, 2 Jun
Hi
1.
That greatly complicates monitoring. Fail Fast gives you that: when you
monitor only the lag of all your apps,
you are completely covered. With that sort of new application, monitoring
is very much more complicated, as
you now need to monitor the fail % of some special apps as well. In my
opini
Jan, you have a choice to Fail fast if you want. This is about giving
people options and there are times when you don't want to fail fast.
On Fri, 2 Jun 2017 at 11:00 Jan Filipiak wrote:
> Hi
>
> 1.
> That greatly complicates monitoring. Fail Fast gives you that when you
> monitor only the lag
IMHO you're doing it wrong then. Plus, building too much support into the Kafka
ecosystem is very counterproductive in fostering a happy user base
On 02.06.2017 13:15, Damian Guy wrote:
Jan, you have a choice to Fail fast if you want. This is about giving
people options and there are times when you d
So, I fixed the problem by doing a rolling restart, and after some checks
it seems there was no data loss.
On 1 June 2017 at 17:57, Del Barrio, Alberto <
alberto.delbar...@360dialog.com> wrote:
> I might give it a try tomorrow. The reason for having so large init and
> sync limit times is because in th
Hi.
Is there any way to get the data into a Kafka topic in JSON format?
The source that I ingest the data from has the data in JSON format;
however, when I look at the data in the Kafka topic, schema and payload
fields are added and the data is not in JSON format.
I want to avoid implementing a tra
Hi All,
I have been seeing the below error for the last three days.
Can you please help me understand more about it?
WARN Failed to send SSL Close message
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method
Hi,
I would like to add that I use kafka-connect and schema-registry version `
3.2.1-6`.
Best regards,
Mina
On Fri, Jun 2, 2017 at 10:59 AM, Mina Aslani wrote:
> Hi.
>
> Is there any way that I get the data into a Kafka topic in Json format?
> The source that I ingest the data from have the d
Check which serializer you have configured in your producer. You are probably
using an Avro serializer, which will add the schema and convert the payload to
Avro data. You can use a String serializer or a ByteArray serializer and the
data will either be Base64 encoded or not encoded at all.
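For example, a minimal producer using plain String serializers would look
roughly like this (broker address, topic name and the sample record are
placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class JsonStringProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // the JSON string is written to the topic verbatim, without any
            // schema/payload envelope
            producer.send(new ProducerRecord<>("my-json-topic",
                "{\"id\": 1, \"name\": \"test\"}"));
        }
    }
}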
-han
My earlier comment still applies but in Kafka Connect the equivalent of a
serializer/deserializer (serdes) is called a “converter”.
Check which converter you have configured for your source connector and whether
it is overriding whatever default converter is configured for the Connect
worker it
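For reference, the worker-level defaults usually look something like the
sketch below; the JsonConverter here is only an assumption, and any
per-connector settings override these:

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# with schemas.enable=false the JSON is written as-is, without the
# schema/payload envelope you are seeing
key.converter.schemas.enable=false
value.converter.schemas.enable=false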
Jan, I agree with you philosophically. I think one practical challenge has
to do with data formats. Many people use untyped events, so there is simply
no guarantee on the form of the input. E.g., many companies use JSON without
any kind of schema, so it becomes very hard to assert anything about the
Hi Hans,
Thank you for your quick response, appreciate it.
In the *kafka-connect* docker container, I see the below settings in the
*kafka-connect.properties* file in the *kafka-connect* directory:
key.converter.schemas.enable=false
key.converter.schema.registry.url=http://kafka-schema-registry:
value.converter.schema.re
Hi Kafka people,
I'm running an application that pushes database changes into a Kafka topic.
I'm also running a Kafka Streams application
that listens to these topics, groups them using the high-level API, and
inserts them into another database.
All topics are compacted, with the exception of t
I have a KTable and backing store whose partitioning is value dependent.
I want certain groups of messages to be ordered and that grouping is determined
by one field (D) of the (possibly large) value.
When I look up by K only, I obviously don't know the partition it should be on.
So I will build
I don't know whether LDAP can be integrated directly with the Kafka client and
server, but Kafka can use Kerberos-type authentication, and Kerberos can
integrate with AD for the user store and for authenticating users.
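As a rough sketch, the client side of a Kerberos (SASL/GSSAPI) setup is
mostly configuration; the principal, keytab path and service name below are
placeholders:

security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    keyTab="/etc/security/keytabs/client.keytab" \
    principal="kafka-client@EXAMPLE.COM";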
Hope this is helpful
Nixon
On Fri, Jun 2, 2017 at 1:58 AM, Arunkumar
wrote:
>
> Hi
Hi,
What are the pros and cons of setting up ZooKeeper on the same server as
the Kafka broker? Earlier, offsets were written to ZooKeeper, which was a
major overhead, but with offsets now being written to Kafka, what other
requirements should be taken into consideration when setting up
ZooKeeper
Hi Frank,
yes, retention policy is based on the embedded record timestamps and not
on system time. Thus, if you send messages with an old timestamp, they
can trigger log/segment rolling.
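The per-topic configs involved are sketched below; the values are just
placeholders:

# or LogAppendTime to make the broker stamp records on append
message.timestamp.type=CreateTime
# retention is evaluated against record timestamps, not wall-clock arrival
retention.ms=604800000
# upper bound on how long a log segment stays open before it is rolled
segment.ms=604800000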
>> I see that the repartition topics have timestamp.type = CreateTime, does
>> that mean it uses the timestamp
Hi everyone, another question for the list :)
I'm creating a cluster of KTables (and GlobalKTables) based off the same
input stream K,V.
It has a number of secondary indices (think of an RDBMS):
K1 -> K
K2 -> K
etc
These are all based on trivial mappings from my main stream that also
feeds the
Usually, the overhead comes when you have Kafka and ZooKeeper doing their
housekeeping (i.e. disk I/O) on the same server. ZK even suggests that you
should keep its logs on a totally different physical machine for better
performance. Furthermore, if a mechanical failure occurs, you might not
want both
I am not sure if I understand the use case correctly. Could you give
some more context?
> backing store whose partitioning is value dependent
I infer that you are using a custom store and not the default RocksDB? If
yes, what do you use? What does "value dependent" mean in this context?
Right now,
Hi,
If you want to populate a GlobalKTable, you can only do this by reading a
topic. So the short answer to your headline is: no, you cannot suppress
the intermediate topic.
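A minimal sketch of that pattern (using the newer StreamsBuilder API; topic
names and the key extraction are placeholders):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();

// main input stream, keyed by the primary key K
KStream<String, String> main = builder.stream("main-topic");

// re-key by the secondary field (the extraction here is only a placeholder)
// and write the index to its own topic ...
main.selectKey((k, v) -> v.split(",")[0])
    .to("secondary-index-topic");

// ... which is then the source topic for the GlobalKTable
GlobalKTable<String, String> index = builder.globalTable("secondary-index-topic");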
However, I am wondering what the purpose of your secondary index is, and
why you are using a GlobalKTable for it. Maybe you can
> On Jun 2, 2017, at 2:11 PM, Matthias J. Sax wrote:
>
> I am not sure if I understand the use case correctly. Could you give
> some more context?
Happily, thanks for thinking about this!
>
>> backing store whose partitioning is value dependent
>
> I infer that you are using a custom store
> On Jun 2, 2017, at 2:21 PM, Matthias J. Sax wrote:
>
> Hi,
>
> If you want to populate a GlobalKTable, you can only do this by reading a
> topic. So the short answer to your headline is: no, you cannot suppress
> the intermediate topic.
Bummer! Maybe this is an opt-in optimization to consider
Thanks. That helps to understand the use case better.
Let me rephrase to make sure I understood it correctly:
1) you are providing a custom partitioner to Streams that is based on one
field in your value (that's fine with regard to fault-tolerance :)); a
sketch of such a partitioner is below
2) you want to use interactive queries to query the s
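Just to make sure I read 1) the same way you do, a sketch of such a
partitioner (the field extraction is a placeholder, and the exact
StreamPartitioner signature differs slightly between Streams versions):

import org.apache.kafka.streams.processor.StreamPartitioner;

// routes records by a field (D) of the value instead of by the key K
public class FieldDPartitioner implements StreamPartitioner<String, String> {
    @Override
    public Integer partition(String topic, String key, String value,
                             int numPartitions) {
        // placeholder: derive the grouping field D from the value
        String d = value.split(",")[0];
        return (d.hashCode() & 0x7fffffff) % numPartitions;
    }
}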
Makes sense now (considering the explanation from the other thread).
With regard to an "opt-in" optimization: we could simplify the API and
hide some details, but it wouldn't buy you anything from an execution
point of view. As you need all data on each instance, you need to
somehow "broadcast" th
You're entirely right. I'd forgotten that each instance would only read
a subset of the main topic. Should have figured that out myself. Thanks
for the sanity check! :)
> On Jun 2, 2017, at 3:41 PM, Matthias J. Sax wrote:
>
> Makes sense now (considering the explanation from the other thread)
>
> On Jun 2, 2017, at 3:32 PM, Matthias J. Sax wrote:
>
> Thanks. That helps to understand the use case better.
>
> Rephrase to make sure I understood it correctly:
>
> 1) you are providing a custom partitioner to Streams that is based on one
> field in your value (that's fine with regard to f
You have shared the Kafka Connect worker properties but not the source
connector config.
Which source connector are you using? Does it override the default settings you
provided?
Are you running the connector in standalone mode or distributed mode?
Also what are you using to consume the messages and see
Hello All,
I am getting a "Connector configuration is invalid" error with the following
configuration. Can someone please help me find out what I am doing
wrong here?
name=topic_see
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:db2://*.*.*.com:6/
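(For comparison, a JDBC source config that passes validation usually also
needs at least a mode and a topic.prefix; the sketch below uses placeholder
values. It is also worth checking that the DB2 JDBC driver jar is on the
Connect worker's classpath.)

name=db2-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:db2://<host>:<port>/<database>
connection.user=<user>
connection.password=<password>
mode=incrementing
incrementing.column.name=ID
topic.prefix=db2-
table.whitelist=MY_TABLE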