Hi,
I’m trying to test the new exactly-once transactions feature with a simple
test like:
/opt/kafka/bin/kafka-console-producer.sh --request-required-acks "all"
--producer-property "transactional.id=777"
--producer-property "enable.idempotence=true" --broker-list broker1:9092
--topic bla
Fai
Hello,
I recently integrated my Kafka broker with SSL authentication. I am now
able to use SSL to authenticate the producer and consumer to the broker,
and they can communicate as intended. Additionally, I implemented an
access control list to limit access to the topics on the broker. However,
Hi,
What are the ACLs for the topics and groups concerned?
BTW, my kafka-console-consumer.sh supports --offset; were you using some
other command?
Cheers,
Tom
On 2 August 2017 at 15:31, Alisa Bohac wrote:
> Hello,
>
> I recently integrated my Kafka broker with SSL authentication. I am now
>
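If it helps, the stock tooling can list the ACLs in effect. A sketch,
assuming the default ZooKeeper-backed authorizer (the host, topic, and
group names here are placeholders):

/opt/kafka/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 --list --topic my-topic
/opt/kafka/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 --list --group my-group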
Hi,
Regarding the --offset flag: it works only with the new consumer, not the
old one that connects to ZooKeeper (using the --zookeeper option instead
of --bootstrap-server).
At the same time, the --partition option must be used together with --offset.
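For example, something like this (broker address, topic, partition, and
offset are placeholders):

/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server broker1:9092 --topic my-topic --partition 0 --offset 12345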
What is the error you have on the command line tryin
I installed the Confluent Connect package on DC/OS. It started a worker on
the cluster without any problem. I am using version 3.2.2 (compatible with
0.10.2.1).
Now I have a Kafka Streams application that generates Avro into a Kafka
topic named avro-topic .. and I configure the HdfsSinkConnector
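For context, registering that connector against the Connect REST API
usually looks roughly like the sketch below; the worker host, hdfs.url,
and flush.size are placeholder values, not a verified configuration:

curl -X POST -H "Content-Type: application/json" http://connect-worker:8083/connectors -d '{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "avro-topic",
    "hdfs.url": "hdfs://namenode:8020",
    "flush.size": "3"
  }
}'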
Faced a similar issue in Kafka 0.10.0.1. Going through the Kafka code, I
figured out that when the coordinator goes down, another in-sync replica
scans the whole log of the __consumer_offsets partition for my consumer
group to rebuild the offsets cache. In my case its size was around ~600 GB,
which took around ~40 minutes.
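(A hedged aside: an offsets partition that large usually suggests log
compaction was not running on __consumer_offsets. The server.properties
values below are illustrative, assuming that was the cause; not a
confirmed fix.)

# make sure the log cleaner runs so __consumer_offsets stays compacted
log.cleaner.enable=true
# batch size used when reloading offsets into the cache after failover
offsets.load.buffer.size=5242880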
Hi Marcin,
The console producer hasn't been updated to invoke the appropriate methods
if transactions are enabled. It also requires some thought about how it
should work: would there be a way to start and commit the transaction via
the console, or would the console producer do it periodically? Wh
That sounds like either a protobuf dependency compatibility issue between
what is on the classpath of Kafka Connect and the Hadoop cluster you are
trying to write to (e.g. you're on a newer version of protobuf than your
cluster, or vice versa), or a wire incompatibility of the communication
protocol.
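A quick way to compare what each side actually ships; a sketch, assuming
a standard Hadoop client install (the Connect install path is a
placeholder):

find /opt/confluent/share/java -name 'protobuf-java-*.jar'
hadoop classpath | tr ':' '\n' | grep -i protobuf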
Thanks for the heads up. In fact I am using both HDFS and Kafka Connect
from the Mesosphere repository, hence I expected some compatibility .. will
take a look to see if they include different versions of protobuf.
Regards.
On Thu, Aug 3, 2017 at 7:24 PM, Stephen Durfey wrote:
> That sounds like eith
Hey, we are trying to activate the JDBC sink connector.
I'm trying to connect one topic from Kafka to Redshift,
but I keep getting the following error:
ERROR Task test-redshift-sink-0 threw an uncaught and unrecoverable exception
(org.apache.kafka.connect.runtime.WorkerSinkTask:449)
org.apache.k
Looking around some more, it could be a misconfiguration between the
running Hadoop cluster and what you're trying to point to. So, double-check
the IP address and port number for the Hadoop cluster (in the Hadoop
cluster config), and make sure the port is open, reachable, and listening.
If it's all runnin
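A couple of quick checks from the Connect worker host; a sketch with
placeholder host and port:

# is the NameNode port reachable at all?
nc -zv namenode 8020
# can the HDFS client actually talk to it?
hdfs dfs -fs hdfs://namenode:8020 -ls /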
Yes, looks like the hdfs.url value that I gave is not the proper one. But I
cannot find how to get the appropriate URL to supply here. If I ask dcos
hdfs for the core-site.xml I get the following ..
$ dcos hdfs endpoints core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://hdfs</value>
</property>
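If that output is right, the connector's hdfs.url presumably just needs
to match fs.default.name. A guess based on the output above, not a
verified setup (hadoop.conf.dir is only needed if the connector must load
the HA client config, and the path is a placeholder):

hdfs.url=hdfs://hdfs
hadoop.conf.dir=/etc/hadoop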
(Note mixed public/private lists)
Yes, from your description your use of the Apache Kafka logo sounds
fine, as long as you are otherwise complying with the ASF trademark
policy. In particular, we have a closely related FAQ:
https://www.apache.org/foundation/marks/faq/#integrateswith
Using an
Hey all.
I'm trying to activate the JDBC sink connector,
and I keep getting the following error:
ERROR Task test-redshift-sink-0 threw an uncaught and unrecoverable exception
(org.apache.kafka.connect.runtime.WorkerSinkTask:449)
org.apache.kafka.connect.errors.ConnectException: No fields found us
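For reference, a minimal JDBC sink registration looks roughly like this
sketch; the names and connection URL are placeholders, and it assumes the
Redshift JDBC driver is on the worker classpath. The "No fields found"
error typically means the record value reaching the connector carries no
schema fields, so the converter settings are worth checking too:

curl -X POST -H "Content-Type: application/json" http://connect-worker:8083/connectors -d '{
  "name": "test-redshift-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "my-topic",
    "connection.url": "jdbc:redshift://redshift-host:5439/mydb?user=x&password=y",
    "auto.create": "true",
    "pk.mode": "none"
  }
}'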
Ismael raises good questions about what transactions would mean for the
console producer.
However, the kafka-producer-perf-test script does support transactions. It
lets you generate transactions of a certain duration (like 50 ms or
100 ms): it produces messages of a specified size and commits them
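A sketch of such a run, with placeholder topic, record counts, and
bootstrap address:

/opt/kafka/bin/kafka-producer-perf-test.sh --topic my-topic --num-records 100000 --record-size 100 --throughput -1 --producer-props bootstrap.servers=broker1:9092 --transactional-id perf-txn-1 --transaction-duration-ms 100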
Hi,
I observed that it took 2-6 milliseconds for a message to be received by a
Kafka consumer from a Kafka producer, and I wonder what I might be
missing or might have got wrong in configuring Kafka for low latency
(targeting < 100 microseconds). I did the following:
1. On the broker, I tried to pre
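(For what it's worth, the client-side settings usually involved in
latency tuning are the ones below; illustrative values, not a
recommendation:)

# producer
linger.ms=0
acks=1
compression.type=none
# consumer
fetch.min.bytes=1
fetch.max.wait.ms=0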
Hi Gaurav, those results are definitely inconsistent with the benchmarking
we did. Can you see if this reproduces with the 0.10.0 message format
running on a 0.11.0.0 broker?
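One way to pin the older format per topic, as a sketch (ZooKeeper address
and topic name are placeholders):

/opt/kafka/bin/kafka-configs.sh --zookeeper zk1:2181 --entity-type topics --entity-name my-topic --alter --add-config message.format.version=0.10.0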
On Wed, Aug 2, 2017 at 4:52 AM, Gaurav Abbi wrote:
> Hi Apurva,
> For the ProduceRequest,
>
>- The increase is from 4
Hello,
I have a 0.11.0.0 Kafka cluster running with SSL and SASL SCRAM enabled,
connected to a ZooKeeper 3.5.3-beta ensemble with SASL-Plain and SSL
enabled.
Kafka is connecting correctly to ZooKeeper (I replaced the default
ZooKeeper library with the corresponding cluster ZooKeeper library and
added the Netty library as
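For anyone following along, SCRAM credentials in 0.11 are created through
kafka-configs.sh; a sketch with placeholder user, password, and ZooKeeper
address:

/opt/kafka/bin/kafka-configs.sh --zookeeper zk1:2181 --alter --add-config 'SCRAM-SHA-256=[password=secret]' --entity-type users --entity-name alice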