Hi Shikha,
I think you should ask in the Spark community.
Thanks
Luke
On Tue, Nov 15, 2022 at 3:17 AM shikha sharma
wrote:
> Hello,
>
> I am trying to connect to Kafka using this command:
> orderRawData = spark.readStream \
> .format("kafka") \
> .option("kafka.bootstrap.servers", "18.211.
Hello,
I am trying to connect to Kafka using this command:
orderRawData = spark.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "18.211.252.152:9092") \
.option("startingOffsets","earliest") \
.option("failOnDataLoss", "false") \
.option("subscribe", "real-ti
Does the container system used in your Rancher environment have persistence
configured for the brokers? Are you using ephemeral storage?
On Thu, Jun 28, 2018 at 7:39 AM Karthick Kumar wrote:
> Hi,
>
> I'm running a Kafka cluster on three different servers. Recently my servers
> went down when I
Hi,
I'm running a Kafka cluster on three different servers. Recently my servers
went down; when I start the server and then start the services, one of the
ZooKeeper nodes is not connected to the Kafka cluster, and it stayed that way for two
days...
So I killed the stack in Rancher and then started the new one all
Hi guys,
Has anyone ever run into the following issue, or can you give me a suggestion for
addressing it? Thanks.
2018-02-07 18:59:59,783 [myid:9] - INFO [NIOServerCxn.Factory:0.0.0.0/
0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /
10.92.74.216:27897 (no session established for cli
There are no errors in the broker logs.
The Kafka cluster itself is functional. I have other producers and
consumers working that are in the public subnet (same as the Kafka cluster).
On Wed, Jun 29, 2016 at 7:15 PM, Kamesh Kompella wrote:
> For what it's worth, I used to get similar messages with
For what it's worth, I used to get similar messages with Docker instances on
CentOS.
The way I debugged the problem was by looking at the Kafka logs. In that case, it
turned out that the brokers could not reach ZooKeeper, and this info was in the logs. The
logs will list the parameters the broker used at start
I have a Kafka cluster set up on an AWS public subnet, with all brokers having
Elastic IPs.
My producers are on a private subnet and are not able to produce to the Kafka
cluster on the public subnet.
Both subnets are in the same VPC.
I added the private IP/CIDR of the producer EC2 instance to the public Kafka
cluster's security group.
(I can t
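If it helps to narrow things down, one quick check from the producer EC2 instance
is to try both a broker's private VPC address and its Elastic IP with a minimal
client. Both addresses below are placeholders, and kafka-python is assumed purely
for illustration.

from kafka import KafkaProducer
from kafka.errors import KafkaError

# Placeholder addresses: the broker's private VPC IP and its Elastic IP.
for bootstrap in ["10.0.1.10:9092", "203.0.113.10:9092"]:
    try:
        producer = KafkaProducer(bootstrap_servers=bootstrap,
                                 request_timeout_ms=10000)
        # Send one record and block until the broker acknowledges it.
        metadata = producer.send("test-topic", b"connectivity check").get(timeout=10)
        print(bootstrap, "OK ->", metadata.topic, metadata.partition, metadata.offset)
        producer.close()
    except KafkaError as err:
        print(bootstrap, "failed:", err)

If the private address works but the Elastic IP does not, the listeners and
advertised.listeners settings the brokers hand back to clients are usually the
next thing to check.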
Does that happen in the middle of a consumer rebalance? If so, that could
be normal since on every rebalance, we will interrupt existing socket
connections. The question is whether rebalances complete in the end. If
there are too many rebalances, take a look at
https://cwiki.apache.org/confluence/d
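In case it helps anyone hitting frequent rebalances with a newer client, the sketch
below shows the consumer settings that usually influence how often rebalances are
triggered. It uses kafka-python with illustrative values, not recommendations, and
the topic, broker address, and group id are placeholders (the original thread is
about the older clients).

from kafka import KafkaConsumer

# Illustrative values only; tune them to your processing time and topic layout.
consumer = KafkaConsumer(
    "some-topic",
    bootstrap_servers="broker:9092",
    group_id="my-consumer-group",
    session_timeout_ms=30000,        # how long the coordinator waits for heartbeats
    heartbeat_interval_ms=10000,     # how often the client sends heartbeats
    max_poll_interval_ms=300000,     # max gap between poll() calls before eviction
    max_poll_records=100,            # smaller batches keep each poll cycle short
)

for message in consumer:
    # Keep per-record work short so the consumer polls (and heartbeats) often
    # enough that the group coordinator does not trigger a rebalance.
    print(message.topic, message.partition, message.offset)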
Hi,
I keep getting the following error in Kafka.
2014-01-14 10:46:55:073 SyncProducer [ERROR] Producer connection to :9092 unsuccessful
java.nio.channels.ClosedByInterruptException
at
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)