s connector can monitor a directory and pick up any new files that are
> created. Great for picking up batch files, parsing them, and publishing
> each line as if it were published in realtime.
>
> -hans
>
> > On Mar 15, 2019, at 7:52 AM, Pulkit Manchanda
> wrote:
> >
Hi All,
I am building a data pipeline to send logs from one data source to another
node.
I am using Kafka Connect standalone for this integration.
Everything works fine, but the problem is that on Day 1 the log file is
renamed to log_Day0 and a new log file, log_Day1, is created.
And my Kafka Connect does not pick up the new file.
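A minimal Kafka Connect standalone setup for this kind of file source might look like the sketch below; all names and paths are illustrative. Note that the stock FileStreamSource connector tails exactly one file path, so for the Day0/Day1 rotation you would either point it at a stable path for the active log or use a directory-watching source connector of the kind Hans describes above.

# file-source.properties (illustrative names and paths)
name=daily-log-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
# the stock connector follows a single file; a spool-dir style connector
# would instead watch a directory and pick up each new log_DayN file
file=/var/log/app/log_current
topic=app-logs

# run with the standalone worker
bin/connect-standalone.sh config/connect-standalone.properties file-source.properties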
Aruna,
Instead of using 1000 partitions, you can choose to create dynamic topics and
have a consumer for each topic.
On Fri, Jan 11, 2019 at 6:43 AM Peter Levart wrote:
>
>
> On 1/10/19 2:26 PM, Sven Ludwig wrote:
> > Okay, but
> >
> > what if one also needs to preserve the order of messages comin
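A rough sketch of the topic-per-entity approach Aruna suggests, using the Java AdminClient and consumer from Scala; the topic names, group id, partition counts, and broker address are placeholders, and pattern subscription needs a reasonably recent client.

import java.util.{Collections, Properties}
import java.util.regex.Pattern
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

val bootstrap = "localhost:9092" // placeholder

// create one topic per entity (e.g. per device) instead of 1000 partitions on a single topic
val adminProps = new Properties()
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)
val admin = AdminClient.create(adminProps)
admin.createTopics(Collections.singletonList(new NewTopic("device-42", 1, 1.toShort))).all().get()

// a dedicated consumer per topic, or one consumer matching the whole family of dynamic topics
val consumerProps = new Properties()
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap)
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "device-readers")
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
val consumer = new KafkaConsumer[String, String](consumerProps)
consumer.subscribe(Pattern.compile("device-.*"))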
Yes, as Todd said, you have to use some ID as the key for partitioning.
The rebalancing will be an overhead, and if you increase the partitions
later you will lose the ordering.
You can go through
https://anirudhbhatnagar.com/2016/08/22/achieving-order-guarnetee-in-kafka-with-partitioning/
for more understanding.
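A small sketch of the keyed-producer idea: records that share a key always hash to the same partition, which is what preserves their relative order. The topic name, key, and broker address below are placeholders.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
val producer = new KafkaProducer[String, String](props)

// all records keyed "device-42" go to one partition, so their order is kept;
// adding partitions later changes the key-to-partition mapping, which is why
// ordering can be lost if the partition count grows afterwards
producer.send(new ProducerRecord[String, String]("measurements", "device-42", "temp=21.5"))
producer.send(new ProducerRecord[String, String]("measurements", "device-42", "temp=21.7"))
producer.close()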
Hi All,
I have a consumer application continuously polling for records in a
while loop, wasting CPU cycles.
Is there any alternative, like getting a callback/event from Kafka as soon as
the producer publishes a record to the topic?
Thanks
Pulkit
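There is no push-style callback in the plain consumer API, but poll() itself blocks inside the client for up to the given timeout while waiting for records, so the while loop does not have to burn CPU between messages. A minimal sketch, with placeholder topic, group id, and broker address:

import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

val props = new Properties()
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-readers")
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(Collections.singletonList("app-logs"))

while (true) {
  // blocks for up to one second waiting for data instead of spinning
  val records = consumer.poll(Duration.ofSeconds(1))
  records.forEach(r => println(s"${r.key} -> ${r.value}"))
}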
multiple threads. Don't
> initiate a new Kafka producer for each of your threads.
>
> On Fri, Aug 17, 2018 at 9:26 PM Pulkit Manchanda
> wrote:
>
> > Hi All,
> >
> > I am sending the multiple records to the same topic.
> > I have the two approaches
Hi All,
I am sending multiple records to the same topic.
I have two approaches:
1) Sharing the producer with all the threads
2) Creating a new producer for every thread.
I am sending records of size ~150 MB on multiple requests.
I am running the cluster and the app on my local machine with 3
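Along the lines of the advice above, a sketch of approach 1: one producer shared by all threads. KafkaProducer is thread safe, so a single instance per application is usually enough; also note that ~150 MB records would need max.request.size (and the broker's message.max.bytes) raised well above the roughly 1 MB defaults. Names and the broker address are placeholders.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

// one producer for the whole app; every request-handling thread calls send() on it
object SharedProducer {
  private val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  val instance = new KafkaProducer[String, String](props)
}

// from any thread:
SharedProducer.instance.send(new ProducerRecord[String, String]("uploads", "request-1", "payload"))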
> Not sure. Anything interesting in the logs? Maybe you need to enable DEBUG.
>
> As an alternative, you might ask a question on the Github page providing
> the example code.
>
>
> -Matthias
>
> On 8/1/18 7:11 AM, Pulkit Manchanda wrote:
> > Hi Matthias,
Hi Matthias,
Thanks for the reply. I had already tried that, but it doesn't work either.
Pulkit
On Tue, Jul 31, 2018 at 9:22 PM, Matthias J. Sax
wrote:
> Is `delete.topic.enable` set to `true`? It's a broker configuration.
>
>
> -Matthias
>
> On 7/31/18 8:57 AM, Pu
Hi All,
I want to create and delete Kafka topics at runtime in my application.
I followed a few projects on GitHub, like
https://github.com/simplesteph/kafka-0.11-examples/blob/master/src/main/scala/au/com/simplesteph/kafka/kafka0_11/demo/KafkaAdminClientDemo.scala
But to no avail. The code runs
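For reference, a minimal create/delete sketch with the Java AdminClient, usable from Scala; the topic name and broker address are placeholders, and, as noted above, topic deletion only takes effect when the broker has delete.topic.enable=true.

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
val admin = AdminClient.create(props)

// create a topic with 3 partitions and replication factor 1, blocking until done
admin.createTopics(Collections.singletonList(new NewTopic("runtime-topic", 3, 1.toShort))).all().get()

// delete it again; requires delete.topic.enable=true on the broker
admin.deleteTopics(Collections.singletonList("runtime-topic")).all().get()

admin.close()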
yet to reach the max. limit considering the system
> resources used. I guess our bottleneck might be the no. of tcp connections
> to a broker.
>
>
> -Original Message-----
> From: Pulkit Manchanda
> Sent: Tuesday, July 24, 2018 6:16 AM
> To: users@kafka.apache.org
> Sub
Hi All,
I am working on a use case where multiple producers will be publishing
to the same topic.
For this I need to know: what is the maximum number of producers that can
write to the same topic?
Also, is there a way to create a pool of producers so that these
instances can be shared?
Thanks
Pulkit
ect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> > On Jul 11, 2018, at 8:53 AM, Pulkit Manchanda
> wrote:
> >
> > Hi All,
> >
> > I want to build a datapipeline with the following design. Can please
> anyone
> > advice me that is it feasible t
Hi All,
I want to build a data pipeline with the following design. Can anyone please
advise me whether it is feasible to do, or whether there are better options?
HTTP Streams --> (HTTP stream consumer)(using Akka HTTP Streaming) --> (Kafka
Stream Producer)(using Kafka Streaming) --> (Kafka Stream Consumer)(
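One common way to wire the Akka HTTP side into Kafka is Alpakka Kafka (akka-stream-kafka) rather than Kafka Streams, since the HTTP response entity is already an Akka Streams source. A rough sketch under that assumption; the topic name, broker address, and the httpLines source (standing in for the real HTTP stream) are placeholders.

import akka.actor.ActorSystem
import akka.kafka.ProducerSettings
import akka.kafka.scaladsl.Producer
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

implicit val system: ActorSystem = ActorSystem("http-to-kafka")
implicit val materializer: ActorMaterializer = ActorMaterializer()

val producerSettings =
  ProducerSettings(system, new StringSerializer, new StringSerializer)
    .withBootstrapServers("localhost:9092")

// stands in for the Source[String, _] obtained from the Akka HTTP entity
// (e.g. entity.dataBytes framed into lines)
val httpLines: Source[String, _] = Source(List("line-1", "line-2"))

httpLines
  .map(line => new ProducerRecord[String, String]("http-ingest", line))
  .runWith(Producer.plainSink(producerSettings))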
duction/introduction-to-dependency-
> mechanism.html#Dependency_Scope
>
> Kind regards,
>
> Liam Clarke
>
> On Tue, 10 Jul. 2018, 9:24 am Pulkit Manchanda,
> wrote:
>
> > Hi,
> >
> > I am trying to do structured streaming with kafka as source.
Hi,
I am trying to do structured streaming with Kafka as the source.
I am unable to get past this code.
val df = spark
.readStream
.format("org.apache.spark.sql.kafka010.KafkaSourceProvider")
.option("kafka.bootstrap.servers", "localhost:8082")
.option("subscribe", "jsontest")
.load()
T
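The dependency-scope link quoted above points at the usual cause: the spark-sql-kafka-0-10 artifact has to be on the runtime classpath (compile scope, or passed via --packages), after which the source is normally addressed as format("kafka"). A sketch under those assumptions; the Spark version, master, and broker port are placeholders (9092 is the Kafka default listener).

// build.sbt: libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.1"
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]") // local run; drop when submitting to a cluster
  .appName("kafka-structured-streaming")
  .getOrCreate()

val df = spark
  .readStream
  .format("kafka") // short name registered by the spark-sql-kafka-0-10 package
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "jsontest")
  .load()

df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("console")
  .start()
  .awaitTermination()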