>>> at org.apache.beam.sdk.io.kafka.KafkaUnboundedSource.createReader(KafkaUnboundedSource.java:121)
>>> at org.apache.beam.sdk.io.kafka.KafkaUnboundedSource.createReader(KafkaUnboundedSource.java:43)
>>> at org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContext.output(FnApiDoFnRunner.java:1335)
>>> at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:75)
>>> at org.apache.beam.sdk.io.Read$UnboundedSo…
>>> at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForSplitRestriction(FnApiDoFnRunner.java:715)
>>> at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:216)
>>> at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForPairWithRestriction(FnApiDoFnRunner.java:688)
>>> at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:216)
>>> at org.apache.beam.fn.harness.FnApiDoFnRunner.access$600(FnApiDoFnRunner.java:121)
>>> at org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:1340)
>>> at org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContext.o…
>>> at …(PCollectionConsumerRegistry.java:179)
>>> at org.apache.beam.fn.harness.BeamFnDataReadRunner.forwardElementToConsumer(BeamFnDataReadRunner.java:177)
>>> at org.apache.beam.fn.harness.data.QueueingBeamFnDataClient.drainAndBlock(QueueingBeamFnDataClient.java:106)
>>> at …(ThreadPoolExecutor.java:624)
>>> at java.lang.Thread.run(Thread.java:748)
>>> log_location: "org.apache.beam.fn.harness.data.QueueingBeamFnDataClient"
>>
>> On Fri, Jun 5, 2020 at 3:57 PM Piotr Filipiuk wrote:
>>
>>> Thank you for the suggestions.
>>>
>>> Neither Kafka nor Flink run in a docker container, they all run locally.
>>>
>>> …error, see attached.
>>
>> On Fri, Jun 5, 2020 at 3:47 PM Venkat Muthuswamy <venkat_pack...@yahoo.com> wrote:
>>
>>> Is Kafka itself running inside another container? If so, inspect that
>>> container and see if it has a network alias, and add that alias to your
>>> /etc/hosts file and map it to 127.0.0.1.
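The suggestion above amounts to a one-line hosts-file entry. A sketch, assuming the alias the container reports is `kafka` (a placeholder; substitute whatever alias `docker inspect` actually shows for the broker container):

```shell
# Placeholder alias "kafka": replace with the network alias reported by
# `docker inspect` for the broker container. This maps the alias to the
# loopback address so the host resolves the broker's advertised name locally.
echo "127.0.0.1  kafka" | sudo tee -a /etc/hosts
```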
>>
>> *From:* Chamikara Jayalath
>> *Sent:* Friday, June 5, 2020 2:58 PM
>> *To:* Luke Cwik
>> *Cc:* user ; dev ; Heejong Lee
>> *Subject:* Re: Python SDK ReadFromKafka: Timeout expired while fetching
>> topic metadata
>>
>> Is it possible that "'localhost:9092'" is not available from the Docker
>> environment where the Flink step is executed from? Can you try specifying
>> the actual IP address of the node running the Kafka broker?
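A quick way to test the hypothesis above is to probe the broker address from the same environment where the Flink step executes. A minimal sketch (a plain TCP connect, not a Kafka-protocol handshake; the host and port are examples):

```python
import socket

def broker_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeout, and DNS resolution failures.
        return False

# Example probe of the default Kafka port; prints True only if something
# is actually listening on localhost:9092 in this environment.
print(broker_reachable("localhost", 9092))
```

If this returns False inside the container but True on the host, the broker address is simply not routable from where the SDK harness runs.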
On Fri, Jun 5, 2020 at 2:53 PM Luke Cwik wrote:
+dev +Chamikara Jayalath +Heejong Lee
On Fri, Jun 5, 2020 at 8:29 AM Piotr Filipiuk wrote:
> I am unable to read from Kafka and getting the following warnings & errors
> when calling kafka.ReadFromKafka() (Python SDK):
>
> WARNING:root:severity: WARN
> timestamp {
> seconds: 1591370012
>