By the way, I've just tried using AvroCoder, which infers the
schema from the Java object to be deserialized, and I got the same error
when restoring the checkpoint :(
On Fri, 2020-06-05 at 06:24 +, Ivan San Jose wrote:
> Sorry but I'm afraid I'm not understanding well the scenario...
> What'
It does not work even after removing the LOOPBACK flag.
Current settings:
"--runner=PortableRunner",
"--job_endpoint=localhost:8099",
I have a jobserver running and flink cluster running on docker on the same
machine on gcp vm.
CONTAINER ID   IMAGE
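For reference, here is a minimal sketch of how these options are usually wired up from the Python SDK (this assumes the job server is already listening on localhost:8099; the pipeline itself is just a placeholder):

# Sketch only: Python SDK submitting to a Flink job server on localhost:8099.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    # No environment_type set, so the SDK harness defaults to DOCKER,
    # which means the Flink task managers must have Docker available.
])

with beam.Pipeline(options=options) as p:
    (p
     | beam.Create(["a", "b", "c"])
     | beam.Map(print))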
See my answers inline.
> Sorry but I'm afraid I'm not understanding well the scenario... What's
> the point on keeping a reference of the serialized class if
> AVRO/Protobuf are using schemas?
They keep a reference to the class because they produce that type when
they deserialize data.
> I mean,
Thank you so much for your detailed answers Max, I will try to achieve
what you've suggested about creating a custom coder which doesn't have
non-transient fields referencing the serialized Java model. My skills
with Beam are not so advanced, but I will try my best hehehe
On Fri, 2020-06-05 a
If you remove environment_type=LOOPBACK, the default is docker, which
requires your Flink task managers to have Docker installed (explained here:
https://beam.apache.org/documentation/runtime/sdk-harness-config/). You can
try Docker in Docker if you want, but that's not the best way to do this.
In
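To avoid Docker-in-Docker, one option is to pick a different SDK harness environment explicitly. A hedged sketch (the EXTERNAL worker pool address below is hypothetical; see the sdk-harness-config page linked above for the supported values):

# Sketch only: choosing the SDK harness environment instead of the DOCKER default.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    # For quick local testing:
    # "--environment_type=LOOPBACK",
    # Or point at an already-running external worker pool (address is an assumption):
    "--environment_type=EXTERNAL",
    "--environment_config=localhost:50000",
])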
I am unable to read from Kafka and getting the following warnings & errors
when calling kafka.ReadFromKafka() (Python SDK):
WARNING:root:severity: WARN
timestamp {
seconds: 1591370012
nanos: 52300
}
message: "[Consumer clientId=consumer-2, groupId=] Connection to node -1
could not be estab
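For context, a minimal sketch of the kind of ReadFromKafka call that produces this (the broker address and topic name are placeholders, not the actual values used):

# Sketch only: minimal ReadFromKafka usage on the Python SDK via the portable Flink runner.
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
])

with beam.Pipeline(options=options) as p:
    (p
     | ReadFromKafka(
           consumer_config={"bootstrap.servers": "localhost:9092"},
           topics=["my_topic"])
     | beam.Map(print))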
Hi folks,
As a sponsor for Berlin Buzzwords, we were given some ticket codes. If any
of you are interested in receiving one of the codes for the event, please
let me know and I will send the ticket shop link and code over to you!
Berlin Buzzwords is one of the leading conferences for Big Data in
+dev +Chamikara Jayalath +Heejong Lee
On Fri, Jun 5, 2020 at 8:29 AM Piotr Filipiuk wrote:
> I am unable to read from Kafka and getting the following warnings & errors
> when calling kafka.ReadFromKafka() (Python SDK):
>
> WARNING:root:severity: WARN
> timestamp {
> seconds: 1591370012
>
Is it possible that 'localhost:9092' is not available from the Docker
environment where the Flink step is executed? Can you try specifying
the actual IP address of the node running the Kafka broker (see the
sketch after the quoted message below)?
On Fri, Jun 5, 2020 at 2:53 PM Luke Cwik wrote:
> +dev +Chamikara Jayalath
> +Heejong
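For example (a sketch; the address below is made up), the only change would be in the consumer config passed to ReadFromKafka:

# Sketch only: use the broker's routable IP instead of localhost.
from apache_beam.io.kafka import ReadFromKafka

kafka_source = ReadFromKafka(
    consumer_config={"bootstrap.servers": "10.128.0.5:9092"},  # hypothetical node IP
    topics=["my_topic"])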
Is Kafka itself running inside another container? If so, inspect that container
to see whether it has a network alias; if it does, add that alias to your
/etc/hosts file and map it to 127.0.0.1.
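For example, if the container's network alias were "kafka" (a made-up alias for illustration), the /etc/hosts entry would simply be "127.0.0.1 kafka", the idea being that the hostname Kafka advertises to clients then resolves back to the local machine.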
From: Chamikara Jayalath
Sent: Friday, June 5, 2020 2:58 PM
To: Luke Cwik
Cc: user ; dev ; Heejong Lee
Sub
Thank you for the suggestions.
Neither Kafka nor Flink runs in a Docker container; both run locally.
Furthermore, the same issue happens with the Direct Runner. That being said,
changing from localhost:$PORT to $IP_ADDR:$PORT resulted in a different
error; see attached.
On Fri, Jun 5, 2020 at 3:47 P