>> Seems like the Java dependency is not being properly set up when running the
>> cross-language Kafka step. I don't think this was available for Beam 2.21.
>> Can you try with the latest Beam HEAD or Beam 2.22 when it's released?
>> +Heejong
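For context, the cross-language Kafka read from the Python SDK looks roughly
like this (the broker address and topic name are placeholders; the transform
needs a Java expansion service, which recent Beam releases can start
automatically):

  import apache_beam as beam
  from apache_beam.io.kafka import ReadFromKafka

  with beam.Pipeline() as p:
      # ReadFromKafka is a cross-language transform backed by the Java
      # KafkaIO; "localhost:9092" and "my_topic" are placeholder names.
      kvs = (
          p
          | ReadFromKafka(
              consumer_config={"bootstrap.servers": "localhost:9092"},
              topics=["my_topic"])
          | beam.Map(print))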
How did you start the Spark job server, and which version of the Apache Beam
SDK did you use?
There were some protocol changes recently, so if the two versions don't
match you may see gRPC errors. If you used the Gradle command on the
latest HEAD for starting the Spark job server (e.g.
./gradlew :runners:spark:job-server:runShadow), I would recommend checking out
the SDK at the same commit so the versions match.
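For completeness, here is roughly how a Python pipeline is pointed at a
locally running job server (8099 is the default port; adjust to your setup):

  import apache_beam as beam
  from apache_beam.options.pipeline_options import PipelineOptions

  # Assumes a Spark job server is already listening on localhost:8099;
  # LOOPBACK runs the SDK worker inside the launching process.
  options = PipelineOptions([
      "--runner=PortableRunner",
      "--job_endpoint=localhost:8099",
      "--environment_type=LOOPBACK",
  ])
  with beam.Pipeline(options=options) as p:
      _ = p | beam.Create([1, 2, 3]) | beam.Map(print)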
What protobuf version did you use for generating the Message object? It
looks like ProtoCoder supports both proto2 and proto3.
https://github.com/apache/beam/blob/master/sdks/java/extensions/protobuf/src/main/java/org/apache/beam/sdk/extensions/protobuf/ProtoCoder.java#L70
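The link above is to the Java coder; just to illustrate the idea, here is
the analogous round trip with the Python SDK's ProtoCoder, assuming a
hypothetical protoc-generated module my_messages_pb2:

  from apache_beam.coders import ProtoCoder
  from my_messages_pb2 import MyMessage  # hypothetical generated module

  coder = ProtoCoder(MyMessage)
  msg = MyMessage(id=1)  # hypothetical field
  assert coder.decode(coder.encode(msg)) == msg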
On Wed, May 13, 2020 at 6:43 AM T
If we assume that there's only one reader, all partitions are assigned to a
single KafkaConsumer. I think the order in which the partitions are read then
depends on the KafkaConsumer implementation, i.e. how KafkaConsumer.poll()
returns messages (see the sketch after the reference below).
Reference:
assigning partitions:
https://github.com/apache/beam/blob
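Beam's KafkaIO uses the Java Kafka client internally; purely as an
illustration, here is a sketch with the kafka-python library showing one
consumer owning every partition, where poll() hands back records batched
per partition in an order the client decides (broker and topic names are
placeholders):

  from kafka import KafkaConsumer, TopicPartition

  # One consumer is assigned every partition of the topic.
  consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
  parts = [TopicPartition("my_topic", p)
           for p in consumer.partitions_for_topic("my_topic")]
  consumer.assign(parts)

  # poll() returns {TopicPartition: [records]}; the interleaving across
  # partitions is up to the client, not the caller.
  for tp, records in consumer.poll(timeout_ms=1000).items():
      print(tp.partition, [r.offset for r in records])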
What do you mean by "PCollection of dicts, each having different key
values"? What's the type of the PCollections? I assume that you want to
merge two PCollections of KV pairs such as
PCollection[("a", 1), ("b", 2), ("c", 3)] + PCollection[("a", 4), ("d", 5),
("e", 6)]. Is that correct?
On Tue, Feb 11,
Looks like your code is not syntactically correct. It's impossible to chain
one PCollection to another PCollection with a vertical bar. Each chain should
start with an initial pipeline or PCollection and generate another PCollection
by applying a PTransform.
If you modify your code along the lines of the sketch below, it should work:
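A minimal sketch of the correct chaining pattern, with placeholder names
(p, lines, words, counts) standing in for your code:

  import apache_beam as beam

  with beam.Pipeline() as p:
      # Each `|` applies one PTransform to a pipeline or PCollection and
      # yields a new PCollection; never write `pcoll1 | pcoll2`.
      lines = p | beam.Create(["a b", "b c"])
      words = lines | beam.FlatMap(str.split)
      counts = words | beam.combiners.Count.PerElement()
      counts | beam.Map(print)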