Thanks. It might be theoretically possible to do this (at least for the
case where existing fields do not change). Whether anyone currently has
available time to do this is a different question, but it's something that
can be looked into.
On Mon, Dec 7, 2020 at 9:29 PM Talat Uyarer wrote:
> Addi…
Thanks Reuven,
I can work on that. I know the internals of BeamSQL, but I could not figure
out how to replace a step's code with newly generated code after the
pipeline is submitted. Could you share your thoughts on this?
Thanks
On Tue, Dec 8, 2020 at 9:20 AM Reuven Lax wrote:
Reuven, could you clarify what you have in mind? I know multiple times
we've discussed the possibility of adding update compatibility support to
SchemaCoder, including support for certain schema changes (field
additions/deletions) - I think the most recent discussion was here [1].
But it sounds like…
There's a difference between a fully dynamic schema and simply being able
to forward "unknown" fields to the output.
A fully dynamic schema is not really necessary unless we also have dynamic
SQL statements. Since the existing SQL statements do not reference the new
fields by name, there's no reason…
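To make that concrete, here is a minimal sketch in Beam Java (the field
names and data are made up, and the pass-through of unknown fields is the
proposal under discussion, not current Beam behavior):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Pipeline p = Pipeline.create();

    // Schema known at submission time; "user_id" and "clicks" are
    // made-up names.
    Schema known = Schema.builder()
        .addStringField("user_id")
        .addInt64Field("clicks")
        .build();

    PCollection<Row> rows = p.apply(
        Create.of(Row.withSchema(known).addValues("alice", 42L).build())
            .withRowSchema(known));

    // The statement names only known fields; under the proposal, fields
    // added to the input schema after submission would simply flow through
    // SELECT * as opaque "unknown" fields, with no need for a fully
    // dynamic schema.
    PCollection<Row> out = rows.apply(
        SqlTransform.query("SELECT * FROM PCOLLECTION WHERE clicks > 10"));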
Proposal 1 would also interact poorly with SELECT * EXCEPT ... statements,
which return all columns except specific ones. Adding an unknown field
does seem like a reasonable way to handle this. It probably needs to be
something that is native to the Row type, so columns added to nested rows
also work…
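A concrete example of the conflict (a sketch only: whether Beam's ZetaSQL
planner accepts * EXCEPT end to end is an assumption here, and "ssn" is a
made-up column name):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // "All columns except ssn." If a field named ssn later arrives as an
    // unknown field, the planner has no way to know it should be excluded,
    // which is the interaction described above.
    static PCollection<Row> dropSsn(PCollection<Row> rows) {
      return rows.apply(
          SqlTransform.query("SELECT * EXCEPT (ssn) FROM PCOLLECTION")
              .withQueryPlannerClass(ZetaSQLQueryPlanner.class));
    }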
I'm not sure that we could support EXCEPT statements, as that would require
introspecting the unknown fields (what if the EXCEPT statement matches a
field that is later added as an unknown field?). IMO this sort of behavior
only makes sense on true pass-through queries. Anything that modifies the
input…
We could support EXCEPT statements in proposal 2 as long as we restricted
it to known fields.
We are getting into implementation details now. Making unknown fields just
a normal column introduces a number of problems. ZetaSQL doesn't support a
Map type. All our IOs would need to explicitly deal with…
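To illustrate the first problem, this is roughly the shape "unknown fields
as a normal column" would take (a sketch; the column name is invented). A
map from field name to raw bytes is the natural representation, and that is
exactly where ZetaSQL's missing Map type bites:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;

    // Hypothetical layout: unknown fields carried as an ordinary map
    // column. Calcite can model this, but ZetaSQL has no Map type, so the
    // column could not be referenced from a ZetaSQL query at all.
    Schema withUnknowns = Schema.builder()
        .addStringField("user_id")
        .addInt64Field("clicks")
        .addMapField("unknown_fields", FieldType.STRING, FieldType.BYTES)
        .build();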
Hi Beam community,
I have a quick question about the withValueSerializer() method of the
KafkaIO.Write class:
https://beam.apache.org/releases/javadoc/2.25.0/org/apache/beam/sdk/io/kafka/KafkaIO.Write.html
The withValueSerializer method does not support passing in a serializer
provider. The problem with…
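For reference, this is how the method is used today (broker address and
topic are placeholders): it accepts only a Class, which Kafka instantiates
reflectively, so a pre-configured serializer instance or provider cannot be
handed in through it.

    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.kafka.common.serialization.LongSerializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    static void writeToKafka(PCollection<KV<Long, String>> records) {
      records.apply(KafkaIO.<Long, String>write()
          .withBootstrapServers("broker:9092")  // placeholder address
          .withTopic("events")                  // placeholder topic
          // Only Class objects are accepted; Kafka constructs the
          // serializer itself, so there is no hook for passing a
          // configured instance or a provider.
          .withKeySerializer(LongSerializer.class)
          .withValueSerializer(StringSerializer.class));
    }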
Talat, are you interested in writing a proposal and sending it to
d...@beam.apache.org? We could help advise on the options.
Reuven
On Tue, Dec 8, 2020 at 10:28 AM Andrew Pilloud wrote:
Hi all,
Sorry for stepping in. This case reminds me of a similar requirement at my
company a long time ago: plugging lambda functions into Beam pipelines
dynamically (filtering, selecting, etc.) without restarting the job,
similar to Flink stateful functions, Akka, etc.
Generally, SQL defines input, output, and transformation…
Yes Reuven, I would like to write a proposal for that. I also like Andrew
Pilloud's idea: we can put only the necessary fields on the Row, and the
rest can stay on the unknown-fields side. We are using Beam Calcite SQL.
Is that OK?
On Tue, Dec 8, 2020 at 3:15 PM Reuven Lax wrote:
Kobe, could you elaborate on your idea a little bit?
On Tue, Dec 8, 2020, 6:27 PM Kobe Feng wrote:
Talat, my bad, first things first: to resolve the issue, your proposal
would definitely be a good starting point for researching schema evolution
in Beam pipelines, and I can comment there if needed.
Andrew's first reply is clear about the intention and scope for Apache
Beam: a static graph for maximum optimization…