Thanks Timo,
I can see why this is pretty complicated to solve nicely at the moment (and
in general).
We will work around this for now, and look forward to helping make this
better in the future!
Gyula
On Mon, Apr 20, 2020 at 4:37 PM Timo Walther wrote:
Hi Gyula,
first of all the exception
```
org.apache.flink.table.api.TableException: Rowtime attributes must not
be in the input rows of a regular join. As a workaround you can cast the
time attributes of input tables to TIMESTAMP before.
```
is IMHO one of the biggest shortcomings that we currently have.
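For reference, the cast-based workaround that the exception message suggests looks roughly like this (table and column names are hypothetical, taken from the example later in the thread):

```sql
-- Hypothetical sketch: drop the rowtime property before the regular join
-- by casting the attribute to a plain TIMESTAMP. The regular join is then
-- allowed, but event-time operations (window aggregates) on the result are
-- no longer possible, since `ts` is not a time attribute anymore.
SELECT
  CAST(k.`timestamp` AS TIMESTAMP(3)) AS ts,  -- plain timestamp, not rowtime
  k.item,
  k.quantity
FROM Kafka AS k;
```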
Thanks for the clarification, we can live with this restriction. I
just wanted to make sure that I fully understand why we are getting
these errors and whether there is any reasonable workaround.
Thanks again :)
Gyula
On Mon, Apr 20, 2020 at 4:21 PM Kurt Young wrote:
According to the current implementation, yes, you are right: the Hive table
source will always be bounded.
But conceptually, we can't make this assumption. For example, we
might further improve the Hive table source
to also support unbounded cases, e.g. monitoring Hive tables and always
reading newly appeared data.
The HiveTableSource (and many others) return isBounded() -> true.
In this case it is not even possible for it to change over time, so I am a
bit confused.
To me it sounds like you should always be able to join a stream against a
bounded table; temporal or not, it is pretty well defined.
Maybe there
The reason here is that Flink doesn't know the Hive table is static. After you
create these two tables and
try to join them, Flink will assume both tables can change over
time.
Best,
Kurt
On Mon, Apr 20, 2020 at 9:48 PM Gyula Fóra wrote:
Hi!
The problem here is that I don't have a temporal table.
I have a regular stream from Kafka (with an event time attribute) and a static
table in Hive.
The Hive table is static, it doesn't change. It doesn't have any time
attribute; it's not temporal.
Gyula
On Mon, Apr 20, 2020 at 3:43 PM godfrey wrote:
Hi Gyula,
Can you convert the regular join to lookup join (temporal join) [1],
and then you can use window aggregate.
> I understand that the problem is that we cannot join with the Hive table
> and still maintain the watermark/event time column. But why is this?
A regular join can't maintain the time attribute.
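A rough sketch of the lookup (temporal) join being suggested here, assuming the Hive connector supports lookups in the Flink version in use. The `proc` column is a hypothetical processing-time attribute, since Flink's `FOR SYSTEM_TIME AS OF` lookup joins are expressed against a processing-time attribute on the probe side:

```sql
-- Hypothetical sketch: a lookup join keeps the probe side's time attribute,
-- so window aggregates on `timestamp` remain possible after the join.
SELECT
  k.`timestamp`,
  k.item,
  k.quantity,
  h.price
FROM Kafka AS k
JOIN Hive FOR SYSTEM_TIME AS OF k.proc AS h
  ON k.item = h.item;
```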
Hi All!
We hit the following problem with SQL and are trying to understand whether
there is a valid workaround.
We have 2 tables:
*Kafka*
timestamp (ROWTIME)
item
quantity
*Hive*
item
price
So we basically have incoming (ts, id, quantity) records and we want to join
them with the Hive table to get the total price.
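A minimal sketch of this setup (connector options, watermark interval, and column types are assumptions, not taken from the thread):

```sql
-- Hypothetical DDL approximating the Kafka table described above.
CREATE TABLE Kafka (
  `timestamp` TIMESTAMP(3),
  item STRING,
  quantity INT,
  -- the WATERMARK clause is what makes `timestamp` a rowtime attribute
  WATERMARK FOR `timestamp` AS `timestamp` - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka'  -- remaining options omitted
);

-- The Hive table (item STRING, price DOUBLE) is registered via the Hive
-- catalog. A plain join like the following is a *regular* join, and it is
-- what produces the "Rowtime attributes must not be in the input rows of a
-- regular join" exception discussed in the thread:
SELECT k.`timestamp`, k.item, k.quantity * h.price AS total_price
FROM Kafka AS k
JOIN Hive AS h ON k.item = h.item;
```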