Thanks a lot for the added context and pointers, Julian and Leonard,
I've fixed it by going down to the arithmetic, as suggested in one of the
Calcite discussions.
The changes proposed by FLIP-126 definitely look good. I'll check its
details further.
Best Regards,
On Thu, 4 Mar 2021 at 04:18, Le
While using a simple query such as this:
SELECT
`ts`,
FLOOR(`ts` TO WEEK) as `week_start`,
CEIL(`ts` TO WEEK) as `week_end`
FROM some_table
I get some weird results like these:
2021-03-01T00:00|2021-02-25T00:00|2021-03-04T00:00
Which is obviously wrong since March 1st is on Monday.
k strategy on this computed field to
> make the field a rowtime attribute, because a streaming OVER window
> requires ordering by a time attribute.
>
> Best,
> Jark
>
> On Sun, 21 Feb 2021 at 07:32, Sebastián Magrí
> wrote:
>
>> I have a table with t
ng async but
>> easier to interact with also work, like a Discourse forum?
>>
>> Thanks for bringing this up!
>>
>> Marta
>>
>>
>>
>> On Mon, Feb 22, 2021 at 10:03 PM Yuval Itzchakov
>> wrote:
>>
>>> A dedicated Slack would be
Is there any chat run by the community?
I saw the freenode channel but it's pretty dead.
A lot of the time, a more chat-like venue in which to discuss things
synchronously or just share ideas turns out to be very useful and stimulates
the community.
--
Sebastián Ramírez Magrí
> Regards,
> Timo
>
> On 20.02.21 18:46, Sebastián Magrí wrote:
> > I mean the SQL queries being validated when I do `mvn compile` or any
> > target that runs that so that basic syntax checking is performed without
> > having to submit the job to the cluster
I'm using a query like this:
WITH aggs_1m AS (
SELECT
`evt`,
`startts`,
`endts`,
SUM(`value`) AS `value`
FROM aggregates_per_minute
), aggs_3m AS (
SELECT
`evt`,
TUMBLE_START(`endts`, INTERVAL '3' MINUTE) AS `startts`,
TUMBLE_END(`endts`, INTERVAL '3' MINUTE) AS `en
I have a table with two BIGINT fields for start and end of an event as UNIX
time in milliseconds. I want to be able to have a resulting column with the
delta in milliseconds and group by that difference. Also, I want to be able
to have aggregations with window functions based upon the `end` field.
> have?
>
> This looks clearly like a bug to me. We should open an issue in JIRA.
>
> Regards,
> Timo
>
> On 18.02.21 16:17, Sebastián Magrí wrote:
> > While using said function in a query I'm getting a query compilation
> > error saying that there's no
> "pre-flight
> phase". A cluster is not required, but it already runs in the JVM of the
> client.
>
> Regards,
> Timo
>
> On 18.02.21 14:55, Sebastián Magrí wrote:
> > Is there any way to check SQL strings in compile time?
> >
> > --
> > Sebastián Ramírez Magrí
>
>
--
Sebastián Ramírez Magrí
While using said function in a query I'm getting a query compilation error
saying that there's no applicable method for the given arguments. The
parameter types displayed in the error are:
org.apache.flink.table.data.TimestampData,
org.apache.flink.table.data.TimestampData
And there's no overload
Is there any way to check SQL strings in compile time?
--
Sebastián Ramírez Magrí
The root of the previous error seemed to be the Flink version the connector
was compiled for. I've tried compiling my own postgresql-cdc connector, but
still have some issues with dependencies.
On Thu, 28 Jan 2021 at 11:24, Sebastián Magrí wrote:
> Applied that parameter and that seem
ote:
> Hi Sebastian,
>
> sorry for the late reply. Could you solve the problem in the meantime?
> It definitely looks like a dependency conflict.
>
> Regards,
> Timo
>
>
> On 22.01.21 18:18, Sebastián Magrí wrote:
> > Thanks a lot Matthias!
> >
> >
k
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/#transform-table-connectorformat-resources
>
> On Thu, 28 Jan 2021 at 17:28, Sebastián Magrí
> wrote:
>
>> Hi Jark!
>>
>> Please find the full pom file attached.
>>
and
> the Factory file contains
>
> com.alibaba.ververica.cdc.connectors.postgres.table.PostgreSQLTableFactory
>
>
> Best,
> Jark
>
>
> On Tue, 26 Jan 2021 at 21:17, Sebastián Magrí
> wrote:
>
>> Thanks a lot for looking into it
> If not, you could try applying the ServicesResourceTransformer[1]
>
> Best,
>
> Dawid
>
> [1]
> https://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html#ServicesResourceTransformer
> On 26/01/2021 12:29, Sebastián Magrí wrote:
>
> Hi!
Hi!
I've reported an issue with the postgresql-cdc connector apparently caused
by the maven shade plugin excluding either the JDBC connector or the cdc
connector due to overlapping classes. The issue for reference is here:
https://github.com/ververica/flink-cdc-connectors/issues/90
In the meanti
> Best,
> Matthias
>
> On Fri, Jan 22, 2021 at 4:35 PM Sebastián Magrí
> wrote:
>
>> Hi Matthias!
>>
>> I went through that thread but as I'm just using the `apache/flink`
>> docker image for testing I honestly couldn't figure out how I would do th
flink-table-planner-blink as it is suggested in [1]?
>
> Best,
> Matthias
>
> [1]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Flink-1-10-exception-Unable-to-instantiate-java-compiler-td38221.html
>
> On Fri, Jan 22, 2021 at 4:04 PM Sebastián Magrí
> wro
Hi!
I'm trying out Flink SQL with the attached docker-compose file.
It starts up and then I create a table with the following statement:
CREATE TABLE mytable_simple (
`customer_id` INT
) WITH (
'connector' = 'jdbc',
'url' = 'jdbc:postgresql://pgusr:pgpwd@postgres/pdgb',
'table-name' = 'm