Hi,
I'm using the Flink 1.17.1 streaming API, on YARN.
My app first got stuck at process function serialization. I know Avro Schema
is not serializable, so I removed all references to it from my process
functions. Now it passes the first round, but gets stuck again at the following
error:
org.apache.flink.client.program.Pr
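For reference, a minimal sketch of the common workaround (this is not the poster's
actual code, and the schema JSON is a hypothetical example): keep the Avro Schema in
a transient field and re-parse it in open(), which runs on the task managers, so the
schema never has to be serialized with the function.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class ExtractIdFunction extends ProcessFunction<GenericRecord, String> {
    // Hypothetical schema string; it could also be loaded from a registry in open().
    private static final String SCHEMA_JSON =
        "{\"type\":\"record\",\"name\":\"Event\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}";

    private transient Schema schema; // transient: never shipped with the serialized function

    @Override
    public void open(Configuration parameters) {
        schema = new Schema.Parser().parse(SCHEMA_JSON); // rebuilt once per task instance
    }

    @Override
    public void processElement(GenericRecord record, Context ctx, Collector<String> out) {
        // The schema is usable here without ever having been part of the closure.
        out.collect(record.get(schema.getFields().get(0).name()).toString());
    }
}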
bstraction for all lookup tables, and each connector has its own cache
> implementation. For example, JDBC uses a Guava cache and FileSystem uses an
> in-memory HashMap, and neither of them loads all records in the dim table into
> the cache.
>
> Best,
>
> Qingsheng
>
>
> >
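For what it's worth, a hedged sketch of the bounded JDBC lookup cache described above
(the table name, URL and sizes are made up for illustration): the cache is capped by
'lookup.cache.max-rows' and 'lookup.cache.ttl', so it never has to hold the whole dim table.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JdbcLookupCacheExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
            "CREATE TABLE dim_users (" +
            "  id BIGINT," +
            "  name STRING" +
            ") WITH (" +
            "  'connector' = 'jdbc'," +
            "  'url' = 'jdbc:mysql://example-host:3306/mydb'," + // hypothetical database
            "  'table-name' = 'users'," +
            "  'lookup.cache.max-rows' = '10000'," +             // bounded cache, not the full table
            "  'lookup.cache.ttl' = '10min'" +
            ")");
    }
}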
Hi,
I've read some docs
(https://help.aliyun.com/document_detail/182011.html) describing Flink
optimization techniques using:
- partitionedJoin = 'true'
- cache = 'ALL'
- blink.partialAgg.enabled=true
However, I could not find any references to these in the official docs. Are they
supported at all?
Also "partitione
e table options that are listed on the table configuration page [1],
> plus some pipeline options.
>
> State backend options are likely not among them.
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/config/
>
> Best,
> Paul Lam
>
> 2022年3
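As an illustration of the documented table options that SET does accept, a hedged sketch
from the Table API (the keys are taken from the linked configuration page, the values are
arbitrary); in the SQL Client the equivalent form is SET '<key>' = '<value>';.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TableConfigExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        // Documented options from the table configuration page
        tEnv.getConfig().getConfiguration().setString("table.exec.mini-batch.enabled", "true");
        tEnv.getConfig().getConfiguration().setString("table.exec.mini-batch.allow-latency", "5 s");
        tEnv.getConfig().getConfiguration().setString("table.exec.mini-batch.size", "5000");
    }
}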
I just tried editing flink-conf.yaml and it seems the SQL Client does not respect
it either. Is this intended behavior?
On Tue, Mar 15, 2022 at 7:14 PM dz902 wrote:
> Hi,
>
> I'm using Flink 1.14 and was unable to set S3 as the state backend. I tried
> combinations of:
>
> SET st
Hi,
I'm using Flink 1.14 and was unable to set S3 as the state backend. I tried
combinations of:
SET state.backend='filesystem';
SET state.checkpoints.dir='s3://xxx/checkpoints/';
SET state.backend.fs.checkpointdir='s3://xxx/checkpoints/';
SET state.checkpoint-storage='filesystem'
As well as:
SET st
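For comparison, a hedged sketch of applying the same settings through the DataStream API
in 1.14 instead of SQL Client SET statements (the s3 path is the same placeholder as above):

import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStateBackend(new HashMapStateBackend());    // heap state, what 'state.backend: hashmap' selects
        env.getCheckpointConfig()
           .setCheckpointStorage("s3://xxx/checkpoints/"); // placeholder checkpoint dir
    }
}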
nly after a successful checkpoint or at the end of
> input. I guess you did not enable checkpointing, and as Kafka is a never-ending
> source, Hudi will never commit the records. For your testing job, as the value
> sources are finite and will end soon, you can see records in Hudi
> instantly.
>
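A minimal sketch of the missing piece the reply points at, i.e. enabling checkpointing
(the 60-second interval is arbitrary); in configuration terms this is
'execution.checkpointing.interval'.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnableCheckpointingExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // Hudi commits on each successful checkpoint
    }
}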
Hi,
I have two connectors created with the SQL CLI: the source from Kafka/Debezium, and
the sink to S3 Hudi.
I can SELECT from the source table OK, and I can issue INSERT INTO the sink OK,
so I think both of them work fine. Both have the same table structure, jus
However, when I do:
INSERT INTO sink
SELECT id, LAS
e are also in the logs if you set the appropriate logging
>> level.
>>
>> [1]
>> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/overview/#execute-a-query
>> [2] https://stackoverflow.com/a/65681975
>>
>>
>> dz902 于2022年3月1
zhi Weng wrote:
> Hi!
>
> For stages and logs you can refer to the web UI. For generated code, set the
> logging level of org.apache.flink.table.runtime.generated.CompileUtils to
> DEBUG.
>
> What query are you running? If possible can you share your SQL in the
> mailing list?
>
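Alongside the DEBUG logging hint above, a hedged Table API sketch (the table and query are
made-up examples) of printing the plan, which shows the AST and the optimized plans for a query:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ExplainExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
            "CREATE TABLE src (id INT, v STRING) WITH ('connector' = 'datagen')");
        // Prints the abstract syntax tree and the optimized plans for the query
        System.out.println(tEnv.explainSql("SELECT id, COUNT(*) AS cnt FROM src GROUP BY id"));
    }
}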
Hi,
I'm trying to debug SQL queries from the SQL Client or a Zeppelin notebook (the job
is submitted to a remote cluster).
I have a query that is not getting any data. How do I debug it? Can I see the actual
code generated from the SQL query? Or is it possible to show all the
stages, actions, or logs generated by the quer