Hi all!
I'm trying to understand the logic of saving checkpoint files, and in the
traffic dump of the exchange with Ceph I see the following requests:

HEAD /checkpoints/example-job//shared/9701fae2-0de3-4d6c-b08b-0a92fb7285c9 HTTP/1.1
HTTP/1.1 404 Not Found
HEAD /checkpoints/e
Thank you for the tips, I will try these out
From: Josh Mahonin
Sent: 18 January 2024 21:07
To: Qing Lim
Cc: Jun Qin ; User
Subject: Re: Use different S3 access key for different S3 bucket
Oops my syntax was a bit off there, as shown in the Hadoop docs, it looks like:
fs.s3a.bucket..
Josh
Glad to hear this!
Best,
Zakelly
On Fri, Jan 19, 2024 at 9:22 AM Konstantinos Karavitis
wrote:
> I would like again to thank you as we managed to fix this strange issue we
> had by moving all the state initializations into the open method of
> ProcessFunction!
>
> On Thu, Jan 18, 2024 at 11:53
Please send an email to user-unsubscr...@flink.apache.org and
user-zh-unsubscr...@flink.apache.org if you want to unsubscribe from
user@flink.apache.org and user...@flink.apache.org. You can refer to
[1][2] for more details.
Best,
Junrui
[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%
Unsubscribe
Hi Alexandre,
I couldn't find the image apache/flink-statefun-playground:3.3.0-1.0 in Docker
Hub.
You can temporarily use the release-3.2 version.
Hi Martijn, did we miss pushing it to the Docker registry?
Best,
Jiabao
[1] https://hub.docker.com/r/apache/flink-statefun-playground/tags
On 2
I would like again to thank you as we managed to fix this strange issue we
had by moving all the state initializations into the open method of
ProcessFunction!
On Thu, Jan 18, 2024 at 11:53 PM Konstantinos Karavitis <
kkaravi...@gmail.com> wrote:
> Thank you very much Zakelly for taking the time
Thank you very much Zakelly for taking the time to answer my question. I
appreciate it a lot.
Unfortunately, I cannot share the source code, as it is confidential and
owned by the company that I cooperate with.
But yes, you are right that inside the code, I can see that the state
initialization
Hi Qing,
You may have some luck with using per-bucket S3 configuration. Assuming
you're using the flink-s3-fs-hadoop plugin, you should be able to apply
different access keys to different buckets, e.g.:
https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#Configuring_differen
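As a concrete sketch of what that per-bucket configuration might look like in
flink-conf.yaml (the bucket names, keys, and endpoint below are placeholders,
not from the thread):

```yaml
# Global defaults, used for any bucket without a per-bucket override
fs.s3a.access.key: DEFAULT_ACCESS_KEY
fs.s3a.secret.key: DEFAULT_SECRET_KEY

# Per-bucket overrides follow the pattern fs.s3a.bucket.<bucket-name>.<option>
fs.s3a.bucket.bucket-a.access.key: BUCKET_A_ACCESS_KEY
fs.s3a.bucket.bucket-a.secret.key: BUCKET_A_SECRET_KEY

fs.s3a.bucket.bucket-b.access.key: BUCKET_B_ACCESS_KEY
fs.s3a.bucket.bucket-b.secret.key: BUCKET_B_SECRET_KEY
# Non-AWS S3 stores can also get a per-bucket endpoint
fs.s3a.bucket.bucket-b.endpoint: https://s3.internal.example.com
```

With the flink-s3-fs-hadoop plugin, these Hadoop-style keys are forwarded to
the S3A filesystem, and the per-bucket values take precedence over the global
ones when a path in that bucket is accessed.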
Hi,
I am trying to run the example provided here:
https://github.com/apache/flink-statefun-playground/tree/release-3.3/python/greeter
1 - Following the README, with Docker (which I installed):
"docker-compose build" works well, but "docker-compose up" returns an error:
[image: image.png]
2 - W
Hi, I am suddenly starting to get
* java.lang.IllegalArgumentException: The minBucketMemorySize is not valid!
It comes from
`org.apache.flink.table.runtime.util.collections.binary.AbstractBytesHashMap`.
I believe the actual value is coming from generated code; any advice on what
we
Hi Yang,
You can run `StandaloneAutoscalerEntrypoint`[1], and the scaling report will
be printed in the log (INFO level) by LoggingEventHandler[2].
[1]
flink-kubernetes-operator/flink-autoscaler-standalone/src/main/java/org/apache/flink/autoscaler/standalone/StandaloneAutoscalerEntrypoint.java
at main · apa
Hi,
I have a question about how to correctly set up a test that will read
input from locally provided collection in bounded mode and provide
outputs at the end of the computation. My test case looks something like
the following:
String[] lines = ...;
try (StreamExecutionEnvironment env =
St
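A minimal sketch of one way to set that up, assuming Flink 1.12+ (where
`RuntimeExecutionMode` and `executeAndCollect` are available); the pipeline
and class name here are illustrative, not from the thread:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BoundedCollectionExample {

    static List<String> run() throws Exception {
        String[] lines = {"a", "b", "c"};

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // BATCH mode: the bounded collection source is processed like a batch
        // job, and the pipeline terminates once the input is exhausted.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);
        env.setParallelism(1); // deterministic output order for a test assertion

        // executeAndCollect runs the job and ships the results back to the
        // client, so the test can inspect them after the computation ends.
        return env.fromCollection(Arrays.asList(lines))
                  .map(s -> s.toUpperCase())
                  .executeAndCollect(100);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

With parallelism 1 the collected list preserves the input order, which makes
it straightforward to assert on in a unit test.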
Hi Jun
I am indeed talking about processing two different tables, but I don't see any
option that allows configuring credentials at the Flink table level. Do you
know where it is documented?
Today we are setting the credentials via Flink conf yaml, which is documented
here:
https://nightlies.apach
Hi Qing
The S3 credentials are associated with Flink SQL tables.
I assume you are talking about processing/joining two different tables,
backed by two different S3 buckets. If so, you can provide different
credentials for different tables, then use the two tables in your pipeline.
Jun
Hi Sun,
As Dulce said, running in a cluster is typically recommended. However, if for
some reason you need to run in standalone mode, why do you recreate the
cluster for each job? Can you try to reuse the MiniCluster?
I happen to have a similar setup where we are running in standalone mode, and
run m
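For reference, a sketch of what that reuse could look like. This uses Flink's
internal MiniCluster API (not a stable public API), and the class name
`SharedMiniCluster` is illustrative:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;

public class SharedMiniCluster {

    private static MiniCluster cluster;

    // Start one MiniCluster up front and hand out the same instance for
    // every job, instead of paying the startup cost on each submission.
    public static synchronized MiniCluster get() throws Exception {
        if (cluster == null) {
            MiniClusterConfiguration cfg = new MiniClusterConfiguration.Builder()
                    .setConfiguration(new Configuration())
                    .setNumTaskManagers(1)
                    .setNumSlotsPerTaskManager(4)
                    .build();
            cluster = new MiniCluster(cfg);
            cluster.start();
        }
        return cluster;
    }

    public static void submit(JobGraph jobGraph) throws Exception {
        // Blocks until the job finishes; the cluster stays up afterwards,
        // ready for the next submission.
        get().executeJobBlocking(jobGraph);
    }
}
```

The trade-off is that all jobs share one JVM and one set of slots, so slot
sizing needs to account for the largest job you expect to submit.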
Hello dear flink community,
I noticed that there's a scaling report feature (specifically, the strings
defined in AutoscalerEventHandler) in the Flink operator autoscaler.
However, I'm unable to find this information in the Flink operator logs.
Could anyone guide me on how to access or visualize t
Hi, I am using Flink SQL to create tables backed by S3 buckets.
We are not using AWS S3, so we have to use an access key and secret for auth.
My pipeline depends on 2 different buckets, each requiring different
credentials. Can Flink support this?
Qing Lim | Marshall Wace LLP, George House, 131 Sl