Hi Jason,
The best option would indeed be to make the dimension data available in
something like a database which you can access via JDBC, HBase or Hive.
Those do support lookups.
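For example, a lookup against a JDBC-backed dimension table can be sketched in Flink SQL roughly like this (all table, column, and connection names here are hypothetical):

```sql
-- Hypothetical dimension table backed by MySQL via the JDBC connector
CREATE TABLE dim_products (
  id   INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector'  = 'jdbc',
  'url'        = 'jdbc:mysql://example-host:3306/mydb',
  'table-name' = 'products'
);

-- Lookup join: enrich a stream with the current dimension row
SELECT o.order_id, p.name
FROM orders AS o
JOIN dim_products FOR SYSTEM_TIME AS OF o.proc_time AS p
  ON o.product_id = p.id;
```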
Best regards,
Martijn
On Thu, 20 Jan 2022 at 22:11, Jason Yi <93t...@gmail.com> wrote:
> Thanks for the quick resp
Changing the order of exec command makes sense to me. Would you please
create a ticket for this?
The /opt/flink/conf is cleaned up because we are mounting the conf files
from K8s ConfigMap.
Best,
Yang
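For context, a minimal sketch of such a mount (resource names are hypothetical): a ConfigMap mounted at /opt/flink/conf shadows whatever was in that directory, which is why files written there earlier disappear.

```yaml
# Hypothetical pod spec fragment: the ConfigMap volume shadows /opt/flink/conf
volumes:
  - name: flink-config
    configMap:
      name: flink-config
containers:
  - name: jobmanager
    image: flink:1.14
    volumeMounts:
      - name: flink-config
        mountPath: /opt/flink/conf
```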
Tamir Sagi wrote on Tue, Jan 18, 2022 at 17:48:
> Hey Yang,
>
> Thank you for confirming it.
>
> IMO, a b
Hi Meghajit,
Thanks for sharing your use case.
As a workaround, you could try naming your files in a timestamp style. More
details can be found here [1].
Another small concern is that Flink is a distributed system, which means
that we cannot assume any order even if we
Hi, Paul
Would you mind sharing some information, such as the Flink version you used
and the memory of the TM and JM?
And when does the timeout happen? At the beginning of the job, or during the
running of the job
Best,
Guowei
On Thu, Jan 20, 2022 at 4:45 PM Paul Lam wrote:
> Hi,
>
> I’m tuning a
As per another recent thread, this is still an issue.
On Wed, 19 Jan 2022 at 06:36, Chesnay Schepler wrote:
> This is a serialization bug in Flink, see
> https://issues.apache.org/jira/browse/FLINK-24550.
> It will be fixed in the upcoming 1.14.3 release.
>
> On 19/01/2022 09:01, Caizhi Weng wro
I had the same issue; in my thread it was mentioned that it was supposed to
be fixed in 1.14.3.
On Thu, 20 Jan 2022 at 07:40, Martin wrote:
> Thanks for the quick response. I assumed that's already known, but was not
> able to find the issue. Thanks :)
>
> Chesnay Schepler wrote on 20.01.2022 13:
Thanks for the quick response.
Is there any best or suggested practice for the case where we have data sets
in a filesystem that we want to use in Flink as reference data (like
dimension data)?
- Would making dimension data a Hive table or loading it into a table in
RDBMS (like MySQL)
Hi Robert,
I agree with you, I mean that's why I was writing a K8s operator, but the
restriction wasn't decided by me, it was imposed on me. I guess my thinking was
rather that an operator wouldn't necessarily supersede standalone+reactive, at
least not in my case, but that certainly doesn't me
Hi Jason,
It's not (properly) supported and we should update the documentation.
There is no out-of-the-box possibility to use a file from the filesystem as
a lookup table, as far as I know.
Best regards,
Martijn
On Thu, 20 Jan 2022 at 18:44, Jason Yi <93t...@gmail.com> wrote:
> Hello,
>
> I have da
Just tried this again with Flink 1.14.3 since
https://issues.apache.org/jira/browse/FLINK-24550 is listed as fixed. I am
running into similar errors when calling the /v1/jobs/overview endpoint
(without any running jobs):
{"errors":["Internal server error.",""]}
Peter Westermann
Team Lead – Re
Hello,
[Apologies if this group does not answer questions related to AIFlow
project and happy to learn if there are other email handles I need to send
my questions to]
I am new to AIFlow and exploring some demo projects for a simple workflow I
want to try with two flink jobs, a batch (bounded pro
Hi Alexis,
> The usage of Custom Resource Definitions (CRDs). The main reason given to
> me was that such resources are global (for a given cluster) and that is not
> desired. I know that ultimately a CR based on a CRD can be scoped to a
> specific namespace, but customer is king…
I don't think th
Hello,
I have data sets in s3 and want to use them as lookup tables in Flink. I
defined tables with the filesystem connector and joined the tables to a
table, defined with the Kinesis connector, in my Flink application. I
expected its output to be written to s3, but no data was written to a sink
t
On Thu, Jan 20, 2022 at 2:46 AM yidan zhao wrote:
> self-define the window assigners.
>
Thanks, I'll check that out. If you have links to especially good examples
and explanations, that would be great. Otherwise, I presume the Flink
codebase itself is the place to start.
--
Cheers,
Aeden
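For what it's worth, the staggering idea behind such custom assigners boils down to the window-start arithmetic Flink uses internally. A minimal, Flink-free sketch (the per-key-group offsets are an assumption for illustration):

```python
def window_start(timestamp_ms: int, size_ms: int, offset_ms: int) -> int:
    """Start of the tumbling window containing timestamp_ms, shifted by
    offset_ms -- the same arithmetic as TimeWindow.getWindowStartWithOffset."""
    return timestamp_ms - ((timestamp_ms - offset_ms) % size_ms)

# Staggering: give each key group its own offset so windows end (and
# materialize) at different times instead of all firing at once.
size = 60_000  # 1-minute windows
for key_group in range(3):
    offset = key_group * 20_000  # hypothetical stagger per key group
    start = window_start(130_000, size, offset)
    print(key_group, start, start + size)
```

With staggered offsets, each key group's windows close 20 seconds apart, spreading out the materialization load.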
Hi,
We have been looking at using Stateful Functions to deploy a remote
Python model as a stateful function and to interact with it from Flink via
Kafka.
Everything has worked well until we ran into some in-house deployment
issues around the various environments.
This coupled with the use case (where we
Hi Caizhi,
Thanks for responding.
> So you'd like to flatten the traffic by materializing the results of
> different parallelisms at different times?
Yes.
> What's your use case for streaming windows?
In short, summarizing many-many-many millions of sessions every minute
involving mostly stateless,
Hi Guowei,
Thanks for your answer. Regarding your question,
> Currently there is no such public interface, which you could extend to
> implement your own strategy. Would you like to share the specific problem
> you currently meet?
The GCS bucket that we are trying to read from is periodically popul
Thanks for the quick response. I assumed that's already known, but was not able to find the issue. Thanks :)
Chesnay Schepler wrote on 20.01.2022 13:36 (GMT +01:00):
This is a bug in Flink for which I have filed a ticket: https://issues.apache.org/jira/browse/FLINK-25732
As is you can only req
This is a bug in Flink for which I have filed a ticket:
https://issues.apache.org/jira/browse/FLINK-25732
As is you can only request the job overview from the leading jobmanager.
On 20/01/2022 13:15, Martin wrote:
Hey,
I upgraded today my Flink application to Flink 1.14.3.
I run it in a HA-
Hey,
I upgraded today my Flink application to Flink 1.14.3.
I run it in an HA standalone K8s deployment with 2 JobManagers, one active and one on standby. As it's only a prototype, I make the UI (port 8081 of the JobManager pods) available via NodePort.
Already with older Flink versions I sometimes got
Thanks Nico.
I will let you know the results
On Thu, 20 Jan 2022 at 10:39, Nico Kruber wrote:
> Hi,
> unfortunately, the gradle example in the docs has grown a bit old [1] and I
> haven't gotten around to updating it yet. Nonetheless, we are using an
> updated version internally and so far
Hello,
I'm writing to ask for help with generating completion hints for Flink SQL.
I'm trying to use the Calcite SqlAdvisor with the Flink parser. My problem
is that I can get completion working for table names, but not column names.
"select a.mgr from ^stuff a"
gives me good results: CATALOG.S
self-define the window assigners.
Caizhi Weng wrote on Mon, Jan 17, 2022 at 13:11:
> Hi!
>
> So you'd like to flatten the traffic by materializing the results of
> different parallelisms at different times?
>
> As far as I know this is not possible. Could you please elaborate more on
> the reason you'd like t
Hi,
unfortunately, the gradle example in the docs has grown a bit old [1] and I
haven't gotten around to updating it yet. Nonetheless, we are using an updated
version internally and so far this has been working fine. The latest project
we've been using this at is available at:
https://github.com
Hi, Meghajit
1. From the implementation [1], the order of the splits depends on the
implementation of the FileSystem.
2. From the implementation [2], the order of the files also depends on the
implementation of the FileSystem.
3. Currently there is no such public interface, which you could extend to
imp
Hi,
I’m tuning a Flink job with 1000+ parallelism, which frequently fails with Akka
TimeOutException (it was fine with 200 parallelism).
I see some posts recommend increasing `akka.ask.timeout` to 120s. I'm not
familiar with Akka, but it looks like a very long time compared to the default
10s
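If raising the timeout does turn out to help, the settings usually touched are along these lines (illustrative values only, not a recommendation):

```yaml
# flink-conf.yaml -- illustrative values only (default akka.ask.timeout is 10 s)
akka.ask.timeout: 120 s
# REST endpoint timeout, in milliseconds
web.timeout: 120000
```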