Thanks Tamir,
I was having some issues connecting from my IDE (now solved), but this is
really helpful.
On Sat, Mar 6, 2021, 23:04 Tamir Sagi wrote:
> I had a typo in my previous answer, the env name was missing an 'S'
>
> ENABLE_BUILT_IN_PLUGIN --> ENABLE_BUILT_IN_PLUGIN*S*
> once again, the value
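> (For reference, this variable is passed to the official Flink Docker
> image; the plugin jar and version tag below are illustrative:)
>
>     docker run -e ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.12.2.jar flink:1.12.2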
Hi Yuval,
That's correct, you will always get a LogicalWatermarkAssigner if you
assigned a watermark.
If you implement SupportsWatermarkPushDown, the LogicalWatermarkAssigner
will be pushed into the TableSource, and then the Filter can be pushed into
the source if the source implements SupportsFilterPushDown.
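A minimal sketch of a source implementing both abilities (the class name
and method bodies here are illustrative, not taken from a real connector):

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.table.connector.ChangelogMode;
    import org.apache.flink.table.connector.source.DynamicTableSource;
    import org.apache.flink.table.connector.source.ScanTableSource;
    import org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
    import org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;
    import org.apache.flink.table.data.RowData;
    import org.apache.flink.table.expressions.ResolvedExpression;

    // Illustrative source implementing both pushdown abilities.
    public class MyTableSource
            implements ScanTableSource, SupportsWatermarkPushDown, SupportsFilterPushDown {

        private WatermarkStrategy<RowData> watermarkStrategy;

        @Override
        public void applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) {
            // Called when the planner pushes the LogicalWatermarkAssigner
            // into the source; the source must then emit watermarks itself.
            this.watermarkStrategy = watermarkStrategy;
        }

        @Override
        public Result applyFilters(List<ResolvedExpression> filters) {
            // Accept all filters for illustration; a real connector keeps
            // only the predicates it can evaluate and returns the rest as
            // remaining filters so the planner re-applies them.
            return Result.of(filters, new ArrayList<>());
        }

        @Override
        public ChangelogMode getChangelogMode() {
            return ChangelogMode.insertOnly();
        }

        @Override
        public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
            // Creating the actual runtime reader is elided in this sketch.
            throw new UnsupportedOperationException("sketch only");
        }

        @Override
        public DynamicTableSource copy() {
            // A real connector must also copy the pushed-down state here.
            return new MyTableSource();
        }

        @Override
        public String asSummaryString() {
            return "MyTableSource";
        }
    }

With both abilities in place, the watermark assigner and the accepted
filters should appear folded into the TableSourceScan in the plan.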
Best,
Hi Jark,
Even after implementing both, I don't see the watermark being pushed into
the TableSource in the logical plan, which prevents predicate pushdown from
running.
On Sun, Mar 7, 2021, 15:43 Jark Wu wrote:
> Hi Yuval,
>
> That's correct you will always get a LogicalWatermarkAssigner if you
> assigned a watermark.
Hey folks,
The community here has been awesome with my recent questions about Flink, so
I’d like to give back. I’m already a member of the ASF JIRA but I was wondering
if I could get access to the Flink Project.
I’ve contributed a good bit to Apache Beam in the past, but I figured that I’ll
b
Hey Rion,
you don't need special access to Flink's Jira: any Jira user is assignable
to tickets, but only committers can assign people.
For low-hanging fruit, we have a "starter" label to tag those tickets. I
also recommend keeping an eye on Jira tickets about topics you are
experienced with / i
Hi Hemant,
I don't see any problem in your settings. Are there any exceptions
suggesting why the TM containers are not coming up?
Thank you~
Xintong Song
On Sat, Mar 6, 2021 at 3:53 PM bat man wrote:
> Hi Xintong Song,
> I tried using the java options to generate heap dump referring to docs[1]
> in flink-
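(For reference, heap dumps are typically enabled through JVM options in
flink-conf.yaml; the dump path below is illustrative:)

    env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/flink.hprof"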
Hi Yuval, Jark, Timo,
Currently the watermark push down happens in the logical rewrite phase but
the filter push down happens in the logical phase, which means the planner
will first check the Filter push down and then check the watermark push
down.
I think we need a rule to transpose between the
I think you want to submit a Flink Python job to the existing session
cluster.
Please ensure the session cluster is created with the proper service
exposed type [1]:
* LoadBalancer for cloud environments
* NodePort for self-managed K8s clusters
* ClusterIP for K8s-internal submission, which means
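(For reference, the exposed type is controlled by the option below, and a
Python job can then be submitted against the session's cluster id; the
cluster id and job path are illustrative:)

    kubernetes.rest-service.exposed.type: NodePort
    ./bin/flink run --target kubernetes-session \
        -Dkubernetes.cluster-id=my-session-cluster -py my_job.py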
Hi Navneeth,
It seems from the stack trace that the exception is caused by underlying
EFS problems. Have you checked whether there are errors reported for EFS,
or whether the same EFS might be mounted in multiple places and someone
else has deleted the directory?
Best,
Yun
Hi Yun,
Thanks for the response. I checked the mounts, and only the JMs and TMs
have this EFS mounted. Not sure how to debug this.
Thanks
On Sun, Mar 7, 2021 at 8:29 PM Yun Gao wrote:
> Hi Navneeth,
>
> It seems from the stack that the exception is caused by the underlying EFS
> problems
The company has provided an algorithm written in C++, which has been packaged
into a .so file. I have built a Spring Boot project that uses JNI to call the
C++ algorithm. Could you please tell me how to call it from Flink? Do I need
to define operators, or chains of operators?
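(For reference, a minimal sketch of one common approach; NativeAlgoMap,
runAlgorithm, and the "algo" library name are illustrative assumptions. The
JNI call is wrapped in a regular user function, so no custom operator is
needed:)

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;

    // Illustrative wrapper: load the native library once per task and
    // call the C++ algorithm from an ordinary map function.
    public class NativeAlgoMap extends RichMapFunction<String, String> {

        // Hypothetical JNI binding to the packaged C++ algorithm.
        private static native String runAlgorithm(String input);

        @Override
        public void open(Configuration parameters) {
            // The .so must be available on every TaskManager node,
            // e.g. on java.library.path.
            System.loadLibrary("algo");
        }

        @Override
        public String map(String value) {
            return runAlgorithm(value);
        }
    }

Used as stream.map(new NativeAlgoMap()), Flink chains such functions with
neighboring operators automatically, so no explicit operator chains need to
be defined.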
Hi,
I tried with the standalone session (sorry, I do not have a YARN cluster at
hand) and it seems that the Flink cluster can start up normally. Could you
check the log of the NodeManager to see the detailed reason why the
container does not get launched? Also, have you checked if there are some
spell
Hi!
I'm running a backfill Flink streaming job over older data. It has multiple
interval joins. I noticed my checkpoints are steadily growing in size. I'd
expect my checkpoints to stabilize and not grow.
Is there a setting to prune useless data from the checkpoint? My top guess
is that my checkpo
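(For reference: the interval join's own state is cleaned up as watermarks
advance, so during a backfill it can legitimately grow while watermarks lag.
For user-defined keyed state there is a TTL mechanism; a minimal sketch with
an illustrative one-hour TTL and state name:)

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    // Illustrative: expire user keyed state one hour after it was written.
    StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.hours(1))
            .cleanupFullSnapshot() // drop expired entries from full snapshots
            .build();

    ValueStateDescriptor<Long> descriptor =
            new ValueStateDescriptor<>("my-state", Long.class);
    descriptor.enableTimeToLive(ttlConfig);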
Hi Alexey,
Thanks for confirming.
Can you send me a copy of the exception stack trace? That could help me
pinpoint the exact issue.
Cheers,
Gordon
On Fri, Mar 5, 2021 at 2:02 PM Alexey Trenikhun wrote:
> Hi Gordon,
> I was using RocksDB backend
> Alexey
>