It is really helpful to find the lost container quickly. In our internal Flink
version, we optimized this with task reports and JobMaster probes. When a task
fails because of a connection error, it reports to the JobMaster. The JobMaster
will then try to confirm the liveness of the unreachable TaskManager for cer
Hi Team,
Please find the query below.
Use Case: Using the parallelism.default property in flink-conf.yaml to set a
system-level default parallelism in the Flink configuration.
Issue: Even after setting parallelism.default to 3, the job starts with a
parallelism of 1.
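A common cause is that a more specific setting takes precedence over flink-conf.yaml: operator-level setParallelism, the environment-level parallelism, and the client-side -p flag all override parallelism.default. A minimal config sketch (assuming nothing more specific is set elsewhere):

```yaml
# flink-conf.yaml -- cluster-wide default parallelism; only used when no
# more specific parallelism is set (operator, environment, or `flink run -p`)
parallelism.default: 3
```

It is worth checking that the job code does not call setParallelism(...) anywhere and that the job is not submitted with an explicit -p value, as either would explain seeing parallelism 1 despite this setting.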
Hi Mans,
`LookupTableSource` is used to look up rows from an external storage system by
given keys. It is well suited to key/value storage systems (e.g. Redis,
HBase) or storage systems with a key concept (e.g. MySQL, Hive).
`ScanTableSource` is used to scan all rows from an external storage system.
Some
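The distinction shows up in a lookup join: the planner uses a LookupTableSource when the right-hand table is joined with FOR SYSTEM_TIME AS OF on equality keys, while a plain SELECT over the same table would go through a ScanTableSource. A minimal Flink SQL sketch (table and column names are hypothetical):

```sql
-- `orders` is a streaming table; `customers` is backed by an external
-- key/value or JDBC store implementing LookupTableSource.
SELECT o.order_id, o.amount, c.customer_name
FROM orders AS o
JOIN customers FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.customer_id = c.customer_id;
```

Here each incoming order row triggers a point lookup by customer_id instead of a full scan of the customers table.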
Hi Alexis,
I created a ticket[1] a while ago to make the ApplicationDeployer interface
public. However, the community is very careful about adding new public
interfaces, because it makes maintenance more difficult. AFAIK, the
ApplicationDeployer is stable and it is a very basic requirement for
Applic
Hi Jameson,
Thanks very much for replying; I am really struggling with this.
I am using flowId as my keys, which means they will be matched and never used
again.
This seems like scenario 2. I didn't know it has not been fixed yet.
Thank you again. Do you have any solutions?
On 2021/07/07 01:47:00
Hi Li,
How big is your keyspace? I had a similar problem, which turned out to
be scenario 2 in this issue:
https://issues.apache.org/jira/browse/FLINK-19970. It looks like the bug
in scenario 1 got fixed but scenario 2 did not. There's more detail in
this thread:
http://deprecated-apache-flink-user-ma
Hey Folks:
I am trying to understand how LookupTableSource works and have a few questions:
1. Are there other examples/documentation on how to create a query that uses it
vs. ScanTableSource?
2. Are there any best practices for using this interface?
3. How does the planner decide to use LookupTableS
Hello Roman,
Sorry, I did some more testing and the original failure was caused by a
different part of the pipeline. When I added a new stateless operator, it was
able to restart from the previous savepoint with no issue.
Another question I have is, since you explicitly asked whether it's a savepo
Hi Alexis,
KubernetesSessionCli provides similar functionality IIUC, but it's
also marked as @Internal (so it will likely change in the future; the
REST APIs it uses aren't likely to change, but I guess that doesn't help,
as you'd like some helper classes).
I think it's a good idea to ask this ques
Hi Wanghui,
unfortunately, this is not supported to my knowledge. See also this similar
question on Stackoverflow:
https://stackoverflow.com/questions/60950594/flink-encryption-parameters-in-flink-conf-yaml
Best regards,
Nico
On Mon, Jul 5, 2021 at 3:45 PM Wanghui (HiCampus)
wrote:
> Hello, I
Hi Wanghui,
if I understand correctly, you are looking for the config option
security.ssl.algorithms [1]?
Best regards,
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/security/security-ssl/#cipher-suites
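For reference, this option takes a comma-separated list of TLS cipher suites in flink-conf.yaml. A minimal sketch (the suite names below are illustrative; pick ones supported by your JVM):

```yaml
# flink-conf.yaml -- restrict the TLS cipher suites Flink may negotiate
security.ssl.algorithms: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```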
On Tue, Jul 6, 2021 at 3:46 AM Wanghui (HiCampus)
w
Hi,
I have a very strange bug when using MATCH_RECOGNIZE.
I'm using some joins and unions to create an event stream. Sample
event stream (for one user) looks like this:
uuid                                  cif         event_type  v        balance      ts
621456e9-389b-409b-aaca-bca99eeb43b3  0004091386  trx         4294.38  74.52495000
Yes, I have noticed the PR and commented there with some considerations
about the new option. We can discuss further there.
On Tue, Jul 6, 2021 at 6:04 PM Till Rohrmann wrote:
> This is actually a very good point Gen. There might not be a lot to gain
> for us by implementing a fancy algorithm for
This is actually a very good point Gen. There might not be a lot to gain
for us by implementing a fancy algorithm for figuring out whether a TM is
dead or not based on failed heartbeat RPCs from the JM if the TM <> TM
communication does not tolerate failures and directly fails the affected
tasks. T
As Yangze stated, the ticket cache will expire after its lifespan.
Please be aware that when a keytab is used, Flink obtains delegation
tokens which are never actually used.
The fact that delegation token handling is not functioning is a known issue,
and we are working on a fix.
w/o delegation toke
Dear community,
I have been struggling a lot with the deployment of my PyFlink job.
Moreover, the performance seems to be very disappointing, especially the
latency at low throughput. I have been playing around with configuration
values, but it has not been improving.
In short, I have a DataStream job
I am using Flink CEP to do some performance tests.
Flink version 1.13.1.
Below is the SQL:
INSERT INTO to_kafka
SELECT bizName, wdName, wdValue , zbValue , flowId FROM kafka_source
MATCH_RECOGNIZE
(
PARTITION BY flow_id
ORDER BY proctime
MEASURES A.biz_name as bizName, A.wd_name as w
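The statement above is cut off before the PATTERN and DEFINE clauses. For reference, a minimal complete MATCH_RECOGNIZE statement has the shape sketched below (the pattern, conditions, and measure columns here are hypothetical, not the ones from the truncated query):

```sql
INSERT INTO to_kafka
SELECT bizName, wdValue, flowId
FROM kafka_source
MATCH_RECOGNIZE (
  PARTITION BY flow_id
  ORDER BY proctime
  MEASURES
    A.biz_name AS bizName,
    B.wd_value AS wdValue,
    A.flow_id  AS flowId
  ONE ROW PER MATCH
  AFTER MATCH SKIP PAST LAST ROW   -- controls where matching resumes
  PATTERN (A B)
  DEFINE
    A AS A.wd_value > 0,
    B AS B.wd_value > A.wd_value
);
```

For CEP performance tests, the AFTER MATCH strategy and the pattern's greediness both have a large effect on how much state each partition retains.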
Hi Mohit,
Have you figured out any solutions to this problem?
I am now facing exactly the same problem.
I was using Flink version 1.12.0 and I upgraded to 1.13.1, but the
checkpoint size is still growing.
On 2021/06/02 15:45:59, "Singh, Mohit" wrote:
> Hi,
>
> I am facing an i