Hi Till, Fabian,
Thanks for your responses again.
Till, you have nailed it. I will comment on them individually. But first, I
feel like I am still not stating it well enough to illustrate the need. Maybe
I am overthinking :)
So let me try one more time with a preface that we are talking abou
All,
I have another slow memory-leak situation using a basic time-session window
(earlier it was GlobalWindow related that Fabian helped clarify).
I have a very simple data pipeline:
DataStream processedData = rawTuples
.window(ProcessingTimeSessionWindows.withGap(Time.seconds(A
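The session semantics that `ProcessingTimeSessionWindows` implements can be sketched in plain Java (the class and method names below are hypothetical, and this simplifies Flink's actual merging-window machinery): elements whose timestamps are separated by at least the gap fall into different sessions.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of session-gap semantics: timestamps closer than
// `gapMillis` to their predecessor belong to the same session window.
public class SessionGapSketch {
    static List<List<Long>> sessionize(List<Long> sortedTimestamps, long gapMillis) {
        List<List<Long>> sessions = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        for (long ts : sortedTimestamps) {
            if (!current.isEmpty() && ts - current.get(current.size() - 1) >= gapMillis) {
                sessions.add(current); // gap reached: close the current session
                current = new ArrayList<>();
            }
            current.add(ts);
        }
        if (!current.isEmpty()) {
            sessions.add(current);
        }
        return sessions;
    }

    public static void main(String[] args) {
        // With a 1s gap: two sessions, {0, 100, 150} and {5000, 5100}.
        System.out.println(sessionize(List.of(0L, 100L, 150L, 5000L, 5100L), 1000L));
    }
}
```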
Good evening,
According to Flink's documentation I have excluded the Flink runtime library
from the runtime dependencies of my project:
dependencies {
compileOnly group: 'org.apache.flink', name: 'flink-core',
version: '1.4.2'
compileOnly group: 'org.apache
Hi!
I am trying to fetch metrics provided by the Beam SDK via the Flink runner in
detached mode, but it looks like this is not supported yet.
I understand from the class DetachedJobExecutionResult that metrics cannot
be extracted in detached-mode job execution. Is this a
limitation of Flink as a runn
Hmm, could you maybe share the client logs with us?
Cheers,
Till
On Fri, Jun 15, 2018 at 4:54 PM Garvit Sharma wrote:
> Yes, I did.
>
> On Fri, Jun 15, 2018 at 6:17 PM Till Rohrmann
> wrote:
>
>> Hi Garvit,
>>
>> have you exported the HADOOP_CLASSPATH as described in the release notes
>> [1]?
>
I remember that another user reported something similar, but he wasn't
using the PrometheusReporter; see
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JVM-metrics-disappearing-after-job-crash-restart-tt20420.html
We couldn't find the cause, but my suspicion was FLINK-8946.
I’m seeing a different exception when producing to Kinesis, which seems to
have to do with back-pressure handling:
java.lang.RuntimeException: An exception was thrown while processing a record:
Rate exceeded for shard shardId-0026 in stream turar-test-output under
account .
Rate exceeded
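KPL throttling errors like "Rate exceeded" are typically handled by backing off and retrying rather than failing the job. As a minimal sketch (hypothetical names; this is not the Kinesis connector's actual code), a capped exponential backoff looks like:

```java
// Minimal capped exponential-backoff sketch (hypothetical; not the actual
// Kinesis connector code): the delay doubles per attempt up to a maximum.
public class BackoffSketch {
    static long backoffMillis(int attempt, long baseMillis, long maxMillis) {
        long delay = baseMillis << Math.min(attempt, 20); // clamp shift to avoid overflow
        return Math.min(delay, maxMillis);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            // prints 100, 200, 400, 800, 1600, 2000 ms
            System.out.println("attempt " + attempt + " -> "
                    + backoffMillis(attempt, 100L, 2000L) + " ms");
        }
    }
}
```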
Yes, I did.
On Fri, Jun 15, 2018 at 6:17 PM Till Rohrmann wrote:
> Hi Garvit,
>
> have you exported the HADOOP_CLASSPATH as described in the release notes
> [1]?
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/release-notes/flink-1.5.html#hadoop-classpath-discovery
>
> Chee
Hi,
this sounds very strange. I just tried it out locally with a standard
metric and the Prometheus metrics seem to be unregistered after the job has
reached a terminal state. Thus, it looks as if the standard metrics are
properly removed from `CollectorRegistry.defaultRegistry`. Could you ch
Hi Garvit,
have you exported the HADOOP_CLASSPATH as described in the release notes
[1]?
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.5/release-notes/flink-1.5.html#hadoop-classpath-discovery
Cheers,
Till
On Fri, Jun 15, 2018 at 2:22 PM Garvit Sharma wrote:
> Does someone has
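For reference, the export pointed to by the linked release notes is typically done as follows (it requires a working `hadoop` binary on the PATH):

```shell
# Make Hadoop's classes visible to Flink 1.5+ (Hadoop classpath discovery).
export HADOOP_CLASSPATH=`hadoop classpath`
```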
Hi Peter,
this sounds very strange. I just tried to reproduce the issue locally but
for me it worked without a problem. Could you maybe share the jobmanager
logs on DEBUG log level with us?
As a side note, enabling the asynchronous checkpointing mode for the
FsStateBackend does not have an effect
Does anyone have any idea how to get rid of the above parse exception while
submitting a Flink job to YARN?
I have already searched the internet but could not find a solution.
Please help.
On Fri, Jun 15, 2018 at 9:15 AM Garvit Sharma wrote:
> Thanks Chesnay, Now it is connecting to the Resourc
Hi James,
as long as you do not change anything for `sql1`, it should work to recover
from a savepoint if you pass the `-n`/`--allowNonRestoredState` option to
the CLI when resuming your program from the savepoint. The reason is that
an operator's generated uid depends on the operator and on its in
Hi,
At the moment (Flink 1.5.0), the operator UIDs depend on the overall
application and not only on the query.
Hence, changing the application by adding another query might change the
existing UIDs.
In general, you can only expect savepoint restarts to work if you don't
change the application an
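A toy model (not Flink's actual StreamGraph hashing) of why generated operator IDs shift when the application changes: each auto-generated ID is derived from the chain of operators leading up to it, so inserting a new query upstream perturbs every downstream ID. Explicitly assigning `uid(...)` on each operator avoids this, which matches the advice in this thread.

```java
import java.util.List;
import java.util.Objects;

// Simplified model (NOT Flink's real algorithm): an operator's generated id
// is derived from its name plus all upstream operators, so inserting a new
// operator changes every downstream generated id.
public class UidSketch {
    static int chainedId(List<String> operatorsUpToHere) {
        int id = 17;
        for (String op : operatorsUpToHere) {
            id = 31 * id + Objects.hashCode(op);
        }
        return id;
    }

    public static void main(String[] args) {
        int before = chainedId(List.of("source", "sql1", "sink"));
        int after  = chainedId(List.of("source", "sql2", "sql1", "sink"));
        // The sink's generated id changed even though sql1 itself did not.
        System.out.println(before == after);
    }
}
```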
Hi,
ideally we would not have to cancel all tasks and redeploy the whole
job in case of a restart. Instead we should do what you've outlined:
Redeploy the failed tasks and reset the state of all other running tasks.
At the moment, this is, however, not yet possible. While improving Flink's
re
@Alexey
If you’d like to stick to 1.4.x for now, you can just do:
`mvn clean install -Daws.kinesis-kpl-version=0.12.6` when building the Kinesis
connector, to upgrade the KPL version used.
I think we should add this to the documentation. Here’s a JIRA to track that -
https://issues.apache.org/j
Hi:
My application uses Flink SQL. I want to add a new SQL query to the
application. For example, the first version is
DataStream paymentCompleteStream = getKafkaStream(env,
bootStrapServers, kafkaGroup, orderPaymentCompleteTopic)
.flatMap(new
PaymentComplete2AggregatedOrderItemFlatMap()).assignT
Hi Stefan,
Thanks for the answer.
Fixing the uids solved the problem, that's not an issue anymore.
The savepoint directory is there, but the RocksDB state is not restored
after restarting the application, because that state directory was removed
when I stopped the application. It looks like th
Porting and rebuilding 1.4.x isn't a big issue. I've done it on our fork back
when I reported the issue above, and we're running fine.
https://github.com/SaleCycle/flink/commit/d943a172ae7e6618309b45df848d3b9432e062d4
Ignore the circleci file of course, and the rest are the changes that I bac