Hi Chesnay,
Thanks for your quick reply. I have another question. Will the ignored state
be transported to the TMs from the DFS? Or will it be detected by the JM's
checkpoint coordinator, so that only the state required
by the operators is transported to each TM?
Best,
Tony Wei
2018-08-17 14:38 GMT+08:0
Hi,
it will not be transported. The JM does the state assignment to create the
deployment information for all tasks. It will just exclude the state for
operators that are not present. So in your next checkpoints it will no longer
be contained.
Best,
Stefan
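As a side note: when a restored savepoint contains state for operators that no longer exist in the job, the restore has to be explicitly told to skip that state, or it will fail. A sketch of the CLI invocation (the savepoint path and jar name are placeholders):

```
# Restore from a savepoint; --allowNonRestoredState (-n) lets the JM
# skip state that no longer maps to any operator in the new job graph.
bin/flink run -s hdfs:///savepoints/savepoint-abc123 \
    --allowNonRestoredState my-job.jar
```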
> Am 17.08.2018 um 09:26 schrieb T
Hi Miguel,
the issue that you are observing is due to Java's type erasure.
"new MyClass<T>()" is always erased to "new MyClass()" by the
Java compiler, so it is impossible for Flink to extract anything.
For classes in declarations like
class MyClass extends ... {
...
}
the compiler adds t
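The distinction above can be reproduced with plain JDK reflection, independent of Flink. A small self-contained demo (the class and method names are made up for illustration); this is the same mechanism Flink's TypeHint relies on, where an anonymous subclass carries the generic signature in its class file:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;

public class ErasureDemo {

    // Returns the first type argument recorded in the generic superclass
    // signature of the given object's class, or null if there is none.
    static Type capturedTypeArgument(Object o) {
        Type superType = o.getClass().getGenericSuperclass();
        if (superType instanceof ParameterizedType) {
            return ((ParameterizedType) superType).getActualTypeArguments()[0];
        }
        return null;
    }

    public static void main(String[] args) {
        // Plain instantiation: the compiler erases <String>, so the object's
        // class is just java.util.ArrayList and nothing can be extracted.
        ArrayList<String> plain = new ArrayList<String>();
        System.out.println(plain.getClass()); // class java.util.ArrayList

        // Anonymous subclass: "extends ArrayList<String>" is written into
        // the class file, so the type argument survives erasure.
        ArrayList<String> subclassed = new ArrayList<String>() {};
        System.out.println(capturedTypeArgument(subclassed)); // class java.lang.String
    }
}
```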
Hi,
In Flink's documentation, I couldn't find any example that uses a primitive
type when working with state. What would be the initial value of a ValueState
of type Int/Boolean/...? The same question applies to a map state like
MapState[String, Int].
Thanks and regards,
Averell
--
Sent from: http://apach
Hi Stefan,
Thanks for your detailed explanation.
Best,
Tony Wei
2018-08-17 15:56 GMT+08:00 Stefan Richter :
> Hi,
>
> it will not be transported. The JM does the state assignment to create the
> deployment information for all tasks. It will just exclude the state for
> operators that are not pr
Hey,
By "default values", do you mean the state returned by:
getRuntimeContext.getState()
If so, the default value is a state whose *value()* is null, as described
in:
/**
* Returns the current value for the state. When the state is not
* partitioned the returned value is the same for
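To make the null default concrete without a running Flink job, here is a tiny made-up stand-in (SimpleValueState is not the Flink API, only an illustration of the contract): value() returns null until update() has been called, so user code typically treats null as "no value yet":

```java
public class ValueStateDemo {

    // Hypothetical stand-in for Flink's ValueState<T>, for illustration only.
    static class SimpleValueState<T> {
        private T value;                      // starts out as null
        T value() { return value; }
        void update(T newValue) { value = newValue; }
    }

    public static void main(String[] args) {
        SimpleValueState<Integer> count = new SimpleValueState<>();
        System.out.println(count.value());    // null

        // Common pattern: interpret null as "no value seen yet".
        int current = (count.value() == null) ? 0 : count.value();
        count.update(current + 1);
        System.out.println(count.value());    // 1
    }
}
```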
Thank you Dominik.
So there's an implicit conversion, which means that getState().value() would
always give a deterministic result (i.e. a Boolean value would always be false,
an Int value would always be 0).
Another funny thing I found is that even with a reference type like Integer,
there is also that implicit con
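Worth noting: this implicit conversion is a Scala-side effect (in Scala, null.asInstanceOf[Int] yields 0 and null.asInstanceOf[Boolean] yields false). Plain Java behaves differently, since auto-unboxing a null Integer throws. A small demo of the Java side:

```java
public class UnboxNullDemo {
    public static void main(String[] args) {
        Integer boxed = null;

        try {
            int i = boxed; // auto-unboxing null throws NullPointerException
            System.out.println(i);
        } catch (NullPointerException e) {
            System.out.println("NPE on unboxing a null Integer");
        }

        // Safe pattern: check for null before unboxing.
        int safe = (boxed == null) ? 0 : boxed;
        System.out.println(safe); // 0
    }
}
```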
I’m exploring moving some “manual” state management into Flink-managed state
via Flink’s windowing paradigms, and I’m running into the surprise that many
pieces of the windowing architecture require the stream to be upcast to Object
(AnyRef in Scala). Is there a technical reason for this? I'm curre
Hello,
I noticed CPU utilization went high and took a thread dump on the task
manager node. Why would the RocksDBMapState.entries() / seek0 call consume CPU?
It is Flink 1.4.2
"Co-Flat Map (3/4)" #16129 prio=5 os_prio=0 tid=0x7fefac029000
nid=0x338f runnable [0x7feed2002000]
java.lang.Th
Hi sagar,
There are some examples in flink-jpmml git library[1], for example[2]. Hope
it helps.
Best, Hequn
[1] https://github.com/FlinkML/flink-jpmml
[2]
https://github.com/FlinkML/flink-jpmml/tree/master/flink-jpmml-examples/src/main/scala/io/radicalbit/examples
On Fri, Aug 17, 2018 at 10:09
Some data is silently lost in my Flink streaming job when state is restored
from a savepoint.
Do you have any debugging hints to find out where exactly the data gets
dropped?
My job gathers distinct values using a 24-hour window. It doesn't have any
custom state management.
When I cancel the job wi
I am using Flink in EMR with following configuration.
{
  "Classification": "flink-log4j",
  "Properties": {
    "log4j.logger.no": "DEBUG",
    "log4j.appender.file": "org.apache.log4j.rolling.RollingFileAppender",
    "log4j.appender.file.RollingPolicy.FileNamePattern": "logs/log.%d{
Hi all,
we have a problem with flink 1.5.2 high availability in standalone mode.
We have two jobmanagers running. When I shut down the main job manager, the
failover job manager encounters an error during failover.
Logs:
2018-08-17 14:38:16,478 WARN akka.remote.ReliableDeliverySupervisor
Hi Hequn,
Thanks for pointing that out.
We were wondering if there is anything else other than these examples, that
would help.
Thanks,
On Fri, Aug 17, 2018 at 5:33 AM Hequn Cheng wrote:
> Hi sagar,
>
> There are some examples in flink-jpmml git library[1], for example[2].
> Hope it helps.
>
Hi all,
we are using flink 1.5.2 in batch mode with prometheus monitoring.
We noticed that a few metrics do not get unregistered after a job is finished:
flink_taskmanager_job_task_operator_numRecordsIn
flink_taskmanager_job_task_operator_numRecordsInPerSecond
flink_taskmanager_job_task_operato
Hello Navneet Kumar Pandey,
org.apache.log4j.rolling.RollingFileAppender is part of Apache Extras
Companion for Apache log4j [1]. Is that library in your classpath?
Are there hints in taskmanager.err?
Can you run:
cat /usr/lib/flink/conf/log4j.properties
on the EMR master node and show the
Hello,
I can't seem to be able to override the CaseClassSerializer with my custom
serializer. I'm using env.getConfig.addDefaultKryoSerializer() to add the
custom serializer but I don't see it being used. I guess it is because it
only uses Kryo based serializers if it can't find a Flink serializer
I have faced this issue, but in 1.4.0 IIRC. This seems to be related to
https://issues.apache.org/jira/browse/FLINK-10011. What was the status of
the jobs when the main JobManager was stopped?
2018-08-17 17:08 GMT+02:00 Helmut Zechmann :
> Hi all,
>
> we have a problem with flink 1.5.2 hig
I am using this *log4j.properties* file config for rolling files once per
day and it is working perfectly. Maybe it will give you some hints:
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.append=false
log4j.appender.file.lay
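For completeness, the layout lines that are cut off above typically look like the following; the ConversionPattern here is just one common example and can be adjusted:

```
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```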
Hi Gerard,
you are correct, Kryo serializers are only used when no built-in Flink
serializer is available.
Actually, the tuple and case class serializers are among the most
performant serializers in Flink (due to their fixed layout and lack of null
support). If you really want to reduce the seriali
Hi,
I am on Flink 1.4.2 and as part of my operator logic (i.e. RichFlatMapFunction)
I am collecting the values in the Collector object.
But I am getting an error stating “Caused by:
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException:
Could not forward element to next
Hi antonio,
Yes, ProcessWindowFunction is a very low-level window function.
It gives you access to the data in the window and lets you customize the
window's output.
While this gives you flexibility, you also need to take care of other
things yourself, which may require you to write more
Hi Helmut,
Are the metrics for all of the job's subtask instances not unregistered, or
only some of them? Is there any exception information available in the
logs?
Please feel free to create a JIRA issue and clearly describe your problem.
Thanks, vino.
Helmut Zechmann 于2018年8月17日周五 下午11:14
Hi Community,
I am using Flink 1.4.2 for stream processing. I fetch data from Kafka and
write Parquet files to HDFS. In the previous environment, the Kafka topic had
192 partitions, I set the source parallelism to 192, and the application
worked fine. But recently we increased the Kafka par