How do we get uncaught exceptions in operators to skip the problematic
messages, rather than crash the entire job? Is there an easier or less
mistake-prone way to do this than wrapping every operator method in
try/catch?
And what to do about Map? Since it has to return something, we're either
returning some dummy value or letting the exception propagate anyway.
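One way around both problems, sketched below with made-up names (SkippingParser,
ParsedEvent and parse() are illustrative, not from this thread): express the
operation as a flatMap instead of a map, so a record that throws can simply be
skipped by emitting nothing, and only the fallible call needs the try/catch.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;

// Sketch: flatMap can emit zero or one records, so a bad input can be dropped
// without inventing a placeholder return value.
public class SkippingParser implements FlatMapFunction<String, SkippingParser.ParsedEvent> {

    public static class ParsedEvent {
        public final String payload;
        public ParsedEvent(String payload) {
            this.payload = payload;
        }
    }

    @Override
    public void flatMap(String raw, Collector<ParsedEvent> out) {
        try {
            out.collect(parse(raw));   // happy path: exactly one output record
        } catch (Exception e) {
            // problematic record: log it (or route it to a side output via a
            // ProcessFunction) and emit nothing, so the job keeps running
        }
    }

    private ParsedEvent parse(String raw) {
        // placeholder for the real, possibly-throwing per-record logic
        return new ParsedEvent(raw);
    }
}

For visibility into what was dropped, the same try/catch inside a
ProcessFunction with a side output is a common variant; either way, only the
operators that can actually throw need the wrapping, not every method.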
>>> on every event. The windows need to be of a fixed size, but their start
>>> and end times should update continuously, and I'd like to trigger on
>>> every event. Is this a bad idea? I've googled and read the docs
>>> extensively and haven't been able to identify built-in functionality or
>>> examples that map cleanly to my requirements.
>>>
>>> OK, I just found DeltaTrigger, which looks promising... Does it make
>>> sense to write a WindowAssigner that makes a new Window on every event,
>>> allocation rates aside?
>>>
>>> Thanks!
>>>
>>> -0xe1a
>>>
>>
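For what it's worth, one combination that seems to match the requirement
above without writing a custom WindowAssigner is GlobalWindows plus
CountTrigger.of(1) plus a TimeEvictor. This is only a sketch (the Reading
type and the one-minute size are made up), and it assumes record timestamps
are available (event or ingestion time); the evictor keeps just the trailing
window of elements while the trigger fires on every event.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.evictors.TimeEvictor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;
import org.apache.flink.util.Collector;

public class PerEventSlidingAverage {

    // Minimal event type for the sketch.
    public static class Reading {
        public String key;
        public double value;
    }

    // Average over the last minute of elements per key, recomputed on every event.
    public static DataStream<Double> lastMinuteAverage(DataStream<Reading> readings) {
        return readings
            .keyBy(r -> r.key)
            .window(GlobalWindows.create())            // one logical window per key
            .trigger(CountTrigger.of(1))               // fire on every incoming element
            .evictor(TimeEvictor.of(Time.minutes(1)))  // keep only the trailing minute
            .process(new ProcessWindowFunction<Reading, Double, String, GlobalWindow>() {
                @Override
                public void process(String key, Context ctx,
                                    Iterable<Reading> elements, Collector<Double> out) {
                    double sum = 0;
                    long count = 0;
                    for (Reading r : elements) {
                        sum += r.value;
                        count++;
                    }
                    if (count > 0) {
                        out.collect(sum / count);
                    }
                }
            });
    }
}

Note that with an evictor Flink keeps every element of the window in state,
so the allocation and state-size concerns raised above still apply;
DeltaTrigger on top of GlobalWindows is the other direction worth exploring.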
--
Jacob Sevart
Software Engineer, Safety
> > We can't find a clear way to reproduce
> > this problem (when the Flink job creates “abnormal” checkpoints).
> >
> > Configuration:
> >
> > We are using flink 1.8.1 on emr (emr 5.27)
> >
> > Kafka: Confluent Kafka 5.4.1
> >
> > Flink kafka connector:
> > org.apache.flink:flink-connector-kafka_2.11:1.8.1 (it includes
> > org.apache.kafka:kafka-clients:2.0.1 dependencies)
> >
> > Our input Kafka topic has 32 partitions and the related Flink source has
> > a parallelism of 32.
> >
> > We use pretty much all the default Flink Kafka consumer settings. We only
> > specified:
> >
> > CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
> >
> > ConsumerConfig.GROUP_ID_CONFIG,
> >
> > CommonClientConfigs.SECURITY_PROTOCOL_CONFIG
> >
> > Thanks a lot in advance!
> >
> > Oleg
> >
>
>
>
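The consumer setup quoted above presumably boils down to something like the
following sketch (universal Kafka connector from the 1.8 line; broker
addresses, group id, security protocol value and topic name are placeholders,
not values from the thread):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker-1:9092,broker-2:9092"); // CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG
        props.setProperty("group.id", "my-flink-job");                         // ConsumerConfig.GROUP_ID_CONFIG
        props.setProperty("security.protocol", "SSL");                         // CommonClientConfigs.SECURITY_PROTOCOL_CONFIG

        FlinkKafkaConsumer<String> source =
                new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props);

        env.addSource(source)
           .setParallelism(32)   // matches the 32 topic partitions mentioned above
           .print();

        env.execute("kafka-source-sketch");
    }
}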
--
Jacob Sevart
Software Engineer, Safety
https://github.com/apache/flink/pull/11475
On Sat, Mar 21, 2020 at 10:37 AM Jacob Sevart wrote:
> Thanks, will do.
>
> I only want the time stamp to reset when the job comes up with no state.
> Checkpoint recoveries should keep the same value.
>
> Jacob
>
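One way to get that behavior, sketched below with made-up names: keep the
timestamp in operator state via CheckpointedFunction, and only assign a fresh
value in initializeState() when context.isRestored() is false, i.e. when the
job comes up with no state.

import java.util.Iterator;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

// Sketch: a pass-through operator that remembers when the job first started.
// The value survives checkpoint/savepoint recoveries but is re-initialized on
// a start with no state. Names and the pass-through map() are placeholders.
public class StartTimeTracker<T> extends RichMapFunction<T, T>
        implements CheckpointedFunction {

    private transient ListState<Long> startTimeState;
    private long startTime;

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        startTimeState = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("start-time", Types.LONG));

        if (context.isRestored()) {
            // recovered: keep whatever was checkpointed
            // (fall back to "now" if this subtask got no element after rescaling)
            Iterator<Long> it = startTimeState.get().iterator();
            startTime = it.hasNext() ? it.next() : System.currentTimeMillis();
        } else {
            // fresh start with no state: reset the timestamp
            startTime = System.currentTimeMillis();
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        startTimeState.clear();
        startTimeState.add(startTime);
    }

    @Override
    public T map(T value) {
        return value;   // placeholder: the real operator would use startTime here
    }
}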
> On Sat, Mar 2
you would need a different value.
> Updating the config after a recovery is not possible.
>
> Cheers,
> Till
>
> On Fri, Mar 20, 2020 at 6:29 PM Jacob Sevart wrote:
>
>> Thanks, makes sense.
>>
>> What about using the config mechanism? We're collecting and
>
> Cheers,
> Till
>
> On Tue, Mar 17, 2020 at 2:29 AM Jacob Sevart wrote:
>
>> Thanks! That would do it. I've disabled the operator for now.
>>
>> The pur
>
> Cheers,
> Till
>
> On Sat, Mar 14, 2020 at 3:23 AM Jacob Sevart wrote:
>
>> Oh, I should clarify that's 43MB per partition, so with 48 partitions it
>> explains my 2GB.
>>
>> On Fri, Mar 13, 2020 at 7:21 PM Jacob Sevart wrote:
Oh, I should clarify that's 43MB per partition, so with 48 partitions it
explains my 2GB.
On Fri, Mar 13, 2020 at 7:21 PM Jacob Sevart wrote:
> Running *Checkpoints.loadCheckpointMetadata* under a debugger, I found
> something:
> *subtaskState.managedOperatorState[0].stateNameToPartitionOffsets*
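For reference, that inspection can also be scripted instead of done in a
debugger. This is only a rough sketch: Checkpoints is an internal runtime
class, and the two-argument loadCheckpointMetadata signature and the
Savepoint return type assumed here are from the 1.8.x line and may differ in
other releases.

import java.io.DataInputStream;
import java.io.FileInputStream;

import org.apache.flink.runtime.checkpoint.Checkpoints;
import org.apache.flink.runtime.checkpoint.OperatorState;
import org.apache.flink.runtime.checkpoint.savepoint.Savepoint;

public class MetadataDump {
    public static void main(String[] args) throws Exception {
        // args[0]: path to a checkpoint/savepoint _metadata file
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            Savepoint savepoint =
                    Checkpoints.loadCheckpointMetadata(in, MetadataDump.class.getClassLoader());

            System.out.println("checkpoint id: " + savepoint.getCheckpointId());
            for (OperatorState op : savepoint.getOperatorStates()) {
                // per-operator state size hints at where the bulk of the metadata lives
                System.out.println(op.getOperatorID() + " -> " + op.getStateSize() + " bytes");
            }
        }
    }
}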
> there are many `ByteStreamStateHandle`, and
> their names are the strings you saw in the metadata.
>
> Best,
> Congxian
>
>
> Jacob Sevart wrote on Friday, March 6, 2020 at 3:57 AM:
>
>> Thanks, I will monitor that thread.
>>
>> I'm having a hard time following the seriali
> [link mangled in transit; it appears to point to one of the Flink mailing lists]
>
> There you can re-post your problem and monitor for answers.
>
> Cheers,
> Kostas
>
> On Wed, M
>> https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/state/savepoints.html
Per the documentation:
"The meta data file of a Savepoint contains (primarily) pointers to all
files on stable storage that are part of the Savepoint, in form of absolute
paths."
I somehow have a _metadata file that's 1.9GB. Running *strings* on it, I
find 962 strings, most of which look like HDFS paths.