Hi Jiazhi,
That is correct: keyed state is only supported on a keyed stream, since it
needs a key selector and a key serializer to extract the specific key from
each input element.
If you dig into the Flink code, the keyed state backend is only created when
the operator has its own serializer [1].
After 'keyB
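To make this concrete, here is a minimal sketch (illustrative only; the Event
type and its userId field are made up): keyed state such as ValueState can only
be obtained after keyBy(...), because the key selector and the derived key
serializer tell the keyed state backend which key's value to read or update for
each element.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class KeyedStateSketch {

    // Hypothetical event type used only for illustration.
    public static class Event {
        public String userId;
        public long value;
    }

    public static DataStream<Long> perKeyCount(DataStream<Event> events) {
        return events
                // The key selector is what makes keyed state possible: it tells
                // the backend which key each element belongs to.
                .keyBy(event -> event.userId)
                .process(new KeyedProcessFunction<String, Event, Long>() {
                    private transient ValueState<Long> count;

                    @Override
                    public void open(Configuration parameters) {
                        count = getRuntimeContext().getState(
                                new ValueStateDescriptor<>("count", Long.class));
                    }

                    @Override
                    public void processElement(Event event, Context ctx, Collector<Long> out) throws Exception {
                        // State reads/writes are automatically scoped to the current key.
                        Long current = count.value();
                        long updated = (current == null ? 0L : current) + 1;
                        count.update(updated);
                        out.collect(updated);
                    }
                });
    }
}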
Hi Fabian,
Thanks for your reply, it helps a lot.
Best Regards,
Jie
On 7/8/2020 18:17, Fabian Hueske wrote:
Hi Jie,
The auto-ID generation is not done by the SQL translation component but on a
lower level, i.e., it's independent of
Dear all,
Is keyed state (ValueState, ReducingState, ListState, AggregatingState,
MapState) only supported on a keyed stream, meaning only in a
KeyedProcessFunction? But in practice, I can also use these states in
ProcessAllWindowFunction and ProcessWindowFunction. Why?
Thank you,
Jiazhi
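For reference, the kind of usage this question describes looks roughly like the
sketch below (my own illustration, not taken from the thread): inside a
ProcessWindowFunction on a keyed, windowed stream, keyed state can be reached
through the context's globalState() / windowState() accessors.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Hypothetical function that counts how many windows have fired per key.
public class WindowStateSketch
        extends ProcessWindowFunction<Long, String, String, TimeWindow> {

    @Override
    public void process(String key, Context context, Iterable<Long> elements, Collector<String> out) throws Exception {
        // globalState() hands out keyed state scoped to the current key
        // (not to a single window).
        ValueState<Integer> firedWindows = context.globalState().getState(
                new ValueStateDescriptor<>("firedWindows", Integer.class));
        Integer current = firedWindows.value();
        int updated = (current == null ? 0 : current) + 1;
        firedWindows.update(updated);
        out.collect(key + " has seen " + updated + " windows");
    }
}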
Not really; but essentially you have to override
SLF4JReporter#notifyOfAddedMetric and filter the metrics you're
interested in. Then build the flink-metrics-slf4j module, and replace
the corresponding jar in your distribution.
On 08/07/2020 18:20, Manish G wrote:
Ok. Any resource on the same?
On
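A rough sketch of the filtering approach described above (my own illustration;
the reporter class in flink-metrics-slf4j is Slf4jReporter, and the allow-list
of metric names is just an assumed convention):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.slf4j.Slf4jReporter;

// Sketch of a customized slf4j reporter that only registers metrics from an
// allow-list; everything else (Flink's built-in gauges, histograms, ...) is
// never added and therefore never logged.
public class FilteringSlf4jReporter extends Slf4jReporter {

    // Assumed set of application-defined metric names.
    private static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("eventsPerSecond"));

    @Override
    public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
        if (ALLOWED.contains(metricName)) {
            super.notifyOfAddedMetric(metric, metricName, group);
        }
        // Metrics not in the allow-list are silently dropped.
    }
}

As described above, the rebuilt flink-metrics-slf4j jar containing such a class
would then replace the one shipped in the distribution.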
Ok. Any resource on the same?
On Wed, Jul 8, 2020, 9:38 PM Chesnay Schepler wrote:
> There's no built-in functionality for this. You could customize the
> reporter though.
>
> On 08/07/2020 17:19, Manish G wrote:
> > Hi,
> >
> > I have added a Meter in my code and am pushing it to app logs using the slf4j
There's no built-in functionality for this. You could customize the
reporter though.
On 08/07/2020 17:19, Manish G wrote:
Hi,
I have added a Meter in my code and am pushing it to app logs using the slf4j
reporter.
I observe that apart from my custom metrics, lots of other metrics like
gauge, histog
Hi,
I have added a Meter in my code and am pushing it to app logs using the slf4j
reporter.
I observe that apart from my custom metrics, lots of other metrics like
gauges, histograms, etc. are also published. This makes it difficult to filter
out the data for generating Splunk graphs.
Is there a way to limit publis
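For context, registering a custom Meter usually looks something like the sketch
below (the metric name and the wrapping function are illustrative); all the
other gauges and histograms in the logs come from Flink's built-in system
metrics, which the reporter picks up as well.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Meter;
import org.apache.flink.metrics.MeterView;

// Sketch: a map function that registers a Meter named "eventsPerSecond"
// (hypothetical name) and marks an event for every processed record.
public class MeteredMapper extends RichMapFunction<String, String> {

    private transient Meter eventMeter;

    @Override
    public void open(Configuration parameters) {
        this.eventMeter = getRuntimeContext()
                .getMetricGroup()
                .meter("eventsPerSecond", new MeterView(60));
    }

    @Override
    public String map(String value) {
        eventMeter.markEvent();
        return value;
    }
}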
Hi Song, Guo,
Thanks for the information.
I will first upgrade our Flink cluster to 1.10.0 and try again.
Currently, we are encountering a dependency conflict issue, possibly with
Tranquility, but that is another issue.
For your information (as I also described in the previous email):
*What Fl
I've asked this question in https://issues.apache.org/jira/browse/FLINK-9268
but it's been inactive for two years, so I'm not sure it will be visible.
While creating a savepoint I get an org.apache.flink.util.SerializedThrowable:
java.lang.NegativeArraySizeException. It's happening because some of m
Hi Jie,
The auto-ID generation is not done by the SQL translation component but on
a lower level, i.e., it's independent of Flink's SQL translation.
The ID generation only depends on the topology / graph structure of the
program's operators.
The ID of an operator depends on the IDs of its predeces
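For the DataStream API (not something the SQL translation exposes directly), the
usual way to decouple an operator's ID from this topology-derived hash is to set
an explicit ID; a minimal, purely illustrative sketch:

import org.apache.flink.streaming.api.datastream.DataStream;

public class ExplicitUidSketch {

    // With an explicit uid, the operator's ID used for savepoints no longer
    // depends on the auto-generated hash of its position in the graph.
    public static DataStream<String> withStableId(DataStream<String> input) {
        return input
                .map(value -> value.toUpperCase())
                .uid("normalize-to-uppercase")   // hypothetical, user-chosen ID
                .name("normalize-to-uppercase");
    }
}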
Thanks Zhijiang and Piotr for the great work as release managers, and thanks
to everyone who made the release possible!
Best,
Danny Chan
On Jul 8, 2020 at 4:59 PM +0800, Congxian Qiu wrote:
>
> Thanks Zhijiang and Piotr for the great work as release managers, and thanks
> to everyone who made the release possible
Congratulations!
Thanks Zhijiang and Piotr for the great work, and thanks everyone for their
contribution!
Best,
Godfrey
On Wed, Jul 8, 2020 at 12:39 PM, Benchao Li wrote:
> Congratulations! Thanks Zhijiang & Piotr for the great work as release
> managers.
>
> On Wed, Jul 8, 2020 at 11:38 AM, Rui Li wrote:
>
>> Congratulat
Thanks guys,
It is clear this is a Java thing.
Niels
On Wed, Jul 8, 2020 at 9:56 AM Tzu-Li (Gordon) Tai
wrote:
> Ah, didn't realize Chesnay had already answered it, sorry for the
> concurrent
> reply :)
Ah, didn't realize Chesnay had already answered it, sorry for the concurrent
reply :)
Hi,
This would be more of a Java question.
In short, type inference of generic types does not work for chained
invocations, and therefore type information has to be explicitly included.
If you'd like to chain the calls, this would work:
WatermarkStrategy watermarkStrategy = WatermarkStrategy
WatermarkStrategy.forBoundedOutOfOrderness(Duration.of(1,
ChronoUnit.MINUTES)) returns a WatermarkStrategy, but the exact type
is entirely dependent on the variable declaration (i.e., it is not
dependent on any argument).
So, when you assign the strategy to a variable, the compiler can
in
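The truncated example above presumably continues along these lines; here is a
sketch of both variants, assuming a made-up event type MyEvent with a long
timestamp field:

import java.time.Duration;
import java.time.temporal.ChronoUnit;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class WatermarkStrategySketch {

    // Hypothetical event type, only here to make the generics concrete.
    public static class MyEvent {
        public long timestamp;
    }

    // Variant 1: assign to a typed variable first; the compiler infers <MyEvent>
    // from the declaration, and the follow-up call then compiles fine.
    static WatermarkStrategy<MyEvent> viaVariable() {
        WatermarkStrategy<MyEvent> strategy =
                WatermarkStrategy.forBoundedOutOfOrderness(Duration.of(1, ChronoUnit.MINUTES));
        return strategy.withTimestampAssigner((event, recordTimestamp) -> event.timestamp);
    }

    // Variant 2: chain directly, but spell out the type argument explicitly,
    // since inference cannot flow through the chained invocation.
    static WatermarkStrategy<MyEvent> viaExplicitTypeArgument() {
        return WatermarkStrategy
                .<MyEvent>forBoundedOutOfOrderness(Duration.of(1, ChronoUnit.MINUTES))
                .withTimestampAssigner((event, recordTimestamp) -> event.timestamp);
    }
}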
Thanks Zhijiang and Piotr for the great work as release managers, and thanks
to everyone who made the release possible!
Best,
Congxian
On Wed, Jul 8, 2020 at 12:39 PM, Benchao Li wrote:
> Congratulations! Thanks Zhijiang & Piotr for the great work as release
> managers.
>
> On Wed, Jul 8, 2020 at 11:38 AM, Rui Li wrote:
>
>>
Hi,
Assuming that the job jar bundles all the required dependencies (including
the Beam dependencies), making the job artifacts available under
`/opt/flink/usrlib/` in the container, either by mounting them or adding them
directly, should work. AFAIK, it is also the recommended way, as opposed to adding t
Hi Vijay,
The FlinkKinesisProducer does not use blocking calls to the AWS KDS API.
It does, however, apply backpressure (thereby effectively blocking all
upstream operators) when the number of outstanding records accumulated
exceeds a set limit, configured using the FlinkKinesisProducer#setQueueLi
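For illustration, that limit is set on the producer instance itself; a minimal
sketch (stream name, region, and the limit value are placeholders):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class KinesisProducerSketch {

    public static FlinkKinesisProducer<String> buildProducer() {
        Properties producerConfig = new Properties();
        producerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1"); // placeholder region

        FlinkKinesisProducer<String> producer =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
        producer.setDefaultStream("my-stream");   // placeholder stream name
        producer.setDefaultPartition("0");
        // Once more than this many records are in flight, the sink backpressures
        // upstream operators rather than making blocking calls to the KDS API.
        producer.setQueueLimit(1000);
        return producer;
    }
}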