see if there is any error information
about the metrics;
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#metrics-reporter-%3Cname%3E-filter-excludes
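For reference, such a filter is configured per reporter, roughly like this
(a sketch only; "prom" is a placeholder reporter name, and the exact
pattern syntax is described on the linked page [1]):
---
# hypothetical reporter named "prom"; excludes matching metrics from it
metrics.reporter.prom.filter.excludes: "*:numRecords*"
---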
--
Best,
Matt Wang
Replied Message
From: patricia lee
Date: 09/18/2023 16:58
To
/blob/main/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/sink/examples/SinkIntoDynamoDb.java
Thank you again for your help and sharing those resources.
Cheers,
Matt.
On Wed, 9 Nov 2022 at 03:51, Teoh, Hong wrote:
> Hi Matt,
>
>
>
> First of all, aweso
e in this area for some
pointers on what else I could start reading
Thanks
Matt
image, but bundled in your user-jar (along with the connector).
>
> On 08/11/2022 02:14, Matt Fysh wrote:
> > Hi, I'm following the kinesis connector instructions as documented
> > here:
> >
> https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kinesis/
Hi, I'm following the kinesis connector instructions as documented here:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kinesis/
I'm also running Flink in standalone session mode using docker compose and
the Python images, as described in the Flink docs (Depl
Please let me know which sections of the docs, or which areas of Python, I
should read to learn how to find a solution to this problem
Thanks
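For anyone following along, the consumer setup from that page boils down to
something like this in Java (a sketch; region, stream name and credentials
are placeholders):
---
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

Properties consumerConfig = new Properties();
consumerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1");

StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
// "my-stream" is a placeholder Kinesis stream name
DataStream<String> kinesis = env.addSource(new FlinkKinesisConsumer<>(
    "my-stream", new SimpleStringSchema(), consumerConfig));
---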
On Mon, 31 Oct 2022 at 18:49, Leonard Xu wrote:
> Hi, Matt
>
> I've checked; your job is pretty simple. I've CC'd Xingbo, who is a PyFlink
sink, plus not having large data size
requirements, I suspect this is due to a bug.
I'm running v1.13.2 and have created a docker-based reproduction repository
here: https://github.com/mattfysh/pyflink-oom
Please take a look and let me know what you think
Thanks!
Matt
possible to overcome these issues and have event
fan-out working on AWS?
Thanks,
Matt
e out!
Cheers,
Matt
On Mon, Jun 27, 2022 at 4:59 AM Yang Wang wrote:
> Could you please share the JobManager logs of failed deployment? It will
> also help a lot if you could show the pending pod status via "kubectl
> describe ".
>
> Given that the current Flink Kubernet
sks in my
first Beam pipeline so it should be simple enough but it just times out.
Next step for me is to document the result which will end up on
hop.apache.org. I'll probably also want to demo this in Austin at the
upcoming Beam summit.
Thanks a lot for your time and help so far!
Cheers,
Matt
Hi Mátyás & all,
Thanks again for the advice so far. On a related note, I noticed Java 8
being used, as indicated in the log:
org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] -
JAVA_HOME: /usr/local/openjdk-8
Is there a way to start Flink with Java 11?
Kind regards,
hings easy. Following page after page of complicated instructions
just to get a few files into a pod container... I feel it's just a bit
much.
But again, this is my frustration with k8s, not with Flink ;-)
Cheers,
Matt
On Wed, Jun 22, 2022 at 5:32 AM Yang Wang wrote:
> Matyas and Gyula have
want to figure out a way to do this with Flink as well since I believe,
especially on AWS (even with Spark centric options on EMR, EMR serverless),
that running a pipeline is just too complicated. Your work really helps!
All the best,
Matt
On Tue, Jun 21, 2022 at 4:53 PM Őrhidi Mátyás wrote:
>
fication to that effect? Just
brainstorming ;-) (and forking apache/flink-kubernetes-operator)
All the best,
Matt
On Tue, Jun 21, 2022 at 2:52 PM Őrhidi Mátyás wrote:
> Hi Matt,
>
> - In FlinkDeployments you can utilize an init container to download your
> artifact onto a shared
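A rough sketch of that init-container pattern in a FlinkDeployment pod
template (names, image, URL and paths below are placeholders, untested):
---
spec:
  podTemplate:
    spec:
      initContainers:
        - name: artifact-fetcher
          image: busybox:1.36
          command: ["wget", "-O", "/flink-artifacts/job.jar", "https://example.com/job.jar"]
          volumeMounts:
            - name: flink-artifacts
              mountPath: /flink-artifacts
      containers:
        - name: flink-main-container
          volumeMounts:
            - name: flink-artifacts
              mountPath: /opt/flink/artifacts
      volumes:
        - name: flink-artifacts
          emptyDir: {}
---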
Thanks in advance!
All the best,
Matt (mcasters, Apache Hop PMC)
Sent: Tuesday, October 19, 2021 3:03 AM
To: LeVeck, Matt
Cc: user@flink.apache.org
Subject: Re: Flink ignoring latest checkpoint on restart?
Hi Matt,
this seems interesting, I'm aware of some possible inconsistency issues with
unstable connections [1], but I h
tion that would allow the
fallback checkpoint to be something more recent?
Thanks,
Matt
2021/10/11 12:22:28.137 INFO c.i.strmprocess.ArgsPreprocessor -
latestSavepointPrefix:desanalytics-7216-doc-comprehension-analytics-7216-prd/4/savepoints/savepoint-00-abb450590ca7/_metadata
LastModifie
think I know what to do internally
On 2020/06/15 15:11:32, Robert Metzger wrote:
> Hi Matt,
>
> sorry for the late reply. Why are you using the "flink-docker" helm example
> instead of
> https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment
I'm not the original poster, but I'm running into this same issue. What you
just described is exactly what I want. I presume you guys are using some
variant of this Helm chart
(https://github.com/docker-flink/examples/tree/master/helm/flink) to
configure your k8s cluster? I'm also assuming that this cl
We're currently using this template:
https://github.com/docker-flink/examples/tree/master/helm/flink to run
Flink on Kubernetes as a job-specific cluster (with the twist of specifying
the job class as the main runner for the cluster).
How would I go about setting up adding savepoints, so
state.checkpoints.dir: {{ .Values.flink.checkpointUrl }}/checkpoints
state.savepoints.dir: {{ .Values.flink.checkpointUrl }}/savepoints
state.backend.incremental: true
state.backend.rocksdb.localdir: /tmp/taskmanager
Thanks!
-Matt
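For reference, with state.savepoints.dir configured as above, savepoints are
taken and restored with the standard Flink CLI (job ID, class and paths below
are placeholders):
---
# trigger a savepoint for a running job
flink savepoint <jobId>
# restore from it on the next run
flink run -s s3://<bucket>/savepoints/savepoint-xxxx -c com.example.Main job.jar
---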
From: Guowei Ma
Date: Monday, June 1, 2020 at 1:01 AM
To: "Wissman, Matt"
consistent (follows the
same pattern).
Changing the checkpoint interval seemed to fix the problem of the large and
growing checkpoint size but I’m not sure why.
Thanks!
-Matt
From: Till Rohrmann
Date: Thursday, May 28, 2020 at 10:48 AM
To: "Wissman, Matt"
Cc: Guowei Ma , "user@f
Nothing is set in terms of watermarks – do they apply for Process
Time?
The set of keys processed in the stream is stable over time.
The checkpoint size actually looks pretty stable now that the interval was
increased. Is it possible that the short checkpoint interval prevented
compaction?
Thanks!
and be purged from the checkpoint?
Flink Version 1.7.1
Thanks!
-Matt
Suppose I'm using state stored in-memory that has a TTL of 7 days max. Should I
run into any issues with state this long other than potential OOM?
Let's suppose I extend this such that we add RocksDB... any concerns with
respect to maintenance?
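For context, state TTL is declared per state descriptor in recent Flink
versions; a minimal sketch, assuming a 7-day TTL and RocksDB's
compaction-filter cleanup:
---
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(7))
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    // expired entries are dropped as RocksDB compacts, bounding growth
    .cleanupInRocksdbCompactFilter(1000)
    .build();

ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("my-state", String.class);
descriptor.enableTimeToLive(ttlConfig);
---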
Most of the examples that I've been seein
- -Dstate.checkpoints.dir=$(S3_CHECKPOINT)
- -Dstate.savepoints.dir=$(S3_SAVEPOINT)
- --allowNonRestoredState
- -s $(S3_SAVEPOINT)
I originally didn't have the last two args; I added them based on various
emails I saw on this list and other Google search results, to no avail.
Thanks
-Matt
I'm wondering what the best practice is for using secrets in a Flink program,
and I can't find any info in the docs or posted anywhere else.
I need to store an access token to one of my APIs for flink to use to dump
results into, and right now I'm passing it through as a configuration
parameter
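One common alternative to passing a secret as a job parameter, sketched
below: have the orchestrator inject the token into the task environment
(e.g. from a Kubernetes secret) and read it in the function; the variable
name is a placeholder, and it must be set on the TaskManagers, where the
code actually runs:
---
// hypothetical: API_ACCESS_TOKEN injected into the TaskManager environment
String token = System.getenv("API_ACCESS_TOKEN");
if (token == null) {
    throw new IllegalStateException("API_ACCESS_TOKEN is not set");
}
---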
where the data is stored.
[1] https://apacheignite-mix.readme.io/docs/flink-streamer
Best,
Matt
Matt
On Mon, May 29, 2017 at 9:37 AM, Till Rohrmann wrote:
> Hi Matt,
>
> I looked into it and it seems that the Task does not respect the context
> class loader. The problem is that
should be enabled (as in ignite.xml),
because it must match the config on the client node.
If you follow the Readme file, everything is there; if you have any
problem let me know!
Cheers,
Matt
[1] https://github.com/Dromit/FlinkTest
On Wed, May 17, 2017 at 3:49 PM, Matt wrote:
> Th
Thanks for your help Till.
I will create a self contained test case in a moment and send you the link,
wait for it.
Cheers,
Matt
On Wed, May 17, 2017 at 4:38 AM, Till Rohrmann wrote:
> Hi Matt,
>
> alright, then we have to look into it again. I tried to run your example,
> howe
se in [1].
Best,
Matt
[1] https://gist.github.com/17d82ee7dd921a0d649574a361cc017d
On Thu, Apr 27, 2017 at 10:09 AM, Matt wrote:
> Hi Till,
>
> Great! Do you know if it's planned to be included in v1.2.x or should we
> wait for v1.3? I'll give it a try as soon as it'
ssage on the list
[1].
The idea is to collocate Flink jobs on Ignite nodes, so each dataflow only
processes the elements stored on the local in-memory database. I get the
impression this should be much faster than randomly picking a Flink node
and sending all the data over the network.
Any insight on
ome a general fix.
>
> I have heard that Till is about to change some things about local
> execution, so I included him in CC. Maybe he can provide additional hints
> how your use case might be better supported in the upcoming Flink 1.3.
>
> Best,
> Stefan
>
> Am 25.04.2
the class.
Not sure what is wrong.
On Tue, Apr 25, 2017 at 5:38 PM, Matt wrote:
> Hi Stefan,
>
> Check the code here:
> https://gist.github.com/17d82ee7dd921a0d649574a361cc017d , the output is
> at the bottom of the
> page.
>
> Here are the results of the additional tests you
ction in
Test$Foo with the same result: it says "Cannot load user class:
com.test.Test$Foo"
Looks like Flink is not using the correct ClassLoader. Any idea?
Regards,
Matt
On Tue, Apr 25, 2017 at 7:00 AM, Stefan Richter wrote:
> Hi,
>
> I would expect that the local environment
to the chunk
of code that creates the job. Is this currently possible?
Any fix or workaround is appreciated!
Best,
Matt
[1] https://gist.github.com/f248187b9638023b95ba8bd9d7f06215
[2] https://gist.github.com/796ee05425535ece1736df7b1e884cce
cated computation
system. Running a collocated Ignite closure and executing a Flink job in a
local environment should be enough.
Why do you recommend against custom collocation? I may be missing something.
Matt
On Mon, Apr 24, 2017 at 9:47 AM, Ufuk Celebi wrote:
> Hey Matt,
>
> in
are executed where
the data resides? If there's no way to guarantee that without enabling the
local environment, what do you think of that approach (in terms of
performance)?
Any additional insight regarding stream processing on Ignite or any other
distributed storage is very welcome!
Best
_2016-08-27_at_00.32.42.png?t=1491606817725
On Sat, Apr 8, 2017 at 9:40 PM, Matt wrote:
> I compared them some days ago.
>
> I found a useful article about many of the tsdb available out there [1],
> check the big table on the article, it's really helpful. The thing that
> b
some ideas from the search engine industry).
The other promising alternative is Prometheus, though I haven't had a look
at it yet; I plan to do so in the near future.
If anyone is using a time-series database and wants to tell us about it
that would be helpful!
Best regards,
Matt
interface in the current version.
If anyone has any code or a working project to use as a reference that
would be awesome for me and for the rest of us looking for a time-series
database solution!
Best regards,
Matt
[1] https://github.com/druid-io/tranquility/blob/master/docs/flink.md
Hi,
> Am I missing something obvious?
So it was that!
Thanks very much for the help, I'm sure I'll be able to figure that out.
Matt
From: Tzu-Li (Gordon) Tai
Sent: 27 February 2017 12:17
To: user@flink.apache.org
Subject: Re: Fw: Flink Kinesis Connector
Does this still exist? Am I missing something obvious?
Thanks in advance for any help,
Matt
I really don't know what you mean; I've been reading the documentation and
the examples showing iterations, but it just won't work for me, I believe.
Maybe you can write a quick example? The details don't matter, only the
topology.
If anyone else has an idea it's very we
;
*green*.keyBy(...).flatMap(...);
---
Any idea is welcome.
Matt
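For reference, a minimal IterativeStream topology, not specific to the use
case above: part of an operator's output is fed back as its own input until
a condition is met.
---
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.IterativeStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<Long> source = env.fromElements(5L, 12L, 8L);
IterativeStream<Long> loop = source.iterate();

DataStream<Long> minusOne = loop.map(new MapFunction<Long, Long>() {
    @Override
    public Long map(Long value) {
        return value - 1;
    }
});

loop.closeWith(minusOne.filter(v -> v > 0));       // still positive: feed back
DataStream<Long> done = minusOne.filter(v -> v <= 0); // reached zero: exit loop
---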
On Sat, Jan 28, 2017 at 5:31 PM, Matt wrote:
> I'm aware of IterativeStream but I don't think it's useful in this case.
>
> As shown in the example above, my use case is "cyclic" in that the same
> obje
ect from *Input2*)
and finally to *predictionStream* (flatMap2).
The same operator is never applied twice to the object, thus I would say
this dataflow is cyclic only in the dependencies of the stream
(predictionStream depends on statsStream, but statsStream depends on
predictionStream in the first place).
But I would rather avoid writing unnecessarily into Kafka.
Is there any other way to achieve this?
Thanks,
Matt
I'll create a new thread with my last message since it's not completely
related with the original question here.
On Sat, Jan 28, 2017 at 11:55 AM, Matt wrote:
> Aha, ok, got it!
>
> I just realized that this ConnectedStream I was talking about (A) depends
> on another Conn
a and read from that
topic on predictionStream instead of initializing it with a reference of
statsStream. I would rather avoid writing unnecessarily into Kafka.
Is there any other way to achieve this?
Thanks,
Matt
On Fri, Jan 27, 2017 at 6:35 AM, Timo Walther wrote:
> Hi Matt,
>
Hi all,
What's the purpose of .keyBy() on ConnectedStream? How does it affect
.map() and .flatMap()?
I'm not finding a way to group stream elements based on a key, something
like a Window on a normal Stream, but for a ConnectedStream.
Regards,
Matt
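For what it's worth: keyBy on a ConnectedStreams partitions both inputs by
key, so the same parallel instance of the co-operator sees matching keys
from either side, and flatMap1/flatMap2 can share keyed state. A sketch,
with hypothetical Order/Payment types and orders/payments streams:
---
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.ConnectedStreams;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

ConnectedStreams<Order, Payment> connected = orders
    .connect(payments)
    .keyBy(o -> o.orderId, p -> p.orderId);

connected.flatMap(new RichCoFlatMapFunction<Order, Payment, String>() {
    private transient ValueState<Order> pending;

    @Override
    public void open(Configuration parameters) {
        pending = getRuntimeContext().getState(
            new ValueStateDescriptor<>("pending-order", Order.class));
    }

    @Override
    public void flatMap1(Order order, Collector<String> out) throws Exception {
        pending.update(order); // keyed state, scoped to this orderId
    }

    @Override
    public void flatMap2(Payment payment, Collector<String> out) throws Exception {
        if (pending.value() != null) { // matching order seen on input 1
            out.collect(payment.orderId + " paid");
        }
    }
});
---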
g it to one node and then another
makes the whole data flow impractical. It's better to move all created
instances to one single node where only one instance of the classifier
is maintained.
I'm not sure if this is possible or how to do this.
On Thu, Jan 12, 2017 at 11:11 PM, Matt wrote:
ained with
enough instances. Is it possible to do this? If I'm not wrong env.execute
(line 24) can be used only once.*
Regards,
Matt
I'm still looking for an answer to this question. Hope you can give me some
insight!
On Thu, Dec 22, 2016 at 6:17 PM, Matt wrote:
> Just to be clear, the stream is of String elements. The first part of the
> pipeline (up to the first .apply) receives those strings, and returns
Anyone has any experience mining a Flink+Kafka stream?
I'm looking for an online analysis framework to apply some classifiers on a
time series.
Any example of how to integrate Flink with MOA, Samoa, ADAMS, DataSketches
or any other framework is appreciated.
Regards,
Matt
Just to be clear, the stream is of String elements. The first part of the
pipeline (up to the first .apply) receives those strings, and returns
objects of another class ("A" let's say).
On Thu, Dec 22, 2016 at 6:04 PM, Matt wrote:
> Hello,
>
> I have a window process
nt is this?
Regards,
Matt
for sharing the stack trace.
>
> This seems not really Flink related, it is part of the specific Avro
> encoding logic.
> The Avro Generic Record Type apparently does not allow the map value to be
> null.
>
>
>
> On Tue, Dec 20, 2016 at 4:55 PM, Matt wrote:
>
If you need any other information, let me know.
Regards,
Matt
On Tue, Dec 20, 2016 at 6:46 AM, Stephan Ewen wrote:
> The "null" support in some types is not fully developed. However in that
> case I am wondering why it does not work. Can you share the stack trace, so
> we can
Caused by: java.lang.NullPointerException
The field mentioned is a HashMap, and some keys are mapped
to null values.
Why isn't it possible to forward/serialize those elements with null values?
What do you do when your elements may contain nulls?
Regards,
Matt
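If the Avro route is kept, the usual fix for what Stephan describes is to
declare the map values as a union with null in the schema. A sketch using
Avro's SchemaBuilder, under the assumption that the values are strings:
---
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

// a map whose values may be either null or string
Schema mapWithNullableValues = SchemaBuilder.map()
    .values().unionOf().nullType().and().stringType().endUnion();
---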
objects for stream C.
Anyway, I've already solved this problem a few days back.
Regards,
Matt
On Mon, Dec 19, 2016 at 5:57 AM, Fabian Hueske wrote:
> Hi Matt,
>
> the combination of a tumbling time window and a count window is one way to
> define a sliding window.
> In your exam
ments in C (triangles), I have to
process n *independent* elements of B (n=2 in the example).
Maybe there's a better or simpler way to do this. Any idea is appreciated!
Regards,
Matt
[1] http://i.imgur.com/dG5AkJy.png
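A sketch of the combination Fabian describes, using the thread's A/B/C
element types (keySelector and the two window functions are hypothetical):
---
// input is a hypothetical DataStream<A>; MyWindowAggregate (A elements ->
// one B per window) and MyCountFunction (3 Bs -> one C) are placeholders.
DataStream<B> perWindow = input
    .keyBy(keySelector)
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    .apply(new MyWindowAggregate());

DataStream<C> result = perWindow
    .keyBy(keySelector)
    .countWindow(3, 1) // last 3 per-window results, sliding by one result
    .apply(new MyCountFunction());
---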
On Thu, Dec 15, 2016 at 3:22 AM, Matt wrote:
> Hello,
>
> I h
elements of B that would have gone into the same tumbling window, not with
the last 3 consecutive elements?
I hope the problem is clear, don't hesitate to ask for further
clarification!
Regards,
Matt
Regarding #4, after doing some more tests I think it's more complex than I
first thought. I'll probably create another thread explaining more that
specific question.
Thanks,
Matt
On Wed, Dec 14, 2016 at 2:52 PM, Jamie Grier wrote:
> For #1 there are a couple of ways to do this. The e
achieve this with
predefined triggers, or is a custom trigger the only way to go here?
Best regards,
Matt
Err, I meant if I'm not wrong *
On Mon, Dec 12, 2016 at 2:02 PM, Matt wrote:
> I just checked with version 1.1.3 and it works fine, the problem is that
> in that version we can't use Kafka 0.10 if I'm not work. Thank you for the
> workaround!
>
> Best,
> Matt
I just checked with version 1.1.3 and it works fine, the problem is that in
that version we can't use Kafka 0.10 if I'm not work. Thank you for the
workaround!
Best,
Matt
On Mon, Dec 12, 2016 at 1:52 PM, Yassine MARZOUGUI <y.marzou...@mindlytix.com> wrote:
> Yes, it was
I'm using 1.2-SNAPSHOT, should it work in that version?
On Mon, Dec 12, 2016 at 12:18 PM, Yassine MARZOUGUI <y.marzou...@mindlytix.com> wrote:
> Hi Matt,
>
> What version of Flink are you using?
> The incremental agregation with fold(ACC, FoldFunction, WindowFunction)
In case this is important: if I remove the WindowFunction and only use the
FoldFunction, it works fine.
I don't see what is wrong...
On Mon, Dec 12, 2016 at 10:53 AM, Matt wrote:
> Hi,
>
> I'm following the documentation [1] of window functions with incremental
> aggrega
Can you provide a working example of a fold function
with both a FoldFunction and a WindowFunction?
Regards,
Matt
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/windows.html#windowfunction-with-incremental-aggregation
[2] https://gist.github.com/cc7ed5570e4ce30c3a482ab835e3983d
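For later readers, the shape of the three-argument variant from [1] looks
roughly like this (a sketch; Event is a hypothetical input type, and fold()
was deprecated in later releases in favor of aggregate()):
---
import org.apache.flink.api.common.functions.FoldFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

DataStream<Tuple2<String, Long>> counts = stream
    .keyBy(new KeySelector<Event, String>() {
        @Override
        public String getKey(Event e) { return e.key; }
    })
    .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
    .fold(
        0L, // initial accumulator, folded incrementally per element
        new FoldFunction<Event, Long>() {
            @Override
            public Long fold(Long acc, Event e) { return acc + 1; }
        },
        new WindowFunction<Long, Tuple2<String, Long>, String, TimeWindow>() {
            @Override
            public void apply(String key, TimeWindow window, Iterable<Long> folded,
                              Collector<Tuple2<String, Long>> out) {
                // the iterable holds exactly one element: the folded result
                out.collect(Tuple2.of(key, folded.iterator().next()));
            }
        });
---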
,
Matt
[1] https://github.com/Dromit/StreamTest/
[2]
https://github.com/Dromit/StreamTest/blob/master/src/main/java/com/stream/Serde.java
[3]
https://github.com/Dromit/StreamTest/blob/master/src/main/java/com/stream/MainProducer.java#L19
[4]
https://github.com/Dromit/StreamTest/blob/master/src/main/java
/java/eu/bde/sc4pilot/fcd/FcdTaxiEvent.java#L60
I would like to see a more "generic" approach for the class Product in my
last message. I believe a more general-purpose de/serializer for POJOs
should be possible to achieve using reflection.
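One way to get that generality without hand-rolled byte handling is a
Jackson-based schema, sketched below; JsonSerde is a hypothetical name, it
requires Jackson on the classpath, and the package locations of the schema
interfaces vary across Flink versions:
---
import java.io.IOException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;

public class JsonSerde<T> implements DeserializationSchema<T>, SerializationSchema<T> {
    // static so the non-serializable mapper is not shipped with the schema
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private final Class<T> clazz;

    public JsonSerde(Class<T> clazz) {
        this.clazz = clazz;
    }

    @Override
    public byte[] serialize(T element) {
        try {
            return MAPPER.writeValueAsBytes(element);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public T deserialize(byte[] message) throws IOException {
        return MAPPER.readValue(message, clazz);
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return false;
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return TypeExtractor.getForClass(clazz);
    }
}
---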
On Wed, Dec 7, 2016 at 1:16 PM, Luigi Selmi wrote:
> Hi Ma
public class Product {
    public String code;
    public double price;
    public String description;
    public long created;
}
---
Regards,
Matt
[1] http://data-artisans.com/kafka-flink-a-practical-how-to/
e array to get each of them, same for two integers, and nearly any
> other types).
>
> I feel there should be a more general way of doing this regardless of the
> fields on the class you're de/serializing.
>
> What do you do in these cases? It should be a pretty comm