at:
https://github.com/ververica/lab-flink-latency
It should be rather simple to check this out and adapt it to your needs.
I'd love to get some feedback on it so that I can eventually get this into the
Flink docs/quickstarts.
Nico
[1] https://issues.apache.org/jira/browse/FLINK-24478
On Wednesday,
her with the RocksDB community.
Nico
On Wednesday, 4 August 2021 14:26:32 CEST David Anderson wrote:
> I am hearing quite often from users who are struggling to manage memory
> usage, and these are all users using RocksDB. While I don't know for
> certain that RocksDB is the cause
onal JVM parameters for TMs and JMs as shown in [1]
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/
deployment/config/#env-java-opts
On Tuesday, 18 May 2021 15:13:45 CEST Robert Cullen wrote:
> The new MINIO operator/tenant model requires connection over SSL. I’ve
>
Hi Andreas,
judging from [1], it should work if you refer to it via
security.ssl.rest.keystore: ./deploy-keys/rest.keystore
security.ssl.rest.truststore: ./deploy-keys/rest.truststore
Nico
[1]
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-KAFKA-KEYTAB-Kafkaconsumer
ob/cluster, it would be
possible to extract this from the Java process.
Our Ververica Platform, for example, also creates these key/truststores per
deployment [1] and uses Kubernetes secrets to store the certificates.
Nico
[1] https://docs.ververica.com/user_guide/application_operations/d
s and I would recommend removing
those from the fat jar as documented in [1].
Nico
[1] https://ververica.zendesk.com/hc/en-us/articles/360014323099
On Thursday, 10 September 2020 13:44:32 CEST Piotr Nowojski wrote:
> Hi Josson,
>
> Thanks again for the detailed answer, and sorry
omes with an
SSL setup [2] that you can enable with the click of a button, and it just works
as expected. Maybe that is also something to check out (not just for configuring SSL).
Feel free to contact me personally for more in this regard.
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.11
reading through [1], which also
contains an example at the bottom of the page showing how to use curl to test
or use the REST endpoint.
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/security-ssl.html
On Tuesday, 25 August 2020 14:40:04 CEST Adam Roberts wrote:
> He
erver
directory (blob.storage.directory) will store files under /tmp, and on
the JobManager they are only accessed during deployments, so they fall
under this cleanup detection.
A solution is to change the BLOB storage directory.
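For example, in flink-conf.yaml (the path below is just a placeholder):
==
blob.storage.directory: /var/flink/blobs
==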
Nico
[1]
https://data-artisans.com/flink-forward-berlin/resource
of, for example, a follow-up map operation.
Nico
On 13/09/18 14:52, Encho Mishinev wrote:
> Hi Nico,
>
> Unfortunately I can't share any of data, but it is not even data being
> processed at the point of failure - it is still in the
> matching-files-from-GCS phase.
>
>
minimal working example that you can share
(either privately with me, or here) and that shows this error?
Nico
On 29/08/18 14:19, Encho Mishinev wrote:
> Hello,
>
> I am using Flink 1.5.3 and executing jobs through Apache Beam 2.6.0. One
> of my jobs involves reading from Googl
Hi Ashish,
this was just merged today for Flink 1.6.
Please have a look at https://github.com/apache/flink/pull/6326 to check
whether this fulfils your needs.
Nico
On 14/07/18 14:02, ashish pok wrote:
> All,
>
> We are running into a blocking production deployment issue. It looks
>
If this is about too many timers and your application allows it, you may
also try to reduce the timer resolution and thus frequency by coalescing
them [1].
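For illustration, a minimal sketch inside a ProcessFunction, rounding timers
down to full seconds as in [1] (the `timeout` delay is an assumption):
==
// all timers for events within the same second collapse into one
long coalescedTime = ((ctx.timestamp() + timeout) / 1000) * 1000;
ctx.timerService().registerEventTimeTimer(coalescedTime);
==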
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/stream/operators/process_function.html#timer-coalescing
On 11/07
Hi Gyula,
as a follow-up, you may be interested in
https://issues.apache.org/jira/browse/FLINK-9413
Nico
On 04/05/18 15:36, Gyula Fóra wrote:
> Looks pretty clear that one operator takes too long to start (even on
> the UI it shows it in the created state for far too long). Any ide
tuck: I'll have a further look into the
logs and state transition and will come back to you.
Nico
On 21/05/18 09:27, Amit Jain wrote:
> Hi All,
>
> I totally missed this thread. I've encountered same issue in Flink
> 1.5.0 RC4. Please look over the attached logs of
Also, please have a look at the other TaskManagers' logs, in particular
the one that is running the operator that was mentioned in the
exception. You should look out for the ID 98f5976716234236dc69fb0e82a0cc34.
Nico
PS: Flink log files should compress quite nicely if they grow too big :
last Friday.
From the information you provided, I suppose you are running a streaming
job in Flink 1.4, is that right? Your example looks like a simpler setup: can
you try to minimise it so that you can share the code and we can have a
look?
Regards
Nico
On 18/04/18 01:59, James Yu wrote:
> Miguel
Hi James,
The checkpoint coordinator at the JobManager triggers the
checkpoints by injecting checkpoint barriers into the sources. These
travel to the TaskManagers via the same communication channels the data
flows through. Please refer to [1] for more details.
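As a side note, checkpointing itself is enabled per job; a minimal sketch
(the 10s interval is an arbitrary choice):
==
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(10000); // checkpoint interval in milliseconds
==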
Nico
[1]
https
No, currently it is up to you to decide whether you need to scale
and how. If, for a running Flink job, you decide to scale, you can:
- flink cancel --withSavepoint <targetDirectory> <jobID>
- flink run -p <newParallelism> --fromSavepoint <savepointPath> <jarFile>
Nico
On 29/03/18 19:27, NEKRASSOV, ALEXEI wrote:
> Is there an auto-scaling feature in Fl
probably died from something else, but since you didn't see anything in
the logs there, I don't think this is an issue.
Nico
On 29/03/18 16:24, Gary Yao wrote:
> Hi Juho,
>
> Thanks for the follow up. Regarding the BlobServerConnection error, Nico
> (cc'ed)
> might ha
If you refer to the files under the conf folder, these are only used by
the standalone cluster startup scripts, i.e. bin/start-cluster.sh and
bin/stop-cluster.sh
Nico
On 28/03/18 12:27, Alexander Smirnov wrote:
> Hi,
>
> are the files needed only on cluster startup stage?
> are th
Flink does not decide the parallelism based on your job.
There is a default parallelism (configured via parallelism.default [1],
by default 1) which is used if you do not specify it yourself.
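For reference, the corresponding entry (1 being the shipped default):
==
# in flink-conf.yaml
parallelism.default: 1
==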
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/config.html#common-options
On
ow to count 3 minutes from any other time, then
please use TumblingEventTimeWindows#of(Time size, Time offset).
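A minimal sketch of this variant (the 3-minute size matches your example; the
1-minute offset and the field positions are just for illustration):
==
stream
    .keyBy(0)
    .window(TumblingEventTimeWindows.of(Time.minutes(3), Time.minutes(1)))
    .sum(1);
==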
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/windows.html#tumbling-windows
On 27/03/18 16:22, NEKRASSOV, ALEXEI wrote:
>
operator's
parallelism above this value. The actual parallelism can be set per job
in your program, but also in the Flink client:
flink run -p <parallelism>
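For illustration, a minimal sketch setting both in the program (the values are
placeholders):
==
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
env.setMaxParallelism(128); // upper bound, see [1]
env.setParallelism(4);      // actual parallelism for this job
==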
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-master/ops/production_ready.html#set-maximum-parallelism-for-operators-explicitly
On
twork would cover the additional cost during snapshot creation.
Nico
On 21/03/18 06:01, Miyuru Dayarathna wrote:
> Hi,
>
> Since we could not observe log messages such as "Asynchronous RocksDB
> snapshot" in the Flink's log files, we ran the application with Flink
>
d the web UI if you need
this.
Regards
Nico
On 13/03/18 11:16, Sampath Bhat wrote:
> Hello
>
> I would like to know if flink supports any user level authentication
> like username/password for flink web ui.
>
> Regards
> Sampath S
>
e. an appropriate entry for 'KafkaClient'.
Regards
Nico
On 13/03/18 08:42, sundy wrote:
>
> Hi ,all
>
> I use the code below to set kafka JASS config, the
> serverConfig.jasspath is /data/apps/spark/kafka_client_jaas.conf, but
> on flink standalone dep
env.execute("Socket Window WordCount");
I also tried with a StreamTransformation only but got the same message.
Regards
Nico
On 09/03/18 14:33, eSKa wrote:
> Hi guys,
>
> We were trying to use UI's "Submit new job" functionality (and later REST
> endpoints fo
close() actually comes from the RichFunction, so the handling should not
differ from that of a ProcessFunction.
Can you give more details on your program and why you think it was not
called?
Regards
Nico
On 15/03/18 21:16, Gregory Fee wrote:
> Hello! I had a program lose a task manager the ot
3TupleDataStream(env: StreamExecutionEnvironment):
DataStream[(Int, Long, String)] = {
val data = new mutable.MutableList[(Int, Long, String)]
data.+=((1, 1L, "Hi"))
data.+=((2, 2L, "Hello"))
data.+=((3, 2L, "Hello world"))
data.+=((4, 3L, "Hello world,
t script for you now.
Regards
Nico
On 18/03/18 17:06, miki haiat wrote:
> I think that you can use the catalog option only if you install dc/os ?
>
>
> iv installed mesos and marathon
>
>
>
>
> On Sun, Mar 18, 2018 at 5:59 PM, Lasse Nedergaard
> mai
e database, e.g. by updating a key multiple times, you may indeed have
more data to store. Judging from the state sizes you gave, this is
probably not the case.
Let's get started with this and see whether there is anything unusual.
Regards,
Nico
[1]
https://berlin.flink-forward.org/kb
I think I found a code path (a race between threads) that may lead to two
markers being in the list.
I created https://issues.apache.org/jira/browse/FLINK-8896 to track this
and will have a pull request ready (probably) today.
Nico
On 07/03/18 10:09, Mu Kong wrote:
> Hi Gordon,
>
> T
?
Are you sure the JobManager is running?
How do you start the cluster? If you have been using start-cluster.sh
(as per [1]), please also try to start the services manually to check
whether there's something wrong there.
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-releas
o see what is going on.
Depending on where the failure was, there may even be logs on the master
machine.
Nico
On 04/03/18 15:52, Yesheng Ma wrote:
> Hi all,
>
> When I execute bin/start-cluster.sh on the master machine, actually
> the command `nohup /bin/bash -l bin/jobmanager.s
l
commit the offsets ourselves and will try to commit every time a
checkpoint completes. In case the last commit failed, we will
simply commit the new offsets with the next checkpoint instead.
Nico
On 05/03/18 17:11, Edward wrote:
> We have noticed that the Kafka offset auto-commit func
rting them again worked without a flaw. My bet is on
something Flink-external because of the "Temporary failure in name
resolution" error message.
Maybe @Patrick (cc'd) has encountered this before and knows more.
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-r
moved from the list of partitions to find leaders for in
the code and is solely used when cancelling the fetcher.
I don't know whether this is possible, but I suppose there could be more
than one marker and we should call removeAll() instead - @Gordon, can
you elaborate/check whether this could
changes are merged now.
Nico
On 06/03/18 11:24, Paris Carbone wrote:
> Hey,
>
> Indeed checkpointing iterations and dealing with closed sources are
> orthogonal issues, that is why the latter is not part of FLIP-15. Though, you
> kinda need both to have meaningful checkpoint
rding this improvement either.
@Stephan: is this documented somewhere?
Nico
On 02/03/18 23:55, Ken Krugler wrote:
> Hi Stephan,
>
> Thanks for the update.
>
> So is support for “running checkpoints with closed sources” part
> of FLIP-15
> <https://cwiki.apache.org/con
parameters to the docker container via
additions to the command starting the docker image.
Nico
[1]
https://github.com/docker-flink/docker-flink/tree/master/1.4/hadoop28-scala_2.11-alpine
On 27/02/18 18:25, JP de Vooght wrote:
> Hello Nico,
>
> took me a while to respond. Thank you for
arting the job and
redoing all of the work. [...] The periodic in-flight checkpoints are
not used here.
DataStream:
This one would start immediately inserting data (as it is a streaming
job), and draw periodic checkpoints that make sure replay-on-failure
only has to redo a bit, not everything.
Hi Ken,
LocalFlinkMiniCluster should run checkpoints just fine. It looks like it
was attempting to even create one but could not finish. Maybe your
program was not fully running yet?
Can you tell us a little bit more about your set up and how you
configured the LocalFlinkMiniCluster?
Nico
On 23
streaming setups, we highly recommend setting this value to false, as the
core state backends currently do not use the managed memory.
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/config.html#managed-memory
On 22/02/18 19:56, santoshg wrote:
> An update - I was able
Judging from the code, you should separate different jars with a colon
":", i.e. "--addclasspath jar1:jar2"
Nico
On 26/02/18 10:36, kant kodali wrote:
> Hi Gordon,
>
> Thanks for the response!! How do I add multiple jars to the classpaths?
> Are they separated by
d be in there, feel
free to open an improvement request in our issue tracker at
https://issues.apache.org/jira/browse/FLINK
Nico
On 16/01/18 13:35, Adrian Vasiliu wrote:
> Hi Nico,
> Thanks a lot. I did consider that, but I've missed the clarification of
> the contract brought by th
thing regarding your source: typically you'd want the
checkpoint lock around the collect() call, i.e.
synchronized (ctx.getCheckpointLock()) {
    ctx.collect(...);
}
Nico
On 16/01/18 12:27, George Theodorakis wrote:
> Thank you very much, indeed this was my bottleneck.
>
> My problem no
this, for example:
==
socketChannel.configureBlocking(false);
socketChannel.connect(new InetSocketAddress("jenkov.com", 80));
while (!socketChannel.finishConnect()) {
    // wait, or do something else...
}
==
Nico
[1]
https://docs.oracle.com/javase/7/docs/ap
Hi Adrian,
couldn't you solve this by providing your own DeserializationSchema [1],
possibly extending from JSONKeyValueDeserializationSchema and catching
the error there?
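A minimal sketch of that approach (the class name is made up; the deserialize()
signature comes from the KeyedDeserializationSchema interface used by [1], and
returning null skips the record):
==
public class LenientJsonSchema extends JSONKeyValueDeserializationSchema {
    public LenientJsonSchema(boolean includeMetadata) {
        super(includeMetadata);
    }

    @Override
    public ObjectNode deserialize(byte[] messageKey, byte[] message,
            String topic, int partition, long offset) throws IOException {
        try {
            return super.deserialize(messageKey, message, topic, partition, offset);
        } catch (IOException e) {
            return null; // skip the corrupt record instead of failing the job
        }
    }
}
==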
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/connectors/kafka.html#the-deserializations
Just found a nice (but old) blog post that explains Flink's integration
with Kafka:
https://data-artisans.com/blog/kafka-flink-a-practical-how-to
I guess the basics are still valid.
Nico
On 16/01/18 11:17, Nico Kruber wrote:
> Hi Jason,
> I'd suggest to start with [1] and [2]
ggest to create some small benchmarks for your setup
since this probably depends on the cluster architecture and the
parallelism of the operators and the number of Kafka partitions.
Maybe Gordon (cc'd) can give some more insights.
Regards
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-r
s.
Please comment the first line out and uncomment the following one to
read like this:
==
# This affects logging for both user code and Flink
#log4j.rootLogger=INFO, file
# Uncomment this if you want to _only_ change Flink's logging
log4j.logger.org.apache.flink=INFO
==
Nico
On
TaskManager logs and some more details on
the job you are running?
Nico
On 16/01/18 07:04, Data Engineer wrote:
> This question has been asked on StackOverflow:
> https://stackoverflow.com/questions/48262080/how-to-get-automatic-fail-over-working-in-flink
>
> I am using Apache Fl
is
only served to you via HTML. For HA, this may come from a different
JobManager than the one whose web interface you are browsing.
I'm including Till (cc'd) as he might know more.
Nico
On 16/01/18 09:22, jelmer wrote:
> HI,
>
> We recently upgraded our test environment to from flink 1.3
the
classpath is set up a bit differently, I suppose. Maybe you are also
affected by the Maven shading problem for Maven >= 3.3 [1][2].
As a workaround, can you try to shade elasticsearch's netty away? See
[3] for details.
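A sketch of such a relocation with the maven-shade-plugin (the target package
is a placeholder; adapt it to your project):
==
<relocation>
    <pattern>io.netty</pattern>
    <shadedPattern>org.mycompany.shaded.io.netty</shadedPattern>
</relocation>
==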
Regards
Nico
[1] https://issues.apache.org/jira/browse/FLINK-5013
[
's classpath (if available).
Nico
On 01/12/17 14:42, Jens Oberender wrote:
> Hi
>
> A workmate of mine tried to migrate the existing flink connector to
> ElasticSearch 6 but we had problems with netty dependencies that clashed
> (Flink uses 4.0.27 and ES is on 4.1).
>
org.apache.flink.fs.s3presto.shaded.com.amazonaws.services.s3.internal.S3ErrorResponseHandler
Therefore, the cause for this exception would be interesting (as Stephan
suggested).
Nico
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html#shaded-hadooppresto-s3-file-systems-recommended
On 03/01/18 10:22, Stephan Ewen
Hi Vishal,
let me already point you towards the JIRA issue for the credit-based
flow control: https://issues.apache.org/jira/browse/FLINK-7282
I'll have a look at the rest of this email thread tomorrow...
Regards,
Nico
On 02/01/18 17:52, Vishal Santoshi wrote:
> Could you please poi
Sorry for the late response,
but I finally got around to adding this workaround to our "common issues"
section with PR https://github.com/apache/flink/pull/5231
Nico
On 29/11/17 09:31, Ufuk Celebi wrote:
> Hey Dominik,
>
> yes, we should definitely add this to the docs.
>
tead shade into datastax' namespace as shown?
This would also make sure to follow the shaded path in that class which,
for example, deactivates epoll.
Nico
On 18/12/17 15:43, Timo Walther wrote:
> Hi Dominik,
>
> thanks for reporting your issue. I will loop in Chesnay that might kno
Hi,
are you running Flink on a JRE >= 8? We dropped Java 7 support for
Flink 1.4.
Nico
On 14/12/17 12:35, 杨光 wrote:
> Hi,
> I am usring flink single-job mode on YARN. After i upgrade flink
> verson from 1.3.2 to 1.4.0, the parameter
> "yarn.taskmanager.env.JAVA_HOME"
set the log level to INFO, you should see a message like "Created
BLOB server storage directory ..." with the path. Can you double check
whether there is really no space left there?
Nico
On 12/12/17 08:02, Chan, Regina wrote:
> And if it helps, I’m running on flink 1.2.1. I sa
ation parameter
in Flink's flink-conf.yaml.
Regards
Nico
On Wednesday, 6 December 2017 08:45:19 CET bernd.winterst...@dev.helaba.de
wrote:
> Hi Nico
> I think there were changes in the default port fort the BLOB server. I
> missed the fact that the Kubernetes configuration was still
ke it through (also see [1] for details on how the checkpointing works).
So yes, your second assumption is true and that was what I meant by "get
back to normal once your sink has caught up with all buffered events" in
my first message and I assume Stephan also meant with the cascading eff
block this?)
Did you, by any chance, set up SSL, too? There was a recent thread on
the mailing list [1] where a user had some problems with
"security.ssl.verify-hostname" being set to true, which may be related.
Nico
[1]
https://lists.apache.org/t
lp us identify the problem by providing logs at DEBUG level
(did akka report any connection loss and gated actors? or maybe some
other error in there?) or even a minimal program to reproduce.
Nico
On 01/12/17 07:36, Steven Wu wrote:
>
> org.apache.flink.runtime.checkpoint.Checkpoint
?
Nico
On 28/11/17 00:19, Chan, Regina wrote:
> Hi,
>
>
>
> As I moved from Flink 1.2.0 to 1.3.2 I noticed that the TaskManager may
> have all tasks with FINISHED but then take about 2-3 minutes before the
> Job execution switches to FINISHED. What is it doing that’s taking th
with "start-cluster.sh" instead?
Nico
On Tuesday, 21 November 2017 07:26:09 CET Shailesh Jain wrote:
> a) Nope, there are no taskmanager logs, the job never switches to RUNNING
> state.
>
> b) I think so, because even when I start the job with 4 devices, only 1
> slo
>
> If jar -tf target/program.jar | grep MeasurementTable shows the class is
> present, are there other dependencies missing? You may need to add runtime
> dependencies into your pom or gradle.build file.
>
> On Tue, Nov 21, 2017 at 2:28 AM, Nico Kruber wrote:
> > Hi S
r (which I find
unlikely though) - maybe Aljoscha (cc'd) can elaborate a bit more on the Beam
side and may have some other idea.
Nico
On Tuesday, 14 November 2017 14:54:28 CET Shankara wrote:
> Hi Nico,
>
>
> - how do you run the job?
>
>>> If we run same pr
e to debug into what is happening by running this in
your IDE in debug mode and pausing the execution when you suspect it is hanging.
Nico
On Tuesday, 14 November 2017 14:27:36 CET Piotr Nowojski wrote:
> 3. Nico, can you take a look at this one? Isn’t this a blob server issue?
>
> Piotr
throughput accordingly.
Nico
On Tuesday, 14 November 2017 11:29:33 CET Chesnay Schepler wrote:
> I don't think there's anything you can do except reducing the parallelism or
> the size of your messages.
>
> A separate serializer is used for each channel as the serializers are
> st
From what I read in [1], simply add JVM options to env.java.opts as you would
when you start a Java program yourself, so setting "-XX:+UseG1GC" should
enable G1.
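For example, in flink-conf.yaml:
==
env.java.opts: -XX:+UseG1GC
==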
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/
config.html#common-options
On Friday,
.
If, however, all your operators have the same parallelism, doing multiple
keyBy(0) calls in your program will not re-shuffle the data, because of the
deterministic assignment of keys to operators.
Nico
[1] https://flink.apache.org/features/2017/07/04/flink-rescalable-state.html
On Thursday, 9 Nov
Hi Sathya,
have you checked this yet?
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/
jobmanager_high_availability.html
I'm no expert on the HA setup, have you also tried Flink 1.3 just in case?
Nico
On Wednesday, 8 November 2017 04:02:47 CET Sathya Hariesh Prakash (sat
e file must be truncated to at
recovery")?
Nico
On Tuesday, 7 November 2017 19:51:35 CET Ivan Budincevic wrote:
> Hi all,
>
> We recently implemented a feature in our streaming flink job in which we
> have a AvroParquetWriter which we build
Table" (an inner class starting in lower-case?), really in the jar
file? It might be a wrongly generated protobuf class ...
Nico
On Tuesday, 7 November 2017 15:34:35 CET Shankara wrote:
> Hi,
>
> I am using flink 2.1.0 version and protobuf-java 2.6.1 version.
> I am getting b
h gate has multiple
channels, and the metrics offered are the sum across all of them.
Nico
On Monday, 23 October 2017 09:31:04 CET Timo Walther wrote:
> Hi Aitozi,
>
> I will loop in people that are more familar with the network stack and
> metrics. Maybe this is a bug?
>
> Rega
ting the metric, the difference to `System.currentTimeMillis()`
will be used, which is based on system time and may decrease if the clock is
adjusted, e.g. via NTP. Also, this is probably called from a different thread
and `System.currentTimeMillis()` apparently may jump backwards there as well
[1]
ommended for it? How about EFS?
I can't think of a reason this should be any different to non-incremental
checkpoints. Maybe Stefan (cc'd) has some more info on this.
For more details on the whole topic, I can recommend Stefan's talk at the last
Flink Forward [4] though.
Nico
the asynchronous
updater thread.
I'm including Gordon (cc'd) just to be sure as he may know more.
Nico
On Friday, 3 November 2017 04:06:02 CET Ron Crocker wrote:
> We have a system where the Kafka partition a message should go into is a
> function of a value in the message. O
falling behind, the difference may be negative.
Nico
On Thursday, 2 November 2017 17:58:51 CET Sofer, Tovi wrote:
> Hi group,
>
> Can someone maybe elaborate how can latency gauge shown by latency marker be
> negative?
>
> 2017-11-02
Hi Michael,
yes, it seems that the self-contained jars only contain the Java examples.
You may also follow the quickstart [1] to get started writing Flink streaming
programs in Scala.
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/quickstart/
scala_api_quickstart.html
from inside Scala, so I included Gordon (cc'd) who may know more.
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/
types_serialization.html
On Saturday, 23 September 2017 10:11:28 CEST shashank agarwal wrote:
> Hello Team,
>
> As our schema evolves d
-SNAPSHOT.jar containing both (which you can verify with
your favorite archiver tool for zip files).
Afaik, there is no simple switch to turn off Java or Scala examples. You may
either adapt the pom.xml or create your own project with the examples and
programming languages you need.
Nico
On
Hi Federico,
I also did not find any implementation of a Hive sink, nor many details on this
topic in general. Let me forward this to Timo and Fabian (cc'd) who may know
more.
Nico
On Friday, 22 September 2017 12:14:32 CEST Federico D'Ambrosio wrote:
> Hello everyone,
>
>
) into the JobManager.
Nico
On Sunday, 24 September 2017 02:48:40 CEST Elias Levy wrote:
> I am curious, why is the History Server a separate process and Web UI
> instead of being part of the Web Dashboard within the Job Manager?
Nico
[1] https://issues.apache.org/jira/browse/FLINK-3172
On Sunday, 24 September 2017 03:04:51 CEST Elias Levy wrote:
> I am wondering why HA mode there is a need for a separate config parameter
> to set the JM RPC port (high-availability.jobmanager.port) and why this
> parameter accept
On Thursday, 21 September 2017 20:08:01 CEST Narendra Joshi wrote:
> Nico Kruber writes:
> > according to [1], even with asynchronous state snapshots (see [2]), a
> > checkpoint is only complete after all sinks have received the barriers and
> > all (asynchronous) snapshot
k (although I'm curious
where the "aBuffer" in the error message comes from). I'm forwarding this
to Gordon in CC because he probably knows better as he was involved in state
migration before (afaik).
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/op
works in remote environments, i.e. if the jar is uploaded to a
running Flink cluster and executed there.
You can also do that if you like to, i.e. "mvn package" the wiki example and
let Flink run the code as in [2].
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/
, i.e. although
they are running in parallel, each stores the expected state.
If you want to know more about the details of how this is done, I can
recommend Stefan's (cc'd) talk at Flink Forward last week [4]. He may also be
able to answer in more detail in case I missed something.
proxy/application_1504649135200_0001/jobs/
1a0fd176ec8aabb9b8464fa481f755f0/cancel-with-savepoint/target-directory/
s3%253A%252F%252F%252Fremit-flink
Nico
On Wednesday, 20 September 2017 00:52:05 CEST Emily McMahon wrote:
> Thanks Eron & Fabian.
>
> The issue was hitting a yarn proxy url
instances? Also that you're not accidentally running
the yarn-session.sh script of 1.3?
Nico
On Wednesday, 6 September 2017 06:36:42 CEST Sunny Yun wrote:
> Hi,
>
> Using flink 1.2.0, I faced to issue
> https://issues.apache.org/jira/browse/FLINK-6117
> https://issues.ap
3/dev/connectors/
guarantees.html
Nico
On Thursday, 31 August 2017 16:06:55 CEST Philip Doctor wrote:
> I have a few Flink jobs. Several of them share the same code. I was
> wondering if I could make those shared steps their own job and then specify
> that the sink for one process was the sour
to sum up: the lines you were seeing seem to be the download and upload of the
TaskManager logs from the web interface, which go through the BlobServer and
its components.
Nico
On Thursday, 31 August 2017 11:51:27 CEST Federico D'Ambrosio wrote:
> Hi,
>
> 1) I'm using Fli
they are stuck (using jstack)?
4) These PUT requests in the TM logs are strange, unless you showed the TM
logs in the web interface - did you?
Nico
On Thursday, 31 August 2017 09:45:59 CEST Fabian Hueske wrote:
> Hi Federico,
>
> Not sure what's going on there but Nico (in CC) is
1536 (in your case)
buffers should be enough, and you said you are already using 36864.
So this brings me back to the question of why it is input-dependent.
Can you share your log files (privately if you prefer)?
Nico
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/
config
though where we planned to make the BlobServer's use more
generic than what is actually needed at this stage.
Nico
On Wednesday, 23 August 2017 15:09:45 CEST Timo Walther wrote:
> I think Nico in CC knows more about it.
>
> Am 23.08.17 um 14:34 schrieb anugrah nayar:
> > Hey,
>
Hi Chao,
what I meant by "per-record base" was actually supposed to be "per-event base"
(event = one entity of whatever the stream contains). As per the API:
processing is supposed to happen one event at a time, and this is what is
performed internally, too.
Nico
On Thursday,