Did you upgrade both the client and cluster to 1.6.0? The server
returned a completely empty response, which shouldn't be possible if it
runs 1.6.0.
On 05.09.2018 07:27, 潘 功森 wrote:
Hi Vino,
Below are the dependencies I used, please have a look.
I found it also includes flink-connector-kafka-0.10
The most likely explanation is that you haven't configured the cluster
to have enough resources to run multiple jobs.
Try increasing taskmanager.numberOfTaskSlots:
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#taskmanager-numberoftaskslots-1
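For example, in flink-conf.yaml (the value 4 below is just an
illustration; size it to your workload and machines):

    taskmanager.numberOfTaskSlots: 4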
On 04.09.2018 15:13, Rad Rad wrote:
You will either have to create a custom Flink build (i.e. there's no
option to change this behavior) or disable all WARN messages by the
CliFrontend class with the logger configuration.
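For the logger route, a sketch of the log4j entry, assuming log4j and
the CliFrontend package used in recent Flink versions (adjust the
package if yours differs):

    # raise the threshold for the client frontend so WARN messages are dropped
    log4j.logger.org.apache.flink.client.cli.CliFrontend=ERROR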
On 03.09.2018 15:21, Sarathy, Parth wrote:
The message is at WARN level and not at INFO or DEBUG level, so
Please enable DEBUG logging for the client and TRACE logging for the
cluster.
For the client, look for log messages starting with "Sending request
of", this will contain the host and port that requests are sent to by
the client. Verify that these are correct and match the host/port that
you u
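For reference, the corresponding log-level changes would look roughly
like this, assuming the default log4j files that ship with Flink (the
"file" appender name matches the stock configuration; adjust if you
changed it):

    # client side, in log4j-cli.properties
    log4j.rootLogger=DEBUG, file

    # cluster side, in log4j.properties
    log4j.rootLogger=TRACE, file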
We collect driving data from thousands of users; each vehicle is associated
with an IMEI (unique code). The device installed in these vehicles emits GPS
points at 5-second intervals. My requirement is to assemble all the GPS
points that belong to a single trip and construct a Trip object, for a given
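One common way to express this, as a rough sketch: key by IMEI and use
event-time session windows, treating a gap between GPS points as a trip
boundary. GpsPoint, Trip, Trip.fromPoints, and the 15-minute gap are all
hypothetical placeholders:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
    import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
    import org.apache.flink.util.Collector;

    DataStream<GpsPoint> points = ...;

    DataStream<Trip> trips = points
        .keyBy(p -> p.imei)                                        // one key per vehicle
        .window(EventTimeSessionWindows.withGap(Time.minutes(15))) // gap = trip boundary
        .process(new ProcessWindowFunction<GpsPoint, Trip, String, TimeWindow>() {
            @Override
            public void process(String imei, Context ctx,
                                Iterable<GpsPoint> pts, Collector<Trip> out) {
                out.collect(Trip.fromPoints(imei, pts));           // assemble the Trip
            }
        });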
I wrote a job using Typesafe config. The config file location must be
specified using JVM options like "-Dconfig.resource=dev.conf".
How can I do that with the Flink dashboard?
Thanks for the help!
You can't set JVM options when submitting through the Dashboard. This
cannot be implemented since no separate JVM is spun up when you submit a
job that way.
On 05.09.2018 11:41, zpp wrote:
I wrote a job using Typesafe config. The config file location must be
specified using JVM options like "-Dconfig.resource=dev.conf".
Hey,
You can't, as Chesnay has already said, but for your use case you could use
program arguments instead of JVM options and they will work equally well.
Best Regards,
Dom.
Sent from the Mail app for Windows 10
From: Chesnay Schepler
Sent: Wednesday, September 5, 2018 11:43
To: zpp; user@flink.apache.org
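Following up on Dominik's suggestion, a minimal sketch of reading the
config name from program arguments (which the dashboard does let you
set) and handing it to Typesafe config; the --config flag and the
JobEntry class are made up for illustration:

    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;
    import org.apache.flink.api.java.utils.ParameterTool;

    public class JobEntry {
        public static void main(String[] args) {
            // e.g. pass "--config dev.conf" in the dashboard's program arguments field
            ParameterTool params = ParameterTool.fromArgs(args);
            String resource = params.get("config", "application.conf");
            Config config = ConfigFactory.parseResources(resource).resolve();
            // ... build the job using `config`, then env.execute()
        }
    }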
Hi all,
I'm trying to use CONVERT or CAST functions from Calcite docs to query some
table with Table API.
https://calcite.apache.org/docs/reference.html
csv_table.select("col1,CONCAT('field1:',col2,',field2:',CAST(col3 AS
string))");
col3 is actually described as int in the CSV schema and CONCAT doe
Hi
You are using SQL syntax in a Table API query. You have to stick to Table
API syntax or use SQL, as in:
tEnv.sqlQuery("SELECT col1,CONCAT('field1:',col2,',field2:',CAST(col3 AS
string)) FROM csvTable")
The Flink documentation lists all supported functions for Table API [1] and
SQL [2].
Best, Fabian
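For reference, a rough Table API equivalent of the same expression,
using string-expression syntax; this assumes that `+` concatenates
strings and `.cast(STRING)` behaves as documented for your Flink
version, so verify against the Table API function list:

    import org.apache.flink.table.api.Table;

    Table result = csv_table.select(
        "col1, 'field1:' + col2 + ',field2:' + col3.cast(STRING)");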
Hi All,
I was playing with queryable state. As a queryable stream cannot be modified,
how do I use the output of, say, my reduce function for further processing?
Below is one example. I can see that I am processing the data twice: once
for the queryable stream and once for my original stream. That m
I am trying to upgrade a job from Flink 1.4.2 to 1.6.0.
When we do a deploy we cancel the job with a savepoint then deploy the new
version of the job from that savepoint. Because our jobs tend to have a lot
of state it often takes multiple minutes for our savepoints to complete.
On Flink 1.4.2 we
I have used connected streams where one part of the connected stream maintains
state and the other part consumes it.
However, it was not queryable externally. For state that is queryable
externally, you are right: you probably need another operator to store the
state and support queryability.
Sent
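A sketch of that "another operator" approach: keep the reduce logic in a
stateful function whose state descriptor is marked queryable, so the
same result both feeds the downstream stream and is externally
queryable. Event, MyReduce, and the state names are hypothetical:

    import org.apache.flink.api.common.state.ReducingState;
    import org.apache.flink.api.common.state.ReducingStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    public class QueryableReduce extends KeyedProcessFunction<String, Event, Event> {
        private transient ReducingState<Event> state;

        @Override
        public void open(Configuration parameters) {
            ReducingStateDescriptor<Event> desc =
                new ReducingStateDescriptor<>("sum", new MyReduce(), Event.class);
            desc.setQueryable("sum-query");  // exposes this state to external queries
            state = getRuntimeContext().getReducingState(desc);
        }

        @Override
        public void processElement(Event e, Context ctx, Collector<Event> out)
                throws Exception {
            state.add(e);                    // update the queryable state
            out.collect(state.get());        // and keep the result in the stream
        }
    }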
Hi Regina,
I've just started using Flink, and recently I've been asked to store Flink
data on HDFS in Parquet format. I've found several examples on GitHub and in
the community, but there are always bugs. I see your storage directory, and
that's what I want, so I'd like to ask you to reply to me for a sl
Hello,
Thanks for the response. I had already tried setting the log level to debug in
log4j-cli.properties, logback-console.xml, and log4j-console.properties but no
additional relevant information comes out. On the server, all that comes out
are zookeeper ping responses:
2018-09-05 15:16:56,786
Hello,
I'm having difficulty reading the status (such as time taken for each
dataflow operator in a job) of jobs that have completed.
First, when I click on "Completed jobs" on the web interface (by default at
8081), no job shows up.
I see jobs that exist as "Running", but as soon as they finish,
When you create an environment that way, the cluster is shut down
once the job completes.
The WebUI can _appear_ to still be working since all the files, and data
about the job, are cached in the browser.
On 05.09.2018 17:39, Miguel Coimbra wrote:
Hello,
I'm having difficulty reading the status
Hello All,
I have recently made the upgrade from Flink 1.3.2 to 1.5.2. Everything has
gone well, except I'm no longer seeing the numRunningJobs jobmanager
metric. We use this for some notifications and recovery. According to the
docs, it should still be available. Has something changed or mayb
Thanks for the reply.
However, I think my case differs because I am running a sequence of
independent Flink jobs on the same environment instance.
I only create the LocalExecutionEnvironment once.
The web manager shows the job ID changing correctly every time a new job is
executed.
Since it is t
This is a known issue with no workaround except switching the cluster to
legacy mode: https://issues.apache.org/jira/browse/FLINK-10135
On 05.09.2018 19:58, Bryant Baltes wrote:
Hello All,
I have recently made the upgrade from Flink 1.3.2 to 1.5.2.
Everything has gone well, except I'm no lon
Ok, thanks!
On Wed, Sep 5, 2018 at 2:11 PM Chesnay Schepler wrote:
> This is a known issue with no workaround except switching the cluster to
> legacy mode: https://issues.apache.org/jira/browse/FLINK-10135
>
> On 05.09.2018 19:58, Bryant Baltes wrote:
> > Hello All,
> >
> > I have recently made
No, the cluster isn't shared. For each job a separate cluster is spun up
when calling execute(), at the end of which it is shut down.
For explicit creation and shutdown of a cluster, I would suggest
executing your jobs as a test that contains a MiniClusterResource.
On 05.09.2018 20:59, Miguel Coimbra wrote:
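A rough JUnit sketch of that suggestion, assuming Flink 1.6's
flink-test-utils (the resource and configuration class names have
shifted between Flink versions, so check the ones in your build):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.test.util.MiniClusterResource;
    import org.apache.flink.test.util.MiniClusterResourceConfiguration;
    import org.junit.ClassRule;
    import org.junit.Test;

    public class MultiJobTest {
        // one shared mini cluster for all jobs in this test class
        @ClassRule
        public static final MiniClusterResource CLUSTER = new MiniClusterResource(
            new MiniClusterResourceConfiguration.Builder()
                .setNumberTaskManagers(1)
                .setNumberSlotsPerTaskManager(4)
                .build());

        @Test
        public void runJobs() throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            // ... define and env.execute() each job; they all run on the shared cluster
        }
    }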
Hi guys,
I enabled incremental Flink checkpoints for my job. I had the job read
messages at a stable rate, and for each message the job stores something in
keyed state. My question is: since the state grows by the same amount every
minute, shouldn't the incremental checkpoint size remai
Hi Jelmer,
Here's a similar question; you can refer to the discussion there. [1]
[1]:
http://mail-archives.apache.org/mod_mbox/flink-user/201808.mbox/%3ccamjeyba9zjx_huqtlxdcu87hphrvrzxzoyjpqxzxdkq2h_k...@mail.gmail.com%3E
Hi Till and Chesnay,
Recently, several users have encountered this
Hi all,
Recently I wanted to write a test for a batch case in which some of the
tasks are FINISHED.
I tried writing a finite SourceFunction and an (expectedly) blocking
SinkFunction, but the job FINISHED. This surprises me. Why could the Sink
FINISH in such a case?
The job and log are attached.
Best,
tison.
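For concreteness, a minimal sketch of the setup described above (types
and numbers are placeholders). As I understand it, a sink that blocks
inside invoke() should hold its task in RUNNING, so if the job still
reaches FINISHED, the blocking may not be happening where expected:

    import org.apache.flink.streaming.api.functions.sink.SinkFunction;
    import org.apache.flink.streaming.api.functions.source.SourceFunction;

    // finite source: run() returns after emitting, so the source task FINISHES
    public static class FiniteSource implements SourceFunction<Long> {
        @Override
        public void run(SourceContext<Long> ctx) {
            for (long i = 0; i < 10; i++) {
                ctx.collect(i);
            }
        }
        @Override
        public void cancel() {}
    }

    // intended-to-block sink: invoke() never returns for the first record
    public static class BlockingSink implements SinkFunction<Long> {
        @Override
        public void invoke(Long value) throws Exception {
            Thread.sleep(Long.MAX_VALUE);
        }
    }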
Hello,
Does keyed managed ListState preserve element order? For example, if I call
listState.add(e1); listState.add(e2); listState.add(e3); does ListState
guarantee that listState.get() will return elements in the order they were
added (e1, e2, e3)?
Alexey
Hi,
With some additional research,
*Before the flag*
I realized that for failed containers (those that exited for a specific
reason) we were still requesting a new TM container and launching a TM. But
for the "Detected unreachable: [akka.tcp://fl...@blahabc.sfdc.net:123]"
issue I do not see the container marked as fai
Hi Jelmer,
I saw that you have already found the JIRA issue tracking this problem [1]
but
I will still answer on the mailing list for transparency.
The timeout for "cancel with savepoint" should be RpcUtils.INF_TIMEOUT.
Unfortunately Flink is currently not respecting this timeout. A pull request
Hi Jason,
From the stacktrace it seems that you are using the 1.4.0 client to list
jobs on a 1.5.x cluster. This will not work. You have to use the 1.5.x
client.
Best,
Gary
On Wed, Sep 5, 2018 at 5:35 PM, Jason Kania wrote:
> Hello,
>
> Thanks for the response. I had already tried setting the
As far as I know, RocksDB mainly uses off-heap memory, which is hard for the
JVM to control. Maybe you can monitor the off-heap memory of the taskmanager
process with professional tools such as gperftools...
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Alexey,
The answer is yes: ListState preserves the order in which elements were
added.
Thanks, Vino.
Alexey Trenikhun wrote on Thursday, September 6, 2018 at 10:55 AM:
> Hello,
> Does keyed managed ListState preserve elements order, for example if I
> call listState.add(e1); listState.add(e2); listState.add(e3); ,
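A minimal sketch illustrating that guarantee inside a keyed function
(the class and state names are placeholders):

    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    public class OrderedListState extends KeyedProcessFunction<String, String, String> {
        private transient ListState<String> events;

        @Override
        public void open(Configuration parameters) {
            events = getRuntimeContext().getListState(
                new ListStateDescriptor<>("events", String.class));
        }

        @Override
        public void processElement(String e, Context ctx, Collector<String> out)
                throws Exception {
            events.add(e);                  // appends to the end
            for (String s : events.get()) { // iterates in insertion order: e1, e2, e3, ...
                out.collect(s);
            }
        }
    }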
+ user mailing list
From: Yun Tang
Sent: Thursday, September 6, 2018 14:36
To: burgesschen
Subject: Re: Increased Size of Incremental Checkpoint
Hi
I think the "checkpoint size" metric shown in your graph is the total
checkpoint size at each point in time. The incremen
You are correct. Thanks! I misused the job ID. Sorry for bothering you guys.
Best regards,
Chang
from iPhone
> On 4 Sep 2018, at 18:06, Chesnay Schepler wrote:
>
> Please check that the job ID is correct.
>
>> On 04.09.2018 15:48, Chang Liu wrote:
>> Dear All,
>>
>> I had the following