Hi all
I'm trying to test a Flink SQL stream non-windowed inner join with Flink 1.5.0,
but it failed.
Then I tried the Flink test
case (flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/stream/sql/JoinItCase.testNonWindowInnerJoin)
with Java instead of Scala, but it failed too.
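For reference, a minimal Java sketch of a non-windowed inner join via SQL on Flink 1.5 (table names, fields, and sample data are made up for illustration); the result of such a join is an updating table, so it has to be converted with toRetractStream rather than toAppendStream:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class NonWindowJoinExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        DataStream<Tuple2<Integer, String>> left =
            env.fromElements(Tuple2.of(1, "a"), Tuple2.of(2, "b"));
        DataStream<Tuple2<Integer, String>> right =
            env.fromElements(Tuple2.of(1, "x"), Tuple2.of(3, "y"));

        tEnv.registerDataStream("A", left, "a_id, a_val");
        tEnv.registerDataStream("B", right, "b_id, b_val");

        // Non-windowed inner join: the result is an updating table,
        // so it must be converted to a retract stream, not an append stream.
        Table joined = tEnv.sqlQuery(
            "SELECT a_id, a_val, b_val FROM A, B WHERE a_id = b_id");
        tEnv.toRetractStream(joined, Row.class).print();

        env.execute("non-window join");
    }
}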
Hi.
We sometimes see a job fail with a blob store exception, like the one below.
Does anyone have an idea why we get them, and how to avoid them?
In this case the job had run without any problems for a week, and then we
got the error. Only this job is affected right now; all others are running as
expected and
Can someone please tell me why I am facing this?
On Wed, Jun 13, 2018 at 10:33 PM Garvit Sharma wrote:
> Hi,
>
> I am using flink-1.5.0-bin-hadoop27-scala_2.11 to submit jobs through
> Yarn, but I am getting the exception below:
>
> java.lang.NoClassDefFoundError:
> com/sun/jersey/core/util/FeaturesAndProperties
Hi Rinat,
> are my assumptions about checkpoint/savepoint state usage correct?
Indeed, they are a bit incorrect; you can also restore the job from a checkpoint. By
default, the checkpoint data will be removed when the job finishes (e.g., when canceled
by the user), but you can configure Flink to retain the checkpoints
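A minimal sketch of how retained (externalized) checkpoints can be enabled in 1.5; the interval is just a placeholder:

import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000);  // checkpoint every minute
// Keep the checkpoint data when the job is canceled, so it can later be used to restore the job.
env.getCheckpointConfig().enableExternalizedCheckpoints(
    ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);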
Hi Fabian,
Thanks for the prompt response, and apologies for my delayed reply.
You wrapped up the bottom lines pretty well - if I were to wrap it up, I'd say
"best possible" recovery on "known" restarts, either manual cancel + start
OR framework-initiated ones like on operator failures with t
Hi Chris,
It means there are four parallel threads and each thread outputs a record. You can
use env.setParallelism() to change the default value (i.e., 4) to other
values.
Best, Hequn
On Thu, Jun 14, 2018 at 9:09 AM, chrisr123 wrote:
>
> What does the number in front of the ">" character mean when cal
What does the number in front of the ">" character mean when calling print()
on a dataset?
For example I may have this in my source where I am reading a socket stream
of sensor data:
DataStream> simpleStream = env
.socketTextStream(parms.get("host")
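To make the prefix concrete, a fuller sketch along the lines of the snippet above (the port lookup and the sample output line are assumptions): with a parallelism of 4, print() prefixes each line with the 1-based index of the parallel subtask that produced it:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// The default parallelism is often the number of cores (4 here); change it if desired.
env.setParallelism(4);

DataStream<String> simpleStream =
    env.socketTextStream(parms.get("host"), Integer.parseInt(parms.get("port")));

// Prints lines such as "3> sensor-17,22.5"; the leading "3" is the 1-based index
// of the parallel subtask (print sink instance) that emitted the record.
simpleStream.print();

env.execute("print example");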
Right after I sent this, I realized that FLINK-7502 is likely the fix that
I'm looking for. I swapped in a more recent version of the
flink-metrics-prometheus jar and it seems to be much happier now.
Thanks,
Russell
Which scripts did you try?
I use Windows 10 as well and can run the .bat scripts with PowerShell
and the .sh scripts in WSL just fine.
We did rework the Windows scripts in 1.5, the primary change being
separate processes for the Job- and TaskManager.
On 13.06.2018 21:04, TechnoMage wrote:
Has any work been done on support for Windows in 1.5?
Hi,
I'm trying to add some custom metrics for a Flink job, but have bumped into
some issues using the PrometheusReporter. If I'm running multiple
instances of the same job under the same TaskManager, I'm seeing the
following error when the second instance of the job tries to create the
metric wit
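For context, a minimal sketch of how such a custom metric is typically registered (metric and class names are made up); with several instances of the same job on one TaskManager, the fully scoped metric name can end up being registered more than once in the reporter:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMapper extends RichMapFunction<String, String> {
    private transient Counter eventCounter;

    @Override
    public void open(Configuration parameters) {
        // Registered per parallel operator instance on the operator's metric group.
        eventCounter = getRuntimeContext().getMetricGroup().counter("eventsProcessed");
    }

    @Override
    public String map(String value) {
        eventCounter.inc();
        return value;
    }
}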
I've already responded to your previous mail asking the same question.
On 13.06.2018 19:06, Chris Kellogg wrote:
How can one build a connectors jar from the source?
Also, is there a quick way to build the examples from the source
without having to do a mvn clean package -DskipTests?
Thanks.
Hi guys, thanks for your reply.
The following code details refer to the release-1.5.0 tag,
class org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.
Currently, BucketingSink has the following file lifecycle.
When moving files from the open to the pending state:
on each item (method invoke:43
Hi mates, while using BucketingSink, I've decided to enable
checkpointing to prevent files from hanging in the open state on job failure.
But it seems that I haven't properly understood the meaning of checkpointing …
I've enabled the fs backend for checkpoints, and while the job is running
everything
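For what it's worth, a minimal sketch of the setup being described, with made-up HDFS paths, interval, and source: checkpointing plus a filesystem state backend, so BucketingSink can finalize its in-progress/pending files when checkpoints complete:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));  // placeholder path
env.enableCheckpointing(60_000);  // checkpoint every minute

DataStream<String> records = env.socketTextStream("localhost", 9999);  // placeholder source

BucketingSink<String> sink = new BucketingSink<>("hdfs:///data/out");  // placeholder path
sink.setBatchSize(128 * 1024 * 1024);  // roll part files at 128 MB
// Pending files are moved to their final state when the checkpoint they belong to completes.
records.addSink(sink);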
Any ideas on the standard way (or any roundabout way) of doing a version
upgrade that is backward compatible?
The @FieldSerializer.Optional("0") annotation actually does ignore the field (even
if reset), giving it the default value if Kryo is used. It has to do with
how the FieldSerializer behaves. Th
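To illustrate the behavior being described, a hypothetical POJO (class and field names are made up): with Kryo's FieldSerializer, a field annotated this way is skipped during (de)serialization unless the matching context key is set, so a restored value falls back to the Java default:

import com.esotericsoftware.kryo.serializers.FieldSerializer;

public class SensorEvent {
    public long id;
    public double reading;

    // Newly added field: ignored by Kryo's FieldSerializer (restored as null here)
    // unless the serialization context sets the key "0".
    @FieldSerializer.Optional("0")
    public String source;
}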
Has any work been done on support for Windows in 1.5? I tried the scripts from
1.4 on Windows 10 with no luck.
Michael
How can one build a connectors jar from the source?
Also, is there a quick way to build the examples from the source without
having to do a mvn clean package -DskipTests?
Thanks.
Chris
Hi,
I am using flink-1.5.0-bin-hadoop27-scala_2.11 to submit jobs through
Yarn, but I am getting the exception below:
java.lang.NoClassDefFoundError:
com/sun/jersey/core/util/FeaturesAndProperties
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(Class
Hi All,
I have implemented a custom SourceFunction on a data source with an
asynchronous API (the API calls return Scala futures). I need to perform
calls to the asynchronous API during initialization of each individual
(parallel) source instance, and, when in exactly-once mode, also during
snapshotState
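A rough sketch of the pattern under discussion, using CompletableFuture as a stand-in for the Scala-future-based client (the class, token logic, and sleep are all made up); open() is allowed to block, so each parallel instance can simply wait for its initialization future there:

import java.util.concurrent.CompletableFuture;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;

public class AsyncInitSource extends RichParallelSourceFunction<String> {
    private transient String connectionToken;
    private volatile boolean running = true;

    @Override
    public void open(Configuration parameters) throws Exception {
        int subtask = getRuntimeContext().getIndexOfThisSubtask();
        // Stand-in for the asynchronous initialization call; block until it completes.
        CompletableFuture<String> init =
            CompletableFuture.supplyAsync(() -> "token-" + subtask);
        connectionToken = init.get();
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            // Emit under the checkpoint lock so emitted records and state stay consistent.
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(connectionToken);
            }
            Thread.sleep(1000);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}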
Just to add some more info, here is the data I have on Prometheus (with
some names redacted):
flink_taskmanager_job_task_operator_records_lag_max{host="002s",operator_id="cbc357ccb763df2852fee8c4fc7d55f2",subtask_index="0",task_attempt_id="fa104111e1f493bbec6f4b2ce44ec1da",task_attempt_num="11",ta
Hi Gordon,
We have Kafka 0.10.1.1 running and use the flink-connector-kafka-0.10
driver.
There are a bunch of flink_taskmanager_job_task_operator_* metrics,
including some about the committed offset for each partition. It seems I
have 4 different records_lag_max with different attempt_id, though,
Hi,
I am joining two streams with a session window and want to emit a joined
(early) result for every element arriving on one of the streams.
Currently the code looks like this:
s1.join(s2)
  .where(s1.id).equalTo(s2.id)
  .window(EventTimeSessionWindows.withGap(Time.minutes(15)))
  // trigger(?)
  .apply(...)
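One possible direction (only a sketch adapting the snippet above, not necessarily the best trigger choice): replace the default event-time trigger with a count trigger that fires on every element, so the current window contents are re-joined and emitted whenever a new element arrives:

import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;

s1.join(s2)
  .where(s1.id).equalTo(s2.id)
  .window(EventTimeSessionWindows.withGap(Time.minutes(15)))
  // Fires on every arriving element, so the joined window contents are emitted early.
  // Note: this replaces the default EventTimeTrigger, so there is no separate
  // final firing when the session window closes.
  .trigger(CountTrigger.of(1))
  .apply(...)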
Hi,
Which Kafka version are you using?
AFAIK, the only recent changes to Kafka connector metrics in the 1.4.x series
would be FLINK-8419 [1].
The ‘records_lag_max’ metric is a Kafka-shipped metric simply forwarded from
the internally used Kafka client, so nothing should have been affected.
Do
Hi There,
I am trying to parse multiple CSV files in a directory using
CsvTableSource and insert each row into Cassandra using CassandraSink.
How does Flink handle any errors when parsing some of the CSV files within that
directory?
--
Thanks & Regards,
Athar
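In case it helps, a small sketch (the path and fields are placeholders): the CsvTableSource builder has an ignoreParseErrors() switch, so rows that cannot be parsed are skipped instead of failing the job:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.sources.CsvTableSource;

CsvTableSource csvSource = CsvTableSource.builder()
    .path("/data/csv-dir")          // placeholder: directory containing the CSV files
    .field("id", Types.INT)
    .field("name", Types.STRING)
    .ignoreParseErrors()            // skip unparsable rows instead of failing
    .build();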
Hi Sihua, thanks for your reply.
> On 9 Jun 2018, at 11:42, sihua zhou wrote:
>
> Hi Rinat,
>
> I think there is one configuration option, {{state.checkpoints.num-retained}}, to
> control the maximum number of completed checkpoints to retain; the default
> value is 1. So the risk you mentioned should not