Hi Joshua,
This looks like a bug; the UI behavior we saw in Flink 1.5.0 and 1.6.0 is normal.
Please feel free to create a JIRA issue.
In addition, I suggest you upgrade your Flink version.
Thanks, vino.
Joshua Fan wrote on Wed, Oct 24, 2018 at 10:08 AM:
> Hi Vino,
>
> the version is 1.4.2.
>
> Yours
> Joshua
>
>
Hi Vino,
the version is 1.4.2.
Yours
Joshua
On Tue, Oct 23, 2018 at 7:26 PM vino yang wrote:
> Hi Joshua,
>
> Which version of Flink are you using?
>
> Thanks, vino.
>
> Joshua Fan wrote on Tue, Oct 23, 2018 at 5:58 PM:
>
>> Hi All
>>
>> came into new situations, that the UI can show metric data but the da
You can try with -yD env.java.opts.taskmanager="-XX:+UseConcMarkSweepGC"
if you are running Flink on YARN.
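For example, a full submission command might look roughly like this (the job jar and other options are placeholders, not from the original post):

flink run -m yarn-cluster \
  -yD env.java.opts.taskmanager="-XX:+UseConcMarkSweepGC" \
  path/to/your-job.jar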
Thinking out loud here:
I can't tell where the class loading is failing.
The general model I've used with ByteBuddy in this scenario is very similar
to yours.
I subclass my superclass using ByteBuddy.
I inject the new class into a JAR that will be shared by the task managers.
I subclass the Flink cla
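For reference, a minimal sketch of that subclass-and-package step with ByteBuddy could look like the following; the class names and JAR path are made up, and it assumes byte-buddy is on the classpath:

import java.io.File;
import java.io.IOException;

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.dynamic.DynamicType;

public class GenerateSubclassJar {

    // Hypothetical superclass standing in for the real one.
    public static class MySuperClass {
    }

    public static void main(String[] args) throws IOException {
        DynamicType.Unloaded<MySuperClass> generated = new ByteBuddy()
                .subclass(MySuperClass.class)
                .name("com.example.GeneratedSubclass")
                .make();

        // Write the generated class into a JAR that the task managers can share.
        generated.toJar(new File("/tmp/generated-classes.jar"));
    }
}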
Hi Timo,
I wrote simple test code for the issue; please check out
https://gist.github.com/yinhua-dai/143304464270afd19b6a926531f9acb1
I wrote a custom table source which just uses RowCsvInputFormat to create the
dataset, used the provided CsvTableSink, and can reproduce the issue.
I think I have already done that in my custom sink.
@Override
public String[] getFieldNames() {
    return this.fieldNames;
}

@Override
public TypeInformation[] getFieldTypes() {
    return this.fieldTypes;
}

@Override
public TableSink configure(String[] fieldNames, TypeInformation
Update on this - if I just do an empty mapping and drop the SQL part, it works
just fine. I wonder if there's any class loading that needs to be done when
using SQL; not sure how I'd do that.
After removing some operators (which I still need, but wanted to understand
where my issues are) I get a slightly different stack trace (though still the
same issue).
My current operators are:
1. a SQL select with group by (returns a retracted stream)
2. filter (take only non-retracted)
3. map (tuple t
Hello,
I have two unrelated questions regarding the Flink API. Both of these are
connected to some proof-of-concepts and will not be deployed in an actual
production environment, so even less-than-ideal suggestions are welcome!
First, maybe this is a silly question, but is it possible to set the
Thanks Dominik, hope it will be resolved soon
On Tue, Oct 23, 2018 at 4:47 PM Dominik Wosiński wrote:
> Hey Alexander,
> It seems that this issue occurs when the broker is down and the partition
> is selecting the new leader AFAIK. There is one JIRA issue I have found,
> not sure if that's what
Hi,
I am using the code below to read data from an AWS Kinesis stream, but it is
giving me the request body and not the request header. How can I get the
request header from Kinesis? My Flink jar versions are:
flink-java - 1.6.1
flink-streaming-java_2.11 - 1.6.1
flink-connector-kinesis_2.11 - 1.6.1
My c
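For context, a minimal sketch of what such a Kinesis source setup typically looks like (stream name and region are placeholders; this is not the original poster's code, and the deserialization schema only ever sees the record payload):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class KinesisReadJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");
        consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        // The schema deserializes the record body only; transport-level request
        // metadata is not exposed here.
        DataStream<String> records = env.addSource(
                new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), consumerConfig));

        records.print();
        env.execute("kinesis-read");
    }
}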
Dear community,
this is the weekly community update thread #43. Please post any news and
updates you want to share with the community to this thread.
# Release vote for Flink 1.5.5 and 1.6.2
The community is currently voting on the first release candidates for Flink
1.5.5 [1] and Flink 1.6.2 [2]
Hey Alexander,
It seems that this issue occurs when the broker is down and the partition
is selecting the new leader, AFAIK. There is one JIRA issue I have found;
not sure if that's what You are looking for:
https://issues.apache.org/jira/browse/KAFKA-6221
This issue is connected with Kafka itself
Hi
I'm running a Flink job on Mesos and I'm trying to change my TaskManager's
JVM options. Because our flink-conf.yaml comes from a unified image, I can't
modify it.
I tried to put it in the environment variable JVM_ARGS; here is my setting:
JVM_ARGS=-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFra
Hey Mark,
Do You use more than 1 Kafka consumer for Your jobs? I think this relates
to the known issue in Kafka:
https://issues.apache.org/jira/browse/KAFKA-3992.
The problem is that if You don't provide a client ID for your KafkaConsumer,
Kafka assigns one, but this is done in an unsynchronized wa
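For what it's worth, a sketch of setting an explicit client.id on the consumer properties is below; the broker, topic, and IDs are placeholders, and whether this fully works around KAFKA-3992 depends on the connector version:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class ExplicitClientId {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka-broker:9092");
        props.setProperty("group.id", "my-group");
        // Explicit, unique client.id per consumer instead of relying on the
        // auto-generated one mentioned above.
        props.setProperty("client.id", "my-job-consumer-1");

        env.addSource(new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props))
                .print();
        env.execute("explicit-client-id");
    }
}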
Hi,
I stumbled upon an exception in the "Exceptions" tab which I could not
explain. Do you know what could cause it? Unfortunately I don't know how to
reproduce it. Do you know if there is a respective JIRA issue for it?
Here's the exception's stack trace:
org.apache.flink.streaming.connectors.k
Did you configure the IAM access roles correctly? Are those two machines
allowed to communicate?
> On 23.10.2018 at 12:55, madan wrote:
>
> Hi,
>
> I am trying to setup cluster with 2 EC2 instances. Able to bring up the
> cluser with 1 master and 2 slaves. But I am not able to connect to
I was glad to find that bravo has now been updated to support installing
bravo to a local Maven repo.
I was able to load a checkpoint created by my job, thanks to the example
provided in bravo README, but I'm still missing the essential piece.
My code was:
OperatorStateReader reader = ne
Hi,
We are using ValueState to maintain state. It is a pretty simple job with a
keyBy operator on a stream, and the subsequent map operator maintains state
in a ValueState instance. The transaction load is in the billions of transactions
per day. However, the amount of state per key is a list of 18x6 long v
Thanks Nico!
I was using Google Compute Engine. I have changed the blob storage directory;
I have to wait and see if it solves the problem.
Thanks,
Manju
On Tue, Oct 23, 2018 at 2:07 PM Nico Kruber wrote:
> Hi Manjusha,
> If you are, for example, using one of Amazon's Linux AMIs on EMR, you
> may fal
Hi Joshua,
Which version of Flink are you using?
Thanks, vino.
Joshua Fan wrote on Tue, Oct 23, 2018 at 5:58 PM:
> Hi All
>
> came into new situations, that the UI can show metric data but the data
> remains the same all the time after days. So, there are two cases, one is
> no data in UI at all, another is
Hi,
We regularly see the following two exceptions in a number of jobs shortly
after they have been resumed during our flink cluster startup:
org.apache.kafka.common.KafkaException: Error registering mbean
kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
at
org.apa
Hi,
I am trying to set up a cluster with 2 EC2 instances. I am able to bring up the
cluster with 1 master and 2 slaves, but I am not able to connect to the master
to submit my job.
Here is my attempt:
ExecutionEnvironment.createRemoteEnvironment("18.225.26.184", 6123, new
Configuration());
Tried with publ
Hi,
It does not. Looking at the generated code, that SCHEMA$ value gets created
in the companion object for the case class (which behaves equivalently to a
static field in Java).
This gets compiled down to a classfile with a $ suffix - in this case,
"AlertEvent.SCHEMA$" doesn't exist, and to get t
Hello,
Thank you for your answer and apologies for the late response.
For timers we are using:
state.backend.rocksdb.timer-service.factory: rocksdb
Are we still affected by [1]?
For the interruptibility, we have coalesced our timers and the application
became more responsive to stop signa
Hi All,
I ran into new situations: the UI can show metric data, but the data
remains the same all the time, even after days. So there are two cases: one is
no data in the UI at all, the other is dead data in the UI all the time.
When digging into the taskmanager.log, taskmanager.error, and taskmanager.out,
there is n
From the stack below, it indicates there are no available buffers for source
outputs, including watermarks and normal records, so the source will be blocked
on requesting a buffer from the LocalBufferPool.
The checkpoint process is also affected by the above blocking request. The root
cause is why the queued o
Thank you. That worked.
Regards,
On Tue, 23 Oct 2018 at 09:08, Dawid Wysakowicz
wrote:
> Hi Ahmad,
>
> I think Alexander is right. You've declared the state descriptor
> transient, which effectively makes it null at the worker node, when the
> state access is happening. Remove the transient mod
Hi Folani,
Metrics are definitely one way, while the other can be that, depending on your job,
if you have e.g. ProcessFunctions, you can always attach different timestamps
(depending on what you want to measure) and, based on these, do the computations
you need. Based on this you can for example c
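As a rough illustration of that idea (not Folani's actual job; the element type and what is measured are assumptions), a ProcessFunction that tags each element with the gap between current processing time and its event timestamp could look like:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class LatencyTagger extends ProcessFunction<String, Tuple2<String, Long>> {

    @Override
    public void processElement(String value, Context ctx, Collector<Tuple2<String, Long>> out) {
        // Assumes event-time timestamps were assigned upstream, so ctx.timestamp() is non-null.
        long latency = ctx.timerService().currentProcessingTime() - ctx.timestamp();
        out.collect(Tuple2.of(value, latency));
    }
}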
Hi Manjusha,
If you are, for example, using one of Amazon's Linux AMIs on EMR, you
may fall into a trap that Lasse described during his Flink Forward talk
[1]: These images include a default cron job that cleans up files in
/tmp which have not been recently accessed. The default BLOB server
direct
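For example, the BLOB server can be pointed away from /tmp in flink-conf.yaml (the path below is only an illustration):

blob.storage.directory: /var/lib/flink/blob-storage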
Hi Yinhua,
your custom sink must implement
`org.apache.flink.table.sinks.TableSink#configure`. This method is
called when writing to a sink such that the sink can configure itself
for the reverse order. The methods `getFieldTypes` and `getFieldNames`
must then return the reconfigured schema;
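A rough sketch of that contract for a Row-based sink is below; the class and field names are illustrative, not the original code, and a real sink would additionally implement BatchTableSink or one of the streaming sink interfaces so it can actually emit data:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.table.sinks.TableSink;
import org.apache.flink.types.Row;

public class MyRowSink implements TableSink<Row> {

    private String[] fieldNames;
    private TypeInformation<?>[] fieldTypes;

    @Override
    public TypeInformation<Row> getOutputType() {
        return new RowTypeInfo(fieldTypes, fieldNames);
    }

    @Override
    public String[] getFieldNames() {
        return fieldNames;
    }

    @Override
    public TypeInformation<?>[] getFieldTypes() {
        return fieldTypes;
    }

    @Override
    public TableSink<Row> configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) {
        // Return a copy carrying the schema handed in by the planner,
        // rather than mutating and returning "this".
        MyRowSink configured = new MyRowSink();
        configured.fieldNames = fieldNames;
        configured.fieldTypes = fieldTypes;
        return configured;
    }
}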
Hello,
Checkpointing to HDFS:
state.backend.fs.checkpointdir: hdfs://flink-hdfs:9000/flink-checkpoints
state.checkpoints.num-retained: 2
Thanks,
Manjusha
On Tue, Oct 23, 2018 at 1:05 PM Dawid Wysakowicz
wrote:
> Hi Manjusha,
>
> I am not sure what is wrong, but Nico or Till (cc'ed) might
Hi Ahmad,
I think Alexander is right. You've declared the state descriptor
transient, which effectively makes it null at the worker node, when the
state access is happening. Remove the transient modifier or instantiate
the descriptor in the open method. The common pattern is to have the
state itse
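As a small illustration of that pattern (names are invented, not Ahmad's code): keep the ValueState handle transient if you like, but build the descriptor and fetch the state inside open():

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class CountingFlatMap extends RichFlatMapFunction<String, Long> {

    // The handle is (re)created on the worker in open(), so transient is fine here.
    private transient ValueState<Long> countState;

    @Override
    public void open(Configuration parameters) {
        // Build the descriptor here instead of holding it in a transient field,
        // so it is never null on the task manager.
        ValueStateDescriptor<Long> descriptor =
                new ValueStateDescriptor<>("count", Long.class);
        countState = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void flatMap(String value, Collector<Long> out) throws Exception {
        Long current = countState.value();
        long updated = (current == null ? 0L : current) + 1;
        countState.update(updated);
        out.collect(updated);
    }
}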
This was very helpful. Thank you very much :)
maidangdang44
maidangdan...@126.com
On 10/23/2018 15:20, Dawid Wysakowicz wrote:
Hi,
The problem is that sTime is not a Time Attribute[1], which has to be aligned
with watermarks mechanism. Right now you cannot create a time at
I wrote a customized table source, and it emits some fields, let's say f1, f2.
And then I just write to a sink with a reversed order of fields, as below:
select f2, f1 from customTableSource
And I found that it actually doesn't reverse the fields.
Then I tried with the Flink-provided CsvTableSou
Hi Manjusha,
I am not sure what is wrong, but Nico or Till (cc'ed) might be able to
help you.
Best,
Dawid
On 23/10/2018 06:58, Manjusha Vuyyuru wrote:
> Hello All,
>
> I have a job which fails lets say after every 14 days with IO
> Exception, failed to fetch blob.
> I submitted the job using c
I deleted the 'row' flag.
It looks good now.
--
A job has two SQL queries.
The source is Kafka.
The sink is Redis or another sink.
SQL A:
select
    reqIp as factorContenta,
    count(*) as eCount,
    60 * 60 as expire
from
    kafka_source
where
    uri is not null
group by
Hello,
I'm interested in creating a Flink batch app that can process multiple files
from an S3 source in parallel. Let's say I have the following S3 structure and
that my Flink app has parallelism set to 3 workers.
s3://bucket/data-1/worker-1/file-1.txt
s3://bucket/data-1/worker-1/file-2.t
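For what it's worth, one rough way to read everything under such a prefix with the DataSet API is sketched below; the bucket, paths, and job name are placeholders, and recursive enumeration is needed to pick up the nested worker directories:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class S3BatchRead {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(3);

        Configuration parameters = new Configuration();
        // Also enumerate files in nested worker-* subdirectories.
        parameters.setBoolean("recursive.file.enumeration", true);

        DataSet<String> lines = env
                .readTextFile("s3://bucket/data-1/")
                .withParameters(parameters);

        lines.writeAsText("s3://bucket/output/");
        env.execute("s3-batch-read");
    }
}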
Hi,
The problem is that sTime is not a Time Attribute [1], which has to be
aligned with the watermark mechanism. Right now you cannot create a time
attribute from within a TableFunction, as far as I know.
What you could do is do the splitting logic in the DataStream API and
register a proper table with
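A rough sketch of that approach is below; the tuple layout, field names, and query are made up for illustration: do the splitting on the DataStream, assign timestamps/watermarks, then register the result with an event-time attribute via ".rowtime":

import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RegisterWithRowtime {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        // Stand-in for the stream produced by the splitting logic: (name, value, epoch millis).
        DataStream<Tuple3<String, Double, Long>> split = env.fromElements(
                Tuple3.of("a", 1.0, 1540000000000L),
                Tuple3.of("b", 2.0, 1540000001000L));

        DataStream<Tuple3<String, Double, Long>> withTs = split.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Double, Long>>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(Tuple3<String, Double, Long> element) {
                        return element.f2;
                    }
                });

        // "sTime.rowtime" turns the third field into a proper event-time attribute.
        tEnv.registerDataStream("splitTable", withTs, "name, value, sTime.rowtime");

        Table result = tEnv.sqlQuery("SELECT name, sTime FROM splitTable");
        tEnv.toAppendStream(result, Row.class).print();
        env.execute();
    }
}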
Hi,
I deleted the flag and it looks good now.
How is your cluster configured?
What is the checkpoint/savepoint directory configuration?
On Tue, Oct 23, 2018 at 8:00 AM Manjusha Vuyyuru
wrote:
> Hello All,
>
> I have a job which fails lets say after every 14 days with IO Exception,
> failed to fetch blob.
> I submitted the job using co
Hi,
Could you rephrase your question? I think some parts of the question are
missing. It would be also easier to help you if you could state the
final problem a bit more clearly.
Best,
Dawid
On 23/10/2018 04:06, WeiWen Fan wrote:
> a job have two sql
> source is kafka
> sink is redis or other
I've tried with t2, test.t2 and test.test.t2.
On Mon, 22 Oct 2018, 19:26 Zhang, Xuefu, wrote:
> Have you tried "t2" instead of "test.t2"? There is a possibility that
> catalog name isn't part of the table name in the table API.
>
> Thanks,
> Xuefu
>
>