Hi All,
I have the following requirement:
1. I have an Avro JSON message containing {eventid, usage, starttime, endtime}.
2. I am reading this from a Kafka source.
3. If a record spans an hour boundary, split the record by rounding
off to hourly boundaries (see the sketch after this list).
4. My objective is to a) read the message b) aggr
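For reference, a minimal sketch of the splitting step in 3: a FlatMapFunction that emits one record per hourly slice. UsageEvent is a hypothetical POJO standing in for the decoded Avro record, and apportioning usage pro rata to each slice's duration is an assumption.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;

// Hypothetical POJO for the decoded Avro record.
class UsageEvent {
    public String eventId;
    public double usage;
    public long startTime; // epoch millis
    public long endTime;   // epoch millis
}

// Emits one record per hourly slice, apportioning usage pro rata (assumption).
public class HourSplitter implements FlatMapFunction<UsageEvent, UsageEvent> {
    private static final long HOUR_MS = 3_600_000L;

    @Override
    public void flatMap(UsageEvent in, Collector<UsageEvent> out) {
        if (in.endTime <= in.startTime) { // nothing to split
            out.collect(in);
            return;
        }
        long cursor = in.startTime;
        while (cursor < in.endTime) {
            long hourEnd = (cursor / HOUR_MS + 1) * HOUR_MS; // next hourly boundary
            long sliceEnd = Math.min(hourEnd, in.endTime);
            UsageEvent slice = new UsageEvent();
            slice.eventId = in.eventId;
            slice.usage = in.usage * (sliceEnd - cursor) / (in.endTime - in.startTime);
            slice.startTime = cursor;
            slice.endTime = sliceEnd;
            out.collect(slice);
            cursor = sliceEnd;
        }
    }
}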
Hi,
How can I count the elements in a DataStream? I don't want to use keyBy().
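For reference, one way to do this without keyBy() is a non-keyed window over the whole stream; a minimal sketch, assuming a hypothetical "events" DataStream of some Event type (note that a non-keyed window runs with parallelism 1):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;

// Count elements per 10-second window over the whole (non-keyed) stream.
DataStream<Long> counts = events
    .map(new MapFunction<Event, Long>() {
        @Override
        public Long map(Event e) {
            return 1L; // one per element
        }
    })
    .timeWindowAll(Time.seconds(10))
    .reduce(new ReduceFunction<Long>() {
        @Override
        public Long reduce(Long a, Long b) {
            return a + b; // sum the ones
        }
    });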
I found that using a savepoint to recover a job sometimes fails. Can we use the latest checkpoint to recover the failed job, and how? Thank you.
Thanks for the reply. Unfortunately that project was unexpectedly cancelled,
but for other reasons. I was happy to work on it, and hopefully gained some
insight. I have another question today, unrelated, regarding Elasticsearch
sinks, and will ask it there.
On Fri, Jan 5, 2018 at 2:52 AM, Fabian Hueske wrote:
Ah, I see. Temporary Credentials are delegated through the AWS Security Token
Service via the AssumeRole API.
Sorry, I wasn't knowledgeable about the Temporary Credentials feature before.
It seems like we should add support for the
STSAssumeRoleSessionCredentialsProvider [1]. And yes, your observat
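For reference, a minimal sketch of what constructing such a provider with the AWS SDK looks like; the role ARN and session name below are made-up placeholders, and this only shows the SDK side, not the connector integration discussed above:

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;

// Assumes the cross-account role via STS and refreshes the temporary
// credentials automatically; the ARN and session name are placeholders.
AWSCredentialsProvider provider =
    new STSAssumeRoleSessionCredentialsProvider.Builder(
            "arn:aws:iam::123456789012:role/cross-account-role",
            "flink-kinesis-session")
        .build();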
No, they are not, but we can definitely look into that.
If not, is there a workaround to implement or customize AWS Utils?
Thank you
> On Jan 11, 2018, at 6:41 PM, Tzu-Li (Gordon) Tai wrote:
>
> Hi Sree,
>
> Are Temporary Credentials automatically shipped with AWS EC2 instances when
> delegate
Hi Sree,
Are Temporary Credentials automatically shipped with AWS EC2 instances when
delegated to the role?
If yes, you should be able to just configure the properties so that the Kinesis
consumer automatically fetches credentials from the AWS instance.
To do that, simply do not provide the Access Key and Secret Key.
>
> Hi,
>
> According to my understanding, Kinesis Connector requires Access Key and
> Secret Key to connect.
>
> Is it possible, or is there any workaround, to use Temporary Credentials from
> AWS in the Kinesis Connector?
> We have a scenario where we are trying to access a cross-account Stream and we
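For reference, a minimal sketch of the keyless configuration described above, assuming a hypothetical stream name and region; "AUTO" makes the connector fall back to the default AWS credential chain, which picks up the EC2 instance profile credentials:

import java.util.Properties;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties config = new Properties();
config.setProperty(AWSConfigConstants.AWS_REGION, "us-west-2"); // placeholder
// "AUTO" uses the default AWS credential chain; no access/secret keys are set.
config.setProperty(AWSConfigConstants.AWS_CREDENTIALS_PROVIDER, "AUTO");

FlinkKinesisConsumer<String> consumer =
    new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), config);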
Hello,
I am just getting started with Flink and am attempting to use the Kafka
connector.
In particular I am attempting to use the jar
flink-connector-kafka-0.11_2.11-1.4.0.jar downloaded from:
https://repo1.maven.org/maven2/org/apache/flink/flink-connector-kafka-0.11_2.11/1.4.0/
with the latest
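For anyone following along, a minimal sketch of wiring that connector up, assuming a local broker and hypothetical topic and group names:

import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
props.setProperty("group.id", "test-group");              // placeholder group

DataStream<String> stream = env.addSource(
    new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props));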
When checkpointing is turned on, with a simple CEP loop pattern
private Pattern<SimpleBinaryEvent, ?> alertPattern =
    Pattern.<SimpleBinaryEvent>begin("start").where(checkStatusOn)
        .followedBy("middle").where(checkStatusOn).times(2)
        .next("end").where(checkStatusOn).within(Time.minutes(5));
I see failures.
SimpleBinaryEve
Here’s a more complete view of the task manager log from the start of this
occurrence:
2018-01-11 14:50:08.286 [heartbeat-filter -> input-trace-filter ->
filter-inactive-ds -> filter-duplicates
(ip-10-80-54-205.us-west-2.compute.internal:2181)] INFO
c.i.flink.shaded.zookeeper.org.apache.zooke
Hi Dongwon,
I am not familiar with the deployment on DC/OS. However, Eron Wright and Jörg
Schad (cc'd), who have worked on the Mesos integration, might be able to help
you.
Best,
Gary
On Tue, Jan 9, 2018 at 10:29 AM, Dongwon Kim wrote:
> Hi,
>
> I've launched JobManager and TaskManager on DC/O
As another data point, here's an excerpt from a stack dump for the task manager:
"heartbeat-filter -> input-trace-filter -> filter-inactive-ds ->
filter-duplicates (5/10)-EventThread" #94 daemon prio=5 os_prio=0
tid=0x7f48c04d4800 nid=
0x68ef waiting on condition [0x7f48470eb000]
java.
I'm seeing sporadic issues where it appears that Curator (or other) user
threads are left running after a stream shutdown, and then the user class
loader goes away and I get spammed with ClassNotFoundExceptions… I'm wondering
if this might have something to do with the UserClassLoader be
This is less of a question and more of a PSA.
It looks like there is a binary-incompatible change in the Scala standard
library class `scala.collection.immutable.::` between point releases of Scala
2.11. CaseClassTypeInfo generated by the type information macro will fail to
deserialize
Hi Timo,
"You don't need to specify the type in .flatMap() explicitly. It will be
automatically extracted using the generic signature of DataDataConverter."
It does not. That is the reason why I had to add it there.
> Regarding your error. Make sure that you don't mix up the API classes. If you
>
Thanks for trying it out and letting us know.
Cheers,
Till
On Thu, Jan 11, 2018 at 9:56 AM, Oleksandr Baliev wrote:
> Hi Till,
>
> thanks for your reply and clarification! With RocksDBStateBackend btw the
> same story, looks like a wrapper over FsStateBackend:
>
> 01/11/2018 09:27:22 Job execut
Another thing to point out is that watermarks are usually data-driven,
i.e., they depend on the timestamps of the events and not on the clock of
the machine.
Otherwise, you might observe a lot of late data, i.e., events with
timestamps smaller than the last watermark.
If you assign timestamps and
Thanks Gary,
I was only trying with a fixed set of events, so the Watermark was not
advancing, like you said.
Jayant Ameta
On Thu, Jan 11, 2018 at 3:36 PM, Gary Yao wrote:
> Hi Jayant,
>
> The difference is that the Watermarks from
> BoundedOutOfOrdernessTimestampExtractor are based on the gre
Hi Jayant,
The difference is that the Watermarks from
BoundedOutOfOrdernessTimestampExtractor are based on the greatest timestamp of
all previous events. That is, if you do not receive new events, the Watermark
will not advance. In contrast, your custom implementation of
AssignerWithPeriodicWaterm
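For reference, a minimal sketch of such an extractor, assuming a hypothetical MyEvent type with an epoch-millis timestamp field; it illustrates the behavior described above, since the emitted watermark trails the greatest timestamp seen so far:

import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

// Emits watermark = (max timestamp seen so far) - bound, so the watermark
// only advances when an event with a larger timestamp arrives.
public class MyTimestampExtractor
        extends BoundedOutOfOrdernessTimestampExtractor<MyEvent> {

    public MyTimestampExtractor() {
        super(Time.seconds(10)); // allowed out-of-orderness (placeholder)
    }

    @Override
    public long extractTimestamp(MyEvent event) {
        return event.timestamp; // hypothetical epoch-millis field
    }
}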
Hi Boris,
each API is designed language-specifically, so they might not always be the
same. Scala has better type extraction features and lets you write code
very precisely. Java sometimes requires more code to achieve the same.
You don't need to specify the type in .flatMap() explicitly. It will be
automatically extracted using the generic signature of DataDataConverter.
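For reference, when extraction does fail in Java, the output type can be supplied explicitly with returns(); a minimal sketch, assuming "dataStream", a no-arg DataDataConverter constructor, and a hypothetical MyOutput result type:

import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// Supply the output type explicitly when automatic extraction fails;
// "dataStream", the constructor, and MyOutput are assumptions.
dataStream
    .flatMap(new DataDataConverter())
    .returns(TypeInformation.of(new TypeHint<MyOutput>() {}));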
Hi Till,
thanks for your reply and clarification! With RocksDBStateBackend btw the
same story, looks like a wrapper over FsStateBackend:
01/11/2018 09:27:22 Job execution switched to status FAILING.
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not
find a file system implem
Great, thanks for reporting back!
2018-01-10 22:46 GMT+01:00 xiatao123 :
> got the issue fixed after applying patch from
> https://github.com/apache/flink/pull/4150