> I'm, however, no expert on this and the implications regarding the use of
> Avro from inside Scala, so I included Gordon (cc'd) who may know more.
>
>
>
> Nico
>
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/
> types_serialization.html
>
Also, I have filed this bug:
https://issues.apache.org/jira/browse/FLINK-7756
On Wed, Jan 17, 2018 at 2:55 PM, shashank agarwal
wrote:
> Hello,
>
> A quick question: which Scala collection should I use in my Scala case
> class so that it won't go through generic serialization and cause an error?
--
Thanks Regards
SHASHANK AGARWAL
--- Trying to mobilize the things
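The types_serialization docs linked above [1] point at the answer; here is a minimal sketch (assuming Flink's Scala API is on the classpath; the case class and fields are illustrative) of keeping a case class on Flink's own serializers instead of the generic Kryo fallback:

```scala
// Importing the Scala API brings the createTypeInformation macro into
// scope, so Flink analyzes the case class at compile time instead of
// treating it as a generic type.
import org.apache.flink.api.scala._

// Scala immutable collections (List, Map, Option, ...) are handled by
// the type-analysis macro; java.util collections inside the case class
// would fall back to generic Kryo serialization.
case class Order(id: String, amounts: List[Double], tags: Map[String, String])

val orderInfo = createTypeInformation[Order]
```

Whether Avro is involved at all depends on the field types; the linked documentation page covers the exact rules.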
B, the watermark is not
> sufficiently advanced to trigger computation, only when you see Event 2.A
> does the watermark advance and you get a result. This is what I would
> expect to happen.
>
>
> On 3. Jan 2018, at 19:46, shashank agarwal wrote:
>
> @Dawid, I was using 1.3.2
> processed only if its
> timestamp was lower than the current watermark, while it should be lower or
> equal.
>
> Best
> Dawid
>
> > On 3 Jan 2018, at 17:05, shashank agarwal wrote:
> >
> > ssed A with origTimestamp Y. (
>
>
should be printed)
On Wed, Jan 3, 2018 at 8:30 PM, Aljoscha Krettek
wrote:
> Can you please check what the input watermark of your operations is? There
> is a metric called "currentLowWatermark" for this.
>
> Best,
> Aljoscha
>
> On 3. Jan 2018, at 15:54, shasha
timestamp variable; in
the other one I am using the system current millis.
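For reference, a hedged sketch of assigning event-time timestamps from the event's own field rather than the system clock (`Event`, `origTimestamp`, and `kafkaStream` are illustrative names, not from the thread's code):

```scala
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.time.Time

case class Event(name: String, origTimestamp: Long)

// The watermark lags the highest seen timestamp by 10 seconds. Note
// that the overall watermark only advances once *every* Kafka
// partition has produced an event.
val withTimestamps = kafkaStream.assignTimestampsAndWatermarks(
  new BoundedOutOfOrdernessTimestampExtractor[Event](Time.seconds(10)) {
    override def extractTimestamp(e: Event): Long = e.origTimestamp
  })
```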
On Wed, Jan 3, 2018 at 8:06 PM, Aljoscha Krettek
wrote:
> Ok, but will there be events in all Kafka partitions/topics?
>
>
> On 3. Jan 2018, at 15:33, shashank agarwal wrote:
>
> Hi,
>
> Yes, E
cessing will be stalled downstream.
>
> Best,
> Aljoscha
>
>
> On 3. Jan 2018, at 14:29, shashank agarwal wrote:
>
> Hello,
>
> I have some patterns in my program. For an example,
>
> A followedBy B.
>
> As I am using kafka source and my event API's u
within(10 seconds) in CEP it is still
not generating results.
Am I doing anything wrong?
I have to cover the case where B can come after A from Kafka.
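For context, a minimal sketch of the kind of pattern being described (Flink CEP Scala API; the `Event` type and `keyedStream` are illustrative):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.cep.scala.CEP
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.windowing.time.Time

case class Event(name: String, ts: Long)

// A followedBy B, with B required within 10 seconds of A. In event
// time this only fires once the watermark passes the relevant
// timestamps, so a stalled watermark means no results even when both
// A and B have arrived.
val pattern = Pattern.begin[Event]("A").where(_.name == "A")
  .followedBy("B").where(_.name == "B")
  .within(Time.seconds(10))

val alerts = CEP.pattern(keyedStream, pattern).select(m => m.toString)
```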
in connector separately. I will request the
feature for the same. Meanwhile, this is a workaround for Scala developers.
On Thu, Dec 28, 2017 at 11:28 AM, Michael Fong
wrote:
> Hi, shashank agarwal
> <https://plus.google.com/u/1/102996216504701757998?prsrc=4>
>
>
> No
is a Product):
>
>
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraScalaProductSink.java
>
> Regards,
> Timo
>
>
>
> Am 12/21/17 um 1:39 PM schrieb shashank a
und i found.
On Tue, Dec 19, 2017 at 6:11 PM, shashank agarwal
wrote:
> I have tried that by creating class with companion static object:
>
> @SerialVersionUID(507L)
> @Table(keyspace = "neofp", name = "order_detail")
> class OrderFinal(
>
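Given the CassandraScalaProductSink linked above, on 1.4 the conversion to a Java stream can likely be dropped entirely; a hedged sketch (the keyspace and table come from the thread, the fields and `orderStream` are illustrative):

```scala
import org.apache.flink.streaming.connectors.cassandra.CassandraSink

// A case class is a Product, so the 1.4 Cassandra connector can write
// it directly with a bound INSERT query, no POJO conversion needed.
case class OrderFinal(orderId: String, amount: Double)

CassandraSink.addSink(orderStream)   // orderStream: DataStream[OrderFinal]
  .setQuery("INSERT INTO neofp.order_detail (order_id, amount) VALUES (?, ?);")
  .setHost("127.0.0.1")
  .build()
```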
t 7:48 PM, Timo Walther wrote:
> Libraries such as CEP or Table API should have the "compile" scope and
> should be in the both the fat and non-fat jar.
>
> The non-fat jar should contain everything that is not in flink-dist or
> your lib directory.
>
> Regards,
> Ti
stuff in there that shouldn't be
> there. This can explain the errors you seeing because of classloading
> conflicts.
>
> Could you try not building a fat-jar and have only your code in your jar?
>
> Best,
> Aljoscha
>
>
> On 20. Dec 2017, at 10:15, shashank agarwal
xplanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
So I think it's adding the Hadoop libs to the classpath too, because it's
able to create the checkpointing directories from the flink-conf file in HDFS.
On Wed, Dec 20, 2017 at 2:31 PM, shashank agarwal
wrote:
.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*
But the application fails continuously with the logs which I have sent earlier.
I have tried to add flink-hadoop-compatibility*.jar as suggested by Jorn
but it's not working.
On Tue, Dec 19, 2017 at 5:08 PM, shashank agarwal
w
ath of your Custer nodes
>
> On 19. Dec 2017, at 12:38, shashank agarwal wrote:
>
> Yes, it's working fine now; not getting a compile-time error.
>
> But when I try to run this on the cluster or YARN, I get the following
> runtime error:
>
> org.apache.flink.core.fs.Unsup
Task.openAllOperators(StreamTask.java:393)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:745)
On Tue, Dec 19, 2017 at 3:34 PM, shashank agarwal
wrote:
> HI,
>
> I have upgraded flin
/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*
On Fri, Dec 8, 2017 at 9:24 PM, shashank agarwal
wrote:
> Sure i’ll
Hi,
I have upgraded Flink from 1.3.2 to 1.4.0. I am using the Cassandra sink in my
Scala application. Before sinking, I was converting my Scala DataStream to a
Java stream and sinking into Cassandra. I have created a POJO class in Scala
like this:
@SerialVersionUID(507L)
@Table(keyspace = "neofp", name = "o
>> I see, thanks for letting us know!
>>
>>
>> On 8. Dec 2017, at 15:42, shashank agarwal wrote:
>>
>> I had to include two dependencies.
>>
>> hadoop-hdfs (this for HDFS configuration)
>> hadoop-common (this for Path)
>>
>>
>>
I had to include two dependencies.
hadoop-hdfs (this for HDFS configuration)
hadoop-common (this for Path)
On Fri, Dec 8, 2017 at 7:38 PM, Aljoscha Krettek
wrote:
> I think hadoop-hdfs might be sufficient.
>
>
> On 8. Dec 2017, at 14:48, shashank agarwal wrote:
>
> Ca
, 2017 at 6:58 PM, shashank agarwal
wrote:
> It's a compilation error. I think I have to include the Hadoop
> dependencies.
>
>
>
>
>
> On Fri, Dec 8, 2017 at 6:54 PM, Aljoscha Krettek
> wrote:
>
>> Hi,
>>
>> Is this a compilation error or a
re not there.
>
> Best,
> Aljoscha
>
>
> On 8. Dec 2017, at 14:10, shashank agarwal wrote:
>
> Hello,
>
> I am trying to test 1.4.0-RC3; Hadoop libraries are removed in this version.
> Actually, I have created a custom Bucketer for the bucketing sink. I am
> extending
>
Hello,
I am trying to test 1.4.0-RC3; Hadoop libraries are removed in this version.
Actually, I have created a custom Bucketer for the bucketing sink. I am
extending
org.apache.flink.streaming.connectors.fs.bucketing.Bucketer
and in the class I have to use org.apache.hadoop.fs.Path, but as the Hadoop
libraries
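A minimal sketch of such a Bucketer, assuming hadoop-common is added back as an explicit dependency to supply org.apache.hadoop.fs.Path (the bucketing scheme is illustrative):

```scala
import java.text.SimpleDateFormat
import java.util.Date

import org.apache.flink.streaming.connectors.fs.Clock
import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer
import org.apache.hadoop.fs.Path

// Buckets elements into one directory per day under the base path.
class DailyBucketer extends Bucketer[String] {
  override def getBucketPath(clock: Clock, basePath: Path, element: String): Path = {
    val day = new SimpleDateFormat("yyyy-MM-dd").format(new Date(clock.currentTimeMillis()))
    new Path(basePath, day)
  }
}
```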
is part
>
> Future resultFuture = client.query(str);
>
> with your HTTP post request call, returning a Future.
>
> Best,
> Stefan
>
>
> Am 21.11.2017 um 11:46 schrieb shashank agarwal :
>
> Hello,
>
> Does anybody have an example of an HTTP POST request using Async I/O?
Hello,
Does anybody have an example of an HTTP POST request using Async I/O?
Shashank
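Not a ready-made example, but a hedged sketch of the shape this usually takes with Flink 1.4's Scala async API. `httpPost` here is a hypothetical stand-in for any non-blocking HTTP client call returning a Future (e.g. via async-http-client):

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

import org.apache.flink.streaming.api.scala.async.{AsyncFunction, ResultFuture}

class HttpPostAsyncFunction(implicit ec: ExecutionContext)
    extends AsyncFunction[String, String] {

  // Hypothetical non-blocking POST; replace with a real async client.
  def httpPost(url: String, body: String): Future[String] = ???

  override def asyncInvoke(input: String, resultFuture: ResultFuture[String]): Unit =
    httpPost("http://example.com/api", input).onComplete {
      case Success(body) => resultFuture.complete(Iterable(body))
      case Failure(t)    => resultFuture.completeExceptionally(t)
    }
}
```

It would then be wired in with AsyncDataStream.unorderedWait (or orderedWait) together with a timeout and capacity.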
Hello,
A quick question. I am building Flink from source after applying some
patches. There are some changes in the CEP library also, so I want to use
the jars generated from source instead of the release artifacts from central
Maven. Does anybody have a direct script for publishing locally?
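As far as I know no special script is needed; the standard Maven way is to install the build into the local repository and then depend on the resulting SNAPSHOT version from the project:

```shell
# From the Flink source root: build and install all modules (including
# flink-cep) into ~/.m2 so local projects can depend on the patched build.
mvn clean install -DskipTests
```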
is generated jars in
the Scala API, should we have to use a serializer like Avro4s? Or if we can
use the default Avro in our Scala Flink app, then what would the steps be?
Please guide.
Hi,
Can I change the state backend from FsStateBackend to RocksDBStateBackend
directly, or do I have to do some migration?
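For reference, the configuration side of the switch looks roughly like this (it requires the flink-statebackend-rocksdb dependency; the checkpoint URI is illustrative). Whether existing savepoints taken with FsStateBackend can be restored by RocksDBStateBackend is exactly the migration question, which this sketch does not answer:

```scala
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Checkpoint data now goes through RocksDB to this URI.
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"))
```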
> a lot of users have hundreds of millions of different keys for some states.
>
> Best,
> Aljoscha
>
> On 2. Aug 2017, at 14:59, shashank agarwal wrote:
>
> If I am creating KeyedState ("count by email id") and the keyed stream has
> 10 unique email ids.
k into "MapState", which is an efficient way to have "sub
>> keys" under a keyed state.
>>
>> Stephan
>>
>>
>> On Mon, Jul 31, 2017 at 6:01 PM, shashank agarwal
>> wrote:
>>
>>> Hello,
>>>
>>> I have to c
amily in RocksDB.
> Having too many of those is not memory efficient.
>
> Having fewer states is better, if you can adapt your schema that way.
>
> I would also look into "MapState", which is an efficient way to have "sub
> keys" under a keyed state.
>
> St
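The MapState suggestion looks roughly like this in a rich function (field names and the input shape are illustrative):

```scala
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.api.common.state.{MapState, MapStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

// One keyed state with "sub keys" via MapState, instead of a separate
// ValueState per field. Apply on a keyed stream.
class CountsPerField extends RichFlatMapFunction[(String, String), (String, Long)] {
  private var counts: MapState[String, java.lang.Long] = _

  override def open(parameters: Configuration): Unit =
    counts = getRuntimeContext.getMapState(new MapStateDescriptor(
      "counts", classOf[String], classOf[java.lang.Long]))

  override def flatMap(in: (String, String), out: Collector[(String, Long)]): Unit = {
    val subKey = in._1   // e.g. "email", "phone", ...
    val updated = Option(counts.get(subKey)).map(_.longValue).getOrElse(0L) + 1L
    counts.put(subKey, updated)
    out.collect((subKey, updated))
  }
}
```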
create keyed state
by email, phone, etc.
Am I right? Does this impact the performance, or is this the wrong approach?
Which approach would you suggest in this use case?
v/stream/
> process_function.html
>
> Best,
> Aljoscha
>
> On 13. Jun 2017, at 15:27, shashank agarwal wrote:
>
> Hi,
>
> I have to process each event with the last 1 hour, 1 week and 1 month of
> data. Like how many times the same IP occurred in the last 1 month corresponding to that
guide what should I use: Table, ProcessFunction,
or a global window? Or what approach should I take?
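A hedged sketch of the ProcessFunction route: keep coarse per-hour counts in MapState per key (e.g. per IP), so each event is answered from at most ~720 buckets instead of a month-long global window. All names and the bucket granularity are illustrative:

```scala
import org.apache.flink.api.common.state.{MapState, MapStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

// Input: (ip, eventTimeMillis); output: (ip, lastHour, lastWeek, lastMonth).
// Apply after keyBy(_._1) so MapState is scoped per IP.
class IpCounts extends ProcessFunction[(String, Long), (String, Long, Long, Long)] {
  private var hourly: MapState[java.lang.Long, java.lang.Long] = _

  override def open(parameters: Configuration): Unit =
    hourly = getRuntimeContext.getMapState(new MapStateDescriptor(
      "hourlyCounts", classOf[java.lang.Long], classOf[java.lang.Long]))

  override def processElement(
      in: (String, Long),
      ctx: ProcessFunction[(String, Long), (String, Long, Long, Long)]#Context,
      out: Collector[(String, Long, Long, Long)]): Unit = {
    val hour = in._2 / 3600000L
    hourly.put(hour, Option(hourly.get(hour)).map(_.longValue).getOrElse(0L) + 1L)

    var (h, w, m) = (0L, 0L, 0L)
    val expired = scala.collection.mutable.ListBuffer[java.lang.Long]()
    val it = hourly.iterator()
    while (it.hasNext) {
      val e = it.next()
      val age = hour - e.getKey
      if (age >= 720) expired += e.getKey      // older than ~1 month
      else {
        if (age < 1) h += e.getValue
        if (age < 168) w += e.getValue         // 168 h = 1 week
        m += e.getValue
      }
    }
    expired.foreach(hourly.remove)             // drop stale buckets
    out.collect((in._1, h, w, m))
  }
}
```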