Hi All
We are using Flink 1.10, which had Queryable State for querying
the in-memory state. We are planning to migrate our old applications
to a newer version of Flink, but in the latest documentation I can't find any
reference to it. Can anyone highlight the recommended approach to querying state now?
-with-dependencies.jar/run?programArgs=/users/puneet/app/orchestrator/PropertiesStream_back.json&entryClass=com.orchestrator.flowExecution.GraphExecutor
Response:
{
"errors": [
"Not found."
]
}
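A "Not found" response from the REST API usually means the URL path does not match any registered handler: the run endpoint expects the jar id returned by POST /jars/upload, not a local filesystem path, and recent versions name the query parameters entry-class/program-args. A hedged sketch of building the submit URL (the jar id and host below are invented placeholders, not from this thread):

```shell
# Placeholder values; the real jar id comes from POST /jars/upload.
BASE="http://localhost:8081"
JAR_ID="00000000-0000-0000-0000-000000000000_job-with-dependencies.jar"
ENTRY="com.orchestrator.flowExecution.GraphExecutor"

# The submit call would then be:
#   curl -X POST "${BASE}/jars/${JAR_ID}/run?entry-class=${ENTRY}"
# Here we only print the URL being built:
echo "${BASE}/jars/${JAR_ID}/run?entry-class=${ENTRY}"
```

If the jar id segment is wrong or missing, the dispatcher has no matching route and answers exactly with the "Not found" error shown above.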
--
>> at
>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
>> ~[flink-dist_2.12-1.11.1.jar:1.11.1]
>>
>> I have confirmed that something is wrong in my application causing this
>> error. However, it is hard to
at gellyStreaming.gradoop.StatefulFunctions.Test.main(Test.java:17)
>
> Process finished with exit code 1
>
>
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
*Cheers*
*Puneet Kinra*
*Mobile: +918800167808 | Skype: puneet.ki...@customercentria.com*
*e-mail: puneet.ki...@customercentria.com*
wrote:
> Hi Puneet,
>
> Can you describe how you validated that the state is not restored
> properly? Specifically, how did you introduce faults to the cluster?
>
> Best,
> Gary
>
> On Tue, Mar 3, 2020 at 11:08 AM Puneet Kinra <
puneet.ki...@customercentria.com> wrote:
Sorry for the missing information.
On recovery the value comes back as false instead of true. state.backend has
been configured in flink-conf.yaml along with
the paths for checkpointing and savepoints.
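For reference, a minimal sketch of the flink-conf.yaml keys being described; the paths are placeholders (a filesystem backend is assumed, adjust to your setup):

```yaml
# Placeholder paths; not taken from this thread.
state.backend: filesystem
state.checkpoints.dir: file:///tmp/flink/checkpoints
state.savepoints.dir: file:///tmp/flink/savepoints
```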
On Tue, Mar 3, 2020 at 3:34 PM Puneet Kinra <
puneet.ki...@customercentria.com> wrote:
DataStream<...> AMQSource = env.addSource(new BeaconSource()).name("AMQSource");
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
env.setParallelism(1);
KeyedStream<..., Tuple> keyedValues = AMQSource.keyBy(0);
SingleOutputStreamOperator<...> processedStream =
    keyedValues.process(new TimeProcessTrigger()).setParallelism(1);
> The timer will not be triggered if there's no
> message received during this minute. But I still need to execute the
> function.
>
> How can I implement it?
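One common pattern for this (a hedged sketch, not confirmed by this thread) is to register a processing-time timer for the next minute boundary and re-register it from onTimer, so the tick fires even when no message arrives. The boundary arithmetic itself is plain Java:

```java
public class MinuteTimer {
    // Align a processing-time timestamp (ms) to the start of the next minute.
    // In a Flink KeyedProcessFunction this value would be passed to
    // ctx.timerService().registerProcessingTimeTimer(...), and onTimer would
    // call it again to keep the periodic tick alive without any input.
    public static long nextMinuteBoundary(long nowMillis) {
        return (nowMillis / 60_000L + 1L) * 60_000L;
    }

    public static void main(String[] args) {
        System.out.println(nextMinuteBoundary(125_000L)); // prints 180000
    }
}
```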
>
> --
> wangl...@geekplus.com.cn
>
>
>
>
--
Sorry for the typo; yes, we developed it a few days back.
On Tue, Jan 29, 2019 at 10:27 PM Puneet Kinra <
puneet.ki...@customercentria.com> wrote:
> Yep, I did a few days back.
>
> On Tue, Jan 29, 2019 at 10:13 PM wrote:
>
>> Hi all,
>>
>>
>>
>> I was wondering
be appreciated.
>
>
>
> We are using Scala 2.12 and Flink 1.7.
>
>
>
> Kind regards,
>
>
>
> Jacopo Gobbi
>
--
r use case
>>
>
>
> --
>
>
>
>
>
> Regards,
> Selvaraj C
>
--
>
--
It would be better to show us the whole pipeline of your job. For example, write
> a sample code (or provide a git link) that can reproduce your problem easily.
>
> Best, Hequn
>
>
> On Tue, Jan 8, 2019 at 11:44 AM Puneet Kinra <
> puneet.ki...@customercentria.com> wrote:
>
>> Hi
>
> On Tue, Jan 8, 2019 at 1:38 AM Puneet Kinra <puneet.ki...@customercentria.com> wrote:
>
>> I checked the same; the function is exiting when I am calling the
>> ctx.getTimeservice() function.
>>
>> On Mon, Jan 7, 2019 at 10:27 PM Timo Walther wrote:
>>
>>> Hi Puneet,
>
ect. Are you sure the registering
> takes place?
>
> Regards,
> Timo
>
> On 07.01.19 at 14:15, Puneet Kinra wrote:
>
> Hi Hequn
>
> Its a streaming job .
>
> On Mon, Jan 7, 2019 at 5:51 PM Hequn Cheng wrote:
>
>> Hi Puneet,
>>
>> The value o
to make sure t1 < `parseLong + 5000` < t2.
>
> Best, Hequn
>
> On Mon, Jan 7, 2019 at 5:50 PM Puneet Kinra <
> puneet.ki...@customercentria.com> wrote:
>
>> Hi All
>>
>> Facing some issue with context to onTimer method in proce
https://ci.apache.org/projects/flink/flink-docs-stable/ops/cli.html#restore-a-savepoint
>
> Cheers,
> Till
>
> On Fri, Jan 4, 2019 at 2:49 PM Puneet Kinra <
> puneet.ki...@customercentria.com> wrote:
>
>> The List it returns is blank
>>
>> On Fri, Jan 4, 2
@Override
public void onTimer(long timestamp, OnTimerContext ctx,
        Collector<String> out) throws Exception {
    super.onTimer(timestamp, ctx, out);
    System.out.println("Executing timer " + timestamp);
    out.collect("Timer Testing..");
}
}
--
The List it returns is blank
On Fri, Jan 4, 2019 at 7:15 PM Till Rohrmann wrote:
> Hi Puneet,
>
> what exactly is the problem when you try to resume from a checkpoint?
>
> Cheers,
> Till
>
> On Fri, Jan 4, 2019 at 2:31 PM Puneet Kinra <
> puneet.ki...@customerc
Hi All
I am creating a PoC where I am trying the out-of-the-box feature of Flink
for managed operator state. I am able to create the checkpoint while
running my app in Eclipse, but when I try to restart the app I am
unable to restore
the state.
Please find the attached snippet below.
step fol
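The resume step happens at submission time, not inside the job: a plain restart of main() in the IDE starts from empty state. A hedged sketch of the CLI invocation per the restore-a-savepoint docs (the checkpoint path and jar name below are placeholders):

```shell
# Placeholders: point at your own retained checkpoint/savepoint and job jar.
SAVEPOINT="file:///tmp/flink/checkpoints/<job-id>/chk-42"
JOB_JAR="my-state-poc.jar"

# The actual command would be:
#   flink run -s "${SAVEPOINT}" "${JOB_JAR}"
# Here we only print it:
echo "flink run -s ${SAVEPOINT} ${JOB_JAR}"
```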
Hi
Max directory error.
--
Hi
Is there any way to get the thread name in a coFlatMap function,
or to get the runtime context?
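For the thread-name part, plain Java already answers it: inside flatMap1/flatMap2 the current task thread's name can be read directly (the runtime context additionally requires extending RichCoFlatMapFunction and calling getRuntimeContext()). A minimal, Flink-free sketch:

```java
public class ThreadNameDemo {
    // Works inside any operator method, e.g. CoFlatMapFunction.flatMap1/2:
    public static String currentThreadName() {
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) {
        System.out.println(currentThreadName());
    }
}
```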
--
Hi
Is there a way to connect more than 2 streams with different stream schemas?
--
2 , custom:plus(s1.price,s2.price) as price"
+ "insert into outputStream")
.returns("outputStream");
env.execute();
We came across this issue after developing the PoC.
On Mon, Jul 9, 2018 at 5:12 PM, Puneet Kinra <
puneet.ki...@customercentria.com> wrote
>> using state and timers.
>> There is no built-in support to reconfigure such operators.
>>
>> Best,
>> Fabian
>>
>> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.5/
>> dev/stream/state/broadcast_state.html
>>
>>
>>
reconfigure such operators.
>
> Best,
> Fabian
>
> [1] https://ci.apache.org/projects/flink/flink-docs-
> release-1.5/dev/stream/state/broadcast_state.html
>
>
> 2018-07-05 14:41 GMT+02:00 Puneet Kinra
> :
>
>> Hi Aarti
>>
>> Flink doesn't
Hi
I am not able to find the Flink 1.6 release notes.
--
--
Hi
Can anybody please send the link or reference document for the 1.6 release?
--
update it with each incoming record, and
> emit the record when it reaches the threshold value.
>
>
>
> Please rate the three approaches according to their efficiency.
>
>
>
> Regards,
>
> Teena
>
--
Seeking the same answer; do you want to connect multiple streams?
On Sat, Apr 7, 2018 at 5:34 AM, Michael Latta wrote:
> I would like to “join” several streams (>3) in a custom operator. Is this
> feasible in Flink?
>
>
> Michael
>
--
and modifies the splits to point to the new location of the file before
> shipping them downstream to be read
> (but I have not done it).
>
> Keep in mind that if you do not change the contents of the file, then it
> will not be reprocessed.
>
> Cheers,
> Kostas
>
> On M
the default parallelism of the entire
> pipeline, you have to change it in StreamExecutionEnvironment. Otherwise
> every following operator will again have the full parallelism which leads
> to a shuffle operation after your source.
>
> I hope this helps.
>
> Regards,
>
Hi
Is there any way of connecting multiple streams (more than one) in Flink?
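One common answer to this recurring question (a hedged sketch, not from this thread) is to normalize each schema to a shared event type and then union the streams: Flink's union() accepts any number of same-typed streams, while connect() is limited to two. The normalization idea, shown with plain Java so it stays self-contained (in Flink the merge line would be s1.map(...).union(s2.map(...), s3.map(...))):

```java
import java.util.ArrayList;
import java.util.List;

public class UnionSketch {
    // Shared event type each source schema is mapped into (hypothetical fields).
    public record Event(String source, String payload) {}

    // Stand-ins for three differently-typed sources.
    public static Event fromOrders(int orderId)     { return new Event("orders", "order-" + orderId); }
    public static Event fromClicks(String url)      { return new Event("clicks", url); }
    public static Event fromAlerts(double severity) { return new Event("alerts", Double.toString(severity)); }

    public static void main(String[] args) {
        List<Event> merged = new ArrayList<>();
        merged.add(fromOrders(42));
        merged.add(fromClicks("/home"));
        merged.add(fromAlerts(0.9));
        System.out.println(merged.size()); // prints 3
    }
}
```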
--
If I set parallelism equal to 1, it still creates multiple splits while
processing.
--
stream.
> The stream can then separately handled, be written to files or Kafka or
> wherever.
>
> Best,
> Fabian
>
> [1] https://ci.apache.org/projects/flink/flink-docs-
> release-1.4/dev/stream/side_output.html
>
> 2018-03-20 10:36 GMT+01:00 Puneet Kinra
> :
>
log4j.appender.amssourceAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.amssourceAppender.layout.ConversionPattern=%d [%t] %-5p (%C
%M:%L) %x - %m%n
--
org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase
- No state to restore for the AMQSource.
2018-03-15 20:59:39,488 WARN
org.apache.flink.streaming.connectors.activemq.AMQSource - Active MQ
source received non bytes message: null
On Thu, Mar 15, 2018 at 9:00 PM, Puneet Kinra <
puneet.ki...@customercentria.com>
I tried in the cluster as well.
On Wed, Mar 14, 2018 at 10:01 PM, Timo Walther wrote:
> Hi Puneet,
>
> are you running this job on the cluster or locally in your IDE?
>
> Regards,
> Timo
>
>
> On 14.03.18 at 13:49, Puneet Kinra wrote:
>
> Hi
>
> I us
DataStream messageStream = env.addSource(new AMQSource(sourceConfig));
messageStream.print();
env.execute();
    }
}
--
PM, Gary Yao wrote:
>
>> cc'ing user mailing list
>>
>> On Mon, Feb 12, 2018 at 12:40 PM, Puneet Kinra <
>> puneet.ki...@customercentria.com> wrote:
>>
>>> Hi Gary
>>>
>>> Thanks for the response. I am able to upload the jar but I wh
Hi All
I am unable to submit jobs from the frontend; no errors
are being generated either.
--
I am unable to submit the job in Flink from the UI. Is any specific port
opening required?
On Fri, Feb 9, 2018 at 5:10 PM, Puneet Kinra <
puneet.ki...@customercentria.com> wrote:
> I am unable to submit the job in flink from UI
>
> --
> *Cheers *
>
> *Puneet Kinra*
>
>
I am unable to submit the job in flink from UI
--
n Fri, Feb 9, 2018 at 3:23 PM, Till Rohrmann wrote:
> Hi Puneet,
>
> without more information about the job you're running (ideally code), it's
> hard to help you.
>
> Cheers,
> Till
>
> On Fri, Feb 9, 2018 at 10:43 AM, Puneet Kinra <
> puneet.ki...@custo
)
{
bonusPointBatch.transcationLoading(jsonFileReader, cache);
}
but the plan shows only two operators; ideally it should show 4 operators,
2 from each job.
[image: Inline image 1]
--
> How do I specify a unique key attribute on a dynamic table in Flink?
> How do I place a dynamic table in update/upsert/"replace" mode, as opposed
> to append mode?
>
> Posted in StackOverflow as well:
> https://stackoverflow.com/questions/48554999/apache-flink-how-to-enable-upsert
Maybe we can
> figure out a simple workaround.
>
> Thanks, Hequn.
>
> 2018-01-26 23:08 GMT+08:00 Puneet Kinra
> :
>
>> Hi
>>
>> I know currently Ingesting a table from a retraction stream is not
>> supported yet.
>> is there any plan to include in u
Hi
I know that ingesting a table from a retraction stream is currently not
supported.
Is there any plan to include it in upcoming releases?
--
2 from Table2 group by UID -> count per subscriber (retract
stream).
Now I want to put this aggregate into a third table keyed on UID
(upsert mode) and then want to run selection criteria on the third table,
for example: select * from Table3 where TableCount1 = ? and TableCount2 = ?
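A minimal sketch of the intended pipeline in Flink SQL; the table names, columns, and sink DDL below are placeholders assumed for illustration (modern Flink SQL treats a GROUP BY result written to a PRIMARY KEY sink as an upsert stream), not something from this thread:

```sql
-- Hypothetical upsert-keyed result table; connector options omitted.
CREATE TABLE Table3 (
  UID STRING,
  TableCount1 BIGINT,
  TableCount2 BIGINT,
  PRIMARY KEY (UID) NOT ENFORCED  -- the upsert key
) WITH (
  'connector' = 'upsert-kafka'
  -- remaining connector options omitted
);

INSERT INTO Table3
SELECT c1.UID, c1.cnt AS TableCount1, c2.cnt AS TableCount2
FROM   (SELECT UID, COUNT(*) AS cnt FROM Table1 GROUP BY UID) c1
JOIN   (SELECT UID, COUNT(*) AS cnt FROM Table2 GROUP BY UID) c2
ON     c1.UID = c2.UID;

-- Selection criteria can then run over Table3:
-- SELECT * FROM Table3 WHERE TableCount1 = ? AND TableCount2 = ?;
```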
java:168)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Unknown Source)
--