May 12, 2022 at 11:06 PM Roman Grebennikov wrote:
> Hi,
>
> AFAIK scala REPL was removed completely in Flink 1.15 (
> https://issues.apache.org/jira/browse/FLINK-24360), so there is nothing
> to cross-build.
>
> Roman Grebennikov | g...@dfdx.me
>
>
> On Thu, May 1
project was done in this way. And
> the project is a bit experimental, so if you're interested in scala3 on
> Flink, you're welcome to share your feedback and ideas.
>
> with best regards,
> Roman Grebennikov | g...@dfdx.me
>
>
--
Best Regards
Jeff Zhang
https://issues.apache.org/jira/browse/FLINK-25128
>
>
> Best regards,
> Yuxia
>
> ------
> *From: *"Jeff Zhang"
> *To: *"User"
> *Sent: *Saturday, May 07, 2022, 10:05:55 PM
> *Subject: *Unable to start sql-client when putting
> flink-table-planner_
discoverFactory(FactoryUtil.java:553)
at
org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:154)
... 8 more
--
Best Regards
Jeff Zhang
>> Does not seem to include this script anymore.
>>
>> Am I missing something?
>>
>> How can I still start a scala repl?
>>
>> Best,
>>
>> Georg
>>
>>
--
Best Regards
Jeff Zhang
scala> import software.amazon.awssdk.services.sts.StsAsyncClient
> import software.amazon.awssdk.services.sts.StsAsyncClient
>
> scala> StsAsyncClient.builder
> :72: error: Static methods in interface require -target:jvm-1.8
>        StsAsyncClient.builder
>
> Why do I have this error? Is there any way to solve this problem?
>
>
> Thanks,
> Jing
>
>
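For reference, this is a Scala 2.11 limitation rather than a Flink one: the 2.11 compiler only emits calls to static interface methods (a Java 8 feature, which StsAsyncClient.builder is) when told to target Java 8. A minimal sketch of the flag, assuming you control the compiler options (e.g. an sbt build for the job jar; Scala 2.12 builds of Flink avoid this entirely, since 2.12 targets Java 8 by default):

  // build.sbt (hypothetical job build): let scalac 2.11 emit Java 8 bytecode
  scalacOptions += "-target:jvm-1.8"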
--
Best Regards
Jeff Zhang
FLINK_HOME/lib, that's too heavy.
>
>
> Thanks for your any suggestions or replies!
>
>
> Best Regards!
>
>
>
>
--
Best Regards
Jeff Zhang
k-sql-the-easy-way-d9d48a95ae57
The easy way to learn Flink SQL.
Hope it is helpful for you, and welcome to join our community to
discuss with others: http://zeppelin.apache.org/community.html
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang
BTW, you can also send an email to the Zeppelin user mailing list to join the
Zeppelin Slack channel and discuss more details:
http://zeppelin.apache.org/community.html
Jeff Zhang wrote on Wed, Jun 9, 2021 at 6:34 PM:
> Hi Maciek,
>
> You can try Zeppelin, which supports PyFlink and displays the Flink job URL
> inline.
> > > env_settings =
> > > EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
> > > table_env = TableEnvironment.create(env_settings)
> > >
> > > How can I enable Web UI in this code?
> > >
> > > Regards,
> > > Maciek
> > >
> > >
> > >
> > > --
> > > Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
> >
>
>
> --
> Maciek Bryński
>
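A sketch of the same idea outside Zeppelin, in the Scala/Java API (the PyFlink configuration is analogous): a local environment only serves the Web UI when created with one explicitly, and flink-runtime-web must be on the classpath. Port 8081 is an assumption.

  import org.apache.flink.configuration.{Configuration, RestOptions}
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

  val conf = new Configuration()
  conf.setInteger(RestOptions.PORT, 8081) // assumption: 8081 is free locally
  // requires flink-runtime-web as a dependency, otherwise no UI is served
  val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf)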
--
Best Regards
Jeff Zhang
from CLI
>> bin/flink run -m localhost:8081 -c kafka.sample.flink.SQLSample
>> ~/workspaces/kafka-sample/target/kafka-sample-0.1.2-jar-with-dependencies.jar
>> /sample.sql
>> +--------+
>> | result |
>> +--------+
>> |     OK |
>> +--------+
>> 1 row in set
>> Job has been submitted with JobID ace45d2ff850675243e2663d3bf11701
>> +----+------+-----+
>> | op | uuid | ots |
>> +----+------+-----+
>>
>>
>> --
>> Regards,
>> Tao
>>
>
>
> --
> Regards,
> Tao
>
--
Best Regards
Jeff Zhang
http://zeppelin.apache.org/community.html
And check the following 2 links for more details of how to use flink on
zeppelin
https://app.gitbook.com/@jeffzhang/s/flink-on-zeppelin/
http://zeppelin.apache.org/docs/0.9.0/interpreter/flink.html
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang
Can it be implemented this way?
> Thanks
>
--
Best Regards
Jeff Zhang
work?
>
> Thank you!
>
> Mark
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, June 5, 2020 6:13 PM, Jeff Zhang wrote:
>
> You can try JobListener which you can register to ExecutionEnvironment.
>
>
> https://github.com/apache/flink/blob/master/fl
block is never run.
>
> Thank you!
>
> Mark
>
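For completeness, a minimal sketch of the listener registration (Flink 1.10+ API; the printlns are placeholders). Note the callbacks only fire for jobs submitted through this environment's execute()/executeAsync():

  import org.apache.flink.api.common.JobExecutionResult
  import org.apache.flink.core.execution.{JobClient, JobListener}
  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.registerJobListener(new JobListener {
    // called once the job has been handed to the cluster (throwable set on failure)
    override def onJobSubmitted(client: JobClient, t: Throwable): Unit =
      if (t == null) println(s"submitted: ${client.getJobID}")
    // called when execute() returns, i.e. when the job reaches a terminal state
    override def onJobExecuted(result: JobExecutionResult, t: Throwable): Unit =
      if (t == null) println(s"finished: ${result.getJobID}")
  })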
--
Best Regards
Jeff Zhang
>>>>> -
>>>>>
>>>>> var settings = EnvironmentSettings.newInstance()
>>>>> .useBlinkPlanner()
>>>>> .inBatchMode()
>>>>> .build();
>>>>> var tEnv = TableEnvironment.create(settings);
>>>>>
>>>>> The above configuration, however, does not connect to a remote
>>>>> environment. Tracing code in TableEnvironment.java, I see the
>>>>> following method in BlinkExecutorFactory.java that appears to
>>>>> relevant -
>>>>>
>>>>> Executor create(Map<String, String> properties, StreamExecutionEnvironment executionEnvironment);
>>>>>
>>>>> However, it seems to be only accessible through the Scala bridge. I
>>>>> can't seem to find a way to instantiate a TableEnvironment that takes
>>>>> StreamExecutionEnvironment as an argument. How do I achieve that?
>>>>>
>>>>> Regards,
>>>>> Satyam
>>>>>
>>>>
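For what it's worth, a sketch of the indirection I would try (streaming mode only; in the blink-planner era batch mode was reachable only through TableEnvironment.create, not the bridge). Host, port, and jar are placeholders, and the bridge package is org.apache.flink.table.api.scala in releases before 1.11:

  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
  import org.apache.flink.table.api.EnvironmentSettings
  import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

  // remote execution target; the jar must contain the job's user classes
  val env = StreamExecutionEnvironment.createRemoteEnvironment(
    "jobmanager-host", 8081, "target/my-job.jar")
  val settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner().inStreamingMode().build()
  val tEnv = StreamTableEnvironment.create(env, settings)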
--
Best Regards
Jeff Zhang
I'm afraid that there might be other
> behaviors for other environments.
>
> So what's the best practice to determine whether a job has finished or
> not? Note that I'm not waiting for the job to finish. If the job hasn't
> finished I would like to know it and do something else.
>
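In newer releases (the exact version matters, roughly 1.10+), one non-blocking option is to keep the JobClient handed back by executeAsync() and poll it; a sketch:

  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  // ... build the pipeline ...
  val jobClient = env.executeAsync("my-job")
  // getJobStatus returns a CompletableFuture[JobStatus]
  val status = jobClient.getJobStatus.get()
  if (status.isGloballyTerminalState) {
    // FINISHED / FAILED / CANCELED: safe to go do something else
  }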
--
Best Regards
Jeff Zhang
view wrt the
> overall architecture complexity).
>
> @Oytun indeed we'd like to avoid recompiling everything when a single user
> class (i.e. not related to Flink classes) is modified or added. Glad to see
> that there are other people having the same problem here
>
> On Tue, Apr
some other token (e.g.
> /userapi/*).
>
> What do you think about this? Does it sound reasonable to you?
> Am I the only one that thinks this could be useful for many use cases?
>
> Best,
> Flavio
>
--
Best Regards
Jeff Zhang
wrote at 4:44 PM:
> I am only running the zeppelin word count example by clicking the
> zeppelin run arrow.
>
>
> On Mon, 20 Apr 2020, 09:42 Jeff Zhang, wrote:
>
>> How do you run flink job ? It should not always be localhost:8081
>>
>> Som Lima wrote on Mon, Apr 20, 2020 at 4:33 PM:
lay.
>
>
>
>
>
--
Best Regards
Jeff Zhang
Glad to hear that.
Som Lima wrote on Mon, Apr 20, 2020 at 8:08 AM:
> I will thanks. Once I had it set up and working.
> I switched my computers around from client to server to server to client.
> With your excellent instructions I was able to do it in 5 .minutes
>
> On Mon, 20 Apr 2020, 0
for each development.
>
>
> Anyway I kept doing fresh installs about four altogether I think.
>
> Everything works fine now
> Including remote access of zeppelin on machines across the local area
> network.
>
> Next step setup remote clusters
> Wish me luck !
>
>
>
StreamExecutionEnvironment.getExecutionEnvironment();
>>>>
>>>> which is same on spark.
>>>>
>>>> val spark =
>>>> SparkSession.builder.master("local[*]").appName("anapp").getOrCreate
>>>>
>>>> However if I wish to run the servers on a different physical computer.
>>>> Then in Spark I can do it this way using the spark URI in my IDE.
>>>>
>>>> val conf =
>>>> new SparkConf().setMaster("spark://<host>:<port>").setAppName("anapp")
>>>>
>>>> Can you please tell me the equivalent change to make so I can run my
>>>> servers and my IDE from different physical computers.
>>>>
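The Flink counterpart to Spark's setMaster is a remote environment; a sketch (host, port, and jar path are placeholders you fill in, and the jar is shipped to the cluster so the remote workers can load your classes):

  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

  val env = StreamExecutionEnvironment.createRemoteEnvironment(
    "jobmanager-host",   // hypothetical JobManager host
    8081,                // its port (8081 for REST on recent versions)
    "target/my-job.jar") // jar containing the job's user code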
--
Best Regards
Jeff Zhang
https://github.com/ververica/flink-sql-gateway
>>>>
>>>> Best,
>>>> Godfrey
>>>>
>>>> Flavio Pompermaier wrote on Thu, Apr 16, 2020 at 4:42 PM:
>>>>
>>>>> Hi Jeff,
>>>>> FLIP-24 [1] proposed to develop a SQL gateway to query Flink via SQL
>>>>> but since then no progress has been made on that point. Do you think that
>>>>> Zeppelin could be used somehow as a SQL Gateway towards Flink for the
>>>>> moment?
>>>>> Any chance that a Flink SQL Gateway could ever be developed? Is there
>>>>> anybody interested in this?
>>>>>
>>>>> Best,
>>>>> Flavio
>>>>>
>>>>> [1]
>>>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-24+-+SQL+Client
>>>>>
>>>>
--
Best Regards
Jeff Zhang
https://link.medium.com/RBHa2lTIg5 4) Advanced
usage https://link.medium.com/CAekyoXIg5
Welcome to use flink on zeppelin and give feedback and comments.
--
Best Regards
Jeff Zhang
> > look at the README.
> >
> > Any feedback or suggestion is welcomed!
> >
> > [1]
> https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup.html
> > [2]
> https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html
> >
> > Best,
> > Yangze Guo
>
--
Best Regards
Jeff Zhang
> wrote:
> >
> > Yeah, I was wondering about that. I'm using
> > `/usr/lib/flink/bin/start-scala-shell.sh yarn`-- previously I'd use
> > `/usr/lib/flink/bin/start-scala-shell.sh yarn -n ${NUM}`
> > but that deprecated option was removed.
> >
> >
ervicesUtils.java:146)
> at
> org.apache.flink.client.program.rest.RestClusterClient.<init>(RestClusterClient.java:161)
> at
> org.apache.flink.client.deployment.StandaloneClusterDescriptor.lambda$retrieve$0(StandaloneClusterDescriptor.java:51)
> ... 38 more
>
--
Best Regards
Jeff Zhang
flink-conf.yaml can be adjusted dynamically in the user's program.
>>> So it will end up like some of the configurations can be overridden but
>>> some are not. The experience is not quite good for users.
>>>
>>> Best,
>>> Kurt
>>>
>>>
Gyula Fóra wrote on Thu, Mar 5, 2020 at 4:31 PM:
>>>>>
>>>>>> Hi All!
>>>>>>
>>>>>> I am trying to understand if there is any way to override flink
>>>>>> configuration parameters when starting the SQL Client.
>>>>>>
>>>>>> It seems that the only way to pass any parameters is through the
>>>>>> environment yaml.
>>>>>>
>>>>>> There I found 2 possible routes:
>>>>>>
>>>>>> configuration: this doesn't work as it only sets Table specific
>>>>>> configs apparently, but maybe I am wrong.
>>>>>>
>>>>>> deployment: I tried using dynamic properties options here but
>>>>>> unfortunately we normalize (lowercase) the YAML keys so it is impossible
>>>>>> to
>>>>>> pass options like -yD or -D.
>>>>>>
>>>>>> Does anyone have any suggestions?
>>>>>>
>>>>>> Thanks
>>>>>> Gyula
>>>>>>
>>>>>
>>
>> --
>> Best, Jingsong Lee
>>
>
--
Best Regards
Jeff Zhang
this point, which only shows the schema.
>
> Is there anything similar to "SHOW CREATE TABLE" or is this something that
> we should maybe add in the future?
>
> Thank you!
> Gyula
>
--
Best Regards
Jeff Zhang
Congratulations Jingsong! Well deserved.
> > >
> > > Best,
> > > Jark
> > >
> > > On Fri, 21 Feb 2020 at 11:32, zoudan wrote:
> > >
> > >> Congratulations! Jingsong
> > >>
> > >>
> > >> Best,
> > >> Dan Zou
> > >>
> >
> >
>
--
Best Regards
Jeff Zhang
very happy to announce the release
> of Apache Flink 1.10.0, which is the latest major release.
> >>>>>>>>
> >>>>>>>> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> >>>>>>>>
> >>>>>>>> The release is available for download at:
> >>>>>>>> https://flink.apache.org/downloads.html
> >>>>>>>>
> >>>>>>>> Please check out the release blog post for an overview of the
> improvements for this new major release:
> >>>>>>>> https://flink.apache.org/news/2020/02/11/release-1.10.0.html
> >>>>>>>>
> >>>>>>>> The full release notes are available in Jira:
> >>>>>>>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
> >>>>>>>>
> >>>>>>>> We would like to thank all contributors of the Apache Flink
> community who made this release possible!
> >>>>>>>>
> >>>>>>>> Cheers,
> >>>>>>>> Gary & Yu
> >>
> >>
> >
> >
> > --
> > Best, Jingsong Lee
>
--
Best Regards
Jeff Zhang
>>> forward to your feedback!
>>>
>>> Best,
>>> Jincheng
>>>
>>> [1]
>>>
>>> https://lists.apache.org/thread.html/4a4d23c449f26b66bc58c71cc1a5c6079c79b5049c6c6744224c5f46%40%3Cdev.flink.apache.org%3E
>>> [2]
>>>
>>> https://lists.apache.org/thread.html/8273a5e8834b788d8ae552a5e177b69e04e96c0446bb90979444deee%40%3Cprivate.flink.apache.org%3E
>>> [3]
>>>
>>> https://lists.apache.org/thread.html/ra27644a4e111476b6041e8969def4322f47d5e0aae8da3ef30cd2926%40%3Cdev.flink.apache.org%3E
>>>
>>
--
Best Regards
Jeff Zhang
join me in congratulating Dian for becoming a Flink committer!
>
> Best,
> Jincheng(on behalf of the Flink PMC)
>
--
Best Regards
Jeff Zhang
>> Changing the default planner for the whole Table API & SQL is another
>> topic
>> >> and is out of scope of this discussion.
>> >>
>> >> What do you think?
>> >>
>> >> Best,
>> >> Jark
>> >>
>> >> [1]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
>> >> [2]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#top-n
>> >> [3]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
>> >> [4]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html
>> >> [5]:
>> >>
>> https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/conf/sql-client-defaults.yaml#L100
>> >
>>
>>
>
> --
> Best, Jingsong Lee
>
--
Best Regards
Jeff Zhang
ies in
>> [1] even though we've supported almost all Hive versions [3] now.
>>
>> I want to hear what the community think about this, and how to achieve it
>> if we believe that's the way to go.
>>
>> Cheers,
>> Bowen
>>
>> [1] https://flink.apache.org/downloads.html
>> [2]
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#dependencies
>> [3]
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#supported-hive-versions
>>
>
--
Best Regards
Jeff Zhang
The release notes are available in Jira:
> https://issues.apache.org/jira/projects/FLINK/versions/12346112
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> Great thanks to @Jincheng as a mentor during this release.
>
> Regards,
> Hequn
>
>
>
--
Best Regards
Jeff Zhang
cies.html>
> doesn't work for you, then you still need a flink-shaded-hadoop-jar that
> you can download here
> <https://flink.apache.org/downloads.html#apache-flink-191>.
>
> On 25/10/2019 09:54, Jeff Zhang wrote:
>
> Hi all,
>
> There's no new flink shaded release
Best Regards
Jeff Zhang
> committer of the Flink project.
>
> Congratulations Zili Chen.
>
> regards.
>
--
Best Regards
Jeff Zhang
Static methods in interface require -target:jvm-1.8
> [ERROR] val bbTableEnv = TableEnvironment.create(bbSettings)
>
> But when I use the Java programming language, or Scala 2.12, there is no
> problem.
>
> If I use Scala 2.11, is there any way to solve this?
Kostas Kloudas is joining the Flink
>>> PMC.
>>> >> Kostas is contributing to Flink for many years and puts lots of
>>> effort in helping our users and growing the Flink community.
>>> >> Please join me in congratulating Kostas!
>>> >
>>> > congratulation Kostas!
>>> >
>>> > regards.
>>>
>>>
--
Best Regards
Jeff Zhang
FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>>> at
>>> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>>> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>>> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>>> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>>> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>>> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>>> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>>> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>
>>>
>>>
>>>
--
Best Regards
Jeff Zhang
committer of the Flink project.
>
> Hequn has been contributing to Flink for many years, mainly working on
> SQL/Table API features. He's also frequently helping out on the user
> mailing lists and helping check/vote the release.
>
> Congratulations Hequn!
>
> Best, Jincheng
> (on behalf of the Flink PMC)
>
>
>
--
Best Regards
Jeff Zhang
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(2);
>>
>> instead of StreamExecutionEnvironment env =
>> StreamExecutionEnvironment.getExecutionEnvironment();
>>
>> With Flink 1.4.2, StreamExecutionEnvironment env =
>> StreamExecutionEnvironment.getExecutionEnvironment(); used to work on both
>> cluster as well as local environment.
>>
>> Is there any way to make
>> StreamExecutionEnvironment.getExecutionEnvironment(); work in both cluster
>> and local mode in flink 1.7.1? Specifically how to make it work locally via
>> IntelliJ.
>>
>> Thanks & Regards,
>> Vinayak
>>
>
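For reference, getExecutionEnvironment() is designed to fall back to a LocalEnvironment when the program isn't launched via bin/flink, so in the IDE it should work once the Flink dependencies are on the runtime classpath (not scope "provided"). If you need explicit control anyway, a sketch with a made-up flag:

  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

  // hypothetical switch (inside main(args)): force a local environment with
  // parallelism 2 in the IDE, otherwise let Flink detect the context
  val env =
    if (args.contains("--local")) StreamExecutionEnvironment.createLocalEnvironment(2)
    else StreamExecutionEnvironment.getExecutionEnvironment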
--
Best Regards
Jeff Zhang
problem down the line with mismatch between the new releases of
>>>> Akka and Flink.
>>>>
>>>> regards.
>>>>
>>>> --
>>>> Debasish Ghosh
>>>> http://manning.com/ghosh2
>>>> http://manning.com/ghosh
>>>>
>>>> Twttr: @debasishg
>>>> Blog: http://debasishg.blogspot.com
>>>> Code: http://github.com/debasishg
>>>>
>>>>
>>>
>>> --
>>> Debasish Ghosh
>>> http://manning.com/ghosh2
>>> http://manning.com/ghosh
>>>
>>> Twttr: @debasishg
>>> Blog: http://debasishg.blogspot.com
>>> Code: http://github.com/debasishg
>>>
>>
>
> --
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>
>
--
Best Regards
Jeff Zhang
t; integrate our application with a dedicated job scheduler like the one
> listed before (probably)..I don't know if some of them are nowadays already
> integrated with Flink..when we started coding our frontend application (2
> years ago) none of them were using it.
>
> Best,
>
from the REST API is the fact that the job can't do anything after
>env.execute() while we need to call an external service to signal that the
> job has ended + some other details
>
> Best,
> Flavio
>
> On Tue, Jul 23, 2019 at 3:44 AM Jeff Zhang wrote:
>
>> Hi Flavio,
e deprecating or dropping it.
>>
>> I really appreciate your time and your insight.
>>
>> Best,
>> tison.
>>
>> [1]
>> https://lists.apache.org/thread.html/7ffc9936a384b891dbcf0a481d26c6d13b2125607c200577780d1e18@%3Cdev.flink.apache.org%3E
>>
>
>
>
--
Best Regards
Jeff Zhang
documentation regarding the framework, as I'm struggling to find much
> documentation for my application online.
>
>
> thanks in advance.
>
>
> kind regards,
>
> Dante Van den Broeke
>
>
--
Best Regards
Jeff Zhang
e
> test cases on the CLI client from outside the cluster. For
> instance, the command “flink run WordCounter.jar” doesn't work. So,
> could you give me some successful examples, please?
>
>
> Thanks!
>
--
Best Regards
Jeff Zhang
of the Flink project.
>>
>> Rong has been contributing to Flink for many years, mainly working on SQL
>> and Yarn security features. He's also frequently helping out on the
>> user@f.a.o mailing lists.
>>
>> Congratulations Rong!
>>
>> Best, Fabian
>> (on behalf of the Flink PMC)
>>
>
--
Best Regards
Jeff Zhang
have a single EMR cluster with Flink and want to run multiple
>> applications on it with different flink configurations. Is there a way to
>>
>> 1. Pass the config file name for each application, or
>> 2. Overwrite the config parameters via command line arguments for the
>> application. This is similar to how we can overwrite the default
>> parameters in spark
>>
>> I searched the documents and have tried using ParameterTool with the
>> config parameter names, but it has not worked as yet.
>>
>> Thanks for your help.
>>
>> Mans
>>
>>
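Two knobs that may help, both version-dependent so please verify against your release: each submission can be pointed at its own config directory via the FLINK_CONF_DIR environment variable, and on YARN the CLI accepts dynamic properties with -yD. For example:

  FLINK_CONF_DIR=/etc/flink/conf-app1 bin/flink run app1.jar
  bin/flink run -m yarn-cluster -yD taskmanager.heap.size=4096m app2.jar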
--
Best Regards
Jeff Zhang
Please guide me how I can do this.
> Kind regards;
> syed
>
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
Best Regards
Jeff Zhang
ltiple different implementations and
> confuse users that way.
> Given that the existing Python APIs are a bit limited and not under active
> development, I would suggest to deprecate them in favor of the new API.
>
> Best,
> Stephan
>
>
--
Best Regards
Jeff Zhang
> Till
>
> On Sun, Jun 2, 2019 at 3:20 PM Jeff Zhang wrote:
>
>>
>> Hi Folks,
>>
>>
>> When I read the flink client api code, the concept of session is a little
>> vague and unclear to me. It looks like the session concept is only applied
>> in b
? Thanks.
--
Best Regards
Jeff Zhang
nning-a-job-in-apache-flink-standalone-mode-on-zeppelin-i-have-this-error-to
>
> Would appreciate for any support for helping to resolve that problem.
>
>
>
> Regards,
>
> Sergey
>
>
>
>
--
Best Regards
Jeff Zhang
> If the listeners are expected to do anything on the job, should some
> helper class to manipulate the jobs be passed to the listener method?
> Otherwise users may not be able to easily take action.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
>
> On Wed, Apr 2
from the Configuration or some
> other mechanism for example. That way it would not need to be exposed via
> the ExecutionEnvironment at all.
>
> Cheers,
> Till
>
> On Fri, Apr 19, 2019 at 11:12 AM Jeff Zhang wrote:
>
>> >>> The ExecutionEnvironment is us
Apache
>> Kylin. In that case, the Flink job program is embedded into Kylin's
>> executable context.
>>
>> If we could have this listener, it would be easier to integrate with
>> Kylin.
>>
>> Best,
>> Vino
>>
>> Jeff Zhang wrote on 201
void onJobCanceled(JobID jobId, String savepointPath);
}
Let me know your comments and concerns, thanks.
--
Best Regards
Jeff Zhang
>>> breaking)
>>>> - Add fine fault tolerance, scheduling, caching also to DataStream API
>>>>
>>>> *Streaming State Evolution*
>>>> - Let all built-in serializers support stable evolution
>>>> - First class support for other evolvable formats (Protobuf, Thrift)
>>>> - Savepoint input/output format to modify / adjust savepoints
>>>>
>>>> *Simpler Event Time Handling*
>>>> - Event Time Alignment in Sources
>>>> - Simpler out-of-the box support in sources
>>>>
>>>> *Checkpointing*
>>>> - Consistency of Side Effects: suspend / end with savepoint (FLIP-34)
>>>> - Failed checkpoints explicitly aborted on TaskManagers (not only on
>>>> coordinator)
>>>>
>>>> *Automatic scaling (adjusting parallelism)*
>>>> - Reactive scaling
>>>> - Active scaling policies
>>>>
>>>> *Kubernetes Integration*
>>>> - Active Kubernetes Integration (Flink actively manages containers)
>>>>
>>>> *SQL Ecosystem*
>>>> - Extended Metadata Stores / Catalog / Schema Registries support
>>>> - DDL support
>>>> - Integration with Hive Ecosystem
>>>>
>>>> *Simpler Handling of Dependencies*
>>>> - Scala in the APIs, but not in the core (hide in separate class
>>>> loader)
>>>> - Hadoop-free by default
>>>>
>>>>
--
Best Regards
Jeff Zhang
ticipating in lots of discussions on our mailing
> lists, working on topics that are of joint interest of Flink and Beam, and
> giving talks on Flink at many events.
>
> Please join me in welcoming and congratulating Thomas!
>
> Best,
> Fabian
>
--
Best Regards
Jeff Zhang
notebook environment for validation of Flink
> apps.
>
> Looking forward to your response
>
> Thanks
>
--
Best Regards
Jeff Zhang
.
yinhua.dai wrote on Fri, Jan 25, 2019 at 5:12 PM:
> Thanks Guys.
> I just wondering if there is another way except hard code the list:)
> Thanks anyway.
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
Best Regards
Jeff Zhang
also be
>> extended to cover flink-dist. For example, the yarn and mesos code could
>> be spliced out into separate jars that could be added to lib manually.
>>
>> Let me know what you think.
>>
>> Regards,
>>
>> Chesnay
>>
>>
--
Best Regards
Jeff Zhang
success. Similarly for user u2, at
> time t6, there was no change in running count as there was no change in
> status for order o4
>
> t1 -> u1 : 1, u2 : 0
> t2 -> u1 : 1, u2 : 0
> t3 -> u1 : 2, u2 : 0
> *t4 -> u1 : 1, u2 : 0 (since o3 moved pending to success, so count is
> decreased for u1)*
> t5 -> u1 : 1, u2 : 1
> *t6 -> u1 : 1, u2 : 1 (no increase in count of u2 as o4 update has no
> change)*
>
> As I understand may be retract stream can achieve this. However I am not
> sure how. Any samples around this would be of great help.
>
> Gagan
>
>
>
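A sketch of the retract-stream idea (it assumes the blink planner, since LAST_VALUE isn't built into older releases, where a user-defined aggregate plays the same role; table and field names are made up):

  import org.apache.flink.api.scala._
  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
  import org.apache.flink.table.api.bridge.scala._
  import org.apache.flink.types.Row

  val senv = StreamExecutionEnvironment.getExecutionEnvironment
  val tEnv = StreamTableEnvironment.create(senv)
  // `orders` is assumed to be registered with columns (order_id, user_id, status)
  val pendingPerUser = tEnv.sqlQuery(
    """SELECT user_id, COUNT(*) AS pending_cnt
      |FROM (
      |  SELECT order_id, user_id, LAST_VALUE(status) AS status
      |  FROM orders GROUP BY order_id, user_id
      |) WHERE status = 'PENDING'
      |GROUP BY user_id""".stripMargin)
  // (true, row) adds/updates a count; (false, row) retracts a previous one,
  // which is exactly the decrease you expect at t4
  tEnv.toRetractStream[Row](pendingPerUser).print()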
--
Best Regards
Jeff Zhang
>>> Please check out the release blog post for an overview of the
>>> improvements
>>> for this bugfix release:
>>> https://flink.apache.org/news/2018/12/22/release-1.5.6.html
>>>
>>> The full release notes are available in Jira:
>>>
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344315
>>>
>>> We would like to thank all contributors of the Apache Flink community who
>>> made this release possible!
>>>
>>> Regards,
>>> Thomas
>>>
>>
--
Best Regards
Jeff Zhang
> org.apache.flink.runtime.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:50)
> 12/7/2018 10:44:32 AM ... 1 more
> 12/7/2018 10:44:32 AMException in thread "main"
> java.lang.NoSuchMethodError:
> org.apache.flink.runtime.entrypoint.parser.CommandLineParser.printHelp()V
> 12/7/2018 10:44:32 AM at
> org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:146)
>
>
>
--
Best Regards
Jeff Zhang
I assume this is a common setup in prod environments. This hasn't
> been a problem with the legacy execution mode.
>
> Any thoughts?
> Gyula
>
--
Best Regards
Jeff Zhang
Thanks Chesnay, but if users want to use connectors in the Scala shell, they
have to download them.
On Wed, Nov 14, 2018 at 5:22 PM Chesnay Schepler wrote:
> Connectors are never contained in binary releases as they are supposed to
> be packaged into the user-jar.
>
> On 14.11.2018 10:12
I don't see the jars of the Flink connectors in the binary release of Flink
1.6.1, so I just want to confirm whether the binary release includes these
connectors. Thanks
--
Best Regards
Jeff Zhang
ow
> key: flink with 1 window
> key: hadoop with 1 window
>
> Best, Hequn
>
>
> On Wed, Nov 14, 2018 at 10:31 AM Jeff Zhang wrote:
>
>> Hi all,
>>
>> I am a little confused with the following windows operation. Here's the
>> code,
>>
>>
rection", Types.STRING)
>> .field("rowtime", Types.SQL_TIMESTAMP)
>
>
> Btw, a unified api for source and sink is under discussion now. More
> details here[1]
>
> Best, Hequn
>
> [1]
> https://docs.google.com/document/d/1Yaxp1UJUFW-peGLt8EIidwKIZ
Hi all,
I am a little confused with the following windows operation. Here's the
code,
val senv = StreamExecutionEnvironment.getExecutionEnvironment
senv.setParallelism(1)
val data = senv.fromElements("hello world", "hello flink", "hello hadoop")
data.flatMap(line => line.split("\\s"))
.map(w => (w, 1))
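(The archive cuts the snippet here; a plausible completion just so the example compiles. The keying and the window clause are my guesses, not the original code:)

  .keyBy(0)
  .countWindow(2) // assumption: a count window; the original clause is lost
  .sum(1)
  .print()
senv.execute()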
Hi,
I hit the following error when I try to use kafka connector in flink table
api. There's very little document about how to use kafka connector in flink
table api, could anyone help me on that ? Thanks
Exception in thread "main" org.apache.flink.table.api.ValidationException:
Field 'event_ts' could not be resolved by the field mapping.
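For context, a sketch of how the descriptor API of that era wires a rowtime attribute; the usual cause of this ValidationException is a rowtime or schema field that doesn't match any field the format actually delivers. Topic, versions, and field names here are hypothetical:

  import org.apache.flink.table.api.Types
  import org.apache.flink.table.descriptors.{Json, Kafka, Rowtime, Schema}

  // tableEnv: a StreamTableEnvironment assumed to exist already
  tableEnv.connect(new Kafka()
      .version("0.11")
      .topic("events")
      .property("bootstrap.servers", "localhost:9092"))
    .withFormat(new Json().deriveSchema())
    .withSchema(new Schema()
      .field("direction", Types.STRING)
      .field("rowtime", Types.SQL_TIMESTAMP)
        .rowtime(new Rowtime()
          .timestampsFromField("event_ts") // must exist in the JSON records
          .watermarksPeriodicBounded(1000)))
    .inAppendMode()
    .registerTableSource("events")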
The error is most likely due to a classpath issue, because the classpath is
different when you run the Flink program in the IDE and when you run it on a cluster.
And starting another JVM process in a SourceFunction doesn't seem like a good
approach to me; is it possible for you to do that work directly in your custom SourceFunction?
Because flink-table is a provided dependency, it won't be included in
the final shaded jar. I didn't find a way to add a custom jar to the classpath via
bin/flink; does anyone know how? Thanks
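For reference, one CLI route that may apply, hedged since I haven't verified it for this case: bin/flink run has a -C/--classpath option that adds a URL to the classpath on all nodes, provided the URL is reachable from every node, e.g.:

  bin/flink run -C file:///path/to/extra-dependency.jar myJob.jar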
I tried to run the scala-shell in YARN mode in 1.5, but hit the following error.
I can run it successfully in 1.4.2. It is the same even when I change the
mode to legacy. Is this a known issue, or did something change in 1.5? Thanks
Command I Use: bin/start-scala-shell.sh yarn -n 1
Starting Flink Shell: