Hi Philip,
Could you attach the full stack trace? Are you querying the same
job/cluster in both tests?
I am also looping in Kostas, who might know more about changes in Queryable
state between 1.4.2 and 1.5.0.
Best,
Dawid
On Thu, 19 Jul 2018 at 22:33, Philip Doctor
wrote:
> Dear Flink Users,
> I
I was hoping to join a StreamTableSource to a BatchTableSource, but I find it's
not simple. A couple of questions:
1) Other than just pushing the DataSet to a Kafka topic (either internally
or externally to the application) and reading it into a DataStream are there
any means of doing the
Hi Gregory,
I think it is a Flink bug. Could you file a JIRA issue for it? Also,
which version of Flink are you using?
Best,
Dawid
On Fri, 20 Jul 2018 at 04:34, vino yang wrote:
> Hi Gregory,
>
> This exception seems like a bug; you can create an issue in JIRA.
>
> Thanks, vino.
>
> 2018-07-20 10:28
Hi Porritt,
Flink does not currently support joins between streaming and batch;
streaming and batch jobs are independent of each other.
I guess your use case is a join between a stream and a dimension table?
Unfortunately, it is not currently possible for the Flink SQL API to join
a stream with a batch dataset.
1)
As a workaround
Hi James,
1) Unfortunately, Flink does not support DataSet with DataStream joins as
of now. If the "batch" table is small enough you might try the solution
suggested by Vino to load it in the UDTF. You can also try implementing the
Stream version of this table yourself. You can use the
org.apache.
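The workaround suggested above (loading the small "batch" table inside the streaming operator) can be sketched with plain Java collections standing in for Flink's DataSet/DataStream. All class names, methods, and data below are hypothetical illustrations, not Flink API; in a real job the load would happen in a RichFlatMapFunction's open() method.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DimensionJoinSketch {
    // Stand-in for loading the small batch table; in Flink this would
    // run once in RichFlatMapFunction.open().
    static Map<String, String> loadDimensionTable() {
        Map<String, String> dim = new HashMap<>();
        dim.put("1", "one");
        dim.put("2", "two");
        return dim;
    }

    // Enrich each stream record with the matching dimension value,
    // falling back to "unknown" when there is no match.
    static List<String> enrich(List<String> stream, Map<String, String> dim) {
        return stream.stream()
                .map(id -> id + ":" + dim.getOrDefault(id, "unknown"))
                .collect(Collectors.toList());
    }
}
```

This only makes sense while the batch side is small enough to hold in each task's memory, which is the condition stated in the reply above.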
Hi,
Thanks. Just to clarify, where would you then invoke the savepoint
creation? I basically need to know when all data is read, create a
savepoint and then exit. I think I could just as well use the
PROCESS_CONTINUOUSLY, monitor the stream outside and then use the REST API to
cancel with savepoint.
A
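The "cancel with savepoint via REST" route mentioned above could look roughly like the following. The endpoint shape is from the Flink 1.5 REST API as I understand it, and the host, port, job id, and target directory are all placeholders to verify against your own setup, not values from this thread:

```
# Hedged sketch: trigger a savepoint and cancel the job in one call.
curl -X POST http://myhost:8081/jobs/<jobId>/savepoints \
     -H 'Content-Type: application/json' \
     -d '{"target-directory": "hdfs:///savepoints", "cancel-job": true}'
```

The equivalent CLI form is `flink cancel -s <targetDirectory> <jobId>`.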
Thanks for your answers.
In my use case I am reading from a large number of individual files. Jobs are
issued directly from the Java API, the results are collected (in memory) and
re-used partially in follow-up jobs.
I feared that using a MiniCluster or local environment I would not be able to
Hi,
We have disabled the Flink Web UI for security reasons; however, we want to
use the REST API for monitoring purposes. For that I have set
jobmanager.web.port = -1, rest.port=, rest.address=myhost
But I am not able to access any REST api using https://
myhost:/
Is it mandatory to have Flink Web
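For reference, a minimal sketch of the relevant flink-conf.yaml entries; the port value 8081 is an assumption (the default), and, as a later reply in this thread notes, the web UI and REST API are served by one and the same server, so they cannot be disabled selectively by port:

```yaml
# flink-conf.yaml sketch; 8081 is the default REST/web port (assumption)
rest.address: myhost
rest.port: 8081
# Setting jobmanager.web.port: -1 disables this same server entirely,
# taking the REST API down along with the web UI.
```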
Hi Henkka, if you want to customize the DataStream text source for your
purpose, you can use a read counter: if the value of the counter does not
change within an interval, you can assume all the data has been read. Just an
idea; you can choose another solution. About creating a savepoint automatically on jo
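The read-counter idea above can be sketched in plain Java. The class and method names are hypothetical, and in a real source the poll would run periodically on a timer thread:

```java
import java.util.concurrent.atomic.AtomicLong;

public class IdleDetector {
    private final AtomicLong counter = new AtomicLong();
    // -1 so the very first poll never reports idle, giving the source
    // a chance to start producing before we judge it exhausted.
    private long lastSeen = -1;

    /** Called by the source for every record it emits. */
    public void recordRead() {
        counter.incrementAndGet();
    }

    /** Returns true if no records were read since the previous poll. */
    public boolean pollIsIdle() {
        long current = counter.get();
        boolean idle = (current == lastSeen);
        lastSeen = current;
        return idle;
    }
}
```

When pollIsIdle() returns true, the job could then trigger the savepoint-and-exit step discussed earlier in the thread.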
Hi Vinay, does the job manager run on node "myhost"? Did you check that the
port you specified is open for remote access? Can you try starting the web UI
but blocking its port? Vino yang. Thanks. On 2018-07-20 22:48, Vinay Patil wrote: Hi,
We have disabled Flink Web UI for security reasons however we wa
Hi Richer,
Actually, for testing I have now reduced the number of timers to a few
thousand (5-6K), but my job still gets stuck randomly, and it's not
reproducible each time. When I restart the job it starts working again for a
few hours/days, then gets stuck again.
I took thread dum
Hello all,
This is my code, just trying to make the code example in
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/stream/
operators/process_function.html work
object ProcessFunctionTest {
def main(args: Array[String]) {
val env = StreamExecutionEnvironment.getExecutionEn
My object name is CreateUserNotificationRequests; that's why you see
CreateUserNotificationRequests
in the error message.
I edited the object name after pasting the code... Hope there is no
confusion and I can get some help.
Thanks
On Fri, Jul 20, 2018 at 10:10 AM, anna stax wrote:
> Hello all,
>
Hi,
I have a problem when running a streaming Flink job from a jar file using the
CLI; the program works fine when run from the IDE.
The main exception is [1]
When I search for this exception, I tried to solve it by adding akka
dependencies to my pom.xml file as [2] and maven shaded plugin for jar
execution
Yeah I just went to reproduce this on a fresh environment, I blew away all the
Zookeeper data and the error went away. I'm running HA JobManager (1 active, 2
standby) and 3 TMs. I'm not sure how to fully account for this behavior yet,
it looks like I can make this run from a totally fresh envi
Hello
I am building an app, but for this use case I want to test with session
windows, and I am not sure if this will be expensive in compute resources
because the gap will be 10 mins, 20 mins, or 60 mins; I want to trigger an
alert if the element reaches some thresholds within those periods of
ti
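To reason about the cost, it can help to see what a session gap does to the data. Below is a plain-Java sketch of session-gap grouping per key, with no Flink dependencies; the timestamps and gap are illustrative, and the splitting rule mirrors what EventTimeSessionWindows.withGap() would produce:

```java
import java.util.ArrayList;
import java.util.List;

public class SessionGapSketch {
    /**
     * Splits sorted event timestamps into sessions: a new session starts
     * whenever the gap between consecutive events exceeds gapMillis.
     * Returns the number of elements in each session.
     */
    static List<Integer> sessionSizes(long[] sortedTimestamps, long gapMillis) {
        List<Integer> sizes = new ArrayList<>();
        int count = 0;
        for (int i = 0; i < sortedTimestamps.length; i++) {
            if (i > 0 && sortedTimestamps[i] - sortedTimestamps[i - 1] > gapMillis) {
                sizes.add(count); // gap exceeded: close the current session
                count = 0;
            }
            count++;
        }
        if (count > 0) {
            sizes.add(count); // close the trailing session
        }
        return sizes;
    }
}
```

The alert threshold check would then apply per session, which is why longer gaps mean more state held per key while a session stays open.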
This is on Flink 1.4.2. I filed it as Flink-9905. Thanks!
On Fri, Jul 20, 2018 at 2:51 AM, Dawid Wysakowicz
wrote:
> Hi Gregory,
> I think it is a Flink bug. Could you file a JIRA issue for it? Also,
> which version of Flink are you using?
> Best,
> Dawid
>
> On Fri, 20 Jul 2018 at 04:34, vino yang
Effectively you can't disable them selectively; reason being that they
are actually one and the same.
The ultimate solution is to build flink-dist yourself, and exclude
"flink-runtime-web" from it, which removes
the required files.
Note that being able to selectively disable them _for security
It is not the code, but I don't know what the problem is.
A simple word count with socketTextStream used to work but now gives the
same error.
Apps with a Kafka source that used to work are giving the same error.
When I have a source generator within the app itself, it works fine.
So, with socketTextSt
Hi, Alexei:
What you pasted is the expected behavior. The job manager and the two task
managers should each run in a Docker instance.
13276 should be the process of the job manager, and it's the same
process as 789.
They have different process ids because they are shown in different
namespaces (that's a concept in c
Hi antonio,
I think it is worth a try to test the performance in your scenario, since job
performance can be affected by a number of factors (say, your WindowFunction).
Best, Hequn
On Sat, Jul 21, 2018 at 2:59 AM, antonio saldivar
wrote:
> Hello
>
> I am building an app but for this UC I want to te
Hello
Actually I evaluate my WindowFunction with a trigger alert, using
something like the code below (testing with 2 different windows), expecting
5K elements per second to arrive
SingleOutputStreamOperator windowedElem = element
.keyBy("id")
.timeWindow(Time.seconds(120))
// .window(EventTi