Hi,
For the three questions,
1. The processing time timer will still be triggered. IMO you can think of
the processing time timer as working in parallel with the event time timer;
they are processed separately under the hood. The processing time timer
fires according to wall-clock time (see the sketch after these answers).
2. I'm
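For illustration, a minimal sketch of how the two timer kinds coexist in a
KeyedProcessFunction (the class name, the String type parameters, and the
10-second delays are placeholders of mine):

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TwoTimersFunction extends KeyedProcessFunction<String, String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // Fires ~10s of wall-clock time from now, regardless of watermarks.
        ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + 10_000L);
        // Fires when the watermark passes this element's timestamp + 10s.
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 10_000L);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // ctx.timeDomain() tells you which kind of timer fired.
        out.collect(ctx.timeDomain() + " timer fired at " + timestamp);
    }
}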
Hi Aljoscha,
Thanks for your response.
With all this preliminary information collected, I’ll start a formal process.
Thanks to everybody for your attention.
Best,
Xingcan
> On Jul 8, 2019, at 10:17 AM, Aljoscha Krettek wrote:
>
> I think this would benefit from a FLIP that neatly sums up the op
Hi Felipe,
> I would like to create a logical filter if there is no filter set on the
logical query. How should I implement it?
Do you mean you want to add a LogicalFilter node even if the query doesn't
contain a filter? Logically, this can be done through a rule. However, it
sounds a little hacky an
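For what it's worth, a rough, untested sketch of such a rule (the rule name
is made up, and you would still need to guard against it re-firing on the
same scan):

import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.logical.LogicalFilter;
import org.apache.calcite.rel.logical.LogicalTableScan;
import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexNode;

public class AddTrueFilterRule extends RelOptRule {

    public static final AddTrueFilterRule INSTANCE = new AddTrueFilterRule();

    private AddTrueFilterRule() {
        super(operand(LogicalTableScan.class, none()), "AddTrueFilterRule");
    }

    @Override
    public void onMatch(RelOptRuleCall call) {
        LogicalTableScan scan = call.rel(0);
        RexBuilder rexBuilder = scan.getCluster().getRexBuilder();
        // Wrap the bare scan in a trivial LogicalFilter(TRUE).
        RexNode alwaysTrue = rexBuilder.makeLiteral(true);
        call.transformTo(LogicalFilter.create(scan, alwaysTrue));
    }
}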
Hi John,
I think what Konstantin is trying to say is: Flink's Kafka consumer does
not start consuming from the Kafka committed offset when starting; it
actually starts from the offset that was last checkpointed
to the external DFS (i.e. the starting point of the consumer has no relevance
Hi Flavio,
Yes, I think the handling of date/time in Flink can be better when
dealing with DATE/TIME types.
There are several limitations Jingsong briefly mentioned about
java.util.Date. Some of these limitations even affect the correctness of
the results (e.g. Gregorian vs Julian c
Hi Flavio,
Thanks for your information.
From your description, it seems that you only use the window to get the
start and end time, and that no aggregations happen. If this is the case,
you can get the start and end time yourself
(`TimeWindow.getWindowStartWithOffset()` shows how to get w
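For illustration, a minimal sketch of computing the boundaries yourself
(the window size, offset, and timestamp are arbitrary examples):

import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class WindowBounds {
    public static void main(String[] args) {
        long windowSize = Time.minutes(5).toMilliseconds();
        long offset = 0L;
        long timestamp = 1_562_000_123_456L; // an element's event timestamp
        // Align the timestamp to the start of its tumbling window.
        long windowStart = TimeWindow.getWindowStartWithOffset(timestamp, offset, windowSize);
        long windowEnd = windowStart + windowSize;
        System.out.println(windowStart + " .. " + windowEnd);
    }
}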
Hi Vishwas,
The value of `env.java.opts` will be passed as JVM options to both the
jobmanager and the taskmanager, so the same port is set for both processes.
If you need to pass JVM options to jobmanager and taskmanager differently,
you can use `env.java.opts.jobmanager` and `env.java.opts.taskmanager`
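For example, a flink-conf.yaml sketch that attaches a debug agent to the
two processes on different ports (5005/5006 are arbitrary examples):

env.java.opts.jobmanager: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
env.java.opts.taskmanager: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006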
Hi:
I have a few questions about the stream time characteristics:
1. If the time characteristic is set to TimeCharacteristic.EventTime, but the
timers in a processor or trigger are set using registerProcessingTimeTimer (or
vice versa), will that timer fire?
2. Once the time characteristic is s
Hi Yebgenya,
To use Blink's integration with Hive in the SQL CLI, you can refer to
Blink's documentation at [1], [2], and [3].
Note that Hive integration is actually available in **Flink master branch**
now and will be released soon as part of Flink 1.9.0. The end-to-end
integration should be feature
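For reference, a sketch of what the catalog section of the SQL CLI YAML
could look like (the name, path, and version are examples, and the exact
keys may differ between Blink and the Flink 1.9 release, so please check
the docs in [1]-[3]):

catalogs:
  - name: myhive
    type: hive
    hive-conf-dir: /opt/hive-conf
    hive-version: 2.3.4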
Hi Xuchen,
Every email to our ML asking questions **MUST** have a valid subject, to
facilitate archive searches in the future and save people time when
deciding whether they can help answer your question just by glancing at
the subject in their email clients.
Though your question itself is wel
Hi guys,
I am not able to start a standalone session with one task manager and one
job manager on the same node by adding a debug option in flink-conf.yaml
as env.java.opts:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005(
https://cwiki.apache.org/confluence/display/FLINK/Remote+D
Hi Vishwas,
Sorry for the late response.
Are you still facing the issue?
I have no experience with EMC ECS, but the exception suggests an issue with
the host name:
Caused by: java.net.UnknownHostException:
aip-featuretoolkit.SU73ECSG1P1d.***.COM
at java.net.InetAddress.getAl
Hi,
Kafka offsets are only managed by the Flink Kafka Consumer. The downstream
operators do not care whether the events were read from Kafka, files,
Kinesis, or whichever source.
It is the responsibility of the source to include its reading position (in
case of Kafka the partition offsets) in a chec
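For illustration, a minimal sketch of a consumer whose offsets travel with
the checkpoints (the topic, group id, and broker address are placeholders):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaOffsetSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Offsets are stored in Flink's checkpoints; on recovery the consumer
        // resumes from the checkpointed offsets, not Kafka's committed ones.
        env.enableCheckpointing(60_000L);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        // Only consulted when there is no checkpoint/savepoint to restore from.
        consumer.setStartFromGroupOffsets();

        env.addSource(consumer).print();
        env.execute("kafka-offset-sketch");
    }
}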
Hi,
I am a newbie to Apache Calcite. I am trying to use it with Apache Flink.
To start, I am trying to create a HelloWorld which just adds a logical filter
to my query.
1 - I have my Flink app using the Table API [1].
2 - I have created my Calcite filter rule, which is applied to my Flink
query if I use
Hi Hequn, thanks for your answer.
What I'm trying to do is read a stream of events that basically contains
a UserId field and, every X minutes (i.e. using a time window) and for each
distinct UserId key, query 3 different REST services to enrich my POJOs*.
For the moment what I do is use a P
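One idiomatic approach for this kind of per-record REST enrichment would be
Flink's Async I/O; a rough sketch for reference, where the POJO classes and
callRestService() are hypothetical stand-ins:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class RestEnrichmentSketch {
    public static class MyPojo { public String userId; }
    public static class EnrichedPojo { public String userId; public String profile; }

    public static class RestEnricher extends RichAsyncFunction<MyPojo, EnrichedPojo> {
        @Override
        public void asyncInvoke(MyPojo input, ResultFuture<EnrichedPojo> resultFuture) {
            // Run the blocking REST call off the main thread and complete
            // the future with the enriched record when it returns.
            CompletableFuture
                    .supplyAsync(() -> callRestService(input))
                    .thenAccept(e -> resultFuture.complete(Collections.singleton(e)));
        }

        private EnrichedPojo callRestService(MyPojo input) {
            EnrichedPojo out = new EnrichedPojo();
            out.userId = input.userId;
            out.profile = "placeholder"; // stand-in for the actual HTTP call
            return out;
        }
    }
}

// usage:
// AsyncDataStream.unorderedWait(events, new RestEnricher(), 1000, TimeUnit.MILLISECONDS, 100);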
Hello,
I'm trying to use Hive tables in the sql-client. How can I do this?
I have downloaded Blink from GitHub to be able to use catalogs in the YAML
file, but I can't run its sql-client using ./sql-client.sh embedded.
Can you please help me?
Regards
Bernadette Lazar
I think this would benefit from a FLIP that neatly sums up the options and
also gives us a point where we can vote and ratify a decision.
As a gut feeling, I like Option 3) the most. Initially I would have preferred
option 1) (because of a sense of API purity), but by now I think it’s g
Hi Flavio,
Nice to hear your ideas on Table API!
Could you be more specific about your requirements? A detailed scenario
would be quite helpful. For example, do you want to emit multiple records
through the collector, or do you want to use the timer?
BTW, the Table API introduced flatAggregate recently(
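For reference, a sketch modeled on the Top2 example from the docs; it emits
multiple records per group through the collector (the column names in the
usage comment are made up):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.table.functions.TableAggregateFunction;
import org.apache.flink.util.Collector;

public class Top2 extends TableAggregateFunction<Tuple2<Integer, Integer>, Top2.Acc> {

    public static class Acc {
        public Integer first = Integer.MIN_VALUE;
        public Integer second = Integer.MIN_VALUE;
    }

    @Override
    public Acc createAccumulator() {
        return new Acc();
    }

    public void accumulate(Acc acc, Integer v) {
        if (v > acc.first) {
            acc.second = acc.first;
            acc.first = v;
        } else if (v > acc.second) {
            acc.second = v;
        }
    }

    // Emits up to two records per group: (value, rank).
    public void emitValue(Acc acc, Collector<Tuple2<Integer, Integer>> out) {
        if (acc.first != Integer.MIN_VALUE) out.collect(Tuple2.of(acc.first, 1));
        if (acc.second != Integer.MIN_VALUE) out.collect(Tuple2.of(acc.second, 2));
    }
}

// usage:
// tEnv.registerFunction("top2", new Top2());
// Table result = input
//     .groupBy("userId")
//     .flatAggregate("top2(score) as (v, rank)")
//     .select("userId, v, rank");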
So when we say a sink is at least once, it's because internally it's not
checking any kind of state and it sends what it has regardless, correct?
Cause I will build a sink that calls stored procedures.
On Sun., Jul. 7, 2019, 4:03 p.m. Konstantin Knauf,
wrote:
> Hi John,
>
> in case of a failure
Hi Flavio,
yes, I agree. This check is a bit confusing. The initial reason for it
was that sql.Time, sql.Date, and sql.Timestamp extend from util.Date as
well. But handling it as a generic type, as Jingsong mentioned, might be
the better option in order to write custom UDFs to handle them.
Reg
Hi Niels,
the type handling has evolved over the years and is a bit messed up
across the different layers. You are almost right with your last
assumption "Is the provided serialization via TypeInformation 'skipped'
during startup and only used during runtime?". The type extraction
returns a Kr
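For illustration, a minimal sketch of pinning the type explicitly with
returns() so the extractor's fallback is overridden (MyPojo and the input
elements are placeholders of mine):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ReturnsSketch {
    public static class MyPojo {
        public String name;
        public MyPojo() {}
        public MyPojo(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<MyPojo> result = env.fromElements("a", "b")
                .map(MyPojo::new)
                // Overrides whatever the type extractor inferred for the lambda.
                .returns(TypeInformation.of(MyPojo.class));
        result.print();
        env.execute("returns-sketch");
    }
}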
Hi,
Context:
I'm looking into making the Google (BigQuery-compatible) HyperLogLog++
implementation available in Flink, because it is simply an Apache-licensed
open source library
- https://issuetracker.google.com/issues/123269269
- https://issues.apache.org/jira/browse/BEAM-7013
- https://github.com
Hi to all,
from what I understood, a ProcessWindowFunction can only be used in the
Streaming API.
Is there any plan to port it to the Table API as well (in the near future)?
I'd like to do the equivalent of the following with the Table API:
final DataStream events = env.addSource(src);
events.filter(e -> e.getCode()
Of course there are java.sql.* and java.time.* in Java, but it's also true
that most of the time the POJOs you read come from an external (Maven) lib,
and if such POJOs contain date fields you have to create a local version of
that POJO with the java.util.Date fields replaced by a java.sql.Date
ve
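For illustration, a sketch of the workaround being described (ExternalEvent
and LocalEvent are made-up names):

import java.util.Date;

public class DateMirror {
    // Imagine this one comes from a Maven dependency you cannot change.
    public static class ExternalEvent {
        public String id;
        public Date created;
    }

    // Local mirror with a Table-API-friendly timestamp type.
    public static class LocalEvent {
        public String id;
        public java.sql.Timestamp created;
    }

    public static LocalEvent toLocal(ExternalEvent e) {
        LocalEvent out = new LocalEvent();
        out.id = e.id;
        out.created = new java.sql.Timestamp(e.created.getTime());
        return out;
    }
}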
Flink 1.9's Blink runner will support it as a generic type,
but I don't recommend it. After all, there are java.sql.Date and java.time.* in
Java.
Best, JingsongLee
--
From: Flavio Pompermaier
Send Time: July 8, 2019 (Monday) 15:40
To: JingsongL
Hello Konstantin,
Thank you for your answer. I'll clarify our problem a bit, as we actually
have a clear understanding of it now 😊.
We have 2 Kafka topics from 2 different datacenters (each with its own
watermarks; we have a watermark message injected into each of them).
We replicate these
Hi Zhechao,
Usually, if you can, share your full exception stack trace and where you are
trying to capture exceptions in your code (preferably by posting your
relevant code directly). That will help us understand and locate the issue
you encounter.
Best,
Haibo
At 2019-07-08 14:11:22, "Zhecha
I think I could do that for this specific use case, but isn't this a big
limitation of the Table API?
I think that java.util.Date should be a first-class citizen in Flink.
Best,
Flavio
On Mon, Jul 8, 2019 at 4:06 AM JingsongLee wrote:
> Hi Flavio:
> Looks like you use java.util.Date in your POJO. Now