Luka Jurukovski created FLINK-10195:
---
Summary: RabbitMQ Source With Checkpointing Doesn't Backpressure Correctly
Key: FLINK-10195
URL: https://issues.apache.org/jira/browse/FLINK-10195
Project: Flink
Alexis Sarda-Espinosa created FLINK-10194:
---
Summary: Serialization issue with Scala's AggregateDataSet[Row]
Key: FLINK-10194
URL: https://issues.apache.org/jira/browse/FLINK-10194
Project: Flink
Gary Yao created FLINK-10193:
---
Summary: Default RPC timeout is used when triggering savepoint via JobMasterGateway
Key: FLINK-10193
URL: https://issues.apache.org/jira/browse/FLINK-10193
Project: Flink
Hi Wangsan,
the behavior of DataStreamRel#translateToPlan is more or less intended.
That's why you call `toAppendStream` on the table environment: because
you add your pipeline to the environment (from source to current operator).
However, the explain() method should not cause those side-effects.
Hi Timo,
I think this may not only affect the explain() method. The method
DataStreamRel#translateToPlan is called whenever we need to translate a
FlinkRelNode into a DataStream or DataSet, and it adds the desired operators
to the execution environment. By side effect, I mean that if we call
DataStreamRel#translateToPlan
Fabian Hueske created FLINK-10192:
---
Summary: SQL Client table visualization mode does not update correctly
Key: FLINK-10192
URL: https://issues.apache.org/jira/browse/FLINK-10192
Project: Flink
Till Rohrmann created FLINK-10191:
---
Summary: WindowCheckpointingITCase.testAggregatingSlidingProcessingTimeWindow
Key: FLINK-10191
URL: https://issues.apache.org/jira/browse/FLINK-10191
Project: Flink
Hi Hequn,
Thanks for this discussion.
Personally, I’m also in favor of option 3. There are two reasons for that:
(1) A proctime-based upsert table source does not guarantee the records’ order,
which means empty delete messages may not really be "empty". Simply discarding
them may cause semantic problems.
Hi,
this sounds like a bug to me. Maybe the explain() method is not
implemented correctly. Can you open an issue for it in Jira?
Thanks,
Timo
On 21.08.18 at 15:04, wangsan wrote:
Hi all,
I noticed that DataStreamRel#translateToPlan is non-idempotent, and that
may cause the execution plan to differ from what we expect.
Great news. Thanks a lot for managing the release Chesnay and to all who
have contributed to this release.
Cheers,
Till
On Tue, Aug 21, 2018 at 2:12 PM Chesnay Schepler wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.5.3, which is the third bugfix
Thanks Kostas for the reply,
But as long as there are distributions like Cloudera, whose latest version
(5.15) is based on Hadoop 2.6, I and many other Cloudera users are obliged
to use an older HDFS version.
Moreover, I read a discussion on the Cloudera forum regarding moving to a
more recent version of Hadoop, and Cloudera
Sergei Poganshev created FLINK-10190:
---
Summary: Unable to use custom endpoint in Kinesis producer
Key: FLINK-10190
URL: https://issues.apache.org/jira/browse/FLINK-10190
Project: Flink
Hi,
Thanks for starting this discussion Hequn!
These are tricky questions.
1) Handling empty deletes in UpsertSource.
I think forwarding empty deletes would only have a semantic difference if
the output is persisted in a non-empty external table, e.g., a Cassandra
table with entries.
If we wou
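For illustration, here is a minimal sketch of the "discard" behavior under discussion (this is not the actual UpsertSource implementation; the operator name and the Tuple2<Boolean, Row> change encoding, with true = upsert and false = delete, are assumptions for the example):

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.types.Row;
    import org.apache.flink.util.Collector;

    // Hypothetical filter: tracks which keys have been seen and drops
    // deletes for keys that were never inserted ("empty" deletes).
    public class EmptyDeleteFilter
            extends RichFlatMapFunction<Tuple2<Boolean, Row>, Tuple2<Boolean, Row>> {

        private transient ValueState<Boolean> seen;

        @Override
        public void open(Configuration parameters) {
            seen = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("seen", Types.BOOLEAN));
        }

        @Override
        public void flatMap(Tuple2<Boolean, Row> change,
                            Collector<Tuple2<Boolean, Row>> out) throws Exception {
            if (change.f0) {
                seen.update(true);       // remember that this key exists
                out.collect(change);     // forward the upsert
            } else if (Boolean.TRUE.equals(seen.value())) {
                out.collect(change);     // forward a delete for a known key
            }
            // otherwise: drop the delete for a never-seen key
        }
    }

Applied after a keyBy on the upsert key, this implements discarding; as noted above, under proctime ordering a dropped message may not actually be "empty", which is exactly the risk of this option.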
Hi Artsem,
Till is correct in that getting rid of the "valid-length" file was a design
decision for the new StreamingFileSink from the beginning. The motivation was
that users reported it was very cumbersome to use.
In general, when the BucketingSink gets deprecated, I
Hi all,
I noticed that DataStreamRel#translateToPlan is non-idempotent, and that
may cause the execution plan to differ from what we expect. Every time we call
DataStreamRel#translateToPlan (in TableEnvironment#explain,
TableEnvironment#writeToSink, etc.), we add the same operators to the execution
environment.
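For anyone trying to reproduce the behavior, a minimal sketch (assuming the Flink 1.5/1.6 Java Table API; the pipeline itself is made up) of how two translations register the same operators twice:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class TranslateTwice {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

            Table table = tEnv.fromDataStream(
                    env.fromElements(Tuple2.of("a", 1)), "word, cnt");

            // First call: explain() translates the plan and, as a side
            // effect, adds the source and conversion operators to 'env'.
            System.out.println(tEnv.explain(table));

            // Second call: translates the same plan again, adding the
            // same operators a second time.
            tEnv.toAppendStream(table, Row.class).print();

            // The execution plan now contains the pipeline twice.
            System.out.println(env.getExecutionPlan());
        }
    }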
Now I am able to write checkpoints but cannot restore from them:
java.lang.NoClassDefFoundError: com/google/cloud/hadoop/gcsio/GoogleCloudStorageImpl$6
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.open(GoogleCloudStorageImpl.java:666)
    at com.google.cloud.hadoop.gcsio.Goo
The Apache Flink community is very happy to announce the release of
Apache Flink 1.5.3, which is the third bugfix release for the Apache
Flink 1.5 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
Thanks for the reply, Till!
By the way, if Flink is going to support compatibility with Hadoop 2.6, I
don't see another way to achieve it.
As I mentioned before, one popular distribution, Cloudera, is still based on
Hadoop 2.6, and it would be very sad if Flink dropped support for it.
I really want to help the Flink community to
Hi Dominik,
The SQL Client supports the same subset of SQL that you get with Java /
Scala embedded queries.
The documentation [1] covers all supported operations.
There are some limitations because certain operators require special time
attributes (row time or processing time attributes) which are
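As a small illustration of that constraint (a sketch assuming the Flink 1.5/1.6 embedded Java Table API; all field names are invented), a time-windowed query only works once the table exposes a time attribute:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class TimeAttributeExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
            StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

            DataStream<Tuple2<String, Long>> clicks = env
                    .fromElements(Tuple2.of("alice", 1000L), Tuple2.of("bob", 2000L))
                    .assignTimestampsAndWatermarks(
                            new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                                @Override
                                public long extractAscendingTimestamp(
                                        Tuple2<String, Long> click) {
                                    return click.f1;
                                }
                            });

            // 'rtime.rowtime' appends an event-time attribute backed by the
            // records' timestamps; without it, TUMBLE below is rejected.
            Table t = tEnv.fromDataStream(clicks, "userName, clickTime, rtime.rowtime");
            tEnv.registerTable("Clicks", t);

            Table result = tEnv.sqlQuery(
                    "SELECT userName, COUNT(*) AS cnt FROM Clicks " +
                    "GROUP BY userName, TUMBLE(rtime, INTERVAL '1' HOUR)");
        }
    }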
I'm happy to announce that we have unanimously approved this release.
There are 4 approving votes, 3 of which are binding:
* Till (binding)
* vino (non-binding)
* Gordon (binding)
* Chesnay (binding)
There are no disapproving votes.
Thanks everyone!
On 21.08.2018 13:53, Chesnay
+1
On 20.08.2018 17:58, Tzu-Li (Gordon) Tai wrote:
+1 (binding)
- verified checksum and gpg files
- verified source compiles (tests enabled), Scala 2.11 / without Hadoop
- e2e tests pass locally
- source release contains no binaries
- no missing release artifacts in staging area
- reviewed anno
How would that look in detail given that I am a user who wants to start a
job cluster?
On Tue, Aug 21, 2018 at 12:09 PM Renjie Liu wrote:
> But it's also dangerous if users need to submit jobs via the same cluster
> configuration. I mean we can use another script for starting job clusters,
> so that cluster administrators can separate environments.
Hi Artsem,
if I recall correctly, we explicitly decided not to support the valid-length
files with the new StreamingFileSink because they are really hard for users
to handle. I've pulled Klou into this conversation; he is more knowledgeable
and can give you a bit more advice.
Cheers,
Hi Dominik,
I think such a list would be really helpful. I've pulled Timo and Fabian
into this conversation because they probably know more.
Cheers,
Till
On Mon, Aug 20, 2018 at 12:43 PM Dominik Wosiński wrote:
> Hey,
>
> Do we have any list of current limitations of the SQL Client available
> somewhere?
big +1 for this feature. A tool to get your state out of and into Flink
will be tremendously helpful.
On Mon, Aug 20, 2018 at 10:21 AM Aljoscha Krettek
wrote:
> +1 I'd like to have something like this in Flink a lot!
>
> > On 19. Aug 2018, at 11:57, Gyula Fóra wrote:
> >
> > Hi all!
> >
> > Tha
Hiroaki Yoshida created FLINK-10189:
---
Summary: FindBugs warnings: Inefficient use of keySet iterator instead of entrySet iterator
Key: FLINK-10189
URL: https://issues.apache.org/jira/browse/FLINK-10189
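For context, the flagged pattern (FindBugs' WMI_WRONG_MAP_ITERATOR) and its fix look roughly like this; illustrative Java, not the actual Flink code in question:

    import java.util.HashMap;
    import java.util.Map;

    public class KeySetVsEntrySet {
        public static void main(String[] args) {
            Map<String, Integer> counts = new HashMap<>();
            counts.put("a", 1);

            // Flagged: each iteration pays for an extra hash lookup via get().
            for (String key : counts.keySet()) {
                System.out.println(key + "=" + counts.get(key));
            }

            // Preferred: entrySet() yields key and value together.
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }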
But it's also dangerous if users need to submit jobs via the same cluster
configuration. I mean we can use another script for starting job clusters,
so that cluster administrators can separate environments.
On Tue, Aug 21, 2018 at 5:53 PM Till Rohrmann wrote:
> Hi Renjie,
>
> do you mean to not supp
Hi Renjie,
do you mean to not support deploying Flink clusters via
`FLINK_HOME/bin/flink -m yarn-cluster`? I think starting a dedicated job
cluster for a given job that way is quite useful.
Cheers,
Till
On Thu, Aug 9, 2018 at 4:33 AM Renjie Liu wrote:
> Hi:
> Flink 1.5.0 brings new deployment
Hi Niels,
Your conclusions are accurate, and I also agree that the combination of the
KeyedSerializationSchema / providing partitioners, etc. is all a bit awkward
in its current state.
As for the proposed solutions, I personally disagree with 1), since key
partitioning, IMO, shou
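For readers following the thread, a minimal sketch of the interface being discussed (as it exists around Flink 1.5/1.6; the Tuple2 element type and class name are invented for the example) shows how key bytes, value bytes, and target topic all live in one schema, while partitioning is configured separately on the producer:

    import java.nio.charset.StandardCharsets;

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

    public class WordCountSchema
            implements KeyedSerializationSchema<Tuple2<String, Integer>> {

        @Override
        public byte[] serializeKey(Tuple2<String, Integer> element) {
            // The key bytes influence Kafka partitioning.
            return element.f0.getBytes(StandardCharsets.UTF_8);
        }

        @Override
        public byte[] serializeValue(Tuple2<String, Integer> element) {
            return element.f1.toString().getBytes(StandardCharsets.UTF_8);
        }

        @Override
        public String getTargetTopic(Tuple2<String, Integer> element) {
            return null;  // null = use the producer's default topic
        }
    }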