Hi shaoxuan:
Is the Table API's StreamSQL grammar compatible with Calcite's StreamSQL
grammar?
1. In Calcite, tumbling (and hopping) windows are implemented with the TUMBLE
or HOP functions, and these functions must be used with GROUP BY, like this:
SELECT
TUMBLE_END(rowtime, INTERVAL '30' MINUTE, TIME '0:
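For reference, a complete query of this shape in Calcite's streaming SQL could
look like the sketch below (the table Orders, the column productId, and the
30-minute interval are made up for illustration; the key point is that a window
auxiliary function such as TUMBLE_END in the SELECT list must be paired with a
matching TUMBLE(...) in the GROUP BY clause):

SELECT STREAM
  TUMBLE_END(rowtime, INTERVAL '30' MINUTE) AS windowEnd,
  productId,
  COUNT(*) AS cnt
FROM Orders
GROUP BY TUMBLE(rowtime, INTERVAL '30' MINUTE), productId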
Hi Sunny,
please avoid crossposting to all mailing lists.
The dev@f.a.o list is for issues related to the development of Flink itself,
not the development of Flink applications.
The error message is actually quite descriptive. Flink does not find the
JDBC driver class.
You need to add it to the classpath.
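A quick way to verify is to load the driver class by name; this throws a
ClassNotFoundException whenever the driver jar is missing from the classpath.
A minimal sketch (the driver class name below is a placeholder -- substitute
the one from your error message):

public class DriverCheck {
    public static void main(String[] args) throws Exception {
        // Throws ClassNotFoundException if the driver jar is not on the classpath.
        Class.forName("com.mysql.jdbc.Driver"); // placeholder driver class name
        System.out.println("JDBC driver found on the classpath");
    }
}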
Hello,
Has anyone had a chance to look into this? I am currently working on the
problem but I have minimal understanding of how the internal Flink Python
API works; any expertise would be greatly appreciated.
Thank you very much!
Geoffrey
On Sat, Oct 8, 2016 at 1:27 PM Geoffrey Mon wrote:
> H
Hi Guys,
I am facing a JDBC error; could someone please advise me on it?
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
$ scala -version
Scala code runner version 2.11.
Zhenzhong Xu created FLINK-4820:
---
Summary: Slf4j / log4j version upgrade to support dynamic change
of log levels.
Key: FLINK-4820
URL: https://issues.apache.org/jira/browse/FLINK-4820
Project: Flink
Zhenzhong Xu created FLINK-4819:
---
Summary: Checkpoint metadata+data inspection tool (view / update)
Key: FLINK-4819
URL: https://issues.apache.org/jira/browse/FLINK-4819
Project: Flink
Issue Ty
Stephan Ewen created FLINK-4818:
---
Summary: RestartStrategy should track how many failed restore
attempts the same checkpoint has and fall back to earlier checkpoints
Key: FLINK-4818
URL: https://issues.apache.org/ji
Stephan Ewen created FLINK-4817:
---
Summary: Checkpoint Coordinator should be called to restore state
with a specific checkpoint ID
Key: FLINK-4817
URL: https://issues.apache.org/jira/browse/FLINK-4817
Pr
Stephan Ewen created FLINK-4816:
---
Summary: Executions from "DEPLOYING" should retain restored
checkpoint information
Key: FLINK-4816
URL: https://issues.apache.org/jira/browse/FLINK-4816
Project: Flink
Stephan Ewen created FLINK-4815:
---
Summary: Automatic fallback to earlier checkpoints when checkpoint
restore fails
Key: FLINK-4815
URL: https://issues.apache.org/jira/browse/FLINK-4815
Project: Flink
Hi Fabian, Timo, and Jark. Thanks for kicking off this FLIP. This is a really
great and promising proposal. I have a few comments on the "window" operator
proposed in this FLIP (I hope it is not too late to bring this up). First,
a window is not always needed for stream aggregation. There
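To illustrate the point about aggregation without windows, a hedged sketch in
SQL terms, with a made-up Clicks table: a continuous, non-windowed aggregate
simply groups by a key and updates its result as each new row arrives, with no
window declaration at all:

SELECT user, COUNT(*) AS cnt
FROM Clicks
GROUP BY user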
This vote has passed with 5 binding +1 and 1 non-binding +1 vote.
Thanks to everyone who was involved. I'll go ahead and finalize and
package this release.
+1s:
Robert Metzger (binding)
Maximilian Michels (binding)
Fabian Hueske (binding)
Stephan Ewen (binding)
Neelesh Salian (non-binding)
Til
+1 (binding)
- mvn clean verify for (Scala 2.10, Hadoop 2.3.0), (Scala 2.11, Hadoop
2.4.1), (Scala 2.10, Hadoop 2.6.3), (Scala 2.11, Hadoop 2.7.2)
- Ran examples using the FliRRT tool on a multi-node cluster.
Cheers,
Till
On Wed, Oct 12, 2016 at 5:32 PM, Neelesh Salian
wrote:
> + 1 (non-binding)
Ufuk Celebi created FLINK-4814:
--
Summary: Remove extra storage location for externalized checkpoint
metadata
Key: FLINK-4814
URL: https://issues.apache.org/jira/browse/FLINK-4814
Project: Flink
Thanks for the feedback Konstantin!
Good to hear that.
As far as the Trigger DSL is concerned,
it is not currently in the master but it will come soon.
Kostas
> On Oct 12, 2016, at 6:05 PM, Konstantin Knauf
> wrote:
>
> Hi all,
>
> thank you for looping me in. Because of the memory leak we
Hi all,
thank you for looping me in. Because of the memory leak we first
experienced, we built a work-around that does not need to delete
timers, and we are still using it. So for us, I think, this would currently
not be a problem. Nevertheless, I think it is a strong limitation if
custom trigge
Regarding S3 and the Rolling/BucketingSink, we've seen data loss when
resuming from checkpoints, as S3 FileSystem implementations flush to
temporary files while the RollingSink expects a direct flush to in-progress
files. Because there is no such thing as "flush and resume writing" to S3,
I don't k
+ 1 (non-binding)
- For Hadoop 2.x: mvn clean install -DskipTests -Dhadoop.version=2.6.3
- For Hadoop 1.x: mvn clean install -DskipTests -Dhadoop.profile=1
- For Scala 2.11: tools/change-scala-version.sh 2.11, mvn clean install
-DskipTests
Ran Examples for Batch and Streaming after initiating th
Thank you Robert!
On Wed, Oct 12, 2016 at 2:55 AM Robert Metzger wrote:
> I added a JIRA for this feature request:
> https://issues.apache.org/jira/browse/FLINK-4812
>
> On Fri, Sep 30, 2016 at 6:13 PM, dan bress wrote:
>
> > Awesome! It would definitely help me troubleshoot lagging watermarks
+1 to add the source to Bahir
That is an easier way to iterate fast on this and release quickly
On Wed, Oct 12, 2016 at 10:38 AM, Robert Metzger
wrote:
> Just a quick update on this one: The Bahir community has already started
> discussing the first bahir-flink release. I expect it to happen soon.
So far, we have not introduced location constraints.
The reason is that this goes a bit against the paradigm of location
transparency, which is necessary for failover, dynamically adjusting
parallelism (which is a feature being worked on), etc.
On Wed, Oct 12, 2016 at 10:35 AM, Robert Metzger
wro
+1 (binding)
- Verified that no changes to LICENSE or NOTICE are necessary since the last
release
- mvn clean verify for Scala 2.11, Hadoop 2.6.3 including YARN tests
On Wed, Oct 12, 2016 at 3:22 PM, Fabian Hueske wrote:
> +1 to release (binding)
>
> - checked hashes and signatures
> - checked d
Hi all,
This thread has been dormant for some time now.
Given that this change may affect user code, I am sending this as a
reminder that the discussion is still open and to re-invite anyone who
may be affected to participate.
I would suggest leaving it open until the end of next week and then,
+1 to release (binding)
- checked hashes and signatures
- checked diffs against 1.1.2: no dependencies added or modified
- successfully built Flink from source archive
- mvn clean install (Scala 2.10)
Cheers, Fabian
2016-10-12 14:05 GMT+02:00 Maximilian Michels :
> +1 (binding)
>
> - scanned
Robert Metzger created FLINK-4813:
-
Summary: Having flink-test-utils as a dependency outside Flink
fails the build
Key: FLINK-4813
URL: https://issues.apache.org/jira/browse/FLINK-4813
Project: Flink
Hi Greg,
at the moment the serialization of savepoints costs the same as the
serialization of checkpoints, because they use the same serialization
logic. In fact, with Ufuk's changes [1], a savepoint is a checkpoint with
special properties. However, in the future we will probably have different
se
Sorry, I haven't followed this development, but roughly how much more
costly is the new serialization for savepoints?
On Wed, Oct 12, 2016 at 5:51 AM, SHI Xiaogang
wrote:
> Hi all,
>
> Currently, savepoints are exactly completed checkpoints, and Flink
> provides commands (save/run) to allow
+1 (binding)
- scanned commit history for changes
- ran "mvn clean install -Dhadoop.version=2.6.0 -Pinclude-yarn-tests"
successfully
- started cluster via "./bin/start-cluster.sh"
- ran batch and streaming examples via web interface and CLI
- used web interface for monitoring
- ran example job wit
Hi Alexander,
Welcome to the Flink community.
I gave you contributor permissions for JIRA (you can assign issues to
yourself) and assigned the issue to you.
Looking forward to your contribution.
Best, Fabian
2016-10-12 13:14 GMT+02:00 Alexander Shoshin :
> Hi.
>
> I want to do something for fl
Hi.
I want to do something for Flink. I would like to try to resolve this issue
https://issues.apache.org/jira/browse/FLINK-4283, because I reproduced it in my
local repo. Could you assign it to me in JIRA?
Regards, Alexander Shoshin
I added a JIRA for this feature request:
https://issues.apache.org/jira/browse/FLINK-4812
On Fri, Sep 30, 2016 at 6:13 PM, dan bress wrote:
> Awesome! It would definitely help me troubleshoot lagging watermarks if i
> can see what watermark all my sources have seen. Thanks for looking into
> t
Robert Metzger created FLINK-4812:
-
Summary: Export currentLowWatermark metric also for sources
Key: FLINK-4812
URL: https://issues.apache.org/jira/browse/FLINK-4812
Project: Flink
Issue Type
Hi all,
Currently, savepoints are exactly completed checkpoints, and Flink
provides commands (save/run) to allow saving and restoring jobs. But in the
near future, savepoints will be very different from checkpoints because
they will have common serialization formats and allow recovery from majo
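For reference, the save/run flow mentioned above, sketched with placeholder
values for the job ID, savepoint path, and job jar:

bin/flink savepoint <jobID>
bin/flink run -s <savepointPath> <jarFile>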
Stephan Ewen created FLINK-4811:
---
Summary: Checkpoint Overview should list failed checkpoints
Key: FLINK-4811
URL: https://issues.apache.org/jira/browse/FLINK-4811
Project: Flink
Issue Type: Su
Stephan Ewen created FLINK-4810:
---
Summary: Checkpoint Coordinator should fail ExecutionGraph after
"n" unsuccessful checkpoints
Key: FLINK-4810
URL: https://issues.apache.org/jira/browse/FLINK-4810
Proj
Stephan Ewen created FLINK-4809:
---
Summary: Operators should tolerate checkpoint failures
Key: FLINK-4809
URL: https://issues.apache.org/jira/browse/FLINK-4809
Project: Flink
Issue Type: Sub-tas
Stephan Ewen created FLINK-4808:
---
Summary: Allow skipping failed checkpoints
Key: FLINK-4808
URL: https://issues.apache.org/jira/browse/FLINK-4808
Project: Flink
Issue Type: New Feature
Aff
Just a quick update on this one: The Bahir community has already started
discussing the first bahir-flink release. I expect it to happen soon.
I would really like to see the netty source in Bahir.
On Wed, Sep 28, 2016 at 3:18 PM, Stephan Ewen wrote:
> The Bahir-Flink stuff is fairly new - the first
Hi Mariano,
currently, there is nothing available in Flink to execute an operation on a
specific machine.
Regards,
Robert
On Wed, Sep 28, 2016 at 9:40 PM, Mariano Gonzalez <
mariano.gonza...@uptake.com> wrote:
> I need to load a PFA (portable format for analytics) that can be around 30
> GB an