Hi,
I was trying the QueryableState from the pull request
https://github.com/apache/flink/pull/2051
I am doing the following:
1. Make the stream queryable by calling
myKeyedStream.asQueryableState("my-state", myStateDescriptor)
2. Create a client that takes a job id, conf, query-name and key,
nu
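For readers following along, the producing side of step 1 can be sketched as below. This is only a sketch against the API proposed in that pull request; the stream and descriptor names are taken from the snippet above, and the exact return type may differ in the merged version.

```java
// Step 1 (job side): expose the keyed state under the name "my-state".
// Sketch only -- assumes the asQueryableState(...) signature from PR 2051.
QueryableStateStream<K, V> queryable =
    myKeyedStream.asQueryableState("my-state", myStateDescriptor);

// Step 2 (client side): a client is pointed at the JobManager and queries
// by job id, state name, and serialized key, receiving a future with the
// serialized value (client-side method names are illustrative here).
```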
Hi,
When we use RocksDB as state backend, how does the backend state get
updated after some elements are evicted from the window?
I don't see any update call being made to remove the element from the state
stored in RocksDB.
It looks like RocksDBListState only has get() and add() methods.
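This matches the observation above: with only get() and add() (plus the inherited clear()), removing a single evicted element means reading and rewriting the whole list. A Flink-free sketch of that pattern (all class names here are hypothetical stand-ins, not Flink source):

```java
import java.util.ArrayList;
import java.util.List;

/** In-memory stand-in for a list state that only supports get/add/clear. */
class InMemoryListState<T> {
    private final List<T> values = new ArrayList<>();
    Iterable<T> get() { return values; }
    void add(T value) { values.add(value); }
    void clear() { values.clear(); }
}

/** Evicts one element the only way a get/add/clear API allows:
 *  read everything, clear, and re-add the survivors. */
class Evictor {
    static <T> void evict(InMemoryListState<T> state, T toRemove) {
        List<T> survivors = new ArrayList<>();
        for (T v : state.get()) {
            if (!v.equals(toRemove)) survivors.add(v);
        }
        state.clear();
        for (T v : survivors) state.add(v);
    }
}
```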
I added the section "Types, TypeInformation, Serialization" to the
"Application Development" docs.
I think that is an important enough aspect to warrant separate docs.
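As a pointer for readers of that section, the central API it documents can be shown in two lines (a minimal example of Flink's TypeInformation/TypeHint pattern, not copied from the docs themselves):

```java
// The anonymous TypeHint subclass preserves the full generic signature,
// which a plain class literal (Tuple2.class) would lose to erasure.
TypeInformation<Tuple2<String, Integer>> info =
    TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {});
```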
On Mon, Jul 18, 2016 at 3:36 PM, Till Rohrmann wrote:
> +1 for the FLIP and making streaming the common case. Very good proposal
Aljoscha Krettek created FLINK-4238:
---
Summary: Only allow/require query for Tuple Stream in CassandraSink
Key: FLINK-4238
URL: https://issues.apache.org/jira/browse/FLINK-4238
Project: Flink
No, there is no JIRA issue for now. It's just not theoretically 100% safe,
but the way we use it for now is not problematic.
On Wed, 20 Jul 2016 at 16:07 Maximilian Michels wrote:
> Is there a JIRA issue for this?
>
> On Mon, Jul 18, 2016 at 12:15 PM, Aljoscha Krettek
> wrote:
> > Ah I see, Stephan a
Is there a JIRA issue for this?
On Mon, Jul 18, 2016 at 12:15 PM, Aljoscha Krettek wrote:
> Ah I see, Stephan and I had a quick chat and it's for cases where there are
> 42s around the edges of the key/namespace.
>
> On Mon, 18 Jul 2016 at 11:51 Aljoscha Krettek wrote:
>
>> In which cases is it
I will answer Radu's private e-mail here:
Sorry to bother you ... I am still running into the same problem and I cannot
figure out why.
I have downloaded and recompiled the latest Flink 1.1 branch. I also tried
using the jar snapshot from the website, but I get the same error.
What I am doing:
I am
Kostas Kloudas created FLINK-4237:
-
Summary: ClassLoaderITCase.testDisposeSavepointWithCustomKvState
fails due to Timeout Futures
Key: FLINK-4237
URL: https://issues.apache.org/jira/browse/FLINK-4237
Gary Yao created FLINK-4236:
---
Summary: Flink Dashboard stops showing list of uploaded jars if
main method cannot be looked up
Key: FLINK-4236
URL: https://issues.apache.org/jira/browse/FLINK-4236
Project: Flink
Till Rohrmann created FLINK-4235:
Summary: ClassLoaderITCase.testDisposeSavepointWithCustomKvState
timed out on Travis
Key: FLINK-4235
URL: https://issues.apache.org/jira/browse/FLINK-4235
Project: Flink
Till Rohrmann created FLINK-4234:
Summary: CassandraConnectorTest causes travis build to time out
Key: FLINK-4234
URL: https://issues.apache.org/jira/browse/FLINK-4234
Project: Flink
Issue Ty
You can always find the latest nightly snapshot version here:
http://flink.apache.org/contribute-code.html (at the end of the page)
On 20/07/16 at 14:08, Radu Tudoran wrote:
Hi,
I am also using v1.1... with Eclipse.
I will re-download the source and build it again.
Is there also a binary vers
Hi,
I am also using v1.1... with Eclipse.
I will re-download the source and build it again.
Is there also a binary version of 1.1 (I would like to test against that as
well), particularly if the issue persists?
Otherwise I am downloading and building the version from the main Git branch...
I also tried it again with the latest 1.1-SNAPSHOT and everything works.
This Maven issue has been solved in FLINK-4111.
On 20/07/16 at 13:43, Suneel Marthi wrote:
I am not seeing an issue with this code, Radu; this is from the current
1.1-SNAPSHOT.
This is what I have and it works (running from
I am not seeing an issue with this code, Radu; this is from the current
1.1-SNAPSHOT.
This is what I have and it works (running from within IntelliJ and not the CLI):
List<Tuple3<Long, String, Integer>> input = new ArrayList<>();
input.add(new Tuple3<>(3L,"test",1));
input.add(new Tuple3<>(5L,"test2",2));
StreamExecutionEnvironment
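The quoted snippet is cut off after StreamExecutionEnvironment. A plausible continuation, purely as a sketch and not Suneel's actual code, would be:

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromCollection(input)  // the List<Tuple3<Long, String, Integer>> built above
   .print();
env.execute("tuple3-smoke-test");  // hypothetical job name
```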
Hi Chen!
If I understand correctly, you want to implement a custom way of triggering
checkpoints, based on messages in the input message queue (for example
based on Kafka events)? Basically to trigger a checkpoint when you have
received a special message through each Kafka partition?
Please let me know if
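To make the alignment idea concrete, here is a Flink-free sketch (pure Java, all names hypothetical): a checkpoint fires only once the special marker has been seen on every partition.

```java
import java.util.HashSet;
import java.util.Set;

/** Tracks marker arrivals per partition; "fires" once all partitions reported. */
class MarkerAligner {
    private final int numPartitions;
    private final Set<Integer> seen = new HashSet<>();

    MarkerAligner(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    /** Returns true exactly when the marker has now arrived on every partition. */
    boolean onMarker(int partition) {
        seen.add(partition);
        if (seen.size() == numPartitions) {
            seen.clear(); // reset for the next checkpoint round
            return true;
        }
        return false;
    }
}
```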
Hi,
I think that was just related to the DataSet API. If I'm not mistaken,
changing the parallelism should work after a "partitionCustom()".
Cheers,
Aljoscha
On Tue, 19 Jul 2016 at 19:25 Jaromir Vanek wrote:
> Aljoscha Krettek-2 wrote
> > Hi,
> > it should be possible to set the parallelism on t
Hi,
As far as I have managed to isolate it, the cause of the error has to do with
a mismatch in the function call
val traitDefs:ImmutableList[RelTraitDef[_ <: RelTrait]] = config.getTraitDefs
I am not sure though why it is not working, because when I tried to make a
dummy test by creating
CC Timo who I know is working on Table API and SQL.
On Tue, Jul 19, 2016 at 6:14 PM, Radu Tudoran wrote:
> Hi,
>
> I am not sure that this problem was solved. I am using the latest pom to
> compile the Table API.
>
> I was trying to run a simple program.
>
>
> > ArrayList<Tuple3<Long, String, Integer>> input = new ArrayList St
I think it looks like Beam rather than Hadoop :)
What Stephan meant was that he wanted a dedicated output method in the
ProcessWindowFunction. I agree with Aljoscha that we shouldn't expose
the collector.
On Tue, Jul 19, 2016 at 10:45 PM, Aljoscha Krettek wrote:
> You mean keep the Collector? I