Hi,
If you want to play with it, you can find the source and basic documentation
here: https://github.com/ottogroup/flink-spector.
The framework is feature complete for now. At the moment I'm working on
exposing more functionality to the user, making the DSL more intuitive,
and ScalaTest support.
Hi Alex,
How's your infra coming along? I'd love to up my unit testing game with
your improvements :)
-n
On Mon, Nov 23, 2015 at 12:20 AM, lofifnc wrote:
Hi Nick,
This is easily achievable using the framework I provide.
createDataStream(Input input) does actually return a DataStreamSource.
And the call to assertStream(DataStream datastream, OutputMatcher
matcher) just attaches a TestSink to the datastream, but you can create
the test sink manually.
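For illustration, the two calls might combine like this (a minimal sketch;
the input and matcher construction are assumptions made for the example,
not taken from the flink-spector documentation):

    // Sketch only: wiring createDataStream/assertStream together.
    // "input" (the Input argument) and "matcher" (an OutputMatcher) are
    // assumed to have been built beforehand.
    DataStreamSource<Integer> source = createDataStream(input);
    DataStream<Integer> doubled = source.map(x -> x * 2);
    assertStream(doubled, matcher); // attaches a TestSink behind the scenes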
Very interesting, Alex!
One other thing I find useful in building data flows is using "builder"
functions that hide the details of wiring up specific plumbing on generic
input parameters. For instance, void wireFoo(DataSource source,
SinkFunction sink) { ... }. It would be great to have test tools …
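A minimal sketch of that builder idea (wireFoo comes from the message
above; the pipeline body and the CollectSink name are invented for
illustration):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.sink.SinkFunction;

    public class FooPipeline {
        // Production code and tests share this wiring; only the concrete
        // source and sink differ between the two.
        public static void wireFoo(DataStream<String> source,
                                   SinkFunction<String> sink) {
            source
                .map(String::toUpperCase) // stand-in for the real flow
                .addSink(sink);
        }
    }

In a test this lets you plug in a finite in-memory source and a collecting
sink, e.g. FooPipeline.wireFoo(env.fromElements("a", "b"), new
CollectSink()), while production passes the real source and sink.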
Hi,
I'm currently working on improving the testing process for Flink streaming
applications.
I have written a test runtime that takes care of execution, collecting the
output, and applying a verifier to it. The runtime is able to provide test
sources and sinks that run in parallel.
On top of that I …
Please see the above gist: my test makes no assertions until after the
env.execute() call. Adding setParallelism(1) to my sink appears to
stabilize my test. Indeed, very helpful. Thanks a lot!
-n
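For anyone hitting the same race, the change is a one-liner (CollectSink
standing in for whatever test sink is used):

    // A single sink instance avoids races between parallel sink subtasks
    // when the test asserts on the collected results afterwards.
    stream.addSink(new CollectSink()).setParallelism(1);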
On Wed, Nov 18, 2015 at 9:15 AM, Stephan Ewen wrote:
Okay, I think I misunderstood your problem.
Usually you can simply execute tests one after another by waiting until
"env.execute()" returns. The streaming jobs terminate by themselves once
the sources reach the end of the stream (finite streams are supported that
way), but make sure all records flow through …
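A self-contained sketch of that pattern (the CollectSink class here is
illustrative, not something Flink ships):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.SinkFunction;

    public class FiniteStreamTest {

        // Illustrative test sink: collects results into a shared list.
        public static class CollectSink implements SinkFunction<Integer> {
            static final List<Integer> VALUES =
                    Collections.synchronizedList(new ArrayList<>());

            @Override
            public void invoke(Integer value) {
                VALUES.add(value);
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // fromElements is a finite source, so the job terminates on its own.
            env.fromElements(1, 2, 3)
               .map(x -> x * 2)
               .addSink(new CollectSink());

            env.execute(); // blocks until the finite job has finished

            // Only assert after execute() has returned.
            if (CollectSink.VALUES.size() != 3) {
                throw new AssertionError("expected 3 records, got "
                        + CollectSink.VALUES.size());
            }
        }
    }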
Sorry Stephan, but I don't follow how global order applies in my case. I'm
merely checking the size of the sink results. I assume all tuples from a
given test invocation have sunk before the next test begins, which is
clearly not the case. Is there a way I can place a barrier in my tests to
ensure o…
There is no global order in parallel streams; it is something that
applications need to work with. We are thinking about adding operations to
introduce event-time order (at the cost of some delay), but those are only
plans at this point.
What I do in my tests is run the test streams in parallel, but …
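One way to live with that in assertions is to compare results
order-insensitively, for example by sorting both sides first (a generic
sketch reusing the CollectSink idea from this thread, not code from
Stephan's tests):

    // Snapshot the collected output and the expected records.
    List<Integer> actual = new ArrayList<>(CollectSink.VALUES);
    List<Integer> expected = new ArrayList<>(Arrays.asList(2, 4, 6));

    // Sorting makes the comparison independent of arrival order.
    Collections.sort(actual);
    Collections.sort(expected);
    if (!actual.equals(expected)) {
        throw new AssertionError("unexpected output: " + actual);
    }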
No, that is not possible, since you cannot access DataSets from inside UDFs.
And the select and where operations are translated into a filter operation
on a DataSet.
On Fri, Nov 6, 2015 at 6:03 PM, Nick Dimiduk wrote:
Promising observation, Till. Is it possible to access Table API's
select and where operators from within such a flatMap?
-n
On Fri, Nov 6, 2015 at 6:19 AM, Till Rohrmann wrote:
Hi Nick,
I think a flatMap operation which is instantiated with your list of
predicates should do the job. Thus, there shouldn’t be a need to dig deeper
than the DataStream for the first version.
Cheers,
Till
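A sketch of what that could look like (class name and generics are
invented for illustration):

    import java.util.List;
    import org.apache.flink.api.common.functions.FilterFunction;
    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.util.Collector;

    // A flatMap parameterized with a list of predicates: a record is
    // emitted once if any predicate matches, standing in for a chain of
    // select/where style filters.
    public class PredicateFlatMap<T> implements FlatMapFunction<T, T> {

        private final List<FilterFunction<T>> predicates;

        public PredicateFlatMap(List<FilterFunction<T>> predicates) {
            this.predicates = predicates; // the list must be serializable
        }

        @Override
        public void flatMap(T value, Collector<T> out) throws Exception {
            for (FilterFunction<T> predicate : predicates) {
                if (predicate.filter(value)) {
                    out.collect(value);
                    return;
                }
            }
        }
    }

Applying it is then a single stream.flatMap(new PredicateFlatMap<>(predicates))
instead of a chain of individual filter calls.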
On Fri, Nov 6, 2015 at 3:58 AM, Nick Dimiduk wrote:
Thanks Stephan, I'll check that out in the morning. Generally speaking, it
would be great to have some single-jvm example tests for those of us
getting started. Following the example of WindowingIntegrationTest is
mostly working, though reusing my single sink instance with its static
collection res…
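If the shared static collection is the culprit, a common fix is to clear it
before every test (assuming a sink shaped like the CollectSink sketched
above):

    import org.junit.Before;

    // Reset the shared sink state so tests don't see each other's records.
    @Before
    public void clearSink() {
        CollectSink.VALUES.clear();
    }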
Hey!
There is also a collect() sink in the "flink-streaming-contrib" project,
see here:
https://github.com/apache/flink/blob/master/flink-contrib/flink-streaming-contrib/src/main/java/org/apache/flink/contrib/streaming/DataStreamUtils.java
It should work well locally for testing. In that case you …
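Usage is roughly as follows (a sketch based on the class linked above;
older versions declare IOException on collect, so check the signature in
your Flink version):

    import java.util.Iterator;
    import org.apache.flink.contrib.streaming.DataStreamUtils;

    // Drains the stream back to the client; handy for local tests,
    // not meant for production jobs.
    Iterator<Integer> it = DataStreamUtils.collect(stream);
    while (it.hasNext()) {
        System.out.println(it.next());
    }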
Hi Robert,
It seems the <type> tag was what I needed. Though it also looks like the
test jar has an undeclared dependency. In the end, the following allowed me
to use TestStreamEnvironment for my integration test. Thanks a lot!
-n

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-core</artifactId>
      <version>${flink.version}</version>
      …
Hi Nick,
we are usually publishing the test artifacts. Can you try and replace the
<type> tag by test-jar:

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-core</artifactId>
      <version>${flink.version}</version>
      <type>test-jar</type>
      <scope>test</scope>
    </dependency>
On Thu, Nov 5, 2015 at 8:18 PM, Nick Dimiduk wrote:
> Hello,
>
> I'm attempting integration tests for my …