+1 from me
- Built with tests on OS X/Linux for Hadoop 2.6.0
- Ran several large-scale streaming programs for days on RC2 and now for
almost a day on RC3 without any issues on YARN
- Tested Savepoint/External checkpoints/rescaling
- Tested user metrics with custom reporter
Gyula
Tzu-Li (Gordon) Tai
+1 (non-binding)
Thanking you.
With Regards
Sree
On Monday, January 30, 2017 10:06 PM, Tzu-Li (Gordon) Tai
wrote:
+1 (non-binding)
- Tested TaskManager failures on Mesos / Standalone with exactly-once guarantees
- Above tests also done against Kafka 0.8 / 0.9 / 0.10, offsets committed
correctly back to ZK (manual check for 0.8 due to FLINK-4822)
- Tested Kafka 0.10 server-side timestamps
- Verified Async I/O
Hi Sree,
Elasticsearch 5 support is expected in Flink 1.3.0.
The ETA for the 1.3.0 release is near the end of May 2017, based on the recently
announced time-based release schedule.
The pull requests for Elasticsearch 5 support just need to be reviewed before
they are merged.
Hi Flink Team,
What is the latest on supporting Elasticsearch v5? Any ETA?
https://github.com/apache/flink/pull/2767
For my situation, this is becoming a deal breaker that may push us to choose
Storm/Spark instead.
Thanking you.
With Regards
Sree
Hi Radu,
yes, the clean-up timeout would need to be defined somewhere.
I would actually prefer to do that within the query, because the clean-up
timeout affects the computed result and hence its semantics.
This could look, for instance, as follows:
SELECT a, sum(b)
FROM myTable
WHERE rowtime B
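A fuller sketch of such a query (the exact predicate and the one-hour
interval are assumptions for illustration, not from the original mail):

SELECT a, SUM(b)
FROM myTable
-- rows whose rowtime is older than one hour may be cleaned up without
-- changing the result of the query
WHERE rowtime BETWEEN CURRENT_TIMESTAMP - INTERVAL '1' HOUR
                  AND CURRENT_TIMESTAMP
GROUP BY a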
Hi Radu,
Updates of the result (materialized view) are not always simple appends. If
the query is a non-windowed aggregation or a windowed aggregation (or join)
with late data, some parts of the result need to be removed or updated.
I think in order to implement the second option, we would need to
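To make the update/retraction case concrete, a sketch of a non-windowed
aggregation (table and column names are invented for illustration):

-- Every new row for a given user_id changes the count that was already
-- emitted for that user, so the result cannot be maintained by appends
-- alone; earlier result rows must be updated or retracted.
SELECT user_id, COUNT(*) AS cnt
FROM clicks
GROUP BY user_id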
+1 from my side
- Checked the LICENSE and NOTICE files
- No binary executable in the release
- Clean build and tests for Linux Scala 2.11, Hadoop 2.6.2
- Ran a streaming program with Async I/O against Redis
- Ran examples on a local cluster - all log files are sane
- Checked the contents
Hi Fabian,
Thanks for the clarifications. I have a follow-up question: you say that
operations are expected to be bounded in space and time (e.g., the optimizer
will do a cleanup after a certain timeout period). Can I assume that this
implies that we will have, at the level of the system, a
+1 from my side:
- Re-verified signatures and checksums
- Re-checked out the Java quickstarts and ran the jobs
- Re-checked that all poms point to 1.2.0
- Re-ran streaming state machine with Kafka source, RocksDB backend
and master and worker failures (standalone cluster)
- Tested externalized checkpoints
Already on it. I closed some and commented on others to ask people whether
we can close. :-)
On Sun, 29 Jan 2017 at 22:31 Robert Metzger wrote:
> Thank you for fixing the link Chesnay.
>
> Over the weekend, I've assigned all JIRAs without a component to one.
> I think some of the assignments were
Thanks again for all your feedback on my proposal!
The discussion was open for the last 12 days and the majority was very
positive about it.
I will keep an eye on the valid concerns Greg raised (neglected PRs,
unstable releases) and see how we can solve them.
I'll update the wiki pages and include
Hi,
I would like to ask for further clarification about the statement:
" a streaming query should be equivalent to the result of a batch query that is
executed on the materialized stream "
I do agree with the principle, but the question that I would like to ask is
how do we interpret the result
Hi Radu,
I think it is most important to get the semantics of a streaming query
right.
In my opinion, the result of a streaming query should be equivalent to the
result of a batch query that is executed on the materialized stream.
It should not matter whether you append the records received from a
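To make the equivalence concrete, a sketch (table names invented for
illustration):

-- Batch query over the stream as materialized into a table so far:
SELECT user_id, SUM(amount) AS total
FROM orders_materialized
GROUP BY user_id
-- The streaming version of the same query over the live stream should,
-- at any point in time, produce the result this batch query yields on
-- the prefix of the stream materialized up to that point.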
Hi Fabian,
Thanks for the link and for the remarks.
I do not necessarily imagine the behavior of the inner query along the lines
you describe. I specifically refer to "This applies as well for the inner query.
However, as the result of the inner query evolves, also the result of the join
needs to be
Hi Radu,
I thought about your join proposal again and think there is an issue with
the semantics.
The problem is that the result of a query is recomputed as new data arrives
in the dynamic table.
This applies as well for the inner query. However, as the result of the
inner query evolves, also the
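A sketch of such a query (invented for illustration): the inner aggregate
evolves as rows arrive, which can invalidate join results that were
already emitted:

SELECT o.user_id, o.amount
FROM orders o
JOIN (SELECT user_id, MAX(rowtime) AS latest
      FROM orders
      GROUP BY user_id) m
  ON o.user_id = m.user_id AND o.rowtime = m.latest
-- When a newer order for a user arrives, MAX(rowtime) changes and a
-- previously emitted join row is no longer part of the correct result,
-- so it would have to be retracted.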