Hi,
I would like to use a Spark 2.0 History Server instance on Spark 1.6-generated
event logs. (This is because clicking the refresh button in the browser
updates the UI with the latest events, whereas in the 1.6 code base this
does not happen.)
My question is whether this is safe to do and whether there are any known issues.
It should work fine. 2.0 dropped support for really old event logs
(pre-Spark 1.3 I think), but 1.6 should work, and if it doesn't it
should be considered a bug.
On Thu, Sep 15, 2016 at 10:21 AM, Mario Ds Briggs
wrote:
> Hi,
>
> I would like to use a Spark 2.0 History Server instance on spark1.6
They should be compatible.
On Thu, Sep 15, 2016 at 10:21 AM, Mario Ds Briggs
wrote:
> Hi,
>
> I would like to use a Spark 2.0 History Server instance on spark1.6
> generated eventlogs. (This is because clicking the refresh button in
> browser, updates the UI with latest events, where-as in the
What is meant by:
"""
(This is because clicking the refresh button in browser, updates the UI
with latest events, where-as in the 1.6 code base, this does not happen)
"""
Hasn't refreshing the page updated all the information in the UI through
the 1.x line?
I had checked in 1.6.2 and it doesn't. I didn't check lower versions. The
history server logs do show a 'Replaying log path: file:xxx.inprogress'
when the file is changed, but a refresh of the UI doesn't show the new
jobs/stages/tasks.
thanks
Mario
From: Ryan Williams
To: Reynold
This function is in Partitioner:
def getPartition(key: Any): Int = key match {
  case null => 0
  // case None => 0
  case _ => Utils.nonNegativeMod(key.hashCode, numPartitions)
}
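For reference, here is a self-contained sketch of that dispatch (not the Spark source verbatim; `nonNegativeMod` is written out here from its assumed behavior of being like `%` but never negative). It shows why null keys always land in partition 0 while every other key is hashed into [0, numPartitions):

```scala
// Sketch of the getPartition dispatch, with nonNegativeMod inlined.
object PartitionSketch {
  // Assumed behavior of Utils.nonNegativeMod: like %, but never negative.
  def nonNegativeMod(x: Int, mod: Int): Int = {
    val rawMod = x % mod
    rawMod + (if (rawMod < 0) mod else 0)
  }

  def getPartition(key: Any, numPartitions: Int): Int = key match {
    case null => 0 // null has no hashCode, so it is pinned to partition 0
    case _    => nonNegativeMod(key.hashCode, numPartitions)
  }
}
```

Note that the extra step over a plain `%` matters because the JVM remainder can be negative (e.g. `-7 % 4 == -3`), and a partition index must not be.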
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Why-we-get-0-when-th
Who can give me an example of the use of RangePartitioner.hashCode? Thank
you!
What else do you expect to get? A non-zero hash value?
It can technically be any constant.
On Thu, Sep 15, 2016 at 6:15 PM, WangJianfei <
wangjianfe...@otcaix.iscas.ac.cn> wrote:
> this func is in Partitioner
> def getPartition(key: Any): Int = key match {
> case null => 0
> //case No
When the key is not in the RDD, I still get a value; it just seems a
little strange.
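That is expected: getPartition is a pure function of the key alone, and the partitioner never consults the RDD's contents, so any key, present or not, maps to some valid partition index (lookups need this to know where to search). A minimal sketch, using a standalone function rather than the real Spark API:

```scala
// Sketch: a hash partitioner maps ANY key to a partition, whether or not
// that key actually occurs in the RDD.
def getPartition(key: Any, numPartitions: Int): Int =
  if (key == null) 0
  else {
    val raw = key.hashCode % numPartitions
    if (raw < 0) raw + numPartitions else raw
  }

// A key that was never inserted still gets a deterministic, valid index:
val p = getPartition("never-inserted-key", 8)
```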
We tried it, and it works: the v2.0 History Server can read the v1.6 logs.
Note that the UI has a regression. When there are too many jobs, the UI
will freeze because the new code tries to cache everything. We submitted a
JIRA: https://issues.apache.org/jira/browse/SPARK-17243
On Thu, Sep 15, 2016
class HashPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 0,
    s"Number of partitions ($partitions) cannot be negative.")

The source code has require(partitions >= 0), but I don't know why it
makes sense when partitions is 0.
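One plausible reading (my assumption, not something I found stated in the Spark docs): a zero-partition partitioner is harmless as long as getPartition is never called, e.g. alongside an empty RDD, so the require only rules out genuinely nonsensical negative counts. The moment getPartition is called with 0 partitions, the modulo throws. A quick sketch:

```scala
// Sketch: require(partitions >= 0) admits 0, but any actual getPartition
// call with numPartitions == 0 divides by zero on the JVM.
def getPartition(key: Any, numPartitions: Int): Int = {
  val raw = key.hashCode % numPartitions // ArithmeticException when 0
  if (raw < 0) raw + numPartitions else raw
}

val blowsUp =
  try { getPartition("k", 0); false }
  catch { case _: ArithmeticException => true }
```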