That suggestion got lost along the way and IIRC the patch didn't have
that. It's a good idea though, if nothing else to provide a simple
means of backwards compatibility.
I created a JIRA for this. It's very straightforward so maybe someone
can pick it up quickly:
https://issues.apache.org/jira/b
Yeah, the approximation of the Hessian in LBFGS isn't stateless, and it does
depend on the previous LBFGS step, as Xiangrui also pointed out. It's surprising
that it works without error messages. I also saw the loss fluctuating
like SGD during training.
We will remove the miniBatch mode in LBFGS in Sp
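(For illustration, a minimal sketch of what makes that stateful: a Breeze
DiffFunction, with hypothetical names, that resamples its mini-batch on every
evaluation, so the line search sees a moving objective and the (s, y)
curvature pairs mix gradients from different samples.)

import breeze.linalg.DenseVector
import breeze.optimize.DiffFunction
import scala.util.Random

// Hypothetical sketch: a mini-batch cost drawing a fresh sample on each
// evaluation. Two calls at the same point x can return different values.
class ResamplingCost(data: Array[Double]) extends DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val batch = Random.shuffle(data.toSeq).take(math.max(1, data.length / 10))
    val loss = batch.map(d => (x(0) - d) * (x(0) - d)).sum / batch.size
    val grad = DenseVector(batch.map(d => 2.0 * (x(0) - d)).sum / batch.size)
    (loss, grad)
  }
}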
Yeah, that's probably the easiest, though obviously pretty hacky.
I'm surprised that the Hessian approximation isn't worse than it is. (As
in, I'd expect error messages.) It's obviously line searching much more, so
the approximation must be worse. You might be interested in this online
BFGS:
http:/
I have a quick hack to understand the behavior of SLBFGS
(stochastic LBFGS): I override Breeze's iterations method to get the
current LBFGS step, ensuring that the objective function is the same during
the line search step. David, the following is my code; do you have a better
way to inject into it?
h
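(Since the link/code above is cut off, here is a rough sketch of the kind of
hook being described, assuming Breeze's FirstOrderMinimizer exposes an
overridable iterations method and a State carrying an iter field; this is not
the author's actual code.)

import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, LBFGS}

// Rough sketch: subclass Breeze's LBFGS and wrap the iterator it returns
// so each outer step publishes its iteration number. The cost function can
// then use currentStep as a fixed sampling seed, keeping the mini-batch
// (and hence the objective) constant across all evaluations made within
// one line search.
class TrackedLBFGS(maxIter: Int, m: Int, tolerance: Double)
    extends LBFGS[DenseVector[Double]](maxIter, m, tolerance) {
  @volatile var currentStep: Int = 0

  override def iterations(f: DiffFunction[DenseVector[Double]],
                          init: DenseVector[Double]): Iterator[State] = {
    super.iterations(f, init).map { state =>
      currentStep = state.iter  // expose the current step to the sampler
      state
    }
  }
}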
Thanks. I'm fine with the logic change, although I was a bit surprised to
see Hadoop used for file I/O.
Anyway, the JIRA issue and pull request discussions mention a flag to
enable overwrites. That would be very convenient for a tutorial I'm
writing, although I wouldn't recommend it for normal use.
Hi Dean,
We have always used the Hadoop libraries here to read and write local
files. In Spark 1.0 we started enforcing the rule that you can't
overwrite an existing directory, because it can cause
confusing/undefined behavior if multiple jobs output to the directory
(they partially clobber each other'
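(Not the flag under discussion, but a common manual workaround for the
tutorial case: delete the target directory through the same Hadoop FileSystem
API before writing. The helper name here is illustrative.)

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Illustrative workaround: remove an existing output directory via the
// Hadoop FileSystem API (the same layer Spark uses for file I/O), so a
// subsequent saveAsTextFile doesn't hit the no-overwrite rule.
def deleteIfExists(pathStr: String): Unit = {
  val path = new Path(pathStr)
  val fs = path.getFileSystem(new Configuration())
  if (fs.exists(path)) fs.delete(path, true)  // recursive delete
}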
Sorry, got cut off. 0.9.0 and 1.0.0 are not binary compatible,
and in a few cases not source compatible. 1.X will be source
compatible. We are also planning to support binary compatibility in
1.X, but I'm waiting until we make a few releases to officially promise
that, since Scala makes this p
> What are the expectations / guarantees on binary compatibility between
> 0.9 and 1.0?
There are no guarantees.
Hi Patrick,
What are the expectations / guarantees on binary compatibility between
0.9 and 1.0?
You mention some API changes, which kinda hint that binary
compatibility has already been broken, but I just wanted to point out
that there are other cases, e.g.:
Exception in thread "main" java.lang.reflect
Hi All,
Since we're launching Spark YARN jobs from our Tomcat application, the default
behavior of calling System.exit when the job finishes or runs into any error
isn't desirable.
We created this PR https://github.com/apache/spark/pull/490 to address this
issue. Since the logic is fairly straightfo
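(Separate from the PR's approach, one common containment trick for embedded
clients is to install a SecurityManager that turns System.exit into a
catchable exception; a hypothetical sketch:)

import java.security.Permission

// Hypothetical sketch, independent of the PR above: convert System.exit
// calls made by embedded code into an exception the Tomcat app can catch.
class NoExitSecurityManager extends SecurityManager {
  override def checkPermission(perm: Permission): Unit = ()  // permit everything else
  override def checkExit(status: Int): Unit =
    throw new SecurityException("intercepted System.exit(" + status + ")")
}

// Usage: System.setSecurityManager(new NoExitSecurityManager)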
I'm observing one anomalous behavior: with the 1.0.0 libraries, it's using
HDFS classes for file I/O, while the same script compiled and run with
0.9.1 uses only local-mode file I/O.
The script is a variation of the Word Count script. Here are the "guts":
object WordCount2 {
def main(arg
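(The script is cut off in the archive; a hypothetical reconstruction of a
word-count "guts" of that shape, against the Spark 1.0-era API:)

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // pair-RDD implicits (pre-1.3 API)

// Hypothetical reconstruction, not the poster's actual script.
object WordCount2 {
  def main(args: Array[String]) {
    val sc = new SparkContext("local", "WordCount2")
    sc.textFile(args(0))                      // input path
      .flatMap(_.split("""\W+"""))
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(1))                // output directory
    sc.stop()
  }
}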
Hey All,
This is not an official vote, but I wanted to cut an RC so that people can
test against the Maven artifacts, test building with their configuration,
etc. We are still chasing down a few issues and updating docs, etc.
If you have issues or bug reports for this release, please send an e-ma