For posterity:
**the** way to prepare the Java environment in a terminal session on macOS is
as follows:
export JAVA_HOME=$(/usr/libexec/java_home -v1.8)
or
export JAVA_HOME=$(/usr/libexec/java_home -v11)
etc
There is no need to mess with $PATH or anything else. It has been like this for
at le
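(A quick way to confirm that builds launched from the same terminal session pick up the intended JDK is a throwaway check like the following; this is only a sketch, not from the thread.)

    public class WhichJdk {
        public static void main(String[] args) {
            // After the export above, tools started from the same shell should
            // report the selected JDK here.
            System.out.println("java.version = " + System.getProperty("java.version"));
            System.out.println("java.home    = " + System.getProperty("java.home"));
        }
    }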
Hi there,
I was going to -1 this because of the com.github.rdblue:brotli-codec:0.1.1
dependency, which is not available on Maven Central, and therefore is not
available from our repository manager (Nexus).
Historically most places I have worked have avoided other public maven
repositories bec
run ./dev/change-scala-version.sh 2.13? That's required first to
update POMs. It works fine for me.
On Thu, Aug 26, 2021 at 8:33 PM Stephen Coy
<s...@infomedia.com.au.invalid> wrote:
Hi all,
Being adventurous I have built the RC1 code with:
-Pyarn -Phadoop-3.2 -Pyarn -Phadoop-cloud -Phive-thriftserver -Phive-2.3
-Pscala-2.13 -Dhadoop.version=3.2.2
And then attempted to build my Java-based Spark application.
However, I found a number of our unit tests were failing with:
j
Sorry, I forgot:
[scoy@Steves-Core-i9-2 core]$ java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.262-b10, mixed mode)
which is on macOS 10.15.7
On 13 Oct 2020, at 12:47 pm, S
Hi all,
When trying to build current master with a simple:
mvn clean install
I get a consistent unit test failure in core:
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.403 s
<<< FAILURE! - in org.apache.spark.launcher.SparkLauncherSuite
[ERROR] testSparkLauncherGet
Hi Steve,
While I understand your point regarding the mixing of Hadoop jars, this does
not address the java.lang.ClassNotFoundException.
Prebuilt Apache Spark 3.0 builds are only available for Hadoop 2.7 or Hadoop
3.2. Not Hadoop 3.1.
The only place that I have found that missing class is in t
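(Not from the thread, but a common way to chase this kind of problem is to ask the JVM which jar, if any, a class is loaded from. The class name below is a placeholder, since the missing class is not shown in the truncated message above.)

    public class WhichJar {
        public static void main(String[] args) {
            // Placeholder class name; substitute the class from the ClassNotFoundException.
            String className = args.length > 0 ? args[0] : "org.example.SomeMissingClass";
            try {
                Class<?> cls = Class.forName(className);
                // getCodeSource() may be null for classes from the bootstrap class loader.
                System.out.println(className + " loaded from "
                        + cls.getProtectionDomain().getCodeSource().getLocation());
            } catch (ClassNotFoundException e) {
                System.out.println(className + " is not on the classpath");
            }
        }
    }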
be a few lines of SQL with an
aggregate, collect_list(), and joins.
On Thu, May 21, 2020 at 11:27 PM Stephen Coy
<s...@infomedia.com.au.invalid> wrote:
Hi there,
This will be a little long so please bear with me. There is a buildable example
available at https://github.com/sfcoy/sfcoy-spark-cce-test.
Say I have the following three tables:
Machines
Id,MachineType
11,A
12,B
23,B
24,A
25,B
Bolts
MachineType,Description
A,2
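(For readers skimming the archive: the query being described is roughly a join plus a collect_list() aggregate over these tables. The sketch below is not the thread's actual failing code; the Bolts descriptions are invented because the listing above is cut off, and it only shows the shape of the query with the sample data.)

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.collect_list;

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    public class CollectListJoinSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("collect-list-join-sketch")
                    .master("local[*]")
                    .getOrCreate();

            // Machines rows taken from the sample data above.
            StructType machinesSchema = new StructType()
                    .add("Id", DataTypes.IntegerType)
                    .add("MachineType", DataTypes.StringType);
            List<Row> machineRows = Arrays.asList(
                    RowFactory.create(11, "A"), RowFactory.create(12, "B"),
                    RowFactory.create(23, "B"), RowFactory.create(24, "A"),
                    RowFactory.create(25, "B"));
            Dataset<Row> machines = spark.createDataFrame(machineRows, machinesSchema);

            // Bolts rows: the quoted listing is truncated, so these values are invented.
            StructType boltsSchema = new StructType()
                    .add("MachineType", DataTypes.StringType)
                    .add("Description", DataTypes.StringType);
            List<Row> boltRows = Arrays.asList(
                    RowFactory.create("A", "hex bolt"),
                    RowFactory.create("B", "carriage bolt"));
            Dataset<Row> bolts = spark.createDataFrame(boltRows, boltsSchema);

            // Join on MachineType, then collect the matching descriptions per machine.
            Dataset<Row> boltsPerMachine = machines
                    .join(bolts, "MachineType")
                    .groupBy(col("Id"))
                    .agg(collect_list(col("Description")).alias("BoltDescriptions"));

            boltsPerMachine.show(false);
            spark.stop();
        }
    }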
On 1 Apr 2020, at 5:20 pm, Sean Owen
<sro...@gmail.com> wrote:
It can be published as "3.0.0-rc1" but how do we test that to vote on it
without some other RC1
I’m not sure what you mean by this question?
ownloaded in the Spark official website, etc.
On Wed, Apr 1, 2020 at 12:32 PM Sean Owen
<sro...@gmail.com> wrote:
These are release candidates, not the final release, so they won't be published
to Maven Central. The naming matches what the final release would be.
On Tue, Mar 31, 2
at 11:25 PM Stephen Coy
<s...@infomedia.com.au.invalid> wrote:
Furthermore, the spark jars in these bundles all look like release versions:
[scoy@Steves-Core-i9 spark-3.0.0-bin-hadoop3.2]$ ls -l jars/spark-*
-rw-r--r--@ 1 scoy staff 9261223 31 Mar 20:55
jars/spark-catalyst_2.12-3.0.0.
-unsafe_2.12-3.0.0.jar
-rw-r--r--@ 1 scoy staff 329764 31 Mar 20:55 jars/spark-yarn_2.12-3.0.0.jar
At least they have not yet shown up on Maven Central…
Steve C
On 1 Apr 2020, at 3:18 pm, Stephen Coy
<s...@infomedia.com.au.INVALID> wrote:
The download artifacts all seem to have the “RC1” missing from their names.
e.g. spark-3.0.0-bin-hadoop3.2.tgz
Cheers,
Steve C
On 1 Apr 2020, at 2:04 pm, Reynold Xin
<r...@databricks.com> wrote:
Please vote on releasing the following candidate as Apache Spark version 3.0.0.
The v
value is already fucked up
The following is the change log.
- When we switched the default value of `convertMetastoreParquet` (at
Apache Spark 1.2)
- When we switched the default value of `convertMetastoreOrc` (at Apache
Spark 2.4)
- When we switched `CREATE TABLE` itself. (Cha
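(Aside, not from the thread: those defaults correspond to the spark.sql.hive.convertMetastoreParquet and spark.sql.hive.convertMetastoreOrc settings, which an application can pin explicitly instead of relying on the version-specific default. A minimal sketch, with arbitrarily chosen values:)

    import org.apache.spark.sql.SparkSession;

    public class MetastoreConversionFlags {
        public static void main(String[] args) {
            // "true" is only an example value; set whichever behaviour your tables need.
            SparkSession spark = SparkSession.builder()
                    .appName("metastore-conversion-flags")
                    .master("local[*]")
                    .enableHiveSupport()
                    .config("spark.sql.hive.convertMetastoreParquet", "true")
                    .config("spark.sql.hive.convertMetastoreOrc", "true")
                    .getOrCreate();
            spark.stop();
        }
    }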
Hi there,
I’m kind of new around here, but I have had experience with all of the so-called
“big iron” databases such as Oracle, IBM DB2 and Microsoft SQL Server, as well
as PostgreSQL.
They all support the notion of “ANSI padding” for CHAR columns - which means
that such columns are always
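(The message is cut off here, but “ANSI padding” means a CHAR(n) value is stored blank-padded to exactly n characters. A tiny standalone illustration, not from the thread:)

    public class CharPaddingDemo {
        public static void main(String[] args) {
            // A CHAR(10) column stores "abc" blank-padded to its declared length of 10.
            String stored = String.format("%-10s", "abc");
            System.out.println("[" + stored + "] length=" + stored.length()); // length=10
        }
    }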