ay would need some handling.
>
> I will check with the author of this code, I think this code can be
> contributed to Spark.
>
> Hemant
> www.snappydata.io
> linkedin.com/company/snappydata
>
> On Wed, Oct 7, 2015 at 3:30 PM, Ophir Cohen wrote:
>
>> From which jar
> One approach would be to wrap your MutableRow in a WrappedInternalRow, which
> is a child class of Row.
>
> Hemant
> www.snappydata.io
> linkedin.com/company/snappydata
>
>
> On Tue, Oct 6, 2015 at 3:21 PM, Ophir Cohen wrote:
>
>> Hi Guys,
>> I'm
Hi Guys,
I'm upgrading to Spark 1.5.
In our previous version (Spark 1.3, but it was OK on 1.4 as well) we created a
GenericMutableRow
(org.apache.spark.sql.catalyst.expressions.GenericMutableRow) and returned it
as org.apache.spark.sql.Row.
Starting from Spark 1.5, GenericMutableRow no longer extends Row.
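In case it helps others hitting the same change, a minimal sketch of one
workaround: copy the same values into an external Row instead of up-casting
the mutable row. Row.fromSeq is the stock Spark API; the values here are
hypothetical:

import org.apache.spark.sql.Row

// A GenericMutableRow can no longer be returned as a Row in 1.5,
// so build an external Row from the same values instead.
val row: Row = Row.fromSeq(Seq(1, "foo", 2.0)) // hypothetical field values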
Nope, I'm checking it out, thanks!
On Tue, Sep 29, 2015 at 3:30 PM, Ted Yu wrote:
> Have you seen this thread?
> http://search-hadoop.com/m/q3RTtGwP431AQ2B41
>
> Plug in the metastore version for your deployment.
>
> Cheers
>
> On Sep 29, 2015, at 5:20 AM, Ophir Cohen wrote:
Hi,
I'm using Spark on top of Hive.
As I want to keep old tables, I store the DataFrame into a tmp table in Hive
and, when it finishes successfully, I rename the table.
In the last few days I've upgraded to Spark 1.4.1, and as I'm using AWS EMR
I got Hive 1.0.
Now when I try to rename the table I get the
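A minimal sketch of the tmp-table-then-rename flow described above, assuming
a HiveContext hc, a DataFrame df, and hypothetical table names; the rename is
the step that fails for the author:

// Write to a temporary table first, then swap it in via HiveQL.
df.write.saveAsTable("my_table_tmp")
hc.sql("ALTER TABLE my_table_tmp RENAME TO my_table")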
Hi,
I'm working on my company's system, which is built out of Spark, Zeppelin,
Hive and some other technologies, and I'm wondering about the ability to stop
contexts.
Working on the test framework for the system, when running tests I would
sometimes like to create a new SparkContext in order to run the tests on
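A minimal sketch of stopping and recreating a context between test batches,
assuming Spark 1.x in local mode; all names are hypothetical:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("test-suite").setMaster("local[*]")
var sc = new SparkContext(conf)
// ... run one batch of tests ...
sc.stop()                   // only one SparkContext may be active per JVM
sc = new SparkContext(conf) // fresh context for the next batch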
A short update: eventually we manually upgraded to 1.3.1 and the problem was
fixed.
On Apr 26, 2015 2:26 PM, "Ophir Cohen" wrote:
> I happened to hit the following issue that prevents me from using UDFs
> with case classes: https://issues.apache.org/jira/browse/SPARK-6054.
>
I happened to hit the following issue that prevents me from using UDFs with
case classes: https://issues.apache.org/jira/browse/SPARK-6054.
The issue is already fixed in 1.3.1, but we are working on Amazon and it looks
like Amazon doesn't yet provide a deployment of Spark 1.3.1 via their scripts.
Did someone en
I wrote a few mails here regarding this issue.
After further investigation I think there is a bug in Spark 1.3 in saving
Hive tables.
(hc is a HiveContext)
1. Verify the needed configuration exists:
scala> hc.sql("set hive.exec.compress.output").collect
res4: Array[org.apache.spark.sql.Row] =
Array([
ationMaps:{})), partitionKeys:[],
parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"sid","type":"integer","nullable":true,"metadata":{}},{"name":"typeid","type
Sadly I'm encountering too many issues migrating my code to Spark 1.3.
I wrote about one problem in another mail, but my main problem is that I can't
set the right compression type.
In Spark 1.2.1 setting the following values was enough:
hc.setConf("hive.exec.compress.output", "true")
hc.setConf("mapreduce
I think I've encountered the same problem: I'm trying to turn on Hive output
compression.
I have the following lines:
def initHiveContext(sc: SparkContext): HiveContext = {
  val hc: HiveContext = new HiveContext(sc)
  hc.setConf("hive.exec.compress.output", "true")
  hc.setConf("mapreduce.output.
Lately we upgraded our Spark to 1.3.
Not surprisingly, along the way I found a few incompatibilities between the
versions, which was quite expected.
I found one change whose origin I'm interested in understanding.
env: Amazon EMR, Spark 1.3, Hive 0.13, Hadoop 2.4
In Spark 1.2.1 I ran from the code query such:
SHOW
BTW
This:
hc.sql("show tables").collect
Works great!
On Tue, Apr 21, 2015 at 10:49 AM, Ophir Cohen wrote:
> Lately we upgraded our Spark to 1.3.
> Not surprisingly, along the way I found a few incompatibilities between the
> versions, which was quite expected.
> I found change t
On Mon, Apr 20, 2015 at 5:43 PM, Ophir Cohen wrote:
> Hi,
> Today I upgraded our code and cluster to 1.3.
> We are using Spark 1.3 on Amazon EMR, AMI 3.6, including the history server
> and Ganglia.
>
> I also migrated all deprecated SchemaRDDs to DataFrames.
> Now when I'm trying t
Interesting:
removing the history server and the '-a' option, and using AMI 3.5, fixed the
problem.
Now the question is: what made the difference?...
I vote for the '-a', but let me update...
On Mon, Apr 20, 2015 at 5:43 PM, Ophir Cohen wrote:
> Hi,
> Today I upgraded our code an
Hi,
Today I upgraded our code and cluster to 1.3.
We are using Spark 1.3 on Amazon EMR, AMI 3.6, including the history server
and Ganglia.
I also migrated all deprecated SchemaRDDs to DataFrames.
Now when I'm trying to read parquet files from S3 I get the below exception.
Actually it's not a problem if
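A minimal sketch of the kind of read that triggers the exception, assuming
hc is the HiveContext from the earlier mails and the S3 path is hypothetical:

// parquetFile was the Spark 1.3 way to load parquet (deprecated in 1.4+).
val df = hc.parquetFile("s3n://my-bucket/path/to/table")
df.printSchema()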
ying table). In the
> query compilation process, we will first analyze this query and resolve
> those attribute references. A resolved attribute reference means that this
> reference is valid and we know where to get the column values from the
> input. Hope this is helpful.
>
> On Tue,
here is no id
> associated with it.
>
> On Tue, Mar 17, 2015 at 2:08 PM, Ophir Cohen wrote:
>
>> Interesting, I thought the problem was with the method itself.
>> I will check it soon and update.
>> Can you elaborate on what the # and the number mean? Is that a
>
OK, I managed to solve it.
As the issue in JIRA suggests, it was fixed in 1.2.1; I probably had some old
jars in the classpath.
Cleaning everything and rebuilding eventually solved the problem.
On Mar 17, 2015 12:25 PM, "Ophir Cohen" wrote:
> Hi Guys and great job!
> I encountered a weird
diluted_d" was not resolved? Can you check if
> basic_null_diluted_d is in you table?
>
> On Tue, Mar 17, 2015 at 9:34 AM, Ophir Cohen wrote:
>
>> Hi Guys,
>> I'm registering a function using:
>> sqlc.registerFunction("makeEstEntry",ReutersData
Hi Guys,
I'm registering a function using:
sqlc.registerFunction("makeEstEntry",ReutersDataFunctions.makeEstEntry _)
Then I register the table and try to query the table using that function
and I get:
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
attributes:
'makeEstEn
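A minimal sketch of the registration-plus-query flow described above,
assuming an existing SparkContext sc and a registered table named people; a
hypothetical one-argument function stands in for makeEstEntry
(registerFunction was the pre-1.3 API; from 1.3 on it is sqlc.udf.register):

import org.apache.spark.sql.SQLContext

val sqlc = new SQLContext(sc)
sqlc.registerFunction("toUpper", (s: String) => s.toUpperCase)
sqlc.sql("SELECT toUpper(name) FROM people").collect()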
Hi Guys and great job!
I encountered a weird problem in local mode and I'd be glad to sort it out...
When trying to save a SchemaRDD into a Hive table it fails with
'TreeNodeException: Unresolved plan found'.
I have found a similar issue in JIRA:
https://issues.apache.org/jira/browse/SPARK-4825 but I'm us