> [...] unless they are also in the GROUP BY clause
> or are inside an aggregate function.
>
> On Jul 18, 2014 5:12 AM, "Martin Gammelsæter"
> wrote:
>>
>> Hi again!
>>
>> I am having problems when using GROUP BY on both SQLContext and
>> HiveContext. [...]

... (Executor.scala:187)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
What am I doing wrong?
--
Best regards,
Martin Gammelsæter
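
A minimal sketch of the rule quoted above, assuming a SQLContext named sqlContext and a registered table people with columns name and age (table and column names are hypothetical):

// Valid: every selected column is either grouped or aggregated.
val ok = sqlContext.sql("SELECT name, AVG(age) FROM people GROUP BY name")

// Fails: age is neither in GROUP BY nor inside an aggregate function.
// val bad = sqlContext.sql("SELECT name, age FROM people GROUP BY name")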
> [...] the cluster using spark-ec2 from the 1.0.1 release, so I’m
> assuming that’s taken care of, at least in theory.
>
> I just spun down the clusters I had up, but I will revisit this tomorrow and
> provide the information you requested.
>
> Nick
--
Best regards,
Martin Gammelsæter
92209139
addJar every time the app starts up, and
instead manually add the jar to the classpath of every worker, but I
can't seem to find out how.)
--
Best regards,
Martin Gammelsæter
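
One possible alternative to calling addJar at every startup, sketched under the assumption of Spark 1.x and a jar that has already been copied to the same path on every worker (the path and app name are hypothetical):

import org.apache.spark.{SparkConf, SparkContext}

// Assumes /opt/jars/myapp.jar already exists on every worker.
// spark.executor.extraClassPath puts it on the executor classpath,
// so sc.addJar is not needed at startup.
val conf = new SparkConf()
  .setAppName("MyApp")
  .set("spark.executor.extraClassPath", "/opt/jars/myapp.jar")
val sc = new SparkContext(conf)

The same property can also be set in spark-defaults.conf instead of in code.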
> --
> My Blog: https://www.dbtsai.com
> LinkedIn: https://www.linkedin.com/in/dbtsai
>
>
> On Tue, Jul 8, 2014 at 8:43 AM, Koert Kuipers wrote:
>> do you control your cluster and spark deployment? if so, you can try to
>> rebuild with jetty 9.x
>>
>>
>>
Any ideas on how to solve this? Spark seems to use jetty
8.1.14, while dropwizard uses jetty 9.0.7, so that might be the source
of the problem.
On Tue, Jul 8, 2014 at 2:58 PM, Martin Gammelsæter
wrote:
> Hi!
>
> I am building a web frontend for a Spark app, allowing users to input [...]
allowing me to embed it in my own web
application.
Is this possible?
--
Best regards,
Martin Gammelsæter
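
A sketch of one way to attack the clash from an sbt build, assuming the app depends on both dropwizard and spark-core 1.0.1; excluding Spark's transitive jetty 8 artifacts may or may not leave Spark's web UI functional, so this is a starting point rather than a verified fix:

// build.sbt: keep dropwizard's jetty 9 and drop the jetty 8 artifacts
// that spark-core pulls in transitively.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1" excludeAll(
  ExclusionRule(organization = "org.eclipse.jetty")
)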
> [...] identically if no hive-site.xml is present.
Nice, that sounds like it'll solve my problems. Just for clarity, are
LocalHiveContext and HiveContext equivalent if no hive-site.xml is
present, or are there still differences?
--
Best regards,
Martin Gammelsæter
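
For reference, a sketch of the behavior the reply describes, assuming Spark 1.0/1.1:

import org.apache.spark.sql.hive.HiveContext

// With no hive-site.xml on the classpath, HiveContext falls back to an
// embedded local metastore (metastore_db in the current directory),
// which is what LocalHiveContext configured explicitly.
val hiveCtx = new HiveContext(sc)  // sc: an existing SparkContext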
> [...] the Datastax spark driver?
>
> Mohammed
>
> -----Original Message-----
> From: Martin Gammelsæter [mailto:martingammelsae...@gmail.com]
> Sent: Friday, July 4, 2014 12:43 AM
> To: user@spark.apache.org
> Subject: Re: How to use groupByKey and CqlPagingInputFormat
>
On Fri, Jul 4, 2014 at 11:39 AM, Michael Armbrust
wrote:
> On Fri, Jul 4, 2014 at 1:59 AM, Martin Gammelsæter
> wrote:
>>
>> is there any way to write user defined functions for Spark SQL?
> This is coming in Spark 1.1. There is a work in progress PR here:
> https://gith
>> [...] Hive, and I don't have
>> Hive in my stack (please correct me if I'm wrong).
>>
>> I would love to be able to do something like the following:
>>
>> val casRdd = sparkCtx.cassandraTable("ks", "cf")
>>
>> // registerAsTable etc
>>
I would love to be able to do something like the following:
val casRdd = sparkCtx.cassandraTable("ks", "cf")
// registerAsTable etc
val res = sql("SELECT id, xmlGetTag(xmlfield, 'sometag') FROM cf")
--
Best regards,
Martin Gammelsæter
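
A sketch of what that could look like once UDF support lands, assuming Spark 1.1's registerFunction on SQLContext and using scala-xml for the tag lookup; xmlGetTag is the hypothetical function name from the example above:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc: an existing SparkContext

// Hypothetical UDF: return the text of the first matching tag in an
// XML string column.
sqlContext.registerFunction("xmlGetTag", (xml: String, tag: String) =>
  (scala.xml.XML.loadString(xml) \\ tag).text)

val res = sqlContext.sql("SELECT id, xmlGetTag(xmlfield, 'sometag') FROM cf")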
On Thu, Jul 3, 2014 at 10:29 PM, Mohammed Guller wrote:
> Martin,
>
> 1) The first map contains the columns in the primary key, which could be a
> compound primary key containing multiple columns, and the second map
> contains all the non-key columns.
Ah, thank you, that makes sense.
> 2) try
[...] the
first question) is why am I not allowed to do a groupByKey in the
above code? I understand that the type does not have that function,
but I'm unclear on what I have to do to make it work.
--
Best regards,
Martin Gammelsæter
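
The usual fix, sketched for Spark 1.x: groupByKey is defined in PairRDDFunctions, which only applies to RDDs of (key, value) pairs, and the implicit conversion has to be imported explicitly:

import org.apache.spark.SparkContext._  // RDD-to-PairRDDFunctions implicits (Spark 1.x)

// groupByKey exists only on RDD[(K, V)], so map each row to a pair first.
// keyOf and valueOf stand in for whatever extracts the key and value
// from a Cassandra row here; they are hypothetical.
val pairs = casRdd.map(row => (keyOf(row), valueOf(row)))
val grouped = pairs.groupByKey()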
-newapihadooprdd-java-equivalent-of-scalas-classof
where I have also asked the same question. Any pointers on how to use
.newAPIHadoopRDD() and CqlPagingInputFormat from Java is greatly
appreciated! (Either here or on Stack Overflow)
--
Best regards,
Martin Gammelsæter
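
For reference, a sketch of the call shape in Scala, mirroring the Cassandra CQL example that ships with Spark; in Java the classOf[...] arguments become CqlPagingInputFormat.class and so on. The key and value types are the two maps described earlier in the thread: primary-key columns and non-key columns.

import java.nio.ByteBuffer
import java.util.{Map => JMap}
import org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat
import org.apache.hadoop.mapreduce.Job

val job = new Job()
// ... set the Cassandra input host, keyspace, column family, etc.
// on job.getConfiguration here ...

// Both key and value are Map<String, ByteBuffer>: the first holds the
// primary-key columns, the second the non-key columns.
val casRdd = sc.newAPIHadoopRDD(
  job.getConfiguration,
  classOf[CqlPagingInputFormat],
  classOf[JMap[String, ByteBuffer]],
  classOf[JMap[String, ByteBuffer]])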