Thank you, Tsai.
Holden, would you mind posting the JIRA issue ID here? I searched but
found nothing. Thanks.
2015-10-23 1:36 GMT+08:00 DB Tsai:
> There is a JIRA for this. I know Holden is interested in this.
>
>
> On Thursday, October 22, 2015, YiZhi Liu wrote:
>>
>> Would someone mind giving some hint?
+kylin dev list
周千昊 wrote on Friday, October 23, 2015, at 10:20 AM:
> Hi, Reynold
> We use glom() because it is easy to adapt the calculation logic
> already implemented in MR. And to be clear, we are still in POC.
> Since the results show there is almost no difference between this
> glom stage and the MR mapper, using glom here might not be the issue.
Hi, Reynold
We use glom() because it is easy to adapt the calculation logic
already implemented in MR. And to be clear, we are still in POC.
Since the results show there is almost no difference between this
glom stage and the MR mapper, using glom here might not be the issue.
I w...
On 22 Oct 2015, at 21:54, Chester Chen <ches...@alpinenow.com> wrote:
Thanks Steve
I like the slides on Kerberos. I have enough scars from trying to
integrate Kerberos with Pig, MapRed, Hive JDBC, HCatalog, Spark, etc.
I am still having trouble making impersonation work for HCatalog.
Thanks Steve
I like the slides on Kerberos. I have enough scars from trying to
integrate Kerberos with Pig, MapRed, Hive JDBC, HCatalog, Spark, etc.
I am still having trouble making impersonation work for HCatalog.
I might send you an offline email to ask for some pointers.
On 22 Oct 2015, at 19:32, Chester Chen <ches...@alpinenow.com> wrote:
Steven
You summarized it mostly correctly, but there are a couple of points I
want to emphasize.
Not every cluster has the Hive Service enabled, so the Yarn Client
shouldn't try to get the Hive delegation token just because security mode
is enabled.
Why do you do a glom? It seems unnecessarily expensive to materialize each
partition in memory.
On Thu, Oct 22, 2015 at 2:02 AM, 周千昊 wrote:
> Hi, spark community
> I have an application which I am trying to migrate from MR to Spark.
> It will do some calculations from Hive and output to HFiles which will
> be bulk loaded into an HBase table.
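For readers following along, here is a minimal sketch of the contrast Reynold is pointing at; the input path and element type are made up, not the original pipeline:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("glom-vs-mapPartitions"))
val input = sc.textFile("hdfs:///tmp/input") // hypothetical input path

// glom() first materializes all records of a partition as one in-memory Array:
val viaGlom = input.glom().map(records => records.length)

// mapPartitions() streams the same records through a lazy Iterator, so the
// whole partition never has to be held in memory at once:
val viaMapPartitions = input.mapPartitions(records => Iterator.single(records.size))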
Steven
You summarized it mostly correctly, but there are a couple of points I
want to emphasize.
Not every cluster has the Hive Service enabled, so the Yarn Client
shouldn't try to get the Hive delegation token just because security mode
is enabled.
The Yarn Client code can check if the s...
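A sketch of the kind of guard being suggested here (hypothetical code, not the actual YARN client implementation): probe for Hive on the classpath before attempting the metastore token, instead of assuming its presence whenever security is enabled.

// Hypothetical guard, not the actual org.apache.spark.deploy.yarn.Client code.
def shouldFetchHiveToken(securityEnabled: Boolean): Boolean =
  securityEnabled && {
    try {
      // Only attempt a Hive delegation token when Hive is on the classpath.
      Class.forName("org.apache.hadoop.hive.conf.HiveConf")
      true
    } catch {
      case _: ClassNotFoundException => false
    }
  }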
There is a JIRA for this. I know Holden is interested in this.
On Thursday, October 22, 2015, YiZhi Liu wrote:
> Would someone mind giving some hint?
>
> 2015-10-20 15:34 GMT+08:00 YiZhi Liu:
> > Hi all,
> >
> > I noticed that in ml.classification.LogisticRegression, users are not
> > allowed to set initial coefficients, while it is supported in
> > mllib.classification.LogisticRegressionWithSGD.
A similar issue occurs when interacting with Hive secured by Sentry.
https://issues.apache.org/jira/browse/SPARK-9042
By changing how the HiveContext instance is created, this issue might also
be resolved.
On Thu, Oct 22, 2015 at 11:33 AM Steve Loughran
wrote:
> On 22 Oct 2015, at 08:25, Chester Chen wrote:
You can use the following link:
https://issues.apache.org/jira/secure/CreateIssue!default.jspa
Remember to select Spark as the project.
On Thu, Oct 22, 2015 at 9:38 AM, Richard Marscher
wrote:
> Hi,
>
> I'm working on following the guidelines for contributing code to Spark and
> am trying to create a related JIRA issue.
Hi,
I'm working on following the guidelines for contributing code to Spark and
am trying to create a related JIRA issue. I'm logged in to my account on
issues.apache.org, but I don't seem to have an option to create an issue,
just browse/search existing ones.
Any help would be appreciated!
Thanks
Would someone mind giving some hint?
2015-10-20 15:34 GMT+08:00 YiZhi Liu:
> Hi all,
>
> I noticed that in ml.classification.LogisticRegression, users are not
> allowed to set initial coefficients, while it is supported in
> mllib.classification.LogisticRegressionWithSGD.
>
> Sometimes we know sp...
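For reference, the mllib-side capability being asked about looks roughly like this (a sketch assuming an existing SparkContext sc; the training data is made up):

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Toy training set with three features per example.
val training = sc.parallelize(Seq(
  LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0)),
  LabeledPoint(0.0, Vectors.dense(0.0, 2.0, 1.0))))

// mllib's run() accepts explicit initial weights;
// ml.classification.LogisticRegression exposes no such setter.
val initialWeights = Vectors.dense(0.0, 0.0, 0.0)
val model = new LogisticRegressionWithSGD().run(training, initialWeights)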
On 22 Oct 2015, at 08:25, Chester Chen <ches...@alpinenow.com> wrote:
Doug
We are not trying to compile against a different version of Hive. The
1.2.1.spark hive-exec is specified in the Spark 1.5.2 POM file. We are moving
from Spark 1.3.1 to 1.5.1, simply trying to supply the needed dependency.
I guess the order is guaranteed unless you set
spark.streaming.concurrentJobs to a number higher than 1.
Thanks
Best Regards
On Mon, Oct 19, 2015 at 12:28 PM, Renjie Liu
wrote:
> Hi, all:
> I've read source code and it seems that there is no guarantee that the
> order of processing of each
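For concreteness, the setting referred to above is an ordinary conf entry (sketch only):

import org.apache.spark.SparkConf

// Defaults to 1: batch jobs run strictly one at a time, preserving order.
// Values above 1 let jobs from different batches overlap, at which point
// ordering across batches is no longer guaranteed.
val conf = new SparkConf().set("spark.streaming.concurrentJobs", "1")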
Hi,
I don't know much about your particular use case, but most (if not all) of
the Spark command line parameters can also be specified as properties.
You should try to use
SparkLauncher.setConf("spark.executor.instances", "3")
HTH,
Luc
Luc Bourlier
*Spark Team - Typesafe, Inc.*
luc.bourl...@typ
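A minimal end-to-end sketch of Luc's suggestion; the jar path, main class, and master are placeholders:

import org.apache.spark.launcher.SparkLauncher

val app = new SparkLauncher()
  .setAppResource("/path/to/my-app.jar")    // placeholder
  .setMainClass("com.example.MyApp")        // placeholder
  .setMaster("yarn-cluster")                // placeholder
  .setConf("spark.executor.instances", "3") // equivalent of --num-executors 3
  .launch()                                 // returns a java.lang.Process
app.waitFor()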
Hi, spark community
I have an application which I am trying to migrate from MR to Spark.
It will do some calculations from Hive and output to HFiles which will
be bulk loaded into an HBase table, details as follows:
Rdd<...> input = getSourceInputFromHive()
Rdd<...> mapSideResult = input.glom().mapPartitions(...)
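A rough sketch of what the HFile output step of such a pipeline can look like, assuming HBase's HFileOutputFormat2 is used; the poster's real key/value types and sort step are not shown:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2
import org.apache.spark.rdd.RDD

// Assumed to be the reduce-side result, already sorted by row key,
// which HFileOutputFormat2 requires.
val rows: RDD[(ImmutableBytesWritable, KeyValue)] = ???

rows.saveAsNewAPIHadoopFile(
  "/tmp/hfiles", // staging dir, later handed to LoadIncrementalHFiles
  classOf[ImmutableBytesWritable],
  classOf[KeyValue],
  classOf[HFileOutputFormat2],
  new Configuration())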
Doug
We are not trying to compile against a different version of Hive. The
1.2.1.spark hive-exec is specified in the Spark 1.5.2 POM file. We are moving
from Spark 1.3.1 to 1.5.1, simply trying to supply the needed dependency. The
rest of the application (besides Spark) simply uses Hive 0.13.1.
> On Oct 21, 2015, at 8:45 PM, Chester Chen wrote:
>
> Doug
> thanks for responding.
> >> I think Spark just needs to be compiled against 1.2.1
>
> Can you elaborate on this, or the specific command you are referring to?
>
> In our build.scala, I was including the following
>
> "org.spark...