> Is this a common use case?
> Is there a better way to solve my problem?
>
> Thanks
>
can I acquire when I submit an app? Is it greedy
> mode (as many as I can acquire)?
>
>
>>
>>
>> Since Spark can run multiple tasks in one executor, I am curious
>> how Spark manages memory across these tasks. Say one executor takes 1GB of
>> memory; if this executor can run 10 tasks simultaneously,
>> then each task can consume 100MB on average. Do I understand it correctly?
>> It doesn't make sense to me that Spark runs multiple tasks in one executor.
>>
>>
>>
>
>
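For reference, the number of concurrent tasks per executor comes from spark.executor.cores, and all of an executor's tasks share one JVM heap rather than receiving fixed per-task slices. A minimal sketch of the settings involved (the app name is made up; the numbers are taken from the question above):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: one executor JVM with a 1g heap runs up to 10 tasks at once
    // (spark.executor.cores); the tasks share that heap, so 100MB per task
    // is only an average, not a hard per-task limit.
    val conf = new SparkConf()
      .setAppName("executor-memory-sketch") // hypothetical app name
      .set("spark.executor.memory", "1g")
      .set("spark.executor.cores", "10")
    val sc = new SparkContext(conf)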
and if the Spark driver goes down, the Spark application will
> crash on YARN. I would appreciate any suggestions and ideas.
>
> Thank you!
>
Yes, I am able to reproduce the problem. Do you need the scripts to create
the tables?
On Thu, Apr 16, 2015 at 10:50 PM, Yin Huai wrote:
> Can you share the code that reproduces the problem?
>
> On Thu, Apr 16, 2015 at 5:42 AM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:

Hi,

As per the JIRA this issue is resolved, but I am still facing it:
SPARK-2734 - DROP TABLE should also uncache table
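For reference, a minimal sketch of the behavior SPARK-2734 describes (the table name and schema are made up, and an existing HiveContext named hiveContext is assumed):

    // Sketch: after SPARK-2734, dropping a cached table should also evict
    // its cached blocks, so no stale in-memory copy survives the drop.
    hiveContext.sql("CREATE TABLE t (key INT, value STRING)") // hypothetical table
    hiveContext.cacheTable("t")
    hiveContext.sql("DROP TABLE t") // should uncache "t" as well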
uld be able to do data integrity checks through
> configuration (XML, JSON, etc.); please share your thoughts.
>
>
> Thanks
>
> Sathish
>
ive) from Spark
> SQL. Otherwise, how would anyone do analytics, since the source tables are
> always persisted either directly on HDFS or through Hive?
>
>
> On Fri, Mar 27, 2015 at 1:15 PM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:
>
>> Since hive and sp
initial version of window function support in 1.4.0. But it's not a promise
>> yet.
>>
>> Cheng
>>
>> On 3/26/15 7:27 PM, Arush Kharbanda wrote:
>>
>> It's not yet implemented.
>>
>> https://issues.apache.org/jira/browse/SPARK-1442
>>
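For reference, a sketch of the kind of windowing query at issue, assuming a HiveContext named hiveContext and a hypothetical table logs:

    // ROW_NUMBER() OVER (...) is the sort of HiveQL window function tracked
    // by SPARK-1442; a query like this would not parse before support landed.
    val ranked = hiveContext.sql(
      "SELECT user_id, ts, " +
      "ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn " +
      "FROM logs")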
.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics
>
>
> Is there a tutorial or some documentation where I can see all the features
> supported by Spark SQL?
>
>
> Thanks!!!
> --
>
>
> Regards.
> Miguel Ángel
>
ain$1(SparkSubmit.scala:166)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
>
it?
>
> --
> RGRDZ Harut
>
, is this correct?
>
> Thanks
>
e beginning or only a subset of it? How can I limit the size of the
> state that is kept in the checkpoint?
>
> Thank you
> -Binh
>
> On Tue, Mar 17, 2015 at 11:47 PM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:
>
>> Hi
>>
>> Yes spark streaming is cap
correct?
> If not, please correct me!
> - For the stateful aggregation, what does Spark Streaming keep when
> it saves a checkpoint?
>
> Please kindly help!
>
> Thanks
> -Binh
>
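On limiting state size: one pattern, sketched here on the assumption of an updateStateByKey-based aggregation with made-up types, is to return None from the update function for keys that are no longer needed, which removes them from the checkpointed state:

    // Sketch: bound checkpointed state by expiring idle keys.
    // Assumes a DStream[(String, Long)] named `events`.
    case class Agg(count: Long, lastSeen: Long)

    val update = (values: Seq[Long], state: Option[Agg]) => {
      val now  = System.currentTimeMillis()
      val prev = state.getOrElse(Agg(0L, now))
      val next = Agg(prev.count + values.sum,
                     if (values.nonEmpty) now else prev.lastSeen)
      // Returning None drops the key from the state (and from checkpoints).
      if (values.isEmpty && now - next.lastSeen > 3600000L) None else Some(next)
    }

    val counts = events.updateStateByKey(update)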
torCommand.execute(ConstructorCommand.java:68)
> at py4j.GatewayConnection.run(GatewayConnection.java:207)
> at java.lang.Thread.run(Unknown Source)
>
> What is wrong on my side?
>
> Should I run some scripts before spark-submit.cmd?
>
> Regards,
> Sergey.
>
it?
>
> I have Windows 7 and Spark 1.2.1.
>
> Sergey.
>
>
>
oop.version should be used. Can
> anybody please elaborate on how to specify that SBT should fetch hadoop-core
> from Intel, which is in our internal repository?
>
> Thanks & Regards,
> Meethu M
>
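The usual SBT approach is a resolver pointing at the internal repository plus an explicit dependency pin; a sketch with a placeholder URL and placeholder coordinates:

    // build.sbt sketch -- the URL, organization, and version are placeholders.
    resolvers += "internal-repo" at "https://repo.internal.example.com/maven"

    libraryDependencies +=
      "org.apache.hadoop" % "hadoop-core" % "1.0.3-intel" // hypothetical version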
s.package$TreeNodeException: Unresolved
> attributes: *, tree:
>
> How do I solve this?
>
>
> --
> Regards,
> Anusha
>
ka.actor.ActorSystemImpl.start(ActorSystem.scala:588)
>> at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
>> at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
>> at
>> org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
>> at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:54)
>> at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
>> at
>> org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1454)
>> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>> at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1450)
>> at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:56)
>> at org.apache.spark.SparkEnv$.create(SparkEnv.scala:156)
>> at org.apache.spark.SparkContext.<init>(SparkContext.scala:203)
>> at
>> com.algofusion.reconciliation.execution.utils.ExecutionUtils.<init>(ExecutionUtils.java:130)
>> ... 2 more
>>
>> Regards,
>> Sarath.
>>
>
>
has any tips on what I should look into, it would be appreciated.
>
> Thanks.
>
> Darin.
>
r fast
> forward time.
>
> Any feedback would be greatly appreciated!
>
> Thank you,
> Matus
>
ish
>
>
> My system:
>
> - ubuntu amd64
> - spark 1.2.1
> - yarn from hadoop 2.6 stable
>
>
> Thanks,
>
> Xi Shen
> about.me/davidshen <http://about.me/davidshen>
out.
>
> On Wed, Feb 18, 2015 at 11:06 AM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:
>
>> I find monoids pretty useful in this respect, basically separating out
>> the logic in a monoid and then applying the logic to either a stream or a
>> batch. A
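A minimal sketch of that pattern, with made-up names: the aggregation logic lives in one monoid, and the batch and streaming paths both just fold with it.

    // Sketch: one Monoid shared by RDD (batch) and DStream (streaming) paths.
    trait Monoid[A] {
      def zero: A
      def plus(x: A, y: A): A
    }

    case class Stats(count: Long, sum: Double) // hypothetical metric

    object StatsMonoid extends Monoid[Stats] {
      val zero = Stats(0L, 0.0)
      def plus(x: Stats, y: Stats) = Stats(x.count + y.count, x.sum + y.sum)
    }

    // Batch:     rdd.map(toStats).fold(StatsMonoid.zero)(StatsMonoid.plus)
    // Streaming: stream.map(toStats).reduce(StatsMonoid.plus)  // per batch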
the
> MetaRDD wrapper would delegate accordingly.
>
> I would just like to know the official best practice from the Spark
> community, though.
>
> Thanks,
>
ed data and Tomcat asks Spark.
>
> Is this the right way? Or is there a better way to connect my mobile
> apps with the Spark backend?
>
> I hope that I'm not the first one who wants to do this.
>
>
>
> Ralph
>
>
per.java:152)
>> at org.mortbay.jetty.Server.handle(Server.java:326)
>> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>> at
>> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>> at
>> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>> at
>> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>>
>> Powered by Jetty://
>>
>> --
>> Thanks & Regards,
>>
>> *Mukesh Jha *
>>
>
>
> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
>
> val jsonRdd = sqlContext.jsonRDD(results)
>
> val parquetTable = sqlContext.parquetFile(parquetFilePath)
>
> parquetTable.registerTempTable(tableName)
>
> jsonRdd.insertInto(tableName)
>
>
> Regards,
>
> Vasu
om Spark UI, I can see
>>> the nodes with maximum memory usage is consuming around 6GB, while
>>> "spark.executor.memory" is set to be 20GB.
>>>
>>> I am very confused that the program is not running fast enough even
>>> though hardware resources are not in short supply. Could you please give
>>> me some hints about what decides the performance of a Spark application
>>> from a hardware perspective?
>>>
>>> Thanks!
>>>
>>> Julaiti
>>>
>>>
>>
>
(size: 3.0 KB, free: 530.3 MB)
> 15/02/14 10:30:16 INFO BlockManagerMaster: Updated info of block
> broadcast_0_piece0
> 15/02/14 10:30:16 INFO SparkContext: Created broadcast 0 from broadcast at
> DAGScheduler.scala:838
> 15/02/14 10:30:16 INFO DAGScheduler: Submitting 3 missing tasks from Stage
> 0 (CassandraRDD[0]
>>> I am referring to https://issues.apache.org/jira/browse/SPARK-4925
>>> (Hive Thriftserver Maven Artifact). Can somebody point me (a URL) to the
>>> artifact in a public repository? I have not found it at Maven Central.
>>>
>>> Thanks,
>>> Mar
>
> 15/02/17 00:43:40 INFO scheduler.JobScheduler: Added jobs for time
> 142415187 ms
>
> But I didn't see the output from the code: "Received X flume events".
>
> I have no idea where the problem is. Any ideas? Thanks
>
>
> --
>
>
ameSize=1000.
>
> ..Manas
>
> On Thu, Feb 12, 2015 at 2:05 PM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:
>
>> What is your cluster configuration? Did you try looking at the Web UI?
>> There are many tips here
>>
>> http://spark.apache.org/
.scala:653)
> at
> org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$29.apply(Master.scala:399)
>
> Can anyone help?
>
> ..Manas
>
; I'm creating a development machine in AWS and I would like to protect the
> port 8080 using a password.
>
> Is it possible?
>
>
> Best Regards
>
> *Jairo Moreno*
>
park handle our use case?
>
> Any advice appreciated.
>
> Regards
> John
>
>
>
rk SQL?
>
> I am using Tableau 8.3 with the SparkSQL Connector.
>
> Thanks for the assistance.
>
> -Todd
>
> On Wed, Feb 11, 2015 at 2:34 AM, Arush Kharbanda <
> ar...@sigmoidanalytics.com> wrote:
>
>> BTW what tableau connector are you using?
>>
&
the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
--
[image: Sigmoid Analytics] <http://htmlsig.com/www.sigmoidanalytics.com>
*Arush Kharbanda* || Technical Teamlead
ar...@sigmoidanalytics.com || www.sigmoidanalytics.com
at
>>> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
>>> at
>>>
>>> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
>>> at
>>> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
>>> at
>>>
>>> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
>>>
>>> If you have any pointers for me on how to debug this, that would be very
>>> useful. I tried running with both spark 1.2.0 and 1.1.1, getting the same
>>> error.
>>>
>>>
>>>
>>>
BTW what tableau connector are you using?
On Wed, Feb 11, 2015 at 12:55 PM, Arush Kharbanda <
ar...@sigmoidanalytics.com> wrote:
> I am a little confused here: why do you want to create the tables in
> Hive? You want to create the tables in Spark SQL, right?
>
> If you are no
taStore.java:497)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
> ... 98 more
> Caused by: java.lang.ClassNotFoundException:
> org.datanucleus.exceptions.NucleusException
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 124 more
>
>
>
> Regards,
> Kundan
>
:35 PM, Todd Nist wrote:
>
>> Arush,
>>
>> Thank you, I will take a look at that approach in the morning. I sort of
>> figured the answer to #1 was NO and that I would need to do 2 and 3; thanks
>> for clarifying it for me.
>>
>> -Todd
>>
>> On Tue, Fe
1. Can the connector fetch or query SchemaRDDs saved to Parquet or JSON
files? No.
2. Do I need to do something to expose these via Hive / the metastore other
than creating a table in Hive? Create a table in Spark SQL to expose it via
Spark SQL (a sketch follows below).
3. Does the thriftserver need to be configured to expose t
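A sketch of point 2, with a hypothetical path and table name: registering the Parquet file as a table makes it visible to the JDBC/ODBC thriftserver, and hence to Tableau.

    // Sketch (run in the thriftserver's session, e.g. via beeline);
    // path and table name are made up.
    hiveContext.sql(
      "CREATE TEMPORARY TABLE parquet_events " +
      "USING org.apache.spark.sql.parquet " +
      "OPTIONS (path '/data/events.parquet')")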
No, zeromq api is not supported in python as of now.
On 5 Feb 2015 21:27, "Sasha Kacanski" wrote:
> Does pyspark support zeroMQ?
> I see that Java does, but I am not sure about Python.
> regards
>
> --
> Aleksandar Kacanski
>
k.shuffle.service.enabled" in the spark-defaults.conf.
>
> The code mentions that this is supposed to be run inside the Nodemanager
> so I'm assuming it needs to be wired up in the yarn-site.xml under the
> "yarn.nodemanager.aux-services" property?
>
>
>
>
--
Yes they are.
On Fri, Feb 6, 2015 at 5:06 PM, Mohit Durgapal
wrote:
> Just wanted to know if my emails are reaching the user list.
>
>
> Regards
> Mohit
>
ll,
> but how do I set it for programs running inside Eclipse?
>
> Regards,
>
s?
> 4. Why is it trying more ports?
>
> I look forward to your answers.
> Regards.
> Florin
>
>
>
>> pradhandeep1...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> Is there any better operation than union? I am using union and the
>>>>> cluster is getting stuck with a large data set.
>>>>>
>>>>> Thank you
>>>>>
>>>>
>>>>
>>>
>>
>
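One thing worth checking, as a sketch with stand-in inputs: a fold of pairwise unions builds a deep lineage, whereas SparkContext.union creates a single flat UnionRDD.

    // Sketch: prefer one n-way union over many chained pairwise unions.
    val parts  = (1 to 100).map(i => sc.parallelize(Seq(i))) // stand-in RDDs
    val merged = sc.union(parts)       // instead of parts.reduce(_ union _)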
Can you share your log4j file?
On Sat, Jan 31, 2015 at 1:35 PM, Arush Kharbanda wrote:
> Hi Ankur,
>
> It's running fine for me on Spark 1.1 with changes to the log4j
> properties file.
>
> Thanks
> Arush
>
> On Fri, Jan 30, 2015 at 9:49 PM, Ankur Srivastava <
>
Promise.ready(Promise.scala:219)
>>
>> at
>> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>>
>> at
>> scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>>
>> at
>> scala.c
> val sc = new SparkContext(conf)
> val data = sc.textFile("/home/amit/testData.csv").cache()
> val result = data.mapPartitions(pLines).groupByKey
> //val list = result.filter(x=> {(x._1).contains("24050881")})
>
> }
>
> }
>
>
> Here groupByKey is
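If the concern is groupByKey buffering every value for a key, a common alternative, sketched here on the assumption that pLines emits key/value pairs and only a per-key aggregate is needed, is reduceByKey:

    // Sketch: reduceByKey combines map-side and avoids materializing all
    // values of a key at once; the combine function is a placeholder.
    val result = data.mapPartitions(pLines).reduceByKey(_ + _)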
ache.org/maven2/org/apache/apache/14/apache-14.pom
>>
>> On Thu, Jan 29, 2015 at 11:35 AM, Soumya Simanta <
>> soumya.sima...@gmail.com> wrote:
>>
>>>
>>>
>>> On Thu, Jan 29, 2015 at 11:05 AM, Arush Kharbanda <
>>> ar...@sigmoidanaly
$mcV$sp(CoarseGrainedExecutorBackend.scala:125)
>
> at
> org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:53)
>
> at
> org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:52)
>
> ... 7 more
>
file: Connection
> refused from
> http://repo.maven.apache.org/maven2/org/apache/apache/14/apache-14.pom
> and 'parent.relativePath' points at wrong local POM @ line 21, column 11
> [error] Use 'last' for the full log.
> Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
>
>
>
> -Socrates
>
k.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:218)
> at
>
> org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:126)
>
>
>
>
>
-yarn-common 2.4.0, for example, but to no avail. I've also tried
> setting a number of different repositories to see if maybe one of them
> might have that dependency. Still no dice.
>
> What's the best way to resolve this for a quickstart situation? Do I have
> to