case?
>
> Regards,
> Gourav
>
> On Mon, May 2, 2016 at 5:59 PM, Ted Yu wrote:
>
>> That's my interpretation.
>>
>> On Mon, May 2, 2016 at 9:45 AM, Buntu Dev <buntu...@gmail.com> wrote:
>>
>>> Thanks Ted, I thought the avg. block size
On Sat, May 7, 2016 at 11:48 PM, Buntu Dev wrote:
> > I'm using the PySpark DataFrame API to sort by a specific column and then
> > saving the dataframe as a parquet file. But the resulting parquet file
> > doesn't seem to be sorted.
> >
> > Applying sort a
I'm using the PySpark DataFrame API to sort by a specific column and then
saving the dataframe as a parquet file. But the resulting parquet file
doesn't seem to be sorted.

Applying sort and doing a head() on the result shows the rows correctly
sorted by the 'value' column in descending order.
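
For illustration, a minimal sketch of the pattern being described, assuming a
DataFrame named df with a numeric 'value' column (the name and the output path
are placeholders, not from the thread):

from pyspark.sql.functions import desc

sorted_df = df.sort(desc("value"))          # global sort, descending by 'value'
sorted_df.head(5)                           # head() reflects the sorted order
sorted_df.write.parquet("/tmp/sorted_out")  # placeholder output path

Note that the write produces one part file per partition, so "sorted" in the
output is really a question of ordering across those part files.
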
On Mon, May 2, 2016 at 6:21 AM, Ted Yu wrote:
> Please consider decreasing the block size.
>
> Thanks
>
> > On May 1, 2016, at 9:19 PM, Buntu Dev wrote:
> >
> > I got a 10g limit on the executors and I'm operating on a parquet dataset
> > with a block size of 70M and 200 blocks.
I got a 10g limit on the executors and I'm operating on a parquet dataset
with a block size of 70M and 200 blocks. I keep hitting the memory limits when
doing a 'select * from t1 order by c1 limit 1000000' (i.e., 1M rows). It works
if I limit to, say, 100k. What are the options to save a large dataset without
running into these memory limits?
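
A sketch of the kind of query described, with one knob that may be worth
experimenting with (the table and column names come from the message; the
shuffle-partition setting and output path are my assumptions, not advice from
this thread):

# More, smaller shuffle partitions so no single task holds too much of the
# sorted data (assumption, not a confirmed fix):
sqlContext.setConf("spark.sql.shuffle.partitions", "400")

result = sqlContext.sql("SELECT * FROM t1 ORDER BY c1 LIMIT 1000000")
result.write.parquet("/tmp/top_1m")   # placeholder output path
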
at 6:01 PM, Krishna wrote:
> I recently encountered similar network-related errors and was able to fix
> them by applying the ethtool updates described here:
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-5085
>
> On Friday, April 29, 2016, Buntu Dev wrote:
e error
I would ultimately want to store the result set as parquet. Are there any
other options to handle this?
Thanks!
On Wed, Apr 27, 2016 at 11:10 AM, Buntu Dev wrote:
> I've got 14GB of parquet data. When I try to apply an ORDER BY using Spark
> SQL and save the first 1M rows, it keeps failing with "Connection reset by
> peer: socket write error" on the executors.
I've got 14GB of parquet data. When I try to apply an ORDER BY using Spark
SQL and save the first 1M rows, it keeps failing with "Connection reset by
peer: socket write error" on the executors.

I've allocated about 10g to both the driver and the executors along with
setting spark.driver.maxResultSize to 10g, but the error persists.
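
For reference, the allocation being described, expressed as configuration
(the 10g values come from the message; everything else is just a minimal
sketch of how such a job might be set up):

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = (SparkConf()
        .set("spark.executor.memory", "10g")
        .set("spark.driver.maxResultSize", "10g")
        # spark.driver.memory normally has to be set before the driver JVM
        # starts (spark-submit --driver-memory 10g or spark-defaults.conf);
        # setting it here may have no effect:
        .set("spark.driver.memory", "10g"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
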
> …reproduce it (could generate a fake
> dataset)?
>
> On Sat, Apr 9, 2016 at 4:33 PM, Buntu Dev wrote:
> > I've allocated about 4g for the driver. For the count stage, I notice the
> > Shuffle Write to be 13.9 GB.
> >
> > On Sat, Apr 9, 2016 at 11:43 AM, Ndjido Ardo
> Looks like the exception occurred on the driver.
>
> Consider increasing the values for the following config:
>
> conf.set("spark.driver.memory", "10240m")
> conf.set("spark.driver.maxResultSize", "2g")
>
> Cheers
>
> On Sat, Apr 9, 2016 a
> Regards,
> Jacek Laskowski
>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Sat, Apr 9, 2016 at 7:51 PM, Buntu Dev wrote:
> > I'm running th
I'm running this motif pattern against 1.5M vertices (5.5 MB) and 10M edges
(60 MB):

tgraph.find("(a)-[]->(b); (c)-[]->(b); (c)-[]->(d)")

I keep running into Java heap space errors:

ERROR actor.ActorSystemImpl: Uncaught fatal error from thread
[sparkDriver-akka.actor.default-dispatcher-33]
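
For context, roughly what that motif query looks like end to end, assuming
GraphFrames with vertex and edge DataFrames (the DataFrame names are
placeholders):

from graphframes import GraphFrame

# vertices need an 'id' column; edges need 'src' and 'dst' columns
tgraph = GraphFrame(vertices_df, edges_df)

motifs = tgraph.find("(a)-[]->(b); (c)-[]->(b); (c)-[]->(d)")
motifs.count()   # counting the matches is the stage with the heavy shuffle write
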
I've allocated about 4g for the driver. For the count stage, I notice the
Shuffle Write to be 13.9 GB.
On Sat, Apr 9, 2016 at 11:43 AM, Ndjido Ardo BAR wrote:
> What's the size of your driver?
> On Sat, 9 Apr 2016 at 20:33, Buntu Dev wrote:
>
>> Actually, df.show() wor
Actually, df.show() works, displaying 20 rows, but df.count() is the one
that causes the driver to run out of memory. There are just 3 INT columns.
Any idea what could be the reason?
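
A minimal illustration of the difference between the two calls (assumes an
existing DataFrame df; not from the thread):

df.show()           # only materializes the first 20 rows
total = df.count()  # runs a full job over every partition before returning
print(total)
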
On Sat, Apr 9, 2016 at 10:47 AM, wrote:
> You seem to have a lot of columns :-) !
> df.count() displays the si
I tried setting both the HDFS and Parquet block sizes, but the write to
parquet did not seem to have any effect on the total number of blocks or the
average block size. Here is what I did:

sqlContext.setConf("dfs.blocksize", "134217728")
sqlContext.setConf("parquet.block.size", "134217728")
sql
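
One thing that may be worth trying (an assumption on my part, not something
confirmed in this thread) is setting these properties on the underlying
Hadoop configuration rather than through sqlContext.setConf, since both are
Hadoop/Parquet output settings rather than Spark SQL settings:

# sc is the SparkContext behind sqlContext; _jsc is its wrapped JavaSparkContext
hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set("dfs.blocksize", "134217728")        # 128 MB HDFS block size
hadoop_conf.set("parquet.block.size", "134217728")   # 128 MB Parquet row group size
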
You may want to read this post regarding Spark with Drools:
http://blog.cloudera.com/blog/2015/11/how-to-build-a-complex-event-processing-app-on-apache-spark-and-drools/
On Wed, Nov 4, 2015 at 8:05 PM, Daniel Mahler wrote:
> I am not familiar with any rule engines on Spark Streaming or even pl
Thanks.. I was using Scala 2.11.1 and was able to
use algebird-core_2.10-0.1.11.jar with spark-shell.
On Thu, Oct 30, 2014 at 8:22 AM, Ian O'Connell wrote:
> What's the error with the 2.10 version of algebird?
>
> On Thu, Oct 30, 2014 at 12:49 AM, thadude wrote:
>
>> I've tried:
>>
>> . /bin/spa
Thanks Akhil.
On Mon, Oct 20, 2014 at 1:57 AM, Akhil Das wrote:
> It's a known bug with JDK7 and OSX's naming convention; here's how to resolve
> it:
> it:
>
> 1. Get the Snappy jar file from
> http://central.maven.org/maven2/org/xerial/snappy/snappy-java/
> 2. Copy the appropriate one to your project'
wrote:
> Your RDD does not contain pairs, since you ".map(_._2)" (BTW that can
> just be ".values"). "Hadoop files" means "SequenceFiles" and those
> store key-value pairs. That's why the method only appears for
> RDD[(K,V)].
>
> On Fri,
Thanks Sean, but I'm importing org.apache.spark.streaming.StreamingContext._

Here are the Spark imports:

import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.kafka._
import org.apache.spark.SparkConf
val stream =
> …then it may allow if the schema
> changes are append-only. Otherwise existing Parquet files have to be
> migrated to the new schema.
>
> - Original Message -
> From: "Buntu Dev"
> To: "Soumitra Kumar"
> Cc: u...@spark.incubator.apache.org
> Sent: T
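
As an aside (my addition, not something discussed in this thread): in later
Spark releases, append-only schema changes across Parquet files can be
reconciled at read time with schema merging, e.g.:

# Read Parquet files written with different (append-only) schemas into one DataFrame
df = sqlContext.read.option("mergeSchema", "true").parquet("/data/events")  # placeholder path
df.printSchema()   # shows the union of the columns seen across the files
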
Thanks for the info Soumitra.. it's a good start for me.
Just wanted to know how you are managing schema changes/evolution as
parquetSchema is provided to setSchema in the above sample code.
On Tue, Oct 7, 2014 at 10:09 AM, Soumitra Kumar wrote:
> I have used it to write Parquet files as:
>
> va
Thanks for the update.. I'm interested in writing the results to MySQL as
well. Can you shed some light or share a code sample on how you set up the
driver/connection pool/etc.?
On Thu, Sep 25, 2014 at 4:00 PM, maddenpj wrote:
> Update for posterity, so once again I solved the problem shortly after
> po
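
The usual pattern here (a sketch under my own assumptions; the actual setup
from the thread isn't shown in the snippet) is to open one connection per
partition with foreachPartition rather than one per record:

import MySQLdb   # any DB-API driver would do; the driver choice is hypothetical

def save_partition(rows):
    # one connection per partition, reused for every row in that partition
    conn = MySQLdb.connect(host="dbhost", user="dbuser", passwd="secret", db="mydb")  # placeholder credentials
    cur = conn.cursor()
    for k, v in rows:  # assumes the RDD holds (key, value) pairs
        cur.execute("INSERT INTO results (k, v) VALUES (%s, %s)", (k, v))  # placeholder table/columns
    conn.commit()
    conn.close()

result_rdd.foreachPartition(save_partition)   # result_rdd is a placeholder name
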
Thanks Michael for confirming!
On Thu, Jul 31, 2014 at 2:43 PM, Michael Armbrust wrote:
> The performance should be the same using the DSL or SQL strings.
>
>
> On Thu, Jul 31, 2014 at 2:36 PM, Buntu Dev wrote:
>
>> I was not sure if registerAsTable() and then query again
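
To make the comparison concrete, the two styles in question look roughly like
this (table and column names are placeholders, and this uses the later
DataFrame API rather than the 1.0-era SchemaRDD DSL from the thread):

df.registerTempTable("events")

via_sql = sqlContext.sql("SELECT user, COUNT(*) AS cnt FROM events GROUP BY user")
via_dsl = df.groupBy("user").count()   # both go through the same optimizer, so
                                       # performance should match, per Michael's reply
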
> …ing in 1.0.0 using DSL only. Just curious,
> why don't you use the hql() / sql() methods and pass a query string
> in?
>
> [1] https://github.com/apache/spark/pull/1211/files
>
> On Thu, Jul 31, 2014 at 2:20 PM, Buntu Dev wrote:
> > Thanks Zongheng for the pointer. Is
Thanks Zongheng for the pointer. Is there a way to achieve the same in
1.0.0?
On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang wrote:
> countDistinct was recently added and is in 1.0.2. If you are using that
> or the master branch, you could try something like:
>
> r.select('keyword, countDis
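
For reference, the equivalent with the PySpark DataFrame API that came later
(illustrative only; the thread itself is about the 1.0.x Scala DSL, and r is
assumed to be a DataFrame with a 'keyword' column):

from pyspark.sql.functions import countDistinct

r.select(countDistinct("keyword")).show()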