On Tue, Aug 16, 2016 at 10:51 PM, Yin Huai wrote:
> Do you want to try it?
Yes, indeed! I'd be more than happy. Guide me if you don't mind. Thanks.
Should I create a JIRA for this?
Jacek
Hi Jacek,
We try to create the default database if it does not exist. Hive
actually relies on that AlreadyExistsException to determine whether a
database already exists, and it ignores the error to implement the logic of
"CREATE DATABASE IF NOT EXISTS". So, that message does not mean anything
bad happened.
Hi,
I'm working with today's build and am facing the issue:
scala> Seq(A(4)).toDS
16/08/16 19:26:26 ERROR RetryingHMSHandler:
AlreadyExistsException(message:Database default already exists)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:891)
.
Hi Tim,
AWESOME. Thanks a lot for releasing it. That makes me even more eager
to see it in Spark's codebase (and replacing the current RDD-based
API)!
Regards,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
Follow me at
Welcome Felix!
On Mon, Aug 15, 2016 at 6:16 AM, mayur bhole
wrote:
> Congrats Felix!
>
> On Mon, Aug 15, 2016 at 2:57 PM, Paul Roy wrote:
>
>> Congrats Felix
>>
>> Paul Roy.
>>
>> On Mon, Aug 8, 2016 at 9:15 PM, Matei Zaharia
>> wrote:
>>
>>> Hi all,
>>>
>>> The PMC recently voted to add Felix
Hi Tim,
Could you share a link to the release docs as well?
Thanks,
Shagun
https://twitter.com/shagunsodhani
On Tue, Aug 16, 2016 at 10:02 PM, Tim Hunter
wrote:
> Hello all,
> I have released version 0.2.0 of the GraphFrames package. Apart from a few
> bug fixes, it is the first release publishe
Hello all,
I have released version 0.2.0 of the GraphFrames package. Apart from a few
bug fixes, it is the first release published for Spark 2.0 and both Scala
2.10 and 2.11. Please let us know if you have any comments or questions.
It is available as a Spark package:
https://spark-packages.org/pac
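For anyone who wants to try it right away, a minimal sketch (the --packages
coordinates and the sample data are my assumptions, following the usual
Spark Packages naming):

// Start the shell with the package, e.g.:
//   spark-shell --packages graphframes:graphframes:0.2.0-spark2.0-s_2.11
import org.graphframes.GraphFrame

// GraphFrame expects an "id" column on vertices and "src"/"dst" on edges.
val vertices = spark.createDataFrame(Seq(
  ("a", "Alice"), ("b", "Bob"), ("c", "Carol")
)).toDF("id", "name")

val edges = spark.createDataFrame(Seq(
  ("a", "b", "follows"), ("b", "c", "follows")
)).toDF("src", "dst", "relationship")

val g = GraphFrame(vertices, edges)
g.inDegrees.show() // incoming-edge count per vertex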
Hi,
I have been using a standalone Spark cluster (v1.4.x) with the following
configuration: 2 nodes, each running a worker with 1 core and 4g of memory.
So my app had 2 executors with 2 cores and 8g of memory in total.
I have a table in a MySQL database which has around 10 million rows. It has
In addition, you can also set spark.sql.adaptive.enabled=true (default:
false) to enable adaptive query execution.
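For example, a sketch of setting it on a Spark 2.x SparkSession (named
spark in the shell); it can also be passed at submit time:

spark.conf.set("spark.sql.adaptive.enabled", "true") // default is false
// or at launch: spark-submit --conf spark.sql.adaptive.enabled=true ...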
That's the default number of shuffle partitions in Spark. You can tune it
using spark.sql.shuffle.partitions; see the sketch after the quoted message
below.
Regards,
Rishitesh Mishra,
SnappyData . (http://www.snappydata.io/)
https://in.linkedin.com/in/rishiteshmishra
On Tue, Aug 16, 2016 at 11:31 AM, Niranda Perera
wrote:
> Hi,
>
> I ran the follo
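A minimal sketch of the behavior being discussed and the tuning knob
(assumes a Spark 2.x spark-shell; the partition count 16 is just an
illustrative choice):

val df = spark.range(1000).toDF("id")
// Any shuffle (here, a group-by) produces spark.sql.shuffle.partitions
// partitions on the result: 200 by default.
println(df.groupBy("id").count().rdd.getNumPartitions) // 200

spark.conf.set("spark.sql.shuffle.partitions", "16")
println(df.groupBy("id").count().rdd.getNumPartitions) // 16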