Well, I could try that, but the *partitionBy* method is anyway only
supported for the Parquet format, even in Spark 1.5.1.
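Since the built-in writer reportedly won't partition the json output here, one workaround is to do the partitioning by hand before loading the files into the other database. The sketch below is a minimal, hypothetical helper (not a Spark API): it groups JSON records by a column value and writes one JSON-lines file per value, mimicking Hive-style `column=value` output directories. The function name, the sample records, and the output path are all made up for illustration.

```python
import json
import os
from collections import defaultdict

def partition_json_by_column(records, column, out_dir):
    """Group records by the value of `column` and write one JSON-lines
    file per value, mimicking Hive-style partitioned layout:
    out_dir/column=value/part-00000.json"""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[column]].append(rec)
    for value, recs in groups.items():
        part_dir = os.path.join(out_dir, "%s=%s" % (column, value))
        os.makedirs(part_dir, exist_ok=True)
        with open(os.path.join(part_dir, "part-00000.json"), "w") as f:
            for rec in recs:
                f.write(json.dumps(rec) + "\n")

# Hypothetical sample data: partition three records by "country".
records = [
    {"id": 1, "country": "AM"},
    {"id": 2, "country": "US"},
    {"id": 3, "country": "AM"},
]
partition_json_by_column(records, "country", "/tmp/json_out")
```

For small-to-medium data this sidesteps the format limitation entirely; with a DataFrame you could collect each group (or filter per distinct column value) and write it out the same way.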

Narek

Narek Galstyan

Նարեկ Գալստյան

On 27 December 2015 at 21:50, Ted Yu <yuzhih...@gmail.com> wrote:

> Is upgrading to 1.5.x a possibility for you ?
>
> Cheers
>
> On Sun, Dec 27, 2015 at 9:28 AM, Նարեկ Գալստեան <ngalsty...@gmail.com>
> wrote:
>
>>
>> http://spark.apache.org/docs/1.4.1/api/scala/index.html#org.apache.spark.sql.DataFrameWriter
>>  I did try, but it was all in vain.
>> The API docs also state explicitly that it only supports Parquet.
>>
>>
>> Narek Galstyan
>>
>> Նարեկ Գալստյան
>>
>> On 27 December 2015 at 17:52, Igor Berman <igor.ber...@gmail.com> wrote:
>>
>>> have you tried specifying the format of your output? Parquet might be
>>> the default format:
>>> df.write().format("json").mode(SaveMode.Overwrite).save("/tmp/path");
>>>
>>> On 27 December 2015 at 15:18, Նարեկ Գալստեան <ngalsty...@gmail.com>
>>> wrote:
>>>
>>>> Hey all!
>>>> I would like to partition *json* data by a column name and store the
>>>> result as a collection of json files to be loaded into another database.
>>>>
>>>> I could use Spark's built-in *partitionBy* function, but it only outputs
>>>> in parquet format, which is not desirable for me.
>>>>
>>>> Could you suggest a way to deal with this problem?
>>>> Narek Galstyan
>>>>
>>>> Նարեկ Գալստյան
>>>>
>>>
>>>
>>
>