Hello, I'm on Spark 2.1.0 with Scala, registering all classes with
Kryo, and I have a problem registering this class:
org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex$SerializableFileStatus$SerializableBlockLocation[]
I can't register it with
classOf[Array[Class.forNam
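The attempt above is cut off, but the underlying problem is that `classOf[...]` can't name a private Spark-internal class, so its array type can't be written in source. A common workaround (a sketch, not confirmed by this thread) is to look the array class up by its JVM binary name with `Class.forName`; the `String` example below just demonstrates the `"[L...;"` notation on a public class:

```scala
// Sketch: obtaining an array Class at runtime when the element class is
// private, so it can be passed to Kryo registration. JVM binary names for
// object-array classes use the "[Lfully.qualified.Name;" form.
object ArrayClassByName {
  def main(args: Array[String]): Unit = {
    // Demonstrated on a public class; the same notation works for any
    // class visible on the classpath.
    val stringArrayClass = Class.forName("[Ljava.lang.String;")
    assert(stringArrayClass == classOf[Array[String]])

    // For the Spark-internal class from the thread (requires spark-sql on
    // the classpath; shown commented out since it won't compile standalone):
    // val cls = Class.forName(
    //   "[Lorg.apache.spark.sql.execution.datasources." +
    //   "PartitioningAwareFileIndex$SerializableFileStatus$SerializableBlockLocation;")
    // sparkConf.registerKryoClasses(Array(cls))

    println(stringArrayClass.getName)
  }
}
```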
Hello,
Do you use df.write, or do you do it with hiveContext.sql("insert into ...")?
Angel.
On Jun 12, 2017, 11:07 PM, "Yong Zhang" wrote:
> We are using Spark *1.6.2* as ETL to generate parquet file for one
> dataset, and partitioned by "brand" (which is a string to represent brand
> in this
Thanks,
Asmath
On Tue, May 2, 2017 at 1:38 PM, Angel Francisco Orta <
angel.francisco.o...@gmail.com> wrote:
> Have you tried partitioning by the join field and running the join in
> segments, filtering both tables to the same segments of data?
>
> Example:
>
> Val ta
join on these tables now.
>
>
>
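The segmented-join idea suggested above can be sketched as follows, using plain Scala collections as a stand-in for the two DataFrames (the data and the modulo-based segmentation are hypothetical illustrations, not from the thread):

```scala
// Sketch: join two tables segment by segment. Both sides are filtered to
// the same slice of the key space, joined within that slice, and the
// per-segment results are concatenated.
case class Row(key: Int, value: String)

val table1 = Seq(Row(1, "a1"), Row(2, "a2"), Row(3, "a3"), Row(4, "a4"))
val table2 = Seq(Row(1, "b1"), Row(3, "b3"), Row(4, "b4"))

// Segment the join-key space (here simply key modulo numSegments).
val numSegments = 2
val joined = (0 until numSegments).flatMap { seg =>
  val left  = table1.filter(_.key % numSegments == seg)
  val right = table2.filter(_.key % numSegments == seg)
  // Inner join within the segment.
  for (l <- left; r <- right; if l.key == r.key)
    yield (l.key, l.value, r.value)
}
// joined holds one tuple per matching key: keys 1, 3, and 4.
```

With real DataFrames the same shape applies: filter both sides on the partition column for each segment, join, and union the results, so each individual join shuffles far less data.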
> On Tue, May 2, 2017 at 1:27 PM, Angel Francisco Orta <
> angel.francisco.o...@gmail.com> wrote:
>
>> Hello,
>>
>> Are the tables partitioned?
>> If yes, what is the partition field?
>>
>> Thanks
>>
>>
Hello,
Are the tables partitioned?
If yes, what is the partition field?
Thanks
On May 2, 2017, 8:22 PM, "KhajaAsmath Mohammed" wrote:
Hi,
I am trying to join two big tables in Spark and the job has been running for
quite a long time without any results.
Table 1: 192 GB
Table 2: 92 GB
Does any