>
> rdd.mapPartitions {
>   result.foreach
>   result
> }
>
>
>
> On Thu, Aug 27, 2015 at 2:22 PM, Ahmed Nawar
> wrote:
>
>> Yes, of course, I am doing that. But once I added results.foreach(row =>
>> {}) I got an empty RDD.
>>
>>
>>
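Putting the thread's advice together, here is a minimal sketch of the mapPartitions variant, assuming the rows still need to flow downstream as a new RDD. The key point (my reading of the exchange above) is to materialize the partition's results before committing, so that the iterator returned to Spark has not already been consumed. DBConnectionInit, DBConnection and processRow stand in for the poster's own helpers and are not real APIs.

val processed = rdd.mapPartitions { partitionOfRecords =>
  DBConnectionInit()
  // Iterator.map is lazy; .toList forces every row to be processed (and
  // written) before the commit below runs.
  val results = partitionOfRecords.map(processRow).toList
  DBConnection.commit()
  results.iterator   // mapPartitions expects an Iterator back
}
// mapPartitions itself is still lazy; an action is needed to run it, e.g.:
processed.count()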
On Thu, Aug 27, 2015 at 10:18 PM, Cody Koeninger wrote:
> You need to return an iterator from the closure you provide to
> mapPartitions
>
> On Thu, Aug 27, 2015 at 1:42 PM, Ahmed Nawar
> wrote:
>
>> Thanks for the foreach idea. But once I used it I got an empty RDD. I think
>> because
On Thu, Aug 27, 2015 at 5:11 PM, Cody Koeninger wrote:
>
> Map is lazy. You need an actual action, or nothing will happen. Use
> foreachPartition, or do an empty foreach after the map.
>
> On Thu, Aug 27, 2015 at 8:53 AM, Ahmed Nawar
> wrote:
>
>> Dears,
>>
>> I need to commit a DB transaction for each partition, not for each row.
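For completeness, a one-liner illustrating the "empty foreach after the map" idea mentioned above: foreach is an action, so it forces the otherwise-lazy transformation to run for its side effects. processPartition is a placeholder for the per-partition logic, not something from the original code.

// foreach(_ => ()) is an action: it triggers the lazy mapPartitions work
// even though the results themselves are discarded.
rdd.mapPartitions(processPartition).foreach(_ => ())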
Dears,
I need to commit a DB transaction for each partition, not for each row.
The code below didn't work for me.
rdd.mapPartitions(partitionOfRecords => {
  DBConnectionInit()
  val results = partitionOfRecords.map(..)
  DBConnection.commit()
  results
})
Best regards,
Ahmed Atef Nawwar
Data Management
Dears,
I need to commit a DB transaction for each partition, not for each row.
The code below didn't work for me.
rdd.mapPartitions(partitionOfRecords => {
  DBConnectionInit()
  val results = partitionOfRecords.map(..)
  DBConnection.commit()
})
Best regards,
Ahmed Atef Nawwar
Data Management
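If the per-partition results only need to be written to the database rather than returned as a new RDD, Cody's foreachPartition suggestion leads to something like the sketch below. It reuses the placeholder names from the post above (DBConnectionInit, DBConnection); processRow is assumed.

// foreachPartition is an action, so it runs eagerly; one connection and one
// commit per partition.
rdd.foreachPartition { partitionOfRecords =>
  DBConnectionInit()
  partitionOfRecords.foreach(processRow)
  DBConnection.commit()
}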
> On Mon, Mar 23, 2015 at 2:18 PM, Ahmed Nawar
> wrote:
>
>> Dears,
>>
>> Is there any way to validate CSV, JSON, etc. files while loading them into a
>> DataFrame?
>> I need to ignore corrupted rows (rows that do not match the
>> schema).
>>
>>
>> Thanks,
>> Ahmed Nawwar
>>
>
>
Dear Taotao,
Yes, I tried sparkCSV.
Thanks,
Nawwar
On Mon, Mar 23, 2015 at 12:20 PM, Taotao.Li wrote:
> Can it load successfully if the format is invalid?
>
> --
> *From: *"Ahmed Nawar"
> *To: *user@spark.apache.org
> *Sent: *
Dears,
Is there any way to validate CSV, JSON, etc. files while loading them into a
DataFrame?
I need to ignore corrupted rows (rows that do not match the
schema).
Thanks,
Ahmed Nawwar
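One way this was commonly handled at the time, sketched below under the assumption of the Spark 1.4+ DataFrameReader API together with the external spark-csv package (sc is an existing SparkContext; the paths are made up). DROPMALFORMED mode drops CSV rows that do not fit the schema, and for JSON the unparseable lines end up in the _corrupt_record column, which can be filtered out.

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)

// CSV via the spark-csv package: DROPMALFORMED silently skips malformed rows.
val csvDf = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("mode", "DROPMALFORMED")
  .load("/path/to/data.csv")

// JSON: rows that fail to parse land in _corrupt_record; keep only clean rows.
val jsonDf = sqlContext.read.json("/path/to/data.json")
val cleanJson =
  if (jsonDf.columns.contains("_corrupt_record"))
    jsonDf.filter(jsonDf("_corrupt_record").isNull).drop("_corrupt_record")
  else jsonDf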
Dear Yu,
Do you mean "scalastyle-output.xml"? I copied its content below.
On Tue, Mar 17, 2015 at 4:11 PM, Ted Yu wrote:
> Can you look in the build output for the scalastyle warning in the mllib module?
>
> Cheers
>
>
>
> On Mar 17, 2015, at 3:00 AM,
Thanks
>
>
>
> On Mar 17, 2015, at 1:47 AM, Ahmed Nawar wrote:
>
> Dears,
>
> Are there any instructions to build Spark 1.3.0 on Windows 7?
>
> I tried "mvn -Phive -Phive-thriftserver -DskipTests clean package" but
> I got the errors below
>
>
>
Sorry for the old subject; I am correcting it.
On Tue, Mar 17, 2015 at 11:47 AM, Ahmed Nawar wrote:
> Dears,
>
> Are there any instructions to build Spark 1.3.0 on Windows 7?
>
> I tried "mvn -Phive -Phive-thriftserver -DskipTests clean package" but
> I got the errors below
Dears,
Are there any instructions to build Spark 1.3.0 on Windows 7?
I tried "mvn -Phive -Phive-thriftserver -DskipTests clean package" but
I got the errors below:
[INFO] Spark Project Parent POM ... SUCCESS [ 7.845 s]
[INFO] Spark Project Networking .