That would be me then ;-)

I'm working on a patch.
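
For the patch I'm leaning towards going through BigInteger's public API
instead of reflection: signum() and toByteArray() are guaranteed by the
spec on every compliant JVM. A rough sketch of the idea (the class and
method names here are just mine for illustration, not the final patch):

    import java.math.BigInteger;

    // Sketch only: round-trip a BigInteger using nothing but
    // spec-guaranteed public API, so it behaves the same on
    // OpenJDK, IBM J9, or any other compliant runtime.
    public class PortableBigInteger {

      public static byte[] toBytes(BigInteger value) {
        // toByteArray() returns the minimal two's-complement,
        // big-endian representation; the sign is encoded in the
        // bytes themselves, so the private "signum"/"mag" fields
        // are never needed.
        return value.toByteArray();
      }

      public static BigInteger fromBytes(byte[] bytes) {
        // The (byte[]) constructor is the inverse of toByteArray().
        return new BigInteger(bytes);
      }
    }

It does cost an extra array copy compared with reading "mag" directly,
so I'll need to measure the impact, but it is guaranteed portable.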

Cheers,

On 14 August 2015 at 23:43, Reynold Xin <r...@databricks.com> wrote:

> I pinged the IBM team to submit a patch that would work on the IBM JVM.
>
>
> On Fri, Aug 14, 2015 at 11:27 AM, Pete Robbins <robbin...@gmail.com>
> wrote:
>
>> ref: https://issues.apache.org/jira/browse/SPARK-9370
>>
>> The code to handle BigInteger types in
>>
>> org.apache.spark.sql.catalyst.expressions.UnsafeRowWriters.java
>>
>> and
>>
>> org.apache.spark.unsafe.Platform.java
>>
>> is dependent on the internal implementation of java.math.BigInteger.
>>
>> e.g.:
>>
>>       try {
>>         signumOffset = _UNSAFE.objectFieldOffset(
>>             BigInteger.class.getDeclaredField("signum"));
>>         magOffset = _UNSAFE.objectFieldOffset(
>>             BigInteger.class.getDeclaredField("mag"));
>>       } catch (Exception ex) {
>>         // should not happen
>>       }
>>
>> This relies on the class declaring the private fields "int signum" and
>> "int[] mag".
>>
>> These implementation fields are not part of the Java specification for
>> this class, so they cannot be relied upon.
>>
>> We are running Spark on IBM JDKs, whose spec-compliant implementation
>> of BigInteger has different internal fields, and this causes an abort
>> on those Java runtimes. There is also no guarantee that future OpenJDK
>> implementations will keep these field names.
>>
>> I think we need to reimplement these Spark functions so that they rely
>> only on the public, spec-defined API of these classes rather than on
>> the internals of one particular implementation.
>>
>> Any thoughts?
>>
>> Cheers,
>>
>
