In general, all PRs should be made against master.  When necessary, we can
backport them to the 1.1 branch as well.  However, since that branch is in
code freeze, we'll only do that for major bug fixes at this point.


On Thu, Aug 21, 2014 at 10:58 AM, Dmitriy Lyubimov <[email protected]>
wrote:

> Ok, I'll try. I happen to do that a lot for other tools.
>
> So I'm guessing you're saying that if I wanted to do it now, I'd start
> against https://github.com/apache/spark/tree/branch-1.1 and open a PR
> against it?
>
>
> On Thu, Aug 21, 2014 at 12:28 AM, Michael Armbrust <[email protected]
> > wrote:
>
>> I do not know of any existing way to do this.  It should be possible
>> using the new public API for applying a schema to an RDD (available in
>> 1.1).  Basically, you'll need to convert the protobuf records into Rows
>> and also create a StructType that represents the schema.  With these two
>> things you can call the applySchema method on SQLContext.
>>
>> It would be great if you could contribute this back.
>>
>>
>> On Wed, Aug 20, 2014 at 5:57 PM, Dmitriy Lyubimov <[email protected]>
>> wrote:
>>
>>> Hello,
>>>
>>> is there any known work to adapt protobuf schemas to Spark SQL data
>>> sourcing? If not, would there be interest in contributing one?
>>>
>>> thanks.
>>> -d
>>>
>>
>>
>
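For reference, the approach Michael describes can be sketched roughly as
follows against the Spark 1.1 Scala API. This is a minimal sketch, not a
tested implementation: `Person` stands in for a hypothetical generated
protobuf message class with `name` and `age` fields, `sc` is an existing
SparkContext, and `protoRDD` is an `RDD[Person]` you have already built.

```scala
import org.apache.spark.sql.{SQLContext, Row}
import org.apache.spark.sql.{StructType, StructField, StringType, IntegerType}

val sqlContext = new SQLContext(sc)

// 1. Describe the protobuf message as a StructType (one StructField per field).
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", IntegerType, nullable = true)))

// 2. Convert each protobuf record into a Row, reading fields via the
//    generated accessors.
val rowRDD = protoRDD.map((p: Person) => Row(p.getName, p.getAge))

// 3. Apply the schema to get a SchemaRDD and register it for SQL queries.
val schemaRDD = sqlContext.applySchema(rowRDD, schema)
schemaRDD.registerTempTable("people")

sqlContext.sql("SELECT name FROM people WHERE age > 21")
```

A general contribution would presumably derive the StructType from the
protobuf `Descriptor` at runtime rather than hand-writing it as above.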