Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1658#issuecomment-61379142
Thanks @kmader, I merged this now. I manually amended the patch a bit to
fix style issues (there were still a bunch of commas without spaces, etc.), and
I also changed the name of the `recordLength` property in Hadoop JobConfs to
start with `org.apache.spark`, so that it's less likely to clash with other Hadoop
properties. Finally, I marked this API as `@Experimental` for now since it's new
in this release, though we can probably make it non-experimental in 1.3.
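The property rename described above is the standard way to avoid key collisions in a shared Hadoop configuration: prefix the key with a reverse-domain namespace owned by your project. A minimal Scala sketch of the idea (the constant name and the `java.util.Properties` store below are illustrative assumptions, not the merged patch's actual code):

```scala
// Hedged sketch: namespacing a configuration key under org.apache.spark
// so it cannot collide with a generic Hadoop property named "recordLength".
object FixedLengthRecordConf {
  // A bare key like "recordLength" could be set by any other job or library;
  // a fully qualified key is effectively unique to this input format.
  val RecordLengthProperty: String =
    "org.apache.spark.input.FixedLengthBinaryInputFormat.recordLength"
}

object Demo extends App {
  // Stand-in for a Hadoop JobConf: a plain string-keyed property store.
  val conf = new java.util.Properties()
  conf.setProperty(FixedLengthRecordConf.RecordLengthProperty, "1024")

  // The namespaced key round-trips cleanly and leaves "recordLength" untouched.
  println(conf.getProperty(FixedLengthRecordConf.RecordLengthProperty))
}
```

The same reasoning applies to any key placed in a `Configuration` object that is shared across libraries: the prefix documents ownership and prevents one job's setting from silently overriding another's.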