at 6:01 PM Yuming Wang wrote:
>
>> +1 for this proposal.
>>
>> On Fri, Apr 16, 2021 at 5:15 AM Karen wrote:
>>
>>> We could leave space in the numbering system, but a more flexible method
>>> may be to have the severity as a field associated with the
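The severity-as-a-field idea quoted above can be sketched concretely: instead of encoding severity into the numbering scheme, each error entry carries severity as plain data, so it can change without renumbering. A minimal sketch; the class and field names here are my own illustration, not Spark's actual error framework:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    WARNING = "WARNING"
    ERROR = "ERROR"
    FATAL = "FATAL"

@dataclass(frozen=True)
class ErrorSpec:
    """One error-catalog entry; names are illustrative placeholders."""
    error_id: str       # opaque, stable identifier; carries no severity
    error_class: str    # machine-readable class, e.g. COLUMN_NOT_FOUND
    severity: Severity  # severity travels as data, adjustable without renumbering

# Example entry, reusing the placeholder ID from this thread's examples.
missing_column = ErrorSpec("SPK-12345", "COLUMN_NOT_FOUND", Severity.ERROR)
```

The point of the design is that reclassifying an error's severity touches only the `severity` field; the ID and everything that references it stay stable.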
I've created a PR to add the error message guidelines to the Spark
contributing guide. Would appreciate some eyes on it!
https://github.com/apache/spark-website/pull/332
On Wed, Apr 14, 2021 at 5:34 PM Yuming Wang wrote:
> +1 LGTM.
>
> On Thu, Apr 15, 2021 at 1:50 AM Karen wrote:
same problem. S3 throttling: 503.
> DynamoDB: 500 + one of two different messages (see
> com.amazonaws.retry.RetryUtils for the details).
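The point above — that different AWS services signal throttling differently — can be sketched as a small classifier: S3 reports throttling as HTTP 503, while (per the thread) DynamoDB uses HTTP 500 plus one of a couple of specific messages. The authoritative logic lives in com.amazonaws.retry.RetryUtils in the AWS Java SDK; the marker strings and function name below are illustrative assumptions, not the SDK's actual constants:

```python
# Illustrative stand-ins for the DynamoDB throttle messages; the real
# set is maintained in com.amazonaws.retry.RetryUtils, not here.
DYNAMO_THROTTLE_MARKERS = (
    "ThrottlingException",
    "ProvisionedThroughputExceededException",
)

def is_throttling_response(service: str, status: int, message: str = "") -> bool:
    """Return True if an HTTP response looks like service-side throttling."""
    if service == "s3":
        # S3 signals throttling with a plain 503.
        return status == 503
    if service == "dynamodb":
        # DynamoDB signals it with a 500 plus a recognizable message.
        return status == 500 and any(m in message for m in DYNAMO_THROTTLE_MARKERS)
    return False
```

This is exactly the kind of per-service special-casing that a shared set of stable error IDs would let clients avoid.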
>
> On Wed, 14 Apr 2021 at 20:04, Karen wrote:
>
>> Hi all,
>>
>> We would like to kick off a discussion on adding error ID
End result:

Before:
AnalysisException: Cannot find column 'fakeColumn'; line 1 pos 14;

After:
AnalysisException: SPK-12345 COLUMN_NOT_FOUND: Cannot find column 'fakeColumn'; line 1 pos 14; (SQLSTATE 42704)
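One benefit of the structured "After" format is that tools can split the message back into its parts mechanically. A minimal sketch, assuming exactly the layout shown above (the SPK-12345 ID, COLUMN_NOT_FOUND class, and SQLSTATE are the example's placeholders, not real Spark identifiers):

```python
import re

# Matches the proposed layout: "SPK-<n> <ERROR_CLASS>: <message> (SQLSTATE <code>)"
ERROR_PATTERN = re.compile(
    r"(?P<id>SPK-\d+)\s+"                  # stable error ID
    r"(?P<error_class>[A-Z_]+):\s+"        # machine-readable error class
    r"(?P<message>.+?)\s*"                 # human-readable message text
    r"\(SQLSTATE (?P<sqlstate>\w{5})\)$"   # ANSI SQLSTATE code
)

def parse_error(message: str) -> dict:
    """Split a structured error message into its labeled parts."""
    match = ERROR_PATTERN.search(message)
    if match is None:
        raise ValueError(f"not a structured error message: {message!r}")
    return match.groupdict()
```

For the "After" string above, `parse_error` would yield the ID, class, message text, and SQLSTATE as separate fields, which is what makes the format greppable and machine-checkable.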
Please let us know what you think about this proposal! We'd love to hear
your feedback.
Best,
Karen Feng
days to make sure
>> people are happy with it.
>>
>> On Wed, Apr 14, 2021 at 6:38 AM, Karen wrote:
If the proposed guidelines look good, it would be useful to share these
guidelines with the wider community. A good landing page for contributors
could be https://spark.apache.org/contributing.html. What do you think?
Thank you,
Karen Feng
On Wed, Apr 7, 2021 at 8:19 PM Hyukjin Kwon wrote:
rough
draft to kick off this discussion:
https://docs.google.com/document/d/12k4zmaKmmdm6Pk63HS0N1zN1QT-6TihkWaa5CkLmsn8/edit?usp=sharing
.
Please let us know what you think should be in the guideline! We look
forward to building this as a community.
Thank you,
Karen Feng
Hi all,
I am concerned that the API-breaking changes in SPARK-25908 (as well as
SPARK-16775, and potentially others) will make the migration process from
Spark 2 to Spark 3 unnecessarily painful. For example, the removal of
SQLContext.getOrCreate will break a large number of libraries currently
bu