Well, technically speaking, annotations and the actual code are not the
same thing. Many parts of the Spark API might require heavy overloads,
either to capture relationships between arguments (for example in the
case of ML) or to capture at least rudimentary relationships between
inputs and outputs (e.g. UDFs).
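To illustrate the point, here is a minimal sketch (not the actual
pyspark stubs; the `udf` wrapper below is a hypothetical stand-in) of
how `typing.overload` can tie a wrapped function's output type to its
input type, which a single inline annotation cannot express:

```python
# Illustrative sketch only, assuming a simplified udf-style wrapper.
# typing.overload lets the stub state: "if you pass in a function
# returning int, you get back a function returning int", etc.
from typing import Any, Callable, overload


@overload
def udf(f: Callable[..., int]) -> Callable[..., int]: ...
@overload
def udf(f: Callable[..., str]) -> Callable[..., str]: ...


def udf(f: Callable[..., Any]) -> Callable[..., Any]:
    # Hypothetical runtime implementation; a real one would register
    # the function with the execution engine instead of just wrapping it.
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        return f(*args, **kwargs)

    return wrapper


# A type checker now infers udf(lambda x: x + 1)(1) as int,
# and udf(lambda s: s.upper())("a") as str.
```

With inline annotations alone you would have to collapse all of these
cases into a single, much less precise signature.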

Just saying...



On 8/27/20 6:09 PM, Driesprong, Fokko wrote:
> Also, it is very cumbersome to add everything to the pyi file. In
> practice, this means copying the method definition from the py file
> and paste it into the pyi file. This hurts my developers' heart, as it
> violates the DRY principle. 
>
> I see many big projects using regular annotations:
> - Pandas: https://github.com/pandas-dev/pandas/blob/master/pandas/io/parquet.py#L51

That's probably not a good example, unless something has changed
significantly lately. The last time I participated in the discussion,
Pandas didn't type check its own code and had no clear timeline for
advertising its annotations.


-- 
Best regards,
Maciej Szymkiewicz

Web: https://zero323.net
Keybase: https://keybase.io/zero323
Gigs: https://www.codementor.io/@zero323
PGP: A30CEF0C31A501EC
