On Tue, 13 Oct 2020 05:58:45 -0000, "Ma Lin" <[email protected]> wrote:

> I heard that in the data science domain, the data is often huge, such as
> hundreds of GB or more. If people can make full use of multi-core CPUs to
> compress, the experience will be much better than with zlib.
This is true, but in data science it is extremely beneficial to use specialized file formats, such as Parquet (which incidentally can use zstd under the hood). In that case, the compression is built into the Parquet implementation, and won't depend on zstd being available in the Python standard library.

Regards

Antoine.
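
[Editor's illustration of the point above: a minimal sketch of writing a Parquet file with Zstandard compression, assuming the third-party pyarrow library is installed; the column name and file path are placeholders, not anything from the original message.]

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Build a small in-memory table (a stand-in for a large data-science dataset).
    table = pa.table({"values": list(range(1_000_000))})

    # Parquet handles compression itself; the "zstd" codec is supplied by the
    # Parquet implementation (here pyarrow), not by the Python standard library.
    pq.write_table(table, "data.parquet", compression="zstd")

    # Reading back decompresses transparently.
    restored = pq.read_table("data.parquet")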
